These machines have been trained to hunt conservatives

We are rapidly moving into a world in which big tech algorithms are involved in almost every aspect of our lives.

An exclusive extract from Allum Bokhari's new book, "#Deleted: Big Tech's Battle to Erase the Trump Movement and Steal the Election." Order the book now.


Understanding algorithms – and the importance of the people who create and manage them – couldn't be more important, because it will be algorithms, not humans, that will eventually govern big tech platforms.

This is already happening. In YouTube's 2019 "community guidelines enforcement report" – a yearly celebration of the number of videos they've kicked off the internet – the video-sharing platform reported that the vast majority of videos removed from its platform that year were the result of "automated flagging." According to the report, over 8 million videos were removed from the platform as a result of automated flagging, compared to just 345,435 taken down in response to reports from users. Clearly, we are already living in a world run, at least in part, by machines. And, if those candid statements from current and former Twitter employees are representative of wider attitudes in Silicon Valley, those machines have been trained to hunt conservatives.
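To make the mechanism concrete, here is a minimal sketch of how an automated flagging pipeline of this kind might work: a model scores each upload for policy violations, high-scoring uploads are removed with no human in the loop, and borderline cases are queued for review. The thresholds, scores, and names below are hypothetical illustrations, not YouTube's actual system.

```python
# Hypothetical automated-flagging pipeline (illustrative only, not YouTube's real system).
# A classifier assigns each upload a violation score; routing is a simple threshold rule.

AUTO_REMOVE_THRESHOLD = 0.95   # above this, content is removed automatically
HUMAN_REVIEW_THRESHOLD = 0.60  # above this, content is queued for a moderator

def route_upload(video_id: str, violation_score: float) -> str:
    """Decide what happens to a video based on the model's violation score."""
    if violation_score >= AUTO_REMOVE_THRESHOLD:
        return f"{video_id}: removed by automated flagging"
    if violation_score >= HUMAN_REVIEW_THRESHOLD:
        return f"{video_id}: queued for human review"
    return f"{video_id}: left up"

if __name__ == "__main__":
    for vid, score in [("upload_a", 0.99), ("upload_b", 0.72), ("upload_c", 0.10)]:
        print(route_upload(vid, score))
```

Once the scoring model is trained, removals happen at machine scale with no per-video human judgment; whatever the model has learned to flag, it flags everywhere, automatically.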

But it's not just about content removals. We are rapidly moving into a world in which big tech algorithms are involved in almost every aspect of our lives: whether we're approved for loans, mortgages, or insurance; whether we're allowed to rent an apartment; whether we can use platforms like Airbnb, Uber, and Lyft; whether our business appears at the top of Google search results or is buried five pages down; even who we'll be matched with on online dating apps. Algorithms already have an outsized impact on our daily lives, and this trend is only going to accelerate.

Control over this technology is the biggest prize in tech. In the title of his influential 2013 book, computer scientist Jaron Lanier asked, "Who owns the future?" With AI set to become a major feature of virtually every industry over the next century, the question may very well be rephrased as, "Who owns the AI?" Industries are unlikely to develop their own AI systems; they will likely rely on whatever is produced by the "experts" in Silicon Valley. The same tech giants that dominate today are likely to dominate the AI-powered world of tomorrow. AI trained in leftist ideology will carry those biases far beyond Silicon Valley, into finance, housing, journalism, law, commerce, and every other field you can imagine. In the near future, AI will help determine whether you're hired for a job, whether you're approved for a loan or a mortgage, whether your children are accepted into a university, whether you can finance a car, whether you can rent an apartment.

An incident at Google in 2019 showed just how determined the left is to maintain ideological hegemony over AI. In March of that year, Google announced the formation of an external "AI ethics council" composed of eight members. Seven of the eight were mainstream academics, computing experts, and a former diplomat who had served under Barack Obama. But for left-wing Google employees, one member stood out: Kay Coles James, president of the Heritage Foundation, one of the foremost conservative think tanks in the U.S.

The freakout was immediate. On the same day that Google announced its plans for an AI ethics council, employees created a thread on an internal discussion channel to complain about the inclusion of Coles James. The discussion quickly spiraled into a chorus of smears against the African-American conservative, with far-left employees accusing her of "hateful positions," "bigotry," and even something called "exterminationism." One employee branded the Heritage Foundation "monstrous," describing it as an "organization dedicated to eliminating LGBTQ+ people from public life, driving them back into the closet" and "denying them healthcare." Another employee claimed the "rhetorical violence" of the think tank "translate[s] into real, material violence against trans people, particularly trans women of color."

Comments from one employee, leftist AI researcher Meredith Whittaker, were particularly illuminating. Like the others, she smeared Coles James, calling her an "outspoken bigot" whose favored policies "dehumanize and marginalize." But her later comments revealed just how high she, and most likely other left-wingers involved in AI, believes the stakes to be if conservative viewpoints are allowed to influence the field.

"The potential harms of AI and 'advanced' tech are not evenly distributed and follow historical patterns of discrimination and exclusion," wrote Whittaker. "Those who have been historically marginalized are at the most risk of harm. See AI that doesn’t hear women, that doesn't 'see' trans people, or people of color."

"See systems deployed to aide ICE in targeting immigrants, to aid the Military in drone strikes, or to enhance worker control. Thus, in ensuring we are 'ethical' in our pursuit of AI dominance, we need to include and amplify the perspectives of those most at risk."

"Which brings us to the problem with this argument even on its own terms: nowhere is Civil Society represented, let alone representatives of the communities most at risk of harm. (where is the Trans Advocacy Network, criminal justice reform experts, the ACLU, etc.?) While there’s a member of the very-far Right, in the person of James, there is no equivalent far-left representative. (To be extremely clear, even if these voices were included, that does not justify the inclusion of an open bigot.)"

Needless to say, Whittaker's descriptions of Coles James as "very-far Right" and an "open bigot" are nothing but smears. The Heritage Foundation is as mainstream a conservative organization as you can get. But her comments reveal the acute paranoia that left-wingers in tech feel about the potential influence of conservative viewpoints on the AI technologies that will impact almost all aspects of humanity's future. As Whittaker sees it, it isn't enough that multiple viewpoints are represented: any presence of conservative thought in the development of AI is a threat. Elsewhere in her post, Whittaker warned against "justifying including bigots in the name of 'viewpoint diversity,'" arguing it is a "weaponization of the language of D&I [diversity & inclusion]" that has been "used by the alt-right to argue against diversity efforts."

In short, Whittaker's post argues that Google should shut out mainstream conservative influences from its AI project because they are "open bigots," while at the same time stacking the deck with left-wing intersectional outfits like the Trans Advocacy Network and the liberal ACLU. For Whittaker, conservative-influenced AI is a mortal threat, and neutral AI is unsatisfactory – only AI influenced by the left may be permitted. Indeed, Whittaker was so threatened by Coles James' inclusion on the AI council that she quickly helped organize a company-wide petition to have her kicked out. Google's leadership responded with cowardice – they didn't want to undo their "conservative outreach" efforts in D.C., and they also didn't want to confront their far-left crazies. So they cancelled the entire AI ethics board, much to the chagrin of the board's proposed members. "It's become clear that in the current environment, ATEAC [Advanced Technology External Advisory Council] can't function as we wanted. So we're ending the council and going back to the drawing board," wrote Google in a humiliating statement. Whittaker and the crazies had succeeded in preventing even a lone, fairly mainstream conservative voice from influencing the company's work on AI.

It's no accident that Whittaker was so vocal in her opposition to Coles James. A deep dive into her background reveals that she is one of the pioneers of an emerging field called "Machine Learning Fairness," the goal of which is to ensure that artificial intelligence is trained to be "fair" – as defined by left-wing academics. She co-founded a research institute at NYU called "AI Now," the stated purpose of which is to understand the "social implications of artificial intelligence." The institute is partnered with the left-wing ACLU (something Whittaker conveniently neglected to disclose when she pushed for its inclusion in AI oversight at Google), and seeks to marry left-wing academia to the field of AI. A glance at the institute's publications reveals its left-wing, intersectional priorities – one study is titled "Discriminating Systems: Gender, Race, and Power in AI." Another is titled "Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice."

The institute's description of its own work on "bias and inclusion" further reveals its left-wing, identitarian priorities. "At their best, AI and algorithmic decision-support systems can be used to augment human judgement and reduce both conscious and unconscious biases. However, training data, algorithms, and other design choices that shape AI systems may reflect and amplify existing cultural prejudices and inequalities… When machine learning is built into complex social systems such as criminal justice, health diagnoses, academic admissions, and hiring and promotion, it may reinforce existing inequalities, regardless of the intentions of the technical developers."
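The technical claim buried in that quote, that a model trained on skewed historical decisions will reproduce the skew even when the sensitive attribute is excluded, can be shown in a few lines of code. The following is a minimal sketch using invented hiring data and a correlated "proxy" feature; the dataset, features, and numbers are made up purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented historical hiring data. Past decisions were skewed against group 1,
# and a "neutral-looking" proxy feature (e.g. zip code) happens to track group.
rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)
proxy = group + rng.normal(0, 0.3, n)        # correlated with group membership
skill = rng.normal(0, 1, n)                  # legitimate feature
hired = ((skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0).astype(int)

# Train only on the neutral-looking features; group itself is deliberately excluded.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)
pred = model.predict(X)

# The skew in the historical labels reappears in the model's predictions.
print("historical hire rate, group 0:", round(hired[group == 0].mean(), 2))
print("historical hire rate, group 1:", round(hired[group == 1].mean(), 2))
print("predicted hire rate,  group 0:", round(pred[group == 0].mean(), 2))
print("predicted hire rate,  group 1:", round(pred[group == 1].mean(), 2))
```

Whether one reads that output as a model "amplifying prejudice" or as a model faithfully reporting the pattern in the decisions it was trained on is exactly the political question at stake here.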

The idea of AI as a tool for the left-wing ideological agenda is also revealed in Google's own "machine learning fairness" project, which cites studies like "Mind the GAP: A Balanced Dataset of Gendered Ambiguous Pronouns" and "The Reel Truth: Women Aren’t Seen or Heard." In its video explaining the ML fairness project, Google explains that its purpose is to "prevent [machine learning] from perpetuating negative human bias. From tackling offensive or clearly misleading information from appearing at the top of your search results page, to adding a feedback tool on the search bar, so that people can flag hateful or inappropriate autocomplete suggestions."
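What "fairness" work of this kind typically measures can be stated concretely. One common yardstick is the demographic-parity gap: the difference in positive-prediction rates between groups, which fairness interventions try to push toward zero. Below is a minimal sketch with invented predictions; it is a generic illustration of the metric, not Google's internal tooling.

```python
import numpy as np

def demographic_parity_gap(predictions: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-prediction rates between groups 0 and 1."""
    rate_0 = predictions[group == 0].mean()
    rate_1 = predictions[group == 1].mean()
    return abs(rate_0 - rate_1)

# Invented model outputs for ten people drawn from two groups.
preds = np.array([1, 1, 0, 1, 1, 0, 0, 1, 0, 0])
grp = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

# 0.0 means both groups receive positive predictions at the same rate.
print("demographic parity gap:", demographic_parity_gap(preds, grp))
```

Note what the metric does: it compares output rates across groups, irrespective of whatever base rates the underlying data contains, which is why interventions built around it are a political choice rather than a neutral one.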

It is now hopefully a little clearer why the left needs to ensure that no conservatives ever intrude on their territory in the field of AI. Although Google's "ML fairness" is framed as a campaign against bias, it is the precise opposite – an attempt to imprint left-wing biases on the technology that will, increasingly, govern our lives. Ask yourself: would an AI designed to detect incitement to violence identify antifa, if it were trained by Silicon Valley SJWs? If you trained an AI to detect racism and bigotry, would it identify the New York Times' anti-white bigot Sarah Jeong? Would it identify the feminists who like to joke about killing all men? Would AI categorize Covingtongate as a harassment campaign? An unbiased AI certainly would!

On a deeper level, would an AI developed to help landlords screen tenants allow them to fully and properly scour the criminal records of prospective applicants? Looking at the work of the AI Now Institute, it's clear that the use of criminal databases in AI training is already a major emerging concern for the left.

It’s easy to imagine even simple AI systems producing outputs that would horrify the intersectional far-left. AIs, after all, are trained to detect patterns in data, and an unbiased examination of data often yields conclusions the left would rather not talk about. Remember, tech companies like Twitter think journalists like Andy Ngo should be kicked off the platform for stating empirically verifiable facts about trans people.

Imagine, for example, if you asked an AI to figure out the type of people who carry the highest risk of possessing an illegal firearm, or a class-A drug, or those who are at the highest risk of committing a robbery. Do you think an AI, examining all the data available, would reach a conclusion that the left would be OK with? What if you trained it to identify individuals at the highest risk of joining a sex grooming gang in the U.K., or a terror cell in Belgium? The AI would start churning out the same kinds of empirically based conclusions that got Tommy Robinson banned from Twitter! What if you trained it to find out whether men and women are paid different rates for the same amount of work? Another left-wing myth would be destroyed! What if you asked it to find out which culture has made the most contributions to science and technology? The list of questions for which AI could offer factually correct, yet politically incorrect, answers is endless.

I often think that the definition of being "right wing," today, is simply noticing things you're not supposed to. These include the blindingly obvious, like the innate differences between men and women (for discussing this simple truth, Google fired a top-rated engineer, James Damore, in 2017 – he had done too much noticing). They include uncomfortable topics, like racial divides in educational and professional achievement, or involvement in criminal gangs. They include issues of national security, like terrorism and extremism.

On these topics and many others, the fledgling AIs of big tech are in the same boat as right-wingers: they're in danger of being branded bigots, for the simple crime of noticing too much. The unspoken fear isn't that AI systems will be too biased – it's that they'll be too unbiased. Far from being complex, AIs perform a very simple task – they hoover up masses of empirical data, look for patterns, and reach conclusions. In this regard, AIs are inherently right-wing: they're machines for noticing things.
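Stripped of the mystique, the "noticing" described here is often nothing more exotic than computing rates from data. A minimal sketch with an invented dataset: the program simply reports whatever group-level differences its rows happen to contain, with no notion of which of those differences are politically comfortable.

```python
from collections import defaultdict

# Invented records: each row is (group_label, outcome). The "pattern finding"
# here is just a frequency table over whatever data it is handed.
records = [("A", 1), ("A", 1), ("A", 0), ("B", 0), ("B", 0), ("B", 1), ("B", 0)]

counts = defaultdict(lambda: [0, 0])      # group -> [positive outcomes, total]
for label, outcome in records:
    counts[label][0] += outcome
    counts[label][1] += 1

# The output is whatever the data contains, comfortable or not.
for label, (pos, total) in sorted(counts.items()):
    print(f"group {label}: outcome rate {pos / total:.2f}")
```

Real systems swap the frequency table for far more elaborate models, but their relationship to the data is the same: they report what is there.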

This divide between the modern left and right, the latter determined to notice uncomfortable empirical facts, and the former determined to suppress them, could not be more important – especially as the field of AI takes off. The left paints the right as bigots for its insistence on identifying, acknowledging, and discussing sensitive topics. They're wrong. The fact that many on the right want to acknowledge and discuss the problems of, say, crime in black communities, or extremism in Muslim ones, doesn't mean that the right hates those communities. On the contrary, the right knows that those communities will not prosper or be fulfilled unless they acknowledge, discuss, and solve their particular problems. Some in those communities – although more so the political left – may find those conversations painful, offensive, and enraging, but no progress can be made until they are had. The left prioritizes the feelings of its protected classes (perhaps just slightly behind the need to assuage their own pangs of white liberal guilt) over actually solving their problems.

The right should be encouraged by the fact that the default state of AI, which is to dispassionately analyze data and solve problems, is something that works in its favor. It's difficult, after all, to train an AI to make subtle, human considerations about offensiveness and political correctness before it produces its output. For all the left's hastily constructed efforts to impose "fairness" on machine learning, the right has a natural advantage – the empirical data is on its side. And there's nothing an AI loves more than empirical data.

Nevertheless, the left enjoys its own massive advantage. As we saw in the case of Kay Coles James, it has developed an overwhelming cultural hegemony in Silicon Valley, which is building the AI systems of the future. If we don't want our future robot overlords to autocorrect all our emails to use gender-neutral pronouns, or pre-ban us for drafting a Facebook post that contains "hate speech," this crisis of political culture in tech is something that must be urgently addressed. If the vast power of AI were successfully turned to political purposes, we might as well elect big tech CEOs emperors of the world. As we detailed in previous chapters, the political and media establishment is exerting considerable pressure on tech companies to turn their products in an ideologically biased direction. As we'll see in the next chapter, the internal pressure, from big tech’s own employees, may be even greater.
