Facebook has been using Chinese methods of censorship to control what rises to the top of millions of feeds—and what doesn't. Many of those employed by Facebook for this work are Chinese nationals on H-1B visas.
An investigation led by Sohrab Ahmari, the op-ed editor at the New York Post, uncovered evidence that Facebook devotes a wide range of practices and resources to shaping what appears on the platform, by means of what a former Facebook insider called "Hate Speech Engineering."
These methods, some of which are borrowed directly from Chinese censorship practices, are subtle enough to steer the direction of user content without being noticed, yet sophisticated enough to filter extremely large and complicated sets of online data. It's what the former Facebook insider described as an incredibly complex and technical process.
"What they don't do is ban a specific pro-Trump hashtag," the insider told the Post. "Content that is a little too conservative, they will down-rank. You can't tell it's censored."
Facebook is sifting its users' feeds, weeding out what it deems "borderline content." In the United States, these practices, if true, are alarming. In China, they're a routine part of the state's effort to control the public's outlook on political and social issues.
This report came as Project Veritas released a series of videos showing Google executives, on both the search and advertising sides, discussing how algorithms are used to suppress content that managers don't want viewers to see. The content routinely shuffled to the bottom of the deck is conservative content.
The Post did not name the insider, describing them only as a former employee of the tech giant. The source said that Facebook has invested considerable resources in "Hate Speech Engineering," including hiring as many as six Chinese nationals. Like much of Facebook's engineering staff, they're based in Seattle, Washington.
The focus of these tactics is machine learning—teaching a computer to improve over time by amassing large quantities of data and discovering the patterns or trends that emerge. The resulting models can then be trained to distinguish between categories of content. In essence, they're being taught to recognize the difference between flowers and weeds.
What that means in practice is teaching the Facebook algorithms what kinds of content should be prioritized and seen and which kinds of posts will be left to obscurity and never see the light of day.
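To make the mechanism concrete, the kind of down-ranking the insider describes can be sketched in a few lines of Python. Everything below—the function name, the scores, the threshold, the penalty—is invented for illustration and is not drawn from Facebook's actual systems; it only shows how a post can be quietly demoted rather than removed outright.

```python
# Hypothetical sketch: down-ranking instead of deleting.
# Each post carries an 'engagement' score and a 'borderline' probability,
# assumed to come from some trained classifier. Posts over the threshold
# stay visible but are multiplied down so they rarely reach the top.

def rank_feed(posts, borderline_threshold=0.7, penalty=0.5):
    """Return posts sorted by an adjusted score (illustrative only)."""
    def adjusted(post):
        score = post["engagement"]
        if post["borderline"] > borderline_threshold:
            score *= penalty  # demote quietly; the post is never "banned"
        return score
    return sorted(posts, key=adjusted, reverse=True)

posts = [
    {"id": "a", "engagement": 0.9, "borderline": 0.8},  # popular but flagged
    {"id": "b", "engagement": 0.6, "borderline": 0.1},  # less popular, unflagged
]
print([p["id"] for p in rank_feed(posts)])  # → ['b', 'a']
```

The point of the sketch is that nothing is deleted: the flagged post still exists and can be found, but the ordering ensures most users never scroll far enough to see it—which is why, as the insider put it, "you can't tell it's censored."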
There does appear to be a double standard at work: Twitter suppressed New York Post stories concerning materials obtained from a laptop reported to have belonged to Hunter Biden, claiming the material was hacked, while raising no objection to The New York Times' publication of dubiously obtained tax records belonging to Donald Trump.
Senator Ted Cruz and Senator Lindsey Graham, chairman of the Senate Committee on the Judiciary, have said they want to summon corporate leaders of both Facebook and Twitter to a hearing before the presidential election on November 3, citing concerns that the tech giants pose a threat of interference in the election.