
The 7 Censorship Tactics ‘Big Tech’ Uses to Control the Flow of Information

(William Ammerman) Big Tech publishers, including Facebook, Twitter, and Google, use seven major censorship tactics to control the flow of information through their products. Under the beneficent guise of “content moderation,” most censorship by Big Tech publishers was traditionally directed at vices such as obscenity, violence, drugs, and gambling, often in direct response to advertisers who don’t want their ads appearing next to such content. But increasingly, these publishers are applying censorship tactics to more slippery categories such as disinformation, bullying, and hate speech, where definitions are elusive and judgments are subjective and prone to bias. As the public increasingly relies on these publishers for news and information, it is vital to understand these censorship tactics and the potential risks they pose to free speech. Here are the 7 D’s of Big Tech censorship:

Direct Censorship

The most obvious censorship tactic is direct censorship: the blocking and removal of information. For a social media giant like Twitter, this means users can’t access specific information or share it with their networks. A recent example of direct censorship was Twitter’s decision to block a New York Post news story involving Hunter Biden and alleged foreign influence in US politics. To justify this direct censorship, Twitter cited its policy against publishing hacked materials, without any evidence that the New York Post had used hacked materials in its story. Massive public blowback against Twitter’s direct censorship of a news story by an established big-city newspaper (founded in 1801) eventually forced the social media giant to announce changes to its official publishing policies, but free speech advocates remain wary.

Deplatforming

Publishers regularly block the accounts of individuals and organizations through a practice known as deplatforming. In June, the BBC reported that Facebook had removed the account of the British ska band The Specials and its lead singer, Neville Staple. Staple is a person of color who was incorrectly identified as a white supremacist in Facebook’s evolving enforcement of restrictions on hate speech. Facebook eventually reversed itself in the case of The Specials, but the incident highlights the fact that Big Tech publishers have the power to enforce cancel-culture rules that are often subjective and prone to error. Facebook reported that it blocked or removed 1.3 billion accounts in Q3 2020 for violations of its policies, and while the bulk of these were fake accounts, some were undoubtedly legitimate. The deplatforming trend is particularly problematic for journalists and news organizations whose accounts are blocked. Following publication of the Hunter Biden story, Twitter locked the New York Post’s account entirely, preventing it from posting additional news stories for over 24 hours.

Delegitimizing

Publishers have begun flagging content with labels intended to notify users of concerns about the legitimacy of specific posts. In February, Twitter announced a new policy for flagging photos and videos that appear to be manipulated or altered in a way that is “likely to impact public safety or cause serious harm.” Twitter soon expanded its labeling policy to flag a host of new topics, ranging from COVID-19 to election politics. Facebook has also deployed warning labels on specific categories of content, including advertising, and in June began offering users the ability to turn off political, electoral, and social issue ads. Delegitimizing content with flags, labels, and categories has been presented by publishers as a “compromise” between direct censorship and a laissez-faire approach of non-interference. Nonetheless, because its application is subjective, the practice of labeling content as illegitimate has the potential for abuse.

Deamplification

Publishers have the power to amplify a news story and push it into the feeds of millions of users, often dwarfing the distribution the story would get through subscriptions and circulation. Normally, this amplification is the result of an unbiased algorithm that pushes content simply because it is trending in clicks, or because it mirrors the type of content users have consumed in the past. Deamplification happens when a publisher actively prevents its algorithms from pushing out specific content; publishers can throttle their amplification engines to depress the visibility of a story. In November, Facebook acknowledged using a “news ecosystem quality” score, or N.E.Q., which explicitly amplifies content from favored publishers, including The New York Times, CNN, and NPR, with the equal and opposite effect of deamplifying content from smaller publishers and independent journalists. Forbes reported that Facebook took this action as a temporary response to post-election misinformation, but Glenn Greenwald argues that pressure from The New York Times in favor of Facebook’s N.E.Q. censorship is an obviously self-serving strategy for stifling competition, and represents a real threat to independent journalism.
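Facebook has not published the N.E.Q. code, so any illustration is necessarily hypothetical, but the mechanics are simple to sketch. In the Python snippet below, every post’s engagement score is multiplied by a quality score assigned to its publisher; outlets missing from the favored list fall back to a low default, so their content is throttled no matter how well it performs. All names and weights are invented for illustration.

```python
# Hypothetical sketch of score-based deamplification. The publisher
# scores and weights are illustrative assumptions, not Facebook's
# actual N.E.Q. implementation.

def rank_feed(posts, publisher_quality, default_quality=0.5):
    """Re-rank posts by engagement weighted by a publisher-quality score.

    posts: list of dicts with "publisher" and "engagement" keys.
    publisher_quality: dict mapping publisher name -> score in [0, 1].
    Publishers absent from the table get default_quality, so unlisted
    outlets are deamplified relative to favored ones.
    """
    def weighted_score(post):
        quality = publisher_quality.get(post["publisher"], default_quality)
        return post["engagement"] * quality

    return sorted(posts, key=weighted_score, reverse=True)

posts = [
    {"publisher": "IndependentBlog", "engagement": 9000},
    {"publisher": "BigOutlet", "engagement": 4000},
]
# With BigOutlet favored (1.0) and unknown outlets defaulting to 0.3,
# the independent post ranks below BigOutlet despite far higher engagement.
print(rank_feed(posts, {"BigOutlet": 1.0}, default_quality=0.3))
```

The point of the sketch is that nothing is deleted: every post remains available, but a single multiplier quietly determines whose content reaches the feed.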

Demotion

Search engines like Google present information in a prioritized list, ranked by relevance, with the most relevant results at the top. Consumers understand that the algorithm generating search results can be overridden, as with Google Ads, which injects paid ads at the top of Google’s search results. Consumers also understand that differences in the algorithms of search providers (like Google and Bing) produce different results. Demotion occurs when a search result is intentionally moved down the rankings, either manually or through tweaks to the algorithm. Compare the top result from Google and Bing for this search: “Cost of Paris Agreement.” At the time of writing, the top result from Google is an article from the Natural Resources Defense Council (NRDC). The article is an environmental advocacy piece that alludes to $19 trillion in “major global rewards” that will result from the Paris Agreement, without ever detailing the costs of the agreement.

The top result from Bing is an article from The World at 1C, a communications initiative of the Global Campaign To Demand Climate Justice. The article’s relevance to the search question appears in the second sentence: “Under the United Nations Framework Convention on Climate Change, developed countries have committed to mobilise $100 billion in climate finance per year, but are falling far short of this goal. To date, just $10.3bn has been pledged by developed country parties to the Green Climate Fund.” How did Google miss this? Well, it didn’t miss it exactly, but Google diminishes the impact of this critical perspective by simply demoting the article to the second page of its search results, where few will see it.

Those suffering from Googlenoia will see this as intentional manipulation, promoting something Google supports while burying the downside of its costs. Demotion is notoriously difficult to prove, since search providers like Google are protective of their algorithms, but the potential for abuse is enormous.
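Search providers keep their ranking pipelines secret, so the following Python sketch is purely hypothetical, but it shows how little code a demotion requires: a multiplicative penalty applied to a flagged URL’s relevance score slides it off page one without removing it from the results.

```python
# Illustrative sketch of search-result demotion. The penalty factor and
# URLs are invented; this is not Google's or Bing's actual ranking code.

PAGE_SIZE = 10  # results shown per page

def rank_results(results, demoted, penalty=0.1):
    """Sort results by relevance, scaling demoted URLs by a penalty factor.

    results: list of (url, relevance_score) tuples.
    demoted: set of URLs to push down the rankings.
    The demoted result is still present, just buried where few will look.
    """
    def score(item):
        url, relevance = item
        return relevance * (penalty if url in demoted else 1.0)

    return sorted(results, key=score, reverse=True)

# Twenty results with descending relevance; the most relevant is demoted.
results = [("https://example.com/%d" % i, 100 - i) for i in range(20)]
ranked = rank_results(results, demoted={"https://example.com/0"})
page_two = ranked[PAGE_SIZE:]
print(("https://example.com/0", 100) in page_two)  # True: off page one
```

Because the result is merely reordered rather than removed, a demotion like this is nearly impossible to distinguish from an ordinary algorithm change, which is exactly why it is so hard to prove.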

Demonetization

Google’s subsidiary, YouTube, is popular with content creators, who can be paid a share of the ad dollars generated by their videos. Forbes reports the top YouTuber for 2019 was 8-year-old Ryan Kaji, who earned $26 million “opening presents in front of the camera.” But not all YouTube videos are that G-rated, and brand-safety concerns from advertisers forced YouTube to restrict advertising on a wide range of topics, including terrorism, pornography, and hate speech. With a tweak of its algorithm, YouTube can remove videos from its monetization program and shut down the cash flow for content creators. One problem with demonetization is that the algorithms often can’t distinguish between terrorism and news commentary about terrorism. Philip DeFranco, a popular YouTuber known for unfiltered news commentary, “initially saw an 80% drop in revenue” when YouTube implemented demonetization. Free speech advocates fear that demonetization has a chilling effect on important discussions of controversial topics and is ripe for misuse.
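A toy example helps show why the algorithms struggle. The hypothetical keyword filter below is not YouTube’s actual brand-safety classifier, but it illustrates the failure mode: any video whose title mentions a restricted topic is demonetized, so news commentary about terrorism loses its ad revenue right alongside terrorism itself.

```python
# Toy keyword filter illustrating blunt demonetization. Hypothetical;
# not YouTube's actual advertiser-friendliness classifier.

RESTRICTED_TOPICS = {"terrorism", "pornography", "hate speech"}

def is_monetizable(title: str) -> bool:
    """Return False if the title mentions any restricted topic.

    A filter this blunt can't tell content that promotes terrorism
    from news commentary *about* terrorism, so both lose ad revenue.
    """
    lowered = title.lower()
    return not any(topic in lowered for topic in RESTRICTED_TOPICS)

print(is_monetizable("Unboxing today's coolest toys"))             # True
print(is_monetizable("News commentary: responding to terrorism"))  # False
```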

Discrediting

China is in the midst of implementing a social credit score for its citizens, offering benefits such as access to quality housing and transportation to those with high scores while denying the same benefits to those with low scores. In this context, discrediting is the practice of lowering someone’s social credit score as a penalty for behaviors the government deems unacceptable. Yaqiu Wang, reporting in 2017 for the Committee to Protect Journalists, writes, “In what would be a uniquely daunting form of censorship, the Chinese government is making plans to link journalists’ financial credibility to their online posts.” Journalists critical of China’s massive military buildup, its mishandling of COVID-19, or its crackdown on democracy in Hong Kong are already feeling the squeeze of discrediting, including travel restrictions and even imprisonment. In May, former Chinese state journalist Chen Jieren was sentenced to 15 years in prison for blog posts critical of the Communist Party.

Social credit scores are not limited to the Chinese government. Fast Company reports that Big Tech is hard at work deploying social credit scores across a range of industries, including insurance, hospitality, and transportation, in an extralegal system of privileges and penalties for consumers. The degree to which Big Tech publishers already score contributors, and how those scores translate into privileges and penalties, is difficult to ascertain, but the implementation of discrediting in China offers a frightening example of its potential threat to free speech.

Conclusion

The 7 D’s of Big Tech censorship are tactics being applied across an increasing range of subjects to control the flow of news and information. Whether these tactics are applied consistently and fairly by Big Tech publishers is the subject of considerable debate among politicians, free speech advocates, and consumers. In testimony before the Senate Judiciary Committee, Twitter CEO Jack Dorsey seemed to indicate support for reforms: “I believe the best way to address our mutually-held concerns is to require the publication of moderation processes and practices, a straightforward process to appeal decisions, and best efforts around algorithmic choice.”

Facebook CEO Mark Zuckerberg went further, calling for legislative reform of Section 230 of the Communications Decency Act: “Section 230 made it possible for every major internet service to be built and ensured important values like free expression and openness were part of how platforms operate. Changing it is a significant decision. However, I believe Congress should update the law to make sure it’s working as intended.” President-elect Biden was the top recipient of campaign contributions from Facebook, Google, and Twitter, ensuring Big Tech will have its say in how these reforms are implemented in Washington. One thing is certain: publishers who benefit from the legal protections of Section 230 owe the public transparency and unbiased restraint in the use of these powerful tools.

William Ammerman

William Ammerman is a digital media veteran, freelance writer, and author of The Invisible Brand: Marketing in the Age of Big Data, Automation, and Machine Learning, which won the 2019 Marketing & Sales Book of the Year award.

