
My social media feeds look different from yours and it's driving political polarization

Echo chambers are an inevitable consequence of social networks that include actors on a mission to persuade you. Too often, those actors are machines.

Robert Elliott Smith
Opinion contributor

The election season is winding up, and my social media is once again awash with political stories. Headlines stream: “Warren and Bernie’s awkward truce...”, “Trump sees his base growing...” and “The Fed’s real message...”. This is the America I see today.

The trouble is, it’s not the America you or anyone else sees. It is my personally curated version of reality: a constantly shifting mirage, evolving in real time according to my likes and dislikes, what I click on, and what I share.

A recent Pew Research Center study found black social media users are more likely to see race-related news. The Mueller report suggests Russian efforts against Hillary Clinton targeted Bernie Sanders supporters. In October 2016, Brad Parscale, then the digital director of Donald Trump’s campaign and now his 2020 campaign manager, told Bloomberg News that he was targeting Facebook and media posts at possible Clinton supporters so that they would sit the election out.

Parscale — who, as of early August, has spent more ($9.2 million) on Facebook ads for Trump 2020 than the four top Democratic candidates combined — said that in 2016 he typically ran 50,000 ad variations each day, micro-targeting different segments of the electorate.

Algorithms are prejudiced

While political operatives exploiting yellow journalism is nothing new, the coupling of their manipulative techniques to a technologically driven media world is a substantial change. Algorithms are now the most powerful curators of information, and their actions enable such manipulation by fracturing our shared reality into an informational multiverse.

And those algorithms are prejudiced. That may sound extreme, but let me explain.

In analyses conducted by myself and colleagues at University College London (UCL), we modeled the behavior of social networks, using binary signals (1s and 0s) passed between simplified “agents” that represented people sharing opinions about a divisive issue (say, pro-life versus pro-choice, or whether to build a border wall).


Most “agents” in this model determine the signals they broadcast based on the signals they receive from those around them (as we do when sharing news and stories online). But we added in a small number of agents we called “motivated reasoners,” who, regardless of what they hear, only broadcast their own predetermined opinion.

Our results showed that in every case, motivated reasoners came to dominate the conversation, driving all other agents to fixed opinions, thus polarizing the network. This suggests that “echo chambers” are an inevitable consequence of social networks that include motivated reasoners.
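To make the dynamic concrete, here is a minimal sketch in Python of the kind of model described above. It is not our actual UCL code: the two-community topology, the agent counts and the majority-rule update are all illustrative assumptions.

```python
import random

random.seed(1)

COMMUNITY_SIZE = 100      # ordinary agents per community
ZEALOTS_PER_SIDE = 15     # "motivated reasoners" embedded in each community
K_LOCAL, K_CROSS = 6, 1   # contacts inside vs. across communities
STEPS = 40

def build_agents():
    """Two loosely connected communities. The first few agents on each side
    are motivated reasoners who broadcast a fixed signal (0 or 1) forever."""
    agents = []
    for side in (0, 1):
        for i in range(ZEALOTS_PER_SIDE + COMMUNITY_SIZE):
            agents.append({
                "side": side,
                "fixed": i < ZEALOTS_PER_SIDE,
                "opinion": side if i < ZEALOTS_PER_SIDE else random.randint(0, 1),
            })
    for idx, agent in enumerate(agents):
        same = [j for j, a in enumerate(agents) if a["side"] == agent["side"] and j != idx]
        other = [j for j, a in enumerate(agents) if a["side"] != agent["side"]]
        agent["nbrs"] = random.sample(same, K_LOCAL) + random.sample(other, K_CROSS)
    return agents

agents = build_agents()
for _ in range(STEPS):
    snapshot = [a["opinion"] for a in agents]        # synchronous update
    for agent in agents:
        if agent["fixed"]:
            continue                                 # zealots ignore all input
        ones = sum(snapshot[j] for j in agent["nbrs"])
        # Ordinary agents echo the majority signal they receive.
        agent["opinion"] = 1 if 2 * ones > len(agent["nbrs"]) else 0

for side in (0, 1):
    flexible = [a for a in agents if a["side"] == side and not a["fixed"]]
    share = sum(a["opinion"] for a in flexible) / len(flexible)
    print(f"community {side}: {share:.0%} of ordinary agents now broadcast '1'")
```

Run with different seeds, the typical outcome is the same: each community collapses onto its motivated reasoners’ fixed signal, leaving two internally uniform, mutually opposed echo chambers.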


So who are these motivated reasoners? You might assume they are political campaigners, lobbyists or even just your most dogmatic Facebook friend. But, in reality, the most motivated reasoners online are the algorithms that curate our online news.

How technology generalizes

In the online media economy, the artificial intelligences behind these algorithms are single-minded in achieving their profit-driven agendas: they maximize the frequency of human interaction, ultimately to get users to click on advertisements. But AIs are not only economically single-minded, they are also statistically simple-minded.

Take, for example, the 2016 story in The Guardian about Google searches for “unprofessional hair” returning images predominantly of black women.

Does this reveal a deep social bias towards racism and sexism? To conclude this, one would have to believe that people use the term “unprofessional hair” alongside images of black women so consistently that black women’s hairstyles have effectively become most people’s definition of “unprofessional.” Regardless of societal bias (which certainly exists), this seems doubtful.


Having worked in AI for 30 years, I know it is probably more statistically reliable for algorithms to recognize black women’s hairstyles than those of black men, white women, etc. This is simply an aspect of how algorithms “see,” by using overall features of color, shape, and size. Just as with real-world racism, resorting to simple features is easier for algorithms than deriving any real understanding of people. AIs codify this effect.

To be prejudiced means to pre-judge on simplified features, and then draw generalizations from those assumptions. This process is precisely what algorithms do technically. It is how they parse the incomprehensible “Big Data” from our online interactions into something digestible. AI engineers like me explicitly program generalization as a goal of the algorithms we design.
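To see how mechanical this is, consider a toy sketch of a model that pre-judges on a handful of coarse features. It assumes scikit-learn is available, and the features and data are entirely made up:

```python
# A toy illustration (not any production system) of statistical generalization:
# a model trained on a few coarse features learns a rule from a correlation in
# its training data, then applies that rule to everyone sharing those features.
from sklearn.tree import DecisionTreeClassifier

# Each example is reduced to three crude features, the kind of simplification
# described above. The labels encode a spurious correlation in this invented
# training set, not a truth about the world.
#           [avg_color, shape_roundness, size]
X_train = [
    [0.2, 0.8, 0.5],
    [0.3, 0.7, 0.6],
    [0.8, 0.3, 0.5],
    [0.9, 0.2, 0.4],
]
y_train = ["A", "A", "B", "B"]   # the label happens to track avg_color

model = DecisionTreeClassifier().fit(X_train, y_train)

# The tree has "learned" a single threshold on avg_color. Every future input
# is now pre-judged by that one simplified feature:
print(model.predict([[0.25, 0.1, 0.9]]))   # -> ['A'], despite differing in everything else
print(model.predict([[0.85, 0.9, 0.1]]))   # -> ['B']
```

The model understands nothing about the objects themselves; it has generalized one coarse feature into a rule and will now apply it to every case that shares the feature.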

Given the simplifying features that algorithms use (gender, race, political persuasion, religion, age, etc.) and the statistical generalizations they draw, the real-life consequence is informational segregation, not unlike previous racial and social segregation.

Dangerous, divisive consequences

Groups striving for economic and political power will inevitably exploit these divisions, using techniques such as targeted marketing and digital gerrymandering to categorize groups. The consequence is not merely the outcome of an election, but the propagation of deep divisions in the real world we inhabit.

Recently, presidential hopeful Sen. Kamala Harris spoke about how federally mandated desegregation busing transformed her life opportunities. Like her, I benefited from that conscious effort to mix segregated communities when, during my childhood in 1970s Birmingham, Alabama, black children were bused to my all-white elementary school. Those first real interactions I had with children of a different race radically altered my perspective of the world.


The busing of the past ought now to inspire efforts to overcome the digital segregation we see today. Our studies at UCL indicate that the key to counteracting the natural tendency of algorithmically mediated social networks to segregate is to technically promote the mixing of ideas, through greater informational connectivity between people.

Practically, this may mean the regulation of online media, and an imperative for AI engineers to design algorithms around new principles that balance optimization with the promotion of diverse ideas. This scientific shift in perspective would ensure a healthier mix of information, particularly around polarizing issues, just as those buses enabled racial and social mixing in my youth.
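What might that balance look like in practice? Here is one hypothetical sketch of a feed ranker that trades a little engagement for exposure to under-represented viewpoints. It is not any platform’s actual ranking logic; the scores, weights and function names are invented for illustration.

```python
# Hypothetical sketch: blend a conventional engagement score with a bonus for
# viewpoints this user has seen least, controlled by a mixing weight.
from collections import Counter

def rerank(posts, seen_viewpoints, mix=0.4):
    """posts: list of (engagement_score, viewpoint) pairs.
    seen_viewpoints: Counter of viewpoints already shown to this user.
    mix=0 ranks purely on engagement; mix=1 purely on unfamiliarity."""
    total_seen = sum(seen_viewpoints.values()) or 1

    def score(post):
        engagement, viewpoint = post
        familiarity = seen_viewpoints[viewpoint] / total_seen
        novelty = 1.0 - familiarity          # reward under-exposed viewpoints
        return (1 - mix) * engagement + mix * novelty

    return sorted(posts, key=score, reverse=True)

history = Counter({"pro": 18, "anti": 2})    # a feed already skewed one way
candidates = [(0.9, "pro"), (0.8, "pro"), (0.6, "anti"), (0.5, "anti")]
print(rerank(candidates, history))
# -> the lower-engagement "anti" posts rise above the "pro" posts, because
#    the user has seen almost nothing from that side.
```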

Robert Elliott Smith is the author of the newly released "Rage Inside the Machine: The Prejudice of Algorithms and How to Stop the Internet Making Bigots of Us All." He is the Chief Technology Officer for BOXARR Ltd. and a senior research fellow in Computer Science at University College London. He is a founding member of The UCL Centre for The Study of Decision-Making Uncertainty. Follow him on Twitter @DrRESmith.
