How AI Can Track, Manipulate Voters

A visitor watches an AI (Artificial Intelligence) sign on an animated screen at the Mobile World Congress (MWC), the telecom industry's biggest annual gathering, in Barcelona. (Josep Lago/AFP via Getty Images)
Kevin Stocklin
4/8/2023
Updated: 4/23/2023

How well do artificial intelligence (AI) programs know us humans?

In most cases, quite well, and in some ways better than we know ourselves.

A study by AI experts at Brigham Young University, titled “Out of One, Many: Using Language Models to Simulate Human Samples,” found that predictive AI programs exhibited a striking degree of what its authors call “algorithmic fidelity,” or a precise correspondence to actual human behavior.

“Because these AI tools are basically trained on stuff that humans produce, things that we write, documents we make, websites we write, they can reflect back to us a lot of interesting and important things about ourselves,” Ethan Busby, political psychologist and co-author of the study, told The Epoch Times. “Kind of like if someone read your diary from start to finish, they would know a lot of things about you, and you’re not going to like every single thing.

“In a similar way,” Busby said, “these tools have read so many things that humans have produced, and they can replicate or say back to us things about ourselves that we didn’t necessarily know.”

The study sought to analyze human behavior in the context of elections, asking how accurately a GPT-3 language model could predict voting patterns from socio-demographic factors such as a person’s gender, age, location, religion, race, and economic status. The authors combined these attributes in varying ways to create “silicon samples,” or composite personas.

“You can basically ask these tools to put themselves in a specific frame of mind and pretend to be essentially this person, pretend to have these characteristics,” Busby said. The researchers asked the program how these “silicon samples” would vote in specific campaigns, then compared the results with actual voters’ behavior in elections between 2012 and 2020, using data from the American National Election Studies.

For example, Busby said, regarding the 2016 election, “We could say what kinds of groups are going to be pivotal in Ohio?” They found that the AI quickly learned to predict accurately how people would vote, based on their attributes.
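
The mechanics behind such “silicon samples” can be illustrated with a short sketch. The Python snippet below is a minimal, hypothetical illustration of persona-conditioned prompting; the attribute lists, prompt wording, and the query_model placeholder are assumptions made for illustration, not the researchers’ actual code or data.

```python
# A minimal sketch of persona-conditioned prompting in the spirit of the
# study's "silicon samples." Illustrative only: attributes, prompt wording,
# and query_model() are assumptions, not the authors' actual code or data.
from itertools import product

ATTRIBUTES = {
    "gender": ["a man", "a woman"],
    "age": ["in my 30s", "in my 60s"],
    "location": ["living in rural Ohio", "living in Columbus, Ohio"],
    "religion": ["an evangelical Christian", "not religious"],
}

PROMPT_TEMPLATE = (
    "I am {gender}, {age}, {location}, and I am {religion}. "
    "In the 2016 U.S. presidential election, I voted for"
)

def query_model(prompt: str) -> str:
    """Placeholder for a call to a text-completion model such as GPT-3.
    Substitute a real client-library call here."""
    raise NotImplementedError

def silicon_samples():
    """Yield one prompt per combination of demographic attributes."""
    keys = list(ATTRIBUTES)
    for combo in product(*ATTRIBUTES.values()):
        yield PROMPT_TEMPLATE.format(**dict(zip(keys, combo)))

for prompt in silicon_samples():
    print(prompt)  # each model completion would then be compared to ANES data
```

In the study’s approach, the model’s completions for each composite persona are aggregated and compared against how real voters with those attributes actually voted.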

Left and Right Decry AI When It Costs Them Elections

Artificial intelligence is highly useful to organizations that want to target political messaging campaigns or fundraising efforts. But some political analysts have raised red flags about this, alleging unfairness and election interference. Their degree of outrage, however, largely depends on whether their candidates or causes succeeded or failed.

In 2017, The Guardian, a left-wing British newspaper, wrote a series of articles claiming that conservative tech entrepreneur Robert Mercer, whom it called “the big data billionaire waging war on the mainstream media,” had financed a campaign strategy using AI to circumvent mainstream media narratives. This, the paper alleged, illicitly swayed voters in favor of Donald Trump, resulting in his victory in the presidential election in 2016.
Latinos vote at a polling station in El Gallo Restaurant in Los Angeles, Calif., on Nov. 8, 2016. (David McNew/Getty Images)

Then, in 2020, Forbes, a right-wing publication, published an article titled “How Artificial Intelligence Swayed the Midterm Elections.” The article decried the use of AI by Democrats to target fundraising campaigns at likely donors, allowing them to massively outspend Republicans in close races and deliver positive results for the Democratic Party.

Critics of AI say that these programs are used not only for voter analysis but for voter manipulation as well. A 2022 report by the RAND Corp. titled “Artificial Intelligence, Deepfakes and Disinformation” warned of “disinformation warfare,” though largely from a left-wing perspective. Among the instruments of manipulation it cited were misleading images and videos shared on social media, known as memes.

“Russia used memes to target the 2016 U.S. election,” the report stated. “China used memes to target protesters in Hong Kong; and those seeking to question the efficacy of vaccines for coronavirus disease 2019 used memes as a favorite tool.” According to RAND, these memes, together with “fake news websites,” have “sown division in the American electorate and increased the adoption of conspiracy theories.”

Taking AI from Population to Personal Analysis

AI is effective at the aggregate level: observing people, spotting patterns, learning our habits, and inferring from those what we will do in various situations. Getting down to the individual level, however, is more challenging.

“I think at an individual level, it probably does okay; it’s not perfect. It’s not good necessarily at predicting specific people, not nearly as well as predicting groups and aggregating up,” Busby said.

A smartphone with a displayed ChatGPT logo is placed on a computer motherboard in this illustration taken on Feb. 23, 2023. (Dado Ruvic/Reuters)

Anyone who has asked ChatGPT, a newly popular AI chatbot, about themselves or people they know often finds that some of the information is correct and some of it is wrong.

“These models, all of them, ChatGPT and any of the ones used by Facebook or others, they sometimes have a tendency to what is called ‘hallucinate,’” Busby said. “That means they just make things up that aren’t true.”

But he believes AI will soon get better at getting its facts straight.

“There’s just a lot of pressure from corporations, politicians, campaigns—they want to know how is this person going to respond to this message,” he said. “I think there will be a lot of emphasis on trying to develop that kind of accuracy, but I don’t think we’re there yet.”

Taking it to a personal level, a 2019 report in Scientific American titled “The Internet Knows You Better Than Your Spouse Does” analyzed a program called Apply Magic Sauce. The program asks subjects for a few inputs: samples of their writing, such as e-mails or blog posts, along with information about their social media activity.

Based on these inputs, Apply Magic Sauce could generate “a detailed psychogram, or personality profile, that includes your presumed age and sex, whether you are anxious or easily stressed, how quickly you give in to impulses, and whether you are politically and socially conservative or liberal.” The report found that by analyzing people’s “likes” on social media, AI programs were able to paint an accurate portrait of their personalities.

“If the software had as few as 10 [likes] for analysis, it was able to evaluate that person about as well as a co-worker did,” the report stated. “Given 70 likes, the algorithm was about as accurate as a friend. With 300, it was more successful than the person’s spouse.”
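
As a rough illustration of how this kind of prediction can work, the sketch below trains a simple classifier on a synthetic matrix of likes. The data, the model choice (logistic regression over a binary like matrix), and all the numbers are assumptions for illustration only, not Apply Magic Sauce’s actual pipeline.

```python
# A minimal sketch of trait prediction from social-media "likes," in the
# spirit of tools like Apply Magic Sauce. The data is synthetic and the
# model choice is an assumption for illustration, not the tool's pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_users, n_pages = 1000, 300
likes = rng.integers(0, 2, size=(n_users, n_pages))  # 1 = user liked the page

# Synthetic ground truth: a hidden subset of pages drives the trait.
signal_pages = rng.choice(n_pages, size=30, replace=False)
trait = (likes[:, signal_pages].sum(axis=1) > 15).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    likes, trait, test_size=0.2, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"accuracy: {model.score(X_test, y_test):.2f}")
```

The design intuition is that each liked page is a weak signal; given enough of them, even a linear model can separate traits well, which is consistent with the accuracy-versus-number-of-likes pattern the report describes.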

“Organizations are going to be very interested in, ‘How do we get to understand you on a personal level?’” Busby said. “It makes me uneasy about being followed around and modeled in that sort of way by an AI tool.”

The darker capabilities of AI have become a headline issue in recent weeks, with urgent calls by scientists and tech experts to restrict its development until people can better understand its uses and effects. On March 22, an open letter, signed so far by 50,000 people, including Apple co-founder Steve Wozniak and tech entrepreneur Elon Musk, called on all AI labs to immediately pause, for at least six months, the training of AI systems more powerful than GPT-4, warning of autonomous systems that could evolve beyond carrying out specific tasks and theoretically surpass human intelligence.

“AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs,” the letter stated. “Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.”

If it can be contained to avoid malicious and manipulative uses, artificial intelligence “has the ability and the potential to really expand our capabilities in lots of areas and solve lots of the problems that we face,” Busby said. “Some of the things we assumed that computers couldn’t do well, like write fiction or generate something new out of nothing, it can do those things.”

AI can help answer research questions, he said, but it can’t decide what we should research, what our goals are, or what sort of society we should have.

“[AI] forces us to think carefully about what we contribute,” he said. “It reminds us about what’s so unique, or something we offer that’s distinctive, to help us focus on those things instead of the mundane stuff that computers and these kinds of algorithms can really quickly automate.”

Kevin Stocklin is a business reporter, film producer and former Wall Street banker. He wrote and produced "We All Fall Down: The American Mortgage Crisis," a 2008 documentary on the collapse of the mortgage finance system. His most recent documentary is "The Shadow State," an investigation of the ESG industry.