Manipulating the Masses - The rise of social media algorithms and AI

by Wyoming Liberty Group

A lawyer for Rep. Harriet Hageman, Wyoming's member of the U.S. House of Representatives, recently sent a cease-and-desist letter to a candidate running for the U.S. Senate, demanding that he stop issuing social media posts suggesting that Hageman is backing his candidacy.

"[Y]ou have persisted in your attempts to mislead Representative Hageman's loyal supporters, some of whom may be considering donating to or voting for you based on a false belief that she has endorsed your candidacy," according to the attorney's letter.

In one post, for instance, the candidate said: "Hang on Harriet, I am on my way!" Another: "Together let's Make America Great Again!"

Whatever the merits of that dispute, the skirmish underscores that you can't always believe what you read, especially on social media, and especially when it comes to politics.

That's especially true nowadays, thanks to the rise of social media algorithms and artificial intelligence (AI), which target us to influence what we see and, in some cases, what we come to believe.

We should all be on guard: Social media posts can, and do, influence elections in Wyoming, as Hageman's cease-and-desist letter illustrates. Social media and AI are also a national problem.

Recently, Congressman Michael McCaul of Texas, the foreign affairs committee chairman, said that "Russian propaganda has made its way into the United States." Meanwhile, Congressman Mike Turner of Ohio, chairman of the intelligence committee, said, "anti-Ukraine and pro-Russia messages" have been "uttered on the House floor."

Karl Rove, the political consultant writing for The Wall Street Journal, highlighted what he called "the magnitude of the problem," saying this: "[C]onsider one truly ludicrous fiction that GOP lawmakers parroted: that Ukrainian President Volodymyr Zelensky diverted U.S. aid to purchase two super yachts. The accusation surfaced in DC Weekly, a Russian website masquerading as a U.S. media organization."

There are, of course, many other examples of social media misinformation—or disinformation—and the bad actors pulling the strings behind them. AI has only fueled the fire.

In its more innocuous form, AI is aimed at personalizing what we see on social media to fit our likes and proclivities. As an example, AI can analyze our online search history and target us with content that's most pertinent to us. But there is also a more sinister side to the pairing of AI and social media.

"The platforms want to know users better than they know themselves—what scares them, makes them laugh, what they search on Google or what spelling mistakes they make," said one policy expert in the digital age. "This information produces a psychological profile of potential voters, which can be used by companies to target them with information or misinformation they would be sensitive to."

In a recent report, the RAND Corporation, a think tank, raised national security concerns about the extensive use of AI and the manipulation of social media by foreign adversaries of the United States, including the armed wing of the Chinese Communist Party.

The RAND report noted, for instance, that AI can deploy armies of bots (software applications that perform repetitive tasks) to create the illusion of consensus on an issue, a tactic called "astroturfing."

There is no turning back, according to many experts, who say that the weaponization of AI and social media is just getting started; after all, there are already more than four billion active social media users across the globe, and they spend an average of about two-and-a-half hours on social media each day.

Think of the implications: Technology—AI and the social media algorithms—increasingly dictates what messages we see online, what people we engage with and what social media ideas we consume.

Researchers note, for instance, that algorithms can target us so that we see only a particular point of view; as a result, we may come to think that there is a wider gulf between groups than there really is. The phenomenon already has a name: "false polarization."

Researchers also point to bad actors who want to spread false political information to ignite moral outrage as a way to get the masses to circulate the misinformation even further.

Alarming as that is, AI can become even more dangerous. "Cult leaders and dictators can use AI to monitor, manipulate, and control people, leading to their eventual subjugation," said an expert in political cults in a Psychology Today piece.

How? By gathering our browsing history and social media posts, "Cult leaders and dictators can use the information to create individually tailored content that reinforces their ideology and manipulates people into following their beliefs."

We should all be alert, especially as we enter the election season in Wyoming and elsewhere. We hope the good people of Wyoming will also turn to us at Wyoming Liberty Group and our Eye on the Issues program for reliable information and conversations with folks who have their boots on the ground.

And for what it's worth, Hageman endorsed U.S. Sen. John Barrasso in his bid for reelection. Even the bots of AI and social media manipulation can't distort that simple fact.


Mailing Address: 1740 H Dell Range Blvd. #274
Cheyenne WY 82009

Phone: (307) 632-7020


Copyright © 2024 Wyoming Liberty Group.
