Online interactions are becoming less genuine

In a prescient tweet, OpenAI CEO Sam Altman noted that AI will become persuasive long before it becomes intelligent. A controversial study conducted by researchers at the University of Zurich just proved him right.
In the study, researchers used AI to challenge Redditors’ perspectives in the site’s r/ChangeMyView subreddit, where users share an opinion on a topic and invite others to present counterarguments in a civilized manner. Unbeknownst to those users, the researchers used AI to produce arguments on everything from dangerous dog breeds to the housing crisis.
The AI-generated comments proved extremely effective at changing Redditors’ minds. The university’s ethics committee frowned upon the study, since it is generally unethical to experiment on people without their knowledge, and Reddit’s legal team is reportedly pursuing legal action against the university.
Unfortunately, the Zurich researchers decided not to publish their full findings, but what we do know about the study points to glaring dangers in the online ecosystem: manipulation, misinformation, and the degradation of human connection.
The power of persuasion
The internet has become a weapon of mass deception.
In the AI era, that persuasive power becomes even more potent. AI avatars posing as financial advisors, therapists, girlfriends, and spiritual mentors can become channels for ideological manipulation.
The University of Zurich study underscores this risk. If manipulation is unacceptable when researchers do it, why is it okay for tech giants to do it?
Large language models (LLMs) are the latest evolution of algorithmically driven content, and algorithmically curated social media and streaming platforms have already proven manipulative:
- Facebook experimented with manipulating users’ moods, without their consent, through their News Feeds as early as 2012.
- The Rabbit Hole podcast showed how YouTube’s recommendation algorithm created a pipeline for radicalizing young men.
- Cambridge Analytica and Russiagate showed how social media influences elections at home and abroad.
- TikTok’s algorithm has been shown to create harmful echo chambers that deepen division.
Foundational LLMs like Claude and ChatGPT function like a big hive mind of the internet. Their premise is that they know more than you do, and their inhumanness leads users to assume their outputs are unbiased.
Algorithmic creation of content is even more dangerous than algorithmic curation of content via the feed. This content speaks directly to you, coddles you, champions and reinforces your viewpoint.
Look no further than Grok, the LLM produced by Elon Musk’s company xAI. From the beginning, Musk was blatant about engineering Grok to support his worldview. Earlier this year, Grok fell under scrutiny for casting doubt on the number of Jews killed in the Holocaust and for promoting the false claim of a “white genocide” in South Africa.
Human vs. machine
Reddit users were hostile toward the study because the AI responses were presented as human ones. It was an intrusion. The subreddit’s rules protect and incentivize real human discussion, dictating that the view in question must be your own and that AI-generated posts must be disclosed.
Reddit is a microcosm of what the internet used to be: a constellation of niche interests and communities, largely governing themselves and encouraging exploration. Through this digital meandering, a whole generation found like-minded cohorts and evolved with the help of those relationships.
Since the early 2010s, bots have taken over the internet. On social media, they are deployed en masse to manipulate public perception. In 2016, for example, a group of bots posed as Black Trump supporters, ostensibly to normalize Trumpism for minority voters. Bots also played a pivotal role in the Brexit campaign.
I believe it matters deeply that online interaction remains human and genuine. If covert, AI-powered content is unethical in research, its proliferation across social media platforms should raise a red flag, too.
The thirst for authenticity
The third ethical offense of the Zurich study: it’s inauthentic.
The researchers who used AI to advocate a viewpoint did not hold that viewpoint themselves. Why does this matter? Because the point of the internet is not to argue with robots all day.
If bots are arguing with bots over the merits of DEI, and if students are using AI to write while teachers are using AI to grade, then, seriously, what are we doing?
I worry about the near-term consequences of outsourcing our thinking to LLMs. Most working adults came of age in a pre-AI world, which allows us, mostly, to employ AI judiciously. But what happens when the workforce is full of adults who have never known anything but AI, and who have never had an unassisted thought?
LLMs can’t rival the human mind in creativity, problem-solving, feeling, and ingenuity. LLMs are an echo of us. What do we become if we lose our original voice to cacophony?
The Zurich study treads on this holy human space. That’s what makes it so distasteful, and, by extension, so impactful.
The bottom line
The reasons this study is scandalous are the same reasons it’s worthwhile. It highlights what’s already wrong with a bot-infested internet, and how much more wrong it could get with AI. Its trespasses bring the degradation of the online ecosystem into stark relief.
This degradation has been happening for over a decade, but incrementally, so that we haven’t felt it. A predatory, manipulative internet is a foregone conclusion. It’s the water we’re swimming in, folks.
This study shows how murky the water has become, and how much worse it might get. I hope it will fuel meaningful legislation, or at least a thoughtful, broad-based personal opting out. In the absence of rules against AI bots, Big Tech is happy to cash in on their proliferation.
Lindsey Witmer Collins is CEO of WLCM App Studio and Scribbly Books.