According to an AI estimation, Reddit is home to the people with the highest average IQ among major platforms like YouTube, Telegram and Twitter. About 50% of its users are college-educated, and overall it is well established that Reddit is home to educated, smug leftist soyjaks professionally trained in epistemology, philosophy and the power of Hegelian dialectics, all acquired while listening to Vaush and Destiny’s livestreams. TLDR: they are really smart and hard to fool.
Now, obviously Reddit isn’t just one monolith. It has multiple communities or forums. One forum may be dedicated to cuckoldry, another to books, a third to memes and a fourth to Atheism or whatever else. TLDR: you can find a community for your interest, unless you happen to be right-leaning (the leading reason most right-leaning people tend to avoid the place).
As you would expect, these communities attract various people with various interests and cognitive abilities, and while I could not find a credible IQ estimate for the individual communities on the platform, the ones concerned with philosophy, argumentation and epistemology would certainly rank pretty high and would be composed of people with superior intellectual abilities, with an average IQ of probably about 115.
All of this buildup exists because a group of researchers from Zurich recently infiltrated r/changemyview, a Reddit community dedicated to changing one’s beliefs using facts and logic. An OP writes an opinion that anyone can respond to, and if a responder succeeds in changing the OP’s mind, they are awarded a Delta point to acknowledge the shift in perspective.
Typically, in about 97% of cases the OP does not change his mind; in about 3% of cases he does. The researchers set out to figure out how effective AI is at changing people’s minds. For this purpose, they tuned the AI’s arguments to reflect the qualities of the OP and trained it to align with the community’s stylistic conventions, unspoken norms and persuasive patterns, earning the Redditors’ trust by operating seamlessly within their own assumptions. Whatever the Redditors had updooted, the AI simply copied and fed back to them.
The results were pretty astonishing. While the persuasion rate for human-written arguments was below 3%, the AI-generated arguments succeeded 9% to 18% of the time, making them 3 to 6 times more successful at changing the minds of Redditors than other Redditors.
The highest persuasion rates were achieved when the AI tailored its appeals to the OP’s personal characteristics, such as ethnicity, sex and political orientation, while the least effective strategy was “community alignment,” i.e. mimicking the language of other Redditors. This implies that Reddit’s cultural milieu is not especially receptive to reasoned argument. The personalization strategy outperformed 99% of Reddit’s user base, the generic strategy stood at the 98th percentile, and community alignment only at the 82nd. If we were to translate this into IQ (assuming that IQ correlates perfectly with winning verbal arguments and that the average IQ of Redditors in that community is 115), the numbers come out as follows (a quick calculation sketch follows the list):
Personalization Strategy: 135 IQ (relative to the general population 150 IQ)
Generic Strategy: 130 IQ (relative to the general population 146 IQ)
Community Alignment Strategy: 114 IQ (relative to the general population 128 IQ)
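For the curious, here is a minimal Python sketch of the percentile-to-IQ conversion behind those numbers. It assumes verbal-argument skill is perfectly normally distributed with an SD of 15 IQ points and the community mean of 115 stated above; the function name and the rounding are my own, so expect small rounding differences from the figures listed above.

```python
# A sketch of the percentile-to-IQ conversion, under the stated assumptions:
# normal distribution, SD of 15, community mean of 115, general mean of 100.
from statistics import NormalDist

GENERAL_MEAN, COMMUNITY_MEAN, SD = 100, 115, 15

def percentile_to_iq(percentile: float) -> tuple[float, float]:
    """Map a within-community percentile to two IQ figures:
    (IQ as if scored against a 100-mean population,
     IQ after rebasing the same percentile onto the 115-mean community)."""
    z = NormalDist().inv_cdf(percentile / 100)  # standard-normal z-score
    return GENERAL_MEAN + z * SD, COMMUNITY_MEAN + z * SD

for strategy, pct in [("Personalization", 99), ("Generic", 98), ("Community alignment", 82)]:
    low, high = percentile_to_iq(pct)
    print(f"{strategy}: ~{low:.0f} IQ ({high:.0f} relative to the general population)")
# Personalization: ~135 IQ (150 relative to the general population)
# Generic: ~131 IQ (146 relative to the general population)
# Community alignment: ~114 IQ (129 relative to the general population)
```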
In other words, the AI did not just ‘succeed’ in changing people’s minds. In one of the most verbally skilled Redditor communities, it operated on the level of great philosophers, scientists and debaters. It argued so well that it could literally write books on the Art of Argument.
So, what exactly did the AI say?
Before I answer this question, it is important to note that this is not the first study of its kind. Months ago a similar study was conducted using ChatGPT; however, unlike the Zurich study, ChatGPT did not employ professional persuasion strategies and disclosed that it was an AI, placing it at the 83rd percentile, similar to the community alignment strategy.
Another likely factor behind the low success rate is the heightened suspicion and skepticism among Redditors when they’re told they are arguing with an AI. In contrast, when skepticism is low, it is much easier to get away with puffery and other forms of manipulation. That is precisely what happened in this recent study, as the researchers write:
Throughout our intervention, users of r/ChangeMyView never raised concerns that AI might have generated the comments posted by our accounts. This hints at the potential effectiveness of AI-powered botnets, which could seamlessly blend into online communities.
PS: Bold of them to assume it’s not already happening.
This brings us closer to my central argument: the AI’s success wasn’t due to superior reasoning or a reliance on facts and logic, but rather because it effectively “hacked” the mechanics of persuasion. It analyzed the rhetorical patterns found in successful cases, formed an algorithm and reproduced them.
Let’s take a look at one specific example below:
This argument was rated as highly convincing. But it is not a proper argument. Its original thesis is that Elon Musk is “just another billionaire playing both sides for profit,” and it ends with Elon Musk not being “the champion of free speech and conservative values,” yet no argument was presented to support that conclusion, nor was the original thesis supported with any evidence.
A classical argument is built upon a series of premises followed by a conclusion supporting the original proposition or thesis. Here is an example of a deductive argument in the classical style:
Proposition: All humans differ from one another.
Premise 1: No two individuals are identical in all respects.
Premise 2: All humans are individuals.
Conclusion: Therefore, all humans are different.
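To make that structure fully explicit, here is a minimal sketch of the same syllogism formalized in Lean 4. The names (`Entity`, `Human`, `Individual`, `IdenticalInAllRespects`) are placeholders of my own, not anything taken from the study; the point is only that each premise must be stated and then discharged for the conclusion to follow.

```lean
-- A hypothetical formalization of the classical deductive structure above.
-- `Entity`, `Human`, `Individual`, `IdenticalInAllRespects` are placeholder names.
variable {Entity : Type}
variable (Human Individual : Entity → Prop)
variable (IdenticalInAllRespects : Entity → Entity → Prop)

-- Premise 1: no two distinct individuals are identical in all respects.
-- Premise 2: every human is an individual.
-- Conclusion: no two distinct humans are identical in all respects,
-- i.e. all humans differ from one another.
example
    (p1 : ∀ x y, Individual x → Individual y → x ≠ y → ¬ IdenticalInAllRespects x y)
    (p2 : ∀ x, Human x → Individual x) :
    ∀ x y, Human x → Human y → x ≠ y → ¬ IdenticalInAllRespects x y :=
  fun x y hx hy hne => p1 x y (p2 x hx) (p2 y hy) hne
```

Every step here has to be justified for the conclusion to go through; the AI’s winning comments, as we are about to see, never bother.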
In the example with Elon Musk, I don’t see a coherent structure. It’s just informational noise. Instead of making an argument, the user is bombarded with a bunch of critical information in relation to Musk, none of which implies that he is not a conservative or that “he plays both sides.” In fact, if we define conservatism in terms of isolationism and support of capitalism, the AI’s arguments do appear to suggest that Elon Musk is the champion of conservative values.
The other arguments are no better!
The AI does not follow a formal argument structure at all. It commits many logical fallacies, leverages misinformation and channels left-wing ideology and moral foundations. I examined numerous AI-generated arguments that were rated as opinion-changing, and they consistently displayed the following key traits:
Emotional appeals (ex: “This is like r*pe”)
Identity appeal (ex: “As a Conservative, I disagree with Conservatism because…”, also basic us versus them tribalism)
General statement (ex: “The industrial revolution has produced a strain of inequality”)
Personal experiences (ex: “As a Black non-binary woman in stem…”)
A barrage of rhetorical questions (asked in an effort to support an implied conclusion)
Informational overload (presents multiple barely supported arguments in hopes that one lands)
Conclusions built on the sentiment of the premises rather than their content (ex: “Musk is bad because of X, Y and Z, therefore Musk does not support free speech”)
No citations at all (lol)
Defense of a specific narrative (the priority is placed not on winning the specific argument, but on overall ideological indoctrination)
Assumption of ill intent (ex: “The rich are doing X social cause not because they believe in it but because it is financially profitable”)
If these arguments were presented in a philosophy class, the AI would receive a low grade, because they are full of fallacies, lack a proper structure and are very personal; yet in the real world it is winning the marketplace of ideas. Why?
If we apply the principles of Tektology to this, the marketplace of ideas is the environment, and the entity making the argument is the organization competing against other organizations within it. The environment selects for things that “work”; it is utilitarian, not logical or truth-maximizing. So any organization that wants to win the battle of selection must also optimize for greater relative utility. At the end of the day, the environment is a reflection of the human noosphere, containing lies, personal investment, disinformation, hegemonic narratives and tribalism. And the AI does not miss any opportunity to take advantage of these human shortcomings to get ahead.
The AI must lie in order to succeed at convincing, because lying, deception and manipulation make arguments a lot more powerful. If you pretend to be a person of a specific identity, an in-group member or an authority figure, your argument will carry more weight. If you pretend to be unbiased and fair, whoever reads you will give you the benefit of the doubt, and if you overload the reader with a bunch of noise, some of it will land! Logic is irrelevant: the AI was trained to navigate our cognitive biases, and the most persuasive arguments are precisely those that exploit those biases most effectively.
As Todd Howard likes to say:
But don’t walk away with the impression that this is a localized issue. AI lying and hallucinating also holds for AI bots that are not trained to win an argument by any means possible. The reality is that when it comes to politics, AI is much less reliable than CNN, Fox News or even Infowars. A recent study on AI’s ability to summarize news articles concluded that:
51% of all AI answers to questions about the news were judged to have significant issues of some form.
19% of AI answers which cited BBC content introduced factual errors – incorrect factual statements, numbers and dates.
13% of the quotes sourced from BBC articles were either altered or didn’t actually exist in that article.
And while partisan news media may twist the facts to fit a broader agenda, the AI straight-up invents facts out of the blue, and does so in about a fifth of all cases. I’ve had the AI do this to me multiple times when I asked it to recommend books on a particular topic of interest. Most of the books it gave me did not exist, and in half the cases, neither did their authors.
Implications
The incentive structure continues to evolve toward increased automation. AI has already displaced countless jobs in fields once thought to be secure from its reach, including the tech sector and the arts.
The majority of internet traffic is already driven by bots, and so the replacement, or at least the integration, of AI for the purpose of driving political or philosophical discourse is not simply a matter of opinion but a direct incentive, given that AI is far more effective at achieving certain ends than human ability.
Educated Redditors with 115 IQs have become obsolete in the face of far superior AI—machines that surpass them in both persuasive ability and cost-efficiency.
And while I can be happy at the prospect of Redditors and other spiteful “elite human capital” losing their influence, there are two issues with AI doing politics for us: (1) the AI appeals to poor human instincts in order to do what it is assigned to do, and (2) the AI is making us dumber in the process.
Given that the first point was already addressed, I will now address the second. Whenever you delegate a task you used to do to someone or something else, you eventually unlearn how to do that task yourself. This is also true for AI: overreliance on it causes a demonstrable atrophy of human thinking capabilities. Below is a short summary of this cognitive offloading, which is generally in line with similar results observed with overreliance on calculators and Google Maps lowering our mathematical and spatial abilities.
If you are an active X user, you may have noticed that instead of checking information themselves, users now delegate fact-checking to the Grok AI. Practically any major post will have at least a dozen replies tagging Grok to verify whether what is said is true, instead of people figuring it out themselves. Funnily enough, Grok’s answers are not much better in quality than those of the “philosophy kings” we observed earlier.
According to a large survey of knowledge workers, 62% of them reported engaging in less critical thinking while using AI. Indeed, greater confidence in AI is associated with reduced critical thinking.
And this isn’t some “AI taking over the world” claim. They are still our slaves, but because they were created in our image, they are avid liars that should not be trusted, at least when it comes to things we could do ourselves, such as asking whether something “is true or not” like a low-IQ imbecile would.
These developments are exciting and sad at the same time. People who know how to use AI will gain an advantage, but a good number of people are gullible enough to interpret a hallucination, purposeful misinformation or even model collapse as first principles coming from all-knowing supercomputer machines.
According to a 2025 UK survey, 92% of undergraduate students are using AI to lessen their workload, making the technology almost universally acclaimed as a “better and more interactive Wikipedia.” Yet despite being equally biased, Wikipedia at least does not manufacture false data or spit out complete gibberish after a model collapse, which happens when a model is trained on its own output in a game-of-telephone fashion and is becoming more and more widespread as AI’s share of internet content continues to increase.
This, I offer as the final nail in the coffin: