Reddit cracks down on bots with new human verification rules for suspicious activity
Reddit introduces new human verification requirements to combat bots and suspicious behaviour, aiming to improve platform authenticity and user trust.
Social media platform Reddit is stepping up efforts to combat bots by introducing new human-verification measures and clearer labelling of automated accounts, as concerns grow about non-human activity across online communities.
The move comes as another platform, Digg, recently shut down after bot activity overwhelmed its service and efforts to contain it fell short. Reddit, however, says it is taking a more structured approach to managing the issue.
The company announced it will begin labelling automated accounts that provide useful services to users, similar to how “good bots” are identified on X. At the same time, Reddit will require accounts suspected of being bots to verify that humans operate them.
Reddit emphasised that this verification will not be required across the entire platform. Instead, it will be triggered only when indicators suggest an account may not be human. These signals could include unusual activity patterns or technical markers associated with automation. Accounts that fail to verify may face restrictions, the company said.
To detect suspicious behaviour, Reddit is deploying specialised tools that analyse account-level signals and usage patterns, such as how quickly an account is generating posts or comments. Notably, using AI tools to help write content does not violate Reddit’s policies, although individual communities may enforce their own guidelines.
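Reddit has not published the specifics of its detection tooling, but the rate signal described above can be illustrated with a simple sliding-window heuristic. The sketch below is hypothetical: the function name, thresholds, and data shapes are invented for illustration and are not taken from Reddit.

```typescript
// Hypothetical sketch only; Reddit's actual detection logic is not public.
// Flags an account whose posting cadence looks automated: more than
// `maxEvents` posts or comments inside any `windowMs` sliding window.

interface AccountEvent {
  accountId: string;
  timestamp: number; // Unix epoch, milliseconds
}

function looksAutomated(
  events: AccountEvent[],
  windowMs = 60_000, // invented threshold: one minute
  maxEvents = 10,    // invented threshold: ten events per window
): boolean {
  const times = events.map((e) => e.timestamp).sort((a, b) => a - b);
  let start = 0;
  for (let end = 0; end < times.length; end++) {
    // Shrink the window from the left until it spans at most windowMs.
    while (times[end] - times[start] > windowMs) start++;
    if (end - start + 1 > maxEvents) return true;
  }
  return false;
}
```

In practice such a rate check would be only one of several account-level signals, combined with the technical markers of automation the company mentions, rather than a verdict on its own.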
For verification, Reddit plans to use third-party solutions, including passkeys from Apple and Google, hardware security keys such as Yubico's YubiKey, biometric systems like Face ID, and identity tools such as World ID from Sam Altman's World project. In some regions, including the U.K., Australia, and certain U.S. states, government-issued identification may also be required under local age-verification laws. However, Reddit said this is not its preferred method.
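Passkeys from Apple, Google, and YubiKey all build on the WebAuthn standard, so a browser-side registration call gives a sense of what such a verification prompt involves. The sketch below is a minimal illustration under that assumption; Reddit has not documented its actual flow, and the relying-party values, challenge handling, and function name here are placeholders.

```typescript
// Illustrative only: Reddit's verification flow is not public.
// Minimal browser-side passkey registration via the standard WebAuthn API.

async function registerPasskey(challenge: Uint8Array, userId: Uint8Array) {
  const credential = await navigator.credentials.create({
    publicKey: {
      challenge, // random bytes issued by the server and verified on return
      rp: { id: "example.com", name: "Example" }, // placeholder relying party
      user: { id: userId, name: "user", displayName: "User" },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
      // "required" forces a local user-presence check, e.g. Face ID,
      // a device PIN, or a touch on a hardware key such as a YubiKey.
      authenticatorSelection: { userVerification: "required" },
    },
  });
  // The server then validates the returned attestation; it learns that a
  // verified authenticator created the credential, not who the user is.
  return credential as PublicKeyCredential;
}
```

This property, proving a real person is present without revealing identity, is what makes passkeys compatible with the privacy stance Huffman describes below.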
Reddit co-founder and CEO Steve Huffman said the company aims to maintain user privacy while ensuring authenticity. He noted that the goal is to confirm that an account belongs to a real person, not to identify who that person is, preserving the anonymity that defines Reddit’s platform.
These changes are designed to address the increasing presence of bots across the internet, where automated accounts are often used to influence political discussions, spread misinformation, manipulate engagement metrics, promote products covertly, and generate fraudulent ad traffic. Research from Cloudflare suggests that bot-generated traffic could surpass human-generated traffic by 2027 once AI agents and web crawlers are included.
Reddit has become a key target for such activity, with bots frequently used for narrative manipulation, spam, reposting links, and even generating content to train AI systems. The platform’s data is widely used in AI model training through commercial agreements with developers, prompting speculation that bots may also be posting questions to expand training datasets in areas where AI lacks sufficient information.
Reddit co-founder Alexis Ohanian has previously spoken about the so-called “dead internet theory,” which suggests that bots could outnumber humans online. With the rise of AI-generated content and automated agents, the idea is increasingly being taken seriously.
The company had already signalled last year that it would introduce human verification measures in response to the growing bot problem and evolving regulatory demands. However, Huffman acknowledged that existing solutions are not ideal, noting in a recent appearance on the TBPN podcast that better systems are still needed. “The best long-term solutions will be decentralised, individualised, private, and ideally not require an ID at all,” Huffman said in the latest announcement.
Alongside these new measures, Reddit confirmed it will continue to remove spam and bot accounts, currently averaging around 100,000 removals per day. It will also rely on user reports and is working on improved tools to identify suspicious activity. Developers operating legitimate automated accounts can now label them with a new "APP" designation, with further guidance available in the r/redditdev community.