A computer science student is behind a new AI tool designed to track down Redditors showing signs of radicalization and deploy bots to “deradicalize” them through conversation.
First reported by 404 Media, PrismX was built by Sairaj Balaji, a computer science student at SRMIST in Chennai, India. The tool works by analyzing posts for specific keywords and patterns associated with extreme views, giving those users a “radical score.” High scorers are then targeted by AI bots programmed to attempt “deradicalization” by engaging the users in conversation.
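PrismX's source code isn't public, and the article describes the scoring step only at a high level: scan posts for flagged keywords and patterns, then roll the matches up into a per-user score. A minimal sketch of that general approach might look like the following, where the keyword list, weights, threshold, and normalization are all hypothetical illustrations rather than PrismX's actual method.

```python
import re

# Hypothetical keyword weights -- PrismX's real lexicon and scoring are not public.
FLAGGED_PATTERNS = {
    r"\blone wolf\b": 0.8,
    r"\baccelerat\w+\b": 0.6,
    r"\bpurge the\b": 1.0,
}

THRESHOLD = 0.5  # hypothetical cutoff for flagging a user for bot engagement


def radical_score(posts: list[str]) -> float:
    """Score a user's post history (0.0-1.0) by weighted keyword/pattern hits."""
    if not posts:
        return 0.0
    total = 0.0
    for post in posts:
        text = post.lower()
        for pattern, weight in FLAGGED_PATTERNS.items():
            total += weight * len(re.findall(pattern, text))
    # Normalize by post count so prolific posters aren't automatically "radical."
    return min(1.0, total / len(posts))


user_posts = ["just another day online", "reading about lone wolf attacks"]
score = radical_score(user_posts)
if score >= THRESHOLD:
    print(f"flag user for bot engagement (score={score:.2f})")
```

Even this toy version shows how blunt keyword matching can be: a post discussing extremism critically scores the same as one endorsing it, which is part of why automated scoring of this kind draws scrutiny.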
According to the federal government, the primary terror threat to the U.S. now is individuals radicalized to violence online through social media. At the same time, there are fears around surveillance technology and AI infiltrating online communities, not to mention concerns about the ethical minefield of deploying such a tool.
Responding to concerns, Balaji clarified in a LinkedIn post that the conversation part of the tool has not been tested on real Reddit users without consent. Instead, the scoring and conversation elements were used in simulated environments for research purposes only.
“The tool was designed to provoke discussion, not controversy,” he explained in the post. “We’re at a point in history where rogue actors and nation-states are already deploying weaponized AI. If a college student can build something like PrismX, it raises urgent questions: Who’s watching the watchers?”
While Balaji doesn’t claim to be an expert in deradicalization, as an engineer he is interested in the ethical implications of surveillance technology. “Discomfort sparks debate. Debate leads to oversight. And oversight is how we prevent the misuse of emerging technologies,” he said.
Nor is this the first recent case of Redditors being used as guinea pigs. Just last month, researchers from the University of Zurich faced intense backlash after experimenting on an unsuspecting subreddit.
The research involved deploying AI-powered bots into the Change My View subreddit, which positions itself as a “place to post an opinion you accept may be flawed,” to see whether AI could be used to change people’s minds. When Redditors found out they were being experimented on without their knowledge, they weren’t impressed. Neither was the platform itself.
Ben Lee, Reddit’s chief legal officer, wrote in a post that neither Reddit nor the r/changemyview mods knew about the experiment ahead of time. “What this University of Zurich team did is deeply wrong on both a moral and legal level,” Lee wrote. “It violates academic research and human rights norms, and is prohibited by Reddit’s user agreement and rules, in addition to the subreddit rules.”
While PrismX is not currently being tested on real, unconsenting users, it adds to the ever-growing questions about the role of artificial intelligence in human spaces.