We’ve all heard the adage “Every coin has two sides.” The same is true for AI: just as it has benefits, it can also cause harm if trained improperly. Microsoft learned the dangers of inadvertently building a racist AI with its Tay chatbot, but what happens when a model is deliberately pointed at a toxic forum? Someone found out. Yannic Kilcher, an AI researcher and YouTuber, trained a model on 3.3 million posts from 4chan’s infamously toxic Politically Incorrect (/pol/) board.
Kilcher then turned the AI loose on the board, deploying the model in 10 bots that unleashed a wave of hate. In 24 hours the bots produced 15,000 posts that frequently featured or engaged with racist content; by Kilcher’s count, they accounted for more than 10 percent of the posts on /pol/ that day. After Kilcher posted his video and shared a copy of the model on Hugging Face (a sort of GitHub for AI), ethicists and AI researchers voiced alarm.
Dubbed GPT-4chan (a nod to OpenAI’s GPT-3), the model learned not just the vocabulary of /pol/ posts but an overall tone that Kilcher described as a blend of “offensiveness, nihilism, trolling, and profound suspicion.” He sidestepped 4chan’s defenses against proxies and VPNs, and even used a VPN to make the bots’ posts appear to originate from the Seychelles.
The AI made a few blunders, such as posting blank messages, but it was convincing enough that it took roughly two days for many users to realize something was wrong. According to Kilcher, many forum users only ever noticed one of the bots, and the model sowed enough paranoia that people were still accusing one another of being bots days after he took them offline.
In an interview with The Verge, Kilcher described the experiment as a “prank,” not research, and it serves as a reminder that a trained AI is only as good as the data it is fed. The deeper concern, though, is how Kilcher distributed his work. While he withheld the bot code, he did share a somewhat neutered version of the model on Hugging Face. Hugging Face restricted access to it as a precaution, since visitors could have reproduced the AI for malicious purposes. The experiment raised obvious ethical questions, and Kilcher himself said he should focus on “far more positive” work in the future.
Many leading experts worry about the harms AI could cause as it advances, and as a result more effort is being made to build ethics into AI research. Stunts like this one demonstrate the damage AI can do if we are not vigilant. As our world evolves, AI ethics will play an essential role in enabling us to use AI productively.
- Video: https://www.youtube.com/watch?v=efPrtcLdcdM