Open platform + Open access to AI = Open season for mischief makers
This is why we can’t have nice things: Wikipedia is at the center of an editing crisis, thanks to AI. People have started flooding the site with nonsensical information dreamed up by large language models like ChatGPT. But really, who didn’t see this coming?
Correction (10/11/24): Lebleu reached out to clarify a few details about the motivations of the user posting AI-generated material and the models involved. Those changes have been applied to the original text. The updated article continues below.
Wikipedia has a new initiative called WikiProject AI Cleanup. It is a task force of volunteers currently combing through Wikipedia articles, editing or removing false information that appears to have been posted by people using generative AI.
Ilyas Lebleu, a founding member of the cleanup crew, told 404 Media that the crisis began when Wikipedia editors and users started spotting passages that were unmistakably written by a chatbot. The team confirmed the theory by recreating similar passages using ChatGPT.
“We noticed an unnatural writing style that showed clear signs of being AI-generated, and we were able to replicate it using ChatGPT,” said Lebleu, a founding member of Wikipedia’s AI Cleanup team. “Discovering some common AI catchphrases allowed us to quickly spot some of the most egregious examples of generated articles, which we then wanted to formalize into an organized project to compile our findings and techniques.”
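The catchphrase approach Lebleu describes can be sketched in a few lines. The phrase list and function below are illustrative assumptions for the sake of the example, not the Cleanup team’s actual wordlist or tooling:

```python
# Hypothetical illustration (not the Cleanup team's actual tooling):
# a naive scan for stock phrases that often betray chatbot-written text.
AI_CATCHPHRASES = [
    "as an ai language model",
    "as of my last knowledge update",
    "it is important to note that",
]

def flag_ai_phrases(text: str) -> list[str]:
    """Return the catchphrases found in `text`, case-insensitively."""
    lowered = text.lower()
    return [phrase for phrase in AI_CATCHPHRASES if phrase in lowered]

suspect = "As an AI language model, I cannot verify the fortress's history."
print(flag_ai_phrases(suspect))  # prints ['as an ai language model']
```

A real screening effort would of course combine signals like this with human review, since stock phrases alone produce false positives.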
For example, there was one article about an Ottoman fortress supposedly built in the 1400s called “Amberlisihar.” The 2,000-word article detailed the landmark’s location and construction. Unfortunately, Amberlisihar does not exist; the article was a complete hallucination, seeded with just enough factual detail to lend it some credibility. The team recognized it as a hoax and deleted it.
As for why this is happening, the cleanup crew believes there are three primary reasons.
Lebleu said that some editors add AI-generated content to promote themselves or to deliberately create hoaxes, while others may be misinformed, believing the content is accurate and constructive.
However, let’s be honest: two factors are the main contributors to this mess. The first is an inherent problem with Wikipedia’s model – anyone can be an editor. It is for this same reason that many universities do not accept papers that cite Wikipedia as a source.
The second is simply that the internet ruins everything. We’ve seen this time and again, particularly with AI applications. Remember Tay, Microsoft’s Twitter bot, which Microsoft pulled less than 24 hours after it began posting vulgar and racist tweets? More modern AI applications are just as easy to abuse, as we have seen with deepfakes, nonsensical AI-generated shovelware books on Kindle, and other shenanigans.
When the public has virtually unrestricted access to something, you can expect a small percentage of users to abuse it. When we are talking about 100 people, it may not be a big deal, but when it’s millions, you are going to have a problem. Sometimes it’s for illicit gain; other times it’s just because they can. Such is the case with Wikipedia’s current predicament.