Wikipedia editors just adopted a new policy to help them deal with the slew of AI-generated articles flooding the online encyclopedia. The new policy, which gives an administrator the authority to quickly delete an AI-generated article that meets certain criteria, isn’t only important to Wikipedia; it’s also an important example of how to deal with the growing AI slop problem, coming from a platform that has so far managed to withstand the various forms of enshittification that have plagued the rest of the internet, 404 Media reported.

Wikipedia is maintained by a global, collaborative community of volunteer contributors and editors, and part of the reason it remains a reliable source of information is that this community takes a lot of time to discuss, deliberate, and argue about everything that happens on the platform, be it changes to individual articles or the policies that govern how those changes are made. 

It is normal for entire Wikipedia articles to be deleted, but the main deletion process usually requires a week-long discussion phase in which Wikipedia’s editors try to come to a consensus on whether to delete the article.

However, in order to deal with common problems that clearly violate Wikipedia’s policies, Wikipedia also has a “speedy deletion” process, in which one person flags an article, an administrator checks whether it meets certain criteria, and then deletes the article without the discussion period.

At the moment, most articles that Wikipedia editors flag as AI-generated fall into the former category, the full week-long discussion, because editors can’t be absolutely certain they were AI-generated. Ilyas Lebleu, a founding member of WikiProject AI Cleanup and an editor who contributed some of the critical language in the recently adopted policy on AI-generated articles and speedy deletion, said this is why previous proposals for regulating AI-generated articles on Wikipedia have struggled.

BBC reported: Wikipedia has lost a legal challenge to the new Online Safety Act rules it says could threaten the human rights and safety of its volunteer editors.

The Wikimedia Foundation – the non-profit which supports the online encyclopedia – wanted a judicial review of the regulations which could mean Wikipedia has to verify the identities of its users.

But the foundation said that despite the loss, the judgment “emphasized the responsibility of Ofcom and the UK government to ensure Wikipedia is protected.”

The government told the BBC it welcomed the High Court’s judgement, “which will help us continue to work implementing the Online Safety Act to create a safer online world for everyone.”

Judicial reviews challenge the lawfulness of the way a decision has been made by a public body.

In this case, the Wikimedia Foundation and a Wikipedia editor tried to challenge the way in which the government decided to make regulations covering which sites should be classed as “Category 1” under the Online Safety Act – the strictest rules sites must follow.

It argued that the rules were logically flawed and too broad, meaning a policy intended to impose extra rules on large social media companies would instead apply to Wikipedia.

In particular, the foundation is concerned that the extra duties required if Wikipedia were classed as Category 1 would mean it would have to verify the identity of its contributors, undermining their privacy and safety.

The only alternative, it argued, would be to cut the number of people in the UK who could access the online encyclopedia by three-quarters, or to disable key functions on the site.

Politico reported: The U.K. High Court dismissed the Wikimedia Foundation’s challenge to parts of the country’s Online Safety Act on Monday, but suggested the nonprofit could have grounds for legal action in the future.

The Wikimedia Foundation, which operates Wikipedia, sought a judicial review of the Online Safety Act’s Categorization Regulations in May, arguing the rules risked subjecting Wikipedia to the most stringent “Category 1” duties intended for social media platforms.

The nonprofit was particularly concerned that under the OSA’s “Category 1” duties it would be forced to verify the identity of users — undermining their privacy — or else allow “potentially malicious” users to block unverified users from changing content, leading to vandalism and disinformation going unchecked.
