X Looks to Improve Content Moderation After Issues with AI Images and Bot Farms


Content moderation remains a major challenge on X, despite owner Elon Musk insisting that its crowd-sourced Community Notes are the key solution for combatting harmful content.

Last week, AI-generated images of singer Taylor Swift being sexually assaulted by NFL fans gained huge traction on X, reaching over 27 million views and 260,000 likes before the originating account was suspended.

Swift is now reportedly exploring legal action against X and the creator of the content, while X, having been unable to stop the images from spreading despite that initial suspension, has responded by banning all searches for “Taylor Swift” in the app.

Which is not exactly a great endorsement of the effectiveness of its Community Notes approach. And while this content is in violation of X’s Sensitive Media policy, and would therefore be removed regardless of any Community Notes being issued, the fact that X hasn’t been able to stop the images from spreading suggests that the platform may be leaning too heavily on its crowd-sourced moderation approach, as opposed to hiring its own content moderators.

Which X is looking to address. Today, X announced that it’s building a new, 100-person content moderation center in Texas, which will focus on child sexual abuse content, but will also be tasked with managing other elements.

That’s seemingly an admission that Community Notes can’t be relied upon to do all the heavy lifting in this respect. But at the same time, X’s new “freedom of speech, not reach” approach is centered on the idea that its user community should decide what’s acceptable and what’s not in the app, and that there shouldn’t be a central arbiter of moderation decisions, as there had been on Twitter in the past.

Community Notes, at least in theory, addresses this, but clearly, more needs to be done to tackle the broader spread of harmful material. While, at the same time, X’s claims that it’s eradicating bots have also come under more scrutiny.

As reported by The Guardian, the German Government has uncovered a vast network of Russian-originated bots in the app, which were coordinating to seed anti-Ukraine sentiment among German users.

As per The Guardian:

Using specialized monitoring software, the experts uncovered a huge trail of posts over a one-month period from 10 December, which amounted to a sophisticated and concerted onslaught on Berlin’s support for Ukraine. More than 1m German-language posts were sent from an estimated 50,000 fake accounts, amounting to a rate of two every second. The overwhelming tone of the messages was the suggestion that the government of Olaf Scholz was neglecting the needs of Germans as a result of its support for Ukraine, both in terms of weapons and aid, as well as by taking in more than a million refugees.

X has been working to eradicate bot farms of this type by using “payment verification” as a means to ensure that real people are behind every profile in the app, both by pushing users towards its X Premium verification program, and through a new test of a $1 fee to engage in the app.

In theory, that should make bot programs like this increasingly cost-prohibitive, thereby limiting their use. If the $1 fee were in place in Germany, for example (it’s currently being tested in New Zealand and the Philippines), it would have cost this operation $50,000 in fees alone just to get its 50,000 fake accounts started.

Though, evidently, that also hasn’t been the impediment that X had hoped it would be, with various verified bot profiles still posting automated messages in the app.

Essentially, X’s solutions for tackling content moderation and bots, the two issues that Elon Musk has repeatedly cited as his main drivers in evolving the app, have thus far not worked out as planned. Which has led to distrust among ad partners and regulators, and broader concerns about the platform’s shift away from human moderation.

X clearly needs to improve on both fronts, and as noted, it has seemingly acknowledged this by announcing plans for more human moderators. But that also comes with increased costs, and with X’s margins already being crushed due to key ad partners pausing their campaigns, it has some work ahead of it to get its systems on the right track.

Content moderation is a major challenge for every platform, and it always seemed unlikely that X would be able to cull 80% of its team and still maintain the operational capacity to police these elements.

Maybe, through improved machine learning, it can still keep costs down and enhance its monitoring systems. But it’s another challenge for the Musk-owned app, which could see more users and brands looking elsewhere.
