
Facebook Outlines New, Machine Learning Process to Improve the Accuracy of Community Standards Enforcement


Facebook is always working to improve its detection and enforcement efforts in order to remove content that breaks its rules, and keep users safe from abuse, misinformation, scams, etc. 

And its systems have significantly improved in this respect, as explained by Facebook:

“Online services have made great strides in leveraging machine-learned models to fight abuse at scale. For example, 99.5% of takedowns on fake Facebook accounts are proactively detected before users report them.”

But there are still significant limitations in its processes, mostly due to the finite capacity of human reviewers to assess and pass judgment on such instances. Machine learning tools can identify a growing number of issues, but human input is still required to confirm whether many of those detections are correct, because computer systems often miss the complex nuance of language.

But now, Facebook has a new system to assist in this respect:

“CLARA (Confidence of Labels and Raters) is a system built and deployed at Facebook to estimate the uncertainty in human-generated decisions. […] CLARA is used at Facebook to obtain more accurate decisions overall while reducing operational resource use.”

The system essentially augments human decision-making by adding a machine learning layer that assesses each individual rater’s capacity to make the right call on a given piece of content, based on their past accuracy.

[Image: Facebook’s CLARA system flow chart]

The CLARA element sits in the ‘Realtime Prediction Service’ stage of this flow chart, which assesses each incident by cross-checking the human ruling against what the machine model would have predicted, while also referencing both against each reviewer’s past results for that type of report.
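Facebook’s overview doesn’t spell out the exact math, but the core idea (weighting each reviewer’s vote by their historical accuracy, and combining that with the model’s own prediction) can be sketched in a few lines. This is a minimal, simplified illustration, not Facebook’s actual implementation, and all the accuracy and prediction figures below are hypothetical:

```python
import math

# Simplified sketch of CLARA-style confidence estimation. Illustrative only,
# not Facebook's actual implementation; all figures are hypothetical.

def label_confidence(votes, rater_accuracies, model_prior):
    """Estimate the probability that a piece of content violates policy.

    votes:            0/1 decisions from human reviewers
    rater_accuracies: each reviewer's historical accuracy (0-1), same order
    model_prior:      the ML model's predicted probability of a violation
    """
    # Start from the model's prediction, expressed as log-odds.
    log_odds = math.log(model_prior / (1 - model_prior))

    # Each vote shifts the log-odds, weighted by how reliable that reviewer
    # has historically been (a 50%-accurate reviewer adds no information).
    for vote, acc in zip(votes, rater_accuracies):
        weight = math.log(acc / (1 - acc))
        log_odds += weight if vote == 1 else -weight

    return 1 / (1 + math.exp(-log_odds))

# Two reliable reviewers say "violation", one weaker reviewer disagrees,
# and the model also leaned toward "violation":
p = label_confidence(votes=[1, 1, 0],
                     rater_accuracies=[0.95, 0.90, 0.60],
                     model_prior=0.7)
print(f"P(violation) = {p:.3f}")  # ~0.996: confident, no extra review needed
```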

That system, now deployed at Facebook, has delivered a significant improvement in efficiency, enabling more accurate enforcement decisions.


“Compared to a random sampling baseline, CLARA provides a better trade-off curve, enabling an efficient usage of labeling resources. In a production deployment, we found that CLARA can save up to 20% of total reviews compared to majority vote.”
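That saving presumably comes from dynamic stopping: rather than always collecting a fixed panel of votes, the system stops requesting reviews once the aggregate confidence clears a threshold. A rough illustration, reusing the hypothetical label_confidence() helper from the sketch above (numbers again invented):

```python
# Instead of a fixed three-reviewer majority vote, request reviewers one at
# a time and stop once the estimated confidence clears a threshold. Assumes
# the hypothetical label_confidence() helper defined in the sketch above.

def reviews_needed(votes, accuracies, model_prior, threshold=0.95):
    """Return how many reviews were required before confidence in either
    label exceeded the threshold."""
    for n in range(1, len(votes) + 1):
        p = label_confidence(votes[:n], accuracies[:n], model_prior)
        if p >= threshold or p <= 1 - threshold:
            return n                     # confident enough, stop early
    return len(votes)                    # used the whole reviewer pool

# A clear-cut case: the model is already fairly sure, and the first
# (reliable) reviewer agrees, so one review replaces three.
n = reviews_needed(votes=[1, 1, 1],
                   accuracies=[0.95, 0.90, 0.85],
                   model_prior=0.8)
print(f"reviews used: {n}")  # 1, where a fixed majority vote would use 3
```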

That efficiency is important right now, because Facebook has been forced to reduce its human moderation capacity due to COVID-19 lockdowns in different regions. By improving its systems for accurately detecting violations through automated means, Facebook can concentrate its resources on the key areas of concern, maximizing the manpower it has available.

Of course, there are still issues with Facebook’s systems. Just this week, reports emerged that Facebook is looking at a new way to use share velocity signals in order to better guide human moderation efforts, and stop misinformation, in particular, from reaching massive audiences on the platform. That comes after a recent COVID-19 conspiracy video was viewed some 20 million times on Facebook, despite violating the platform’s rules, before Facebook’s moderators finally moved to take it down.
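The reports don’t detail how a share velocity signal would work, but the general shape is straightforward: flag content whose share rate spikes for priority human review, before it can reach a mass audience. A purely hypothetical sketch, with an invented window and threshold:

```python
from datetime import datetime, timedelta

# Hypothetical sketch of a share-velocity trigger. The article only says
# Facebook is exploring such signals, so the one-hour window and the
# 500-shares threshold here are invented for illustration.

def needs_priority_review(share_timestamps, now,
                          window_hours=1, velocity_threshold=500):
    """Flag a post for priority human review if its share count in the
    trailing window exceeds the threshold."""
    window_start = now - timedelta(hours=window_hours)
    recent_shares = sum(1 for t in share_timestamps if t >= window_start)
    return recent_shares >= velocity_threshold

# e.g. needs_priority_review(post_share_times, now=datetime.utcnow())
# where post_share_times is a list of datetime objects, one per share.
```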

Improved human moderation wouldn’t have helped in that case, so there are still other areas of concern for Facebook to address. But the smarter it can be about utilizing the resources it has available, the better Facebook can focus its efforts on detecting and removing violating content before it reaches big audiences.

You can read more about Facebook’s new CLARA review process here.



YouTube Tests Improved Comment Removal Notifications, Updated Video Performance and Hashtag Insights


YouTube’s looking to provide more context on content removals and violations, while it’s also experimenting with a new form of analytics on average video performance benchmarks, along with improved hashtag discovery, which could impact your planning and process.

First off, on policy violations – YouTube’s looking to provide more context on comment removals via an updated system that will link users through to the exact policy that they’ve violated when a comment is removed.

As explained by YouTube’s Conor Kavanagh:

“Many users have told us that they would like to know if and when their comment has been removed for violating one of our Community Guidelines. Additionally, we want to protect creators from a single user’s ability to negatively impact the community via comments, either on a single channel or multiple channels.”

The new comment removal notification aims to address this, by providing more context as to when a comment has been removed for violating the platform’s Community Guidelines.

Expanding on this, YouTube will also put some users into timeout if they keep breaking the rules. Literally:

“If someone leaves multiple abusive comments, they may receive a temporary timeout which will block the ability to comment for up to 24 hours.”


YouTube says that this will hopefully reduce the number of abusive comments across the platform, while also adding more transparency to the process, in order to help people understand how they’ve broken the rules, which could also help to guide future behavior.

On a similar note, YouTube’s also expanding its test of timestamps in Community Guidelines policy violation notifications for publishers, which provide more specific details on when a violation has occurred in video clips.

Initially only available for violations of its ‘Harmful and Dangerous’ policy, YouTube’s now expanding these notifications to violations related to ‘Child Safety’, ‘Suicide and Self-Harm’, and ‘Violent or Graphic’ content.

“If you’re in the experiment, you’ll see these timestamps in YouTube Studio as well as over email if we believe a violation has occurred. We hope these timestamps are useful in understanding why your video violated our policies and we hope to expand to more policies over time.”

On another front, YouTube’s also testing a new analytics card in YouTube Studio which will show creators the typical number of views they get on different formats, including VODs, Shorts, and live streams.

[Image: YouTube average video performance card]

As you can see in this example, the new data card will provide insight into the average number of views you see in each format, based on your last 10 uploads in each, which could provide more comparative context on performance.
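The benchmark itself appears to be a simple per-format rolling average: take the view counts of the 10 most recent uploads of each type and average them. A rough sketch, with hypothetical field names and figures:

```python
from collections import defaultdict

# Rough sketch of the benchmark the new card appears to show: the mean view
# count of a channel's 10 most recent uploads in each format. Field names
# and figures are hypothetical.

def average_views_by_format(uploads, last_n=10):
    """uploads: dicts with 'format', 'published_at', 'views'.
    Returns {format: average views across the last_n uploads}."""
    by_format = defaultdict(list)
    newest_first = sorted(uploads, key=lambda u: u["published_at"],
                          reverse=True)
    for upload in newest_first:
        views = by_format[upload["format"]]
        if len(views) < last_n:
            views.append(upload["views"])
    return {fmt: sum(v) / len(v) for fmt, v in by_format.items()}

uploads = [
    {"format": "Shorts", "published_at": "2024-06-03", "views": 1200},
    {"format": "VOD",    "published_at": "2024-06-01", "views": 5400},
    {"format": "Shorts", "published_at": "2024-05-28", "views": 900},
]
print(average_views_by_format(uploads))  # {'Shorts': 1050.0, 'VOD': 5400.0}
```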

Finally, YouTube’s also launched a test that aims to showcase more relevant hashtags on video clips.

“We’re launching an experiment to elevate the hashtags on a video’s watch page that we’ve found viewers are interested in, instead of just the first few added to the video’s description. Hashtags are still chosen by creators themselves – nothing is changing there – the goal of the experiment is simply to drive more engagement with hashtags while connecting viewers with content they will likely enjoy.”

So YouTube will be looking to highlight the most relevant hashtags on each clip, as a means to better connect users with more content on the same topic.
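YouTube hasn’t said how viewer interest is measured, but conceptually this is a re-ranking step: sort a video’s hashtags by an engagement signal rather than by the order they appear in the description. A toy illustration, with invented click-through figures:

```python
# Toy illustration of the experiment: rank a video's hashtags by an
# engagement signal (here, a historical click-through rate) rather than by
# description order. The CTR figures are invented; YouTube hasn't disclosed
# what signal it actually uses.

def rank_hashtags(hashtags, ctr_by_tag, show_n=3):
    """Return the show_n hashtags viewers have engaged with most."""
    return sorted(hashtags,
                  key=lambda tag: ctr_by_tag.get(tag, 0.0),
                  reverse=True)[:show_n]

tags = ["#vlog", "#daily", "#mountainbiking", "#trailriding", "#gopro"]
ctr = {"#mountainbiking": 0.08, "#gopro": 0.05, "#trailriding": 0.04}
print(rank_hashtags(tags, ctr))
# ['#mountainbiking', '#gopro', '#trailriding'], not the first tags listed
```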


This could put more emphasis on hashtag use, so it could be time to upgrade your hashtag research approach in line with the latest trending topics.

All of these updates are fairly minor, but they could impact your YouTube approach, and it’s worth considering the potential impacts in your process.
