
YouTube Outlines its Evolving Efforts to Combat the Spread of Harmful Misinformation



YouTube has published a new overview of its evolving efforts to combat the spread of misinformation on the platform, which sheds some light on the various challenges it faces, and how it’s weighing its options in managing these concerns.

It’s a critical issue: YouTube, along with Facebook, is regularly identified as a key source of misleading and potentially harmful content, with viewers sometimes led down ever-deeper rabbit holes of misinformation via YouTube’s recommendations.

YouTube says that it is working to address this, and is focused on three key elements in this push.

The first element is catching misinformation before it gains traction, which YouTube explains is particularly challenging with newer conspiracy theories and misinformation pushes, because it can’t update its automated detection systems without a significant body of example content to train them on.

Automated detection is built on examples. For older conspiracy theories this works very well, because YouTube has enough data to train its classifiers on what to detect and limit, but newer narratives present a different challenge.
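
To illustrate the data dependence YouTube is describing, here’s a minimal sketch of the kind of supervised text classifier such detection systems are built on, using scikit-learn. This is an illustrative toy, not YouTube’s actual pipeline, and all of the example data is invented:

```python
# Illustrative only: a toy text classifier of the kind that underpins
# automated detection. YouTube's real systems are far more complex.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: 1 = known misinformation narrative, 0 = benign.
# A mature conspiracy theory yields thousands of such examples; a brand-new
# narrative yields almost none, which is exactly the problem described above.
train_texts = [
    "the moon landing was staged on a film set",  # 1
    "5g towers secretly spread the virus",        # 1
    "how to bake sourdough bread at home",        # 0
    "highlights from last night's game",          # 0
]
train_labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# With so little training data, confidence on a *new* narrative is unreliable.
print(model.predict_proba(["a brand-new claim no one has labeled yet"]))
```

The point of the toy is the data dependence: the classifier can only flag what resembles its labeled examples, which is why brand-new narratives slip through.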

YouTube says that it’s considering various ways to update its processes on this front, and limit the spread of evolving harmful content, particularly around developing news stories.  

“For major news events, like a natural disaster, we surface developing news panels to point viewers to text articles for major news events. For niche topics that media outlets might not cover, we provide viewers with fact check boxes. But fact checking also takes time, and not every emerging topic will be covered. In these cases, we’ve been exploring additional types of labels to add to a video or atop search results, like a disclaimer warning viewers there’s a lack of high quality information.”
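
One way to picture the tiered approach in that quote is as a simple decision ladder: news panel first, fact check box if one exists, generic disclaimer as a fallback. The sketch below is hypothetical, with invented field and label names, not YouTube’s implementation:

```python
# Hypothetical sketch of the tiered labeling YouTube describes: developing
# news panels for major events, fact check boxes where a check exists, and
# a generic disclaimer as a fallback. All names here are invented.
def choose_label(topic: dict) -> str | None:
    if topic.get("is_major_news_event"):
        return "developing_news_panel"        # point viewers to text articles
    if topic.get("fact_check_available"):
        return "fact_check_box"               # surface the published fact check
    if not topic.get("high_quality_sources", True):
        return "low_quality_info_disclaimer"  # warn that reliable info is scarce
    return None                               # no label needed

print(choose_label({"is_major_news_event": True}))  # developing_news_panel
```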


Those additional labels, ideally, will expand YouTube’s capacity to detect and limit emerging narratives, though this will always remain a challenge in many respects.


The second element of focus is cross-platform sharing, and the amplification of YouTube content outside of YouTube itself.

YouTube says that it can implement all the changes it wants within its own app, but if people re-share videos on other platforms, or embed YouTube content on other websites, it’s much harder for YouTube to restrict that spread, which creates further mitigation challenges.

“One possible way to address this is to disable the share button or break the link on videos that we’re already limiting in recommendations. That effectively means you couldn’t embed or link to a borderline video on another site. But we grapple with whether preventing shares may go too far in restricting a viewer’s freedoms. Our systems reduce borderline content in recommendations, but sharing a link is an active choice a person can make, distinct from a more passive action like watching a recommended video.”

This is a key point: while YouTube wants to restrict content that could promote harmful misinformation, if that content doesn’t technically break the platform’s rules, how far can YouTube go in limiting it without overstepping the line?

At the same time, sharing remains a significant vector for harm if left unchecked, so YouTube needs to do something, but the trade-offs are considerable.
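
As a rough illustration of what “breaking the link” could look like mechanically, here’s a hypothetical sketch of a response-shaping step that strips sharing and embedding affordances from videos flagged as borderline. The field names are invented; this is not YouTube’s API:

```python
# Hypothetical sketch: strip sharing and embedding affordances from a
# video's API response when it has been classified as borderline.
# Invented field names, purely for illustration.
def shape_video_response(video: dict) -> dict:
    response = dict(video)
    if video.get("classification") == "borderline":
        response["share_url"] = None       # no shareable link
        response["embed_allowed"] = False  # block third-party embeds
    return response

print(shape_video_response(
    {"id": "abc123", "classification": "borderline", "share_url": "https://example"}
))
```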

“Another approach could be to surface an interstitial that appears before a viewer can watch a borderline embedded or linked video, letting them know the content may contain misinformation. Interstitials are like a speed bump – the extra step makes the viewer pause before they watch or share content. In fact, we already use interstitials for age-restricted content and violent or graphic videos, and consider them an important tool for giving viewers a choice in what they’re about to watch.”

Each of these proposals would be seen by some as overstepping, but they could also limit the spread of harmful content. At what point, then, does YouTube become a publisher, which could bring it under existing editorial rules and processes?
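
To make the “speed bump” idea concrete, here’s a hypothetical sketch of interstitial gating, where playback of a borderline embedded or linked video requires an explicit viewer acknowledgement first. Again, the names are invented, and this is not YouTube’s implementation:

```python
# Hypothetical sketch of the interstitial "speed bump": playback of a
# borderline embedded or linked video requires the viewer to acknowledge
# a warning first. Invented names; not YouTube's implementation.
def resolve_playback(video: dict, viewer_acknowledged: bool) -> dict:
    if video.get("classification") == "borderline" and not viewer_acknowledged:
        # The extra step that makes the viewer pause before watching.
        return {
            "action": "show_interstitial",
            "message": "This video may contain misinformation. Watch anyway?",
        }
    return {"action": "play"}

print(resolve_playback({"classification": "borderline"}, viewer_acknowledged=False))
print(resolve_playback({"classification": "borderline"}, viewer_acknowledged=True))
```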


There are no easy answers in any of these categories, but it’s interesting to consider the various elements at play.


Lastly, YouTube says that it’s expanding its misinformation efforts globally, which means accounting for widely varying attitudes and approaches towards information sources.

“Cultures have different attitudes towards what makes a source trustworthy. In some countries, public broadcasters like the BBC in the U.K. are widely seen as delivering authoritative news. Meanwhile in others, state broadcasters can veer closer to propaganda. Countries also show a range of content within their news and information ecosystem, from outlets that demand strict fact-checking standards to those with little oversight or verification. And political environments, historical contexts, and breaking news events can lead to hyperlocal misinformation narratives that don’t appear anywhere else in the world. For example, during the Zika outbreak in Brazil, some blamed the disease on international conspiracies. Or recently in Japan, false rumors spread online that an earthquake was caused by human intervention.”

The only way to combat this is to hire more staff in each region, and build more localized content moderation teams and processes that can factor in regional nuance. Even then, there are questions around how restrictions apply across borders: should a warning shown on content in one region also appear in others?
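
One way to frame that cross-border question is as a label-scoping problem: is a warning attached globally, or only for specific regions? The sketch below is a hypothetical illustration of that distinction, with an invented data structure:

```python
# Hypothetical sketch of region-scoped labels: a warning applied in one
# region may or may not propagate elsewhere, depending on its scope.
# Invented structure, purely illustrative.
labels = {
    "video_123": [
        {"label": "earthquake_rumor_warning", "scope": ["JP"]},       # regional only
        {"label": "low_quality_info_disclaimer", "scope": "global"},  # everywhere
    ]
}

def labels_for_region(video_id: str, region: str) -> list[str]:
    return [
        entry["label"]
        for entry in labels.get(video_id, [])
        if entry["scope"] == "global" or region in entry["scope"]
    ]

print(labels_for_region("video_123", "JP"))  # both labels
print(labels_for_region("video_123", "BR"))  # only the global disclaimer
```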

Again, there are no definitive answers, and it’s interesting to consider the varying challenges YouTube faces here, as it works to evolve its processes.

You can read YouTube’s full overview of its evolving misinformation mitigation efforts here.






Meta’s Adding More Ad Targeting Information to its Ad Library Listings


In the wake of the Cambridge Analytica scandal, Meta has implemented a range of data protection measures to limit access to users’ personal data and insights, while at the same time working to provide more transparency into how its systems are being used by different groups to target their messaging.

These conflicting approaches require a delicate balance, one which Meta has largely been able to maintain via its Ad Library, which enables anyone to see the ads that any Facebook Page has run in the recent past.

Now, Meta’s looking to add to that insight, with new information in the Ad Library on how Pages are targeting their social issue, electoral or political ads.

[Image: Meta ad targeting information in the updated Ad Library listing]

As you can see here, the updated Ad Library overview will include more specific information on how each advertiser is using these more sensitive targeting options, which could help researchers detect misuse or report concerns.

As explained by Meta:

“At the end of this month, detailed targeting information for social issue, electoral or political ads will be made available to vetted academic researchers through the Facebook Open Research and Transparency (FORT) environment […] Coming in July, our publicly available Ad Library will also include a summary of targeting information for social issue, electoral or political ads run after launch. This update will include data on the total number of social issue, electoral and political ads a Page ran using each type of targeting (such as location, demographics and interests) and the percentage of social issue, electoral and political ad spend used to target those options.”
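
Going by Meta’s description, the published summary amounts to a per-Page aggregation: for each targeting type, the number of ads that used it and the share of total spend behind them. Here’s a hypothetical sketch of that computation, with an invented record format (note the percentages can overlap, since one ad can use several targeting options):

```python
# Hypothetical sketch of the kind of summary Meta describes: per targeting
# type, count the ads that used it and the percentage of total ad spend
# behind them. The record format is invented for illustration.
from collections import defaultdict

ads = [
    {"id": "ad1", "spend": 500.0, "targeting": ["location", "interests"]},
    {"id": "ad2", "spend": 300.0, "targeting": ["demographics"]},
    {"id": "ad3", "spend": 200.0, "targeting": ["location"]},
]

total_spend = sum(ad["spend"] for ad in ads)
counts: dict[str, int] = defaultdict(int)
spend: dict[str, float] = defaultdict(float)

for ad in ads:
    for option in ad["targeting"]:
        counts[option] += 1
        spend[option] += ad["spend"]

for option in counts:
    pct = 100 * spend[option] / total_spend
    print(f"{option}: {counts[option]} ads, {pct:.0f}% of spend")
# location: 2 ads, 70% of spend
# interests: 1 ads, 50% of spend
# demographics: 1 ads, 30% of spend
```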

That’s a significant update for Meta’s ad transparency efforts, which will help researchers better understand key trends in ad usage, and how they relate to messaging resonance and response.


Meta has come under scrutiny on this front in the past, with independent investigations finding that housing ads, for example, were illegally using race-based exclusions in their targeting. That led Meta to change its rules on how exclusions can be used, and this new expansion could eventually prompt similar changes, by making discriminatory ad targeting easier to identify, with direct examples from Meta’s system.


For regular advertisers, it could also provide additional insight into your competitors’ tactics. You might find more detailed information on how other brands are homing in on specific audiences, which may not be discriminatory, but may highlight new angles for your own marketing efforts.

It’s a good transparency update, which should yield significant benefits for researchers trying to better understand how Meta’s intricate ad targeting system is being used in various ways.

