Meta’s Developing an ‘Ethical Framework’ for the Use of Virtual Influencers

With the rise of digital avatars, and indeed, fully digital characters that have evolved into genuine social media influencers in their own right, online platforms now have an obligation to establish clear markers as to what’s real and what’s not, and how such creations can be used in their apps.

The coming metaverse shift will further complicate this, with the rise of virtual depictions blurring the lines of what will be allowed in terms of representation. But with many virtual influencers already operating, Meta is now working to establish ethical boundaries on their application.

As explained by Meta:

“From synthesized versions of real people to wholly invented ‘virtual influencers’ (VIs), synthetic media is a rising phenomenon. Meta platforms are home to more than 200 VIs, with 30 verified VI accounts hosted on Instagram. These VIs boast huge follower counts, collaborate with some of the world’s biggest brands, fundraise for organizations like the WHO, and champion social causes like Black Lives Matter.”

Some of the more well-known examples on this front are Shudu, who has more than 200k followers on Instagram, and Lil’ Miquela, who has an audience of over 3 million in the app.

At first glance, you wouldn’t necessarily realize that these aren’t actual people, which makes such characters a great vehicle for brand and product promotions, as they can be utilized 24/7, and can be placed into any environment. But that also leads to concerns about body image perception, deepfakes, and other forms of misuse through false or unclear representation.

Deepfakes, in particular, may be problematic, with Meta citing this campaign, featuring English football star David Beckham, as an example of how new technologies are evolving to expand the use of language, as one element, for varying purposes – in that campaign, Beckham appears to deliver his message in languages he doesn’t actually speak.

The well-known ‘DeepTomCruise’ account on TikTok is another example of just how far these technologies have come, and it’s not hard to imagine a scenario where they could be used to, say, show a politician saying or doing something that he or she actually didn’t, which could have significant real-world impacts.

Which is why Meta is working with developers and experts to establish clearer boundaries on such use – because while there is potential for harm, there are also beneficial uses for such depictions.

Imagine personalized video messages that address individual followers by name. Or celebrity brand ambassadors appearing as salespeople at local car dealerships. A famous athlete would make a great tutor for a kid who loves sports but hates algebra.

Such use cases will increasingly become the norm as VR and AR technologies are developed, with these platforms placing digital characters front and center, and establishing new norms for digital connection.

It would be better to know what’s real and what’s not, and as such, Meta needs clear regulations to remove dishonest depictions, and enforce transparency over VI use.

But then again, much of what you see on Instagram these days is not real, with filters and editing tools altering people’s appearance well beyond what’s normal, or realistic. That can also have damaging consequences, and while Meta’s looking to implement rules on VI use, there’s arguably a case for similar transparency in editing tools applied to posted videos and images as well.

That’s a more complex element, particularly as such tools also enable people to feel more comfortable in posting, which no doubt increases their in-app activity. Would Meta be willing to put more focus on this element if it could risk impacting user engagement? The data on the impact of Instagram on people’s mental health are pretty clear, with comparison being a key concern.

Should that also come under the same umbrella of increased digital transparency?

It’s seemingly not included in the initial framework as yet, but at some stage, this is another element that should be examined, especially given the harmful effects that social media usage can have on young women.

But however you look at it, this is no doubt a rising element of concern, and it’s important for Meta to build guardrails and rules around the use of virtual influencers in its apps.

You can read more about Meta’s approach to virtual influencers here.

Twitter Faces Advertiser Boycott Due to Failures to Police Child Abuse Material

Twitter’s no good, very bad year continues, with the company this week being forced to inform some advertisers that their ads had been displayed in the app alongside tweets soliciting child pornography and other abuse material.

As reported by Reuters:

“Brands ranging from Walt Disney, NBCUniversal and Coca-Cola, to a children’s hospital, were among some 30 advertisers that have appeared on the profile pages of Twitter accounts that peddle links to the exploitative material.”

The discovery was made by cybersecurity group Ghost Data, which worked with Reuters to uncover the ad placement concerns, dealing another big blow to the app’s ongoing business prospects.

Already in a state of disarray amid the ongoing Elon Musk takeover saga, and following recent revelations from its former security chief that it’s lax on data security and other measures, Twitter’s now also facing an advertiser exodus, with big brands including Dyson, Mazda and Ecolab suspending their Twitter campaigns in response.

Which, really, is the least concerning element about the discovery, with the Ghost Data report also identifying more than 500 accounts that openly shared or requested child sexual abuse material over a 20-day period.

Ghost Data says that Twitter failed to remove more than 70% of the accounts during the time of the study.

The findings raise further questions about Twitter’s inability, or unwillingness, to address potentially harmful material, with The Verge reporting late last month that Twitter ‘cannot accurately detect child sexual exploitation and non-consensual nudity at scale’.

That finding stemmed from an investigation into Twitter’s proposed plan to give adult content creators the ability to begin selling OnlyFans-style paid subscriptions in the app.

Rather than working to address the abundance of pornographic material on the platform, Twitter instead considered leaning into it – which would undoubtedly raise the risk factor for advertisers who do not want their promotions to appear alongside potentially offensive tweets.

Which is likely happening, at an even greater scale than this new report suggests, because Twitter’s own internal investigation into its OnlyFans-esque proposal found that:

“Twitter could not safely allow adult creators to sell subscriptions because the company was not – and still is not – effectively policing harmful sexual content on the platform.”

In other words, Twitter couldn’t risk facilitating the monetization of exploitative material in the app, and because it has no way of policing such content, it had to scrap the proposal before it really gained any traction.

With that in mind, these new findings are no surprise – but again, the advertiser backlash is likely to be significant, which could force Twitter to launch a new crackdown either way.

For its part, Twitter says that it is investing more resources dedicated to child safety, ‘including hiring for new positions to write policy and implement solutions’.

So, great, Twitter’s taking action now. But these reports, based on investigations into Twitter’s own internal examinations, show that Twitter has been aware of this potential issue for some time – not child exploitation specifically, but adult content concerns more broadly, which it has no way of policing.

In fact, Twitter openly assists in the promotion of adult content, albeit inadvertently. For example, in the ‘For You’ section of my ‘Explore’ tab (i.e. the front page of Explore in the app), Twitter continuously recommends that I follow ‘Facebook’ as a topic, based on my tweets and the people I follow in the app.

Here are the tweets that it highlighted as some of the top topical tweets for ‘Facebook’ yesterday:

It’s not pornographic material as such, but I’m tipping that if I tap through on any of these profiles, I’ll find it pretty quickly. And again, these tweets are highlighted by Twitter’s own topical tweets algorithm, which surfaces tweets based on engagement with posts that mention the topic term. These completely unrelated and off-topic tweets are then being pushed by Twitter itself, to users who haven’t expressed any interest in adult content.
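
To make that mechanism concrete, here’s a minimal sketch of an engagement-ranked topic feed. This is a hypothetical reconstruction based purely on the behavior described above, not Twitter’s actual code; the Tweet fields and scoring weights are invented for illustration.

```python
# Hypothetical sketch of an engagement-ranked topic feed, modeled only on
# the behavior described above; not Twitter's actual implementation.
from dataclasses import dataclass


@dataclass
class Tweet:
    text: str
    likes: int
    retweets: int
    replies: int


def engagement_score(t: Tweet) -> float:
    # Weighted engagement proxy; the weights are invented for illustration.
    return t.likes + 2.0 * t.retweets + 1.5 * t.replies


def top_topical_tweets(tweets: list[Tweet], topic: str, k: int = 5) -> list[Tweet]:
    # Naive keyword match: any tweet that merely *mentions* the topic term
    # qualifies, with no relevance or content-safety check. This is how an
    # off-topic tweet that name-drops "Facebook" can surface in the topic feed.
    mentions = [t for t in tweets if topic.lower() in t.text.lower()]
    return sorted(mentions, key=engagement_score, reverse=True)[:k]


# Example: a spam tweet that merely mentions the topic outranks a relevant one.
feed = [
    Tweet("Facebook announced new VR features today", likes=40, retweets=5, replies=3),
    Tweet("hot pics, link in bio!! #Facebook", likes=900, retweets=300, replies=120),
]
print(top_topical_tweets(feed, "Facebook"))
```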

It’s clear, based on all the available evidence, that Twitter does have a porn problem, and it’s doing little to address it.

Distributors of adult content view Twitter as the best social network for advertising, because it’s less restrictive than Facebook, and has much broader reach than niche adult sites, while Twitter gains the usage and engagement benefits of hosting material that other social platforms would simply not allow.

Which is likely why it’s been willing to turn a blind eye to such for so long, to the point that it’s now being highlighted as a much bigger problem.

Though it is important to note that adult content, in itself, is not inherently problematic, among consenting adult users at least. It’s Twitter’s approach to child abuse and exploitative content that’s the real issue at hand.

And Twitter’s systems are reportedly ‘woefully inadequate’ in this respect.

As reported by The Verge:

“A 2021 report found that the processes Twitter uses to identify and remove child sexual exploitation material are woefully inadequate – largely manual at a time when larger companies have increasingly turned to automated systems that can catch material that isn’t flagged by PhotoDNA. Twitter’s primary enforcement software is “a legacy, unsupported tool” called RedPanda, according to the report. “RedPanda is by far one of the most fragile, inefficient, and under-supported tools we have on offer,” one engineer quoted in the report said.”

Indeed, additional analysis of Twitter’s CSE detection systems found that of the 1 million reports submitted each month, 84% contain newly discovered material – ‘none of which would be flagged’ by Twitter’s systems.
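
That gap follows from how hash-based detection works: tools like PhotoDNA compare a perceptual hash of each uploaded image against a database of hashes of previously identified material, so an image that has never been catalogued produces no match at all. Below is a minimal sketch of that general technique, using the open-source imagehash library as a stand-in for PhotoDNA; the known-hash set and distance threshold here are hypothetical.

```python
# Minimal sketch of hash-database matching, the general technique behind
# tools like PhotoDNA. Uses the open-source `imagehash` library as a
# stand-in; the known-hash set and distance threshold are hypothetical.
from PIL import Image
import imagehash

# In production this would be a large database of hashes of previously
# identified material, supplied by clearinghouses such as NCMEC.
KNOWN_HASHES: set[imagehash.ImageHash] = set()

MATCH_THRESHOLD = 5  # max Hamming distance to count as a match (illustrative)


def is_known_material(path: str) -> bool:
    # Perceptual hashes tolerate minor edits (resizing, re-encoding, crops),
    # so near-duplicates of catalogued images still match. A genuinely new
    # image matches nothing, which is why hash matching alone cannot flag
    # newly discovered material.
    h = imagehash.phash(Image.open(path))
    return any(h - known <= MATCH_THRESHOLD for known in KNOWN_HASHES)
```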

So while it’s advertisers that are putting the pressure back on the company in this instance, it’s clear that Twitter’s issues stem far beyond ad placement concerns alone.

Hitting Twitter’s bottom line, however, may be the only way to force the platform to take action – though it’ll be interesting to see just how willing and able Twitter is to enact a broader plan to address such amid its ongoing ownership battle.

Within its takeover agreement with Elon Musk, there’s a provision which states that Twitter needs to:

“Use its commercially reasonable efforts to preserve substantially intact the material components of its current business organization.”

In other words, Twitter can’t make any significant changes to its operational structure while it’s in the transition phase, which is currently in dispute as it heads for a courtroom battle with Musk.

Would initiating a significant update to its CSE detection models qualify as a substantial change – substantial enough to alter the operating structure of the company at the time of the initial agreement?

In essence, Twitter likely doesn’t want to make any major changes. But it might have to, especially if more advertisers join this new boycott, and push the company to take immediate action.

It’s likely to be a mess either way, but this is a huge concern for Twitter, which should be rightfully held to account for its systemic failures in this respect.
