

Twitter Is Looking to Re-Open its Account Verification Process, Seeks Feedback on New Guidelines



Get ready to state your case – after shutting down its public account verification applications back in 2017, Twitter says that it’s now looking to re-open the process, which could give you a chance to get your own blue checkmark, making you infinitely more important than others on the platform.

But hang on – as you can see, the actual policy around what verification means isn’t set in stone yet.

As Twitter notes, it’s currently seeking feedback from the community on what people expect the blue checkmark to represent, in order to revamp its previous, flawed process – one that ended up being a mess because Twitter employees applied the qualifying criteria inconsistently, with no clear rules on who should and shouldn’t be approved.

That’s what led to the initial pause on the public application process – because there was a level of confusion around what the blue checkmark meant, no one really knew who should be approved for one and who should not. That led to Twitter verifying the profile of a reported white supremacist leader – despite, at that time, looking to take more action against hate speech. Because there was a level of uncertainty over whether the badge signified ‘identity’ or ‘endorsement’, Twitter shut the whole thing down, and while certain accounts have still been verified since then, public applications have been off the cards entirely for three years.


Now they could be coming back – but Twitter first needs to ensure that there’s a clear understanding of what verification actually means, both internally and externally.

In order to address this, Twitter’s published a proposed overview of which accounts should be considered for verification, along with a new survey to seek feedback on its process.

The first tier of profiles that Twitter says should be eligible for verification are ‘Notable Accounts’, with the blue tick signifying that the account is an authentic representation of that person or entity, serving an immediate public purpose.

The six types of accounts Twitter has filed under this heading are:

  1. Government
  2. Companies, Brands and Non-Profit Organizations
  3. News
  4. Entertainment
  5. Sports
  6. Activists, Organizers, and Other Influential Individuals

Those make sense – marking official accounts in these categories serves a clear purpose, and while its public application process has been paused, Twitter has continued to approve verification for accounts in these categories.

But the more complex, and divisive arguments around verification come from the public, when people want their own blue checkmark. If you’ve got lots of followers and you spend a lot of time on Twitter, why shouldn’t you get your own checkmark, right?

This comes down to the core question around Twitter verification – does the blue checkmark simply signify identity, in which case anyone who can provide their ID documents should qualify? Or does it signify celebrity, which is a more nebulous and subjective criterion?

That’s what Twitter’s now trying to determine – in the public survey on expectations around verification, Twitter looks to glean insight into what people think the blue tick means.

Twitter verification survey

Twitter also seeks to clarify whether people see the verification badge as a general qualifier of identity, or as an endorsement of that person from Twitter. 


The distinction here is key – as noted, Twitter has previously given blue checkmarks to users of questionable background, and if people see that as Twitter giving their support to that person, as opposed to a marker of identity, then that’s a problem for the brand.

As such, Twitter needs to clarify what its badge actually represents – but even so, this doesn’t seem like the best way to address these issues and formulate a better policy.

Back in June, it seemed like Twitter was going to go with a new form of verification that would signify that a user had confirmed their identity, with reverse engineering expert Jane Manchun Wong uncovering this explanation.

Twitter verification explainer

That relates to ‘confirming’ your Twitter account, not verification exactly, which seemed like it could be a different form of profile badge. That would cater to those looking to confirm their identity, who were not considered to be in the top tier of qualifiers for account verification.

Maybe that’s what Twitter is looking to go with – as noted by Twitter:

“The blue verified badge isn’t the only way we are planning to distinguish accounts on Twitter. Heading into 2021, we’re committed to giving people more ways to identify themselves, such as new account types and labels. We’ll share more in the coming weeks.”

Maybe, then, Twitter will make account verification available to ‘Notable Accounts’ only – those which fit a strict set of criteria around celebrity or status – while it could also add another tier for accounts that have confirmed their ID documents, with a different type of badge.


That seems to cater to the key elements – and while there will still be debates over what counts as ‘notable’, the criteria listed above seem fairly clear. The ‘entertainment’ category could lead to questions, as could the ‘other influential individuals’ marker. But if Twitter has an internal review panel to approve such requests, that could be a more workable solution.

Twitter has also been looking to add dedicated badges for bot accounts, to provide more transparency in interactions, and that could be another category it looks to implement. 

So we could soon have three different types of account badges to signify who or what each is.

Will that clear things up? Probably not, especially given that many people who’ve already been approved for verification, but who wouldn’t qualify under these revised guidelines, will still be circulating on the platform, maintaining a level of confusion over what the badge means.

But maybe, if Twitter revises the rules and removes the badges from those who no longer qualify based on the update, that could work. 

Maybe. Still seems like it’ll cause a lot of headaches.

You can take the Twitter verification survey yourself here.



Meta’s Developing an ‘Ethical Framework’ for the Use of Virtual Influencers




With the rise of digital avatars, and indeed, fully digital characters that have evolved into genuine social media influencers in their own right, online platforms now have an obligation to establish clear markers as to what’s real and what’s not, and how such creations can be used in their apps.

The coming metaverse shift will further complicate this, with the rise of virtual depictions blurring the lines of what will be allowed, in terms of representation. But with many virtual influencers already operating, Meta is now working to establish ethical boundaries on their application.

As explained by Meta:

“From synthesized versions of real people to wholly invented “virtual influencers” (VIs), synthetic media is a rising phenomenon. Meta platforms are home to more than 200 VIs, with 30 verified VI accounts hosted on Instagram. These VIs boast huge follower counts, collaborate with some of the world’s biggest brands, fundraise for organizations like the WHO, and champion social causes like Black Lives Matter.”

Some of the more well-known examples on this front are Shudu, who has more than 200k followers on Instagram, and Lil’ Miquela, who has an audience of over 3 million in the app.

At first glance, you wouldn’t necessarily realize that these are not actual people, which makes such characters a great vehicle for brand and product promotions, as they can be utilized 24/7, and can be placed into any environment. But that also leads to concerns about body image perception, deepfakes, and other forms of misuse through false or unclear representation.


Deepfakes, in particular, may be problematic, with Meta citing this campaign, featuring English football star David Beckham, as an example of how the underlying technologies are evolving – in this case, by making a speaker appear to deliver a message in multiple languages.

The well-known ‘DeepTomCruise’ account on TikTok is another example of just how far these technologies have come, and it’s not hard to imagine a scenario where they could be used to, say, show a politician saying or doing something that he or she actually didn’t, which could have significant real world impacts.

Which is why Meta is working with developers and experts to establish clearer boundaries on such use – because while there is potential for harm, there are also beneficial uses for such depictions.

Imagine personalized video messages that address individual followers by name. Or celebrity brand ambassadors appearing as salespeople at local car dealerships. A famous athlete would make a great tutor for a kid who loves sports but hates algebra.

Such use cases will increasingly become the norm as VR and AR technologies are developed, with these platforms placing digital characters front and center, and establishing new norms for digital connection.

It would be better to know what’s real and what’s not, and as such, Meta needs clear regulations to remove dishonest depictions, and enforce transparency over VI use.

But then again, much of what you see on Instagram these days is not real, with filters and editing tools altering people’s appearance well beyond what’s normal, or realistic. That can also have damaging consequences, and while Meta’s looking to implement rules on VI use, there’s arguably a case for similar transparency in editing tools applied to posted videos and images as well.


That’s a more complex element, particularly as such tools also enable people to feel more comfortable in posting, which no doubt increases their in-app activity. Would Meta be willing to put more focus on this element if it could risk impacting user engagement? The data on the impact of Instagram on people’s mental health are pretty clear, with comparison being a key concern.

Should that also come under the same umbrella of increased digital transparency?

It’s seemingly not included in the initial framework as yet, but at some stage, this is another element that should be examined, especially given the harmful effects that social media usage can have on young women.

But however you look at it, this is no doubt a rising element of concern, and it’s important for Meta to build guardrails and rules around the use of virtual influencers in their apps.

You can read more about Meta’s approach to virtual influencers here.




Meta Publishes New Guide to the Various Security and Control Options in its Apps




Meta has published a new set of safety tips for journalists, designed to help them protect themselves in the evolving online connection space. For the most part, the tips also apply to all users more broadly, providing a comprehensive overview of the various tools and processes Meta has in place to help people avoid unwanted attention online.

The 32-page guide is available in 21 different languages, and provides detailed overviews of Meta’s systems and profile options for protection and security, with specific sections covering Facebook, Instagram and WhatsApp.

The guide begins with the basics, including password protections and enabling two-factor authentication.

It also outlines tips for Page managers on securing their business profiles, along with notes on what to do if you’ve been hacked, advice for protection on Messenger, and guidance on bullying and harassment.

Meta security guide

For Instagram, there are also general security tips, along with notes on its comment moderation tools.

Meta security guide

While for WhatsApp, there are explainers on how to delete messages, how to remove messages from group chats, and details on platform-specific data options.

Meta security guide

There are also links to various additional resource guides and tools for more context, providing in-depth breakdowns of when and how to action the various options.

It’s a handy guide, and while there are some journalist-specific elements included, most of the tips apply to any user, so it could well be a valuable resource for anyone looking to get a better handle on their various privacy tools and options.

Definitely worth knowing either way – you can download the full guide here.





Twitter bans account linked to Iran leader over video threatening Trump




Iran’s supreme leader Ayatollah Ali Khamenei meets with relatives of slain commander Qasem Soleimani ahead of the second anniversary of his death in a US drone strike in Iraq – Copyright POOL/AFP/File Tom Brenner

Twitter said Saturday it had permanently suspended an account linked to Iran’s supreme leader after it posted a video calling for revenge against former US president Donald Trump over the assassination of a top general.

“The account referenced has been permanently suspended for violating our ban evasion policy,” a Twitter spokesperson told AFP.

The account, @KhameneiSite, this week posted an animated video showing an unmanned aircraft targeting Trump, who ordered a drone strike in Baghdad two years ago that killed top Iranian commander General Qassem Soleimani.

Supreme leader Ayatollah Ali Khamenei’s main accounts in various languages remain active. Last year, another similar account was suspended by Twitter over a post also appearing to reference revenge against Trump.

The recent video, titled “Revenge is Definite”, was also posted on Khamenei’s official website.

According to Twitter, the company’s top priority is keeping people safe and protecting the health of the conversation on the platform.

The social media giant says it has clear policies around abusive behavior and will take action when violations are identified.

As head of the Quds Force, the foreign operations arm of Iran’s Revolutionary Guards, Soleimani was the architect of its strategy in the Middle East.

He and his Iraqi lieutenant were killed by a US drone strike outside Baghdad airport on January 3, 2020.

Khamenei has repeatedly promised to avenge his death.

On January 3, the second anniversary of the strike, the supreme leader and ultraconservative President Ebrahim Raisi once again threatened the US with revenge.


Trump’s supporters regularly denounce the banning of the Republican billionaire from Twitter, underscoring that accounts of several leaders considered authoritarian by the United States are allowed to post on the platform.


