

Facebook Announces Official Timeline for Trump Ban, Changes to Rules Around Political Speech



The door has been opened for former President Donald Trump to potentially return to Facebook, his key promotional platform of choice – though he will have to wait for two years (dating back to January 7th), and he will have to undergo an assessment to decide whether he should have his accounts reinstated.

The announcement comes as part of Facebook’s response to the latest ruling from its independent Oversight Board in relation to its decision to ban Trump back in January, in the wake of the Capitol riot. Following the incident, in which Facebook says that Trump instigated and incited the violent uprising via social media, The Social Network cut off Trump’s access to both Facebook and Instagram, a penalty that it’s maintained ever since.

Trump has been seeking to regain access to the platform, and his 32 million Facebook followers, and the Oversight Board afforded Trump an opportunity to share his perspective on the ban as part of its assessment process.

And now, Facebook has announced the next steps it will take in relation to the Oversight Board’s findings.

Here’s how Facebook’s rules around political leaders, and what they can share on Facebook, will change as a result.

First off, Facebook will now implement clearly defined penalties for suspensions, even those relating to significant incidents that could lead to broad-ranging social unrest.   

As explained by Facebook:

“The [Oversight Board] criticized the open-ended nature of [Trump’s] suspension, stating that “it was not appropriate for Facebook to impose the indeterminate and standardless penalty of indefinite suspension.” The board instructed us to review the decision and respond in a way that is clear and proportionate, and made a number of recommendations on how to improve our policies and processes.”

Based on this, Facebook has now established a clear framework around future incidents, with escalating penalties of up to two years for the most significant violations.

Facebook penalties

Given the nature of the Trump ban, Facebook puts this incident in its ‘most severe’ category, meaning it garners the most significant penalty available. Hence, Trump is now banned for two years, effective from the date of the initial suspension on January 7th.

But that doesn’t definitively mean that Trump will be able to start posting again on January 7th 2023:


“At the end of this period, we will look to experts to assess whether the risk to public safety has receded. We will evaluate external factors, including instances of violence, restrictions on peaceful assembly and other markers of civil unrest. If we determine that there is still a serious risk to public safety, we will extend the restriction for a set period of time and continue to re-evaluate until that risk has receded.”

So Trump could return to Facebook in 2023, just in time for a re-election campaign, with the top job up again in 2024. But Facebook could also decide that he still poses a significant risk to public debate.

And going by Trump’s official response to today’s ruling, that seems like a strong possibility.

Trump response to Facebook

Despite it all, Trump is still pushing the ‘rigged election’ narrative, which is what sparked the Capitol riot in the first place. Given this, it seems very likely that he could have trouble regaining Facebook access, even in 2023 – and without access to Facebook’s platform to push re-election promotions and campaign material, that would be a big blow to Trump’s chances, if he were to seek re-election in 2024.

So while Trump could return to the platform in two years, it’s not a given that it will happen – in fact, right now, you’d have to think there won’t be much of a chance.

But the key point here is that Facebook has established clearer, more transparent rules around such incidents, and what the maximum penalties will be from now on, which is critically important in ensuring clearer guidance around its official processes.

Furthering this, Facebook has also set down more specific parameters around what politicians can say on the platform, and how its rules will apply to public figures, which has been another point of contention.

Up till now, Facebook has allowed certain ‘newsworthy’ content that might otherwise violate its rules to remain up on its platform, in the interests of public debate and transparency.

Facebook CEO Mark Zuckerberg defended this approach in a 2019 speech at Georgetown University, explaining that:

“I don’t think it’s right for a private company to censor politicians or the news in a democracy. […] We don’t do this to help politicians, but because we think people should be able to see for themselves what politicians are saying.”

But now, Facebook is re-assessing this.


In line with the Oversight Board’s recommendations to establish clearer rules for all users, Facebook will now evaluate all content under the same parameters, even if it’s from a politician or public figure.

“We grant our newsworthiness allowance to a small number of posts on our platform. Moving forward, we will begin publishing the rare instances when we apply it. Finally, when we assess content for newsworthiness, we will not treat content posted by politicians any differently from content posted by anyone else. Instead, we will simply apply our newsworthiness balancing test in the same way to all content, measuring whether the public interest value of the content outweighs the potential risk of harm by leaving it up.”

There is some room for leniency here. As Facebook notes, it will still allow some exemptions under its ‘newsworthy content’ provisions. But the rules will now be much clearer around such exemptions, and all users will essentially face the same penalties and restrictions.

That’s a significant change in approach, but the idea here is that it will provide more transparency over the various assessments and decisions, ensuring that all users understand what’s acceptable, and what’s not, and what the penalties can be in each case.

Will that stop people from abusing the massive reach of Facebook’s platforms to spread divisive messaging, and maximize their own political interests?

No – in fact, if anything, the latest data suggests that more political regimes are now recognizing the potential of Facebook in this regard, and are using the platform for domestic influence campaigns.  

Facebook influence campaigns overview

It seems, in some ways, that the Trump campaign’s reliance on Facebook to expand its reach and messaging has shone a light on this type of usage, which has led to more, smaller-scale efforts to manipulate voters.

Facebook’s new rules will play a part in providing more transparency around such efforts, but the stats indicate that this will be an ongoing concern, with Facebook’s unmatched reach providing a big lure for politically affiliated groups to boost their messaging.


Facebook is also well-aware that these updates won’t address every concern:

“We know today’s decision will be criticized by many people on opposing sides of the political divide – but our job is to make a decision in as proportionate, fair and transparent a way as possible, in keeping with the instruction given to us by the Oversight Board.”

In this respect, these are good changes, which reflect that Facebook’s independent board does indeed have significant influence on the company’s decisions, and will act as a valuable, outside voice in guiding its rules, even in the highest-profile cases.

That addresses a key concern around the Oversight Board – that it would essentially be a ‘toothless tiger’, and that Facebook would simply ignore any rulings it doesn’t like, in order to continue on as it always has.

But thus far, that hasn’t been the case. Facebook has listened to its independent experts, and is working to align with their rulings in almost all respects, providing much-needed input into its policy decision-making.

Because it shouldn’t come down to what Zuckerberg believes. At Facebook’s scale, it needs outside voices in the room.

And Facebook has, once again, reiterated that this should go even further:

“In the absence of frameworks agreed upon by democratically accountable lawmakers, the board’s model of independent and thoughtful deliberation is a strong one that ensures important decisions are made in as transparent and judicious a manner as possible. The Oversight Board is not a replacement for regulation, and we continue to call for thoughtful regulation in this space.”

Yes, social media platforms should be regulated, in relation to what can and cannot be posted, and Facebook itself advocates for this. In this respect, the Trump decision underlines the value of independent oversight, and why broader rulings like this should apply to all platforms, taking such decisions out of the hands of business managers who have a clear vested interest, and putting it under the guidance of elected officials.

That’s a more complex journey, but the process here points to the value of outside perspective.



Meta’s Developing an ‘Ethical Framework’ for the Use of Virtual Influencers




With the rise of digital avatars, and indeed, fully digital characters that have evolved into genuine social media influencers in their own right, online platforms now have an obligation to establish clear markers as to what’s real and what’s not, and how such creations can be used in their apps.

The coming metaverse shift will further complicate this, with the rise of virtual depictions blurring the lines of what will be allowed, in terms of representation. But with many virtual influencers already operating, Meta is now working to establish ethical boundaries on their application.

As explained by Meta:

“From synthesized versions of real people to wholly invented “virtual influencers” (VIs), synthetic media is a rising phenomenon. Meta platforms are home to more than 200 VIs, with 30 verified VI accounts hosted on Instagram. These VIs boast huge follower counts, collaborate with some of the world’s biggest brands, fundraise for organizations like the WHO, and champion social causes like Black Lives Matter.”

Some of the more well-known examples on this front are Shudu, who has more than 200k followers on Instagram, and Lil’ Miquela, who has an audience of over 3 million in the app.

At first glance, you wouldn’t necessarily realize that these are not actual people, which makes such characters a great vehicle for brand and product promotions, as they can be utilized 24/7, and can be placed into any environment. But that also leads to concerns about body image perception, deepfakes, and other forms of misuse through false or unclear representation.


Deepfakes, in particular, may be problematic, with Meta citing this campaign, featuring English football star David Beckham, as an example of how new technologies are evolving to expand the use of language, as one element, for varying purposes.

The well-known ‘DeepTomCruise’ account on TikTok is another example of just how far these technologies have come, and it’s not hard to imagine a scenario where they could be used to, say, show a politician saying or doing something that he or she actually didn’t, which could have significant real world impacts.

Which is why Meta is working with developers and experts to establish clearer boundaries on such use – because while there is potential for harm, there are also beneficial uses for such depictions.

Imagine personalized video messages that address individual followers by name. Or celebrity brand ambassadors appearing as salespeople at local car dealerships. A famous athlete would make a great tutor for a kid who loves sports but hates algebra.

Such use cases will increasingly become the norm as VR and AR technologies are developed, with these platforms placing digital characters front and center, and establishing new norms for digital connection.

It would be better to know what’s real and what’s not, and as such, Meta needs clear regulations to remove dishonest depictions, and enforce transparency over VI use.

But then again, much of what you see on Instagram these days is not real, with filters and editing tools altering people’s appearance well beyond what’s normal, or realistic. That can also have damaging consequences, and while Meta’s looking to implement rules on VI use, there’s arguably a case for similar transparency in editing tools applied to posted videos and images as well.


That’s a more complex element, particularly as such tools also enable people to feel more comfortable in posting, which no doubt increases their in-app activity. Would Meta be willing to put more focus on this element if it could risk impacting user engagement? The data on the impact of Instagram on people’s mental health are pretty clear, with comparison being a key concern.

Should that also come under the same umbrella of increased digital transparency?

It’s seemingly not included in the initial framework as yet, but at some stage, this is another element that should be examined, especially given the harmful effects that social media usage can have on young women.

But however you look at it, this is no doubt a rising element of concern, and it’s important for Meta to build guardrails and rules around the use of virtual influencers in its apps.

You can read more about Meta’s approach to virtual influencers here.




Meta Publishes New Guide to the Various Security and Control Options in its Apps




Meta has published a new set of safety tips for journalists to help them protect themselves in the evolving online connection space. For the most part, the tips also apply to all users more broadly, providing a comprehensive overview of the various tools and processes that Meta has in place to help people avoid unwanted attention online.

The 32-page guide is available in 21 different languages, and provides detailed overviews of Meta’s systems and profile options for protection and security, with specific sections covering Facebook, Instagram and WhatsApp.

The guide begins with the basics, including password protections and enabling two-factor authentication.

It also outlines tips for Page managers in securing their business profiles, while there are also notes on what to do if you’ve been hacked, advice for protection on Messenger and guidance on bullying and harassment.

Meta security guide

For Instagram, there are also general security tips, along with notes on its comment moderation tools.


While for WhatsApp, there are explainers on how to delete messages, how to remove messages from group chats, and details on platform-specific data options.


There are also links to various additional resource guides and tools for more context, providing in-depth breakdowns of when and how to action the various options.

It’s a handy guide, and while there are some journalist-specific elements included, most of the tips do apply to any user, so it could well be a valuable resource for anyone looking to get a better handle on their various privacy tools and options.

Definitely worth knowing either way – you can download the full guide here.





Twitter bans account linked to Iran leader over video threatening Trump




Iran’s supreme leader Ayatollah Ali Khamenei meets with relatives of slain commander Qasem Soleimani ahead of the second anniversary of his death in a US drone strike in Iraq – Copyright POOL/AFP/File Tom Brenner

Twitter said Saturday it had permanently suspended an account linked to Iran’s supreme leader that posted a video calling for revenge against former US president Donald Trump over the assassination of a top general.

“The account referenced has been permanently suspended for violating our ban evasion policy,” a Twitter spokesperson told AFP.

The account, @KhameneiSite, this week posted an animated video showing an unmanned aircraft targeting Trump, who ordered a drone strike in Baghdad two years ago that killed top Iranian commander General Qassem Soleimani.

Supreme leader Ayatollah Ali Khamenei’s main accounts in various languages remain active. Last year, another similar account was suspended by Twitter over a post also appearing to reference revenge against Trump.

The recent video, titled “Revenge is Definite”, was also posted on Khamenei’s official website.

According to Twitter, the company’s top priority is keeping people safe and protecting the health of the conversation on the platform.

The social media giant says it has clear policies around abusive behavior and will take action when violations are identified.

As head of the Quds Force, the foreign operations arm of Iran’s Revolutionary Guards, Soleimani was the architect of its strategy in the Middle East.

He and his Iraqi lieutenant were killed by a US drone strike outside Baghdad airport on January 3, 2020.

Khamenei has repeatedly promised to avenge his death.

On January 3, the second anniversary of the strike, the supreme leader and ultraconservative President Ebrahim Raisi once again threatened the US with revenge.


Trump’s supporters regularly denounce the banning of the Republican billionaire from Twitter, underscoring that accounts of several leaders considered authoritarian by the United States are allowed to post on the platform.


