Facebook Outlines New Measures to Protect the Integrity of the US Presidential Election

The US Presidential Election campaigns are gaining momentum, and the expectation is that this will be one of the most divisive and volatile political battles in the nation’s history.

And already, there have been accusations of questionable tactics, and concerns around the use of misinformation to gain advantage. Questions have been raised about the voting process itself, the use of image editing and ‘deepfakes’, and foreign interference. And this is before we’ve really reached the main campaign period – over the next two months, you can expect much, much more on this front, as the contenders seek to get an edge in the race.

Facebook knows that it’ll be caught in the middle of this, just as it was in 2016, and along with the various new measures that it’s implemented to better detect political misuse and protect voters, this week, CEO Mark Zuckerberg announced some additional steps that the company is taking in order to uphold the integrity of the 2020 US Presidential Election.

Here’s what’s been announced:

Voter Information Push

Facebook says that it will present authoritative information on voting at the top of Facebook and Instagram “almost every day until the election”, via its Voting Information Center.

Facebook Voting Info

The informational prompts are part of Facebook’s effort to get more people to the polls, with a goal of encouraging four million more Americans to vote.

Hopefully, through these prompts, Facebook will be able to counter voting misinformation, and encourage more people to have their say on the nation’s leadership.

The informational prompts will include video tutorials on how to vote, and updates on deadlines for registering and voting in your state.

Blocking New Political Ads in the Lead-Up to the Poll

After weighing a political advertising blackout period in the days leading into the vote, Facebook has now decided to only block new political ads during the final week of the campaign.

As explained by Zuckerberg:

“It’s important that campaigns can run get out the vote campaigns, and I generally believe the best antidote to bad speech is more speech, but in the final days of an election there may not be enough time to contest new claims. So in the week before the election, we won’t accept new political or issue ads.”

That will mean that existing ads can still run, while the respective campaigns will also be able to adjust the targeting and budget for their previously launched promotions. But new ads will not be authorized in that final week.

Many have criticized the decision, including the Trump campaign, which says that President Trump will essentially be “silenced by the Silicon Valley Mafia” in the crucial, final stretch of the campaign.

Which is not true – Trump, and indeed any other candidate, will still be able to post to their Facebook Page in that last week. They just won’t be able to boost those posts or push new ads, though the option to adjust previously existing campaigns will still afford them the capacity to amplify their messaging via paid means.


Some have suggested that the ruling is too soft, and that Facebook should implement a full blackout period to stop voter manipulation, while others have noted that the expected uptick in early and mail-in voting this year will render the measure useless either way.

But there is some solid logic to the measure.

Last year, in the Australian Federal Election, the Liberal Coalition won the vote, despite most pundits tipping the Labor Party to win based on the campaign. One of the key reasons the Labor Party is believed to have lost, despite seemingly leading the race, was a late push from the Liberal Party suggesting that Labor would increase taxes – and specifically, that Labor would introduce a ‘death tax’ that would see people forced to pay up to 30% tax on any inheritance they might receive from deceased friends or relatives.

Labor Death Tax post

Which wasn’t true – the Labor Party had repeatedly denied that it was even considering such a measure, and re-stated, repeatedly, that this was not the case.

But in the final days of the campaign, the Liberal Coalition ramped up its rhetoric. And based on Google search activity, that had a major impact.

Death tax searches

As you can see here, the election was held on May 18th, and searches for ‘death tax’ and ‘inheritance tax’ in Australia significantly ramped up in that last week.

The Coalition clearly saw this as a key area of concern for voters, and worked to amplify that concern in the final lead-up to the vote. Given this, it could be argued that, with more time, the Labor Party may have been able to counter the claims more effectively.

As such, stopping the amplification of such messaging in that last week could actually be critically important – so while it’s not a full ban, as some had hoped for, and Facebook still won’t fact check political ads, it may be a more important measure than many are anticipating.

Only time, of course, will tell.

Removing Election Misinformation

Facebook will also expand its efforts to remove misinformation about voting.

“We already committed to partnering with state election authorities to identify and remove false claims about polling conditions in the last 72 hours of the campaign, but given that this election will include large amounts of early voting, we’re extending that period to begin now and continue through the election until we have a clear result.”

The act of voting itself will be a key element of focus, with US President Donald Trump repeatedly criticizing the voting process, and the variations being made to accommodate voters amid COVID-19. 

Just this week, Trump suggested that voters test the integrity of the system by seeking to vote twice, which is illegal in every US state. 

With doubts like this being cast over the process, Facebook is looking to get ahead of any such activity, and take more action to remove voting misinformation from its platform.


Limiting Message Forwarding

Facebook has also announced that it will implement a new limit on message forwarding in Messenger in order to restrict the spread of viral misinformation via message.

As per Facebook:

“We’re introducing a forwarding limit on Messenger, so messages can only be forwarded to five people or groups at a time. Limiting forwarding is an effective way to slow the spread of viral misinformation and harmful content that has the potential to cause real world harm.”

Facebook implemented the same measure in WhatsApp back in April, in order to stem the flow of COVID-19 misinformation, which, Facebook says, led to a 70% reduction in the number of highly forwarded messages sent in the app.
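In rough terms, a cap like this simply rejects any single forward action that targets more than five people or groups. The sketch below is a hypothetical illustration of that logic – the function and limit names are invented, not Messenger’s actual implementation.

```python
# Hypothetical sketch of a per-action forwarding cap, modeled on the
# limit described above (at most 5 recipients per forward action).
# Names here are illustrative only.

FORWARD_LIMIT = 5  # max people or groups per single forward action


def forward_message(message_id: str, recipients: list[str]) -> list[str]:
    """Forward a message, rejecting the request if it exceeds the cap."""
    if len(recipients) > FORWARD_LIMIT:
        raise ValueError(
            f"Cannot forward to {len(recipients)} recipients; "
            f"limit is {FORWARD_LIMIT} per action."
        )
    # In a real system this would enqueue deliveries; here we just
    # return the accepted recipient list to show the allowed fan-out.
    return recipients


# A forward to 5 recipients is accepted; a 6th would raise ValueError.
accepted = forward_message("m1", ["a", "b", "c", "d", "e"])
```

Note that a per-action cap doesn’t stop repeated forwarding outright – it just adds friction at each hop, which is how it slows viral spread.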

With Facebook implementing more measures to restrict the flow of misinformation in its main app, many activists and campaigners have turned to messaging to continue their efforts, and this proactive step by Facebook could be a significant measure to restrict any such push.

Cracking Down on Voting Misrepresentation

Facebook is also expanding its enforcement efforts against voting misinformation in posts.

“We already remove explicit misrepresentations about how or when to vote that could cause someone to lose their opportunity to vote – for example, saying things like “you can send in your mail ballot up to 3 days after election day”, which is obviously not true. (In most states, mail-in ballots have to be *received* by election day, not just mailed, in order to be counted.) We’re now expanding this policy to include implicit misrepresentations about voting too, like “I hear anybody with a driver’s license gets a ballot this year”, because it might mislead you about what you need to do to get a ballot, even if that wouldn’t necessarily invalidate your vote by itself.”

The expanded crackdown will help to dispel falsehoods about the voting process.

In addition, Facebook is also implementing new rules against using threats related to COVID-19 to discourage voting.

“We’ll remove posts with claims that people will get COVID-19 if they take part in voting. We’ll attach a link to authoritative information about COVID-19 to posts that might use the virus to discourage voting, and we’re not going to allow this kind of content in ads.”

Already, claims about protest activity and COVID-19 have been used to discourage people from voting in some areas. 

Policing Premature Claims About the Election Outcome

Finally, another key area of concern, which Facebook has already flagged, is the possibility of civil unrest as a result of the final outcome of the vote. 

Last month, The New York Times reported that Facebook has been exploring measures it might take in case President Trump decides not to accept the results of the 2020 election.

Trump, who has repeatedly criticized the integrity of the voting process, has thus far avoided questions about whether he will accept the final result – and now, Facebook has announced a range of additional measures that it will take to counter any effort to claim victory, or question the result, in the wake of the poll.


First, Facebook says that it will partner with Reuters and the National Election Pool to provide authoritative information about election results.

“We’ll show this in the Voting Information Center so it’s easily accessible, and we’ll notify people proactively as results become available. Importantly, if any candidate or campaign tries to declare victory before the results are in, we’ll add a label to their post educating that official results are not yet in and directing people to the official results.”

Facebook will also add an “informational label” to any post which seeks to delegitimize the outcome of the election or discuss the legitimacy of voting methods – which, Facebook says, will include any posts by the President.

Facebook will also increase its monitoring and enforcement efforts for groups like QAnon, which some are concerned may seek to organize violence or civil unrest in the period after the election. Facebook removed thousands of groups and Pages associated with QAnon specifically last month.

These are some significant measures, and while Facebook still, as noted, won’t be fact checking political ads, the measures introduced here could go a long way in combatting efforts to manipulate voters during the campaign.

It’s difficult to know how effective the measures will be, and unfortunately, we won’t have any definitive insight until after the election, but within the parameters of Facebook’s approach to political content, these are important steps, which could have a major impact.

With respect to how effective they are, Facebook is also conducting a large-scale analysis of its impact on the political process, which will involve gaining permission from users to analyze their activity throughout the campaign period.

And this week, reports have emerged that as part of this effort, Facebook may actually pay some users not to use their Facebook and Instagram accounts.

That likely relates to a control group – if Facebook wants to measure the full impacts of its posts and updates on voting behavior, it needs to have a comparison. By having a group of users not use Facebook or Instagram, then getting insight into how they voted and engaged with political content without these platforms, it will help the researchers establish a better baseline of what impact Facebook actually has.
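The logic of that comparison can be illustrated with a toy difference-in-means calculation – the numbers and variable names below are entirely invented for illustration, not Facebook’s actual study design or data.

```python
# Toy illustration of why a control group matters: compare an outcome
# (e.g. a surveyed turnout indicator) between users who kept using the
# platforms ("treatment") and paid abstainers ("control").
# All data here is made up for illustration.

def mean(xs: list[int]) -> float:
    return sum(xs) / len(xs)


# 1 = voted, 0 = did not vote (hypothetical survey responses)
treatment = [1, 1, 0, 1, 1, 0, 1, 1]  # kept using Facebook/Instagram
control = [1, 0, 0, 1, 0, 1, 0, 1]    # deactivated accounts for the study

# The naive estimate of the platforms' effect on turnout is the
# difference in group means; the control group supplies the baseline
# that a treatment-only measurement would lack.
effect = mean(treatment) - mean(control)
print(f"estimated effect on turnout: {effect:+.3f}")
```

Without the abstaining group there is no baseline, and any observed turnout rate among active users can’t be attributed to the platform at all.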

There’s a lot going on, and with Facebook set to come under intense scrutiny, it’s working to do all it can to protect users from manipulation.

Will it work? Should Facebook do more? We’ll soon find out, as the campaign is about to kick into overdrive.

Socialmediatoday.com


Meta’s Developing an ‘Ethical Framework’ for the Use of Virtual Influencers

With the rise of digital avatars, and indeed, fully digital characters that have evolved into genuine social media influencers in their own right, online platforms now have an obligation to establish clear markers as to what’s real and what’s not, and how such creations can be used in their apps.

The coming metaverse shift will further complicate this, with the rise of virtual depictions blurring the lines of what will be allowed, in terms of representation. But with many virtual influencers already operating, Meta is now working to establish ethical boundaries on their application.

As explained by Meta:

“From synthesized versions of real people to wholly invented “virtual influencers” (VIs), synthetic media is a rising phenomenon. Meta platforms are home to more than 200 VIs, with 30 verified VI accounts hosted on Instagram. These VIs boast huge follower counts, collaborate with some of the world’s biggest brands, fundraise for organizations like the WHO, and champion social causes like Black Lives Matter.”

Some of the more well-known examples on this front are Shudu, who has more than 200k followers on Instagram, and Lil’ Miquela, who has an audience of over 3 million in the app.

At first glance, you wouldn’t necessarily realize that this is not an actual person, which makes such characters a great vehicle for brand and product promotions, as they can be utilized 24/7, and can be placed into any environment. But that also leads to concerns about body image perception, deepfakes, and other forms of misuse through false or unclear representation.


Deepfakes, in particular, may be problematic, with Meta citing this campaign, with English football star David Beckham, as an example of how new technologies are evolving to expand the use of language, as one element, for varying purpose.

The well-known ‘DeepTomCruise’ account on TikTok is another example of just how far these technologies have come, and it’s not hard to imagine a scenario where they could be used to, say, show a politician saying or doing something that he or she actually didn’t, which could have significant real world impacts.

Which is why Meta is working with developers and experts to establish clearer boundaries on such use – because while there is potential for harm, there are also beneficial uses for such depictions.

Imagine personalized video messages that address individual followers by name. Or celebrity brand ambassadors appearing as salespeople at local car dealerships. A famous athlete would make a great tutor for a kid who loves sports but hates algebra.

Such use cases will increasingly become the norm as VR and AR technologies are developed, with these platforms placing digital characters front and center, and establishing new norms for digital connection.

It would be better to know what’s real and what’s not, and as such, Meta needs clear regulations to remove dishonest depictions, and enforce transparency over VI use.

But then again, much of what you see on Instagram these days is not real, with filters and editing tools altering people’s appearance well beyond what’s normal, or realistic. That can also have damaging consequences, and while Meta’s looking to implement rules on VI use, there’s arguably a case for similar transparency in editing tools applied to posted videos and images as well.


That’s a more complex element, particularly as such tools also enable people to feel more comfortable in posting, which no doubt increases their in-app activity. Would Meta be willing to put more focus on this element if it could risk impacting user engagement? The data on the impact of Instagram on people’s mental health are pretty clear, with comparison being a key concern.

Should that also come under the same umbrella of increased digital transparency?

It’s seemingly not included in the initial framework as yet, but at some stage, this is another element that should be examined, especially given the harmful effects that social media usage can have on young women.

But however you look at it, this is no doubt a rising element of concern, and it’s important for Meta to build guardrails and rules around the use of virtual influencers in its apps.

You can read more about Meta’s approach to virtual influencers here.






Meta Publishes New Guide to the Various Security and Control Options in its Apps



Meta has published a new set of safety tips for journalists to help them protect themselves in the evolving online connection space. For the most part, the tips also apply to all users more broadly, providing a comprehensive overview of the various tools and processes that Meta has in place to help people avoid unwanted attention online.

The 32-page guide is available in 21 different languages, and provides detailed overviews of Meta’s systems and profile options for protection and security, with specific sections covering Facebook, Instagram and WhatsApp.

The guide begins with the basics, including password protections and enabling two-factor authentication.

It also outlines tips for Page managers in securing their business profiles, while there are also notes on what to do if you’ve been hacked, advice for protection on Messenger and guidance on bullying and harassment.

Meta security guide

For Instagram, there are also general security tips, along with notes on its comment moderation tools.

Meta security guide

While for WhatsApp, there are explainers on how to delete messages, how to remove messages from group chats, and details on platform-specific data options.

Meta security guide

There are also links to various additional resource guides and tools for more context, providing in-depth breakdowns of when and how to action the various options.

It’s a handy guide, and while there are some journalist-specific elements included, most of the tips do apply to any user, so it could well be a valuable resource for anyone looking to get a better handle on their various privacy tools and options.

Definitely worth knowing either way – you can download the full guide here.





Twitter bans account linked to Iran leader over video threatening Trump



Iran’s supreme leader Ayatollah Ali Khamenei meets with relatives of slain commander Qasem Soleimani ahead of the second anniversary of his death in a US drone strike in Iraq – Copyright POOL/AFP/File Tom Brenner

Twitter said Saturday it had permanently suspended an account linked to Iran’s supreme leader, after it posted a video calling for revenge against former US president Donald Trump over a top general’s assassination.

“The account referenced has been permanently suspended for violating our ban evasion policy,” a Twitter spokesperson told AFP.

The account, @KhameneiSite, this week posted an animated video showing an unmanned aircraft targeting Trump, who ordered a drone strike in Baghdad two years ago that killed top Iranian commander General Qassem Soleimani.

Supreme leader Ayatollah Ali Khamenei’s main accounts in various languages remain active. Last year, another similar account was suspended by Twitter over a post also appearing to reference revenge against Trump.

The recent video, titled “Revenge is Definite”, was also posted on Khamenei’s official website.

According to Twitter, the company’s top priority is keeping people safe and protecting the health of the conversation on the platform.

The social media giant says it has clear policies around abusive behavior and will take action when violations are identified.

As head of the Quds Force, the foreign operations arm of Iran’s Revolutionary Guards, Soleimani was the architect of its strategy in the Middle East.

He and his Iraqi lieutenant were killed by a US drone strike outside Baghdad airport on January 3, 2020.

Khamenei has repeatedly promised to avenge his death.

On January 3, the second anniversary of the strike, the supreme leader and ultraconservative President Ebrahim Raisi once again threatened the US with revenge.


Trump’s supporters regularly denounce the banning of the Republican billionaire from Twitter, underscoring that accounts of several leaders considered authoritarian by the United States are allowed to post on the platform.



