SOCIAL

Facebook Outlines New Measures to Protect the Integrity of the US Presidential Election


The US Presidential Election campaigns are gaining momentum, and the expectation is that it will be one of the most divisive and volatile political battles in the nation’s history.

And already, there have been accusations of questionable tactics, and concerns around the use of misinformation to gain advantage. Questions have been raised about the voting process itself, the use of image editing and ‘deepfakes’, and foreign interference. And this is before we’ve really reached the main campaign period – over the next two months, you can expect much, much more on this front, as the contenders seek to get an edge in the race.

Facebook knows that it’ll be caught in the middle of this, just as it was in 2016. Along with the various new measures that it’s implemented to better detect political misuse and protect voters, this week CEO Mark Zuckerberg announced some additional steps that the platform is taking to uphold the integrity of the 2020 US Presidential Election.

Here’s what’s been announced:

Voter Information Push

Facebook says that it will present authoritative information on voting at the top of Facebook and Instagram “almost every day until the election”, via its Voting Information Center.

The informational prompts are part of Facebook’s effort to get more people to the polls, with a goal of encouraging four million more Americans to vote.

Hopefully, through these prompts, Facebook will be able to counter voting misinformation, and encourage more people to have their say on the nation’s leadership.

The informational prompts will include video tutorials on how to vote, and updates on deadlines for registering and voting in your state.

Blocking New Political Ads in the Lead-Up to the Poll

After weighing a political advertising blackout period in the days leading into the vote, Facebook has now decided to only block new political ads during the final week of the campaign.

As explained by Zuckerberg:

“It’s important that campaigns can run get out the vote campaigns, and I generally believe the best antidote to bad speech is more speech, but in the final days of an election there may not be enough time to contest new claims. So in the week before the election, we won’t accept new political or issue ads.”

That will mean that existing ads can still run, while the respective campaigns will also be able to adjust the targeting and budget for their previously launched promotions. But new ads will not be authorized in that final week.

Many have criticized the decision, including the Trump campaign, which says that President Trump will essentially be “silenced by the Silicon Valley Mafia” in the crucial, final stretch of the campaign.

Which is not true – Trump, and indeed any other candidate, will still be able to post to their Facebook Page in that last week. They just won’t be able to boost those posts or launch new ads, while the option to adjust previously launched campaigns will still give them the capacity to amplify their messaging via paid means.

Some have suggested that the decision is too soft, and that Facebook should implement a full blackout period to stop voter manipulation, while others have noted that the expected uptick in early and mail-in voting this year will render the measure largely useless either way.

But there is some solid logic to the measure.

Last year, in the Australian Federal Election, the Liberal Coalition won the vote, despite most pundits tipping the Labor Party to win, based on the campaign. One of the key reasons the Labor Party is believed to have lost the final vote, despite seemingly leading the race, was a late push from the Liberal Party which suggested that Labor would increase taxes – and specifically, that Labor would introduce a ‘death tax’ that would see people forced to pay up to 30% tax on any inheritance they may receive from deceased friends or relatives.

Which wasn’t true – the Labor Party had repeatedly and explicitly denied that it was even considering such a measure.

But in the final days of the campaign, the Liberal Coalition ramped up its rhetoric. And based on Google search activity, that had a major impact.

The election was held on May 18th, and Google searches for ‘death tax’ and ‘inheritance tax’ in Australia ramped up significantly in that last week.

The Coalition clearly saw this as a key area of concern for voters, and worked to amplify that message in the final lead-up to the vote. Given this, it could be argued that, with more time, the Labor Party may have been able to counter the concern more effectively.

As such, stopping the amplification of such messaging in that last week could actually be critically important – so while it’s not a full ban, as some had hoped for, and Facebook still won’t fact check political ads, it may be a more important measure than many are anticipating.

Only time, of course, will tell.

Removing Election Misinformation

Facebook will also expand its efforts to remove misinformation about voting.

“We already committed to partnering with state election authorities to identify and remove false claims about polling conditions in the last 72 hours of the campaign, but given that this election will include large amounts of early voting, we’re extending that period to begin now and continue through the election until we have a clear result.”

The act of voting itself will be a key element of focus, with US President Donald Trump repeatedly criticizing the voting process, and the variations being made to accommodate voters amid COVID-19. 

Just this week, Trump suggested that voters test the integrity of the system by seeking to vote twice, which is illegal in every US state. 

With doubts like this being cast over the process, Facebook is looking to get ahead of any such activity, and take more action to remove voting misinformation from its platform.

Limiting Message Forwarding

Facebook has also announced that it will implement a new limit on message forwarding in Messenger in order to restrict the spread of viral misinformation via message.

As per Facebook:

“We’re introducing a forwarding limit on Messenger, so messages can only be forwarded to five people or groups at a time. Limiting forwarding is an effective way to slow the spread of viral misinformation and harmful content that has the potential to cause real world harm.”

Facebook implemented the same limit in WhatsApp back in April to stem the flow of COVID-19 misinformation, and the company says that led to a 70% reduction in the number of highly forwarded messages sent in the app.

With Facebook implementing more measures to restrict the flow of misinformation in its main app, many activists and campaigners have turned to messaging to continue their efforts, and this proactive step by Facebook could be a significant measure to restrict any such push.

Cracking Down on Voting Misrepresentation

Facebook is also expanding its enforcement efforts against voting misinformation in posts.

“We already remove explicit misrepresentations about how or when to vote that could cause someone to lose their opportunity to vote – for example, saying things like “you can send in your mail ballot up to 3 days after election day”, which is obviously not true. (In most states, mail-in ballots have to be *received* by election day, not just mailed, in order to be counted.) We’re now expanding this policy to include implicit misrepresentations about voting too, like “I hear anybody with a driver’s license gets a ballot this year”, because it might mislead you about what you need to do to get a ballot, even if that wouldn’t necessarily invalidate your vote by itself.”

The expanded crackdown will help to dispel falsehoods about the voting process.

In addition, Facebook is also implementing new rules against using threats related to COVID-19 to discourage voting.

“We’ll remove posts with claims that people will get COVID-19 if they take part in voting. We’ll attach a link to authoritative information about COVID-19 to posts that might use the virus to discourage voting, and we’re not going to allow this kind of content in ads.”

Already, claims about protest activity and COVID-19 have been used to discourage people from voting in some areas. 

Policing Premature Claims About the Election Outcome

Finally, another key area of concern, which Facebook has already flagged, is the possibility of civil unrest as a result of the final outcome of the vote. 

Last month, The New York Times reported that Facebook has been exploring measures it might take in case President Trump decides not to accept the results of the 2020 election.

Trump, who has repeatedly criticized the integrity of the voting process, has thus far avoided questions about whether he will accept the final result – and now, Facebook has announced a range of additional measures that it will take to counter any effort to claim victory, or question the result, in the wake of the poll.

First, Facebook says that it will partner with Reuters and the National Election Pool to provide authoritative information about election results.

“We’ll show this in the Voting Information Center so it’s easily accessible, and we’ll notify people proactively as results become available. Importantly, if any candidate or campaign tries to declare victory before the results are in, we’ll add a label to their post educating that official results are not yet in and directing people to the official results.”

Facebook will also add an “informational label” to any post which seeks to delegitimize the outcome of the election or discuss the legitimacy of voting methods – which, Facebook says, will include any posts by the President.

Facebook will also increase its monitoring and enforcement efforts for groups like QAnon, which some are concerned may seek to organize violence or civil unrest in the period after the election. Facebook removed thousands of groups and Pages associated with QAnon specifically last month.

These are some significant measures, and while Facebook still, as noted, won’t be fact checking political ads, the measures introduced here could go a long way in combatting efforts to manipulate voters during the campaign.

It’s difficult to know how effective the measures will be, and unfortunately, we won’t have any definitive insight until after the election, but within the parameters of Facebook’s approach to political content, these are important steps, which could have a major impact.

As to how effective they are, Facebook is also conducting a large-scale analysis of its impact on the political process, which will involve gaining permission from users to analyze their activity throughout the campaign period.

And this week, reports have emerged that as part of this effort, Facebook may actually pay some users not to use their Facebook and Instagram accounts.

That likely relates to a control group – if Facebook wants to measure the full impacts of its posts and updates on voting behavior, it needs to have a comparison. By having a group of users not use Facebook or Instagram, then getting insight into how they voted and engaged with political content without these platforms, it will help the researchers establish a better baseline of what impact Facebook actually has.

There’s a lot going on, and with Facebook set to come under intense scrutiny, it’s working to do all it can to protect users from manipulation.

Will it work? Should Facebook do more? We’ll soon find out, as the campaign is about to kick into overdrive.

Socialmediatoday.com



Paris mayor to stop using ‘global sewer’ X

Hidalgo called Twitter a ‘vast global sewer’ – Copyright POOL/AFP Leon Neal

Paris Mayor Anne Hidalgo said on Monday she was quitting Elon Musk’s social media platform X, formerly known as Twitter, which she described as a “global sewer” and a tool to disrupt democracy.

“I’ve made the decision to leave X,” Hidalgo said in an op-ed in French newspaper Le Monde. “X has in recent years become a weapon of mass destruction of our democracies”, she wrote.

The 64-year-old Socialist, who unsuccessfully stood for the presidency in 2022, joined Twitter as it was then known in 2009 and has been a frequent user of the platform.

She accused X of promoting “misinformation”, “anti-Semitism and racism.”

“The list of abuses is endless”, she added. “This media has become a vast global sewer.”

Since Musk took over Twitter in 2022, a number of high-profile figures said they were leaving the popular social platform, but there has been no mass exodus.

Several politicians including EU industry chief Thierry Breton have announced that they are opening accounts on competing networks in addition to maintaining their presence on X.

The City of Paris account will remain on X, the mayor’s office told AFP.

By contrast, some organisations have taken the plunge, including the US public radio network NPR and the German anti-discrimination agency.

Hidalgo has regularly faced personal attacks on social media including Twitter, as well as sometimes criticism over the lack of cleanliness and security in Paris.

In the latest furore, she has faced stinging attacks over an October trip to the French Pacific territories of New Caledonia and French Polynesia that was not publicised at the time and that she extended with a two-week personal vacation.




Meta Highlights Key Platform Manipulation Trends in Latest ‘Adversarial Threat Report’


While talk of a possible U.S. ban of TikTok has been tempered of late, concerns still linger around the app, and the way that it could theoretically be used by the Chinese Government to implement varying forms of data tracking and messaging manipulation in Western regions.

The latter was highlighted again this week, when Meta released its latest “Adversarial Threat Report,” which includes an overview of Meta’s latest detections, as well as a broader summary of its efforts throughout the year.

And while the data shows that Russia and Iran remain the most common source regions for coordinated manipulation programs, China is third on that list, with Meta shutting down almost 5,000 Facebook profiles linked to a Chinese-based manipulation program in Q3 alone.

As explained by Meta:

“We removed 4,789 Facebook accounts for violating our policy against coordinated inauthentic behavior. This network originated in China and targeted the United States. The individuals behind this activity used basic fake accounts with profile pictures and names copied from elsewhere on the internet to post and befriend people from around the world. They posed as Americans to post the same content across different platforms. Some of these accounts used the same name and profile picture on Facebook and X (formerly Twitter). We removed this network before it was able to gain engagement from authentic communities on our apps.”

Meta says that this group aimed to sway discussion around both U.S. and China policy by both sharing news stories, and engaging with posts related to specific issues.

“They also posted links to news articles from mainstream US media and reshared Facebook posts by real people, likely in an attempt to appear more authentic. Some of the reshared content was political, while other covered topics like gaming, history, fashion models, and pets. Unusually, in mid-2023 a small portion of this network’s accounts changed names and profile pictures from posing as Americans to posing as being based in India when they suddenly began liking and commenting on posts by another China-origin network focused on India and Tibet.”

Meta further notes that it took down more Coordinated Inauthentic Behavior (CIB) groups from China than any other region in 2023, reflecting the rising trend of Chinese operators looking to infiltrate Western networks.  

“The latest operations typically posted content related to China’s interests in different regions worldwide. For example, many of them praised China, some of them defended its record on human rights in Tibet and Xinjiang, others attacked critics of the Chinese government around the world, and posted about China’s strategic rivalry with the U.S. in Africa and Central Asia.”

Google, too, has repeatedly removed large clusters of YouTube accounts of Chinese origin that had been seeking to build audiences in the app, in order to then seed pro-China sentiment.

The largest coordinated group identified by Google is an operation known as “Dragonbridge”, which has long been the biggest originator of manipulative efforts across its apps.

Google reported removing more than 50,000 instances of Dragonbridge activity across YouTube, Blogger and AdSense in 2022 alone, underlining the persistent efforts of Chinese groups to sway Western audiences.

So these groups, whether they’re associated with the CCP or not, are already looking to infiltrate Western-based networks. Which underlines the potential threat of TikTok in the same respect, given that it’s controlled by a Chinese owner, and therefore likely more directly accessible to these operators.

That’s partly why TikTok is already banned on government-owned devices in most regions, and why cybersecurity experts continue to sound the alarm about the app. If the above figures reflect the level of activity that non-Chinese platforms are already seeing, you can only imagine that, as TikTok’s influence grows, it too will be high on the list of distribution channels for the same material.

And we don’t have the same level of transparency into TikTok’s enforcement efforts, nor do we have a clear understanding of parent company ByteDance’s links to the CCP.

Which is why the threat of a possible TikTok ban remains, and will linger for some time yet, and could still spill over if there’s a shift in U.S./China relations.

One other point of note from Meta’s Adversarial Threat Report is its summary of AI usage for such activity, and how it’s changing over time.

X owner Elon Musk has repeatedly pointed to the rise of generative AI as a key vector for increased bot activity, because spammers will be able to create more complex, harder-to-detect bot accounts with such tools. That’s why X is pushing towards payment models as a means to counter the mass production of bot profiles.

And while Meta does agree that AI tools will enable threat actors to create larger volumes of convincing content, it also says that it hasn’t seen evidence “that it will upend our industry’s efforts to counter covert influence operations” at this stage.

Meta also makes this interesting point:

“For sophisticated threat actors, content generation hasn’t been a primary challenge. They rather struggle with building and engaging authentic audiences they seek to influence. This is why we have focused on identifying adversarial behaviors and tactics used to drive engagement among real people. Disrupting these behaviors early helps to ensure that misleading AI content does not play a role in covert influence operations. Generative AI is also unlikely to change this dynamic.”

So it’s not just content that they need, but interesting, engaging material, and because generative AI is based on everything that’s come before, it’s not necessarily built to establish new trends, which would then help these bot accounts build an audience.

These are some interesting notes on the current threat landscape, and how coordinated groups are still looking to use digital platforms to spread their messaging. Which will likely never stop, but it is worth noting where these groups originate from, and what that means for related discussion.

You can read Meta’s Q3 “Adversarial Threat Report” here.





US judge halts pending TikTok ban in Montana

TikTok use has continued to grow apace despite a growing number of countries banning the app from government devices. — © POOL/AFP Liam McBurney

A federal judge on Thursday temporarily blocked a ban on TikTok set to come into effect next year in Montana, saying the popular video sharing app was likely to win its pending legal challenge.

US District Court Judge Donald Molloy placed an injunction on the ban until the case, originally filed by TikTok in May, has been decided on its merits.

Molloy deemed it likely TikTok and its users will win, since it appeared the Montana law not only violates free speech rights but runs counter to the fact that foreign policy matters are the exclusive domain of the federal government.

“The current record leaves little doubt that Montana’s legislature and attorney general were more interested in targeting China’s ostensible role in TikTok than they were with protecting Montana consumers,” Molloy said in the ruling.

The app is owned by Chinese firm ByteDance and has been accused by a wide swathe of US politicians of being under Beijing’s tutelage, something the company furiously denies.

Montana’s law says the TikTok ban will become void if the app is acquired by a company incorporated in a country not designated by the United States as a foreign adversary.

TikTok had argued that the unprecedented ban violates the constitutionally protected right to free speech.

The prohibition signed into law by Republican Governor Greg Gianforte is seen as a legal test for a national ban of the Chinese-owned platform, something lawmakers in Washington are increasingly calling for.

Montana’s ban would be the first to come into effect in the United States – Copyright AFP Kirill KUDRYAVTSEV

The ban would make it a violation each time “a user accesses TikTok, is offered the ability to access TikTok, or is offered the ability to download TikTok.”

Each violation is punishable by a $10,000 fine every day it takes place.

Under the law, Apple and Google will have to remove TikTok from their app stores.

State political leaders have “trampled on the free speech of hundreds of thousands of Montanans who use the app to express themselves, gather information, and run their small business in the name of anti-Chinese sentiment,” ACLU Montana policy director Keegan Medrano said after the bill was signed.

The law is yet another skirmish in duels between TikTok and many western governments, with the app already banned on government devices in the United States, Canada and several countries in Europe.
