SOCIAL
Facebook Clashes with the US Government Over Vaccine Misinformation

It seems like Facebook may be on a collision course with the US Government once again, this time over the role that it may or may not be playing in the amplification of COVID-19 vaccine misinformation, which has been identified as a key impediment in the nation’s path to recovery from the pandemic.
On Friday, when asked directly about vaccine misinformation on Facebook, US President Joe Biden responded that ‘they’re killing people’ by allowing vaccine conspiracy theories to spread.
Biden’s comment came a day after the White House also noted that it’s been in regular contact with social media platforms to ensure that they remain aware of the latest narratives which pose a danger to public health.
As per White House press secretary Jen Psaki:
“We work to engage with them to better understand the enforcement of social media platform policy.”
In response to Biden’s remarks, Facebook immediately went on the offensive, with a Facebook spokesperson telling ABC News that it “will not be distracted by accusations which aren’t supported by the facts”.
Facebook followed that up with an official response today, in a post titled ‘Moving Past the Finger Pointing’.
“At a time when COVID-19 cases are rising in America, the Biden administration has chosen to blame a handful of American social media companies. While social media plays an important role in society, it is clear that we need a whole of society approach to end this pandemic. And facts – not allegations – should help inform that effort. The fact is that vaccine acceptance among Facebook users in the US has increased. These and other facts tell a very different story to the one promoted by the administration in recent days.”
The post goes on to highlight various studies which show that Facebook’s efforts to address vaccine hesitancy are working, and that, if anything, Facebook users are less resistant to the vaccine effort, in opposition to Biden’s remarks.
Which is largely in line with Facebook’s broader stance of late: that, based on academic research, there’s currently no definitive link between Facebook sharing and increased vaccine hesitancy, nor, similarly, any direct connection between Facebook usage and political polarization, despite ongoing claims.
In recent months, Facebook has taken a more proactive approach to dismissing these ideas, by explaining that polarizing and extremist content is actually bad for its business, despite the suggestion that it benefits from the related engagement with such posts.
As per Facebook:
“All social media platforms, including but not limited to ours, reflect what is happening in society and what’s on people’s minds at any given moment. This includes the good, the bad, and the ugly. For example, in the weeks leading up to the World Cup, posts about soccer will naturally increase – not because we have programmed our algorithms to show people content about soccer but because that’s what people are thinking about. And just like politics, soccer strikes a deep emotional chord with people. How they react – the good, the bad, and the ugly – will be reflected on social media.”
Facebook’s Vice President of Global Affairs Nick Clegg also took a similar angle back in March in his post about the News Feed being an interplay between people and platform – which means the platform itself cannot be fully to blame:
“The goal is to make sure you see what you find most meaningful – not to keep you glued to your smartphone for hours on end. You can think about this sort of like a spam filter in your inbox: it helps filter out content you won’t find meaningful or relevant, and prioritizes content you will.”
Clegg further notes that Facebook actively reduces the distribution of sensational and misleading content, as well as posts that are found to be false by its independent fact-checking partners.
“For example, Facebook demotes clickbait (headlines that are misleading or exaggerated), highly sensational health claims (like those promoting “miracle cures”), and engagement bait (posts that explicitly seek to get users to engage with them).”
Clegg also says that Facebook made a particularly significant commitment to this, in conflict with its own business interests, by implementing a change to the News Feed algorithm back in 2018 which gives more weight to updates from your friends, family, and groups that you’re a part of, over content from Pages that you follow.
So, according to Facebook, it doesn’t benefit from sensationalized content and fringe conspiracy theories – and in fact, it actually goes out of its way to penalize such material.
Yet, despite these claims, and the references to inconclusive academic papers and internal studies, the broader evidence doesn’t support Facebook’s stance.
Earlier this week, The New York Times reported that Facebook has been working to change the way that its own data analytics platform works, in order to restrict public access to insights which show that far-right posts and misinformation perform better on the platform than more balanced coverage and reports.
The controversy stems from this Twitter profile, created by Times reporter Kevin Roose, which displays a daily listing of the ten most engaging posts across Facebook, based on CrowdTangle data.
The top-performing link posts by U.S. Facebook pages in the last 24 hours are from:
1. ForAmerica
2. Taunton Daily Gazette
3. Ben Shapiro
4. Ben Shapiro
5. Sean Hannity
6. Nelly
7. Ben Shapiro
8. Newsmax
9. Dan Bongino
10. Ben Shapiro
— Facebook’s Top 10 (@FacebooksTop10) July 14, 2021
Far-right Pages consistently dominate the chart, which is why Facebook has previously sought to explain that the metrics used in creating the listing are flawed, and are therefore not indicative of actual post engagement and popularity.
According to the NYT report, Facebook had actually gone further than this internally, with staffers looking for a way to alter the data displayed within CrowdTangle to avoid such comparison.
Which didn’t go as planned:
“Several executives proposed making reach data public on CrowdTangle, in hopes that reporters would cite that data instead of the engagement data they thought made Facebook look bad. But [Brandon] Silverman, CrowdTangle’s chief executive, replied in an email that the CrowdTangle team had already tested a feature to do that and found problems with it. One issue was that false and misleading news stories also rose to the top of those lists.”
So, no matter how Facebook looked to spin it, these types of posts were still gaining traction, which shows that, even with the aforementioned updates and processes to limit such sharing, this remains the type of content that sees the most engagement, and thus reach, on the platform.
Which, you could argue, is a human problem rather than a Facebook one. But with 2.8 billion users, giving it more potential for content amplification than any platform in history, Facebook does need to take responsibility for the role that it plays in this process – particularly in a pandemic, where vaccine fear-mongering could end up costing the world an immeasurable toll.
It seems fairly clear that Facebook does play a significant part in this. And when you also consider that some 70% of Americans now get at least some news content from Facebook, it’s clear that the app has become a source of truth for many, informing their political stances, their civic understanding and, yes, their view of public health advice.
Heck, even flat earthers have been able to gain traction in the modern age, underlining the power of anti-science movements. And again, while you can’t definitively say that Facebook is responsible for this, if somebody posts a random video of flat earthers trying to prove their theory, it will probably gain traction due to the divisive, sensational nature of that content – like this clip, for example:

Videos like this attract believers and skeptics alike, and while many of the comments are critical, that’s all, in Facebook’s algorithmic judgment, engagement.
Thus, even your mocking remarks will help such material gain traction – and the more people who comment, the more momentum such posts get.
8 out of 10 people might dismiss such theories as total rubbish, but 2 might take the opportunity to dig deeper. Multiply that by the view counts these videos see and that’s a lot of potential influence on this front that Facebook is facilitating.
And definitely, these types of posts do gain traction. A study conducted by MIT in 2019 found that false news stories on Twitter are 70% more likely to be retweeted than those that are true, while further research into the motivations behind such activity has found that a need for belonging and community can also solidify groups around lies and misinformation as a psychological response.
There’s also another key element within this – the changing nature of media distribution itself.
As Yale University social psychologist William J. Brady recently explained:
“When you post things [on social media], you’re highly aware of the feedback that you get, the social feedback in terms of likes and shares. So when misinformation appeals to social impulses more than the truth does, it gets more attention online, which means people feel rewarded and encouraged for spreading it.”
That shift, in giving each person their own personal motivation for sharing certain content, has changed the paradigm for content reach, diluting the influence of publications themselves in favor of algorithms – which, again, are fueled by people and their need for validation and response.
You share a post saying ‘vaccines are safe’ and probably no one will care, but if you share one that says ‘vaccines are dangerous’, people will pay attention, and you’ll get all the notifications from all the likes, shares and comments, which will then trigger your dopamine receptors, and make you feel part of something bigger, something more – that your voice is important in the broader landscape.
As such, Facebook is somewhat right in pointing to human nature as the culprit, and not its own systems. But it, and other platforms, have given people the medium: they provide the means to share, and they devise the incentives to keep them posting.
And the more time that people spend on Facebook, the better it is for Facebook’s business.
You can’t argue that Facebook doesn’t benefit in this respect – and as such, it is in the company’s interests to turn a blind eye, and pretend there’s no problem with its systems, and the role that it plays in amplifying such movements.
But it does, it is, and the US Government is right to take a closer look at this element.
Paris mayor to stop using ‘global sewer’ X

Hidalgo called Twitter a ‘vast global sewer’ – Copyright POOL/AFP Leon Neal
Paris Mayor Anne Hidalgo said on Monday she was quitting Elon Musk’s social media platform X, formerly known as Twitter, which she described as a “global sewer” and a tool to disrupt democracy.
“I’ve made the decision to leave X,” Hidalgo said in an op-ed in French newspaper Le Monde. “X has in recent years become a weapon of mass destruction of our democracies”, she wrote.
The 64-year-old Socialist, who unsuccessfully stood for the presidency in 2022, joined Twitter, as it was then known, in 2009 and has been a frequent user of the platform.
She accused X of promoting “misinformation”, “anti-Semitism and racism.”
“The list of abuses is endless”, she added. “This media has become a vast global sewer.”
Since Musk took over Twitter in 2022, a number of high-profile figures said they were leaving the popular social platform, but there has been no mass exodus.
Several politicians including EU industry chief Thierry Breton have announced that they are opening accounts on competing networks in addition to maintaining their presence on X.
The City of Paris account will remain on X, the mayor’s office told AFP.
By contrast, some organisations have taken the plunge, including the US public radio network NPR and the German anti-discrimination agency.
Hidalgo has regularly faced personal attacks on social media including Twitter, as well as sometimes criticism over the lack of cleanliness and security in Paris.
In the latest furore, she has faced stinging attacks over an October trip to the French Pacific territories of New Caledonia and French Polynesia that was not publicised at the time and that she extended with a two-week personal vacation.
Meta Highlights Key Platform Manipulation Trends in Latest ‘Adversarial Threat Report’

While talk of a possible U.S. ban of TikTok has been tempered of late, concerns still linger around the app, and the way that it could theoretically be used by the Chinese Government to implement varying forms of data tracking and messaging manipulation in Western regions.
The latter was highlighted again this week, when Meta released its latest “Adversarial Threat Report,” which includes an overview of Meta’s latest detections, as well as a broader summary of its efforts throughout the year.
And while the data shows that Russia and Iran remain the most common source regions for coordinated manipulation programs, China is third on that list, with Meta shutting down almost 5,000 Facebook profiles linked to a Chinese-based manipulation program in Q3 alone.
As explained by Meta:
“We removed 4,789 Facebook accounts for violating our policy against coordinated inauthentic behavior. This network originated in China and targeted the United States. The individuals behind this activity used basic fake accounts with profile pictures and names copied from elsewhere on the internet to post and befriend people from around the world. They posed as Americans to post the same content across different platforms. Some of these accounts used the same name and profile picture on Facebook and X (formerly Twitter). We removed this network before it was able to gain engagement from authentic communities on our apps.”
Meta says that this group aimed to sway discussion around both U.S. and China policy by both sharing news stories, and engaging with posts related to specific issues.
“They also posted links to news articles from mainstream US media and reshared Facebook posts by real people, likely in an attempt to appear more authentic. Some of the reshared content was political, while other covered topics like gaming, history, fashion models, and pets. Unusually, in mid-2023 a small portion of this network’s accounts changed names and profile pictures from posing as Americans to posing as being based in India when they suddenly began liking and commenting on posts by another China-origin network focused on India and Tibet.”
Meta further notes that it took down more Coordinated Inauthentic Behavior (CIB) groups from China than any other region in 2023, reflecting the rising trend of Chinese operators looking to infiltrate Western networks.
“The latest operations typically posted content related to China’s interests in different regions worldwide. For example, many of them praised China, some of them defended its record on human rights in Tibet and Xinjiang, others attacked critics of the Chinese government around the world, and posted about China’s strategic rivalry with the U.S. in Africa and Central Asia.”
Google, too, has repeatedly removed large clusters of YouTube accounts of Chinese origin that had been seeking to build audiences in the app, in order to then seed pro-China sentiment.
The largest coordinated group identified by Google is an operation known as “Dragonbridge” which has long been the biggest originator of manipulative efforts across its apps.
Per Google’s reporting, it removed more than 50,000 instances of Dragonbridge activity across YouTube, Blogger and AdSense in 2022 alone, underlining the persistent efforts of Chinese groups to sway Western audiences.
So these groups, whether they’re associated with the CCP or not, are already looking to infiltrate Western-based networks. Which underlines the potential threat of TikTok in the same respect, given that it’s controlled by a Chinese owner, and therefore likely more directly accessible to these operators.
That’s partly why TikTok is already banned on government-owned devices in most regions, and why cybersecurity experts continue to sound the alarm about the app. If the above figures reflect the level of activity that non-Chinese platforms are already seeing, you can only imagine that, as TikTok’s influence grows, it too will be high on the list of distribution channels for the same material.
And we don’t have the same level of transparency into TikTok’s enforcement efforts, nor do we have a clear understanding of parent company ByteDance’s links to the CCP.
Which is why the threat of a possible TikTok ban remains, and will linger for some time yet, and could still spill over if there’s a shift in U.S./China relations.
One other point of note from Meta’s Adversarial Threat Report is its summary of AI usage for such activity, and how it’s changing over time.
X owner Elon Musk has repeatedly pointed to the rise of generative AI as a key vector for increased bot activity, because spammers will be able to create more complex, harder-to-detect bot accounts through such tools. That’s why X is pushing towards payment models as a means to counter the mass production of bot profiles.
And while Meta does agree that AI tools will enable threat actors to create larger volumes of convincing content, it also says that it hasn’t seen evidence “that it will upend our industry’s efforts to counter covert influence operations” at this stage.
Meta also makes this interesting point:
“For sophisticated threat actors, content generation hasn’t been a primary challenge. They rather struggle with building and engaging authentic audiences they seek to influence. This is why we have focused on identifying adversarial behaviors and tactics used to drive engagement among real people. Disrupting these behaviors early helps to ensure that misleading AI content does not play a role in covert influence operations. Generative AI is also unlikely to change this dynamic.”
So it’s not just content that they need, but interesting, engaging material, and because generative AI is based on everything that’s come before, it’s not necessarily built to establish new trends, which would then help these bot accounts build an audience.
These are some interesting notes on the current threat landscape, and how coordinated groups are still looking to use digital platforms to spread their messaging. Which will likely never stop, but it is worth noting where these groups originate from, and what that means for related discussion.
You can read Meta’s Q3 “Adversarial Threat Report” here.
US judge halts pending TikTok ban in Montana

TikTok use has continued to grow apace despite a growing number of countries banning the app from government devices. — © POOL/AFP Liam McBurney
A federal judge on Thursday temporarily blocked a ban on TikTok set to come into effect next year in Montana, saying the popular video sharing app was likely to win its pending legal challenge.
US District Court Judge Donald Molloy placed the injunction on the ban until the case, originally filed by TikTok in May, is decided on its merits.
Molloy deemed it likely TikTok and its users will win, since it appeared the Montana law not only violates free speech rights but runs counter to the fact that foreign policy matters are the exclusive domain of the federal government.
“The current record leaves little doubt that Montana’s legislature and attorney general were more interested in targeting China’s ostensible role in TikTok than they were with protecting Montana consumers,” Molloy said in the ruling.
The app is owned by Chinese firm ByteDance and has been accused by a wide swathe of US politicians of being under Beijing’s tutelage, something the company furiously denies.
Montana’s law says the TikTok ban will become void if the app is acquired by a company incorporated in a country not designated by the United States as a foreign adversary.
TikTok had argued that the unprecedented ban violates the constitutionally protected right to free speech.
The prohibition signed into law by Republican Governor Greg Gianforte is seen as a legal test for a national ban of the Chinese-owned platform, something lawmakers in Washington are increasingly calling for.
Montana’s ban would be the first to come into effect in the United States – Copyright AFP Kirill KUDRYAVTSEV
The ban would make it a violation each time “a user accesses TikTok, is offered the ability to access TikTok, or is offered the ability to download TikTok.”
Each violation is punishable by a $10,000 fine every day it takes place.
Under the law, Apple and Google will have to remove TikTok from their app stores.
State political leaders have “trampled on the free speech of hundreds of thousands of Montanans who use the app to express themselves, gather information, and run their small business in the name of anti-Chinese sentiment,” ACLU Montana policy director Keegan Medrano said after the bill was signed.
The law is the latest skirmish in the ongoing duel between TikTok and many Western governments, with the app already banned on government devices in the United States, Canada and several countries in Europe.