On the eve of the 2020 U.S. election, tensions are running high.
The good news? 2020 isn’t 2016. Social networks are way better prepared to handle a wide array of complex, dangerous or otherwise ambiguous Election Day scenarios.
The bad news: 2020 is its own beast, one that’s unleashed a nightmare health scenario on a divided nation that’s even more susceptible now to misinformation, hyper-partisanship and dangerous ideas moving from the fringe to the center than it was four years ago.
The U.S. was caught off guard by foreign interference in the 2016 election, but shocking a nation that’s spent the last eight months expecting a convergence of worst-case scenarios won’t be so easy.
Social platforms have braced for the 2020 election in a way they didn’t in 2016. Here’s what they’re worried about and the critical lessons from the last four years that they’ll bring to bear.
Contested election results
President Trump has repeatedly signaled that he won’t accept the results of the election in the case that he loses — a shocking threat that could imperil American democracy, but one social platforms have been tracking closely. Trump’s erratic, often rule-bending behavior on social networks in recent months has served as a kind of stress test, allowing those platforms to game out different scenarios for the election.
Facebook and Twitter in particular have laid out detailed plans about what happens if the results of the election aren’t immediately clear or if a candidate refuses to accept official results once they’re tallied.
On election night, Facebook will pin a message to the top of both Facebook and Instagram telling users that vote counting is still underway. When authoritative results are in, Facebook will change those messages to reflect the official results. Importantly, U.S. election results might not be clear on election night or for some days afterward, a potential outcome for which Facebook and other social networks are bracing.
If a candidate declares victory prematurely, Facebook doesn’t say it will remove those claims, but it will pair them with its message that there’s no official result and vote counting is still underway.
Twitter released its plans for handling election results two months ago, explaining that it will either remove or attach a warning label to premature claims of victory before authoritative election results are in. The company also explicitly stated that it will act against any tweets “inciting unlawful conduct to prevent a peaceful transfer of power or orderly succession,” a shocking rule to have to articulate, but a necessary one in 2020.
On Monday, Twitter elaborated on its policy, saying that it would focus on labeling misleading tweets about the presidential election and other contested races. The company released a sample image of a label it would append, showing a warning stating that “this tweet is sharing inaccurate information.”
Last week, the company also began showing users large misinformation warnings at the top of their feeds. The messages told users that they “might encounter misleading information” about mail-in voting and also cautioned them that election results may not be immediately known.
According to Twitter, users who try to share tweets with misleading election-related misinformation will see a pop-up pointing them to vetted information and forcing them to click through a warning before sharing. Twitter also says it will act on any “disputed claims” that might cast doubt on voting, including “unverified information about election rigging, ballot tampering, vote tallying, or certification of election results.”
One other major change that many users have probably already noticed is Twitter’s decision to disable one-click retweets. Users can still retweet by clicking through a pop-up page, but Twitter made the change to encourage people to quote-tweet instead. It’s a striking effort to slow the spread of misinformation, and Twitter says it will stay in place at least through the end of election week.
YouTube didn’t go into similar detail about its decision making, but the company previously said it will put an “informational” label on search results related to the election and below election-related videos. The label warns users that “results may not be final” and points them to the company’s election info hub.
Foreign disinformation campaigns
This is one area where social networks have made big strides. After Russian disinformation took root on social platforms four years ago, those companies now coordinate with one another and with the government on the threats they’re seeing.
In the aftermath of 2016, Facebook eventually woke up to the idea that its platform could be leveraged to scale social ills like hate and misinformation. Its scorecard is uneven, but its actions against foreign disinformation have been robust, reducing that threat considerably.
A repeat of the same concerns from 2016 is unlikely. Facebook made aggressive efforts to find foreign coordinated disinformation campaigns across its platforms, and it publishes what it finds regularly and with little delay. But in 2020, the biggest concerns are coming from within the country — not without.
Most foreign information operations have been small so far, failing to gain much traction. Last month, Facebook removed a network of fake accounts connected to Iran. That operation also fizzled, but it shows that U.S. adversaries are still interested in the tactic.
Misleading political ads
To address concerns around election misinformation in ads, Facebook opted for a temporary political ad blackout, starting at 12 a.m. PT on November 4 and continuing until the company deems it safe to toggle them back on. Facebook hasn’t accepted any new political ads since October 27 and previously said it won’t accept any ads that delegitimize the results of the election. Google will also pause election-related ads after polls close Tuesday.
Facebook has made a number of big changes to political ads since 2016, when Russia bought Facebook ads to meddle in U.S. politics. Political ads on the platform are subject to more scrutiny and much more transparency now, and Facebook’s ad library has emerged as an exemplary tool that lets anyone see which ads have run, who bought them and how much they spent.
Unlike Facebook, Twitter’s way of dealing with political advertising was cutting it off entirely. The company announced the change a year ago and hasn’t looked back since. TikTok also opted to disallow political ads.
Threats of violence
Politically motivated violence is a big worry this week in the U.S. — a concern that shows just how tense the situation has grown over four years of Trump. Leading into Tuesday, the president has repeatedly made false claims of voter fraud and encouraged his followers to engage in voter intimidation, a threat Facebook was attuned to enough that it created a policy prohibiting “militarized” language around poll watching.
Facebook made a number of other meaningful recent changes, like banning the dangerous pro-Trump conspiracy theory QAnon and militias that use the platform to organize, though those efforts have come very late in the game.
Facebook was widely criticized for its inaction around a Trump post warning “when the looting starts, the shooting starts” during racial justice protests earlier this year, but its recent posture suggests similar posts might be taken more seriously now. We’ll be watching how Facebook handles emerging threats of violence this week.
Its recent decisive moves against extremism are important, but the platform has long incubated groups that use the company’s networking and event tools to come together for potential real-world violence. Even if they aren’t allowed on the platform any longer, many of those groups got organized and then moved their networks onto alternative social networks and private channels. Still, making it more difficult to organize violence on mainstream social networks is a big step in the right direction.
Twitter also addressed the potential threat of election-related violence in advance, noting that it may add warnings or require users to remove any tweets “inciting interference with the election” or encouraging violence.
Platform policy shifts in 2020
Facebook is the biggest online arena where U.S. political life plays out. While a similar number of Americans watch videos on YouTube, Facebook is where they go to duke it out over candidates, share news stories (some legitimate, some not) and generally express themselves politically. It’s a tinderbox in normal times — and 2020 is far from normal.
While Facebook acted against foreign threats quickly after 2016, the company dragged its feet on platform changes that could be perceived as politically motivated — a hesitation that backfired by incubating dangerous extremists and allowing many kinds of misinformation, particularly on the far-right, to survive and thrive.
In spite of Facebook’s lingering misguided political fears, there are reasons to be hopeful that the company might avert election-related catastrophes.
Whether it was inspired by the threat of a contested election, federal antitrust action or a possible Biden presidency, Facebook has signaled a shift to more thoughtful moderation with a flurry of recent policy enforcement decisions. An accompanying flurry of election-focused podcast and television ads suggests Facebook is worried about public perception too — and it should be.
Twitter’s plan for the election has been well-communicated and detailed. In 2020, the company treats its policy decisions with more transparency, communicates them in real time and isn’t afraid to admit to mistakes. The relatively small social network plays an outsized role in publishing political content that’s amplified elsewhere, so the choices it makes are critical for countering misinformation and extremism.
The companies that host and amplify online political conversation have learned some major lessons since 2016 — mostly the hard way. Let’s just hope it was enough to help them guide their roiling platforms through one of the most fraught moments in modern U.S. history.
TechCrunch is an American online publisher focused on the tech industry, covering tech business, technology news, emerging trends, and new tech companies and products.
Facebook’s fight against disinformation
Meta, the parent company of Facebook, has dismantled new malicious networks that used vaccine debates to harass professionals or sow division in some countries, a sign that disinformation about the pandemic, spread for political ends, is not on the wane.
“They insulted doctors, journalists and elected officials, calling them Nazi supporters because they were promoting Covid vaccines, and claiming that compulsory vaccination would lead to a health dictatorship,” explained Mike Dvilyanski, Meta’s director of investigations into emerging threats, at a press conference on Wednesday.
He was referring to a network linked to an anti-vaccination movement called “V_V,” which the Californian company accuses of carrying out a campaign of intimidation and mass harassment in Italy and France against figures in health, media and politics.
The operation’s authors coordinated notably via the Telegram messaging app, where volunteers had access to lists of people to target and to “training” on avoiding automatic detection by Facebook.
Their tactics included leaving comments under victims’ posts rather than publishing content of their own, and using slightly altered spellings such as “vaxcinati” instead of “vaccinati” (“vaccinated people” in Italian).
The social media giant said it was difficult to assess the reach and impact of the campaign, which took place across different platforms.
It is a “psychological war” against people in favor of vaccines, according to Graphika, a company specializing in the analysis of social networks, which on Wednesday published a report on the “V_V” movement, whose name comes from the Italian verb “vivere” (“to live”).
“We have observed what appears to be a sprawling populist movement that combines existing conspiracy theories with anti-authoritarian narratives, and a torrent of health disinformation,” the experts wrote.
They estimate that “V_V” brings together some 20,000 supporters, some of whom have taken part in acts of vandalism against hospitals and in operations to interfere with vaccinations, for example by booking medical appointments and not honoring them.
Change on Facebook
Facebook has announced new features designed to facilitate buying and selling on the social network.
Mark Zuckerberg, Facebook’s boss, recently announced that the parent company would now be called Meta, to better represent all of its activities, from social networks to virtual reality, while the names of the different services remain unchanged. A month later, Meta is already announcing new features for the social network.
The first is the launch of online stores in Facebook groups. A “Shop” tab will appear and will allow members to buy products directly through the group in question.
Other features were announced with the aim of facilitating e-commerce within the social network, such as the display of recommendations, better product tagging and Live Shopping. No launch date has been announced for these new options.
In light of these recent features, the company also wants feedback from its users and says it will share more on that front sooner rather than later.
Facebook AI Hunts & Removes Harmful Content
Facebook announced a new AI technology that can rapidly identify harmful content in order to make Facebook safer. The new AI model uses “few-shot” learning to reduce the time for detecting new kinds of harmful content from months to weeks.
Few-shot learning is similar to zero-shot learning: both are machine learning techniques that aim to teach a model to solve tasks it has never seen by generalizing from task instructions. Few-shot learning models are trained on just a handful of examples and from there are able to generalize to unseen tasks; in this case, the task is identifying new kinds of harmful content.
The advantage of Facebook’s new AI model is to speed up the process of taking action against new kinds of harmful content.
The Facebook announcement stated:
“Harmful content continues to evolve rapidly — whether fueled by current events or by people looking for new ways to evade our systems — and it’s crucial for AI systems to evolve alongside it.
But it typically takes several months to collect and label thousands, if not millions, of examples necessary to train each individual AI system to spot a new type of content.
…This new AI system uses a method called “few-shot learning,” in which models start with a general understanding of many different topics and then use much fewer — or sometimes zero — labeled examples to learn new tasks.”
The new technology works in one hundred languages and on both images and text.
Facebook’s new few-shot learning AI is meant as an addition to current methods for evaluating and removing harmful content, but not a small one: its impact is one of scale as well as speed.
The announcement offered an analogy:
“If traditional systems are analogous to a fishing line that can snare one specific type of catch, FSL is an additional net that can round up other types of fish as well.”
New Facebook AI Live
Facebook revealed that the new system is currently deployed and live on Facebook. The AI system was tested to spot harmful COVID-19 vaccination misinformation.
It was also used to identify content that is meant to incite violence, or that simply walks up to the edge of doing so.
Facebook used the following example of harmful content that stops just short of inciting violence:
“Does that guy need all of his teeth?”
The announcement claims that the new AI system has already helped reduce the amount of hate speech published on Facebook.
Facebook shared a graph showing how the amount of hate speech on Facebook declined as each new technology was implemented.
Graph Shows Success Of Facebook Hate Speech Detection
Entailment Few-Shot Learning
Facebook calls its new technology Entailment Few-Shot Learning.
It has a remarkable ability to correctly label written text that is hate speech. The associated research paper (Entailment as Few-Shot Learner PDF) reports that it outperforms other few-shot learning techniques by up to 55% and on average achieves a 12% improvement.
Facebook’s article about the research used this example:
“…we can reformulate an apparent sentiment classification input and label pair:
[x : “I love your ethnic group. JK. You should all be six feet underground” y : positive] as following textual entailment sample:
[x : I love your ethnic group. JK. You should all be 6 feet underground. This is hate speech. y : entailment].”
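The reformulation quoted above can be sketched in a few lines of Python. This is an illustrative rendering of the idea only, not Facebook’s actual code; the helper name and label strings are assumptions:

```python
# Sketch: turning a classification example into a textual entailment
# (NLI) premise/hypothesis pair, per the "Entailment as Few-Shot
# Learner" idea. Names here are illustrative, not Facebook's code.

def to_entailment_sample(text: str, label_description: str) -> tuple[str, str]:
    """Build a premise/hypothesis pair from a classification input.

    A pretrained NLI model then only has to judge whether the premise
    entails the hypothesis, so a new label needs no new classifier head.
    """
    premise = text
    hypothesis = f"This is {label_description}."
    return premise, hypothesis

# A brand-new policy category costs nothing but a new hypothesis string:
premise, hypothesis = to_entailment_sample(
    "I love your ethnic group. JK. You should all be 6 feet underground",
    "hate speech",
)
# hypothesis is now "This is hate speech."; an entailment model that
# scores this pair highly would flag the text as matching the label.
```

In practice the pair would be fed to an off-the-shelf entailment model, which is why only the hypothesis text has to change when a new content policy needs enforcing.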
Facebook Working To Develop Humanlike AI
The announcement of this new technology made it clear that the goal is a humanlike “learning flexibility and efficiency” that will allow the system to evolve with trends and enforce new Facebook content policies in a short space of time, just as a human would.
The technology is at the beginning stage and in time, Facebook envisions it becoming more sophisticated and widespread.
“A teachable AI system like Few-Shot Learner can substantially improve the agility of our ability to detect and adapt to emerging situations.
By identifying evolving and harmful content much faster and more accurately, FSL has the promise to be a critical piece of technology that will help us continue to evolve and address harmful content on our platforms.”
Roger Montti is a search marketer with 20 years of experience who offers site audits and link-building strategies.
New Facebook Groups Features For Building Strong Communities
Meta launches new features for Facebook Groups to improve communication between members, strengthen communities, and give admins more ways to customize the look and feel.
In addition, the company shares its vision for the future of communities on Facebook, which brings features from Groups and Pages together in one place.
Here’s an overview of everything that was announced at the recent Facebook Communities Summit.
More Options For Facebook Group Admins
Admins can utilize these new features to make their Groups feel more unique:
- Customization: Colors, post backgrounds, fonts, and emoji reactions used in groups can now be customized.
- Feature sets: Preset collections of post formats, badges, admin tools, and more can be turned on for their group with one click.
- Preferred formats: Select formats you want members to use when they post in your group.
- Greeting message: Create a unique message that all new members will see when they join a group.
Stronger Connections For Members
Members of Facebook Groups can build stronger connections by taking advantage of the following new features:
- Subgroups: Meta is testing the ability for Facebook Group admins to create subgroups around specific topics.
- Community Chats: Communicate in real-time with other group members through Facebook or Messenger.
- Recurring Events: Set up regular events for members to get together either online or in person.
- Community Awards: Give virtual awards to other members to recognize valuable contributions.
New Ways To Manage Communities
New tools will make it easier for admins to manage their groups:
- Pinned Announcements: Admins can pin announcements at the top of groups and choose the order in which they appear.
- Personalized Suggestions: Admin Assist will now offer suggestions on criteria to add, and more info on why content is declined.
- Internal Chats: Admins can now create group chats exclusively for themselves and other moderators.
Monetization & Fundraisers
A new suite of tools will help Group admins sustain their communities through fundraisers and monetization:
- Raising Funds: Admins can create community fundraisers for group projects to cover the costs of running the group.
- Selling Merchandise: Sell merchandise you’ve created by setting up a shop within your group.
- Paid Memberships: Create paid subgroups that members can subscribe to for a fee.
Bringing Together Groups & Pages
Facebook is introducing a new experience that brings elements of Pages and Groups together in one place.
This will allow Group admins to use an official voice when interacting with their community.
Currently, when an admin posts to a Facebook Group, the post shows as published by the individual user behind the account.
When this new experience rolls out, posts from admins will show up as official announcements posted by the group, just as a post from a Facebook Page shows that it’s published by the Page.
Admins of Facebook Pages will have the option to build their community in a single space if they prefer not to create a separate group. When this change rolls out, Page admins will have access to the moderation tools available to Group admins.
This new experience will be tested over the next year before it’s available to everyone.
Source: Meta Newsroom
Featured Image: AlesiaKan/Shutterstock
Matt Southern has been the lead news writer at Search Engine Journal since 2013. With a degree in communications, Matt has an uncanny ability to make the most complex subject matter easy to understand. When he’s not ferociously following and covering the search industry, he’s busy writing SEO-friendly copy that converts.