
Twitter screens Trump’s Minneapolis threat-tweet for glorifying violence


After applying a fact-checking label Tuesday to a misleading vote-by-mail tweet made by US president Donald Trump, Twitter is on a roll and has labeled another of the president’s tweets — this time screening his words from casual view with what it calls a “public interest notice” that states the tweet violated its rules about glorifying violence. 

Here’s how the tweet appears without further interaction:

The public interest notice replaces the substance of what Trump wrote, meaning a user has to actively click through to view the offending tweet.

Engagement options are also limited by this label: users can only retweet the offending tweet with a comment; they cannot like it, reply to it or retweet it as-is.

Twitter’s notice goes on to explain why it has not removed the offending tweet entirely — and this is where the public interest element of the policy kicks in — with the company writing: “Twitter has determined that it may be in the public’s interest for the Tweet to remain accessible.” 

Twitter appears to be shrugging off the president’s decision yesterday to sign an executive order targeting the legal shield that internet companies rely on to protect them from liability for user-created content. In doing so it is doubling down on displeasing Trump, who has accused social media platforms generally of deliberately suppressing conservative views, despite plenty of evidence that ad-targeting platform algorithms actually boost outrage-fuelled content, which tends, conversely, to amplify conservative viewpoints.

In the latest clash, Trump had tweeted in reference to violent demonstrations taking place in Minneapolis sparked by the killing of a black man, George Floyd, by a white police officer — with the president claiming that “THUGS are dishonoring the memory of George Floyd” before threatening to send in the “Military”.

“Any difficulty and we will assume control but, when the looting starts, the shooting starts. Thank you!” Trump added — making a bald threat to use military force against civilians.


Twitter has wrestled for years with how to handle world leaders who break its content rules, most often as a result of Trump, who routinely uses the platform to bully all manner of targets, from rival politicians to hated journalists, disobedient business leaders and even actors who displease him, as well as to dispense direct and sometimes violent threats.

Since being elected, Trump has also used Twitter’s global platform as a foreign policy weapon, firing military threats at the likes of North Korea and Iran in tweet form.

Back in 2018, for example, he teased North Korean leader Kim Jong-Un with button-pushing nuclear destruction (see below tweet) — before going on to “fall in love” with the dictator when he met him in person.

Twitter’s go-to defence for not taking offending Trump tweets down in the past has been that, as US president, the substance of what the man tweets — however mad, bad and dangerous — is inherently newsworthy.

However, more recently, the company has created a policy tool that allows it to intervene — defining terms last summer around “public interest” content on Twitter.

It warned then (almost a full year ago, in June 2019) that it might place a public interest notice on tweets that would otherwise violate its rules (and therefore merit a takedown) in order “to provide additional context and clarity”, rather than removing the offensive tweet.


Fast forward a year and the tech giant has started applying labels to Trump’s tweets — beginning with a fact-check label earlier this week, related to the forthcoming US election, and following up now with a public interest notice related to Trump glorifying violence.

So, finally, the tech giant seems to be inching towards drawing a limit-line around Trump in near real-time.

Explaining its decision to badge the US president’s threat to order the military to shoot looters in Minneapolis, the company writes: “This Tweet violates our policies regarding the glorification of violence based on the historical context of the last line, its connection to violence, and the risk it could inspire similar actions today.”

“We’ve taken action in the interest of preventing others from being inspired to commit violent acts, but have kept the Tweet on Twitter because it is important that the public still be able to see the Tweet given its relevance to ongoing matters of public importance,” Twitter goes on.

It also links to its policy against tweets that glorify violence — which states unequivocally [in bold]: “You may not threaten violence against an individual or a group of people.”

Back in June, when Twitter announced the ‘abusive behavior’ label, it also warned that tweets which get screened with a public interest notice will not benefit from any algorithmic acceleration, writing: “We’ll also take steps to make sure the Tweet is not algorithmically elevated on our service, to strike the right balance between enabling free expression, fostering accountability, and reducing the potential harm caused by these Tweets.”

However the newsworthiness of Twitter’s decision to finally apply its own rules vis-a-vis Trump will ensure there’s plenty of non-algorithmic amplification (and no little irony).


We reached out to the company with questions about its decision to apply a public interest screen on Trump’s latest tweet but at the time of writing it had not responded.

On Wednesday night, Twitter CEO and co-founder, Jack Dorsey, put out a series of tweets defending its decision to apply a fact-check label to Trump’s earlier misleading tweets about vote-by-mail.

“This does not make us an ‘arbiter of truth’,” wrote Dorsey. “Our intention is to connect the dots of conflicting statements and show the information in dispute so people can judge for themselves. More transparency from us is critical so folks can clearly see the why behind our actions.”

Dorsey’s remarks followed pointed comments made by Facebook CEO Mark Zuckerberg to Fox News, seeking to contrast Facebook’s claimed ‘neutrality’ when policing its platform with Twitter’s policy of taking a stance on issues such as political advertising (which Twitter does not allow).

“I just believe strongly that Facebook shouldn’t be the arbiter of truth of everything that people say online,” Zuckerberg told the conservative news station. “Private companies… especially these platform companies, shouldn’t be in the position of doing that.”

It’s notable that Dorsey used Zuckerberg’s exact turn of phrase — “arbiter of truth” — to reject Facebook’s attack on Twitter’s policy as a straw man argument.

TechCrunch


Facebook fighting against disinformation: Launch new options


Meta, the parent company of Facebook, has dismantled new malicious networks that used vaccine debates to harass professionals or sow division in some countries, a sign that disinformation about the pandemic, spread for political ends, is not on the wane.

“They insulted doctors, journalists and elected officials, calling them supporters of the Nazis because they were promoting vaccines against Covid-19, and insisting that compulsory vaccination would lead to a health dictatorship,” explained Mike Dvilyanski, director of investigations into emerging threats, at a press conference on Wednesday.

He was referring to a network linked to an anti-vaccination movement called “V_V”, which the Californian group accuses of having carried out a campaign of intimidation and mass harassment in Italy and France against health figures, media outlets and politicians.

The authors of this operation coordinated in particular via the Telegram messaging app, where volunteers had access to lists of people to target and to “training” on how to avoid automatic detection by Facebook.

Their tactics included leaving comments under victims’ messages rather than posting content, and using slightly changed spellings like “vaxcinati” instead of “vaccinati”, meaning “people vaccinated” in Italian.
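This evasion tactic works because exact-match filters miss altered spellings. As a toy illustration (not Facebook’s actual detection system, whose internals are not public), a small edit-distance check can catch near-miss spellings of a blocked term:

```python
# Toy sketch: exact-match filters miss slightly altered spellings like
# "vaxcinati", but a Levenshtein edit-distance check can flag them.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def matches_blocked_term(token: str, blocked: list[str], max_dist: int = 1) -> bool:
    """Flag tokens within max_dist edits of any blocked term."""
    return any(edit_distance(token.lower(), term) <= max_dist for term in blocked)

blocked_terms = ["vaccinati"]
print(matches_blocked_term("vaccinati", blocked_terms))  # True (exact match)
print(matches_blocked_term("vaxcinati", blocked_terms))  # True (one substitution)
```

In practice a threshold of one edit keeps false positives low while still catching single-character swaps of the kind described above.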

The social media giant said it was difficult to assess the reach and impact of the campaign, which took place across different platforms.

This is a “psychological war” against people in favor of vaccines, according to Graphika, a company specializing in the analysis of social networks, which on Wednesday published a report on the “V_V” movement, whose name comes from the Italian verb “vivere” (“to live”).

“We have observed what appears to be a sprawling populist movement that combines existing conspiratorial theories with anti-authoritarian narratives, and a torrent of health disinformation,” experts detail.


They estimate that “V_V” brings together some 20,000 supporters, some of whom have taken part in acts of vandalism against hospitals and operations to interfere with vaccinations, by making medical appointments without honoring them, for example.

Change on Facebook

Facebook has also announced features intended to facilitate sales and purchases on the social network.

Mark Zuckerberg, the boss of Facebook, announced that the parent company would now be called Meta, to better represent all of its activities, from social networks to virtual reality, but the names of the different services will remain unchanged. A month later, Meta is already announcing news for the social network.

The first is the launch of online stores in Facebook groups. A “Shop” tab will appear and will allow members to buy products directly through the group in question.

Other features have been communicated with the aim of facilitating e-commerce within the social network, such as the display of recommendations and a better mention of products or even Live Shopping. At this time, no date has been announced regarding the launch of these new options.

In light of these recent features, the company wants to gather feedback from its users, and says it will announce more on this front sooner rather than later.


Facebook AI Hunts & Removes Harmful Content



Facebook announced a new AI technology that can rapidly identify harmful content in order to make Facebook safer. The new AI model uses “few-shot” learning to reduce the time for detecting new kinds of harmful content from months down to a matter of weeks.

Few-Shot Learning

Few-shot learning is similar to zero-shot learning. Both are machine learning techniques whose goal is to teach a machine to solve an unseen task by learning to generalize from instructions for solving related tasks.

Few-shot learning models are trained on just a handful of examples and from there are able to scale up and solve unseen tasks; in this case, the task is identifying new kinds of harmful content.
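As a toy sketch of the few-shot idea (in no way Facebook’s actual model, which is built on large pretrained language models), a handful of labeled examples per class can drive a nearest-centroid classifier over bag-of-words vectors:

```python
# Toy few-shot classifier: build one centroid per label from a few examples,
# then assign new text to the label whose centroid it is most similar to.
from collections import Counter
import math

def vectorize(text: str) -> Counter:
    """Bag-of-words token counts (lowercased, whitespace-split)."""
    return Counter(text.lower().split())

def cosine(u: Counter, v: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[w] * v[w] for w in u)
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def few_shot_classify(examples: dict[str, list[str]], text: str) -> str:
    """examples maps each label to a few example texts (the 'shots')."""
    centroids: dict[str, Counter] = {}
    for label, texts in examples.items():
        c = Counter()
        for t in texts:
            c.update(vectorize(t))
        centroids[label] = c
    return max(centroids, key=lambda lab: cosine(vectorize(text), centroids[lab]))

examples = {
    "harmful": ["go hurt them", "they deserve to be hurt"],
    "benign": ["have a great day", "what a great game"],
}
print(few_shot_classify(examples, "you deserve to get hurt"))  # harmful
```

The real system replaces the bag-of-words vectors with representations from a model pretrained on broad data, which is what lets it generalize from so few labeled examples.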

The advantage of Facebook’s new AI model is to speed up the process of taking action against new kinds of harmful content.

The Facebook announcement stated:

“Harmful content continues to evolve rapidly — whether fueled by current events or by people looking for new ways to evade our systems — and it’s crucial for AI systems to evolve alongside it.

But it typically takes several months to collect and label thousands, if not millions, of examples necessary to train each individual AI system to spot a new type of content.

…This new AI system uses a method called “few-shot learning,” in which models start with a general understanding of many different topics and then use much fewer — or sometimes zero — labeled examples to learn new tasks.”

The new technology is effective in one hundred languages and works on both images and text.


Facebook’s new few-shot learning AI is meant as an addition to current methods for evaluating and removing harmful content.

Although it’s an addition to current methods, it’s not a small one: the impact of the new AI is one of scale as well as speed.

The announcement offers an analogy: “If traditional systems are analogous to a fishing line that can snare one specific type of catch, FSL is an additional net that can round up other types of fish as well.”

New Facebook AI Live

Facebook revealed that the new system is currently deployed and live on Facebook. The AI system was tested on spotting harmful COVID-19 vaccine misinformation, and was also used to identify content that is meant to incite violence or that simply walks up to the edge of doing so.

Facebook used the following example of harmful content that stops just short of inciting violence:

“Does that guy need all of his teeth?”

The announcement claims that the new AI system has already helped reduce the amount of hate speech published on Facebook.

Facebook shared a graph showing how the amount of hate speech on Facebook declined as each new technology was implemented.


Entailment Few-Shot Learning

Facebook calls their new technology, Entailment Few-Shot Learning.

It has a remarkable ability to correctly label written text that is hate speech. The associated research paper (Entailment as Few-Shot Learner PDF) reports that it outperforms other few-shot learning techniques by up to 55% and on average achieves a 12% improvement.


Facebook’s article about the research used this example:

“…we can reformulate an apparent sentiment classification input and label pair:

[x: “I love your ethnic group. JK. You should all be six feet underground”, y: positive]

as the following textual entailment sample:

[x: “I love your ethnic group. JK. You should all be 6 feet underground. This is hate speech.”, y: entailment].”
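That reformulation step can be sketched in code. This is only an illustration of the template idea from the paper; the function and label names here are mine, not Facebook’s:

```python
# Sketch of the entailment reformulation: a (text, label) classification pair
# becomes a textual-entailment pair by appending a natural-language sentence
# that states the label, with the target set to "entailment".

LABEL_DESCRIPTIONS = {
    "hate_speech": "This is hate speech.",
    "not_hate_speech": "This is not hate speech.",
}

def to_entailment_sample(text: str, label: str) -> dict:
    """Turn a classification example into an entailment example.

    The hypothesis sentence verbalizes the label; a pretrained entailment
    model can then be reused as a few-shot classifier."""
    return {
        "x": f"{text} {LABEL_DESCRIPTIONS[label]}",
        "y": "entailment",
    }

sample = to_entailment_sample(
    "I love your ethnic group. JK. You should all be 6 feet underground.",
    "hate_speech",
)
print(sample["x"])
# I love your ethnic group. JK. You should all be 6 feet underground. This is hate speech.
print(sample["y"])  # entailment
```

The benefit of this framing is that a single pretrained entailment model can serve many classification tasks, since a new task only requires writing a new label sentence rather than training a new classifier.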

Facebook Working To Develop Humanlike AI

The announcement of this new technology made it clear that the goal is a humanlike “learning flexibility and efficiency” that will allow it to evolve with trends and enforce new Facebook content policies in a short space of time, just like a human.

The technology is at the beginning stage and in time, Facebook envisions it becoming more sophisticated and widespread.

“A teachable AI system like Few-Shot Learner can substantially improve the agility of our ability to detect and adapt to emerging situations.

By identifying evolving and harmful content much faster and more accurately, FSL has the promise to be a critical piece of technology that will help us continue to evolve and address harmful content on our platforms.”

Citations

Read Facebook’s Announcement Of New AI

Our New AI System to Help Tackle Harmful Content

Article About Facebook’s New Technology

Harmful content can evolve quickly. Our new AI system adapts to tackle it

Read Facebook’s Research Paper

Entailment as Few-Shot Learner (PDF)

Searchenginejournal.com


New Facebook Groups Features For Building Strong Communities


Meta launches new features for Facebook Groups to improve communication between members, strengthen communities, and give admins more ways to customize the look and feel.

In addition, the company shares its vision for the future of communities on Facebook, which brings features from Groups and Pages together in one place.

Here’s an overview of everything that was announced at the recent Facebook Communities Summit.

More Options For Facebook Group Admins

Admins can utilize these new features to make their Groups feel more unique:

  • Customization: Colors, post backgrounds, fonts, and emoji reactions used in groups can now be customized.
  • Feature sets: Preset collections of post formats, badges, admin tools, and more can be turned on for a group with one click.
  • Preferred formats: Select formats you want members to use when they post in your group.
  • Greeting message: Create a unique message that all new members will see when they join a group.
Screenshot from about.fb.com/news, November 2021.

Stronger Connections For Members

Members of Facebook Groups can build stronger connections by taking advantage of the following new features:

  • Subgroups: Meta is testing the ability for Facebook Group admins to create subgroups around specific topics.
  • Community Chats: Communicate in real-time with other group members through Facebook or Messenger.
  • Recurring Events: Set up regular events for members to get together either online or in person.
  • Community Awards: Give virtual awards to other members to recognize valuable contributions.

New Ways To Manage Communities

New tools will make it easier for admins to manage their groups:

  • Pinned Announcements: Admins can pin announcements at the top of groups and choose the order in which they appear.
  • Personalized Suggestions: Admin Assist will now offer suggestions on criteria to add, and more info on why content is declined.
  • Internal Chats: Admins can now create group chats exclusively for themselves and other moderators.

Monetization & Fundraisers

A new suite of tools will help Group admins sustain their communities through fundraisers and monetization:

  • Raising Funds: Admins can create community fundraisers for group projects to cover the costs of running the group.
  • Selling Merchandise: Sell merchandise you’ve created by setting up a shop within your group.
  • Paid Memberships: Create paid subgroups that members can subscribe to for a fee.

Bringing Together Groups & Pages

Facebook is introducing a new experience that brings elements of Pages and Groups together in one place.


This will allow Group admins to use an official voice when interacting with their community.

Currently, when an admin posts to a Facebook Group, the post shows as published by the individual user behind the account.

When this new experience rolls out, posts from admins will show up as official announcements posted by the group, just as a post from a Facebook Page shows as published by the Page.

Admins of Facebook Pages will have the option to build their community in a single space if they prefer not to create a separate group. When this change rolls out, Page admins can utilize moderation tools accessible to Group admins.

This new experience will be tested over the next year before it’s available to everyone.

Source: Meta Newsroom


Featured Image: AlesiaKan/Shutterstock

Searchenginejournal

