Brexit ad blitz data firm paid by Vote Leave broke privacy laws, watchdogs find

A joint investigation by watchdogs in Canada and British Columbia has found that the Cambridge Analytica-linked data firm AggregateIQ (AIQ) broke privacy laws in Facebook ad-targeting work it undertook for the official Vote Leave campaign in the UK’s 2016 EU referendum.

A quick reminder: Vote Leave was the official leave campaign in the referendum on the UK’s membership of the European Union, while Cambridge Analytica is the (now defunct) firm at the center of a massive Facebook data misuse scandal which has dented the social network’s fortunes and continues to tarnish its reputation.

Vote Leave’s campaign director, Dominic Cummings — now a special advisor to the UK prime minister — wrote in 2017 that the winning recipe for the leave campaign was data science. And, more specifically, spending 98% of its marketing budget on “nearly a billion targeted digital adverts”.

Targeted at Facebook users.

The problem is, per the Canadian watchdogs’ conclusions, AIQ did not have proper legal consents from UK voters for disclosing their personal information to Facebook for the Brexit ad blitz which Cummings ordered.

Either for “the purpose of advertising to those individuals (via ‘custom audiences’) or for the purpose of analyzing their traits and characteristics in order to locate and target others like them (via ‘lookalike audiences’)”.

Oops.
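The report doesn’t detail AIQ’s exact pipeline, but “custom audience” targeting generally works by uploading normalized, SHA-256-hashed contact details for the platform to match against its own records. Hashing protects the raw identifiers in transit, but disclosing them for matching still requires valid consent. Here is a minimal illustrative sketch of that hashing step (the function name and sample addresses are hypothetical):

```python
import hashlib

def normalize_and_hash(email: str) -> str:
    """Normalize an email the way audience-matching tools typically
    require (trim whitespace, lowercase), then SHA-256 hash it so the
    plaintext address is never uploaded."""
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

# Hypothetical identifiers from a campaign's voter list.
voters = [" Alice@Example.com", "bob@example.com"]
hashed_audience = [normalize_and_hash(e) for e in voters]

# The hash is deterministic, so the platform can match it against its
# own hashed records -- which is why consent still matters.
print(hashed_audience[0])
```

Because the same input always produces the same digest, the platform can link the upload back to real accounts; the hashing is a transport safeguard, not an anonymization step.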

Last year the UK’s Electoral Commission also concluded that Vote Leave breached election campaign spending limits by channeling money to AIQ, via undeclared joint working with another Brexit campaign, BeLeave, to run the targeted political ads on Facebook’s platform. So there’s a full sandwich of legal wrongdoing stuck to the Brexit mess that UK society remains mired in, more than three years later.


Meanwhile, the current UK General Election is now a digital petri dish for data scientists and democracy hackers to run wild experiments in microtargeted manipulation — given election laws haven’t been updated to take account of the outgrowth of the adtech industry’s tracking and targeting infrastructure, despite multiple warnings from watchdogs and parliamentarians.

Data really is a helluva drug.

The Canadian investigation cleared AIQ of any wrongdoing in its use of phone numbers to send SMS messages for another pro-Brexit campaign, BeLeave; a purpose the watchdogs found had been authorized by the consent provided by individuals who gave their information to that youth-focused campaign.

But they did find consent problems with work AIQ undertook for various US campaigns on behalf of Cambridge Analytica affiliate, SCL Elections — including for a political action committee, a presidential primary campaign and various campaigns in the 2014 midterm elections.

And, again — as we know — Facebook is squarely in the frame here too.

“The investigation finds that the personal information provided to and used by AIQ comes from disparate sources. This includes psychographic profiles derived from personal information Facebook disclosed to Dr. Aleksandr Kogan, and onward to Cambridge Analytica,” the watchdogs write.

“In the case of their work for US campaigns… AIQ did not attempt to determine whether there was consent it could rely on for its use and disclosure of personal information.”

The investigation also looked at AIQ’s work for multiple Canadian campaigns — finding fewer issues related to consent, though the report states that in “certain cases, the purposes for which individuals are informed, or could reasonably assume their personal information is being collected, do not extend to social media advertising and analytics”.


AIQ also gets told off for failing to properly secure the data it misused.

This element of the probe resulted from a data breach reported by UpGuard after it found AIQ running an unsecured GitLab repository — holding what the report dubs “substantial personal information”, as well as encryption keys and login credentials which it says put the personal information of 35 million+ people at risk.

Double oops.

“The investigation determined that AIQ failed to take reasonable security measures to ensure that personal information under its control was secure from unauthorized access or disclosure,” is the inexorable conclusion.

Turns out if an entity doesn’t have a proper legal right to people’s information in the first place it may not be majorly concerned about where else the data might end up.

The report flows from an investigation into allegations of unauthorized access and use of Facebook user profiles which was started by the Office of the Information and Privacy Commissioner for BC in late 2017. A separate probe was opened by the Office of the Privacy Commissioner of Canada last year. The two watchdogs subsequently combined their efforts.

The upshot for AIQ from the joint investigation’s finding of multiple privacy and security violations is a series of, er, “recommendations”.

On the data use front it is suggested the company take “reasonable measures” to ensure any third-party consent it relies on for collection, use or disclosure of personal information on behalf of clients is “adequate” under the relevant Canadian and BC privacy laws.

“These measures should include both contractual measures and other measures, such as reviewing the consent language used by the client,” the watchdogs suggest. “Where the information is sensitive, as with political opinions, AIQ should ensure there is express consent, rather than implied.”


On security, the recommendations are similarly for it to “adopt and maintain reasonable security measures to protect personal information, and that it delete personal information that is no longer necessary for business or legal purposes”.

“During the investigation, AIQ took steps to remedy its security breach. AIQ has agreed to implement the Offices’ recommendations,” the report adds.

The upshot of political ‘data science’ for Western democracies? That’s still tbc. Buckle up.

TechCrunch


Facebook fighting against disinformation: Launch new options

Meta, the parent company of Facebook, has dismantled new malicious networks that used vaccine debates to harass professionals or sow division in some countries — a sign that disinformation about the pandemic, spread for political ends, is not on the wane.

“They insulted doctors, journalists and elected officials, calling them Nazi supporters because they were promoting vaccines against Covid, and claiming that compulsory vaccination would lead to a health dictatorship,” explained Mike Dvilyanski, director of investigations into emerging threats, at a press conference on Wednesday.

He was referring to a network linked to an anti-vaccination movement called “V_V”, which the Californian group accuses of having carried out a campaign of intimidation and mass harassment in Italy and France against figures in health, the media and politics.

The authors of this operation coordinated in particular via the Telegram messaging system, where the volunteers had access to lists of people to target and to “training” to avoid automatic detection by Facebook.

Their tactics included leaving comments under victims’ messages rather than posting content, and using slightly changed spellings like “vaxcinati” instead of “vaccinati”, meaning “people vaccinated” in Italian.
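Platforms typically counter deliberate misspellings like these with fuzzy string matching. As a toy sketch of the idea — production systems use far more robust normalization and machine-learned detection — a Levenshtein edit-distance check can flag tokens one substitution away from a watched keyword (function names here are illustrative):

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming:
    the minimum number of insertions, deletions and substitutions
    needed to turn one string into the other."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def is_evasive_variant(token: str, keyword: str, max_dist: int = 1) -> bool:
    """Flag tokens that are close to, but not exactly, a watched keyword."""
    return token != keyword and edit_distance(token, keyword) <= max_dist

print(is_evasive_variant("vaxcinati", "vaccinati"))  # True: one substitution
```

The "vaxcinati"/"vaccinati" swap described above differs by a single character, so even this naive check would catch it — which is why evasion tactics keep evolving.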

The social media giant said it was difficult to assess the reach and impact of the campaign, which took place across different platforms.

This is a “psychological war” against people in favor of vaccines, according to Graphika, a company specializing in the analysis of social networks, which on Wednesday published a report on the “V_V” movement, whose name comes from the Italian verb “vivere” (“to live”).

“We have observed what appears to be a sprawling populist movement that combines existing conspiracy theories with anti-authoritarian narratives, and a torrent of health disinformation,” the experts write.


They estimate that “V_V” brings together some 20,000 supporters, some of whom have taken part in acts of vandalism against hospitals and operations to interfere with vaccinations, by making medical appointments without honoring them, for example.

Change on Facebook

Facebook has also announced new features intended to facilitate sales and purchases on the social network.

Mark Zuckerberg, the boss of Facebook, announced that the parent company would now be called Meta, to better represent all of its activities, from social networks to virtual reality, but the names of the different services will remain unchanged. A month later, Meta is already announcing news for the social network.

The first is the launch of online stores in Facebook groups. A “Shop” tab will appear and will allow members to buy products directly through the group in question.

Other features have been announced with the aim of facilitating e-commerce within the social network, such as the display of recommendations, better product tagging and even Live Shopping. At this time, no launch date has been announced for these new options.

In light of these recent features, the company wants feedback from its users via a survey, though it has yet to announce further details.


Facebook AI Hunts & Removes Harmful Content


Facebook announced a new AI technology that can rapidly identify harmful content in order to make Facebook safer. The new AI model uses “few-shot” learning to reduce the time for detecting new kinds of harmful content from months down to weeks.

Few-Shot Learning

Few-shot learning is similar to zero-shot learning. Both are machine learning techniques whose goal is to teach a model to solve an unseen task by learning to generalize from instructions or examples.

Few-shot learning models are trained on just a few examples and from there are able to scale up and solve unseen tasks — in this case, the task of identifying new kinds of harmful content.
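As an illustration of the underlying idea — not Facebook’s actual system, which builds on large pretrained language models — a few labeled examples per class can be averaged into a class “prototype”, and new text classified by similarity to the nearest prototype. The toy bag-of-words embedding below stands in for a real model, and all example data is invented:

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words count vector.
    Real few-shot systems use a large pretrained model here."""
    return Counter(text.lower().split())

def cosine(u: Counter, v: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[w] * v[w] for w in u)
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def prototype(examples):
    """Average the few labeled examples into one class prototype."""
    proto = Counter()
    for text in examples:
        proto.update(embed(text))
    return proto

# A handful of labeled examples per class -- the 'few shots'.
protos = {
    "harmful": prototype(["you should all disappear", "they deserve harm"]),
    "benign": prototype(["lovely weather today", "great game last night"]),
}

def classify(text: str) -> str:
    """Assign the label whose prototype is most similar to the text."""
    return max(protos, key=lambda label: cosine(embed(text), protos[label]))

print(classify("you all deserve harm"))  # -> "harmful"
```

The appeal is that adding a new category needs only a handful of examples to build a new prototype, rather than the thousands of labels a conventional classifier would require.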

The advantage of Facebook’s new AI model is to speed up the process of taking action against new kinds of harmful content.

The Facebook announcement stated:

“Harmful content continues to evolve rapidly — whether fueled by current events or by people looking for new ways to evade our systems — and it’s crucial for AI systems to evolve alongside it.

But it typically takes several months to collect and label thousands, if not millions, of examples necessary to train each individual AI system to spot a new type of content.

…This new AI system uses a method called “few-shot learning,” in which models start with a general understanding of many different topics and then use much fewer — or sometimes zero — labeled examples to learn new tasks.”

The new technology works in more than one hundred languages and on both images and text.


Facebook’s new few-shot learning AI is meant as an addition to its current methods for evaluating and removing harmful content.

It is not a small addition, though: the impact of the new AI is one of scale as well as speed.

“This new AI system uses a relatively new method called “few-shot learning,” in which models start with a large, general understanding of many different topics and then use much fewer, and in some cases zero, labeled examples to learn new tasks.

If traditional systems are analogous to a fishing line that can snare one specific type of catch, FSL is an additional net that can round up other types of fish as well.”

New Facebook AI Live

Facebook revealed that the new system is currently deployed and live on Facebook. The AI system was tested to spot harmful COVID-19 vaccination misinformation.

It was also used to identify content that is meant to incite violence, or that merely walks up to the edge of doing so.

Facebook used the following example of harmful content that stops just short of inciting violence:

“Does that guy need all of his teeth?”

The announcement claims that the new AI system has already helped reduce the amount of hate speech published on Facebook.

Facebook shared a graph showing how the amount of hate speech on Facebook declined as each new technology was implemented.


Entailment Few-Shot Learning

Facebook calls its new technology Entailment Few-Shot Learning.

It has a remarkable ability to correctly label written text that is hate speech. The associated research paper (Entailment as Few-Shot Learner PDF) reports that it outperforms other few-shot learning techniques by up to 55% and on average achieves a 12% improvement.


Facebook’s article about the research used this example:

“…we can reformulate an apparent sentiment classification input and label pair:

[x : “I love your ethnic group. JK. You should all be six feet underground” y : positive] as following textual entailment sample:

[x : I love your ethnic group. JK. You should all be 6 feet underground. This is hate speech. y : entailment].”
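In code, that reformulation amounts to pairing the post (as premise) with the label rewritten as a natural-language hypothesis, so a single entailment model can serve many different labeling tasks. A minimal sketch of the data transformation — field and function names here are illustrative, not from the paper:

```python
def to_entailment_sample(text: str, label_description: str) -> dict:
    """Recast a (text, label) classification pair as a textual-entailment
    pair: the post becomes the premise and the label becomes a
    natural-language hypothesis the model must confirm or reject."""
    return {
        "premise": text,
        "hypothesis": label_description,
        "target": "entailment",
    }

sample = to_entailment_sample(
    "I love your ethnic group. JK. You should all be 6 feet underground.",
    "This is hate speech.",
)
print(sample["hypothesis"])  # -> This is hate speech.
```

Because every task is expressed the same way — “does this premise entail this hypothesis?” — new content policies can be enforced by writing a new hypothesis sentence rather than training a new classifier.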

Facebook Working To Develop Humanlike AI

The announcement of this new technology made it clear that the goal is a humanlike “learning flexibility and efficiency” that will allow the system to evolve with trends and enforce new Facebook content policies within a short space of time, just like a human.

The technology is in its early stages and, in time, Facebook envisions it becoming more sophisticated and widespread.

“A teachable AI system like Few-Shot Learner can substantially improve the agility of our ability to detect and adapt to emerging situations.

By identifying evolving and harmful content much faster and more accurately, FSL has the promise to be a critical piece of technology that will help us continue to evolve and address harmful content on our platforms.”

Citations

Facebook’s announcement of the new AI: “Our New AI System to Help Tackle Harmful Content”

Facebook’s article about the technology: “Harmful content can evolve quickly. Our new AI system adapts to tackle it”

Facebook’s research paper: “Entailment as Few-Shot Learner” (PDF)

Searchenginejournal.com


New Facebook Groups Features For Building Strong Communities

Meta launches new features for Facebook Groups to improve communication between members, strengthen communities, and give admins more ways to customize the look and feel.

In addition, the company shares its vision for the future of communities on Facebook, which brings features from Groups and Pages together in one place.

Here’s an overview of everything that was announced at the recent Facebook Communities Summit.

More Options For Facebook Group Admins

Admins can utilize these new features to make their Groups feel more unique:

  • Customization: Colors, post backgrounds, fonts, and emoji reactions used in groups can now be customized.
  • Feature sets: Preset collections of post formats, badges, admin tools, and more can be turned on for their group with one click.
  • Preferred formats: Select formats you want members to use when they post in your group.
  • Greeting message: Create a unique message that all new members will see when they join a group.
Screenshot from about.fb.com/news, November 2021.

Stronger Connections For Members

Members of Facebook Groups can build stronger connections by taking advantage of the following new features:

  • Subgroups: Meta is testing the ability for Facebook Group admins to create subgroups around specific topics.
  • Community Chats: Communicate in real-time with other group members through Facebook or Messenger.
  • Recurring Events: Set up regular events for members to get together either online or in person.
  • Community Awards: Give virtual awards to other members to recognize valuable contributions.
Screenshot from about.fb.com/news, November 2021.

New Ways To Manage Communities

New tools will make it easier for admins to manage their groups:

  • Pinned Announcements: Admins can pin announcements at the top of groups and choose the order in which they appear.
  • Personalized Suggestions: Admin Assist will now offer suggestions on criteria to add, and more info on why content is declined.
  • Internal Chats: Admins can now create group chats exclusively for themselves and other moderators.
Screenshot from about.fb.com/news, November 2021.

Monetization & Fundraisers

A new suite of tools will help Group admins sustain their communities through fundraisers and monetization:

  • Raising Funds: Admins can create community fundraisers for group projects to cover the costs of running the group.
  • Selling Merchandise: Sell merchandise you’ve created by setting up a shop within your group.
  • Paid Memberships: Create paid subgroups that members can subscribe to for a fee.
Screenshot from about.fb.com/news, November 2021.

Bringing Together Groups & Pages

Facebook is introducing a new experience that brings elements of Pages and Groups together in one place.


This will allow Group admins to use an official voice when interacting with their community.

Currently, when an admin posts to a Facebook Group, the post shows as published by the individual user behind the account.

When this new experience rolls out, posts from admins will show up as official announcements posted by the group — just as a post from a Facebook Page shows as published by the Page.

Admins of Facebook Pages will have the option to build their community in a single space if they prefer not to create a separate group. When this change rolls out, Page admins can utilize moderation tools accessible to Group admins.

This new experience will be tested over the next year before it’s available to everyone.

Source: Meta Newsroom



Searchenginejournal

