

Meta Files New Lawsuit Over the Sale of Fake Customer Reviews on Facebook




Meta has filed a new lawsuit in California over the sale of fake reviews on Facebook, targeting a specific fake review seller who, it says, sought to manipulate Meta’s systems for the benefit of his customers.

In the new suit, Meta says that Chad Taylor Cowan, who had been operating as ‘Customer Feedback Score Solutions’, provided fake reviews and feedback for businesses, with the intention of artificially increasing their Facebook Customer Feedback Score.

As explained by Meta:

“Meta analyzes feedback on an ongoing basis to understand people’s experiences on our technologies. As a part of this work, some people receive surveys after clicking on ads to help understand whether the quality of the product they purchased met their expectations, the shipping was timely, and to learn more about their customer service experience. This survey data, along with other information, informs a business’ Customer Feedback Score.”

Businesses that receive a significant amount of negative feedback can then face enforcement measures, including ad restrictions, financial penalties or disabling accounts.

Customer Feedback Score Solutions aimed to manipulate this process to benefit its clients.

“Cowan used a network of fraudulent and hired Facebook user accounts to provide fake customer reviews to artificially increase Customer Feedback Scores, drown out and minimize negative reviews, and avoid our enforcement. These actions create poor experiences for people who see these ads, deceptively influencing and misleading our community. This is also a direct violation of Meta’s Terms, Advertising and Page Policies, as well as California law.”


It’s the first time that Meta has targeted fake review sellers specifically, though it has been steadily increasing its overall legal enforcement efforts over the past few years. Meta has also launched various lawsuits against companies selling Likes and followers, which is in a similar vein, but reviews haven’t been a key focus – though earlier this year, Amazon launched legal action against two companies that had allegedly acted as fake-review brokers on its platform.


That may have opened the door for more enforcement action on this front, with the Amazon cases potentially serving as legal precedent. And as Meta looks to introduce more eCommerce and brand recommendation tools into its apps, it makes sense for it to take action now to address this element.

The case will likely take some time, but it’ll be interesting to see what legal decisions come from these new actions against those selling fake online reviews.



Social Platforms Could Face Legal Action for Addictive Algorithms Under Proposed California Law




In what could be a significant step towards protecting children from potential harms online, the California legislature is currently debating an amended bill that would enable parents, as well as the state Attorney General, to sue social platforms for algorithms and systems that addict children to their apps.

As reported by The Wall Street Journal:

Social-media companies such as Facebook parent Meta Platforms could be sued by government attorneys in California for features that allegedly harm children through addiction under a first-in-the-nation bill that faces an important vote in the state Senate here Tuesday. The measure would permit the state attorney general, local district attorneys and the city attorneys of California’s four largest cities to sue social-media companies including Meta – which also owns Instagram – as well as TikTok, and Snapchat, under the state’s law governing unfair business practices.

If passed, that could add a range of new complications for social media platforms operating within the state, and could restrict the way that algorithmic amplification is applied for users under a certain age.

The ‘Social Media Platform Duty to Children Act’ was initially proposed early last month, but has since been amended to improve its chances of securing passage through the legislative process. The bill includes a range of ‘safe harbor’ clauses that would exempt a social media company from liability if it removes the addictive features of its platform within a specified time frame.

What, exactly, those ‘addictive’ features are isn’t specified, but the bill essentially takes aim at social platform algorithms, which are focused on keeping users active in each app for as long as possible, responding to each person’s individual usage behaviors and hooking them in with more of what they react to in their ever-refreshing content feeds.


Which, of course, can have negative impacts. As we’ve repeatedly seen play out across social media, the problem with algorithmic amplification is that it makes no judgment about the actual substance of the material it amplifies. The system simply responds to what gets people to click and comment – and what gets people to click and comment more than anything else? Emotionally charged content, and posts that take a divisive, partisan viewpoint, with updates that spark anger or laughter being among the most likely to trigger the strongest response.


That’s part of the reason for increased societal division overall, because online systems are built to maximize engagement, which essentially incentivizes more divisive takes and stances in order to maximize shares and reach.

That’s one major concern with algorithmic amplification. Another, as noted in this bill, is that social platforms are getting increasingly good at understanding what will keep you scrolling, with TikTok’s ‘For You’ feed, in particular, almost perfecting the art of drawing users in and keeping them in the app for hours at a time.

Indeed, TikTok’s own data shows that users spend around 90 minutes per day in the app, on average, with younger users being particularly compelled by its never-ending stream of short clips. That’s great for TikTok, and underlines its nous in building systems that align with user interests. But the question essentially being posed by this bill is ‘is this actually good for youngsters online?’

Already, some nations have sought to implement curbs on young people’s internet usage, with China imposing restrictions on gaming and live-streaming, including a recent ban on people under the age of 16 watching live-streams after 10pm.


The Italian Parliament has implemented laws to better protect minors from cyberbullying, while evolving EU privacy regulations have seen the implementation of a range of new protections for young people, and the use of their data online, which has changed the way that digital platforms operate.

Even in the US, a bill proposed in Minnesota earlier this year would have banned the use of algorithms entirely in recommending content to anyone under age 18. 

And given the range of investigations showing how social platform usage can be harmful for young users, it makes sense for more legislators to pursue regulatory action here. The technical complexities, however, may be difficult to litigate, in terms of proving a definitive connection between algorithmic amplification and addiction.

But it’s an important step, one that would undoubtedly make the platforms reconsider their systems in this regard, and could lead to better outcomes for all users.


