

Indian Government Seeks to Exert New Controls Over Online Speech

The Indian Government is taking more overt action to control what can and cannot be discussed online in the nation, with proposed new rules that would enable the government itself to dictate what’s true and what’s not, and force social platforms to remove false claims or risk fines or bans.

Indian authorities have been pushing social platforms to enforce their agendas for some time, with the government repeatedly calling on social apps to remove anti-government sentiment, in order to manipulate public opinion on several key fronts.

Which clearly oversteps the bounds of content moderation. But at the same time, the debate around what is and is not acceptable on this front continues to rage, with free speech proponents calling for a more hands-off approach, and the platforms, in many cases, calling for external regulation to relieve them of such decisions.

Because here’s the thing – at some level, everyone acknowledges that there needs to be a barrier of content moderation conducted by all social media platforms, in order to weed out criminal or otherwise harmful content. The secondary element is the debate – what constitutes ‘harmful’ in this respect, and what obligation do social platforms have to adhere to, say, government requests for the removal of ‘harmful’ posts, as they relate to government initiatives and/or other elements?

This is the key point that Elon Musk has repeatedly raised in his brief time at Twitter thus far. Musk’s ‘Twitter Files’ expose, for example, purports to uncover government meddling, in order to control the messaging that’s being distributed to users via social apps.

But thus far, those revelations have only really shown that Twitter worked with government officials, from all sides of the political spectrum, in order to police illegal content, and/or content that could have impeded, for example, the rollout of the COVID vaccine, at a time when the expanded take-up of vaccinations was our only way out of the endless lockdowns and impacts.


At the time, government officials called on Twitter, and other social apps, to remove posts that questioned the safety of vaccines, or otherwise raised doubts that could stop people from getting the shot. Which opponents of vaccine mandates now say was in violation of their free speech – but again, in an evolving situation, these teams made the best decision they could at the time. Which may have been wrong, and could, inadvertently, have led to some incorrect suspensions or actions taken. But again, given the assessments before them, moderation teams are tasked with increasingly difficult decisions that could impact millions of people.

In this context, the principles those teams adhered to are correct, and criticizing that process in retrospect is folly – but again, the core consideration is that, in some cases, there will always be a need for some level of moderation that not everybody is going to agree with.

Which is the truly difficult thing.

Meta, for example, has for years been calling for government oversight and regulation of social apps, in order to take moderation decisions about particularly sensitive topics out of its hands, while also ensuring that all platforms adhere to the same standards, lessening the censorship burden on individual platforms and chiefs.

But securing agreement on such, from all governments, is virtually impossible, and while Meta’s called on the UN to implement wide-reaching rules, even that wouldn’t cover all regions, nor see all jurisdictions adhering to the same principles.


Because they don’t. Each nation has different levels of tolerance for different things, and none of them want to see their citizens held to the same standard as the other. They manage their own laws and rules independently, and any over-arching regulations would be too much – which is why it’s virtually impossible to secure consensus on what content should and should not be allowed, on a global basis.

And then, once there’s a level of control over such, there are also authoritarian governments, like India’s, which see an opportunity to exert even more control, in order to quell dissent and criticism. Which, again, is a step too far – but then again, how is that any different from blunting anti-vaccine messages in other regions, or seeking to suppress certain stories or angles?

There are no easy answers, which is why this remains a key point of contention, and will be so for some time yet. Elon Musk is trying to shake things up in this respect, by subverting what he perceived as mainstream media bias – but within that, there also needs to be limits.

Citizen journalism, which Musk touts as a key avenue for truth, can be manipulated even more easily. And if you’re going to accept that one conspiracy is true, then you also need to entertain the others, which can lead to even more harmful outcomes when there’s no filter for truth or risk.

Ideally, there could be a universal agreement on content standards, and moderation rulings. But it’s hard to see how that comes about.


And while Musk would prefer to remove all moderation controls, and let the people decide, we’ve already seen where that path leads, and the harm that it can cause through manipulation of the truth.

But for some prominent voices, that seems to be what they want.

In Brazil, for example, ousted President Jair Bolsonaro recently sparked riots by questioning the results of the latest election, in which he lost by a significant margin. There’s no evidence to support Bolsonaro’s claims, he simply says that it can’t be true – and millions of people, with limited questioning, believe it.

The same as Trump – despite all evidence to the contrary, Trump still claims that the 2020 election was ‘stolen’ via widespread voter fraud and cheating.

If you can make such claims, with no evidence, and spread them to a wide breadth of people via social apps, and they can be accepted as fact by that audience, that’s a powerful means to control whatever narrative you choose.

Musk, in particular, seems to be fascinated by this idea, and has admitted that, in the past, he’s announced major projects that will likely never work in order to manipulate government action.

Maybe, Musk’s whole ‘free speech’ push is simply another means of narrative control, enabling him to bend conditions in his favor, by simply saying whatever he wants, with less risk of being fact-checked or debunked.

Because those that would question such are liars, and he is the truth.

It’s the traditional authoritarian playbook, and without universally agreed terms, there’s no way to know who to trust.

Main image by Avinash Bhat/Flickr




Elon Musk, Twitter Face Brand Safety Concerns After Executives Depart

Elon Musk, CEO of Tesla, speaks with CNBC on May 16th, 2023.

David A. Grogan | CNBC

The sudden departure of Twitter executives tasked with content moderation and brand safety has left the company more vulnerable than ever to hate speech.

On Thursday, Twitter’s vice president of trust and safety, Ella Irwin, resigned from the company. Following Irwin’s departure, the company’s head of brand safety and ad quality, A.J. Brown, reportedly left, as did Maie Aiyed, a program manager who worked on brand-safety partnerships.

It’s been just over seven months since Elon Musk closed his $44 billion purchase of Twitter, an investment that has so far been a giant money loser. Musk has dramatically downsized the company’s workforce and rolled back policies that restricted what kinds of content could circulate. In response, numerous brands suspended or decreased their advertising spending, as several civil rights groups have documented.

Twitter, under Musk, is the fourth most-hated brand in the U.S., according to the 2023 Axios Harris reputation rankings.

The controversy surrounding Musk’s control of Twitter continues to build.

This week, Musk said that it’s not against Twitter’s terms of service to misgender trans people on the platform, arguing that doing so is merely “rude” but not illegal. LGBTQ+ advocates and researchers dispute his position, claiming it invites bullying of trans people. On Friday, Musk encouraged his 141.8 million followers to watch a video, posted to Twitter, that was deemed transphobic by these groups.

Numerous LGBTQ organizations expressed dismay to NBC News over Musk’s decision, saying the company’s new policies will lead to an uptick in anti-trans hate speech and online abuse.

Although Musk recently hired former NBC Universal global advertising chief Linda Yaccarino to succeed him as CEO, it’s unclear how the new boss will assuage advertisers’ concerns regarding racist, antisemitic, transphobic and homophobic content in light of the recent departures and Musk’s ongoing role as majority owner and technology chief.


Even before the latest high-profile exits, Musk had been reducing the number of workers tasked with safety and content moderation as part of the company’s widespread layoffs. He eliminated the entire artificial intelligence ethics team, which was responsible for ensuring that harmful content wasn’t being algorithmically recommended to users.

Musk, who is also the CEO of Tesla and SpaceX, has recently played down concerns about the prevalence of hate speech on Twitter. He claimed during a Wall Street Journal event that since he took over the company in October, hate speech on the platform has declined, and that Twitter has slashed “spam, scams and bots” by “at least 90%.”

Experts and ad industry insiders told CNBC that there’s no evidence to support those claims. Some say Twitter is actively impeding independent researchers who are attempting to track such metrics.

Twitter didn’t provide a comment for this story.

The state of hate speech on Twitter

In a paper published in April that will be presented at the upcoming International Conference on Web and Social Media in Cyprus, researchers from Oregon State University, the University of Southern California and other institutions showed that hate speech has increased since Musk bought Twitter.

The authors wrote that accounts known for posts containing hateful content and slurs targeting Black, Asian, LGBTQ and other groups increased such tweeting “dramatically following Musk’s takeover” and show no signs of slowing down. They found that Twitter hasn’t made progress on bots, which have remained as prevalent and active on the social media platform as they were prior to Musk’s tenure.

Musk previously indicated that Twitter’s recommendation algorithms surface less offensive content to people who don’t want to see it.

Keith Burghardt, one of the authors of the paper and a computer scientist at the University of Southern California’s Information Sciences Institute, told CNBC that the deluge of hate speech and other explicit content correlates to the reduction of people working on trust and safety issues and the relaxed content-moderation policies.

Musk also said at the WSJ event that “most advertisers” had come back to Twitter.

Louis Jones, a longtime media and advertising executive who now works at the Brand Safety Institute, said it’s not clear how many advertisers have resumed spending but that “many advertisers remain on pause, as Twitter has limited reach compared to some other platforms.”

Jones said many advertisers are waiting to see how levels of “toxicity” and hate speech on Twitter change as the site appears to slant toward more right-wing users and as the U.S. election season draws near. He said one big challenge for brands is that Musk and Twitter haven’t made clear what they count in their measurements assessing hate speech, spam, scams and bots.

Researchers are calling on the billionaire Twitter owner to provide data to back up his recent claims.

“More data is critical to really understand whether there is a continuous decrease in either hate speech or bots,” Burghardt said. “That again emphasizes the need for greater transparency and for academics to have freely available data.”

Show us the data

Getting that data is becoming harder.

Twitter recently started charging companies for access to its application programming interface (API), which allows them to incorporate and analyze Twitter data. The lowest paid tier costs $42,000 for 50 million tweets.
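At that price, the unit economics for researchers are easy to work out. A quick sketch (the tier figures are the ones reported above; the helper function and sample sizes are our own, for illustration):

```python
import math

def api_cost(tier_price_usd: float, tweet_cap: int, tweets_needed: int) -> float:
    """Estimate API spend for a research sample, assuming full tiers must be purchased."""
    tiers = math.ceil(tweets_needed / tweet_cap)  # number of tiers needed to cover the sample
    return tiers * tier_price_usd

# Lowest paid tier, as reported: $42,000 for 50 million tweets
PRICE, CAP = 42_000, 50_000_000

per_tweet = PRICE / CAP                        # roughly $0.00084 per tweet
sample_cost = api_cost(PRICE, CAP, 1_700_000)  # even a 1.7M-tweet sample requires the full tier
```

Which is why, as Ahmed notes, researchers are turning to scraping and other routes instead.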

Imran Ahmed, CEO of the Center for Countering Digital Hate nonprofit, said that because researchers now have “to pay a fortune” to access the API, they’re having to rely on other potential routes to the data.

“Twitter under Elon Musk has been more opaque,” Ahmed said.

He added that Twitter’s search function is less effective than in the past and that view counts, as seen on certain tweets, can suddenly change, making them unreliable to use.

“We no longer have any confidence in the accuracy of the data,” Ahmed said.

The CCDH analyzed a series of tweets from the beginning of 2022 through Feb. 28, 2023. It released a report in March analyzing over 1.7 million tweets collected using a data-scraping tool and Twitter’s search function and discovered that tweets mentioning the grooming narrative have risen 119% since Musk took over.

That refers to “the false and hateful lie” that the LGBTQ+ community grooms children, according to the report. The CCDH report found that a small number of popular Twitter accounts like Libs of TikTok and Gays Against Groomers have been driving the “hateful ‘grooming’ narrative online.”

The Simon Wiesenthal Center, a Jewish human rights group, continues to find antisemitic posts on Twitter. The group recently conducted its 2023 study of digital terrorism and hate on social platforms and graded Twitter a D-, putting it on par with Russia’s VK as the worst in the world for large social networks.

Rabbi Abraham Cooper, associate dean and director of global social action agenda at the center, called on Musk to meet with him to discuss the rise of hate speech on Twitter. He said he has yet to receive a response.

“They need to look at it seriously,” Cooper said. If they don’t, he said, lawmakers are going to be called upon to “do something about it.”





WhatsApp Launches New ‘Security Hub’ to Highlight User Control Options

WhatsApp has launched a new Security Hub mini-site, which provides a complete overview of the various safety and security tools available in the app, to help you manage your WhatsApp experience.

The security hub includes an overview of WhatsApp’s default safety elements, along with its various user control options to enhance your messaging security.

WhatsApp Security Hub

There are also tips on how to avoid spammers and scammers, and unwanted attention, as well as links to the platform’s various usage policies.

WhatsApp is known and trusted for its enhanced security measures, which ensure that your private chats remain that way, and it’s continually working to improve its tools on this front.

The WhatsApp team also continues to oppose legislation that seeks to access user chats via back doors, or other means. Various governments have raised concerns that encrypted chat apps protect criminal activity, and should therefore be accessible by authorities – but WhatsApp has remained steadfast in its dedication to protection on this front.
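WhatsApp’s end-to-end encryption is built on the Signal protocol, whose key exchange rests on Diffie-Hellman key agreement: both parties derive the same secret without ever transmitting it, which is why a server (or a government) in the middle can’t read messages. A toy sketch of the core idea, with deliberately tiny, insecure numbers (real implementations use elliptic curves such as Curve25519):

```python
import secrets

# Public parameters, agreed in the open (toy values only)
p, g = 23, 5

# Each party keeps a private exponent secret...
alice_priv = secrets.randbelow(p - 2) + 1
bob_priv = secrets.randbelow(p - 2) + 1

# ...and shares only g^priv mod p over the network
alice_pub = pow(g, alice_priv, p)
bob_pub = pow(g, bob_priv, p)

# Both sides independently derive the same shared secret
alice_shared = pow(bob_pub, alice_priv, p)
bob_shared = pow(alice_pub, bob_priv, p)
assert alice_shared == bob_shared  # this value seeds the message-encryption keys
```

A backdoor would require weakening exactly this step, which is the basis of WhatsApp’s objection.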

As per WhatsApp:

"Around the world, businesses, individuals and governments face persistent threats from online fraud, scams and data theft. Malicious actors and hostile states routinely challenge the security of our critical infrastructure. End-to-end encryption is one of the strongest possible defenses against these threats, and as vital institutions become ever more dependent on internet technologies to conduct core operations, the stakes have never been higher."

It’s with this in mind that WhatsApp’s new Security Hub provides even more guidance for individual users, which could give you more peace of mind, while also protecting your chats.


If you’re wondering about the limits of WhatsApp’s systems, and what you can do to maximize security, it’s worth checking out.




‘Wave’ of Litigation Expected as Schools Fight Social Media Companies

About 40 school districts and counting across the country are suing social media companies over claims that their apps are addictive, damaging to students’ mental health, and causing adverse impacts on schools and other government resources.

Many of these lawsuits, which were originally filed in a variety of court jurisdictions, were consolidated into one 281-page multidistrict litigation claim filed March 10 in the U.S. District Court for the Northern District of California. Plaintiffs in the case include school districts, individuals and local and state governments. In total, there are about 235 plaintiffs.

The product liability complaint seeks unspecified monetary damages, as well as injunctive relief ordering each defendant to remedy certain design features on their platforms and provide warnings to youth and parents that its products are “addictive and pose a clear and present danger to unsuspecting minors.” 

Attorneys representing plaintiff school districts said this master complaint allows districts to share legal resources for similar public nuisance claims against social media companies in an attempt to recoup money spent addressing the youth mental health crisis.

Individual district lawsuits describe actions taken by school systems to address student mental well-being, such as hiring more counselors, using universal screeners and providing lessons on resiliency building. In its lawsuit, California’s San Mateo County Board of Education also explains how it had to reallocate funding to pay staff to address bullying and fighting, hire more security staff, and investigate vandalism.

Schools are on the front lines of this crisis, said Lexi Hazam, an attorney with Lieff, Cabraser, Heimann & Bernstein and co-lead counsel for the plaintiffs’ consolidated complaint. 

Districts “are often having to divert resources and time and effort from their educational mission in order to address the mental health crisis among their students,” said Hazam. Students’ mental health struggles are caused largely by social media design features that “deliberately set out to addict” youth, she said. 

The design features, the multidistrict litigation said, “manipulate dopamine delivery to intensify use” and use “trophies” to reward extreme usage.


But major litigation like this is likely to take many years to resolve, according to legal experts. The lawsuit is in its early stages, and the court will soon consider motions to dismiss. If the case proceeds, it will move into the discovery phase, where opposing parties can request documents and information that may not already be available.

One legal expert said getting involved in the case may actually make school districts vulnerable to legal action by parents who cast blame on them for not doing more to support students’ mental well-being. The case also discounts the positive aspects of teens’ social media use, said Eric Goldman, law professor and co-director of the High Tech Law Institute at Santa Clara University School of Law. 

“Here’s the reason why not every school district is going to sign up — first, because I think at least some school districts realize that social media may not be the problem. In fact, it may be part of the solution,” Goldman said.

The more likely reason why districts shouldn’t participate, Goldman said, is because schools would be “admitting to their parents that they aren’t doing a good job to manage the mental health needs of their student population.” 

Reducing risks

The lawsuit — known as the Social Media Adolescent Addiction/Personal Injury Products Liability Litigation — was filed against Meta Platforms Inc., which operates Facebook and Instagram, as well as the companies behind Snapchat, TikTok and YouTube. 

There’s no cost to school systems to join the litigation since the plaintiffs’ law firms are working on contingency, meaning they’re paid only if they prevail, according to several plaintiffs attorneys. 

Per the lawsuit, the social media platforms exploit children by having “an algorithmically-generated, endless feed to keep users scrolling.” 

The result, the complaint said, is that youth are struggling with anxiety, depression, addiction, eating disorders, self-harm and suicide risk. Individual school district cases folded into this litigation also claim the social media companies’ platforms have contributed to school security threats and vandalism.


“Defendants’ choices have generated extraordinary corporate profits — and yielded immense tragedy,” the master complaint declares. 


The lawsuit notes the widespread use of social media among teens, as well as details troubling statistics showing increases in youth suicide risk, anxiety and persistent sadness. 

