
Indian Government Seeks to Exert New Controls Over Online Speech


The Indian Government is taking more overt action to control what can and cannot be discussed online in the nation, with proposed new rules that would enable the government itself to dictate what’s true and what’s not, and force social platforms to remove false claims or risk fines or bans.

Indian authorities have been pushing social platforms to enforce the government’s agenda for some time, repeatedly calling on social apps to remove anti-government sentiment in order to shape public opinion on several key fronts.

Which clearly oversteps the bounds of content moderation. But at the same time, the debate around what is and is not acceptable on this front continues to rage, with free speech proponents calling for a more hands-off approach, and the platforms, in many cases, calling for external regulation to take such decisions out of their hands.

Because here’s the thing – at some level, everyone acknowledges that there needs to be a baseline of content moderation on all social media platforms, in order to weed out criminal or otherwise harmful content. The real debate is over what follows from that – what constitutes ‘harmful’ in this respect, and what obligation do social platforms have to comply with, say, government requests to remove ‘harmful’ posts when those requests relate to government initiatives or other interests?

This is the key point that Elon Musk has repeatedly raised in his brief time at Twitter. Musk’s ‘Twitter Files’ exposé, for example, purports to uncover government meddling designed to control the messaging that’s being distributed to users via social apps.

But thus far, those revelations have only really shown that Twitter worked with government officials, from all sides of the political spectrum, to police illegal content, and/or content that could have impeded, for example, the rollout of the COVID vaccine, at a time when expanded take-up of vaccinations was the only way out of endless lockdowns and their impacts.

At the time, government officials called on Twitter, and other social apps, to remove posts that questioned the safety of vaccines, or otherwise raised doubts that could stop people from getting the shot. Opponents of vaccine mandates now say this violated their free speech – but in an evolving situation, those teams made the best decisions they could at the time. Those decisions may have been wrong, and could, inadvertently, have led to some incorrect suspensions or actions. But given the assessments before them, moderation teams are tasked with increasingly difficult calls that could impact millions of people.

In this context, the principles those teams adhered to were sound, and criticizing that process in retrospect is folly – but the core consideration is that, in some cases, there will always be a need for some level of moderation that not everybody is going to agree with.

Which is the truly difficult thing.

Meta, for example, has for years been calling for government oversight and regulation of social apps, in order to take moderation decisions about particularly sensitive topics out of its hands, while also ensuring that all platforms adhere to the same standards, lessening the censorship burden on individual platforms and chiefs.

But securing agreement on such, from all governments, is virtually impossible, and while Meta’s called on the UN to implement wide-reaching rules, even that wouldn’t cover all regions, nor would it see all jurisdictions adhere to the same principles.

Because they don’t. Each nation has a different level of tolerance for different things, and none of them want to see their citizens held to another nation’s standards. They manage their own laws and rules independently, and any over-arching regulations would be seen as an overreach – which is why it’s virtually impossible to secure consensus on what content should and should not be allowed on a global basis.

And then, once you have a level of control over such, there are also authoritarian governments, like in India, which see an opportunity to exert even more control, in order to quell dissent and criticism. Which, again, is a step too far – but then again, how is that any different from blunting anti-vaccine messages in other regions, or seeking to suppress certain stories or angles?

There are no easy answers, which is why this remains a key point of contention, and will be for some time yet. Elon Musk is trying to shake things up in this respect, by subverting what he perceives as mainstream media bias – but within that, there also need to be limits.

Citizen journalism, which Musk is touting as a key avenue for truth, can be even more easily manipulated. And if you’re going to accept that one conspiracy is true, then you also need to entertain the others, which can lead to even more harmful outcomes when there’s no filter of truth or risk.

Ideally, there could be a universal agreement on content standards, and moderation rulings. But it’s hard to see how that comes about.

And while Musk would prefer to remove all moderation controls, and let the people decide, we’ve already seen where that path leads, and the harm that it can cause through manipulation of the truth.

But for some prominent voices, that seems to be what they want.

In Brazil, for example, ousted President Jair Bolsonaro recently sparked riots by questioning the results of the latest election, in which he lost by a significant margin. There’s no evidence to support Bolsonaro’s claims, he simply says that it can’t be true – and millions of people, with limited questioning, believe it.

The same as Trump – despite all evidence to the contrary, Trump still claims that the 2020 election was ‘stolen’ via widespread voter fraud and cheating.

If you can make such claims, with no evidence, spread them to a broad audience via social apps, and have them accepted as fact by that audience, that’s a powerful means to control whatever narrative you choose.

Musk, in particular, seems to be fascinated by this idea, and has admitted that, in the past, he’s announced major projects that will likely never work in order to manipulate government action.

Maybe Musk’s whole ‘free speech’ push is simply another means of narrative control, enabling him to bend conditions in his favor by simply saying whatever he wants, with less risk of being fact-checked or debunked.

Because those that would question such are liars, and he is the truth.

It’s the traditional authoritarian playbook, and without universally agreed terms, there’s no way to know who to trust.

Main image by Avinash Bhat/Flickr


The Most Visited Websites in the World – 2023 Edition [Infographic]


Google remains the most-visited website in the world, while Facebook is still the most frequented social platform, based on web traffic. Well, actually, YouTube is, but YouTube’s only a partial social app, right?

The findings are displayed in this new visualization from Visual Capitalist, which uses SimilarWeb data to show the most visited websites in bubble chart format, highlighting the variance in traffic.

As you can see, following Facebook, Twitter and Instagram are the next most visited social platforms, which is likely in line with what most would expect – though the low numbers for TikTok probably stand out, given its dominance of the modern media zeitgeist.

But there is a reason for that – this data is based on website visits, not app usage, so platforms like TikTok and Snapchat, which are primarily focused on the in-app experience, won’t fare as well in this particular overview.

In that sense, it’s interesting to see which social platforms are engaging audiences via their desktop offerings.

You can check out the full overview below, and you can read Visual Capitalist’s full explainer here.


Cheeky branding wins (and missteps)



Branding and rebranding are getting more fun. Here we look at some of the cheekiest brands that have caught our eye – for the right and wrong reasons.




Google Outlines Ongoing Efforts to Combat China-Based Influence Operations Targeting Social Apps


Over the past year, Google has repeatedly noted that a China-based group has been looking to use YouTube, in particular, to influence western audiences, by building various channels in the app, then seeding them with pro-China content.

There’s limited info available on the full origins or intentions of the group, but today, Google published a new overview of its ongoing efforts to combat the operation, which it has dubbed DRAGONBRIDGE.

As explained by Google:

In 2022, Google disrupted over 50,000 instances of DRAGONBRIDGE activity across YouTube, Blogger, and AdSense, reflecting our continued focus on this actor and success in scaling our detection efforts across Google products. We have terminated over 100,000 DRAGONBRIDGE accounts in the IO network’s lifetime.

As you can see in this chart, DRAGONBRIDGE is by far the most prolific source of coordinated information operations that Google has detected over the past year, while Google also notes that it’s been able to disrupt most of the project’s attempted influence, by snuffing out its content before it gets seen.

[Chart: DRAGONBRIDGE activity detected by Google]

The scale is worth noting too – as Google says, DRAGONBRIDGE has created more than 100,000 accounts, which include tens of thousands of YouTube channels. Not individual videos, but entire channels in the app – a huge amount of work, and content, for this group to produce.

That can’t be cheap, or easy to keep running. So they must be doing it for a reason.

The broader implication, which has been noted by various other publications and analysts, is that DRAGONBRIDGE is potentially being supported by the Chinese Government, as part of a broader effort to influence foreign policy approaches via social media apps. 

Which, at this kind of scale, is a concern. DRAGONBRIDGE has also targeted Facebook and Twitter at different times, and it could be that its efforts on those platforms are reaching similar activity levels, and simply haven’t been detected as yet.

Which then also relates to TikTok, a Chinese-owned app that now has massive influence over younger audiences in western nations. If programs like this are already in effect, it stands to reason that TikTok is also a key candidate for amplifying the same messaging, which remains a key concern among regulators and officials in many nations.

The US Government is reportedly weighing a full TikTok ban, and if that happens, you can bet that many other nations will follow suit. Many government organizations are also banning TikTok on official devices, based on advice from security experts, and with programs like DRAGONBRIDGE also running, it does seem that China-based groups are actively operating influence and manipulation programs in foreign nations.

Which seems like a significant issue, and while Google is seemingly catching most of these channels before they have an impact, it also seems likely that this is only one element of a larger push.

Hopefully, through collective action, the impact of such operations can be limited – but for TikTok, which remains under Chinese ownership, it’s another element that could invite further questions and scrutiny.
