TikTok Faces New Legal Challenge Over its Tracking of Underage User Data

Despite facing an array of challenges – which, at times, threatened the app’s very existence – TikTok has continued to grow in 2020, and looks set to become an even more important and influential platform in the year ahead.

But while TikTok has seemingly avoided a ban in the US (for now), the app remains under scrutiny, due to concerns about its impact on young users, and international data security considerations, given its Chinese ownership. 

On the first point, TikTok is set to once again be examined over the ways in which it tracks and uses data from underage users as part of newly launched legal proceedings in the UK. 

As per Sky News UK:

“A 12-year-old girl from London, who cannot be identified, plans to bring a damages claim against six firms said to be responsible for TikTok and its “predecessor” app Musical.ly for “loss of control of personal data”. According to a High Court ruling published on Wednesday, the action alleges the firms have “misused the claimant’s private information and processed the claimant’s personal data” in breach of EU and UK data protection laws.”

The case is the latest of many that have been brought against the app over the same concern:

  • In October, Pakistan’s Telecommunication Authority temporarily banned TikTok in the nation due to “immoral and indecent content” in the app, which was being made available to young users. TikTok implemented changes and the ban was lifted shortly after. 
  • In August, French officials announced a new investigation into TikTok’s data-gathering practices, primarily due to concerns around its measures to protect younger users.
  • In July 2019, the UK Information Commissioner launched an investigation into how TikTok handles the personal data of its young users, and whether it prioritizes the safety of children on its network. 
  • In February 2019, the FTC fined TikTok a record $5.7 million for illegally collecting the names, email addresses, pictures and locations of kids under age 13.  

TikTok’s measures to protect younger users are rightfully a key focus – according to internal data obtained by The New York Times, more than a third of the app’s daily users in the US are under 14 years of age. That’s despite the full TikTok experience technically only being made available to users aged 13 and over, though the app is available to people under that age threshold in what TikTok calls a “limited app experience”:

“TikTok for Younger Users introduces additional safety and privacy protections designed specifically for an audience that is under 13 years old. TikTok for Younger Users allows us to split users into age-appropriate TikTok environments, in line with FTC guidance for mixed audience apps. Users enter the appropriate app experience after passing through an age-gate when they register for a TikTok account.”

So the only thing stopping younger users from logging on and accessing all TikTok has to offer is a fairly loose age-gate, which no doubt many are subverting. That could also mean that even more than a third of all US TikTok users are under 14, as its official data would be based on self-registered birth dates.

As such, the concerns around TikTok’s child protection efforts are valid, and it makes sense for authorities to be scrutinizing and challenging the app. That only becomes more pressing as it continues to grow and its influence increases. And while TikTok’s overall focus is on fun, light-hearted short clips, there’s clearly a level of inherent exploitation within that framework, with young girls, in particular, incentivized to push the limits of what they share in order to garner more likes.


That concern is not isolated to TikTok; all social platforms need to manage such risks. But TikTok’s video focus, and its appeal to young audiences, does seem to increase the risk in this respect. And while ideally these concerns could be addressed without legal challenge, it’s a critical element for the platform to manage.

As such, keeping the pressure on TikTok should help ensure it keeps working to provide more protection where possible.

As noted, the other lingering concern for TikTok is international data security, with China’s stringent cybersecurity laws technically dictating that parent company ByteDance would need to share data on TikTok users with the CCP on request. We don’t know whether any such request has been made, nor what the Chinese regime might do with the personal information available on ByteDance’s servers. But while the US Government’s attempts to force the sale of TikTok into American ownership lacked the required evidence to gain clear passage, many other authorities have raised similar concerns, and that could still see TikTok face more scrutiny in 2021, which could impede its growth.

TikTok lost 200 million users within a day in late June, when the Indian Government banned it due to ongoing conflicts with the Chinese Government. The CCP has continued to antagonize various nations as it pursues its interests, and that could result in further restrictions on Chinese apps, very quickly, if tensions escalate.

These are the key concerns for TikTok, as a platform, moving forward. The app has clearly shown that it can succeed, and become a challenger to the established social media players, and the capacity of its algorithm to tap into personal interests, and keep users glued to the app, is clearly significant. But until it can ensure the protection of younger users, and appease data-sharing concerns, queries will linger around the app.


With billions of dollars on the line, it seems likely that TikTok, and ByteDance, will find a solution. But this latest legal battle is another reminder that it won’t necessarily be clear sailing ahead for the latest big social app.


Meta’s Developing an ‘Ethical Framework’ for the Use of Virtual Influencers



With the rise of digital avatars, and indeed, fully digital characters that have evolved into genuine social media influencers in their own right, online platforms now have an obligation to establish clear markers as to what’s real and what’s not, and how such creations can be used in their apps.

The coming metaverse shift will further complicate this, with the rise of virtual depictions blurring the lines of what will be allowed, in terms of representation. But with many virtual influencers already operating, Meta is now working to establish ethical boundaries on their application.

As explained by Meta:

“From synthesized versions of real people to wholly invented “virtual influencers” (VIs), synthetic media is a rising phenomenon. Meta platforms are home to more than 200 VIs, with 30 verified VI accounts hosted on Instagram. These VIs boast huge follower counts, collaborate with some of the world’s biggest brands, fundraise for organizations like the WHO, and champion social causes like Black Lives Matter.”

Some of the more well-known examples on this front are Shudu, who has more than 200k followers on Instagram, and Lil’ Miquela, who has an audience of over 3 million in the app.

At first glance, you wouldn’t necessarily realize that these are not actual people, which makes such characters a great vehicle for brand and product promotions, as they can be utilized 24/7, and can be placed into any environment. But that also leads to concerns about body image perception, deepfakes, and other forms of misuse through false or unclear representation.


Deepfakes, in particular, may be problematic, with Meta citing this campaign featuring English football star David Beckham as an example of how new technologies are evolving to expand the use of language, as one element, for varying purposes.

The well-known ‘DeepTomCruise’ account on TikTok is another example of just how far these technologies have come, and it’s not hard to imagine a scenario where they could be used to, say, show a politician saying or doing something that he or she actually didn’t, which could have significant real world impacts.

Which is why Meta is working with developers and experts to establish clearer boundaries on such use – because while there is potential for harm, there are also beneficial uses for such depictions.

Imagine personalized video messages that address individual followers by name. Or celebrity brand ambassadors appearing as salespeople at local car dealerships. A famous athlete would make a great tutor for a kid who loves sports but hates algebra.

Such use cases will increasingly become the norm as VR and AR technologies are developed, with these platforms placing digital characters front and center, and establishing new norms for digital connection.

It would be better to know what’s real and what’s not, and as such, Meta needs clear regulations to remove dishonest depictions, and enforce transparency over VI use.

But then again, much of what you see on Instagram these days is not real, with filters and editing tools altering people’s appearance well beyond what’s normal, or realistic. That can also have damaging consequences, and while Meta’s looking to implement rules on VI use, there’s arguably a case for similar transparency in editing tools applied to posted videos and images as well.


That’s a more complex element, particularly as such tools also enable people to feel more comfortable in posting, which no doubt increases their in-app activity. Would Meta be willing to put more focus on this element if it could risk impacting user engagement? The data on the impact of Instagram on people’s mental health are pretty clear, with comparison being a key concern.

Should that also come under the same umbrella of increased digital transparency?

It’s seemingly not included in the initial framework as yet, but at some stage, this is another element that should be examined, especially given the harmful effects that social media usage can have on young women.

But however you look at it, this is no doubt a rising element of concern, and it’s important for Meta to build guardrails and rules around the use of virtual influencers in its apps.

You can read more about Meta’s approach to virtual influencers here.






Meta Publishes New Guide to the Various Security and Control Options in its Apps



Meta has published a new set of safety tips to help journalists protect themselves in the evolving online connection space. For the most part, the advice also applies to all users more broadly, providing a comprehensive overview of the various tools and processes that Meta has in place to help people avoid unwanted attention online.

The 32-page guide is available in 21 different languages, and provides detailed overviews of Meta’s systems and profile options for protection and security, with specific sections covering Facebook, Instagram and WhatsApp.

The guide begins with the basics, including password protections and enabling two-factor authentication.

It also outlines tips for Page managers on securing their business profiles, along with notes on what to do if you’ve been hacked, advice on protecting yourself on Messenger, and guidance on dealing with bullying and harassment.


For Instagram, there are also general security tips, along with notes on its comment moderation tools.


For WhatsApp, there are explainers on how to delete messages, how to remove messages from group chats, and details on platform-specific data options.


There are also links to various additional resource guides and tools for more context, providing in-depth breakdowns of when and how to apply the various options.

It’s a handy guide, and while there are some journalist-specific elements included, most of the tips apply to any user, so it could well be a valuable resource for anyone looking to get a better handle on their various privacy tools and options.

Definitely worth knowing either way – you can download the full guide here.


Twitter bans account linked to Iran leader over video threatening Trump



Iran’s supreme leader Ayatollah Ali Khamenei meets with relatives of slain commander Qasem Soleimani ahead of the second anniversary of his death in a US drone strike in Iraq – Copyright POOL/AFP/File Tom Brenner

Twitter said Saturday it had permanently suspended an account linked to Iran’s supreme leader that posted a video calling for revenge against former US president Donald Trump over the assassination of a top Iranian general.

“The account referenced has been permanently suspended for violating our ban evasion policy,” a Twitter spokesperson told AFP.

The account, @KhameneiSite, this week posted an animated video showing an unmanned aircraft targeting Trump, who ordered a drone strike in Baghdad two years ago that killed top Iranian commander General Qassem Soleimani.

Supreme leader Ayatollah Ali Khamenei’s main accounts in various languages remain active. Last year, another similar account was suspended by Twitter over a post also appearing to reference revenge against Trump.

The recent video, titled “Revenge is Definite”, was also posted on Khamenei’s official website.

According to Twitter, the company’s top priority is keeping people safe and protecting the health of the conversation on the platform.

The social media giant says it has clear policies around abusive behavior and will take action when violations are identified.

As head of the Quds Force, the foreign operations arm of Iran’s Revolutionary Guards, Soleimani was the architect of its strategy in the Middle East.

He and his Iraqi lieutenant were killed by a US drone strike outside Baghdad airport on January 3, 2020.

Khamenei has repeatedly promised to avenge his death.

On January 3, the second anniversary of the strike, the supreme leader and ultraconservative President Ebrahim Raisi once again threatened the US with revenge.


Trump’s supporters regularly denounce the banning of the Republican billionaire from Twitter, noting that the accounts of several leaders considered authoritarian by the United States are still allowed to post on the platform.


