WhatsApp Adds Encryption for Chat Back-Ups, Closing a Loophole in its Privacy Systems

Facebook’s looking to expand WhatsApp’s message privacy options even further, by giving users the option to encrypt their message back-ups as well, adding another layer of security to their private WhatsApp communications.

Right now, all WhatsApp messages are end-to-end encrypted by default, which has become a key value proposition for the app amid rising concerns about digital data trails and maintaining privacy.

Soon, that will be extended to your data history as well – as explained by WhatsApp:

“People can already back up their WhatsApp message history via cloud-based services like Google Drive and iCloud. WhatsApp does not have access to these backups, and they are secured by the individual cloud-based storage services. But now, if people choose to enable end-to-end encrypted (E2EE) backups, neither WhatsApp nor the backup service provider will be able to access their backup or their backup encryption key.”
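The announcement doesn't detail the exact cryptographic scheme, but the core idea of an end-to-end encrypted backup can be sketched in a few lines: the client encrypts the chat history with a key that never leaves the device, so the cloud provider only ever stores ciphertext it cannot read. This is a toy illustration only (a SHA-256-based stream cipher for self-containment, not the AES-based construction a real messenger would use), and all names in it are hypothetical:

```python
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream from the key (toy SHA-256 counter mode)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    """XOR the plaintext with the keystream; decryption is the same operation."""
    return bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))

# The backup key is generated and kept on the client; only ciphertext is uploaded.
backup_key = secrets.token_bytes(32)
chat_history = b"alice: see you at 7"
uploaded = encrypt(backup_key, chat_history)   # what the cloud provider stores

assert uploaded != chat_history                        # provider sees only ciphertext
assert encrypt(backup_key, uploaded) == chat_history   # client can restore the backup
```

The point of the design is in the last two lines: without `backup_key`, the stored blob is useless to WhatsApp, the cloud provider, or anyone who subpoenas either of them.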

The measure will provide extra assurance for WhatsApp users, which is likely important given the reputational hit the platform took earlier this year when it announced an update to its privacy policy. That change, which allows some additional data sharing between WhatsApp and parent company Facebook, was perceived by many as a watering-down of WhatsApp’s fundamental approach to individual privacy, and as a result, many users switched to alternative messaging platforms to get away from the prying eyes of Zuckerberg and his cohorts.

The update wasn’t a breach of WhatsApp’s long-standing data privacy approach; it related only to communications between individuals and businesses in WhatsApp, and the subsequent outreach targeting that results. But still, the backlash was significant enough for WhatsApp to delay the change in order to better explain it, and for Facebook execs to go on a PR push to stem the tide of users looking to abandon the platform.

How big an impact the controversy actually had on WhatsApp usage, we don’t know, but WhatsApp could certainly use a new feature like this to reinforce its privacy stance, and to underline to its users that nobody can access their private messages, not even those within WhatsApp itself.

WhatsApp back-up encryption process

Functionally, being able to encrypt your message back-ups probably doesn’t add much for regular users. But then again, as noted by TechCrunch, gaining access to WhatsApp chat data via third-party workarounds has thus far been the only way for government and law enforcement agencies to peer into the WhatsApp network.

Tapping these unencrypted WhatsApp chat backups on Google and Apple servers is one of the widely known ways law enforcement agencies across the globe have for years been able to access WhatsApp chats of suspect individuals.

In other words, the current back-up options, which rely on third-party providers, reduce the overall security of WhatsApp chats, a loophole that Facebook is now closing up. Which will also undoubtedly raise the hackles of various organizations that have voiced their opposition to Facebook further locking down its messaging systems.

Back in October 2019, representatives from the US, UK and Australia co-signed an open letter to Facebook calling on the company to abandon its full messaging encryption plans, arguing that it would:

“…put our citizens and societies at risk by severely eroding capacity to detect and respond to illegal content and activity, such as child sexual exploitation and abuse, terrorism, and foreign adversaries’ attempts to undermine democratic values and institutions, preventing the prosecution of offenders and safeguarding of victims.”

The governments of each country called on Facebook to provide, at the least, ‘backdoor access’ for official investigations, which Facebook has repeatedly refused.

Which is what’s pushed authorities to seek out alternate means, like tapping into third-party back-ups. With Facebook now moving to cut that off as well, we could see a new ramp-up of opposition to Facebook’s plans, and renewed calls for limits on them.

A key focus of concern on this front is the potential for such options to shield child traffickers, with the National Society for the Prevention of Cruelty to Children (NSPCC) arguing that any move to further restrict law enforcement access increases the potential for use of these platforms among perpetrator groups.

As per NSPCC chief executive Peter Wanless:

“Private messaging is at the front line of child sexual abuse, but the current debate around end-to-end encryption risks leaving children unprotected where there is most harm.”

This is the most compelling, and important argument against the move at present. By providing full encryption across all of its messaging apps, Facebook will essentially hide all communications by predators, and those who would seek to use such systems for child exploitation, which could then lead to an expansion of such activity.

Yet, at the same time, the broader push for increased online privacy continues to gain momentum, with people seeking options to protect their private communications from outside monitoring.

It’s a complex balance, and there’s compelling logic on both sides, but either way, it seems that Facebook is pushing ahead, with the company repeatedly noting that it’s moving to integrate all of its messaging tools (Messenger, Instagram Direct and WhatsApp) and add more encryption options across the board.

There’s no definitive right answer here, but it is interesting to note the ongoing debate, which could eventually force Facebook to reverse course, or change its approach, if regulators in one of its major usage regions decide to make a more definitive pushback.

Socialmediatoday.com


Meta’s Developing an ‘Ethical Framework’ for the Use of Virtual Influencers

With the rise of digital avatars, and indeed, fully digital characters that have evolved into genuine social media influencers in their own right, online platforms now have an obligation to establish clear markers as to what’s real and what’s not, and how such creations can be used in their apps.

The coming metaverse shift will further complicate this, with the rise of virtual depictions blurring the lines of what will be allowed, in terms of representation. But with many virtual influencers already operating, Meta is now working to establish ethical boundaries on their application.

As explained by Meta:

“From synthesized versions of real people to wholly invented “virtual influencers” (VIs), synthetic media is a rising phenomenon. Meta platforms are home to more than 200 VIs, with 30 verified VI accounts hosted on Instagram. These VIs boast huge follower counts, collaborate with some of the world’s biggest brands, fundraise for organizations like the WHO, and champion social causes like Black Lives Matter.”

Some of the more well-known examples on this front are Shudu, who has more than 200k followers on Instagram, and Lil’ Miquela, who has an audience of over 3 million in the app.

At first glance, you wouldn’t necessarily realize that this is not an actual person, which makes such characters a great vehicle for brand and product promotions, as they can be utilized 24/7, and can be placed into any environment. But that also leads to concerns about body image perception, deepfakes, and other forms of misuse through false or unclear representation.

Deepfakes, in particular, may be problematic, with Meta citing this campaign, featuring English football star David Beckham, as an example of how the technology is evolving, in this case using synthesized speech to deliver the same message in multiple languages.

The well-known ‘DeepTomCruise’ account on TikTok is another example of just how far these technologies have come, and it’s not hard to imagine a scenario where they could be used to, say, show a politician saying or doing something that he or she actually didn’t, which could have significant real world impacts.

Which is why Meta is working with developers and experts to establish clearer boundaries on such use – because while there is potential for harm, there are also beneficial uses for such depictions.

Imagine personalized video messages that address individual followers by name. Or celebrity brand ambassadors appearing as salespeople at local car dealerships. A famous athlete would make a great tutor for a kid who loves sports but hates algebra.

Such use cases will increasingly become the norm as VR and AR technologies are developed, with these platforms placing digital characters front and center, and establishing new norms for digital connection.

It would be better to know what’s real and what’s not, and as such, Meta needs clear regulations to remove dishonest depictions, and enforce transparency over VI use.

But then again, much of what you see on Instagram these days is not real, with filters and editing tools altering people’s appearance well beyond what’s normal, or realistic. That can also have damaging consequences, and while Meta’s looking to implement rules on VI use, there’s arguably a case for similar transparency in editing tools applied to posted videos and images as well.

That’s a more complex element, particularly as such tools also enable people to feel more comfortable in posting, which no doubt increases their in-app activity. Would Meta be willing to put more focus on this element if it could risk impacting user engagement? The data on the impact of Instagram on people’s mental health are pretty clear, with comparison being a key concern.

Should that also come under the same umbrella of increased digital transparency?

It’s seemingly not included in the initial framework as yet, but at some stage, this is another element that should be examined, especially given the harmful effects that social media usage can have on young women.

But however you look at it, this is no doubt a rising element of concern, and it’s important for Meta to build guardrails and rules around the use of virtual influencers in their apps.

You can read more about Meta’s approach to virtual influencers here.






Meta Publishes New Guide to the Various Security and Control Options in its Apps

Meta has published a new set of safety tips to help journalists protect themselves online, which, for the most part, also apply to all users more broadly, providing a comprehensive overview of the various tools and processes it has in place to help people avoid unwanted attention online.

The 32-page guide is available in 21 different languages, and provides detailed overviews of Meta’s systems and profile options for protection and security, with specific sections covering Facebook, Instagram and WhatsApp.

The guide begins with the basics, including password protections and enabling two-factor authentication.

It also outlines tips for Page managers in securing their business profiles, while there are also notes on what to do if you’ve been hacked, advice for protection on Messenger and guidance on bullying and harassment.

Meta security guide

For Instagram, there are also general security tips, along with notes on its comment moderation tools.

Meta security guide

While for WhatsApp, there are explainers on how to delete messages, how to remove messages from group chats, and details on platform-specific data options.

Meta security guide

There are also links to various additional resource guides and tools for more context, providing in-depth breakdowns of when and how to action the various options.

It’s a handy guide, and while there are some journalist-specific elements included, most of the tips do apply to any user, so it could well be a valuable resource for anyone looking to get a better handle on their various privacy tools and options.

Definitely worth knowing either way – you can download the full guide here.


Twitter bans account linked to Iran leader over video threatening Trump

Iran’s supreme leader Ayatollah Ali Khamenei meets with relatives of slain commander Qasem Soleimani ahead of the second anniversary of his death in a US drone strike in Iraq – Copyright POOL/AFP/File Tom Brenner

Twitter said Saturday it had permanently suspended an account linked to Iran’s supreme leader that posted a video calling for revenge against former US president Donald Trump over a top general’s assassination.

“The account referenced has been permanently suspended for violating our ban evasion policy,” a Twitter spokesperson told AFP.

The account, @KhameneiSite, this week posted an animated video showing an unmanned aircraft targeting Trump, who ordered a drone strike in Baghdad two years ago that killed top Iranian commander General Qassem Soleimani.

Supreme leader Ayatollah Ali Khamenei’s main accounts in various languages remain active. Last year, another similar account was suspended by Twitter over a post also appearing to reference revenge against Trump.

The recent video, titled “Revenge is Definite”, was also posted on Khamenei’s official website.

According to Twitter, the company’s top priority is keeping people safe and protecting the health of the conversation on the platform.

The social media giant says it has clear policies around abusive behavior and will take action when violations are identified.

As head of the Quds Force, the foreign operations arm of Iran’s Revolutionary Guards, Soleimani was the architect of its strategy in the Middle East.

He and his Iraqi lieutenant were killed by a US drone strike outside Baghdad airport on January 3, 2020.

Khamenei has repeatedly promised to avenge his death.

On January 3, the second anniversary of the strike, the supreme leader and ultraconservative President Ebrahim Raisi once again threatened the US with revenge.

Trump’s supporters regularly denounce the banning of the Republican billionaire from Twitter, underscoring that accounts of several leaders considered authoritarian by the United States are allowed to post on the platform.


