Snap Launches New ‘Family Center’ so Parents Can Monitor Who Their Teens are Interacting with in the App

It’s been in development for the last few months, and now, Snapchat has officially launched its new Family Center, which will enable parents to essentially monitor who their teens are engaging with in the app, while also keeping the specifics of their conversations private.

As outlined in this introductory video, the aim of the Family Center is to help parents understand how their kids are engaging in the app, without overstepping privacy boundaries.

As explained by Snap:

Family Center is designed to reflect the way that parents engage with their teens in the real world, where parents usually know who their teens are friends with and when they are hanging out – but don’t eavesdrop on their private conversations. In the coming weeks, we will add a new feature that will allow parents to easily view new friends their teens have added.

Parents will also be able to report any accounts of concern directly to Snap’s Trust and Safety teams, without alerting their child, which could help address any unwanted attention their kids might be getting in the app.

To access the platform, parents will need to sign up for their own Snapchat account, then open the Family Center in the app.

As outlined here, teens will need to accept an invitation from their parents to join their Family Center dashboard, so there’s full transparency in the process.

(As an aside, it could also be a way for Snapchat to boost its active user counts, as every parent who wants to utilize the Family Center will need to sign up for an account to access it.)

It’s a valuable and important update – though it does come with some level of risk for Snap, in terms of potentially reducing the app’s appeal.

The ephemeral nature of Snapchat has, over time, made it a key platform for more risqué, controversial sharing activity, in contrast to, say, Facebook, where your whole family is watching. But now, with parents wading into the conversation, the app could become a less appealing prospect for this type of engagement, which may water down the platform’s value for younger audiences.

Yet, at the same time, there have been various reports that Snapchat is commonly used for sending lewd messages and arranging hook-ups, which carries its own risks, while drug dealers reportedly also use Snap to organize meet-ups and sales.

Logically, parents will be keen to glean more insight into such activity – but again, I can’t imagine Snap users will be so welcoming of an intrusive tool in this respect.

Still, there’s valuable purpose here, and this seems like a compromise that Snap needs to, and should, make.

But it could see at least some of this activity drifting off to other platforms instead.

Snap’s also planning to add further features to its Family Center:

“… including new content controls for parents and the ability for teens to notify their parents when they report an account or a piece of content to us. While we closely moderate and curate both our content and entertainment platforms, and don’t allow unvetted content to reach a large audience on Snapchat, we know each family has different views on what content is appropriate for their teens and want to give them the option to make those personal decisions.”

Overall, it seems like a valuable addition to Snap’s protection tools, which already include measures to stop unwanted messages between adults and youngsters and limit teens from showing up in search results.

And on balance, it does seem that the potential value outweighs the potential risk of losing user engagement.

Op-Ed: Wagner Group recruiting on social media? What about high-risk liabilities?

The Wagner group has spearheaded the months-long Russian assault on Bakhmut – Copyright Venezuelan Presidency/AFP Handout

Russia’s not-very-charming Wagner Group seems determined to keep generating ambiguous headlines. The latest news about the group includes this not-overly-well-covered bit of information about it recruiting on social media.

It’s not really all that surprising, but it is indicative of the state of Wagner to some extent. You’d think that a privileged mercenary group with connections to the top could at least “borrow” people if it needs them.

The current ads on Facebook, Twitter, and elsewhere are said to be asking for medics, psychologists, and drone operators. Structurally, this means Wagner is effectively repopulating its service troops. How do you run out of psychologists, of all things? Wear and tear?

Wagner Group withdrew rather suddenly from Bakhmut after announcing “victory” in capturing the town. Unconfirmed and uninformative commentary from the group itself suggests it may have taken up to 20,000 casualties in the process. That’s quite an admission.

That’s a lot of casualties, too. Publicly available information isn’t too reliable, but the strength of Wagner on Wikipedia is listed as “6,000 to 8,000”. …And they took 20,000 casualties?

It’s unlikely the entire force was actually wiped out two or three times despite a lot of obvious turnover. The group remained actively in combat for months. If this number is anything like accurate, they must have been simply feeding in their well-publicized recruits over the entire period.

This overall situation raises more than a few questions:

Expecting social media to spot an innocuous job ad and instantly connect it to Wagner is unreasonable. If they do spot it, what can they do about it?

It’s unclear if Wagner is specifically sanctioned. Some individuals are, but what about the group?

If they are, do social media platforms automatically remove the ads on that basis? If not, why not?

Wagner, being a multinational group, is advertising in multiple languages. What are the jurisdictions where these ads appear supposed to do about it?

Why would Wagner be so visible, virtually advertising their weaknesses? Seems unlikely.

Social media famously doesn’t want to get involved in anything. Realistically, what can social media do about ads from innocuous third parties acting for Wagner?

Social media seems a bit clumsy as a recruiting option, particularly outside Russia. Why do it this way? Bait for foreign intelligence services, perhaps?

Can a nation hold a social media platform legally liable for recruiting war criminals? That could happen, given the depth of the issue in Ukraine.

Far more seriously, as though it wasn’t serious enough already: this is unlikely to be a one-off problem for social media. A "Craigslist for Atrocities" leaves a lot to be desired. Some sort of default rule needs to be in place.

Something like “No mass murderers allowed” in the Terms of Service would help. Or “Advertising for participants in crimes against humanity not permitted”, maybe?

This could well come back to bite the big platforms in particular. Take a good look in the mirror, social media.  …Or a court just might.

_________________________________________________________

Disclaimer

The opinions expressed in this Op-Ed are those of the author. They do not purport to reflect the opinions or views of the Digital Journal or its members.

Russia Fines WhatsApp For Failing To Delete Content

Meta Threatens to Ban News Content in California Due to Proposed ‘Journalism Preservation Act’

Here we go again.

With California considering a new ‘Journalism Preservation Act’, which would essentially force Meta to pay for news content that users share on Facebook, Meta has threatened to ban news content entirely in the state – which is now a common refrain for Meta in such circumstances.

California’s Journalism Preservation Act aims to address imbalances in the digital advertising sector by forcing Meta to share a cut of its revenue with local publishers. The central argument is that Facebook benefits from increased engagement around news content, and thus gains ad revenue, as Facebook users share and discuss news stories via links.

But the flaw here, as Meta has repeatedly argued – when Australia implemented its similar News Bargaining Code in 2021, and when Canada proposed its own variation – is that Meta doesn’t actually glean as much value from publishers as publishers do from Facebook, despite what the media players continue to project.

As noted, the basis for all of these proposals is that Meta benefits from publisher content, so it should pay to use it. But Meta’s own insights show that total views of posts with links (in the US) have declined by almost half over the last two years, suggesting that Facebook is becoming less reliant on news content over time.

Still, that hasn’t stopped the big players from pushing for reforms, and using their influence over political parties to seek more money, as their own income streams continue to dry up due to evolving consumption shifts.

Which has, of course, benefited online platforms, and over time, Meta and Google have gradually eaten up more and more ad market share, squeezing out the competition.

That leaves less money for publishers, which means less money for journalists, and thus, less comprehensive and informative local media ecosystems.

The basis for further investment in local voices makes sense – but the idea that Meta should be the one funding it is flawed, and always has been in every application of this approach.

Yet despite its protests, when Meta has been forced to concede, local media groups have benefited.

In Australia, for example, where Meta did actually ban news content for a time before re-negotiating terms of the proposal, the Australian Government has since touted the success of the initiative, claiming that over 30 commercial agreements have been established between Google, Meta and Australian news businesses, which has seen over AU$200 million redistributed to local media providers annually.

Really, Meta probably should have stood its ground and refused to pay at all, because even in this watered-down variation of the proposal, millions have filtered through to publishers, which is what’s empowered Canada, and now California, to try their hand at the same.

But it remains a flawed approach, which, if anything, will only prompt Meta to phase out news content even more, as it continues to focus on entertainment, largely driven by Reels engagement.

Meta actually sought to cut political content from user feeds entirely over the past year, but has since eased back on that push, after user feedback showed that despite political posts causing angst and argument, people do still want some political discussion in the app.

But that type of engagement is in clear decline, which means that Meta needs news posts less and less, as the broader focus for social apps moves towards content discovery, and away from perspective sharing.

Which means that California, and Canada, are in increasingly weaker positions as they seek to negotiate these deals.

It could be difficult for Meta to initiate a state-wide ban on news content, but I do think that they could, and would do so, if push comes to shove.

Which will only hurt local news publishers through reduced traffic – and it’ll be interesting to see if California and Canada do seek to enact these revenue share pushes, despite Meta’s threats.


