Introducing social media to children during the COVID-19 crisis

The COVID-19 crisis has prompted many parents to rewrite the family rule book around social media.

Parents who vowed their children wouldn’t set a digital foot into the world of social media prior to junior high are allowing their children to dabble in virtual communication in an effort to keep them connected with their friends.

We reached out to Dr. Michael Rich, the director of the Center on Media and Child Health at Boston Children’s Hospital, for a crash course in what parents should consider when they sign up their child for a social media account.

Laurel Gregory: We’ve heard from a lot of parents who are giving their children the green light to use social media at a much younger age than they planned. What advice would you give them?

Dr. Michael Rich: As you know, we don’t specifically endorse any product, but Facebook actually convened a group of child development experts, including me and one of my staffers, to help develop Messenger Kids — not Facebook Messenger but Facebook Messenger Kids. While it is not perfect… one of the good things about it is that it’s completely monitored by parents. Parents are not only able to observe all of the traffic the kid is involved with, but also need to curate and actively choose their contacts. The idea behind this, from those of us who were consulting, is that kids are jumping into social media anyway, whether or not they are supposed to, and this is a way for the parent to help guide and mentor the child in using social media and messaging apps in responsible, safe and kind ways. It allows parents, essentially, to train them. So in a sense, it’s like sitting in the front seat of the car when your child learns how to drive.

It’s scary, you’re a little white-knuckled and worried about it, but you are essentially helping them apprentice in this new skill — at your side.


I think that the real issue is: will parents put in the time to be with their child as they introduce this new technology to them — this new way of connecting with friends, which also includes helping them know when to use it and when to turn it off?


LG: So it’s about staying engaged as a parent and also using social media as a tool. It’s fine for my five-year-old son to be chatting with friends on FaceTime?

Dr. R: Yeah, absolutely. I think that we’re at a stage in our social evolution, if you will, even before lockdown for COVID-19, where we have to acknowledge that kids are moving seamlessly between physical space and digital space. And in acknowledging that, we have to understand that just like we increase their freedom if they take responsibility in real life — like what parties they go to, whose houses they go to, what they do — we should do exactly the same in the digital space.

I think that with very young children, it’s really important to observe them, both in terms of what they are doing and sort of how they are responding.


Particularly when going to much more open spaces like Instagram and TikTok, reserve the right to say, ‘You know, I don’t think this is the right space for you.’ TikTok can go to some very dark and scary places, and for that matter so can Instagram, and I’m not just talking about real badness. I’m also talking about the way image-based social media kind of encourages narcissism, the selfie and the objectification of one’s self. Be aware that that may be going on for your child, and be watchful for it and mindful of it. Discuss it with them and ask if they really want to go there.


LG: Can you recommend a specific platform for certain ages?

Dr. R: The reality is whatever age number you choose, it’s not going to be the same for every child. Even siblings in the same family! There are some 10-year-olds who are fine in virtually any social media context because they know how to respect themselves and each other enough to use them well. And there are 20-year-olds who aren’t. So I think the key is not to follow some sort of magical, one-size-fits-all algorithm, but to work with your child. Obviously Facebook Messenger Kids is a good middle ground to help them try things out in a mentored environment with parent involvement. When you get into other things, sit right next to the child and watch them go through it. Have them teach you how to do it, because frankly, kids know better how to navigate TikTok than parents do. They are technically adept but don’t yet have the executive function to stay healthy and safe and to be respectful and mindful of each other.

The real issue here is us learning to parent in the digital space: learning to bring our same values to bear on it. I would even say I have moved away from using terms like ‘developmentally appropriate,’ because appropriate is a values-laden term. Let’s think about developmentally optimal. What is optimal not just for all children, but for this child at this point in his or her life? What needs does she or he have for these tools? And is he or she ready to take on responsibility to themselves, their friends and society to function in this space? Can they function in this space independently, or do they need a learner’s permit? Do they need to be using… a more curated and mentored environment?

An in-depth interview with Dr. Michael Rich will be published Monday, April 20 on the Family Matters podcast.

© 2020 Global News, a division of Corus Entertainment Inc.

Twitter Faces Advertiser Boycott Due to Failures to Police Child Abuse Material

Twitter’s no good, very bad year continues, with the company this week being forced to inform some advertisers that their ads had been displayed in the app alongside tweets soliciting child pornography and other abuse material.

As reported by Reuters:

“Brands ranging from Walt Disney, NBCUniversal and Coca-Cola, to a children’s hospital, were among some 30 advertisers that have appeared on the profile pages of Twitter accounts that peddle links to the exploitative material.”

The discovery was made by cybersecurity group Ghost Data, which worked with Reuters to uncover the ad placement concerns, dealing another big blow to the app’s ongoing business prospects.

Already in a state of disarray amid the ongoing Elon Musk takeover saga, and following recent revelations from its former security chief that it’s lax on data security and other measures, Twitter’s now also facing an advertiser exodus, with big brands including Dyson, Mazda and Ecolab suspending their Twitter campaigns in response.

Which, really, is the least concerning element about the discovery, with the Ghost Data report also identifying more than 500 accounts that openly shared or requested child sexual abuse material over a 20-day period.

Ghost Data says that Twitter failed to remove more than 70% of the accounts during the time of the study.

The findings raise further questions about Twitter’s inability, or unwillingness, to address potentially harmful material, with The Verge reporting late last month that Twitter ‘cannot accurately detect child sexual exploitation and non-consensual nudity at scale’.

That finding stemmed from an investigation into Twitter’s proposed plan to give adult content creators the ability to begin selling OnlyFans-style paid subscriptions in the app.

Rather than working to address the abundance of pornographic material on the platform, Twitter instead considered leaning into it – which would undoubtedly raise the risk factor for advertisers who do not want their promotions to appear alongside potentially offensive tweets.

Which is likely happening, at an even greater scale than this new report suggests, because Twitter’s own internal investigation into its OnlyFans-esque proposal found that:

“Twitter could not safely allow adult creators to sell subscriptions because the company was not – and still is not – effectively policing harmful sexual content on the platform.”

In other words, Twitter couldn’t risk facilitating the monetization of exploitative material in the app, and because it has no way of tackling such, it had to scrap the proposal before it really gained any traction.

With that in mind, these new findings are no surprise – but again, the advertiser backlash is likely to be significant, which could force Twitter to launch a new crackdown either way.

For its part, Twitter says that it is dedicating more resources to child safety, ‘including hiring for new positions to write policy and implement solutions’.

So, great, Twitter’s taking action now. But these reports, which draw on Twitter’s own internal examinations, show that Twitter has been aware of this potential issue for some time – not child exploitation specifically, but adult content concerns that it has no way of policing.

In fact, Twitter openly assists in the promotion of adult content, albeit inadvertently. For example, in the ‘For You’ section of my ‘Explore’ tab (i.e. the front page of Explore in the app), Twitter continuously recommends that I follow ‘Facebook’ as a topic, based on my tweets and the people I follow in the app.

Here are the tweets that it highlighted as some of the top topical tweets for ‘Facebook’ yesterday:

It’s not pornographic material as such, but I’m tipping that if I tap through on any of these profiles, I’ll find it pretty quickly. And again, these tweets are highlighted based on Twitter’s own topical tweets algorithm, which is based on engagement with tweets that mention the topic term. These completely unrelated and off-topic tweets are then being pushed by Twitter itself to users who haven’t expressed any interest in adult content.

It’s clear, based on all the available evidence, that Twitter does have a porn problem, and it’s doing little to address it.

Distributors of adult content view Twitter as the best social network for advertising, because it’s less restrictive than Facebook, and has much broader reach than niche adult sites, while Twitter gains the usage and engagement benefits of hosting material that other social platforms would simply not allow.

Which is likely why it’s been willing to turn a blind eye to such for so long, to the point that it’s now being highlighted as a much bigger problem.

Though it is important to note that adult content, in itself, is not inherently problematic, among consenting adult users at least. It’s Twitter’s approach to child abuse and exploitative content that’s the real issue at hand.

And Twitter’s systems are reportedly ‘woefully inadequate’ in this respect.

As reported by The Verge:

“A 2021 report found that the processes Twitter uses to identify and remove child sexual exploitation material are woefully inadequate – largely manual at a time when larger companies have increasingly turned to automated systems that can catch material that isn’t flagged by PhotoDNA. Twitter’s primary enforcement software is ‘a legacy, unsupported tool’ called RedPanda, according to the report. ‘RedPanda is by far one of the most fragile, inefficient, and under-supported tools we have on offer,’ one engineer quoted in the report said.”

Indeed, additional analysis of Twitter’s CSE detection systems found that of the 1 million reports submitted each month, 84% contain newly discovered material – ‘none of which would be flagged’ by Twitter’s systems.

So while it’s advertisers that are putting the pressure back on the company in this instance, it’s clear that Twitter’s issues extend far beyond ad placement concerns alone.

Hitting Twitter’s bottom line, however, may be the only way to force the platform to take action – though it’ll be interesting to see just how willing and able Twitter is to enact a broader plan to address the problem amid its ongoing ownership battle.

Within its takeover agreement with Elon Musk, there’s a provision which states that Twitter needs to:

“Use its commercially reasonable efforts to preserve substantially intact the material components of its current business organization.”

In other words, Twitter can’t make any significant changes to its operational structure while it’s in the transition phase, which is currently in dispute as the company heads for a courtroom battle with Musk.

Would initiating a significant update to its CSE detection models qualify as a substantial change – substantial enough to alter the operating structure of the company at the time of the initial agreement?

In essence, Twitter likely doesn’t want to make any major changes. But it might have to, especially if more advertisers join this new boycott, and push the company to take immediate action.

It’s likely to be a mess either way, but this is a huge concern for Twitter, which should be rightfully held to account for its systemic failures in this respect.
