SOCIAL
TikTok Launches New Tools to Help Protect Users from Potentially Offensive and Harmful Content
Amid various investigations into how it protects (or doesn’t) younger users, TikTok has announced a new set of filters and options to provide more ways to limit unwanted exposure in the app.
First off, TikTok has launched a new way for users to automatically filter out videos that include words or hashtags that they don’t want to see in their feed.
You can now block specific hashtags via the ‘Details’ tab when you action a clip. So if you don’t want to see any more videos tagged #icecream, for whatever reason (an odd example choice, TikTok folks), you can indicate that in your settings, and you can also block content containing chosen key terms within the description.
It’s not perfect, as the system doesn’t detect the actual content of a video, just what people have manually entered in their description notes. So if you had a phobia of ice cream, there’s still a chance that you might be exposed to distressing footage in the app, but it does provide another means to manage your experience.
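To make that limitation concrete, here’s a minimal sketch of how a description-based filter works, assuming matching happens only against the caption text creators type in. The function names and data layout are illustrative, not TikTok’s actual implementation:

```python
# Illustrative caption-based filter (not TikTok's actual code). It only checks
# hashtags and keywords that creators typed into the caption, so a video about
# ice cream whose caption never mentions it would slip through.

def is_filtered(caption: str, blocked_hashtags: set, blocked_keywords: set) -> bool:
    """Return True if the caption matches any blocked hashtag or keyword."""
    text = caption.lower()
    hashtags = {word.lstrip("#") for word in text.split() if word.startswith("#")}
    blocked_tags = {tag.lower().lstrip("#") for tag in blocked_hashtags}
    if hashtags & blocked_tags:
        return True
    return any(keyword.lower() in text for keyword in blocked_keywords)

feed = [
    {"id": 1, "caption": "Best #icecream spots in town"},
    {"id": 2, "caption": "Taste test time!"},  # an ice cream video, but the caption never says so
]
visible = [video["id"] for video in feed if not is_filtered(video["caption"], {"#icecream"}, {"ice cream"})]
print(visible)  # -> [2]: the untagged ice cream video still gets through
```

As the second video shows, anything the creator doesn’t mention in the caption passes straight through, which is exactly the gap described above.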
TikTok says that the option will be available to all users ‘within the coming weeks’.
TikTok’s also expanding its limits on content exposure relating to potentially harmful topics, like dieting, extreme fitness, and sadness, among others.
Last December, TikTok launched a series of tests to investigate how it might reduce the potentially harmful impacts of algorithmic amplification by limiting the number of videos in certain sensitive categories that are highlighted in users’ ‘For You’ feeds.
It’s now moving to the next stage of this project.
As explained by TikTok:
“As a result of our tests, we’ve improved the viewing experience so that viewers now see fewer videos about these topics at a time. We’re still iterating on this work given the nuances involved. For example, some types of content may have both encouraging and sad themes, such as disordered eating recovery content.”
This is an interesting area of research, which essentially seeks to stop people from stumbling down rabbit holes of internet information and becoming obsessed with potentially harmful topics. Restricting how much content on a given subject people can view at a time could have a positive impact on user behavior.
Finally, TikTok’s also working on a new ratings system for content, like movie classifications for TikTok clips.
“In the coming weeks, we’ll begin to introduce an early version to help prevent content with overtly mature themes from reaching audiences between ages 13-17. When we detect that a video contains mature or complex themes – for example, fictional scenes that may be too frightening or intense for younger audiences – a maturity score will be allocated to the video to help prevent those under 18 from viewing it across the TikTok experience.”
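Conceptually, that kind of age gating comes down to a score-and-threshold check at serving time. Here’s a simplified sketch under my own assumptions; the threshold value, score range and field names are hypothetical, not TikTok’s actual system:

```python
# Simplified illustration of maturity-score gating (hypothetical threshold,
# score range and field names; not TikTok's actual system). A classifier
# assigns each video a maturity score, and videos at or above the threshold
# are withheld from viewers under 18.

MATURITY_THRESHOLD = 0.7  # hypothetical cutoff for "overtly mature themes"

def can_view(video: dict, viewer_age: int) -> bool:
    """Allow a video if the viewer is 18+ or its maturity score sits below the threshold."""
    if viewer_age >= 18:
        return True
    return video.get("maturity_score", 0.0) < MATURITY_THRESHOLD

videos = [
    {"id": "a", "maturity_score": 0.2},  # general-audience clip
    {"id": "b", "maturity_score": 0.9},  # e.g. a fictional scene too intense for teens
]
teen_feed = [video["id"] for video in videos if can_view(video, viewer_age=15)]
print(teen_feed)  # -> ['a']
```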

TikTok has also introduced new brand safety ratings to help advertisers avoid placing their promotions alongside potentially controversial content, and that same detection process could be applied here to better safeguard against mature themes and material.
Though it would be interesting to see how, exactly, TikTok’s system detects such content.
What kind of entity identification does TikTok have in place, what can its AI systems actually flag in videos, and based on what parameters?
I suspect that TikTok’s system is well advanced in this respect, which is why its algorithm is so effective at keeping users scrolling: it’s able to pick out the key elements of content that you’re more likely to engage with, based on your past behavior.
The more entities that TikTok can register, the more signals it has to match you with clips, and TikTok’s system does seem to be getting very good at identifying more of the elements within uploaded videos.
As noted, the updates come as TikTok faces ongoing scrutiny in Europe over its failure to limit content exposure among young users. Last month, TikTok pledged to update its policies around branded content after an EU investigation found it to be ‘failing in its duty’ to protect children from hidden advertising and inappropriate content. On another front, reports have also suggested that many kids have severely injured themselves, and some have even died, while taking part in dangerous challenges sparked by the app.
TikTok has introduced measures to combat this too, and it’ll be interesting to see whether these new tools help to reassure regulators that it’s doing all it can to keep its young audience safe.
Though I suspect they won’t. Short-form video relies on attention-grabbing gimmicks and stunts, which means that shocking, surprising and controversial material generally performs better in that environment.
As such, TikTok’s very format, at least in part, incentivizes such content, which means that creators will keep posting potentially risky material in the hopes of going viral in the app.
SOCIAL
Musk regrets controversial post but won’t bow to advertiser ‘blackmail’

Elon Musk apologized Wednesday for endorsing a social media post widely seen as anti-Semitic, but accused advertisers turning away from his social media platform X of “blackmail” and told them to “go fuck yourself.”
The remark before corporate executives at the New York Times’ Dealbook conference drew a shocked silence.
Earlier, Musk had apologized for what he called “literally the worst and dumbest post that I’ve ever done.”
In a comment on X, formerly Twitter, Musk on November 15 described as “the actual truth” a post claiming that Jewish communities advocated a “dialectical hatred against whites,” a remark criticized as echoing a longtime conspiracy theory among white supremacists.
The statement prompted a flood of departures from X by major advertisers, including Apple, Disney, Comcast and IBM, who criticized Musk for anti-Semitism.
“I’m sorry for that tweet or post,” Musk said Wednesday. “It was foolish of me.”
He told interviewer Andrew Ross Sorkin that his post had been misinterpreted and that he had sought to clarify the remark in subsequent posts to the thread.
But Musk also said he wouldn’t be beholden to pressure from advertisers.
“If somebody’s gonna try to blackmail me with advertising, blackmail me with money?” Musk said. “Go fuck yourself.”
But the billionaire acknowledged that there were business implications to the advertiser actions.
“If the company fails… it will fail because of an advertiser boycott,” Musk said. “And that will be what will bankrupt the company.”
Musk, who met with Israeli Prime Minister Benjamin Netanyahu during a visit to Israel earlier this week, insisted in the interview that he holds no discrimination against Jews, calling himself “philo-Semitic,” or an admirer of Judaism.
During the interview, Musk wore a necklace given to him by a parent of an Israeli hostage taken in the Hamas attack on October 7. The necklace reads, “Bring Them Home.”
Musk told Sorkin that the Israel trip had been planned earlier and was not an “apology tour” related to the controversial tweet.
SOCIAL
TikTok Encourages Creators To Make Longer Videos, With Focus On Ad Revenue 11/30/2023

A new report by The Information shows the company’s recent efforts to convince creators to put out longer videos in order to provide more room for ad placements. According to the …
SOCIAL
X Adds Option To Embed Videos in Isolation From Posts

Next time you go to embed an X post, you may notice a new step:
Now, X will enable you to choose whether you want to embed the video element in isolation, or the whole post, as normal.
And if you do choose to embed just the video (or GIF), the embed will display the video player on its own, without the rest of the post.
Which could be a helpful way to present X-originated video on third-party websites, and add context to, say, your blog post, without the clutter of the full X framing.
But it could also reduce brand exposure for X, which is likely why Twitter didn’t enable this before, though it did once provide an “embedded video widget” which essentially served the same purpose.

Twitter seemed to gradually phase that out as the platform evolved, and I can’t find any specific reason why it was removed as an option. Either way, it’s now back, giving you more options for using X-originated content, and for putting more focus on video elements specifically.
Though I don’t know why they didn’t also take the opportunity to remove the ‘Tweet’ reference. Since the rebrand to X, the platform seems to have made little effort to weed out its old tweet and bird terminology, but then again, with 80% fewer staff, that’s probably understandable as well.