SOCIAL

TikTok Launches New ‘Creativity Program’ to Provide More Revenue Opportunities to Creators

TikTok continues to explore more ways to share revenue with creators, this time via a new ‘Creativity Program’, which it’s launching in beta with a selected group of creators.

Unlike the current Creator Fund, TikTok’s Creativity Program aims to reward creators for posting longer videos, with only content longer than a minute eligible for funding.

As per TikTok:

“To be eligible for the Creativity Program Beta, users will need to be at least 18 years old, meet the minimum follower and video view requirements, and have an account in good standing. To start earning, creators must create and publish high-quality, original content longer than one minute. Creators will have access to an updated dashboard to view video eligibility, estimated revenue, and video performance metrics and analytics.”

That’s a shift from TikTok’s traditional short-form approach, and longer videos could give TikTok more leeway to better monetize through enhanced engagement.

But outside of these details, the full process is fairly vague, at least at this stage. TikTok says that the new system will not re-route money from ads, and that payouts will be based on ‘qualified views and RPM’.

“Creators already enrolled in the TikTok Creator Fund can switch to the Creativity Program, and those that are not enrolled can apply to the new program once available.”

TikTok’s Creator Fund, which sees creators drawing from a set pool of funds, has been criticized for its fluctuating payouts, and even declining funding, despite creator view counts increasing. Essentially, the static funding model doesn’t really work as a reliable, recurring source of revenue, which has seen some creators looking to other platforms instead.


YouTube is the key winner on this front. YouTube’s Partner Program has a well-established revenue share process in place, while it’s also now testing its new Shorts funding program, which will see all Shorts ad revenue shared with eligible creators, based on view counts.

It’s too early to tell how effective that program will be, but a more direct line of revenue share, from ads to creators, means that as ad income increases, creators make more money, as opposed to having a set pool of money that doesn’t shift.

That seems like a better, more sustainable approach, but as noted, TikTok’s new Creativity Program isn’t moving in line with that process. TikTok hasn’t gone into specifics, but it’s hoping this will be a better solution than the current one.

And it needs to improve here. If TikTok can’t provide better revenue share options, more creators will eventually prioritize other platforms. As Reels and Shorts grow in popularity, they offer significant reach potential in their own right, which could see TikTok lose market share.

If Meta or YouTube look to sign top stars to exclusive deals, that could be a big blow to the app, while TikTok is also fighting for its very survival in the US, amid ongoing questions about its linkage to the CCP.


As such, it needs this new program to work. We’ll keep you updated on any progress.

TikTok’s Creativity Program Beta will initially be available on an invite-only basis, then open to all eligible US creators in the coming months.



Elon Musk, Twitter face brand-safety concerns after executives depart


Elon Musk, CEO of Tesla, speaks with CNBC on May 16th, 2023.

David A. Grogan | CNBC

The sudden departure of Twitter executives tasked with content moderation and brand safety has left the company more vulnerable than ever to hate speech.

On Thursday, Twitter’s vice president of trust and safety, Ella Irwin, resigned from the company. Following Irwin’s departure, the company’s head of brand safety and ad quality, A.J. Brown, reportedly left, as did Maie Aiyed, a program manager who worked on brand-safety partnerships.

It’s been just over seven months since Elon Musk closed his $44 billion purchase of Twitter, an investment that has so far been a giant money loser. Musk has dramatically downsized the company’s workforce and rolled back policies that restricted what kinds of content could circulate. In response, numerous brands suspended or decreased their advertising spending, as several civil rights groups have documented.

Twitter, under Musk, is the fourth most-hated brand in the U.S., according to the 2023 Axios Harris reputation rankings.

The controversy surrounding Musk’s control of Twitter continues to build.

This week, Musk said that it’s not against Twitter’s terms of service to misgender trans people on the platform. He said doing so is merely “rude” but not illegal. LGBTQ+ advocates and researchers dispute his position, claiming it invites bullying of trans people. On Friday, Musk encouraged his 141.8 million followers to watch a video, posted to Twitter, that was deemed transphobic by these groups.

Numerous LGBTQ organizations expressed dismay to NBC News over Musk’s decision, saying the company’s new policies will lead to an uptick in anti-trans hate speech and online abuse.

Although Musk recently hired former NBC Universal global advertising chief Linda Yaccarino to succeed him as CEO, it’s unclear how the new boss will assuage advertisers’ concerns regarding racist, antisemitic, transphobic and homophobic content in light of the recent departures and Musk’s ongoing role as majority owner and technology chief.


Even before the latest high-profile exits, Musk had been reducing the number of workers tasked with safety and content moderation as part of the company’s widespread layoffs. He eliminated the entire artificial intelligence ethics team, which was responsible for ensuring that harmful content wasn’t being algorithmically recommended to users.

Musk, who is also the CEO of Tesla and SpaceX, has recently played down concerns about the prevalence of hate speech on Twitter. He claimed during a Wall Street Journal event that since he took over the company in October, hate speech on the platform has declined, and that Twitter has slashed “spam, scams and bots” by “at least 90%.”

Experts and ad industry insiders told CNBC that there’s no evidence to support those claims. Some say Twitter is actively impeding independent researchers who are attempting to track such metrics.

Twitter didn’t provide a comment for this story.

The state of hate speech on Twitter

In a paper published in April that will be presented at the upcoming International Conference on Web and Social Media in Cyprus, researchers from Oregon State, University of Southern California and other institutions showed that hate speech has increased since Musk bought Twitter.

The authors wrote that the accounts known for posts containing hateful content and slurs targeting Blacks, Asians, LGBTQ groups and others increased such tweeting “dramatically following Musk’s takeover” and do not show signs of slowing down. They found that Twitter hasn’t made progress on bots, which have remained as prevalent and active on the social media platform as they were prior to Musk’s tenure.

Musk previously indicated that Twitter’s recommendation algorithms surface less offensive content to people who don’t want to see it.

Keith Burghardt, one of the authors of the paper and a computer scientist at the University of Southern California’s Information Sciences Institute, told CNBC that the deluge of hate speech and other explicit content correlates to the reduction of people working on trust and safety issues and the relaxed content-moderation policies.

Musk also said at the WSJ event that “most advertisers” had come back to Twitter.

Louis Jones, a longtime media and advertising executive who now works at the Brand Safety Institute, said it’s not clear how many advertisers have resumed spending but that “many advertisers remain on pause, as Twitter has limited reach compared to some other platforms.”

Jones said many advertisers are waiting to see how levels of “toxicity” and hate speech on Twitter change as the site appears to slant toward more right-wing users and as the U.S. election season draws near. He said one big challenge for brands is that Musk and Twitter haven’t made clear what they count in their measurements assessing hate speech, spam, scams and bots.

Researchers are calling on the billionaire Twitter owner to provide data to back up his recent claims.

“More data is critical to really understand whether there is a continuous decrease in either hate speech or bots,” Burghardt said. “That again emphasizes the need for greater transparency and for academics to have freely available data.”

Show us the data

Getting that data is becoming harder.

Twitter recently started charging companies for access to its application programming interface (API), which allows them to incorporate and analyze Twitter data. The lowest-paid tier costs $42,000 for 50 million tweets.

Imran Ahmed, CEO of the Center for Countering Digital Hate nonprofit, said that because researchers now have “to pay a fortune” to access the API, they’re having to rely on other potential routes to the data.

“Twitter under Elon Musk has been more opaque,” Ahmed said.

He added that Twitter’s search function is less effective than in the past and that view counts, as seen on certain tweets, can suddenly change, making them unreliable to use.

“We no longer have any confidence in the accuracy of the data,” Ahmed said.

The CCDH analyzed a series of tweets from the beginning of 2022 through Feb. 28, 2023. It released a report in March analyzing over 1.7 million tweets collected using a data-scraping tool and Twitter’s search function and discovered that tweets mentioning the grooming narrative have risen 119% since Musk took over.

That refers to “the false and hateful lie” that the LGBTQ+ community grooms children, according to the report. The CCDH report found that a small number of popular Twitter accounts like Libs of TikTok and Gays Against Groomers have been driving the “hateful ‘grooming’ narrative online.”

The Simon Wiesenthal Center, a Jewish human rights group, continues to find antisemitic posts on Twitter. The group recently conducted its 2023 study of digital terrorism and hate on social platforms and graded Twitter a D-, putting it on par with Russia’s VK as the worst in the world for large social networks.

Rabbi Abraham Cooper, associate dean and director of global social action agenda at the center, called on Musk to meet with him to discuss the rise of hate speech on Twitter. He said he has yet to receive a response.

“They need to look at it seriously,” Cooper said. If they don’t, he said, lawmakers are going to be called upon to “do something about it.”



WhatsApp Launches New ‘Security Hub’ to Highlight User Control Options


WhatsApp has launched a new Security Hub mini-site, which provides a complete overview of the various safety and security tools available in the app, to help you manage your WhatsApp experience.

The security hub includes an overview of WhatsApp’s default safety elements, along with its various user control options to enhance your messaging security.

WhatsApp Security Hub

There are also tips on how to avoid spammers and scammers, and unwanted attention, as well as links to the platform’s various usage policies.

WhatsApp is known and trusted for its enhanced security measures, which ensure that your private chats remain that way, and it’s continually working to improve its tools on this front.

The WhatsApp team also continues to oppose legislation that seeks to access user chats via back doors, or other means. Various governments have raised concerns that encrypted chat apps protect criminal activity, and should therefore be accessible by authorities – but WhatsApp has remained steadfast in its dedication to protection on this front.

As per WhatsApp:

“Around the world, businesses, individuals and governments face persistent threats from online fraud, scams and data theft. Malicious actors and hostile states routinely challenge the security of our critical infrastructure. End-to-end encryption is one of the strongest possible defenses against these threats, and as vital institutions become ever more dependent on internet technologies to conduct core operations, the stakes have never been higher.”

It’s with this in mind that WhatsApp’s new Security Hub provides even more guidance for individual users, which could give you more peace of mind, while also protecting your chats.


If you’re wondering about the limits of WhatsApp’s systems, and what you can do to maximize security, it’s worth checking out.



‘Wave’ of litigation expected as schools fight social media companies


About 40 school districts across the country, and counting, are suing social media companies over claims that their apps are addictive, damage students’ mental health, and cause adverse impacts on schools and other government resources.

Many of these lawsuits, which were originally filed in a variety of court jurisdictions, were consolidated into one 281-page multidistrict litigation claim filed March 10 in the U.S. District Court for the Northern District of California. Plaintiffs in the case include school districts, individuals and local and state governments. In total, there are about 235 plaintiffs.

The product liability complaint seeks unspecified monetary damages, as well as injunctive relief ordering each defendant to remedy certain design features on their platforms and provide warnings to youth and parents that its products are “addictive and pose a clear and present danger to unsuspecting minors.” 

Attorneys representing plaintiff school districts said this master complaint allows districts to share legal resources for similar public nuisance claims against social media companies in an attempt to recoup money spent addressing the youth mental health crisis.

Individual district lawsuits describe actions taken by school systems to address student mental well-being, such as hiring more counselors, using universal screeners and providing lessons on resilience building. In its lawsuit, California’s San Mateo County Board of Education also explains how it had to reallocate funding to pay staff to address bullying and fighting, hire more security staff, and investigate vandalism.

Schools are on the front lines of this crisis, said Lexi Hazam, an attorney with Lieff, Cabraser, Heimann & Bernstein and co-lead counsel for the plaintiffs’ consolidated complaint. 

Districts “are often having to divert resources and time and effort from their educational mission in order to address the mental health crisis among their students,” said Hazam. Students’ mental health struggles are caused largely by social media design features that “deliberately set out to addict” youth, she said. 

The design features, the multidistrict litigation said, “manipulate dopamine delivery to intensify use” and use “trophies” to reward extreme usage.


School districts “are often having to divert resources and time and effort from their educational mission in order to address the mental health crisis among their students.”

Lexi Hazam

Co-lead counsel for the plaintiffs’ consolidated complaint


But major litigation like this is likely to take many years to resolve, according to legal experts. The lawsuit is in its early stages, and the court will soon consider motions to dismiss. If the case proceeds, it will move into the discovery phase, where opposing parties can request documents and information that may not already be available.

One legal expert said getting involved in the case may actually make school districts vulnerable to legal action by parents who cast blame on them for not doing more to support students’ mental well-being. The case also discounts the positive aspects of teens’ social media use, said Eric Goldman, law professor and co-director of the High Tech Law Institute at Santa Clara University School of Law. 

“Here’s the reason why not every school district is going to sign up — first, because I think at least some school districts realize that social media may not be the problem. In fact, it may be part of the solution,” Goldman said.

The more likely reason why districts shouldn’t participate, Goldman said, is because schools would be “admitting to their parents that they aren’t doing a good job to manage the mental health needs of their student population.” 

Reducing risks

The lawsuit — known as the Social Media Adolescent Addiction/Personal Injury Products Liability Litigation — was filed against Meta Platforms Inc., which operates Facebook and Instagram, as well as the companies behind Snapchat, TikTok and YouTube. 

There’s no cost to school systems to join the litigation, since the plaintiffs’ law firms are working on contingency, meaning they’re paid only if they prevail, according to several plaintiffs’ attorneys.

Per the lawsuit, the social media platforms exploit children by having “an algorithmically-generated, endless feed to keep users scrolling.” 

The result, the complaint said, is that youth are struggling with anxiety, depression, addiction, eating disorders, self-harm and suicide risk. Individual school district cases folded into this litigation also claim the social media companies’ platforms have contributed to school security threats and vandalism.


“Defendants’ choices have generated extraordinary corporate profits — and yielded immense tragedy,” the master complaint declares. 


“Here’s the reason why not every school district is going to sign up — first, because I think at least some school districts realize that social media may not be the problem. In fact, it may be part of the solution.”


Eric Goldman

Law professor and co-director of the High Tech Law Institute at Santa Clara University School of Law


The lawsuit notes the widespread use of social media among teens and details troubling statistics showing increases in youth suicide risk, anxiety and persistent sadness.
