SOCIAL
Facebook Parent Meta Plans Another Round of Job Cuts, May Lay Off Over 1,000 Employees: Report

Facebook-parent Meta Platforms is planning a fresh round of job cuts in a reorganisation and downsizing effort that could affect thousands of workers, the Washington Post reported on Wednesday.
Last year, the social media company let go 13 percent of its workforce — more than 11,000 employees — as it grappled with soaring costs and a weak advertising market.
Meta now plans to push some leaders into lower-level roles without direct reports, flattening the layers of management between top boss Mark Zuckerberg and the company’s interns, the Washington Post reported, citing a person familiar with the matter.
Meta declined a Reuters request for comment, but spokesperson Andy Stone in a series of tweets cited several previous statements by Zuckerberg suggesting that more cuts were on the way.
Zuckerberg told investors earlier this month that last year’s layoffs were “the beginning of our focus on efficiency and not the end.” He said he would work on “flattening our org structure and removing some layers of middle management.”
Last year’s layoffs were the first in Meta’s 18-year history. Other tech companies have cut thousands of jobs, including Google parent Alphabet, Microsoft and Snap.
Meta aggressively hired during the COVID-19 pandemic to meet a surge in social media usage by stuck-at-home consumers. But business suffered in 2022 as advertisers pulled the plug on spending in the face of rapidly rising interest rates.
Meta, once worth more than $1 trillion (nearly Rs. 82,84,000 crore), is now valued at $446 billion (nearly Rs. 36,94,700 crore). Meta shares were down about 0.5 percent on Wednesday.
The company has said it would also reduce office space, lower discretionary spending and extend a hiring freeze into 2023 to rein in expenses.
More than 1,00,000 layoffs were announced at US companies in January, led by technology companies, according to a report from employment firm Challenger, Gray & Christmas.
© Thomson Reuters 2023
SOCIAL
LinkedIn Experiments with New AI Assistant for InMails

Microsoft-owned LinkedIn is experimenting with yet another way to bring generative AI into the app, this time via an AI assistant in your LinkedIn inbox that’ll be able to provide quick answers to questions as you engage in your DMs.
According to a screenshot shared by app researcher Nima Owji, the new LinkedIn inbox assistant would be available via a dedicated icon in the UI, giving you a generative AI helper for your LinkedIn responses. That could make it easier to research key points, check spelling, get advice on conversational elements, and more.
The addition would expand Microsoft’s growing generative AI empire, with the tech giant looking to use its partnership with OpenAI to incorporate ChatGPT-like tools into every surface it can. That push has already seen it add AI-generated profile summaries, job descriptions, post creation prompts, and more to the LinkedIn experience.
LinkedIn also added generative AI messages for job candidates within its Recruiter platform last month.
It would also see LinkedIn finally follow up on its inbox assistant tool, which it actually first previewed back in 2016.

A slightly blurry slide from a LinkedIn presentation seven years back previewed its coming ‘InBot’ option. InBot, powered in part by Microsoft’s then-evolving AI tools, was supposed to sync with your calendar, which would then enable it to automatically schedule meetings on your behalf, arrange phone calls, set up follow-ups, and more.
But it never came to be. For whatever reason, LinkedIn abandoned the project shortly after this announcement – most likely because LinkedIn was looking to latch onto the short-lived messaging bots trend, which Meta believed would be a revolution in customer service. Till it wasn’t.
Because messaging bots never caught on, LinkedIn likely decided not to bother – though it is interesting that, even back then, shortly after Microsoft’s acquisition of the app, LinkedIn was already talking up the potential of merging Microsoft-powered AI tools into LinkedIn’s functions.
It’s taken a while for that to come to fruition, but soon, we may have a better version of InBot incoming, which would theoretically be able to incorporate these originally planned functions, along with more advanced generative AI responses and prompts.
That could actually be pretty valuable on LinkedIn, with various functions that could help you maximize your lead nurturing efforts, including immediately accessible info on the user that you’re interacting with, to personalize the exchange.
Of course, there is also a level of risk that the more AI tools LinkedIn adds, the less human the app will become, with users getting generative tools to come up with more posts, messages, profile summaries, and everything else in between over time.
Eventually, that could see a lot of LinkedIn interactions becoming bots talking to other bots, while the real humans behind each account remain distant. Which would see more engagement happening in the app – and would certainly make for some interesting IRL meet-up scenarios as a result. But it does also seem like LinkedIn could, maybe, be overdoing it, depending on how all of these tools are integrated.
We’ll find out. There’s no timeline on a potential launch for the new AI chatbot tool as yet.
SOCIAL
Op-Ed: BBC says social media erasing war crimes videos

When the staid and stately BBC starts complaining on its own website about content deletions on social media, you know there’s a problem. According to the BBC, its Ukraine war posts were taken down and it was then blocked from uploading, at least on Instagram.
The problems apply to Facebook and YouTube, as well as Instagram. Both AI and human moderators may be involved. The information is as blurry as you’d expect. Every case is a bit different. It’s not that easy to decide what to show and what not to show.
Given the constant complaints about social media disinformation and propaganda bots, it’s not a great look. You’d think these arbitrary decisions would get at least some scrutiny.
To be fair –
- Graphic depictions of some things have been giving social media moderators PTSD for years. It’s pretty obvious that they’re dealing with utter filth.
- It’s truly gruesome. They’re trying to filter out as much of this trash as possible, with good reason.
- That’s the main reason social media isn’t just another version of the porn industry and/or any other toxic stuff you care to name.
- Social media does have a responsibility to manage these issues, and it does, to whatever extent it can.
- Let’s not underestimate the degrees of difficulty in managing footage at the production and publishing ends. Some things really can’t be shown, often for multiple reasons.
…So the arbitrary blocks and standards aren’t totally useless; just incredibly annoying sometimes.
That said, the question of not showing war crimes, sanitized or otherwise, is a very mixed issue. It’s not like you’re going to see war crimes in progress and like it. It can be traumatic. People do have a right NOT to be traumatized, despite right-wing media.
This isn’t really censorship in the conventional sense. It’s a judgment call on what can be shown.
There are a few options for social media:
1. Simply don’t show them as a rule, not a guessing game.
2. Selective edits.
3. “Viewer discretion” notices.
4. A Yes/No process with due notification to posters.
5. Penalties for abuse of rules.
This is where it gets even trickier. Rules can be their own goals. YouTubers in particular have a lot of issues with demonetization and content rules. It’s confusing. The US legal principle of “Fair Use” seems to be more of a raffle in some cases.
When it comes to war crimes and hard facts, however, it’s a totally different ball game. It’s about lots of people dying. This scale of human misery can’t be a non-topic.
A less obvious issue is that the BBC, a global news service, is being vetted by algorithms. That can’t go unchallenged. Where are the lines drawn? The news media is doing its job for a change, and yet it can’t spread the news?
People literally risk their lives to get this material. There’s nothing more relevant going on in the world today.
I’m not saying there’s a simple answer. There does have to be a fix. …Or these war crimes can be dismissed as “fake news”. We know what happens then.
Disclaimer
The opinions expressed in this Op-Ed are those of the author. They do not purport to reflect the opinions or views of the Digital Journal or its members.
SOCIAL
How to Grow Your Small Business on TikTok

To effectively market on TikTok, it’s crucial to deliver value to your intended audience. By Kelly Richardson, co-founder of Infobrandz, who helps people build businesses through visual communication and her influential blogs. TikTok has exploded in popularity over the past few years, with more …