LinkedIn Announces Tougher Measures Against Inappropriate Content on its Platform



Amid the various divisive debates and concerns at present – which, if anything, look set to become even more incendiary as we head towards the US election – LinkedIn has this week outlined a range of new measures that it’s implementing to ensure that its members feel comfortable and protected when engaging on the platform.

As explained by LinkedIn:

“Every LinkedIn member has the right to a safe, trusted, and professional experience on our platform. We’ve heard from some of you that we should set a higher bar for safe conversations given the professional context of LinkedIn. We could not agree more. We’re committed to making sure conversations remain respectful and professional.”

In line with this, LinkedIn has announced the following updates:

Making policies stronger and clearer

LinkedIn says that it’s working to refine its Professional Community Policies in order to clarify that “hateful, harassing, inflammatory or racist content has absolutely no place on our platform”.

“In this ever-changing world, people are bringing more conversations about sensitive topics to LinkedIn and it’s critical these conversations stay constructive and respectful, never harmful. When we see content or behavior that violates our Policies, we take swift action to remove it.”

LinkedIn also notes that it’s rolling out new educational content to help users understand their obligations in this respect, which will appear as pop-up notifications or reminders when you go to post, message or otherwise engage.

Using AI and machine learning to protect against inappropriate content

LinkedIn says that it’s also working with parent company Microsoft to help keep the LinkedIn feed appropriate and professional.


“More recently, we’ve scaled our defenses with new AI models for finding and removing profiles containing inappropriate content, and we’ve created a LinkedIn Fairness Toolkit (LiFT) to help us measure multiple definitions of fairness in large-scale machine learning workflows.”

LinkedIn published a full overview of the LinkedIn Fairness Toolkit (LiFT) earlier this week, which facilitates: 

“… a more equitable platform by avoiding harmful biases in our models and ensuring that people with equal talent have equal access to job opportunities.” 
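
LiFT itself is a Scala/Spark library, but the kind of check it performs can be illustrated in plain Python. The sketch below computes one common fairness definition, the demographic parity gap: the difference in positive-outcome rates (e.g. being surfaced for a job opportunity) between groups. The function name and sample data are illustrative assumptions, not LiFT’s actual API.

```python
from collections import defaultdict

def demographic_parity_gap(outcomes):
    """Gap in positive-outcome rates across groups.

    outcomes: list of (group, got_positive_outcome) tuples, where
    ("A", True) means a member of group A received the outcome
    (such as being shown a job opportunity).
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, positive in outcomes:
        totals[group] += 1
        if positive:
            positives[group] += 1
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Example: group A sees the outcome 75% of the time, group B only 25%.
sample = [("A", True), ("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False), ("B", False)]
print(demographic_parity_gap(sample))  # 0.5
```

A gap of 0 would mean both groups receive the outcome at the same rate; tools like LiFT compute metrics of this kind at the scale of large machine learning workflows.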

Creating economic opportunity for every member of the global workforce is now the key focus of former LinkedIn CEO Jeff Weiner, who stepped down from the top job in June to concentrate on this mission. The COVID-19 pandemic may actually open the door to a significant shift on this front: as businesses work to get the economy back on track, there’s a fresh opportunity to implement updated standards on equality, which could help reduce systemic bias.

It’s a hard task, but LinkedIn is already taking steps on this front.

In addition to this, LinkedIn also recently rolled out a new process to detect and hide inappropriate InMail messages, tackling another key area of concern for users.

Closing the loop when you report content that violates our policies

LinkedIn also notes that, in the coming weeks, it will be providing more transparency in its enforcement efforts when taking action on content that violates platform policies.

“We’ll close the loop with members who report inappropriate content, letting them know the action we’ve taken on their report. And, for members who violate our policies, we’ll inform them about which policy they violated and why their content was removed.”

These are important initiatives for LinkedIn, as each of these issues can have a significant, negative impact on users in varying forms. It’s good to see LinkedIn taking a more definitive stand, and while we’ll have to wait and see what impact these efforts actually have, it’s encouraging that the platform is getting on the front foot and detailing its updated processes.


In terms of actions users can take themselves, LinkedIn advises that members should ignore and report unwanted connection requests, and utilize its updated audience control options on their posts, limiting who can see and reply to their updates if they feel unsafe.

“You now have the option to select who gets to see your content. You can select ‘Anyone’, which makes your post visible to anyone on or off LinkedIn, ‘Anyone + Twitter’, which makes your post visible to anyone on both LinkedIn and Twitter, or ‘Connections only’, which makes your post visible to only your 1st-degree connections and reduces the likelihood of people you don’t know or don’t trust seeing your post.” 

Twitter implemented similar controls recently, with the option to limit who can reply to your tweets, while Instagram has also added more tools to limit who can engage with your updates.

Of course, due to LinkedIn’s algorithm, the number of people who see your posts will be limited either way, but the controls give you more say over your audience, which could help you limit unwanted interactions.

In some ways, it’s sad that there’s a need to implement such controls and options, but it’s reflective of how people choose to interact and engage on social media. Social platforms have now become a critical element in modern discourse, and that, unfortunately, also includes negative interactions.

The idea of a globally connected, interactive space is idealistic, and as we’ve increasingly found, there’s a need for limitations around that connection.

It’s sad, but realistic. And as such, it’s also important for LinkedIn to take these steps.  


You can read more about LinkedIn’s security updates here.



Meta’s Developing the World’s Fastest AI Supercomputer to Fuel its Metaverse Vision




As it looks to a future in the still-theoretical ‘metaverse’, Meta will need to upgrade its computing power and systems in order to facilitate simultaneous connection in wholly immersive digital worlds, while it’ll also need more advanced computing capacity to fuel the next stage of its AI plans.

Which is why Meta is developing a new AI Research SuperCluster (RSC), which it says will eventually become the fastest AI supercomputer in the world, when it’s fully built out by mid-2022.

The advanced system will eventually be able to perform ‘5 exaflops of mixed precision compute’ at peak, which is a scale that’s hard to truly conceptualize. But basically, Meta’s new, advanced computational system will be able to process huge amounts of data, facilitating development across a wide range of applications, with a specific view towards the next stage of its metaverse vision.
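
To put that figure in rough perspective: one exaflop is 10^18 (a quintillion) floating point operations per second. The back-of-the-envelope comparison below is purely illustrative (the laptop figure is an assumed ballpark, and real throughput varies with precision and workload), but it conveys the scale:

```python
EXAFLOP = 10 ** 18  # one quintillion floating point operations per second

rsc_peak = 5 * EXAFLOP  # Meta's stated peak for RSC (mixed precision)

# Assumed ballpark for a typical laptop CPU: ~100 gigaflops.
laptop_flops = 100 * 10 ** 9

print(f"RSC peak is roughly {rsc_peak // laptop_flops:,}x a 100-gigaflop laptop")
# prints "RSC peak is roughly 50,000,000x a 100-gigaflop laptop"
```

In other words, the system is on the order of tens of millions of times faster than consumer hardware.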

As explained by Meta:

“RSC will help Meta’s AI researchers build new and better AI models that can learn from trillions of examples; work across hundreds of different languages; seamlessly analyze text, images, and video together; develop new augmented reality tools; and much more. We hope RSC will help us build entirely new AI systems that can, for example, power real-time voice translations to large groups of people, each speaking a different language, so they can seamlessly collaborate on a research project or play an AR game together.”

AR is clearly a key focus, with Meta developing its own AR-enabled glasses that will expand the use cases for the technology. The RSC will provide increased capacity to develop more complex AR systems, advancing Meta’s tools beyond what’s currently available, and ideally positioning its AR glasses as the most advanced model on the market, helping Meta dominate the space over rivals Snap and Apple.


Unless, of course, Snap and Apple team up, which is my prediction. But even so, with the additional computing power of the RSC behind it, Meta could still be well ahead, which could be a key step in bridging our current online experience to the next stage.

Which is where Meta is really focused:

“Ultimately, the work done with RSC will pave the way toward building technologies for the next major computing platform – the metaverse, where AI-driven applications and products will play an important role.”

It’s worth noting here that Meta specifically says the metaverse will take years to develop; it’s not something that’s happening overnight, nor will it become an all-immersive, integrated world by next year. Which is why any company or project that’s pitching itself as ‘metaverse ready’ is kidding itself: the metaverse, as it’s broadly envisioned, will require massive collaboration between platforms in order to transfer your digital identity between virtual worlds, taking your avatars, skins, digital items, and more with you.

Meta is keen to reiterate that it won’t own that space, as such:

“No one company can (or should) build the metaverse alone. It will be built by people and businesses all over the world. And it’ll be important that experiences built by different companies or people, like avatars or virtual worlds, work together.”

But really, Meta is best-placed to host the party, via its industry-leading consumer VR tools and advanced computing systems like RSC, which will give it a significant advantage in dictating what the metaverse will be, and who will be able to sign up.


Eventually, this will require industry agreement on schemas and systems that will likely enable any service to join. But they’ll still need a host platform, along with software/hardware connection. Meta will be at the forefront of that aspect, which, again, will see it well-placed to define the rules of the space, and dominate the next stage of digital connection – whether it technically ‘owns’ it or not.

But it is worth noting that the metaverse does not exist yet, not in any form, and any platform or project that claims otherwise is ultimately misleading. Those NFT projects that claim to be ‘metaverse-ready’, yeah, no, maybe avoid them.

Eventually, Meta’s RSC will give it significant advantages in developing new systems for everything from combating harmful content on its platforms to building entirely new user experiences. The potential here is massive, and while it will take time to see the results of these developments, it’ll be interesting to see how Meta’s processes evolve in turn, and whether these advanced systems result in a significant acceleration in its development cycles.

You can read more technical details on Meta’s RSC project here.



TikTok Partners with Zefr to Offer Increased Assurance on Safe Ad Placement




TikTok has partnered with brand suitability platform Zefr on a new brand safety post-bid measurement solution for in-feed ads, which will enable advertisers to ensure that their TikTok promotions don’t appear alongside potentially offensive material.

Using Zefr’s dashboard, which provides insights into each campaign by mapping it against the Global Alliance for Responsible Media (GARM) Suitability Risk categories, advertisers will now be able to ensure that their TikTok ads are not shown next to content that they don’t want to be associated with.

As explained by TikTok:

“This solution will provide advertisers with campaign insights into brand safety and brand suitability for their TikTok campaigns. These insights provide clients with third-party impartial reassurance that their investment is delivered next to content suitable for their brand, protecting brand reputation and mitigating risk.”

Zefr’s advanced ‘Cognition AI’ process utilizes audio, text, and frame-by-frame video analysis, along with scaled human review, to determine brand safety, and provide full assurance on potential ad placement.

With TikTok’s challenges and posts sometimes veering into dangerous territory, the option will help to reassure brands that their campaigns won’t end up being associated with potential harm, which could help TikTok secure even more ad spend.

Though it could be difficult to guarantee 100% success here. For example, the recent ‘Milk Crate Challenge’ on TikTok started off innocently enough, but eventually led to increasingly risky and dangerous behaviors, which resulted in serious injuries to some participants. Other TikTok challenges could follow a similar evolution – though the additional assurance of Zefr’s systems will ideally help to catch these before they become a potential brand risk, or at least flag them as soon as they’re identified as a problem.


It’s a good integration, and another key step in TikTok’s broader expansion of its ad tools.

The new TikTok Zefr integration is available to advertisers in the US, Canada, the UK, France, Germany, Italy, Poland and Spain.



How to Elevate Your Social Media ROI [Infographic]




Looking for ways to improve your social media marketing efforts in 2022?

As we head into the new year, it’s worth revisiting your business goals and establishing a clear direction for your digital marketing process. Maybe you’re happy with the growth and interaction you’re seeing, and how that’s leading to conversions, but over the past two years in particular, there’s no doubt been some level of disruption to your marketing plans.

With that in mind, this infographic from the team at Click Dimensions could help. They’ve put together a simple overview of how to establish your social media marketing goals, including which metrics to focus on, how to increase engagement, and the importance of adapting as things progress.

It could help to spark some new thinking in your approach – check out the full infographic below.
