

Facebook is Launching a New Investigation into Potential Algorithmic Bias in its Systems



Facebook is launching a new investigation into potential bias within its algorithms, as it works to improve its systems in response to the #BlackLivesMatter movement, and in light of its recent civil rights audit.

As reported by The Wall Street Journal, both Facebook and Instagram will launch new examinations of their core algorithms.

As per WSJ:

“The newly formed ‘equity and inclusion team’ at Instagram will examine how Black, Hispanic and other minority users in the U.S. are affected by the company’s algorithms, including its machine-learning systems, and how those effects compare with white users, according to people familiar with the matter.”

Facebook will establish a similar team for its main app.

As noted, the move comes in response to rising calls for improved representation at all levels following the recent #BlackLivesMatter protests. Facebook’s own civil rights audit, conducted over two years and published earlier this month, also found various concerns with the platform’s systems, including the potential for algorithmic bias.

As per the report:

“Because algorithms work behind the scenes, poorly designed, biased, or discriminatory algorithms can silently create disparities that go undetected for a long time unless systems are in place to assess them.” 

Facebook’s algorithms have inadvertently facilitated discriminatory processes in the past. Back in 2016, a report from ProPublica showed that it was possible to use Facebook’s ‘ethnic affinities’ demographic segmentation to exclude specific racial groups from an ad’s reach – which, in categories like housing and employment, violates federal law.


Facebook subsequently suspended the ability to exclude racial groups from ad targeting. At the time, however, Facebook also noted that many such targeting options were built by its machine learning systems based on usage trends. As such, they were more a result of the algorithm surfacing options from the available data than of Facebook deliberately enabling discriminatory targeting.

Facebook eventually removed all potentially discriminatory targeting options for housing, employment or credit ads last year. But even then, experts noted that any algorithmically defined system remains susceptible to inherent bias, based on the input data set.

As per Pauline Kim, a professor of employment law at Washington University:

“It’s within the realm of possibility, depending on how the algorithm is constructed, that you could end up serving ads, inadvertently, to biased audiences.”

That’s because the system simply learns from the data it’s fed.

As a basic illustration, if your company mostly hires white people, there’s a chance that an algorithm looking to display your job ads to candidates would serve those ads only to white users, based on the data it has available.
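That illustration can be sketched in a few lines of code. Everything here is made up for demonstration purposes – the data is fabricated and the “model” is a deliberately naive frequency counter, not anything Facebook actually uses – but it shows the core problem: a system that learns purely from historical outcomes reproduces whatever imbalance those outcomes contain.

```python
# Hypothetical sketch: an ad-serving "algorithm" that learns from past
# hires will reproduce whatever imbalance exists in that history.
from collections import Counter

# Fabricated training data: a company's past 100 hires, skewed 90/10.
past_hires = ["white"] * 90 + ["black"] * 10

def learned_targeting(history):
    """Naive 'model': serve ads to each group in proportion to past hires."""
    counts = Counter(history)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

weights = learned_targeting(past_hires)
print(weights)  # {'white': 0.9, 'black': 0.1}
```

No one wrote a discriminatory rule here – the skew comes entirely from the input data, which is exactly why such disparities can, as the audit notes, “go undetected for a long time unless systems are in place to assess them.”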

Essentially, the concern is that any algorithm based on real-world data will always reflect current-world biases, and Facebook won’t be able to detect such within its processes without conducting a full examination of its systems.

This is a significant concern, and it’s good to see Facebook looking to address it, particularly given that it was a key focus of the recent civil rights audit.

If Facebook can improve its systems, and weed out algorithmic bias, that could go a long way to improving equality, while the lessons learned may also help other platforms address the same in their own systems.  

The move may also help Facebook repair relations with civil rights groups, who have led a boycott of Facebook ads in July over the company’s refusal to address hate speech posted to the network by US President Donald Trump.

There’s a long way to go on this front, but addressing key elements like this could help Facebook show that it’s taking its responsibilities seriously in this respect.


Twitter Expands Content Recommendations, Showing Users More Tweets from Profiles They Don’t Follow




Suddenly seeing a heap more random accounts appear in your Twitter feed?

This is why – today, Twitter ramped up its tweet recommendations for a heap more users.

So you’re going to see more tweets in your feed based on things like:

  • Interests based on tweet activity
  • Topics you follow
  • Tweets you’ve engaged with
  • Tweets people in your network like
  • People followed by people you follow
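One simple way to picture how a feed ranker could combine signals like the five above is as a weighted score per candidate tweet. To be clear, this is purely an illustrative sketch – the signal names, weights, and scoring approach are all hypothetical, and nothing here reflects Twitter’s actual implementation.

```python
# Hypothetical ranker sketch: weight each of the recommendation signals
# listed above and sum them to score a candidate tweet. All names and
# weights are invented for illustration, not Twitter's real system.
HYPOTHETICAL_WEIGHTS = {
    "matches_interest": 2.0,        # interests based on tweet activity
    "matches_topic": 1.5,           # topics you follow
    "similar_to_engaged": 1.2,      # tweets you've engaged with
    "liked_by_network": 1.0,        # tweets people in your network like
    "author_follow_of_follows": 0.8,  # people followed by people you follow
}

def score_tweet(signals):
    """Sum the weights of whichever signals fire for this tweet."""
    return sum(HYPOTHETICAL_WEIGHTS[name] for name, fired in signals.items() if fired)

candidate = {
    "matches_interest": True,
    "matches_topic": False,
    "similar_to_engaged": True,
    "liked_by_network": True,
    "author_follow_of_follows": False,
}
print(round(score_tweet(candidate), 1))  # 4.2
```

The quality complaints later in this piece map directly onto a model like this: if the “interest” signal fires on the wrong topics, every downstream recommendation inherits the error.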

There’s a heap of expanded exposure potential here, and Twitter, in an effort to juice engagement, is looking to keep people in the app for as long as possible, which, ideally, these recommendations will facilitate.

It’s similar to how Facebook and Instagram are now showing you more AI-based content recommendations – an approach that stems from TikTok and its focus on highlighting the most relevant content for each user, regardless of your own social graph.

There was a time when your social graph was the defining factor, which gave Facebook a huge advantage, but now, there’s been a bigger shift towards entertainment over social interaction, which expands the potential to show each user more interesting content, from a much broader range of sources.

Conceptually it makes sense, but it’s largely reliant on the platform algorithms being actually good at showing you the best content, based on your interests. TikTok is very good at this, hooking into your expressed likes and dislikes based on your viewing history.

Twitter, however, not so much.

In my experience, Twitter’s recommended topics are always pretty far off, and even within those topics, the tweets it highlights tend to also be off-topic, uninteresting, and even just weird a lot of the time.

Right now, Twitter seems convinced that I’m interested in ‘AirBnB’, ‘beauty Influencers’ and ‘Blink 182’. I’m not interested in any of these things, which I’ve tried to tell Twitter’s algorithms by selecting the ‘Not interested in this topic’ option – yet every time I re-open the app, they’re on my Explore page once again.

It could be worse – last month it was showing me ‘Peanuts’ comics, so I had Charlie Brown’s massive head staring back at me every time I tapped over to the Explore tab.

Again, I’ve directly told Twitter that I’m not interested, but it keeps showing them to me. And today, after this new announcement, my feed is dominated by recommended tweets.

And they just keep coming – every time I scroll back to the top, another 20 tweets are in my feed, with 80% being recommendations.

Look, this is probably a short-term push, and maybe it helps people discover new users to follow, and helps Twitter boost engagement. But again, if you’re seeing a heap more recommendations, this is why.

Hopefully, the feedback will help Twitter refine its topic and content streams.  
