
Facebook agrees to revamp adtech over discrimination charges


Facebook’s parent company Meta will revamp its targeted advertising system following accusations that it allowed landlords to run discriminatory ads, as part of a sweeping settlement of a Fair Housing Act lawsuit announced Tuesday by the U.S. Justice Department.

This is the second time the company has settled a lawsuit over adtech discrimination, but Tuesday’s settlement goes further than the previous one. It requires the company to overhaul its ad targeting tool, Lookalike Audiences, which allowed advertisers to target housing ads by race, gender, religion and other sensitive characteristics, enabling discrimination.



“Because of this groundbreaking lawsuit, Meta will — for the first time — change its ad delivery system to address algorithmic discrimination,” Damian Williams, U.S. attorney for the Southern District of New York, said in a statement. “But if Meta fails to demonstrate that it has sufficiently changed its delivery system to guard against algorithmic bias, this office will proceed with the litigation.”

Facebook must build a new ad system that will ensure housing ads are delivered to a more equitable mix of people. It must also submit the system to a third party for review and pay a $115,054 fine, the maximum penalty available under the law.


The new system will use machine learning to reduce demographic skew in ad delivery. It “will work to ensure the age, gender and estimated race or ethnicity of a housing ad’s overall audience matches the age, gender, and estimated race or ethnicity mix of the population eligible to see that ad,” the company said in a statement.
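Meta hasn’t said exactly how the system will work under the hood. Purely as a hypothetical sketch of the core idea (all group labels and data below are invented for illustration), the Python snippet that follows measures how far an ad’s delivered audience drifts from the mix of people eligible to see it. A delivery system like the one described would compute a skew measure along these lines and adjust delivery until it approaches zero.

    # Hypothetical sketch only: Meta has not published its system's internals.
    # This measures how far a delivered audience drifts from the eligible mix.
    from collections import Counter

    def demographic_mix(audience):
        """Return each group's share of the audience (normalized to sum to 1)."""
        counts = Counter(audience)
        total = sum(counts.values())
        return {group: n / total for group, n in counts.items()}

    def delivery_skew(eligible, delivered):
        """Total-variation distance between the eligible and delivered mixes.
        0.0 means delivery mirrors the eligible population; values near 1.0
        mean severe skew."""
        p = demographic_mix(eligible)
        q = demographic_mix(delivered)
        groups = set(p) | set(q)
        return 0.5 * sum(abs(p.get(g, 0) - q.get(g, 0)) for g in groups)

    # Example: the eligible population is split 50/50, but delivery skews 80/20.
    eligible = ["group_a"] * 50 + ["group_b"] * 50
    delivered = ["group_a"] * 80 + ["group_b"] * 20
    print(delivery_skew(eligible, delivered))  # ~0.3: delivery needs correction

In this toy example, an evenly split eligible audience ends up 80/20 in delivery, producing a skew of about 0.3. The settlement effectively obliges Meta to drive numbers like that toward zero and to let a third-party reviewer verify that it has.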


Worth noting. An MIT study released in March found that “machine-learning models that are popular for image recognition tasks actually encode bias when trained on unbalanced data. This bias within the model is impossible to fix later on, even with state-of-the-art fairness-boosting techniques, and even when retraining the model with a balanced dataset.” Earlier this month, MIT released another study, which found that “explanation methods designed to help users determine whether to trust a machine-learning model’s predictions can perpetuate biases and lead to less accurate predictions for people from disadvantaged groups.”

Why we care. Adtech bias is getting a lot of attention, and it needs to get more. On the same day as the Facebook settlement, a coalition of major brands, the IAB and the Ad Council announced a plan to address the issue. Automated marketing and ad targeting can result in unintentional discrimination, and they can also scale up intentional discrimination. Intended or not, the harm is real and reaches across society. Technology alone can’t fix this: machine learning and AI can encode the same biases as the people who build them. This is a problem people caused, and only people can fix it.


About The Author

Constantine von Hoffman is managing editor of MarTech. A veteran journalist, Con has covered business, finance, marketing and tech for CBSNews.com, Brandweek, CMO, and Inc. He has been city editor of the Boston Herald, news producer at NPR, and has written for Harvard Business Review, Boston Magazine, Sierra, and many other publications. He has also been a professional stand-up comedian, given talks at anime and gaming conventions on everything from My Neighbor Totoro to the history of dice and boardgames, and is author of the magical realist novel John Henry the Revelator. He lives in Boston with his wife, Jennifer, and either too many or too few dogs.

