Facebook’s parent company Meta will revamp its targeted advertising system following accusations that it allowed landlords to run discriminatory ads. The change is part of a sweeping settlement of a Fair Housing Act lawsuit announced Tuesday by the U.S. Justice Department.
This is the second time the company has settled a lawsuit over adtech discrimination, but the new settlement goes further than the first. It requires Meta to overhaul its ad targeting tool, Lookalike Audiences, which makes it possible to target housing ads by race, gender, religion and other sensitive characteristics, enabling discrimination.
“Because of this groundbreaking lawsuit, Meta will — for the first time — change its ad delivery system to address algorithmic discrimination,” Damian Williams, U.S. Attorney for the Southern District of New York, said in a statement. “But if Meta fails to demonstrate that it has sufficiently changed its delivery system to guard against algorithmic bias, this office will proceed with the litigation.”
Meta must build a new ad system that ensures housing ads are delivered to a more equitable mix of people. It must also submit the system to a third party for review and pay a $115,054 fine, the maximum penalty available under the law.
This new system will use machine learning to reduce bias in ad delivery. It “will work to ensure the age, gender and estimated race or ethnicity of a housing ad’s overall audience matches the age, gender, and estimated race or ethnicity mix of the population eligible to see that ad,” the company said in a statement.
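Meta hasn’t published the mechanics, but at a high level the description amounts to comparing the demographic mix of the people an ad actually reaches against the mix of everyone eligible to see it, then adjusting delivery until the two align. Here is a minimal sketch of that idea in Python; the group labels, data and reweighting scheme are hypothetical illustrations, not Meta’s actual implementation:

```python
from collections import Counter

def distribution(labels):
    """Turn a list of demographic-group labels into a probability distribution."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

def total_variation(p, q):
    """Total variation distance between two mixes: 0 means they match exactly."""
    groups = set(p) | set(q)
    return 0.5 * sum(abs(p.get(g, 0.0) - q.get(g, 0.0)) for g in groups)

def reweight(delivered, eligible):
    """Per-group multipliers that would pull the delivered mix toward the eligible mix."""
    return {g: eligible[g] / delivered[g] for g in eligible if delivered.get(g)}

# Hypothetical data: group label of every eligible user vs. every user who saw the ad.
eligible = distribution(["A"] * 2 + ["B"] * 3 + ["C"] * 5)
delivered = distribution(["A"] * 4 + ["B"] * 3 + ["C"] * 3)

print(f"skew (total variation): {total_variation(delivered, eligible):.2f}")
print("multipliers to rebalance delivery:", reweight(delivered, eligible))
```

Note that in Meta’s description the demographic inputs are estimates, and the comparison covers an ad’s overall audience, so a real system would work from aggregate measurements rather than per-user labels like those above.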
Worth noting. An MIT study released in March found “machine-learning models that are popular for image recognition tasks actually encode bias when trained on unbalanced data. This bias within the model is impossible to fix later on, even with state-of-the-art fairness-boosting techniques, and even when retraining the model with a balanced dataset.” Earlier this month, MIT released another study, which found that “explanation methods designed to help users determine whether to trust a machine-learning model’s predictions can perpetuate biases and lead to less accurate predictions for people from disadvantaged groups.”
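The core finding is easy to reproduce in miniature. The toy sketch below is entirely synthetic and is not the MIT study’s setup: a logistic regression is trained on data where one group supplies most of the positive examples, so the model learns group membership as a shortcut and its recall drops for the underrepresented group:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n_pos, n_neg, group_id):
    """One noisy feature: positives cluster near 1.0, negatives near 0.0."""
    x = np.concatenate([rng.normal(1.0, 0.8, n_pos), rng.normal(0.0, 0.8, n_neg)])
    y = np.concatenate([np.ones(n_pos), np.zeros(n_neg)])
    X = np.column_stack([x, np.full(n_pos + n_neg, group_id)])
    return X, y

# Unbalanced training data: group 0 supplies most positives, group 1 most negatives.
Xa, ya = make_group(900, 100, group_id=0)
Xb, yb = make_group(100, 900, group_id=1)
model = LogisticRegression(max_iter=1000).fit(
    np.vstack([Xa, Xb]), np.concatenate([ya, yb])
)

# Balanced test sets: the model has learned group membership as a shortcut,
# so it misses far more true positives from the underrepresented group.
for gid in (0, 1):
    Xt, yt = make_group(500, 500, group_id=gid)
    recall = model.predict(Xt)[yt == 1].mean()
    print(f"group {gid}: recall on positives = {recall:.2f}")
```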
Why we care. Adtech bias is getting a lot of attention, and it needs to get more. On the same day as the Facebook settlement, a coalition of major brands, the IAB and the Ad Council announced a plan to address the issue. Automated marketing and ad targeting can result in unintentional discrimination, and they can also scale up intentional discrimination. Intended or not, that discrimination is real, and its impact reaches across society. Technology alone can’t fix this: machine learning and AI can inherit the same biases as the people who build them. This is a problem people caused, and only people can fix it.