SOCIAL
Tech Companies Agree to New Accord to Limit the Impacts of AI Deepfakes
The latest examples of generative AI video have wowed people with their accuracy, but they also underline the growing threat posed by artificial content, which could soon be used to depict unreal yet convincing scenes that influence people’s opinions, and their subsequent responses.
Like, for example, how they vote.
With this in mind, late last week, at the 2024 Munich Security Conference, representatives from almost every major tech company agreed to a new pact to implement “reasonable precautions” to prevent artificial intelligence tools from being used to disrupt democratic elections.
As per the “Tech Accord to Combat Deceptive Use of AI in 2024 Elections”:
“2024 will bring more elections to more people than any year in history, with more than 40 countries and more than four billion people choosing their leaders and representatives through the right to vote. At the same time, the rapid development of artificial intelligence, or AI, is creating new opportunities as well as challenges for the democratic process. All of society will have to lean into the opportunities afforded by AI and to take new steps together to protect elections and the electoral process during this exceptional year.”
Executives from Google, Meta, Microsoft, OpenAI, X, and TikTok are among those who’ve agreed to the new accord, which will ideally see broader cooperation and coordination to help address AI-generated fakes before they can have an impact.
The accord lays out seven key elements of focus, which all signatories have agreed to, in principle, as key measures.
The main benefit of the initiative is the commitment from each company to work together to share best practices, and “explore new pathways to share best-in-class tools and/or technical signals about Deceptive AI Election Content in response to incidents”.
The agreement also sets out an ambition for each “to engage with a diverse set of global civil society organizations, academics” in order to inform broader understanding of the global risk landscape.
It’s a positive step, though it’s also non-binding, and it’s more of a goodwill gesture on the part of each company to work towards the best solutions. As such, it doesn’t lay out definitive actions to be taken, or penalties for failing to do so. But it does, ideally, set the stage for broader collaborative action to stop misleading AI content before it can have a significant impact.
Though that impact is relative.
For example, in the recent Indonesian election, various AI deepfake elements were employed to sway voters, including a video depiction of deceased leader Suharto designed to inspire support, and cartoonish versions of some candidates, as a means to soften their public personas.
These examples were clearly AI-generated from the start, and no one was going to be misled into believing that they were actual images of how the candidates look, nor that Suharto had returned from the dead. But even with that knowledge, such depictions can significantly shape perception, and that influence can linger even if the content is subsequently removed, labeled, etc.
That could be the real risk. If an AI-generated image of Joe Biden or Donald Trump has enough resonance, its origin could be beside the point, as it could still sway voters based on the depiction, whether it’s real or not.
Perception matters, and smart use of deepfakes will sway some voters, regardless of safeguards and precautions.
That’s a risk we now have to bear, given that such tools are already readily available. And as with social media before, we’re going to be assessing the impacts in retrospect, as opposed to plugging holes ahead of time.

Because that’s the way technology works: we move fast and break things. Then we pick up the pieces.