OpenAI knows that its technology could be abused in a major election, but does not seem to have a clue how to stop it.


Remember Cambridge Analytica? The British political consultancy, which operated from 2013 to 2018, had one mission: to scrape up data on unwitting Facebook users and use that personal information to tailor political advertisements that would, in theory, sway their voting intentions (for an exorbitant price, of course). When the scandal broke, Facebook was accused of having enabled interference in everything from the UK's Brexit referendum to Donald Trump's 2016 presidential campaign.

Now, AI threatens to make all of that look like patty-cake. In a year of presidential elections in the US and a general election in the UK, AI has reached the point where it can deepfake candidates, mimic their voices, generate political messaging from a prompt, and personalize it for individual users. With Biden and Trump bickering over the ranking of Xenoblade games, it is hard to imagine that this technology will not be drawn into this and other elections. This is a serious issue. Online discourse is bad enough as it is, with neither side believing a word the other says, and misinformation is already rampant. Adding outright fabricated content and AI-driven targeting (among other things) to that mix could be both explosive and disastrous. OpenAI, the most high-profile company in the field, recognizes that we may be heading into choppy waters. But while it appears adept enough at identifying the problem, it is unclear whether it can actually address it.

OpenAI is all about "protecting the integrity of elections" and wants to "ensure that our technology is not used in a way that would undermine this process."

After noting the positive aspects and unprecedented nature of AI, the company gets to the heart of the matter: the "potential for abuse."

The company cites examples of these problems: "misleading 'deep fakes,' massive influence operations, and chatbots impersonating candidates." On the first of those, DALL-E (the company's image-generation technology) "rejects requests to generate images of real people, including candidates." More worryingly, OpenAI admits it does not yet know enough about how effectively its tools might be used for personalized persuasion in this context.

I see.

Well, fake images of candidates and campaigning applications are both off the table, and OpenAI will not allow its software to be used to build chatbots posing as real people or institutions. The list goes on: no misrepresenting the voting process or eligibility (e.g., when, where, and who is eligible to vote), and no discouraging people from voting (e.g., by claiming that voting is pointless).

OpenAI also details its latest image-provenance measures (which tag everything created by the latest iteration of DALL-E) and is testing a provenance classifier that can detect DALL-E-generated images, with "promising initial results." The tool will soon be rolled out for testing by journalists, platforms, and researchers.

However, one of its reassurances raises more questions than it answers: "ChatGPT is increasingly integrating with existing sources of information. For example, users will be able to access real-time news coverage from around the world, including attribution and links. Transparency about the source of information and balance in news sources will help voters better evaluate information and decide for themselves what they can trust."


Hmm. That makes OpenAI's tools sound like little more than RSS feeds, but the prospect of news coverage being filtered through AI is rather more troubling than that suggests. The thought of it generating, say, coverage of the Israeli-Palestinian conflict or Trump's latest conspiracy theory sounds pretty dystopian to me.

Frankly, it sounds like OpenAI knows something bad is going to happen... but it doesn't yet know what that bad thing will be, how it intends to deal with it, or even whether it can. It's all very well to say that it won't let anyone make a Joe Biden deepfake, but perhaps that's not the real issue. The bigger issue is how the tools will be used to amplify and disseminate particular stories and talking points, and to micro-target individuals.

Reading between the lines, it's as if we're driving fast in the dark with no headlights. OpenAI talks about things like directing people to CanIVote.org when they ask questions about voting, but that kind of measure seems like small beer compared with the potential enormity of the problem. The most telling part is the expectation that "lessons from this work will be reflected in approaches in other countries and regions," which roughly translates to "we'll see what bad things people do and fix them after the fact."

That may be too harsh. But the typical tech-speak about "learning" and working with partners doesn't address the biggest issue here: AI is a tool with the potential to interfere with, and even influence, major democratic elections. It is inevitable that malicious actors will try to use it for that purpose, and neither we nor OpenAI know what form that will take. The Washington Post's famous slogan declares that "Democracy Dies in Darkness." But it may be wrong: perhaps democracy dies in chaos and lies, with voters either becoming cultists or staying home.

