AI’s Nefarious Potential
WHEN the parole board in the US state of Louisiana met in October to discuss the potential release of a convicted murderer, it called on a doctor with years of experience in mental health to talk about the inmate.
The board was not the only group paying attention.
A collection of online trolls took screenshots of the doctor from an online feed of her testimony and edited the images with artificial intelligence tools to make her appear naked. They then shared the manipulated files on 4chan, an anonymous message board known for fostering harassment and spreading hateful content and conspiracy theories.
It was one of numerous times that people on 4chan had used new AI-powered tools such as audio editors and image generators to spread racist and offensive content about people who had appeared before the parole board, according to Daniel Siegel, a graduate student at Columbia University who researches how AI is being exploited for malicious purposes. Siegel chronicled the activity on the site for several months.
The manipulated images and audio have not spread far beyond the confines of 4chan, Siegel said. But experts who monitor fringe message boards said the efforts offered a glimpse at how nefarious internet users could employ sophisticated AI tools to supercharge online harassment and hate campaigns in the months and years ahead.
Callum Hood, head of research at the Center for Countering Digital Hate, said fringe sites such as 4chan – perhaps the most notorious of them all – often gave early warning signs for how new technology would be used to project extreme ideas.
Those platforms, he said, are filled with young people who are “very quick to adopt new technologies” such as AI to “project their ideology back into mainstream spaces”.
Those tactics, he said, are often then adopted by users on more popular online platforms.
Here are several problems resulting from AI tools that experts discovered on 4chan – and what regulators and technology companies are doing about them.
AI tools such as Dall-E and Midjourney generate novel images from simple text descriptions. But a new wave of AI image generators is built specifically to create fake pornography, including by removing clothes from existing images.
“They can use AI to just create an image of exactly what they want,” Hood said of online hate and misinformation campaigns.
There is no law in the United States banning the creation of fake images of people, leaving groups such as the Louisiana parole board scrambling to determine what can be done. The board opened an investigation in response to Siegel’s findings on 4chan.
“Any images that are produced portraying our board members or any participants in our hearings in a negative manner, we would definitely take issue with,” said Francis Abbott, executive director of the Louisiana Board of Pardons and Committee on Parole.
“But we do have to operate within the law, and whether it’s against the law or not – that has to be determined by somebody else.”
Illinois expanded its law governing revenge pornography to allow targets of non-consensual pornography made by AI systems to sue creators or distributors. California, Virginia and New York have also passed laws banning the distribution or creation of AI-generated pornography without consent.
Late last year, AI company ElevenLabs released a tool that could create a convincing digital replica of someone’s voice saying anything typed into the program.
Almost as soon as the tool went live, users on 4chan circulated clips of a fake Emma Watson, a British actor, reading Adolf Hitler’s manifesto, Mein Kampf.
Using content from the Louisiana parole board hearings, 4chan users have since shared fake clips of judges uttering offensive and racist comments about defendants.
Many of the clips were generated by ElevenLabs’ tool, according to Siegel, who used an AI voice identifier developed by ElevenLabs to investigate their origins.
ElevenLabs rushed to impose limits, including requiring users to pay before they could gain access to voice-cloning tools. But the changes did not seem to slow the spread of AI-created voices, experts said.
Scores of videos using fake celebrity voices have circulated on TikTok and YouTube – many of them sharing political disinformation.
Some major social media companies, including TikTok and YouTube, have since required labels on some AI content.
US President Joe Biden issued an executive order in October asking that all companies label such content and directing the Commerce Department to develop standards for watermarking and authenticating AI content.
As Meta moved to gain a foothold in the AI race, the company embraced a strategy to release its software code to researchers. The approach, broadly called “open source,” can speed development by giving academics and technologists access to more raw material to find improvements and develop their own tools.
When the company released Llama, its large language model, to select researchers last February, the code quickly leaked onto 4chan. People there used it for different ends: they tweaked the code to lower or eliminate guardrails, creating new chatbots capable of producing antisemitic ideas.
The effort previewed how free-to-use and open-source AI tools can be tweaked by technologically savvy users.
“While the model is not accessible to all, and some have tried to circumvent the approval process, we believe the current release strategy allows us to balance responsibility and openness,” a spokesman for Meta said in an email.
In the months since, language models have been developed to echo far-right talking points or to create more sexually explicit content. Image generators have been tweaked by 4chan users to produce nude images or provide racist memes, bypassing the controls imposed by larger technology companies. — ©2024 The New York Times Company
Source Credit: https://www.thestar.com.my/news/focus/2024/01/30/ais-nefarious-potential