Generative AI has made it easier for the average person to create artwork and other content. It has also made it much easier to spin out some truly wild and controversial stuff—like AI-generated images of SpongeBob SquarePants or Nintendo’s Kirby flying jetliners into the World Trade Center.
While AI image generators have been used to create deepfake sexual material, such tools are also being employed to craft violent or risqué scenes—even ones involving politicians, historical figures, and beloved fictional characters.
Social media platforms are blowing up with images of stickers allegedly made using Facebook’s AI sticker generator. These include images of Elmo wielding a knife, Mickey Mouse in a toilet, Wario and Luigi from the Super Mario franchise holding rifles, and even a scantily-clad rendition of Canadian Prime Minister Justin Trudeau.
“We really do live in the stupidest future imaginable,” wrote video game artist Pier-Olivier Desbiens in a viral tweet containing some of the alleged AI stickers.
When Decrypt contacted Meta and inquired about the AI stickers, a spokesperson pointed to a blog post from the company that said in part: “As with all generative AI systems, the models could return inaccurate or inappropriate outputs. We’ll continue to improve these features as they evolve and more people share their feedback.”
Facebook parent company Meta has dived headlong into generative AI tools in 2023, investing up to $39 billion this year alone. In July, Meta joined OpenAI, Google, Microsoft, and others in pledging to develop AI responsibly, a commitment that followed meetings with the Biden Administration regarding generative AI.
In addition to AI stickers that use Meta’s Llama 2 and Emu, Meta announced during its Meta Connect event last month the launch of a host of AI-generated tools, including conversational AI assistants for WhatsApp, Messenger, and Instagram. The company enlisted several high-profile celebrities—including Snoop Dogg, Tom Brady, Kendall Jenner, and Naomi Osaka—to lend their voices and likenesses to its AI lineup.
Meta says that it is using artificial intelligence to identify harmful content faster and more accurately by training its large language models on the company’s “Community Standards,” adding that it is optimistic generative AI can help it enforce its policies in the future.
The Facebook stickers aren’t the only surreal AI-generated art making waves this week. Along with SpongeBob and Kirby “doing 9/11,” as highlighted in a report from 404 Media, other Twitter users have allegedly been using Microsoft’s Bing AI image generator to turn out their own controversial pop culture riffs—such as a picture of “Neon Genesis Evangelion” anime characters piloting an airliner towards the World Trade Center.
While the AI genie is arguably already out of the bottle, companies are attempting to curb the misuse of their platforms, including instituting know-your-customer (KYC) policies for users, as Microsoft President Brad Smith suggested in September.
Earlier this year, Midjourney ended its free trial after the service was used to create AI-generated deepfakes. Last month, ChatGPT creator OpenAI released the latest version of its AI image generator, DALL-E 3, which included new guardrails and features in an attempt to clamp down on violent, adult, or hateful content. That same month, Getty Images launched a generative AI image tool trained on its vast library of images.
Cybercriminals have stepped up using AI tools to create deepfakes of celebrities, commandeering their likenesses to dupe their fans out of their money and cryptocurrency—with one report claiming such content grew by 87 percent in the last year. On Monday, YouTube giant Mr. Beast notified his over 24 million Twitter followers that he had been the victim of one such scheme—and questioned whether tech companies were capable of stopping them. “Lots of people are getting this deepfake scam ad of me,”…
On Wednesday, content creation company Canva rolled out its Magic Studio suite of generative AI tools, which include guardrails that prevent the tool from generating images of celebrities or anything related to medicine or politics.
“As part of our trust and safety [policies], we don’t allow our AI to generate images of popular or public figures or known persons as well as third-party intellectual property,” Canva’s Head of AI Products Danny Wu told Decrypt.