Art

Artists Can Fight Back Against AI by Killing Art Generators From the Inside – Gizmodo

Image: An artist teaches painting to a humanoid AI robot, pointing at a canvas surrounded by paintbrushes.
Some AI models are trained on hundreds of gigabytes' worth of images, but researchers have shown that poisoning just a handful of them can cause a model to hallucinate.
Photo: Stock-Asso (Shutterstock)

How can artists hope to fight back against the whims of tech companies wanting to use their work to train AI? One group of researchers has a novel idea: slip a subtle poison into the art itself to kill the AI art generator from the inside out.

Ben Zhao, a professor of computer science at the University of Chicago and an outspoken critic of AI’s data scraping practices, told MIT Technology Review that his team’s new tool, dubbed “Nightshade,” does what it says on the tin: it poisons any model that uses images as training data. Until now, artists’ only options for combating AI companies were to sue them or to hope developers would abide by their opt-out requests.

The tool manipulates an image at the pixel level, corrupting it in a way the naked eye can’t detect. Once enough of these distorted images are used to train an AI like Stability AI’s Stable Diffusion XL, the entire model starts to break down. After the team fed poisoned data samples into a version of SDXL, the model began to interpret a prompt for “car” as “cow” instead. A dog was interpreted as a cat, while a hat was turned into a cake. Different styles came out wonky, too: prompts for a “cartoon” offered art reminiscent of the 19th-century Impressionists.
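To make the idea of an invisible, pixel-level change concrete, here is a toy sketch. This is emphatically not Nightshade’s actual algorithm (the real tool optimizes its perturbations adversarially against a model rather than using random noise); it only illustrates the “imperceptible budget” idea, where no pixel shifts by more than a couple of intensity levels:

```python
import numpy as np

# Toy illustration only (assumption, not Nightshade's real method): show that
# capping every pixel change at +/-2 out of 255 intensity levels yields an
# image that is numerically different but visually indistinguishable.
rng = np.random.default_rng(0)
artwork = rng.integers(0, 256, size=(64, 64, 3), dtype=np.int16)  # stand-in image

EPSILON = 2  # maximum per-pixel change, in 0-255 intensity levels
noise = rng.integers(-EPSILON, EPSILON + 1, size=artwork.shape, dtype=np.int16)
poisoned = np.clip(artwork + noise, 0, 255)

# No pixel moved by more than EPSILON, far below what the eye can perceive.
max_change = int(np.abs(poisoned - artwork).max())
print(f"largest per-pixel change: {max_change} / 255")
```

A real poisoning attack would spend that same tiny budget deliberately, nudging the image toward a different concept in the model’s feature space instead of adding random noise.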

It also worked as a defense for individual artists. If you ask SDXL to create a painting in the style of renowned sci-fi and fantasy artist Michael Whelan, the poisoned model produces something far less akin to his work.

Depending on the size of the AI model, you would need hundreds, or more likely thousands, of poisoned images to create these strange hallucinations. Still, it could force everyone developing new AI art generators to think twice before using training data scraped from the internet.

Gizmodo reached out to Stability AI for comment, but we did not immediately hear back.

What Tools Do Artists Have to Fight Against AI Training?

Zhao also led the team behind Glaze, a tool that creates a kind of “style cloak” to mask artists’ images. It similarly disturbs an image’s pixels to mislead AI art generators that try to mimic an artist’s work. Zhao told MIT Technology Review that Nightshade will be integrated into Glaze as another tool, but it’s also being released as open source so other developers can build similar tools.

Other researchers have found ways of immunizing images against direct manipulation by AI, but those techniques didn’t stop the data scraping used to train the art generators in the first place. Nightshade is one of the few, and potentially the most combative, attempts so far to give artists a chance at protecting their work.

There’s also a burgeoning effort to differentiate real images from those created by AI. Google-owned DeepMind claims it has developed a watermarking tool that can identify whether an image was created by AI, no matter how it might be manipulated. These watermarks effectively do the same thing Nightshade does, manipulating pixels in a way that’s imperceptible to the naked eye. Some of the biggest AI companies have promised to watermark generated content going forward, but current efforts like Adobe’s AI metadata labels don’t offer any real level of transparency.

Nightshade is potentially devastating to companies that actively use artists’ work to train their AI, such as DeviantArt. The DeviantArt community has already reacted negatively to the site’s built-in AI art generator, and if enough users poison their images, it could force developers to hunt down every instance of a poisoned image by hand or else retrain the entire model.

Still, the program won’t be able to change existing models like SDXL or the recently released DALL-E 3, which have already been trained on artists’ past work. Companies like Stability AI, Midjourney, and DeviantArt have already been sued by artists for using their copyrighted work to train AI, and many other lawsuits target AI developers like Google, Meta, and OpenAI for using copyrighted work without permission. Companies and AI proponents have argued that since generative AI creates new content based on its training data, all the books, papers, pictures, and art in that data fall under fair use.

OpenAI developers noted in their research paper that their latest art generator can create far more realistic images because it is trained on detailed captions generated by the company’s own bespoke tools. The company did not reveal how much data actually went into training its new AI model (most AI companies have become reluctant to say anything about their AI training data), but the efforts to combat AI may escalate as time goes on. As these AI tools grow more advanced, they require even more data to power them, and artists might be willing to go to even greater measures to combat them.


40 Random Bits of Trivia About Artists and the Artsy Art That They Articulate – Cracked.com

[unable to retrieve full-text content]


John Little, whose paintings showed the raw side of Montreal, dies at 96 – CBC.ca

[unable to retrieve full-text content]


A misspelled memorial to the Brontë sisters gets its dots back at last


LONDON (AP) — With a few daubs of a paintbrush, the Brontë sisters have got their dots back.

More than eight decades after it was installed, a memorial to the three 19th-century sibling novelists in London’s Westminster Abbey was amended Thursday to restore the diaereses – the two dots over the e in their surname.

The dots — which indicate that the name is pronounced “brontay” rather than “bront” — were omitted when the stone tablet commemorating Charlotte, Emily and Anne was erected in the abbey’s Poets’ Corner in October 1939, just after the outbreak of World War II.

They were restored after Brontë historian Sharon Wright, editor of the Brontë Society Gazette, raised the issue with Dean of Westminster David Hoyle. The abbey asked its stonemason to tap in the dots and its conservator to paint them.

“There’s no paper record for anyone complaining about this or mentioning this, so I just wanted to put it right, really,” Wright said. “These three Yorkshire women deserve their place here, but they also deserve to have their name spelled correctly.”

It’s believed the writers’ Irish father Patrick changed the spelling of his surname from Brunty or Prunty when he went to university in England.

Raised on the wild Yorkshire moors, all three sisters died before they were 40, leaving enduring novels including Charlotte’s “Jane Eyre,” Emily’s “Wuthering Heights” and Anne’s “The Tenant of Wildfell Hall.”

Rebecca Yorke, director of the Brontë Society, welcomed the restoration.

“As the Brontës and their work are loved and respected all over the world, it’s entirely appropriate that their name is spelled correctly on their memorial,” she said.

The Canadian Press. All rights reserved.
