Image creators were among the first generative AI tools to capture the public imagination.
Since then, the market has matured quickly, with AI image generators evolving from impressive technological curiosities into truly useful and powerful applications.
Today, many established, industry-standard art and design platforms have embraced this revolution by integrating generative functionality. Just about anything we can imagine can now be visualized, from fantastical landscapes to complex 3D models. However, these tools' true value isn't in replacing human creativity but in augmenting it.
My Favorite Generative AI Art And Design Tools
So, here’s my overview of what I believe are the best tools that artists, illustrators and designers can use to kick-start their creative process or overcome the challenges posed by a blank canvas. All have their pros and cons, and there’s no one-size-fits-all solution right now, so if you want to understand how they can help you create more impressive images or streamline your design workflows, read on.
DALL-E
DALL-E is one of the most powerful and flexible generative image models available to the general public. It was developed by OpenAI, the company behind ChatGPT, and the latest version, DALL-E 3, can be accessed by anyone with a ChatGPT Plus subscription.
This model is extremely good at interpreting and creating images from very detailed prompts and offers users a high degree of control over how their finished images turn out. It can also create new variations of existing images and achieve results that are close to photo-realistic.
DALL-E draws on OpenAI's GPT language models to interpret users' natural language prompts, and Microsoft (a major OpenAI investor) has integrated it into its Bing Image Creator, Copilot and Designer tools. It can also be accessed through OpenAI's developer API, meaning that developers can easily build image generation into their own applications.
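For developers curious about what that API access looks like in practice, here is a minimal Python sketch using OpenAI's official openai library; the prompt, image size and output handling are illustrative assumptions rather than anything prescribed by OpenAI, and you would need your own API key.

```python
# pip install openai  -- assumes an OPENAI_API_KEY environment variable is set
from openai import OpenAI

client = OpenAI()

# Ask the DALL-E 3 model for a single 1024x1024 image from a text prompt.
response = client.images.generate(
    model="dall-e-3",
    prompt="A watercolor illustration of a lighthouse at sunrise",
    size="1024x1024",
    n=1,
)

# The API returns a temporary URL pointing to the generated image.
print(response.data[0].url)
```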
Stable Diffusion
Stable Diffusion was one of the first image models to capture public attention, demonstrating just how powerful the latest generation of AI image generators has become. Developed by the CompVis research group at Ludwig Maximilian University of Munich working alongside US company Runway, with backing from Stability AI, it can create numerous variations of an image from a single prompt. It is also capable of modifying existing images and extending them beyond their original borders.
What really sets it apart from other models like Midjourney and DALL-E, though, is that it is open source. This means anyone can create custom versions to suit their own requirements and even run it locally on their own hardware. This has led to its widespread use for generating imagery in movies, music videos and TV shows. Its flexibility does, however, mean that it can take a little more technical know-how to get the best results. If you don't want to run it yourself, Stable Diffusion can be accessed through web interfaces, including DreamStudio and Stable Diffusion Web.
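As a rough sketch of what running Stable Diffusion locally can involve, the snippet below uses the Hugging Face diffusers library and assumes a CUDA-capable GPU; the model checkpoint and prompt are illustrative examples, not the only options.

```python
# pip install diffusers transformers accelerate torch
import torch
from diffusers import StableDiffusionPipeline

# Download the Stable Diffusion v1.5 checkpoint and move it to the GPU.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Generate one image from a text prompt and save it to disk.
image = pipe("a misty pine forest at dawn, oil painting").images[0]
image.save("forest.png")
```

Because the model is open source, the same pipeline can be pointed at community fine-tunes or custom checkpoints simply by changing the model identifier.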
Midjourney
Midjourney is an image generation model aimed more at artists than designers, and as a result its output can be highly evocative, detailed and imaginative, and often has a fantastical quality.
Midjourney is a little different from other models in that rather than being accessed through a web interface, app or API, users interact with it via Discord, using the messaging platform's bot commands. It has been used to create illustrations and comics, as well as the cover image for an issue of The Economist. It also sparked controversy when it was used to create the infamous "Pope in a puffer jacket" deepfake image.
Partly because it is accessed via Discord, a strong community has built up around Midjourney, and users frequently collaborate to create and share innovative prompts and use cases. However, its outputs can be less photo-realistic and more stylized than some of the tools listed here, and it is noted for seemingly allowing itself a greater degree of artistic license in the way it interprets prompts, which can sometimes lead to more unpredictable results.
Adobe Firefly
Firefly is part of Adobe’s Creative Cloud suite of design and productivity tools, although it is usable in a more limited form without a subscription. It brings generative capabilities to market-leading applications like Photoshop, Illustrator and Adobe Express. Images and designs created in Firefly can be seamlessly integrated into professional design workflows.
An interesting element of Firefly is its commitment to transparent and ethical AI. To this end, Adobe has trained its models only on images from its own Adobe Stock library, along with openly licensed and public-domain material. The idea is to give creators peace of mind that they're safe from claims that their work infringes on the copyright of others. Adobe even goes as far as to indemnify its users against any such legal claims that may arise in the future.
Canva Magic Design
Canva is a hugely popular cloud-based design platform that is frequently used to create marketing materials, email templates and social media assets. Since the start of this year, it has incorporated generative design functionality. Powered by a custom version of Stable Diffusion, it allows users to generate images and design features such as logos, graphics and templates. These can be automatically aligned with elements such as color palettes and fonts to match your brand guidelines. It also uses generative AI to make suggestions for incorporating Canva's vast library of image and template assets into users' projects.
Overall, Canva’s generative functionality focuses on offering a streamlined and versatile experience around marketing content creation that can be accessed across both its free and paid-for services.
Other Great Generative AI Art And Design Tools
Art, design and graphics is one area where there is certainly no shortage of impressive generative AI tools. Here are the best of the rest.
Imagen is an image generation model created by Google's Brain team, now part of Google DeepMind. It scores particularly well on benchmarks that measure how closely its image output aligns with a user's text prompts. It's accessible via Google's Gemini chatbot in most territories, although it is not yet available in Europe.
The long-established stock image service now has a tool that enables users to create their own images if they can’t find anything in the library that fits their needs.