There’s been a monsoon of articles concerning the advent of AI and whether it will put designers, like myself, out of work. I, for one, welcome our AI assistants and have been working with them for quite some time already. In fact, an AI came up with this blog title!

AI as a source of inspiration

The current AI fad really began when OpenAI released its DALL-E 2 generative model to the masses in mid-2022. What OpenAI did was a massive leap forward in visual fidelity: DALL-E 2 produces genuinely serviceable images from text prompts. Naturally, this stirred up a wave of fear among creative professionals of all sorts.

I immediately set out to procure beta keys for these new systems. I eventually got my hands on DALL-E 2, Midjourney and Stable Diffusion, which are probably the most well known of them all. After an afternoon of just fooling around, I actually found a practical use for them: not as ready-made image generators, but as idea generators.

DALL-E 2 seems to have the most realistic approach of the three; with no additional instructions, the results resemble photographs. Because its training data reflects real life, complete with real-life biases and stereotypes, it is a useful tool for visualizing exactly those iconic, stereotypical ideas when that's what you need. Sometimes you need an image of a typical graphic designer's desk, not an artistic interpretation of one, simply to be relatable. What all of these tools also excel at is textbook-perfect composition.


Let’s use DALL-E to generate “Green kiwi bird working as a graphic designer, uses Neural Network to generate ideas for graphic design. Trending in Artstation”.
That bird looks cute! But we're a VR game company, so let's use the graphical editing features to mask the head area and add the prompt "kiwi bird wearing virtual reality headset".

Working with these generators, you have to use a curious type of language. The natural word order doesn't always work as intended, and the placement of your commas can change the way the AI interprets your prompt. There are also a lot of buzzwords that completely change the style, such as "Trending in Artstation", "rendered in Unreal 5" and "Octane render".

Not happy with the aspect ratio and need a wider picture? DALL-E can use the original to generate more content for the background. This time I asked it to add an Apple iMac on the desk. It seems to have remembered the graphic design context and added some pencils too!
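I did all of this in DALL-E's web editor, but the same generate-then-mask-and-edit workflow can also be scripted. Here's a minimal sketch against OpenAI's Images API using the pre-1.0 openai Python package; the file names and the way the key is read are illustrative assumptions, not part of my actual workflow.

```python
# Sketch: generate an image, then regenerate a masked region with a new prompt.
# Assumes the pre-1.0 `openai` package and an API key in OPENAI_API_KEY.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# 1. Generate the initial image from a text prompt.
result = openai.Image.create(
    prompt=("Green kiwi bird working as a graphic designer, uses Neural Network "
            "to generate ideas for graphic design. Trending in Artstation"),
    n=1,
    size="1024x1024",
)
print(result["data"][0]["url"])  # download this as kiwi.png

# 2. Edit a masked region: the transparent area of the mask marks where
#    new content (the VR headset) should be generated.
edited = openai.Image.create_edit(
    image=open("kiwi.png", "rb"),            # illustrative file names
    mask=open("kiwi_head_mask.png", "rb"),
    prompt="kiwi bird wearing virtual reality headset",
    n=1,
    size="1024x1024",
)
print(edited["data"][0]["url"])
```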


I put the exact same original prompt into Midjourney and got back a completely different result. Midjourney is known for a more artistic, interpretive style. Its user interface is also different: it is operated with Discord bot commands, and all prompts and results are publicly visible to other users, which makes it unusable for any work done under a strict NDA.

Stable Diffusion Test Version had its own idea of the prompt. This AI is still at a very early stage – it was released only in August 2022 – but being open source, it's sure to become increasingly relevant as more people get involved in feeding it different types of training data.
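Being open source also means you can run it on your own hardware. Below is a minimal sketch using Hugging Face's diffusers library; the model id, prompt and the assumption of a CUDA GPU are mine for illustration.

```python
# Sketch: running Stable Diffusion locally with Hugging Face's `diffusers` library.
# Assumes a CUDA GPU and that you've accepted the model licence on Hugging Face.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",   # the original 2022 public weights
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

prompt = ("Green kiwi bird working as a graphic designer, "
          "uses Neural Network to generate ideas for graphic design")
image = pipe(prompt).images[0]  # a PIL.Image
image.save("kiwi_sd.png")
```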


Stable Diffusion Dream Studio Beta is a faster but less accurate version. At the moment, the picture above wouldn't be all that useful as, say, a blog post illustration, but it would be perfect for sponging up some inspiration for an illustration of my own.

Making haste with Adobe Sensei

Adobe has incorporated its AI and machine learning platform, Sensei, into many of its products. In the most purist sense this is not a true AI, but a mixture of real-time processing and a collection of different technologies and datasets built with neural networks and machine learning.


In Adobe Photoshop, Content-Aware Crop works best with smooth colours. It does a poor job of guessing how details at the edges should continue or repeat, as you can see in the third image. I sometimes use this tool in very particular cases, and for small adjustments only. Even then, I usually need to smooth out some visual artifacts by hand.

That cyan dot distracts me. There, gone! Just a rough lasso selection and a click of Content-Aware Fill. I could have used the Spot Healing Brush here as well; it depends on the size and complexity of the fix. But what if I didn't like the bird at all?

This would have taken forever using the clone tool and healing brushes. This took literally seconds and looks pretty good already. I would only need to manually clean up some details.


Neural Filters are a more recent development, and many of them are still in beta. They have less real-world use, as the results still look unnatural. The most useful filter for me has been JPEG artifact removal: the image on the right has significantly less noise and none of the unnatural blurring that regular denoising tools used to produce.


The Refine Hair tool is something I use all the time; it makes masking loose strands of hair a breeze compared to what it used to be.


Adobe Illustrator doesn’t have as many of these Sensei features as Photoshop, but one that’s saved my day a couple of times is the Global edit tool. It recognizes similar objects across the file and allows you to edit all of them simultaneously. For example, in this job I had to replace all of the trees with a different kind. It would have taken me hours more time to replace each of them manually.

Other use cases of AI in design

http://thispersondoesnotexist.com
The people above do not exist. I use this tool often when I design user interfaces; it's great for populating mockups with avatars. Figma and Sketch both have plugins that use this service and let you specify some basic information about the avatar, such as age and sex.
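If you'd rather script it yourself, pulling placeholder avatars is trivial. The sketch below assumes the site serves a fresh random face directly from its root URL (the exact endpoint and required headers may vary); the output file names are illustrative.

```python
# Sketch: fetching a handful of placeholder avatars from thispersondoesnotexist.com.
# Each request is assumed to return a new random face as a JPEG.
import requests

for i in range(4):
    resp = requests.get(
        "https://thispersondoesnotexist.com",
        headers={"User-Agent": "avatar-fetcher/0.1"},  # some hosts reject blank agents
    )
    resp.raise_for_status()
    with open(f"avatar_{i}.jpg", "wb") as f:
        f.write(resp.content)
```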

Generative models are also useful for other dummy data, from names to complete paragraphs of text. If you grow bored of lorem ipsum, there are plenty of GPT-2-based generators that produce filler copy for you. Try Rytr, for example!
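You can also roll your own filler text with a plain GPT-2 model. A minimal sketch using Hugging Face's transformers pipeline follows; the prompt, seed and lengths are arbitrary choices for illustration.

```python
# Sketch: generating lorem-ipsum-style filler copy with GPT-2 via the
# Hugging Face `transformers` text-generation pipeline.
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # makes the filler text reproducible

results = generator(
    "Our design team believes that",  # arbitrary seed sentence
    max_length=60,
    num_return_sequences=3,
    do_sample=True,
)
for r in results:
    print(r["generated_text"], "\n")
```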

One low-key example you wouldn't often think of is camera technology. Canon has been developing its AI Focus for a long time; it uses machine vision to pick the right focal point and keep the focus fixed on a moving target. Apple has taken the AI features even further: in addition to facial recognition and focus, its cameras now have full scene understanding. The camera software recognizes things like the time of day, weather, lighting conditions, skin colours and the preferred subject, all to adjust camera settings correctly without the user realizing it.

Whether you should call these solutions AI at all is another discussion. Canon and Apple have used deep learning to teach their software how to do certain tasks, but once the software has been trained, it no longer runs any artificial intelligence and no longer learns anything. For the sake of easy conversation and marketing, though, it's convenient to just call everything AI. Maybe I will eventually be put out of this work by an AI, but I've already had to renew and redirect my skill set in this business many times before, and I will do so again. Someone has to yell at those pesky AIs for doing it all wrong.


I feel it’s my duty to also let you know, there’s http://thiscatdoesnotexist.com too!