Written by: Roland Lindner, Senior Motion Designer
AI-generated artwork is hardly a new thing, but as with every technology, time, trial and error nurture innovation and progress. This is particularly true for content produced by AI, which learns and evolves with every new piece of information provided by a human being.
We are currently observing significant mass adoption of generative AI. The creative scene is accumulating a large amount of fresh, AI-generated content made with systems like DALL-E 2, Google Imagen, Midjourney or Disco Diffusion.
This new rush towards AI is driven by the recent progress of these systems' latest iterations. DALL-E 2 (the successor of the original DALL-E) has now been trained on millions of images and captions, which has led to more realistic and accurate images and a more refined ability to interpret and match prompts. Combined with greater computing power and more sophisticated language models (like GPT-3), this also means results can have a higher resolution and therefore a higher level of photorealism.
Beyond that, wider adoption puts the various systems through a testing phase, during which people experiment with different sentence structures, words and artistic style references. In turn, this means the AI systems keep learning based on which results get picked and filtered through.
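To give a flavour of what this kind of prompt experimentation looks like in practice, here is a minimal sketch using the OpenAI Python SDK. This is purely illustrative: the article is not tied to any particular tool, and the subject and style references below are invented examples, not recommendations.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# One subject, several stylistic framings -- the kind of wording experiments
# the creative community is running at scale right now.
base_subject = "a lighthouse on a cliff at dusk"
style_refs = [
    "as a loose watercolour painting",
    "shot on a 35mm film camera, golden hour",
    "as a stylised low-poly 3D render",
]

for style in style_refs:
    prompt = f"{base_subject}, {style}"
    result = client.images.generate(model="dall-e-2", prompt=prompt, n=1, size="512x512")
    print(prompt, "->", result.data[0].url)
```

Which of these variations gets kept, shared or discarded is exactly the kind of human feedback the systems keep learning from.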
Looking at the latest output and feeling the pulse of the creative community on the respective social platforms makes it clear that it’s not only my mind that is blown by the incredible visual quality and fidelity generated by artificial intelligence these days.
But does this mean that artists and designers will become redundant anytime soon?
Looking at other technological advances within the creative industry in the past, the short-term answer is no. Lifelike computer animation has not replaced actors but has provided new opportunities for motion capture and voice acting. Kit bashing in Photoshop or 3D apps has not replaced content artists but has sped up their production processes.
Another aspect that keeps human input relevant in a productive creative process is AI's inability to understand context. Any result is generated purely from the corpus of information that has been fed into the system. To the AI, an image of an object and similar versions of it are just that: 2D representations of something carrying a set of characteristics that allow a certain degree of combination and variation to generate new results. However, to use a popular example, an AI does not understand what a shoe is or what its function is, where the foot goes in, what laces are for, or what a comfortable fit means to its wearer.
Even more so, because this learnt collection of images and words is the only reference an AI can create from, it will only recombine things that already exist in its data set. It will not produce something completely new – not until it understands context and emotions.
Ultimately, inspiration and iteration are two fundamental aspects of the creative development cycle, and AI-generated content could be a new, powerful addition to our digital toolset.