Synthetic futures: my journey into the world of AI art making


Generative art making is flourishing. Algorithms that turn text prompts into images, such as DALL-E and Stable Diffusion, are emerging as viable creative tools. And they’re fuelling much debate about their artistic legitimacy and potential to pinch our jobs.

The sudden leap in fidelity of artificial intelligence (AI) art production has been made possible by advances in deep learning, in particular natural language processing and diffusion models, the image-generation technique behind tools such as DALL-E 2 and Stable Diffusion.

In essence, a user inputs a text description and the algorithm translates it into a cohesive image.

MidJourney – or MJ as it is known to its passionate users – is perhaps the most seductive technology for its painterly output and poetic interactions. The charm begins from the very first moment, with the command line prompt “/imagine”.

Augmented imagination

MidJourney founder David Holz has said users find their text-to-image interactions to be a “deeply emotional experience”, one that may even be therapeutic. He said:
