A picture may be worth a thousand words. But what about a picture generated entirely by a machine?
That is the question scholars, advocates, and internet users have been considering lately, as art generated by artificial intelligence (AI) has exploded in popularity. Some commentators have asked who regulates this digitally created art and whether the courts can prevent theft of creative ideas and techniques in the process of its generation.
But the reality is that little regulation protects the copyrighted works used to train these AI-based technologies, and privacy protections for images used in the creation of AI-based art are scant. Advocates have called for regulatory solutions rooted in copyright and privacy law.
Toward the end of last year, popular use of the Lensa AI app, which generates stylized portraits based on users’ uploaded selfies, spurred the latest round of controversy over the ethics of AI-generated art. Debate over AI art had been raging since earlier in the year, when models such as DALL-E 2 and Stable Diffusion rapidly gained popularity.
Some commentators have noted that these programs have made art more accessible. Stable Diffusion generates images for free based on strings of text entered by users, and Lensa sells its portraits for as little as $3.99. Queer users of Lensa have shared that the avatars created by the app, which allows users to specify their gender, have made them feel joyful and aligned with their true gender identity.
But many others have voiced concerns that stem from the mechanisms that such algorithms use to generate new images. The programs’ developers collect and use captioned images to train their algorithms on the relationships between textual and visual representations. For example, Stable Diffusion trained its algorithm on data sets collected by the German nonprofit LAION, which has gathered billions of captioned images from art shopping sites and websites such as Pinterest.
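At its simplest, the training data described above is a collection of text–image pairs. The following toy sketch illustrates that structure; the class name, captions, and URLs are all hypothetical, not drawn from the actual LAION data sets or Stable Diffusion code:

```python
# Toy sketch of a captioned-image data set: pairs of text and image
# references from which a model learns text-to-image associations.
# All names and data here are illustrative, not real LAION entries.

from dataclasses import dataclass


@dataclass
class CaptionedImage:
    caption: str    # the text scraped alongside the image
    image_url: str  # where the image itself was found

# A tiny stand-in data set; real collections hold billions of such pairs.
dataset = [
    CaptionedImage("a watercolor of a lighthouse", "https://example.com/1.jpg"),
    CaptionedImage("portrait photo of a dog", "https://example.com/2.jpg"),
]


def training_pairs(data):
    """Yield (caption, image reference) pairs for training."""
    for item in data:
        yield item.caption, item.image_url


pairs = list(training_pairs(dataset))
print(pairs[0][0])  # → "a watercolor of a lighthouse"
```

Because captions and images are scraped together, any copyrighted artwork or personal photo that appears online with descriptive text can end up in such a pairing, which is the crux of the copyright and privacy concerns discussed here.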