
Artificial Art and its Implications

A stumble into Text-to-Image Neural Networks

This white paper covers how to get the images you want, how to tell whether an image is AI-generated, how to use generated images in the real world, and the ethical and moral issues that arise from this technology.

If you ever wanted to express an idea quickly, you would usually make what we call a napkin sketch: something that captures the overall concept so you can see whether the idea is worth pursuing. The technology the author used helps create what he considers very refined napkin sketches: an AI model called DALL-E 2, developed by OpenAI, which generates the images you want from a text or image input. He has been trying to explore the limits of what the tool can do, how it functions, and what its implications are.

One thing he noticed is that there are mainly two types of people who generate images. The first type keeps their prompts vague and uses the tool to generate new ideas. The second type, whom he calls shopping-list prompters, already have an image in their head and list as much detail as possible so the AI can recreate it.
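To make the two prompting styles concrete, here is a minimal sketch using the OpenAI Python client (the request format shown is the pre-1.0 interface available in late 2022 and may differ in current versions). The prompts themselves are hypothetical examples, not taken from the white paper.

```python
# Minimal sketch: two prompting styles against DALL-E 2 via the OpenAI API.
# Assumes the `openai` Python package (pre-1.0 interface, late 2022) and an
# API key in the OPENAI_API_KEY environment variable. Prompts are illustrative.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# Style 1: a vague "idea generator" prompt -- leaves most decisions to the model.
vague_prompt = "a cozy futuristic city"

# Style 2: a "shopping list" prompt -- spells out subject, style, lighting, framing.
shopping_list_prompt = (
    "a cozy futuristic city at dusk, narrow streets with hanging lanterns, "
    "light rain, reflections on wet pavement, isometric view, digital painting"
)

for label, prompt in [("vague", vague_prompt), ("shopping list", shopping_list_prompt)]:
    response = openai.Image.create(prompt=prompt, n=1, size="1024x1024")
    print(f"{label} prompt -> {response['data'][0]['url']}")
```

The same text box in the DALL-E 2 web interface accepts either style; the difference is only in how much of the result the prompter chooses to specify versus leave to the model.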

But how is this useful? A friend he met through this interest made it into The New York Times after submitting one of his generations to an art contest and winning. Inspired by this, he teamed up with a friend to make a demo game showing that AI-generated art can provide ideas for game assets. He also plans to print stickers based on his AI generations, but as of right now he does not have access to that technology.


Published on Nov 22, 2022

Clement Lee likes illustration, design, artificial intelligence, and typography. He is a collaborator in the ArtSciLab and is studying in the field of Arts, Humanities, and Technology at UT Dallas.