Blog Week 4

When we use AI to instantly colorize historical photos or generate variant images, we are engaging with tools that do more than automate aesthetic changes. These technologies reshape our understanding of reality, history, and creativity in ways that raise pressing ethical questions. One way to think about what AI does to images and texts comes from Ted Chiang’s metaphor in The New Yorker. Chiang argues that large language models do not reproduce their source material exactly but compress and approximate it, like a blurry JPEG of the web. The point is that AI outputs are not faithful replicas of reality but reinterpretations built from patterns in training data. Applied to colorized images, this logic suggests we risk presenting speculative reconstructions as if they were factual, when in truth they are educated guesses. That blurriness might seem harmless in casual use, yet it becomes ethically significant when such images are treated as evidence of how things really looked.

Think of ChatGPT as a blurry JPEG of all the text on the Web. It retains much of the information on the Web, in the same way that a JPEG retains much of the information of a higher-resolution image, but, if you’re looking for an exact sequence of bits, you won’t find it; all you will ever get is an approximation.
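Chiang’s metaphor can be made concrete with a toy sketch. The snippet below is not actual JPEG compression (real JPEG uses DCT blocks and entropy coding); it is a minimal, hypothetical illustration of lossy quantization, where the `compress` and `decompress` functions are my own invention. Reconstruction lands close to the original pixel values, but the exact original bits are unrecoverable — all you ever get back is an approximation.

```python
def compress(pixels, step=32):
    """Lossy 'compression': keep only each value's quantization bucket index."""
    return [p // step for p in pixels]

def decompress(buckets, step=32):
    """Reconstruct an approximation: the midpoint of each bucket."""
    return [b * step + step // 2 for b in buckets]

original = [13, 14, 200, 201, 90, 91]
approx = decompress(compress(original))

print(approx)              # → [16, 16, 208, 208, 80, 80] — close to the original...
print(approx == original)  # → False — ...but never an exact copy
```

The same asymmetry is what makes colorization ethically tricky: the output looks plausibly like the input’s world, yet the missing detail has been filled in by the model, not recovered from it.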

Sonja Drimmer’s critique of AI applications in art history sharpens this concern. She cautions that AI-driven restorations and re-creations can create misleading narratives about the past, noting that such projects often reveal no new historical truths: these re-creations don’t actually teach us anything we didn’t know about the artists and their methods. The issue is not only technical accuracy but the authority AI is granted. When colorized or AI-generated reconstructions are shared without clear context about their speculative nature, audiences may accept them as authentic representations, rewriting perceptual histories through the lens of algorithmic interpretation rather than evidence.

This effort to ‘bring events back to life’ routinely mistakes representations for reality. Adding color does not show things as they were but recreates what is already a recreation – a photograph – in our own image, now with computer science’s seal of approval.

Taken together, these readings point to a shared ethical issue at the intersection of AI and image manipulation: when algorithmic approximations are presented without acknowledging their speculative character, they can quietly stand in for the historical record they claim to restore.

2 thoughts on “Blog Week 4”

  1. Your first quote actually connects really well to the lecture that Lin gave us, specifically when she explained how ChatGPT gives us a middle ground type of answer when prompted, rather than telling us all of the details that we may be looking for. I like the image you provided and think the “deoldify” did a great job.

  2. It’s a really interesting point that trusting AI blindly can lead to “rewriting history”. I wonder what else AI will do in the future to kind of fill in those blanks. Maybe take security footage or find access where it is not authorized. It definitely adds a new angle to look at the ethics of AI. I really like that your demonstration directly showed the “patterns” that AI is able to remember and recognize.
