This week's blog post assesses the convenience of AI image correction and colorization, along with the risks and assumptions that come with it. AI image restoration can be dangerous because it can get things blatantly wrong while making the mistake look as though it belongs in the image. For example, we saw an image comparison in which the original photograph included a Black woman, while the AI-colorized version depicted her skin tone as much lighter. The colorized picture didn't look wrong on its own, but set against the original there was a clear change of skin tone that is deeply problematic. Here are two quotes that support my thoughts about how we should approach AI image colorization.
“It’s also a way to understand the “hallucinations,” or nonsensical answers to factual questions, to which large language models such as ChatGPT are all too prone. These hallucinations are compression artifacts, but—like the incorrect labels generated by the Xerox photocopier—they are plausible enough that identifying them requires comparing them against the originals, which in this case means either the Web or our own knowledge of the world.” – Ted Chiang, “ChatGPT Is a Blurry JPEG of the Web,” The New Yorker
“It is… imperative that we become active in critiquing, designing, and building the algorithms that will facilitate the collection-building and source-discovery processes that guide historians to their sources.” – Lauren Tilton, “Relating to Historical Sources”
When using AI image colorization, I think it is always important to know the context behind the photo first. That context lets you spot the mistakes the AI model makes and do some troubleshooting and tweaking so that your final product is satisfactory and accurate to what you deem necessary. A lot of the time, as the New Yorker quote points out, the model produces something that doesn’t “look” wrong but contains small “hallucinations” that actually do make a big difference. That’s why paying extra attention and double-checking against the original is always a good idea. The quote from the Tilton piece argues that historians should be actively involved in interacting with AI and shaping the algorithms that help them with their work. Being an active part of the questioning, curiosity, and development surrounding AI use in historical image restoration will help the available tools become better and more reliable.
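That double-checking step can even be made concrete. Below is a minimal, hypothetical sketch of how one might flag the kind of skin-tone lightening described above: it compares the average brightness of a region in the grayscale original against the perceived brightness (luma) of the same region in the colorized output. The region coordinates, sample pixel values, and the 20-level threshold are all made up for illustration, not part of any real tool mentioned in this post.

```python
# A sketch of "compare against the original," assuming the original scan
# is a grayscale pixel grid and the AI output is a grid of RGB tuples.

def luminance(rgb):
    """Approximate perceived brightness of an (R, G, B) pixel (Rec. 601 luma)."""
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def mean_gray(pixels, region):
    """Average value of a grayscale grid over a rectangle (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = region
    vals = [pixels[y][x] for y in range(y0, y1) for x in range(x0, x1)]
    return sum(vals) / len(vals)

def mean_luma(pixels, region):
    """Average luma of an RGB grid over the same rectangle."""
    x0, y0, x1, y1 = region
    vals = [luminance(pixels[y][x]) for y in range(y0, y1) for x in range(x0, x1)]
    return sum(vals) / len(vals)

def lightened(original_gray, colorized, region, threshold=20):
    """True if the colorized region reads noticeably lighter than the original.
    The threshold of 20 luma levels is an arbitrary illustrative cutoff."""
    return mean_luma(colorized, region) - mean_gray(original_gray, region) > threshold

# Hypothetical 2x2 "face" region: the original scan averages ~90,
# while one colorized version is much brighter and one is a close match.
original = [[90, 90], [90, 90]]
too_light = [[(230, 190, 170)] * 2 for _ in range(2)]
faithful = [[(120, 85, 60)] * 2 for _ in range(2)]

region = (0, 0, 2, 2)
print(lightened(original, too_light, region))  # large luma jump is flagged
print(lightened(original, faithful, region))   # close match is not
```

In a real workflow you would read the two images with an image library and pick the region by hand, but the idea is the same: the comparison has to be against the original, because the colorized version alone gives you nothing to check it against.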

When using the Distant Viewing Explorer tools, the object recognition was much worse on the original black-and-white image than on the AI-colorized version.