I used Voyant Tools and Google Gemini to create visualizations of the top 30 most frequently occurring words in Little Women, my favorite childhood book and one I know quite well.
I noticed a clear contrast between the outputs of the two tools. The word cloud created by Voyant Tools displayed terms that genuinely appear often in “Little Women,” such as “me,” “Laurie,” “Mother,” and “said,” and the variation in word size made the frequency patterns easy to recognize. With this visualization, I’m fairly confident readers can get a sense of what Little Women is about.
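A frequency count like the one underlying Voyant’s word cloud can be sketched in a few lines of Python. This is only a minimal illustration, not Voyant’s actual implementation; the sample text is a made-up stand-in for the full novel, and real tools typically also filter out stop words such as “the” and “and”:

```python
from collections import Counter
import re

def top_words(text, n=30):
    """Return the n most frequent words in text (case-insensitive).

    Note: this sketch keeps stop words, which is why common words
    like "said" and "me" can dominate the ranking, just as they
    did in the Voyant output described above.
    """
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words).most_common(n)

# Hypothetical snippet standing in for the full text of Little Women.
sample = "Jo said Mother said Laurie said Jo and Amy"
print(top_words(sample, 2))  # [('said', 3), ('jo', 2)]
```

Because the counting rules are explicit here, the output is reproducible, which is exactly the property the Gemini result discussed below seems to lack.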

The word cloud generated by Gemini, however, gave me a different result. At first, when I asked for a word cloud showing the top 30 most frequently used words in the text file, it just gave me a ranked list like the picture below. So I asked for an “image” of a word cloud instead, and Gemini produced one that looked correct at first glance.


While character names like “Amy,” “Meg,” and “Jo” were still emphasized, Gemini also highlighted words such as “never,” “away,” “old,” and “young.” These words do not actually appear frequently in the text and have little relevance to the book’s themes. Furthermore, the similar sizes and colors of the words made it difficult to draw meaningful insights from the visualization.
This indicates that Gemini may be interpreting the data rather than just organizing it. Even though AI tools are fast and powerful, their results can reflect the assumptions and algorithms built into the model. Because AI-generated outputs often appear objective and correct, readers may be tempted to accept them without question. However, what the AI highlights is influenced by algorithms designed by humans. For this reason, I believe AI should be viewed as a tool that requires critical thinking and a responsible approach to questioning its outputs.
I totally agree that AI’s biggest downfall is the strict methods and algorithms it follows to produce its outputs. Critical thinking should always be used when dealing with AI. We know that it tends to put forth an answer that seems objective and correct, and needs to be “convinced” otherwise. Because of this, we should never take outputs at face value, and we should always take steps to understand where the model is going wrong and how it can be corrected.
Sena, I really admire the differences you observed between the Gemini and Voyant tools. I observed a similar trend, and I agree that AI can be tricky. Although it is definitely useful, without the user’s attentiveness it can lead people down the wrong path through misinterpretations and assumptions. It will be interesting to see how AI continues to improve and grow more powerful.