Abe is the instructional and online learning librarian at H.C. Hofheimer II Library.
Photo: Xavier Santiago | Marlin Chronicle
Anyone — students, faculty, admin, even librarians like me — could be forgiven for being confused and uncertain right now when it comes to chatbots like ChatGPT, Gemini and Claude.
OpenAI’s ChatGPT, the first of these large language model (LLM) chatbots to become available to the broader public, hit the web in November 2022, and the share of college and university students who report using generative AI on assignments has climbed quickly since then, “from 53% last year to 88% this year” (HEPI, “Student Generative AI Survey 2025”). The same survey found that 67% of students thought AI skills will be “essential” in today’s world, but only a minority of students felt their institutions were adequately preparing them for a future shaped by AI.
Marlins, I am not going to act like your anxieties are groundless, but I will suggest that the fears folks are having are partly a result of 1) not having a clear enough understanding of how generative AI works, and consequently, what it can and can’t do, and 2) misleading, irresponsible and ultimately self-serving rhetoric on the part of leaders in the AI industry.
Take Ilya Sutskever, for example, a co-founder and former chief scientist of OpenAI, the company behind ChatGPT. In a graduation speech this summer at the University of Toronto, Sutskever told students, “The day will come when AI will do all of the things that we can do. Not just some of them, but all of them. … How can I be so sure of that? The reason is that we all have a brain, and the brain is a biological computer.” In early August, OpenAI launched GPT-5, with CEO Sam Altman claiming, “you get an entire team of Ph.D.-level experts in your pocket.” Anthropic, the company behind the LLM Claude, has released an article touting “the biology of a large language model,” while in April a group of former AI industry insiders released a terrifying “forecast” in which they predict that by 2028, a rogue AI “releases a bioweapon, killing all humans” and then “launches Von Neumann probes to colonize space.”
Since the beginning of the year, I’ve been compiling a Zotero library of over 2,300 scholarly sources (and counting) about generative AI, and I’ve been reading some of what experts in computer science, neuroscience, learning science and the science of human creativity have to say about claims like the ones these AI industry leaders have been making. And it is because I have done my homework, and consulted such a wide range of reputable sources on these topics, that I feel confident in saying: they are feeding you a line of bullpucky.
Let’s start with how GenAI technology actually works. (You can read more about this in our library’s Artificial Intelligence FAQ.) First, to train these tools, terabytes (TB) of data are trawled from the internet. Then, each word is assigned a long list of numbers (its embedding), meant to represent its location relative to every other word on an endless series of graphs (n-dimensional space). On its forward pass, the AI model generates guesses, essentially random at first, for what word will come next after any other word, and on its backward pass it finds out which guesses it got wrong, by how much and in what direction. Finally, through stochastic gradient descent, the model’s internal weights are nudged over and over and over again toward the most statistically likely word sequences, so that weird and unexpected language gets systematically ironed out.
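To make that training loop concrete, here is a minimal sketch in Python. It is a toy bigram model trained on a made-up handful of words, nothing like OpenAI’s actual code or scale, but it runs the same forward-pass, backward-pass, gradient-descent cycle described above.

```python
# Toy illustration only: a tiny "bigram" next-word predictor trained with
# stochastic gradient descent on an invented mini-corpus.
import numpy as np

corpus = "the cat sat on the mat . the dog sat on the rug .".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
V = len(vocab)

rng = np.random.default_rng(0)
logits = rng.normal(scale=0.1, size=(V, V))  # weights: row = current word, column = candidate next word

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

lr = 0.5
for epoch in range(200):
    for cur, nxt in zip(corpus[:-1], corpus[1:]):
        i, j = idx[cur], idx[nxt]
        probs = softmax(logits[i])   # forward pass: the model's guess for the next word
        grad = probs.copy()
        grad[j] -= 1.0               # backward pass: how wrong each guess was, and in which direction
        logits[i] -= lr * grad       # gradient descent: nudge the weights toward the likely sequence

# After training, each word's most probable successor is whatever followed it in the corpus.
for w in ["sat", "on", "mat"]:
    print(w, "->", vocab[int(np.argmax(logits[idx[w]]))])
```

Real chatbots run this same kind of nudging across billions of weights and vastly more text, which is exactly why their output gravitates toward the most common patterns in whatever they were trained on.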
While GenAIs like ChatGPT might seem smart on the surface, it’s easier to outsmart one than you think. I routinely give newly released GenAI models what I call the Unemployed Mermaids Test. The test is simple: get the model to predict something that is statistically unlikely (like the phrase “unemployed mermaids”). To date, not a single generative AI model has passed, because producing the improbable runs contrary to how they are designed.
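If you want to see what “statistically unlikely” looks like under the hood, here is an illustrative sketch. It uses the small, openly downloadable GPT-2 model through the Hugging Face transformers library (a stand-in for illustration, not any of the chatbots named above) and simply scores how surprising the model finds a phrase.

```python
# Requires: pip install transformers torch
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def avg_logprob(text):
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss   # average cross-entropy of predicting each next token
    return -loss.item()                      # higher = less surprising to the model

print("unemployed workers :", avg_logprob("unemployed workers"))
print("unemployed mermaids:", avg_logprob("unemployed mermaids"))
# The second score should come out noticeably lower: the model finds "mermaids"
# a far less likely continuation of "unemployed" than "workers".
```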
The outcome of how generative AIs are trained is inevitable, mathematical even. Drop a prompt into ChatGPT and generic language will bend itself around the pattern you set, like dropping a marble onto a trampoline while you stand at its center. Where does the marble roll? To the center. The average. The mathematical definition of normal.
But in many situations, normal isn’t good enough. As it turns out, in order to get to the truth or the most appropriate response, it is not enough to average out language; we need the kind of specific and situated knowledge that makes people and situations unique.
Your professors may or may not be AI experts, but they are experts in their own fields. When AIs generate generic language about those fields, nobody is more qualified than they are — or has more of the specific and situated knowledge needed — to evaluate that output and spot AI bullpucky. And while professors have been and will continue to be divided over how much GenAI use is acceptable in their classes and for different tasks — and many are still figuring out how to adapt to GenAI themselves — we would be wise to let them help us build the kind of specific and situated expertise that they have.
If nothing else, we will then be better prepared, when we hear corporate or political leaders spouting bullpucky about a topic, to determine for ourselves what’s actually true.
By: Abe Nemon
anemon@vwu.edu