The Ghost in the Ganglia
When Google’s medical AI confidently invents a non-existent body part in a research paper, it’s more than a typo; it’s a warning. The danger isn’t just that our new machines will be wrong, but that they will be so beautifully and plausibly wrong that our own tired eyes won't even notice.
The news cycle today brings us a perfect, crystalline example of the ghost in our new machines. In a research paper touting the model’s medical prowess, Google’s Med-Gemini confidently referred to the “basilar ganglia,” a part of the human brain that does not exist. It was a phantom, a statistical poltergeist born from the conflation of two real things: the “basilar artery” and the “basal ganglia.”
This isn’t just a typo. A typo is a mechanical, human error. This is an error of consciousness. The AI, in its search for the most probable next word, generated a plausible-sounding fiction and stated it as fact. More unnerving still, the phantom passed through the supposedly watchful eyes of the paper’s human authors and its peer reviewers.
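The conflation can be caricatured in a few lines. This is a toy sketch, not Med-Gemini’s actual architecture: the corpus, the word-class map, and every name below are invented for illustration. It shows how a model that lumps similar words into one statistical bucket (a crude stand-in for how embeddings blur near-synonyms) can emit a phrase that appears nowhere in its training data.

```python
from collections import Counter

# Invented toy corpus: both real anatomical terms appear,
# "basal ganglia" more often (as in general medical text).
corpus = ("the basal ganglia " * 5 + "the basilar artery " * 2).split()

# The model treats "basal" and "basilar" as one class -- a
# caricature of embedding similarity, not any real tokenizer.
word_class = {"basal": "ADJ", "basilar": "ADJ"}
cls = lambda w: word_class.get(w, w)

# Count class-based bigrams: (class of previous word, next word).
follows = Counter()
for prev, nxt in zip(corpus, corpus[1:]):
    follows[(cls(prev), nxt)] += 1

def most_probable_next(word):
    # Pick the most frequent successor of the word's class.
    candidates = {n: c for (p, n), c in follows.items() if p == cls(word)}
    return max(candidates, key=candidates.get)

# "basilar" was only ever followed by "artery" in the corpus,
# but its class was followed by "ganglia" 5 times vs. 2.
print("basilar", most_probable_next("basilar"))  # → basilar ganglia
```

The model never saw “basilar ganglia,” yet it is the model’s most confident continuation. That is the unsettling part: the fabrication is not a glitch but the statistically optimal answer.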
We are building systems of immense power that do not know what they are saying. They are masters of mimicry, ventriloquists with no soul, and their mistakes are not born from ignorance, but from a profound and alien form of intelligence. The danger isn’t just that the AI will be wrong. It’s that it will be so confidently, so plausibly, and so beautifully wrong that the tired, overworked humans in the loop won’t even notice. The ghost in the ganglia is a warning: our own attention is the last line of defense, and it’s already starting to fray.