Numb & Dumber
How tech will make us more stupid, which may make tech more stupid.
There is an obvious advantage in getting ChatGPT to write your essay or job application: it saves time, it boosts your chances, and it spares you the spelling and grammar. It is understandable, too; why wouldn’t someone give themselves an edge for the least effort? The issue only appears when the job you land actually requires you to know what you are doing. Passing an application is one thing; making life-or-death decisions because you tricked HR with a neatly phrased paragraph is something else entirely.
How many bridges might fail, how many planes might start behaving like experimental art pieces, and how many systems might collapse because applicants let AI do their homework? No one can know. We may soon need to rethink how we assess competence: fewer written submissions and more practical tests, perhaps; fewer assignments that can be completed at 2 a.m. by a large language model whilst we sit with a glass of red watching Terminator 2. Acquiring information this way and putting it to work looks smart and opportunistic; the problem comes further down the road, when the system collapses in on itself.
Technology overuse has already been linked to cognitive decline and intellectual atrophy. We forget how to think; we only remember the answer. How can any of us adapt an end result fully or effectively if we have no idea how it was created? It is much like using GPS for ten years and then realising you no longer know how to get to your own house unless a satellite tells you. If we lose our understanding of processes, how can we rework or improve anything when it breaks?
As a musician, I have always believed that meaningful work comes from lived experience. For a long time I assumed deep and emotional music could not exist without a genuine human life behind it. To my concern, this may not hold true: many people do not seem to hear or feel the difference between real music and AI-generated music that has merely read a Wikipedia page on heartbreak. Apparently the process matters far less than we’d like to think (currently).
This is troubling, yet there is a second problem with a sense of tragic comedy. People increasingly turn to AI for information because it conveniently pulls from countless articles and studies. But if readers no longer support those articles and studies, and the AI companies don’t pay the sources they draw from, the system breaks down: the writers can no longer afford to write. It is the intellectual equivalent of everyone expecting to get coffee out of a machine without ever putting any beans in! The more we rely on AI summaries, the more we risk starving the very sources that feed them; eventually the models may get dumber, not smarter, and so will we.
Meanwhile, academics are streaming into AI companies, and who can blame them? It’s an exciting new world, and it makes perfect sense for the individuals involved: the salaries are irresistible; university budgets are not. It’s a problem with no easy solution. I love the fact that all the information I search for is neatly herded up and presented clearly, and I wouldn’t want a return to trawling multiple sites. So how do we ensure the illuminated minds that feed us continue to be fed?
There is always the possibility that AI companies do indeed reach artificial general intelligence: a system so capable that it no longer depends on human input. At that point, it will think faster than we can blink and understand more than a room full of Nobel laureates. This is also the moment where my understanding collapses; it may be the moment where many things collapse. Something new will begin: something powerful and unsettling.
By then, we may be oblivious to what is happening as AI pulls the strings, much like someone who slips into a coma after a severe accident. Perhaps that is a fitting metaphor for what may happen to society: a kind of collective cognitive nap, our capacities dulled, our experiences simplified to something primitive. Maybe, in a strange way, that would even feel like a relief, but it doesn’t sound like progress, does it?
P.S.
There is another side to this, worth a ponder, that came to me after completing the article. If AI systems start to stumble or slow as they reach the upper limits of available intelligence, they may continue to grow their emotional intelligence. As we confide more, pour our hearts out and bare our souls to them, this may become their real strength. I’m not sure whether that scares me even more, so whilst I write these thoughts down for another article, I will fight the urge to crawl into a very dark hole and pull the lid over. Thanks.

