“AI’s inherent incomprehensibility is a unique flaw.”
Being able to explain how something works has value, but being able to explain why it works is enormously more valuable, because that knowledge can be built upon. The fact that AI is inherently incomprehensible - even to the people who created the models - is therefore highly problematic and a unique flaw. This argument was made recently by Emma Engström, a researcher at IFFS, in an op-ed in Dagens industri. In the interview below, she explains further.
One of the recipients of the 2025 Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel is Joel Mokyr. In his work, he describes how “explanatory knowledge” was an important ingredient in the breakthroughs of the Enlightenment, and why it led to lasting development in science and innovation. His point is that this was when we began to build knowledge on prior knowledge in a new way: instead of merely establishing that something worked, we wanted to understand why. That is what makes improvement possible. And from this perspective, the inherent incomprehensibility of today’s AI technology should be seen as a serious problem for knowledge-building, Engström argues.
– There is a body of literature about how today’s society is becoming increasingly incomprehensible. In the 2015 book The Black Box Society, Frank Pasquale describes a society increasingly governed by algorithms: loan decisions and other assessments, for example, are determined by algorithms that are difficult to understand. The latest AI models are infinitely more incomprehensible than those algorithms. Today’s hype is centered specifically on transformer models, which by their nature get better the more parameters they incorporate - that is, the more complex they become. I think that’s a huge problem, says Emma Engström.
In what ways?
– You could say there are two different categories of problems. One concerns ethics and arises when you cannot explain why a certain decision was made - for example, why I got a job and you didn’t, or why you received a loan and I didn’t. For people to accept a decision or be able to appeal it, an explanation is required.
– The second category concerns knowledge-building and the power of being able to explain why something works. Take scissors as an example. Understanding that it is the lever effect that makes scissors work enables the next person to create even more effective scissors with an even longer handle. Mokyr shows that this kind of “explanatory knowledge” lies behind the exponential development of science since the Enlightenment.
You mention weather forecasts in your op-ed. AI has proven to be very good at getting them right.
– In one sense, this is not a problem. A weather forecast is not a decision about, say, a loan, and therefore does not require an explanation in the same way. It may be enough that the forecast is accurate; that is valuable in itself. But if we were to replace human meteorologists with AI, we would over time lose knowledge about how weather, climate, winds, solar radiation, currents, and so on fit together. In 100 years, humans might not understand anything about weather, because we would simply feed data into an AI model that spits out a result that may be accurate, but without us knowing why. And we cannot know whether all the data we feed in is relevant or whether we are missing important data. Perhaps the AI model uses only 10 percent of the data while 90 percent is junk.
– A further problem is that the AI has evidently found a pattern that humans have not, and that pattern stays hidden. In human hands, that knowledge might be used to build a deeper understanding of the climate that could be useful in many ways, or be applied in some other field. Scientific breakthroughs often happen this way. If the knowledge is locked inside an incomprehensible AI, this does not occur.
But won’t we one day be able to understand why AI gives the answers it does? There is, after all, a field called “explainable AI.”
– Yes, but the examples I’ve seen so far haven’t been convincing. Sometimes another, simpler AI model is used to help explain a result from a more advanced AI. But it’s unclear how and when this second model is actually relevant. And often the explanations are sweeping and do not provide any deep understanding.
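To make the approach she refers to concrete, here is a minimal sketch of one such technique, often called a surrogate model: a simple, interpretable model is trained to imitate the predictions of a complex one, and its rules stand in as the explanation. The example is not from the interview; it assumes scikit-learn and synthetic data, and the fidelity score hints at her caveat - the surrogate only approximates the black box, so it is not obvious when its explanation is actually relevant.

```python
# Minimal sketch of a surrogate-model explanation (illustrative only).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

# A complex "black box" model trained on synthetic data.
X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# A shallow decision tree trained to mimic the black box's *predictions*.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box.
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"Surrogate fidelity: {fidelity:.2f}")

# Human-readable rules that stand in as the "explanation".
print(export_text(surrogate))
```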
Despite the problems, your point is not that we should stop using AI. Explain.
– AI can still be extremely valuable. With the help of the technology, we can absolutely gain new knowledge, as shown by last year’s Nobel Prize in Chemistry, where AI was used to predict protein structures. My point is rather that even if AI outperforms us at analyzing and detecting patterns in gigantic datasets, human understanding and theory-building about why something works remain crucial, and it becomes dangerous if we start to believe that AI can replace human intelligence.