Ten years ago, the Nobel Laureate Daniel Kahneman published the book “Thinking, Fast and Slow” (Farrar, Straus and Giroux, 2011), which quickly gained cult status. In brief, the book describes how people think with two parallel systems: System 1, a “fast, intuitive system” that makes snap and usually unconscious decisions that we often cannot explain afterwards, and System 2, which is sequential, logical and slow. Both are essential, but they solve very different problems, and sometimes they end up conflicting with each other. For most of our decisions, we use the fast system, and it often takes a deliberate effort to switch to the slow system when the fast one is insufficient. When we recognize a face, we do not know why; we just do. That’s System 1 at work. When we solve a complex problem, like a crossword puzzle or a math problem, and cannot immediately see the answer, we put System 2 to work. System 1 uses parallel data processing, which we do not really notice. System 2 requires effort; we normally solve only one complex problem at a time.
The book is immensely valuable; if you have not read it yet and want to understand how we humans work, I strongly recommend it. Kahneman’s conclusion is that the parallel structure between these two systems is essential for our ability to function and reason. If we were to do everything analytically and “slowly,” we would never survive. If we were to do everything with the fast, intuitive system, we would never be able to solve complex problems.
I recently heard an interview with Kahneman on Lex Fridman’s podcast. Fridman, an AI researcher, has done several excellent interviews about AI and deep learning; in this episode he interviewed the psychologist Kahneman, previously awarded the Nobel Prize in economics, about deep learning and AI. You can find the interview here.
Before we go further, we should define what we mean by AI. What I am referring to here is a large network of simple non-linear operators, an artificial neural network (ANN, often called deep learning), where each connection between two nodes has a “weight” that can be changed. The network is “trained” through various methods of adjusting the weights. The ANN creates “output” from “input” through its structure and the weights assigned to its connections. The largest ANNs today have hundreds of billions of parameters (weights).
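To make that definition concrete, here is a minimal sketch of the idea in Python; it is my own illustration, not something from the original article. It shows a tiny network whose entire behavior sits in the weights on its connections, trained by nudging those weights until the output matches the examples. The toy task (learning XOR with NumPy) and all of the numbers are assumptions chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# The XOR problem: four input pairs and the desired output for each.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# A 2-4-1 network: every connection between two nodes carries a weight.
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(5000):
    # Forward pass: the output is produced from the input purely through
    # the structure of the network and the current weights.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass (gradient of binary cross-entropy): nudge every weight
    # a little in the direction that reduces the prediction error.
    grad_out = out - y
    grad_h = (grad_out @ W2.T) * h * (1.0 - h)
    W2 -= lr * h.T @ grad_out
    b2 -= lr * grad_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ grad_h
    b1 -= lr * grad_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))  # after training, close to [[0], [1], [1], [0]]
```

The principle is the same in the networks with billions of parameters mentioned above; only the scale and the training machinery differ.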
There are other types of AI, but here I’m referring to ANNs.
Back to the interview with Kahneman, which was fascinating and opened my eyes to what is really going on right now. The sequential system, System 2 in Kahneman’s book, is really what we have used computers for since they were invented. They can help us with tasks we can describe algorithmically, carry out large calculations, perform operations that we can describe mathematically, and so on. Computers have been a great help to our own System 2 thinking.
But they have, at least until the arrival of ANNs, been poor at the tasks that we ourselves handle using System 1. I remember when I was a young graduate student in the 80s, trying to use algorithms for face recognition in images. To put it mildly, I did not get very far.
Radiology is actually quite interesting in this respect. When Sectra began working with digitization in radiology, I decided to sit with a number of radiologists for several days to understand how they worked. I particularly remember an afternoon with Dr. Göran Karner, a radiologist in Motala, Sweden, who was reviewing thorax images. At lightning speed, he looked through several images and sorted them into two piles: one he called healthy, the other sick. As an engineer, I wanted to understand why people do things, so I asked him how he could make the distinction so quickly and how he could tell what illness the cases in the “sick” pile had. His answer was simply, “They’re sick because the images look sick. I’ll look closer at them later and decide what it is. Sometimes I’m wrong, but it’s rare.”
He put aside the “healthy” images and did not spend more time on them. The time he spent reviewing the images was extremely short, maybe 20 to 30 seconds per case, two images per case. Then he went back to the “sick” pile, reading through patient histories and looking closely at the images. As it turned out, his snap judgment that they were sick in some way was usually correct.
Little did I understand then that 30 years later, I would listen to a podcast episode that seemed to explain this. System 1, the fast and parallel track, is exactly what we are trying to recreate with AI today. Its conclusions often cannot be explained algorithmically, which is precisely the kind of explanation I was trying to get out of Dr. Karner. I thought he just did not feel like explaining it. Now I realize that he did not actually know. The “sick” images just looked “sick.” That was it. An incredibly irritating explanation for a young engineer.
But then, during the closer analysis, Dr. Karner used both the parallel and the sequential system when drawing conclusions.
What if this is what we need to do in medicine? An ANN by itself is not enough. It can support us by strengthening our own System 1. But it cannot replace us, because we work with both System 1 and System 2.
To even begin to approach human-like decision making, we need both systems working in parallel. The old algorithmic decision-making methods (usually based on Bayesian decision theory, in other words purely algorithmic) must be combined with ANNs to perform anywhere near as well as radiologists, but ANNs alone might never be enough to get us all the way there. We will not come close to a human’s ability to diagnose and make decisions unless we manage to combine both types of systems in our computers.
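As a sketch of what such a combination could look like in the radiology example above, consider the following Python fragment. Everything in it is a hypothetical illustration of mine: the function ann_looks_sick_probability stands in for a trained network’s “this looks sick” score (System 1), and Bayes’ rule plays the role of the explicit, sequential reasoning (System 2) that weighs the score against how common the disease actually is. The numbers are invented for the example and have no clinical meaning.

```python
# Illustrative only: combining a fast pattern-recognition score with
# explicit Bayesian reasoning. All names and numbers are hypothetical.

def ann_looks_sick_probability(image) -> float:
    """Stand-in for a trained network's P(image looks sick | disease present)."""
    return 0.90  # hypothetical output for this example


def posterior_probability_of_disease(p_image_given_disease: float,
                                     p_image_given_healthy: float,
                                     prior_disease: float) -> float:
    """Bayes' rule: fold the network's evidence into an explicit prior."""
    p_image = (p_image_given_disease * prior_disease
               + p_image_given_healthy * (1.0 - prior_disease))
    return p_image_given_disease * prior_disease / p_image


# The fast system says the image "looks sick"; the slow system weighs that
# against how rare the disease actually is in the screened population.
likelihood_sick = ann_looks_sick_probability(image=None)
likelihood_healthy = 0.05   # hypothetical "looks sick" rate among the healthy
prior = 0.01                # hypothetical 1% prevalence

print(posterior_probability_of_disease(likelihood_sick, likelihood_healthy, prior))
# Roughly 0.15: even a "sick-looking" image still warrants the closer,
# sequential review that Dr. Karner performed on his second pass.
```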
I would strongly recommend both “Thinking, Fast and Slow” and the podcast for those interested.
This article was originally published in Swedish in MedTech Magazine: Torbjörn Kronander: Vad är AI, egentligen? (“What is AI, really?”)