It would be great to have a theory of what a mind is. A really good theory. One that explains the mind so well that we can put it into code. There is one like that out there. It’s called the theory of predictive processing, as explained by Andy Clark in Surfing Uncertainty and wonderfully summarized by Alex from Slate Star Codex.
I can’t explain it any better than Alex already did:
The brain is a multi-layer prediction machine. All neural processing consists of two streams: a bottom-up stream of sense data, and a top-down stream of predictions. These streams interface at each level of processing, comparing themselves to each other and adjusting themselves as necessary.
The bottom-up stream starts out as all that incomprehensible light and darkness and noise that we need to process. It gradually moves up all the cognitive layers that we already knew existed – the edge-detectors that resolve it into edges, the object-detectors that shape the edges into solid objects, et cetera.
The top-down stream starts with everything you know about the world, all your best heuristics, all your priors, everything that’s ever happened to you before – everything from “solid objects can’t pass through one another” to “e=mc^2” to “that guy in the blue uniform is probably a policeman”. It uses its knowledge of concepts to make predictions – not in the form of verbal statements, but in the form of expected sense data. It makes some guesses about what you’re going to see, hear, and feel next, and asks “Like this?” These predictions gradually move down all the cognitive layers to generate lower-level predictions. If that uniformed guy was a policeman, how would that affect the various objects in the scene? Given the answer to that question, how would it affect the distribution of edges in the scene? Given the answer to that question, how would it affect the raw-sense data received?
Source: Slate Star Codex
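The quoted architecture can be turned into a toy sketch: each level holds a top-down guess, receives bottom-up input, and only the mismatch (the prediction error) travels upward while the guess adjusts. This is a minimal, hypothetical illustration, not how any real brain or library does it; the class name, learning rate, and scalar "sense data" are all made up:

```python
class Layer:
    """One level of the prediction hierarchy: compares a top-down
    prediction with bottom-up input and passes the mismatch upward."""

    def __init__(self, learning_rate=0.1):
        self.prediction = 0.0            # current top-down guess
        self.learning_rate = learning_rate

    def step(self, bottom_up):
        # prediction error: how wrong the top-down guess was
        error = bottom_up - self.prediction
        # adjust the guess toward the data ("adjusting as necessary")
        self.prediction += self.learning_rate * error
        return error                     # only the surprise travels upward


layer = Layer()
errors = [abs(layer.step(1.0)) for _ in range(50)]
# after repeated exposure to the same input, the surprise shrinks:
print(errors[0], errors[-1])
```

The point of the sketch is the shape of the loop, not the arithmetic: prediction meets data, only the error propagates, and a well-trained layer becomes hard to surprise.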
Think of your mind as structured into prediction departments (PDs). Each department is responsible for a certain set of predictions.
The higher departments build their predictions on top of lower departments. For example, to predict a square, you need four right-angle edges:
And in order to predict an edge, you need two right-angle lines that touch:
And on and on it goes….
While the bottom layers just try to figure out what a thing is, the higher departments predict how it will behave. Predictions get more sophisticated the higher up you go.
Just take a moment to appreciate that. Somewhere in your brain, there is a department that is very good at predicting where a basketball will land when you throw it. This department is basically a physics simulation. And it works not by knowing math or the laws of physics. Not at all. It works by getting predictions from the Physics PD, which is itself built on top of predictions from the thousands of layers beneath it.
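A department can get good at "physics" purely from examples, never seeing an equation. A hypothetical sketch: the world generates noisy landing distances (which do obey physics), and the department just fits a pattern to what it has observed. The setup (45° throws, the noise level, the polynomial fit) is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
g = 9.81
# experience: initial speeds of past 45-degree throws and where they landed
v = rng.uniform(5, 15, size=200)
distance = v**2 / g + rng.normal(0, 0.1, size=200)   # noisy observations

# the "department" fits a pattern -- it knows nothing about g or kinematics
coeffs = np.polyfit(v, distance, deg=2)
predict = np.poly1d(coeffs)

print(predict(10.0))   # close to the true value 10**2 / 9.81
```

The learner ends up behaving like a physics simulation, yet all it ever did was compress past experience into a predictive pattern.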
Predictive processing, experienced.
Read this:
What does it say? Did you catch the second “THE”? Chances are you didn’t. Sometimes your top-down prediction is so strong that the bottom-up sense data is ignored. Your higher departments do that when they are very confident about their predictions. They can already see what it is from the the shape of it, so there’s no need to be super picky about it.
Here’s another example: Did you notice the double “the” in my last sentence?
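As a toy rule, the override works like this: if the top-down stream is confident enough, the percept *is* the prediction, and the bottom-up data never reaches awareness. A hypothetical sketch (the function, tokens, and threshold are all made up for illustration):

```python
def perceive(sense_tokens, predicted_tokens, confidence, threshold=0.9):
    """When top-down confidence exceeds the threshold, the percept is
    the prediction itself -- the raw data is never experienced."""
    if confidence >= threshold:
        return predicted_tokens      # you "see" what you expected
    return sense_tokens              # low confidence: defer to the data


# what's actually on the page vs. what the reader expects
on_page = ["THE", "CAT", "IN", "THE", "THE", "HAT"]
expected = ["THE", "CAT", "IN", "THE", "HAT"]

print(perceive(on_page, expected, confidence=0.95))
# -> ['THE', 'CAT', 'IN', 'THE', 'HAT']  (the repeated "THE" is never seen)
```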
Both streams are probabilistic in nature. The bottom-up sensory stream has to deal with fog, static, darkness, and neural noise; it knows that whatever forms it tries to extract from this signal might or might not be real. For its part, the top-down predictive stream knows that predicting the future is inherently difficult and its models are often flawed. So both streams contain not only data but estimates of the precision of that data. A bottom-up percept of an elephant right in front of you on a clear day might be labelled “very high precision”; one of a vague form in a swirling mist far away might be labelled “very low precision”. A top-down prediction that water will be wet might be labelled “very high precision”; one that the stock market will go up might be labelled “very low precision”.
Source: Slate Star Codex
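The "precision labels" in the quote map naturally onto inverse-variance weighting: the stream with the higher precision gets the larger say in the final percept. A minimal sketch, assuming scalar signals and made-up precision numbers:

```python
def fuse(prediction, pred_precision, observation, obs_precision):
    """Precision-weighted blend of the top-down prediction and the
    bottom-up observation (inverse-variance weighting)."""
    total = pred_precision + obs_precision
    return (pred_precision * prediction + obs_precision * observation) / total


# elephant right in front of you on a clear day: trust the senses
print(fuse(prediction=0.0, pred_precision=1.0,
           observation=5.0, obs_precision=100.0))   # ~4.95, near the data

# vague form in swirling mist: the prior dominates
print(fuse(prediction=0.0, pred_precision=100.0,
           observation=5.0, obs_precision=1.0))     # ~0.05, near the prior
```

The same data point lands in completely different percepts depending only on which stream claims the higher precision.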
You might wonder why we are built this way. Why don’t we just get the sense data as it is? I think this example illustrates it quite beautifully:
Reality is messy. If we want to navigate a messy world, our prediction departments can’t be super precise about every single thing. If it looks sort of similar, it’s good enough. Computers are precise. When they get data that is even slightly different from what they expected, they give up and throw an error. Not humans. We can extract meaning from data even when the data is not exactly as expected.
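The contrast can be made concrete. A hypothetical sketch: a strict lookup fails on anything that isn't a pixel-perfect match, while a closest-prediction lookup happily classifies a wobbly hand-drawn square. The prototypes and side-length encoding are invented for illustration:

```python
import math

prototypes = {
    "square": [1.0, 1.0, 1.0, 1.0],           # four equal sides
    "tall rectangle": [1.0, 3.0, 1.0, 3.0],
}


def strict_lookup(shape):
    # the computer's approach: exact match or failure
    for name, proto in prototypes.items():
        if shape == proto:
            return name
    raise ValueError("unrecognized shape")     # "give up and throw an error"


def fuzzy_lookup(shape):
    # the mind's approach: the closest prediction wins, even for messy data
    return min(prototypes,
               key=lambda name: math.dist(shape, prototypes[name]))


wobbly_square = [1.02, 0.97, 1.01, 0.99]       # hand-drawn, imperfect
print(fuzzy_lookup(wobbly_square))             # -> square
# strict_lookup(wobbly_square) would raise ValueError
```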
But more importantly, the actual raw data is just not very pretty. Our eyes have a blind spot, where there’s no data coming in. Not only that, most of what your eyes get is pretty crappy. Your eyes only get good data in the center of your visual field. That’s where you see a sharp image in full color. Everywhere else it’s kind of blurry and desaturated.
So yeah, raw data sucks. You want the responsible prediction departments to create something more pleasing. Put some color in … fill the blind spot… just make the damn thing look pretty!
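Filling the blind spot is, at its simplest, interpolation from the surrounding data. A crude stand-in for perceptual fill-in, assuming a one-dimensional strip of brightness values with NaN marking the blind spot:

```python
import numpy as np

# a strip of "retinal" brightness values; NaN = blind spot, no data at all
signal = np.array([0.2, 0.3, 0.4, np.nan, np.nan, np.nan, 0.7, 0.8])


def fill_blind_spot(values):
    """Fill missing samples from their neighbours by linear
    interpolation -- a crude stand-in for perceptual fill-in."""
    x = np.arange(len(values))
    good = ~np.isnan(values)
    return np.interp(x, x[good], values[good])


print(fill_blind_spot(signal))
# the gap gets plausible values; you never notice anything was missing
```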
Ok, that’s the upside. What’s the downside? The downside is that you don’t get to experience reality, even if you want to. All you ever experience are your high-level predictions. And that can lead to mistakes.
Here is a mildly frustrating example:
Most people see moving circles, even though the circles are completely still.
Another one:
It looks like the yellow and blue bars are moving. They are completely still. The only thing moving is the black bars. Your prediction here is so confident that you never quite get to see the image as it is. I can’t help but wonder: what are some of the things we are missing? And even more interesting, what are we seeing that isn’t really there? But that’s a topic for another time. For now, let’s see what we can learn from all this for the mind we are building.
Next: A roadmap to the mind
Thanks to Coletta Braun and Johann Theisen for reading drafts of this.