Resisting the Instinct to Seek One Answer
(This post was translated from the Chinese original into English by ChatGPT.)
Under Karl Friston’s Free Energy Principle, life is driven by the need to minimize free energy (which, within this framework, is equivalent to uncertainty). It is a solid theory, not least because at this level of description there are no competing theories, and without a theory there is no tool to think with. Its shortcoming, however, is its vastness: how, exactly, do you learn a world model by minimizing uncertainty?
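For orientation, the standard variational formulation (a textbook decomposition, not something spelled out in this post) writes free energy as a complexity term minus an accuracy term, where $q(s)$ is the agent’s approximate belief over hidden states $s$, $p(s)$ the prior, and $p(o \mid s)$ the likelihood of observations $o$:

$$
F \;=\; \underbrace{D_{\mathrm{KL}}\big[\,q(s)\,\big\|\,p(s)\,\big]}_{\text{complexity}} \;-\; \underbrace{\mathbb{E}_{q(s)}\big[\log p(o \mid s)\big]}_{\text{accuracy}}
$$

Minimizing $F$ therefore favors models that explain observations while staying close to a simple prior, which is precisely the tension the rest of this post is about.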
Of course, that’s an incredibly difficult question, so I’ll focus on a real-world implication: when your sole goal is to minimize uncertainty, your world model collapses.
Specifically, you can represent your knowledge as a distribution. Ideally, your understanding of the world would be a very complex distribution: you sample from the world, and from those samples you gradually construct this distribution (your world model). Naturally, you want the distribution to be less complex, or at least easy to sample from, so that it helps you think faster next time (or even lets you train a simple model-free heuristic to act on). You also want its uncertainty to be low, so that you feel reassured, believing you understand the world with some certainty.
But when you constrain your model to be both simple and low in uncertainty, your inherently complex world model (the world is complex, after all) degrades into a simplistic one. There is no denying that such a model is (extremely) useful: most people, shaped by culture and education, share a simplified model and use it to influence the world.
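To make the collapse concrete, here is a minimal sketch (my own illustration; the post itself specifies no objective or model family). The “world” is a bimodal mixture, the “world model” is a single Gaussian, and fitting it by minimizing the reverse KL divergence $\mathrm{KL}(q\,\|\,p)$ makes the model lock onto one mode and report low uncertainty while ignoring the other half of the world. All distributions and hyperparameters here are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_p(x):
    """Log-density of the 'world': an even mixture of N(-3, 0.7^2) and N(+3, 0.7^2)."""
    c1 = np.exp(-0.5 * ((x + 3.0) / 0.7) ** 2)
    c2 = np.exp(-0.5 * ((x - 3.0) / 0.7) ** 2)
    return np.log(0.5 * (c1 + c2) / (0.7 * np.sqrt(2 * np.pi)) + 1e-300)

def dlogp_dx(x, h=1e-4):
    """Finite-difference derivative of log p, used in the reparameterized gradient."""
    return (log_p(x + h) - log_p(x - h)) / (2 * h)

def fit_simple_model(n_iter=3000, n_samples=512, lr=0.01):
    """Fit q = N(mu, sigma^2) to p by stochastic gradient descent on KL(q || p)."""
    mu, log_sigma = 0.5, 0.0  # start slightly off-center, with unit width
    for _ in range(n_iter):
        sigma = np.exp(log_sigma)
        eps = rng.standard_normal(n_samples)
        x = mu + sigma * eps  # reparameterized samples from q
        # Gradients of E_q[log q(x) - log p(x)] w.r.t. mu and log(sigma):
        # the log q term contributes 0 and -1 respectively; the -log p term
        # contributes -dlogp/dx times dx/dparam.
        g = dlogp_dx(x)
        grad_mu = np.mean(-g)
        grad_log_sigma = -1.0 + np.mean(-g * sigma * eps)
        mu -= lr * grad_mu
        log_sigma -= lr * grad_log_sigma
    return mu, np.exp(log_sigma)

mu, sigma = fit_simple_model()
print(f"fitted world model: mu = {mu:+.2f}, sigma = {sigma:.2f}")
# Typically lands near one mode (mu ~ +3, sigma ~ 0.7): low uncertainty,
# easy to sample from, and blind to the other mode entirely.
```

The fitted Gaussian typically settles near one mode with a width of about 0.7, far narrower than the true spread of the mixture: a simple, confident model that has quietly discarded half of reality.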
Yet if your model is both simple and carries little uncertainty, what is the point of thinking? Such a simple model guarantees that you stay trapped in an attractor of mental space: with such tight constraints on uncertainty and complexity, you can hardly update your model from errors unless those errors are catastrophic enough to shatter it.
I’m reminded of the clinical symptom of rumination in depression. These individuals receive error signals, but their model cannot explain those errors, so they keep replaying painful experiences, trying to adapt their model to them. Eventually, they may introduce a strange, overly complex model (irrational beliefs) to make sense of the world. At that point, this kind of “replaying” is likely unhelpful and may turn them into someone peculiar.
However, some people are fortunate. They can consistently place themselves in environments where they aren’t exposed to errors (which, of course, is a very good strategy). (The only useful lesson I learned from my undergraduate psychology education is that the best way to solve problems is to avoid encountering them.)
If you’re not that lucky, or if you want to be an interesting person (God, talking to a bag of heuristics is just too boring), you may need to actively resist the instinct to minimize uncertainty.
One reason I fear aging is that, due to the inevitable decline in cognitive capacity and shifts in social roles, we are forced to rely on simple models to make quick decisions. Time and mental energy become too scarce (cognitive decline), and the number of decisions we face becomes overwhelming (social role changes). A model-free system that quickly makes heuristic decisions, even if flawed, often becomes preferable to a model-based one.
Gradually, we helplessly collapse into simplicity… and slowly lose ourselves.