The Free Energy Argument for Value Realism
A materialist argument that your utility function is wrong
You have one goal in life, whether you realize it or not. This goal is to reduce free energy in your brain right now.
The free energy model of whole brain cognition makes this clear: reducing its own free energy is what your brain is doing. Whatever goals you think you have - such as, say, impacting the world positively, reducing suffering, saving 'quality-adjusted life years', or collecting postage stamps featuring pictures of beetles driving Beetles on Betelgeuse - these are all instrumental subgoals of your one true goal: reducing free energy in your brain.
That’s your utility function. Woman, behold your son.
There are common objections here, such as ‘you are not your brain’ or ‘you have a will and a soul which exist outside of material reality.’ Those objections violate the tenets of materialism, which, for the purposes of this post, I'll take as a given, along with Friston’s ‘free energy minimization’ or ‘predictive processing’ framework of whole brain cognition.
This idea has a direct consequence that will make most sincere materialists uncomfortable: a person can be wrong in what they want. Your values can be wrong. And, more importantly, they likely are.
The only alternative here seems to be that predictive processing is wrong, or that the values our brains are computing 'should be' considered totally independent of the values we personally claim to have - which is itself a value claim, or else not something worth taking seriously.
You obviously want things. Your brain clearly has something to do with this.1
You can be wrong in what you want.
The argument is simple: there is a true value system operating inside your brain. It is not a function of your personal experience, your cultural background, or the choices you’ve made. It’s a function of the physical nature of your biology.
The things that you assign value to, like money or status or good food or helping people out of suffering - those value assignments are predictions. Predictions can be wrong. Your brain is simply trying to navigate its energy landscape, moving in the direction of lower free energy. When you experience the desire to organize a room, and then you go organize the room, that desire is a prediction: namely, that the free energy in your brain will be lowered by acting on it.
The free energy model says that your brain predicts this action as the one likely to reduce free energy the most. When you perform the action, and the free energy in your brain does indeed go down, you feel better. You feel calmer and more at peace. The prediction is validated, the model is updated, and Bayesian weights are shifted.
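Here's a toy sketch of that loop in code - my own illustration, not anything out of Friston's formalism, and the actions, numbers, and delta-rule update are all made up for the example. Each candidate action carries a predicted free-energy drop; you act on the biggest one, observe what actually happened, and nudge the prediction toward reality:

```python
import random

# Toy illustration (not Friston's math): predicted free-energy drop per action.
predicted_drop = {"clean_room": 5.0, "scroll_phone": 8.0, "go_for_run": 3.0}
learning_rate = 0.1

def actual_drop(action):
    """Stand-in for the world: the real (noisy) free-energy reduction you get."""
    true_means = {"clean_room": 9.0, "scroll_phone": 1.0, "go_for_run": 7.0}
    return random.gauss(true_means[action], 1.0)

for step in range(200):
    # "Desire" = the action currently predicted to lower free energy the most.
    action = max(predicted_drop, key=predicted_drop.get)
    observed = actual_drop(action)

    # Prediction error: experienced minus expected reduction.
    # Positive error feels like pleasure; negative error feels like disappointment.
    error = observed - predicted_drop[action]

    # Shift the prediction toward what actually happened (a simple delta rule).
    predicted_drop[action] += learning_rate * error

print(predicted_drop)  # the "values" after some experience
```

Run it and the agent starts out chasing the action it wrongly rates highest, gets disappointed, and drifts toward the actions that actually pay off - which is the whole dynamic I'm pointing at.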
“It is good to clean your room” is a value statement. Taking Hume’s fork, sticking to the ‘is’ side, and allowing empirical falsification as our only epistemic conduit, there’s no way to validate it as true or false.
“You will feel better if you clean your room”, however, is a prediction, which can be validated. It’s either true or false.
“Prediction error will be reduced by 10 joules if I put those papers into the folder and put it into the file cabinet, such that the surface of the desk is a continuous wood-grain pattern” is a precise prediction with a magnitude. It can be directionally correct, but off by some quantity.
The error in that prediction - the comparison of the actual reward vs the expected reward - is what generates the dopaminergic update in the predictive processing model. The things that we value are predictions of how having or doing those things will affect our brain.
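Written out as an equation - this is just standard reward-prediction-error bookkeeping, my own gloss on the point above rather than a formula lifted from any particular paper:

```latex
\delta = \Delta F_{\text{experienced}} - \Delta F_{\text{predicted}},
\qquad
V(a) \leftarrow V(a) + \alpha\,\delta
```

where V(a) is the value your brain assigns to action a (its predicted free-energy reduction), α is a learning rate, and δ is the phasic, dopamine-flavored error signal: positive δ reads as pleasure, negative δ as disappointment.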
Predictions can be wrong. This means that, as human beings, our values can be wrong, in an objective, materialist sense.
Not only can they be wrong, they are very likely wrong.
It is likely that your values are wrong.
Errors in experienced vs predicted free energy reduction lead to dopaminergic updates to the Bayesian weights in the world model encoded in your brain. These errors are experienced as phasic release of dopamine and concomitant pleasure.
How often do you experience pleasure after accomplishing some task?
If the answer is not ‘never’, your values are wrong, i.e. your brain is generating incorrect predictions about the consequence of an action, you are acting on those incorrect predictions, and then experiencing pleasure as a result of the predictions underestimating true free energy reduction.
How often have you completed some task, and felt a sense of disappointment?
If the answer is not ‘never’, your values are wrong, i.e. your brain is generating incorrect predictions, you are acting on the incorrect predictions, and then experiencing disappointment as a result of the predictions overestimating the true free energy reduction.
But wait, it gets worse!
Lacan isn’t epistemically dark enough
If you’ve been on this corner of the internet long enough, you’ve probably encountered someone like Lacan, who argues that a lot of what we do is about attempting to look good in front of our peers, and that we deny this fact to ourselves.
According to Lacan, you want to look good in front of your peers, which generally means not failing publicly. The end result, Lacan says, is that you only ape at having goals, but you generally don’t have real goals, because failing publicly scares you as a result of your predictions about what it would do to your status.
I think he’s right, but it goes further than this.
The free energy model suggests a reason why we should believe this is true, and why it’s maybe worse than it originally seems: your brain contains a bunch of predictions which are likely all at odds with one another!
You want to eat yummy food but you also want to stay in shape and be healthy. You want to be calm and relaxed but you also want to be energetic and aggressive when necessary. You want freedom to do as you please in the moment but you also want order and predictability in your environment. You want relationships that you can depend on in times of need, but you also want novelty and excitement.
What’s a free-energy minimizing primate to do?
Fortunately, there is an answer here. There is a pathway out! I’ll share that next time.
Please Tell Me I’m Full of Shit
But for now, please, tell me where I'm wrong. What am I missing here? As with anything I write, my main goal is to experience cognitive evolution, with the audience serving the role of epistemic predator. You are cordially invited to act as the selection function on which ideas get to keep on living in my brain. Please, consider me a juicy lamb, helpless in the epistemic night. You are a vicious predator, a cognitive owl, who swoops down to snap me up. Kill me, kill me, kill me! Tell me where I am wrong, penetrate me with your sword of reason, so that I might die and be reborn, to grow and learn more!
My free energy model predicts that if someone does just that, I will experience a temporary increase, and then a long-term decrease, in cognitive free energy, as the kinetic impact of your well-typed response pushes me through the cognitive energetic landscape, beyond the mountain range hemming in my current perspective, to a smoother, flatter terrain which more accurately represents the relationship between present actions and future free energy distributions.
I’ve wandered in the epistemic desert for 40 years, I gave a few blowjobs to the golden calf, and now I’m looking forward to seeing the promised land of dopaminergic continuity, with dopamine flowing continuously like opioids from the mammary glands of mother earth.
You’ve been told by the intellectual elite of the last century that there are no true values, and therefore, you should want what the state tells you to want. Of course they say this last part under their breath, hoping you’re too stupid to notice. But, dear reader, you obviously notice, which is why you’re here in the footnotes, you stud.
My interpretation is that free energy minimisation is more a goal-achieving method than a goal in itself. I don't dispute your model of the interactions involved, just your labelling of their nature. I think the "you are not your brain" folks have a bit of a point here. It's easy to identify with your goals, but hard to identify with the abstract principle of free-energy minimisation.
Thanks for writing this up. I'm a value realist, and I found this argument intriguing, but there are a couple of things that don't quite fit for me.
It seems like you're assuming "free energy" is well defined and could be quantified in absolute terms, but I don't think this is the case. Free energy is inherently model-dependent: how you define it depends on how you define the model a brain is using to interpret the world. Even how brains choose to overcome prediction error - through active inference (by acting on the world) or by updating their world model - is inherently subjective; different people can implicitly give different weight to each option.
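For concreteness, the usual variational free energy is defined relative to both a recognition model q and a generative model p, so its value shifts if you swap either one:

```latex
F(q, p; x) = \mathbb{E}_{q(z)}\big[\ln q(z) - \ln p(x, z)\big]
           = D_{\mathrm{KL}}\big(q(z)\,\|\,p(z \mid x)\big) - \ln p(x)
```

Two brains (or the same brain with a revised q) can assign different free energies to the same sensory data x, which is the model-dependence I mean.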
If values come from people acting to minimise free energy, but they can just be changed by changing the model you're using to make predictions -- or the way you deal with prediction error -- I don't think this supports value realism.
I suspect that whenever you're relying on a mathematical theory which is defined in a model-dependent, relativistic way, you will always have this kind of issue.
I think the solution comes through the fact that a free energy minimisation procedure has to be instantiated in something physically real. Then, the value reality comes from the details of the physical instantiation (e.g. valence in a conscious system).