My answer is a rather standard compatibilist one, the algorithm in your brain produces the sensation of free will as an artifact of an optimization process.
There is nothing you can do about it (you are executing an algorithm, after all), but your subjective perception of free will may change as you interact with other algorithms, like me or Jessica or whoever. There aren't really any objective intentional "decisions", only our perception of them. The decision theories are therefore just byproducts of all these algorithms executing. It doesn't matter, though, because you have no choice but to feel that decision theories are important.
So, watch the world unfold before your eyes, and enjoy the illusion of making decisions.
I wrote about this over the last few years:
https://www.lesswrong.com/posts/NptifNqFw4wT4MuY8/agency-is-bugs-and-uncertainty
https://www.lesswrong.com/posts/436REfuffDacQRbzq/logical-counterfactuals-are-low-res
Thanks, I'll revisit these. They seem like they might be pointing towards a useful resolution I can use to better model values.
Additionally, I have a strong belief that the world is subjectively deterministic, i.e. that from my point of view the world couldn’t have turned out any other way than the way it did because I only ever experience myself to be in a single causal history.
That seems to be a non-sequitur. The fact that things did happen in one particular way does not imply that they could only have happened that way.
Just noticed that the same error is in Possibility and Couldness:
The coin itself is either heads or tails.
That doesn’t mean it must have been whatever it was,
That seems to be a non-sequitur. The fact that things did happen in one particular way does not imply that they could only have happened that way.
This would imply multiple causal histories for exactly the same world state. This can happen in sufficiently "small" universes, like Conway's Game of Life, but as far as I know it does not appear to happen in ours, or if it does, it happens over such large time scales that we can act as if it doesn't, since we'll never encounter it. (Although I guess we could always end up having been wrong.)
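The Game of Life point can be made concrete: its update rule is not injective, so distinct prior states can map to exactly the same successor state, and a world state alone does not pin down its causal history. Here is a minimal sketch (the `life_step` function is my own toy implementation, not from any post above):

```python
from collections import Counter

def life_step(live):
    """Advance one generation; `live` is a set of (x, y) live cells."""
    # Count live neighbors for every cell adjacent to a live cell.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next step if it has 3 live neighbors,
    # or 2 live neighbors and was already alive.
    return {
        cell
        for cell, n in counts.items()
        if n == 3 or (n == 2 and cell in live)
    }

# Two different histories...
lone_cell = {(0, 0)}   # a single live cell (dies of underpopulation)
empty = set()          # nothing alive at all

# ...arrive at exactly the same world state.
assert life_step(lone_cell) == life_step(empty) == set()
```

So in a Life-like universe, "the world as it is now" genuinely underdetermines how it got here; the question is whether our physics shares that property.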
To me it seems that the world couldn't have turned out any other way, but it's useful to think about it as if it could in the moment. The decision you ultimately make after careful consideration is the one you'd make no matter how many times you rewound time. The ability to make a choice doesn't conflict with determinism and its ramifications: with the right input you'll produce the right output.
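That "rewind time, get the same output" intuition can be sketched directly: a deterministic deliberation process, rerun on identical inputs, lands on the same choice every time, yet it is still genuinely a process of weighing options. (The `decide` function, option names, and utilities below are purely illustrative.)

```python
def decide(options, utility):
    """Pick the option with the highest utility; ties break by name."""
    return max(sorted(options), key=utility)

options = ["stay home", "go out"]
utility = {"stay home": 3, "go out": 5}.get

# However many times we "rewind", the deliberation lands the same way.
choices = {decide(options, utility) for _ in range(1000)}
assert choices == {"go out"}
```

The deliberation is real computation; determinism just means rerunning the same computation on the same inputs cannot surprise you.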
Jessica recently wrote about difficulties with physicalist accounts of the world and alternatives to logical counterfactuals. On my recent post about the deconfusing human values research agenda, Charlie left a comment highlighting that my current model depends on a notion of "could have done something else" to talk about decisions.
Additionally, I have a strong belief that the world is subjectively deterministic, i.e. that from my point of view the world couldn't have turned out any other way than the way it did because I only ever experience myself to be in a single causal history. Yet I also suspect this is not the whole story because it appears the world I find myself in is one of many possible causal histories, possibly realized in many causally isolated worlds after the point where they diverge (i.e. a non-collapse interpretation of quantum physics).
So this leaves me in a weird place. When thinking about values, it often makes sense to think about the downstream effects of values on decisions and actions, and in fact many people try to infer upstream values from observations of downstream behaviors. Yet the notion of "deciding" implies there was some choice to make, which I think maybe there wasn't. Thus I have theories that conflict with each other yet seek to explain the same phenomena, so I'm confused.
Hoping to see through this confusion: what are some ways of reconciling the experience of determinism with the experience of freedom of choice or free will?
Since this has impacts on how to think about decision theory, my hope is that people might be able to share how they've thought about this question and tried to resolve it.