Vladimir_Nesov comments on That Magical Click - Less Wrong
I suspect that status effects might be important here. When we play a video game, we do so voluntarily, so the developers are providing us a service. But if the universe is controlled by an AI, and we have no choice but to play the games it provides, then it would feel more like being a pet.
The AI could also try to take that into account, I suppose, but I'm not sure what it could do to alleviate the problem without lying to us.
If you think of FAI as Physical Laws 2.0, this particular worry goes away (for me, at least). Everything you do is real within FAI, and free will works the same way it does in any other deterministic physics: only you determine your decisions, within the system.
It's not quite the same, because when the FAI decided what Physical Laws 2.0 ought to be, it must have made a prediction of what my decisions would be under the laws it considered. So when I make my decisions, I'm really making decisions for two agents: the real me, and the one in the FAI's prediction process. For example, if Physical Laws 2.0 appears to allow me to murder someone, it must be that the FAI predicted that I wouldn't murder anyone. And if I did decide to murder someone, the likely logical consequence of that decision is that the FAI would have picked a different set of Physical Laws 2.0.
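The two-agents structure described above can be sketched as a toy model (my own illustration, not from the thread; all names are hypothetical): the FAI chooses which laws to enact based on a simulation of the agent's policy, so the agent's policy fixes both the prediction and the outcome.

```python
# Toy model of the Newcomb-like structure: the FAI simulates the
# agent's policy before deciding which "Physical Laws 2.0" to enact.
# Everything here is a hypothetical sketch for illustration only.

def cautious_agent(laws):
    """An agent whose policy is to refrain from murder under any laws."""
    return "refrain"

def murderous_agent(laws):
    """An agent whose policy is to murder whenever the laws allow it."""
    return "murder" if laws == "murder_allowed" else "refrain"

def fai_choose_laws(policy):
    """The FAI enacts permissive laws only if its prediction step says
    the agent would refrain even when murder is permitted."""
    predicted_action = policy("murder_allowed")  # the "second agent"
    if predicted_action == "refrain":
        return "murder_allowed"
    return "murder_forbidden"

# The laws the agent actually lives under depend on its own policy:
print(fai_choose_laws(cautious_agent))    # murder_allowed
print(fai_choose_laws(murderous_agent))   # murder_forbidden
```

The point of the sketch is that deciding to murder doesn't happen "inside" permissive laws: an agent with that policy would never have been given permissive laws in the first place, which is why it resembles an unending Newcomb's Problem.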
It seems to me that free will works rather differently... sort of like you're in a Newcomb's Problem that never ends.
It just means that you were mistaken and PL2.0 doesn't actually allow you to murder. It's physically (or rather magically, since the laws are no longer simple) impossible. This event has been prohibited.