FeepingCreature comments on Will AGI surprise the world? - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Sorry, I should have been more precise - I've read about Boltzmann brains, I just didn't realise the connection to UDT.
This is the bit I don't understand - if these agents are identical to me, then it follows that I'm probably a Boltzmann brain too, since if I had some knowledge that I am not a Boltzmann brain, that would be a point of difference. In which case, surely I should optimise for the very near future even under old-fashioned causal decision theory. Like you, I wouldn't bite this bullet.
I didn't know that - I've studied formal logic, but not to that depth unfortunately.
I meant it in the sense of measure theory. I've seen people discussing maximising the measure of a utility function over all future Everett branches, although from my limited understanding of quantum mechanics I'm unsure whether this makes sense.
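One hedged way to write down what "maximising the measure" might mean here (this is my formalisation, not necessarily what those people intended): weight each branch's utility by its Born measure, which reduces to an expected-utility calculation,

$$U(\pi) \;=\; \sum_{b} \mu(b)\, u\bigl(\mathrm{outcome}(\pi, b)\bigr), \qquad \sum_{b} \mu(b) = 1,$$

where $\pi$ is the policy, $b$ ranges over future Everett branches, and $\mu(b)$ is the branch's squared-amplitude measure. On this reading it does make sense, since it is just expected utility with the Born rule supplying the probabilities; the harder question is whether $\mu$ deserves that role.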
Yeah, I doubt this would be a good approach either, in that if it does turn out to be impossible to achieve unboundedly large utility, I would still want to make the best of a bad situation and maximise the utility achievable with the finite amount of negentropy available. I imagine a better approach would be to add the satisficing function to the time-discounting function, scaled in some suitable manner. This doesn't intuitively strike me as a real utility function, as it's adding apples and oranges so to speak, but perhaps it's useful as a tool?
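To make the "apples and oranges" worry concrete, here is a toy sketch of the combination being described: a standard time-discounted sum plus a bounded (satisficing) term, scaled by a weight. All names and parameter values are illustrative assumptions, not anything from the thread.

```python
import math

def hybrid_utility(rewards, discount=0.95, satisfice_cap=10.0, weight=0.5):
    """Toy sketch: discounted utility plus a scaled, bounded term.

    `rewards` is a list of per-timestep rewards. The bounded term uses
    tanh so it saturates at `satisfice_cap` no matter how large the raw
    total grows - this is the satisficing component that cannot blow up
    even if unbounded utility were somehow available.
    """
    # Ordinary exponential time-discounting over the reward stream.
    discounted = sum(r * discount**t for t, r in enumerate(rewards))
    # Bounded component: approaches satisfice_cap as total -> infinity.
    total = sum(rewards)
    bounded = satisfice_cap * math.tanh(total / satisfice_cap)
    # The two terms are in different "units" - hence apples and oranges.
    return discounted + weight * bounded
```

The sketch makes the conceptual problem visible: the discounted sum and the satisficing term measure different things, and the choice of `weight` is doing unprincipled work, which is why it feels more like a tool than a real utility function.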
Well, I'm approaching the limit of my understanding of physics here, but actually I was talking about alpha-point computation which I think may involve the creation of daughter universes inside black holes.
It does seem incompatible with e.g. the Planck time; I just don't know enough to dismiss it with a very high level of confidence, although I'm updating wrt your reply.
Your reply has been very interesting, but I must admit I'm starting to get seriously out of my depth here, in both physics and formal logic.
I think Boltzmann brains in the classical formulation of random manifestation in vacuum are a non-issue, as neither can they benefit from our reasoning (being random, while reason assumes a predictable universe) nor from our utility maximization efforts (since maximizing our short-term utility will make it no more or less likely that a Boltzmann brain with the increased utility manifests).