FeepingCreature comments on Will AGI surprise the world? - Less Wrong

12 Post author: lukeprog 21 June 2014 10:27PM




Comment author: skeptical_lurker 27 June 2014 06:58:48PM 0 points

Boltzmann brains were discussed in many places, not sure what the best link would be.

Sorry, I should have been more precise - I've read about Boltzmann brains, I just didn't realise the connection to UDT.

In the current context this calls for time discount because we don't want the utility function to be dominated by the well being of those guys.

This is the bit I don't understand - if these agents are identical to me, then it follows that I'm probably a Boltzmann brain too: if I had some knowledge that I am not a Boltzmann brain, that would be a point of difference between us. In which case, surely I should optimise for the very near future even under old-fashioned causal decision theory. Like you, I wouldn't bite this bullet.

If your utility function involves an infinite time span it would typically be impossible to prove arbitrarily tight bounds on it, since logical sentences that contain unbounded quantifiers can be undecidable.

I didn't know that - I've studied formal logic, but not to that depth unfortunately.

I don't understand what you mean by maximizing measure.

I meant it in the sense of measure theory. I've seen people discuss maximising the measure of a utility function over all future Everett branches, although from my limited understanding of quantum mechanics I'm unsure whether this makes sense.

I don't think it's a promising approach, but if you want to pursue it, you can recast it in terms of finite utility (by assigning new utility "1" when old utility is "infinity" and new utility "0" in other cases).

Yeah, I doubt this would be a good approach either: if it does turn out to be impossible to achieve unboundedly large utility, I would still want to make the best of a bad situation and maximise the utility achievable with the finite amount of negentropy available. I imagine a better approach would be to add the satisficing function to the time-discounting function, scaled in some suitable manner. This doesn't intuitively strike me as a real utility function, as it's adding apples and oranges so to speak, but perhaps it's useful as a tool?
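To make the two ideas above concrete, here is a minimal sketch (my own illustration, not anything from the thread; the function names, the discount factor, and the bonus scale are all hypothetical choices): recasting an unbounded old utility as a 0/1 indicator, and adding that indicator, suitably scaled, to an ordinary time-discounted utility.

```python
def indicator_utility(old_utility):
    """Recast: new utility is 1 when old utility is 'infinity', 0 otherwise."""
    return 1.0 if old_utility == float("inf") else 0.0

def combined_utility(rewards, discount=0.99, bonus_scale=0.5, unbounded=False):
    """Time-discounted sum of per-step rewards, plus a scaled satisficing
    bonus that fires only if unboundedly large utility is achievable.

    `discount` and `bonus_scale` are arbitrary illustrative parameters.
    """
    discounted = sum(r * discount**t for t, r in enumerate(rewards))
    old = float("inf") if unbounded else discounted
    return discounted + bonus_scale * indicator_utility(old)
```

As the comment notes, the sum is "apples and oranges": the discounted term and the indicator term are measured in incomparable units, so the result is a tool for ranking policies rather than a principled utility function.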

If I understand you correctly it's the same as destabilizing the vacuum which I mentioned earlier.

Well, I'm approaching the limit of my understanding of physics here, but actually I was talking about alpha-point computation, which I think may involve the creation of daughter universes inside black holes.

This is a nice fantasy but unfortunately strongly incompatible with what we know about physics. By "strongly" I mean that it would take a very radical update to make it work.

It does seem incompatible with e.g. the Planck time; I just don't know enough to dismiss it with a very high level of confidence, although I'm updating wrt your reply.

Your reply has been very interesting, but I must admit I'm starting to get seriously out of my depth here, in both physics and formal logic.

Comment author: FeepingCreature 30 June 2014 12:45:55AM 1 point

In the current context this calls for time discount because we don't want the utility function to be dominated by the well being of those guys.

This is the bit I don't understand - if these agents are identical to me, then it follows that I'm probably a Boltzmann brain too: if I had some knowledge that I am not a Boltzmann brain, that would be a point of difference between us. In which case, surely I should optimise for the very near future even under old-fashioned causal decision theory. Like you, I wouldn't bite this bullet.

I think Boltzmann brains, in the classical formulation of random manifestation in vacuum, are a non-issue: they can benefit neither from our reasoning (being random, while reason assumes a predictable universe) nor from our utility-maximisation efforts (since maximising our short-term utility makes it no more or less likely that a Boltzmann brain with the increased utility manifests).