Vladimir_Nesov comments on A Problem About Bargaining and Logical Uncertainty - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (45)
You will have enough computing power later.
I mean, suppose Omega gives you the option (now, when you don't have enough computing power to compute the millionth digit of pi) of replacing yourself with another AI that has a different decision theory, one that would later give control of the universe to the staples maximizer. Should you take this option? If not, what decision theory would refuse it? (Again, from your current perspective, taking the option gives you a 1/2 "logical" probability of 10^20 paperclips instead of a 1/2 "logical" probability of 10^10 paperclips. How do you justify refusing that?)
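For concreteness, here is a minimal sketch (in Python) of the naive expected-value comparison the parenthetical appeals to. The payoffs and the 1/2 "logical" probability come from the comment; the variable names are my own illustration, not anything from the original problem statement:

```python
# Naive expected-value comparison under logical uncertainty about the
# parity of the millionth digit of pi (assumed uncomputable for now).
P_LOGICAL = 0.5  # "logical" probability assigned to each parity

# Expected paperclips if you refuse Omega's option and keep your current AI:
keep_current_ai = P_LOGICAL * 10**10

# Expected paperclips if you accept and swap in the other decision theory:
swap_in_other_ai = P_LOGICAL * 10**20

print(swap_in_other_ai > keep_current_ai)  # True: naive EV says take the option
```

On this naive accounting the swap dominates by ten orders of magnitude, which is exactly why a decision theory that refuses it owes an explanation.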
Good, I'm in a similar state. :)
Yes, I noticed the similarity as well, except that in the ASP (Agent Simulates Predictor) case it seems clearer what the right thing to do is.
(The grandparent was my comment; I deleted it while trying to come up with a clearer statement of my confusion, before I saw the reply. The new version is here.)