
earthwormchuck163 comments on A Series of Increasingly Perverse and Destructive Games - Less Wrong Discussion

Post author: nigerweiss, 14 February 2013 09:22AM




Comment author: nigerweiss 14 February 2013 05:28:36PM, 1 point

So I guess I should have specified which model of hypercomputation Omega is using. Omega's computer can resolve ANY infinite trawl in constant time (assume time travel and an enormous bucket of phlebotinum are involved), including programs which generate programs. So the players also have the power to resolve any infinite computation in constant time. Were they feeling charitable, in an average-utilitarian sense, they could add a parasitic clause to their program that simply creates a few million copies of themselves, who would work together to implement FAI, allow the FAI to reverse-engineer humanity by talking to all three of the contestants, and then create arbitrarily large numbers of value-fulfilled people and simulate them forever. But I digress.

In short, take it as a given that anyone, on any level, has a halting oracle for arbitrary programs, subprograms, and metaprograms, and that non-returning programs are treated as producing no output.
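To make the convention concrete: a real halting oracle cannot be implemented, but a toy sketch can model the rule "non-returning programs are treated as producing no output" (with no output read as 0, as in the GAME3 comment below). Here the hypothetical `KNOWN_TO_HALT` table stands in for Omega's hypercomputer; everything in this sketch is illustrative, not part of the original game specification.

```python
def loop_forever():
    # A program that never returns.
    while True:
        pass

def return_42():
    # A program that halts with output 42.
    return 42

# Stand-in for the halting oracle: a hand-labelled table of which
# "programs" halt. This is the assumption that replaces Omega's
# hypercomputation, since no general oracle can actually be written.
KNOWN_TO_HALT = {return_42: True, loop_forever: False}

def run_with_oracle(program):
    """Run a program under the game's convention: if the oracle says
    it halts, return its output; otherwise treat it as producing 0."""
    if KNOWN_TO_HALT[program]:
        return program()
    return 0
```

Under this convention, `run_with_oracle(loop_forever)` yields 0 without ever actually executing the infinite loop, which is exactly how the never-halting paradise program in GAME3 can still count as a (zero-valued) entry.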

Comment author: earthwormchuck163 14 February 2013 06:43:41PM, 2 points

> In short, take it as a given that anyone, on any level, has a halting oracle for arbitrary programs, subprograms, and metaprograms, and that non-returning programs are treated as producing no output.

In this case, I have no desire to escape from the room.

Comment author: nigerweiss 15 February 2013 03:17:17AM, 1 point

That's fair.

Actually, my secret preferred solution to GAME3 is to immediately give up and write a program that runs all of us, working together for arbitrary amounts of time (possibly with periodic archival and resets to avoid senescence and insanity), to create an FAI, then plugs our minds into an infinitely looping function in which the FAI makes a universe for us, populates it with agreeable people, and fulfills all of our values forever. The program never halts, its return value is taken to be 0, Niger0 is instantly and painlessly killed, and Niger1 (the simulation) eventually gets to go live in paradise for eternity.