ESRogs comments on Simulating Problems - Less Wrong Discussion

Post author: Andreas_Giger 30 January 2013 01:14PM

Comment author: ESRogs 31 January 2013 06:08:49AM 0 points

Let's say we simulate Omega's prediction by a coin toss and repeat the simulation (without payoffs) until the coin toss matches the agent's decision.

It's not quite clear to me what you have in mind here. Are you envisioning this with human agents or with programs? If with humans, how will they not remember that Omega got it wrong on the past run? If with programs, what's the purpose of the coin?

Comment author: Andreas_Giger 31 January 2013 06:56:39AM 1 point

If you substitute a repeated coin toss for Omega, there is no Omega, and no concept of Omega being always right. Instead of repeating the problem, you can also run several instances of the simulation with several agents simultaneously, counting only those instances in which the prediction matches the decision.

For this simulation, it is completely irrelevant whether the multiple agents are actually identical human beings, as long as their decision-making process is identical (and deterministic).
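The multi-instance version described above amounts to a rejection simulation: toss a coin as the "prediction" in each instance, and discard every instance where the toss fails to match the agent's decision. A minimal sketch, assuming deterministic agents (the function name `simulate` and the one-box/two-box labels are illustrative, not from the original discussion):

```python
import random

def simulate(decision: str, n_runs: int = 10_000, seed: int = 0) -> list:
    """Run many instances with a coin-toss 'Omega'; keep only those
    instances where the toss happens to match the agent's decision."""
    rng = random.Random(seed)
    kept = []
    for _ in range(n_runs):
        prediction = rng.choice(["one-box", "two-box"])  # fair coin, no foresight
        if prediction == decision:                        # rejection step
            kept.append(prediction)
    return kept

runs = simulate("one-box")
# In every surviving instance the prediction matches the decision,
# so within the kept sample the "predictor" is right by construction,
# even though the coin itself predicts nothing.
assert all(p == "one-box" for p in runs)
```

Roughly half the instances survive the rejection step, which is the sense in which an always-correct Omega can be emulated without any actual prediction.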

Comment author: ESRogs 01 February 2013 07:57:33AM 0 points

Ah, that makes sense.