Viliam_Bur comments on Chocolate Ice Cream After All? - Less Wrong

Post author: pallas 09 December 2013 09:09PM




Comment author: Viliam_Bur 10 December 2013 01:00:15PM 2 points

You can also imagine a game where Omega predicts that those who pick a carrot out of a basket of vegetables are the ones that will die shortly of a heart attack.

Those who pick a carrot after hearing Omega's prediction, or without hearing the prediction? Those are two very different situations, and I am not sure which one you meant.

If some people pick the carrot even after hearing Omega's prediction, and then die of a heart attack, there must be something very special about them. They are suicidal, or they strongly believe that Omega is wrong and want to prove it, or there is some other confusion.

If people who pick the carrot without hearing Omega's prediction die, that does not mean they would also have picked the carrot had they been warned in advance. So saying "we should also press A here" provides no actionable advice about how people should behave, because it only works for people who don't know it.

Comment author: pallas 10 December 2013 01:54:00PM 4 points

Those who pick a carrot after hearing Omega's prediction, or without hearing the prediction? Those are two very different situations, and I am not sure which one you meant.

That's a good point. I agree with you that it is crucial to keep those two situations apart. This is exactly what I was trying to address with regard to Newcomb's Problem and Newcomb's Soda: what do the agents (the previous study subjects) know? It seems to me that the games aren't defined precisely enough.
Once we specify a game such that all the agents hear Omega's prediction (as in Newcomb's Problem), the prediction provides actionable advice, since all the agents belong to the same reference class. If we, and we alone, know about the prediction, the situation is different, and the actionable advice is no longer provided, at least not to the same extent.
When I propose a game in which Omega predicts whether people pick carrots, and I don't specify that the prediction applies only to those who don't know about it, then I would not assume prima facie that it applies only to those who don't know about it. Without further specification, I would assume that it applies to "people", which is a superset of "people who know of the prediction".