
Manfred comments on Future Filters [draft] - Less Wrong Discussion

0 Post author: snarles 16 May 2011 12:42PM


Comment author: Manfred 16 May 2011 08:12:33PM 0 points

Considering it as a decision problem is a particular side in the definition/axiom dispute - a side that also corresponds to requiring that the probabilities be the frequencies; i.e., if you use the other definitions, the probabilities will not be the frequencies. So I think the resolution to Sleeping Beauty is even stronger: there is a right side, and a right way to go about the problem.

Comment author: Vladimir_Nesov 16 May 2011 08:51:40PM -1 points

Considering it as a decision problem is a particular side in the definition/axiom dispute

Considering what as a decision problem? As formulated, we are not given one.

Comment author: Manfred 16 May 2011 10:50:18PM 0 points

Exactly! :P

Assigning a constant reward for each correct answer can be compared with assigning a constant reward to each person at the end of the experiment, and these two options are (I think) isomorphic to the two ways of looking at the problem through probability. The fact that the choice seems more intuitive through the lens of decision theory is a fact about our brains, not about the problem.
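[The two reward schemes can be made concrete with a quick simulation - a sketch not from the original thread, with hypothetical names. On heads Beauty is woken once, on tails twice; counting correct "tails" guesses per awakening recovers the "thirder" frequency of 2/3, while counting per experiment recovers the "halfer" frequency of 1/2.]

```python
import random

def simulate(trials=100_000, seed=0):
    """Simulate Sleeping Beauty: heads -> 1 awakening, tails -> 2 awakenings.

    Returns the frequency of tails counted per awakening (the 'thirder'
    answer) and per experiment (the 'halfer' answer).
    """
    rng = random.Random(seed)
    awakenings_total = 0
    awakenings_tails = 0   # reward per correct answer at each awakening
    experiments_tails = 0  # reward per person at the end of the experiment
    for _ in range(trials):
        tails = rng.random() < 0.5
        wakes = 2 if tails else 1
        awakenings_total += wakes
        if tails:
            awakenings_tails += wakes
            experiments_tails += 1
    return (awakenings_tails / awakenings_total,  # ≈ 2/3
            experiments_tails / trials)           # ≈ 1/2

per_awakening, per_experiment = simulate()
print(f"tails frequency per awakening:  {per_awakening:.3f}")   # ≈ 0.667
print(f"tails frequency per experiment: {per_experiment:.3f}")  # ≈ 0.500
```

[Same coin, same math - only the bookkeeping differs, which is the sense in which the two probability answers correspond to two decision problems.]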

Comment author: Vladimir_Nesov 16 May 2011 10:58:23PM -1 points

You've just shifted the definitional debate to deciding which decision problem to use, which was not my suggestion.

Comment author: Manfred 16 May 2011 11:26:58PM 1 point

But I claim it is an inevitable consequence of your suggestion, since if you're doing the same math, the same sorts of arguments that can be made about which way to calculate the probability can be made about which utility problem to solve. Put another way, you can take the decision-theory result and use it to calculate the rational probabilities, so any stance on using decision theory is a stance on probabilities (if the rewards are fixed).

I think the problem just looks so obvious to us when we use decision theory that we don't connect it to the non-obvious-seeming dispute over probabilities.

Comment author: Vladimir_Nesov 16 May 2011 11:37:41PM 0 points

Again, I didn't suggest trying to reformulate a problem as a decision problem as a way of figuring out which probability to assign. Probability-assignment is not an interesting game. My point was that if you want to understand a problem, understand what's going on in a given situation, consider some decision problems and try to solve them, instead of pointlessly debating which probabilities to assign (or which decision problems to solve).

Comment author: Manfred 17 May 2011 12:00:29AM -1 points

Oh, so you don't think that viewing it as a decision problem clarifies it? Then choosing a decision problem to help answer the question doesn't seem any more helpful than "make your own decision on the probability problem," since they're the same math. This then veers toward the even less helpful "don't ask the question."

Comment author: Vladimir_Nesov 17 May 2011 12:16:41AM 0 points

Then choosing a decision problem to help answer the question doesn't seem any more helpful than "make your own decision on the probability problem," since they're the same math.

It's not intended to help with answering the question, any more than dissolving any other definitional debate helps with determining which definition is better. It's intended to help with understanding the thought experiment instead.

Comment author: Manfred 17 May 2011 02:24:54AM -2 points

Changing the labels on the same math isn't "dissolving" anything, as it would be if probabilities were like the word "sound." "Sound" goes away when dissolved because it's subjective, and dissolving switches to objective language; probabilities, by contrast, are uniquely derivable from objective language. Additionally, there is no "unaskable question," at least in standard probability theory - you'd have to propose a fairly extreme revision for a relevant decision-theory answer not to bear on the question of probabilities.