Perplexed comments on Desirable Dispositions and Rational Actions - Less Wrong

13 Post author: RichardChappell 17 August 2010 03:20AM


Comment author: wedrifid 17 August 2010 05:42:32AM *  1 point [-]

For example, in a Newcomb-type problem, suppose I decide to resolve the question of one box or two by flipping a coin? Unless I am supposed to believe that Omega can foretell the results of future coin flips, I think the scenario collapses. Has anyone written anything on LW about responding to Omega by randomizing?

Yes, back when we discussed Newcomblike problems frequently I more or less used a form letter to reply to that objection. Any useful treatment of Newcomblike problems will specify, explicitly or implicitly, how Omega will handle (quantum) randomness if it is allowed. The obvious response for Omega is either to give you nothing (or maybe a grenade!) for being a smart ass or, more elegantly, to handle the reward in a manner commensurate with the probabilities. If probabilistic decisions are to be allowed, then an Omega that can handle probabilistic decisions quite clearly needs to be supplied.
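To make "commensurate" concrete, here is one toy construction (purely illustrative, not from any canonical treatment): suppose Omega fills box B with $1M with probability equal to its prediction of your one-boxing probability p, so the box's expected contents are p × $1M. The expected payoff then rises monotonically in p, and pure one-boxing still wins:

```python
# Toy model: Omega rewards a mixed strategy "commensurately".
# Assumption (illustrative, not canonical): box B contains $1M with
# probability equal to your predicted one-boxing probability p, so its
# expected contents are p * $1M.

def expected_payoff(p):
    """Expected dollars for an agent who one-boxes with probability p."""
    box_b = p * 1_000_000        # expected contents of the opaque box
    one_box = box_b              # take only box B
    two_box = box_b + 1_000      # take both boxes
    return p * one_box + (1 - p) * two_box

# Search a grid of mixed strategies; pure one-boxing comes out on top.
best_p = max((i / 100 for i in range(101)), key=expected_payoff)
print(best_p, expected_payoff(best_p))  # 1.0 1000000.0
```

Coin-flipping (p = 0.5) earns roughly $500,500 in expectation under this rule, so randomizing is strictly worse than committing to one-box.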

Thanks for posting. Your analysis is an improvement over the LW conventional wisdom, but you still don't get it right, where "right", to me, means the way it is analyzed by the guys who won all those Nobel prizes in economics.

I downvoted the parent. How on earth is Perplexed comparing LW conventional wisdom to that of Nobel prize winning economists when he thinks coin tossing is a big deal?

Comment author: Perplexed 17 August 2010 06:42:22AM 2 points [-]

Any useful treatment of Newcomblike problems will specify explicitly or implicitly how Omega will handle (quantum) randomness.

At the risk of appearing stupid, I have to ask: exactly what is a "useful treatment of Newcomb-like problems" used for?

So far, the only effect that all the Omega-talk has had on me is to make me honestly suspect that you guys must be into some kind of mind-over-matter quantum woo.

Seriously, Omega is not just counterfactual, he is impossible. Why do you guys keep asking us to believe so many impossible things before breakfast? Jaynes says not to include impossible propositions among the conditions in a conditional probability. Bad things happen if you do. Impossible things need to have zero-probability priors. Omega just has no business hanging around with honest Bayesians.

When I read that you all are searching for improved decision theories that "solve" the one-shot prisoner's dilemma and the one-shot Parfit hitchhiker, I just cringe. Surely you shouldn't change the standard, well-established, and correct decision theories. If you don't like the standard solutions, you should instead revise the problems from unrealistic one-shots to more realistic repeated games or perhaps even more realistic games with observers - observers who may play games with you in the future.

In every case I have seen so far where Eliezer has denigrated the standard game solution because it fails to win, he has been analyzing a game involving a physically and philosophically impossible fictional situation.

Let me ask the question this way: What evidence do you have that the standard solution to the one-shot PD can be improved upon without creating losses elsewhere? My impression is that you are being driven by wishful thinking and misguided intuition.

Comment author: cousin_it 17 August 2010 10:11:11AM *  7 points [-]

Here's another way of looking at the situation that may or may not be helpful. Suppose I ask you, right here and now, what you'd do in the hypothetical future Parfit's Hitchhiker scenario if your opponent was a regular human with Internet access. You have several options:

  1. Answer truthfully that you'd pay $100, thus proving that you don't subscribe to CDT or EDT. (This is the alternative I would choose.)

  2. Answer that you'd refuse to pay. Now you've created evidence on the Internet, and if/when you face the scenario in real life, the driver will Google your name, check the comments on LW and leave you in the desert to die. (Assume the least convenient possible world where you can't change or delete your answer once it's posted.)

  3. Answer that you'd pay up, but secretly plan to refuse. This means you'd be lying to us here in the comments - surely not a very nice thing to do. But if you subscribe to CDT with respect to utterances as well as actions, this is the alternative you're forced to choose. (Which may or may not make you uneasy about CDT.)

Comment author: TobyBartels 18 August 2010 05:34:47AM *  1 point [-]

Answer that you'd pay up, but secretly plan to refuse. This means you'd be lying to us here in the comments - surely not a very nice thing to do. But if you subscribe to CDT with respect to utterances as well as actions, this is the alternative you're forced to choose. (Which may or may not make you uneasy about CDT.)

What makes me uneasy is the assumption I wouldn't want to pay $100 to somebody who rescued me from the desert. Given that, lying to people whom I don't really know should be a piece of cake!

Comment author: Perplexed 17 August 2010 06:20:41PM -1 points [-]

I would of course choose option #1, adding that, due to an affliction giving me a trembling hand, I tend to get stranded in the desert and the like a lot and hence that I would appreciate it if he would spread the story of my honesty among other drivers. I might also promise to keep secret the fact of his own credulity in this case, should he ask me to. :)

I understand quite well that the best and simplest way to appear honest is to actually be honest. And also that, as a practical matter, you never really know who might observe your selfish actions and how that might hurt you in the future. But these prudential considerations can already be incorporated into received decision theory (which, incidentally, I don't think matches up with either CDT or EDT - at least as those acronyms seem to be understood here.) We don't seem to need TDT and UDT to somehow glue them in to the foundations.

Hmmm. Is EY perhaps worried that an AI might need even stronger inducements toward honesty? Maybe it would, but I don't see how you solve the problem by endowing the AI with a flawed decision theory.

Comment author: JamesAndrix 17 August 2010 07:15:18AM *  5 points [-]

So far, the only effect that all the Omega-talk has had on me is to make me honestly suspect that you guys must be into some kind of mind-over-matter quantum woo.

...What?

Also, it doesn't matter if he's impossible. He's an easy way to tack on arbitrary rules to hypotheticals without overly tortured explanations, because people are used to getting arbitrary rules from powerful agents.

It's also impossible for a perfectly Absent Minded Driver to come to one of only two possible intersections with 3 destinations with known payoffs and no other choices. To say nothing of the impossibly horrible safety practices of our nation's hypothetical train system.

Comment author: Perplexed 17 August 2010 07:47:51AM 0 points [-]

it doesn't matter if he's impossible

Are you sure? I'm not objecting to the arbitrary payoffs or complaining because he doesn't seem to be maximizing his own utility. I'm objecting to his ability to predict my actions. Give me a scenario which doesn't require me to assign a non-zero prior to woo and in which a revisionist decision theory wins. If you can't, then your "improved" decision theory is no better than woo itself.

Regarding the Absent Minded Driver, I didn't recognize the reference. Googling, I find a .pdf by one of my guys (Nobelist Robert Aumann) and an LW article by Wei-Dai. Cool, but since it is already way past my bedtime, I will have to read them in the morning and get back to you.

Comment author: thomblake 17 August 2010 05:55:23PM 6 points [-]

I'm objecting to his ability to predict my actions. Give me a scenario which doesn't require me to assign a non-zero prior to woo

The only 'woo' here seems to be your belief that your actions are not predictable (even in principle!). Even I can predict your actions within some tolerances, and we do not need to posit that I am a superintelligence! Examples: you will not hang yourself to death within the next five minutes, and you will ever make another comment on Less Wrong.

Comment author: Perplexed 17 August 2010 08:49:42PM -1 points [-]

...you will ever make another comment on Less Wrong.

"ever"? No, "never".

Comment author: thomblake 18 August 2010 12:43:44AM 2 points [-]

Wha?

In case it wasn't clear, it was a one-off prediction and I was already correct.

Comment author: Perplexed 19 August 2010 02:51:18AM 2 points [-]

In case mine wasn't clear, it was a bad Gilbert & Sullivan joke. Deservedly downvoted. Apparently.

Comment author: Alicorn 19 August 2010 02:55:51AM 4 points [-]

You need a little more context/priming or to make the joke longer for anyone to catch this. Or you need to embed it in a more substantive and sensible reply. Otherwise it will hardly ever work.

Comment author: Perplexed 19 August 2010 04:56:38AM 1 point [-]
Comment author: Cyan 19 August 2010 04:51:22AM 0 points [-]

I wasn't sure, so I held off posting my reply (a decision I now regret). It would have been, "Well, hardly ever."

Comment author: Kingreaper 18 August 2010 12:53:56AM *  2 points [-]

I'm objecting to his ability to predict my actions.

Why? What about you is fundamentally logically impossible to predict?

Do you not find that you often predict the actions of others? (e.g. giving them gifts that you know they'll like) And that others predict your reactions? (e.g. choosing not to give you spider-themed horror movies if you're arachnophobic)

Comment author: Perplexed 17 August 2010 06:56:35PM 0 points [-]

Ok, I've read the paper (most of it) and Wei-Dai's article now. Two points.

  1. In a sense, I understand how you might think that the Absent Minded Driver is no less contrived and unrealistic than Newcomb's Paradox. Maybe different people have different intuitions as to which toy examples are informative and which are misleading. Someone else (on this thread?) responded to me recently with the example of frictionless pulleys and the like from physics. All I can tell you is that my intuition tells me that the AMD, the PD, frictionless pulleys, and even Parfit's Hitchhiker all strike me as admirable teaching tools, whereas Newcomb problems and the old question of irresistible force vs. immovable object in physics are simply wrong problems which can only create confusion.

  2. Reading Wei-Dai's snarking about how the LW approach to decision theory (with zero published papers to date) is so superior to the confusion in which mere misguided Nobel laureates struggle - well, I almost threw up. It is extremely doubtful that I will continue posting here for long.

Comment author: Wei_Dai 18 August 2010 12:00:11AM 5 points [-]

It wasn't meant to be a snark. I was genuinely trying to figure out how the "LW approach" might be superior, because otherwise the most likely explanation is that we're all deluded in thinking that we're making progress. I'd be happy to take any suggestions on how I could have reworded my post so that it sounded less like a snark.

Comment author: Perplexed 20 August 2010 11:39:52PM *  6 points [-]

Wei-Dai wrote a post entitled The Absent-Minded Driver which I labeled "snarky". Moreover, I suggested that the snarkiness was so bad as to be nauseating, so as to drive reasonable people to flee in horror from LW and SIAI. I here attempt to defend these rather startling opinions. Here is what Wei-Dai wrote that offended me:

This post examines an attempt by professional decision theorists to treat an example of time inconsistency, and asks why they failed to reach the solution (i.e., TDT/UDT) that this community has more or less converged upon. (Another aim is to introduce this example, which some of us may not be familiar with.) Before I begin, I should note that I don't think "people are crazy, the world is mad" (as Eliezer puts it) is a good explanation. Maybe people are crazy, but unless we can understand how and why people are crazy (or to put it more diplomatically, "make mistakes"), how can we know that we're not being crazy in the same way or making the same kind of mistakes?

The paper that Wei-Dai reviews is "The Absent-Minded Driver" by Robert J. Aumann, Sergiu Hart, and Motty Perry. Wei-Dai points out, rather condescendingly:

(Notice that the authors of this paper worked for a place called Center for the Study of Rationality, and one of them won a Nobel Prize in Economics for his work on game theory. I really don't think we want to call these people "crazy".)

Wei-Dai then proceeds to give a competent description of the problem and of its standard "planning-optimality" solution. Next comes a description of an alternative seductive-but-wrong solution by Piccione and Rubinstein. I should point out that everyone - P&R; Aumann, Hart, and Perry; Wei-Dai; me; and hopefully you who look into this - realizes that the alternative P&R solution is wrong. It gets the wrong result. It doesn't win. The only problem is explaining exactly where the analysis leading to that solution went astray, and explaining how it might be modified so as to go right. Making this analysis was, as I see it, the whole point of both papers - P&R's and Aumann et al.'s. Wei-Dai describes some characteristics of Aumann et al.'s corrected version of the alternate solution. Then he (?) goes horribly astray:

In problems like this one, UDT is essentially equivalent to planning-optimality. So why did the authors propose and argue for action-optimality despite its downsides ..., instead of the alternative solution of simply remembering or recomputing the planning-optimal decision at each intersection and carrying it out?

But, as anyone who reads the paper carefully should see, they weren't arguing for action-optimality as the solution. They never abandoned planning optimality. Their point is that if you insist on reasoning in this way (and Selten's notion of "subgame perfection" suggests some reasons why you might!), then the algorithm they call "action-optimality" is the way to go about it.
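For concreteness, the planning-optimal solution that everyone in this exchange agrees is correct can be computed directly. Here is a sketch (mine, purely for illustration) using the payoffs of the standard example: exit at the first intersection scores 0, exit at the second scores 4, continuing past both scores 1.

```python
# Planning-optimal solution to the standard Absent-Minded Driver example.
# Payoffs: exit at first intersection = 0, exit at second = 4,
# continue past both = 1. Because the driver cannot distinguish the two
# intersections, he must commit to a single continue-probability p.

def planning_value(p):
    """Expected payoff when continuing with probability p at each exit."""
    return (1 - p) * 0 + p * (1 - p) * 4 + p * p * 1

# Maximize over a fine grid; the analytic optimum is p = 2/3, value 4/3.
best_p = max((i / 10000 for i in range(10001)), key=planning_value)
print(round(best_p, 4), round(planning_value(best_p), 4))  # ~0.6667 ~1.3333
```

The disagreement in the thread is not about this number, but about what reasoning the driver should run through once he is already sitting at an intersection.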

But Wei-Dai doesn't get this. Instead we get this analysis of how these brilliant people just haven't had the educational advantages that LW folks have:

Well, the authors don't say (they never bothered to argue against it), but I'm going to venture some guesses:

  • That solution is too simple and obvious, and you can't publish a paper arguing for it.
  • It disregards "the probability of being at X", which intuitively ought to play a role.
  • The authors were trying to figure out what is rational for human beings, and that solution seems too alien for us to accept and/or put into practice.
  • The authors were not thinking in terms of an AI, which can modify itself to use whatever decision theory it wants to.
  • Aumann is known for his work in game theory. The action-optimality solution looks particularly game-theory like, and perhaps appeared more natural than it really is because of his specialized knowledge base.
  • The authors were trying to solve one particular case of time inconsistency. They didn't have all known instances of time/dynamic/reflective inconsistencies/paradoxes/puzzles laid out in front of them, to be solved in one fell swoop.

Taken together, these guesses perhaps suffice to explain the behavior of these professional rationalists, without needing to hypothesize that they are "crazy". Indeed, many of us are probably still not fully convinced by UDT for one or more of the above reasons.

Let me just point out that the reason it is true that "they never argued against it" is that they had already argued for it. Check out the implications of their footnote #4!

Ok, those are the facts, as I see them. Was Wei-Dai snarky? I suppose it depends on how you define snarkiness. Taboo "snarky". I think that he was overbearingly condescending without the slightest real reason for thinking himself superior. "Snarky" may not be the best one-word encapsulation of that attitude, but it is the one I chose. I am unapologetic. Wei-Dai somehow came to believe himself better able to see the truth than a Nobel laureate in the Nobel laureate's field. It is a mistake he would not have made had he simply read a textbook or taken a one-semester course in the field. But I'm coming to see it as a mistake made frequently by SIAI insiders.

Let me point out that the problem of forgetful agents may seem artificial, but it is actually extremely important. An agent with perfect recall playing the iterated PD, knowing that it is to be repeated exactly 100 times, should rationally choose to defect (by backward induction from the final round). On the other hand, if he cannot remember how many iterations remain to be played, and knows that the other player cannot remember either, he should cooperate by playing Tit-for-Tat or something similar.
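The payoff gap at stake can be illustrated numerically. A toy sketch using the conventional payoffs (mutual cooperation 3, mutual defection 1, lone defector 5, sucker 0):

```python
# Toy iterated Prisoner's Dilemma, showing the payoff gap between mutual
# Tit-for-Tat and mutual defection over many rounds.

PAYOFF = {('C', 'C'): (3, 3), ('D', 'D'): (1, 1),
          ('C', 'D'): (0, 5), ('D', 'C'): (5, 0)}

def play(strategy_a, strategy_b, rounds):
    """Run the iterated PD; each strategy sees the opponent's history."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)
        move_b = strategy_b(history_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

tit_for_tat = lambda opp: 'C' if not opp else opp[-1]
always_defect = lambda opp: 'D'

print(play(tit_for_tat, tit_for_tat, 100))      # (300, 300)
print(play(always_defect, always_defect, 100))  # (100, 100)
```

The sketch only shows why conditional cooperation is attractive; when the round count is common knowledge, backward induction unravels the cooperation from round 100 back, which is exactly why the forgetfulness matters.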

Well, that is my considered response on "snarkiness". I still have to respond on some other points, and I suspect that, upon consideration, I am going to have to eat some crow. But I'm not backing down on this narrow point. Wei-Dai blew it in interpreting Aumann's paper. (And also, other people who know some game theory should read the paper and savor the implications of footnote #4. It is totally cool).

Comment author: Tyrrell_McAllister 20 August 2010 11:49:27PM *  5 points [-]

The paper that Wei-Dai reviews is "The Absent-Minded Driver" by Robert J. Aumann, Sergiu Hart, and Motty Perry. Wei-Dai points out, rather condescendingly:

(Notice that the authors of this paper worked for a place called Center for the Study of Rationality, and one of them won a Nobel Prize in Economics for his work on game theory. I really don't think we want to call these people "crazy".)

How is Wei Dai being condescending there? He's pointing out how weak it is to dismiss people with these credentials by just calling them crazy. ETA: In other words, it's an admonishment directed at LWers.

That, at any rate, was my read.

Comment author: Perplexed 21 August 2010 12:24:20AM 1 point [-]

I'm sure it would be Wei-Dai's read as well. The thing is, if Wei-Dai had not mistakenly come to the conclusion that the authors are wrong and not as enlightened as LWers, that admonishment would not be necessary. I'm not saying he condescends to LWers. I say he condescends to the rest of the world, particularly game theorists.

Comment author: wedrifid 21 August 2010 01:27:07AM 2 points [-]

Are you essentially saying you are nauseated because Wei Dai disagreed with the authors?

Comment author: Tyrrell_McAllister 21 August 2010 12:37:51AM *  1 point [-]

I'm having trouble following you.

I'm sure it would be Wei-Dai's read as well.

Are you saying that you read him differently, and that he would somehow be misinterpreting himself?

The thing is, if Wei-Dai had not mistakenly come to the conclusion that the authors are wrong and not as enlightened as LWers, that admonishment would not be necessary.

The admonishment is necessary if LWers are likely to wrongly dismiss Aumann et al. as "crazy". In other words, to think that the admonishment is necessary is to think that LWers are too inclined to dismiss other people as crazy.

I'm not saying he condescends to LWers. I say he condescends to the rest of the world, particularly game theorists.

I got that. Who said anything about condescending to LWers?

Comment author: Wei_Dai 21 August 2010 12:56:49AM *  2 points [-]

Preliminary notes: You can call me "Wei Dai" (that's firstname lastname). "He" is ok. I have taken a graduate level course in game theory (where I got a 4.0 grade, in case you suspect that I coasted through it), and have Fudenberg and Tirole's "Game Theory" and Joyce's "Foundations of Causal Decision Theory" as two of the few physical books that I own.

Their point is that if you insist on reasoning in this way, (and Seldin's notion of "subgame perfection" suggests some reasons why you might!) then the algorithm they call "action-optimality" is the way to go about it.

I can't see where they made this point. At the top of Section 4, they say "How, then, should the driver reason at the action stage?" and go on directly to describe action-optimality. If they said something like "One possibility is to just recompute and apply the planning-optimal solution. But if you insist ..." please point out where. See also page 108:

In our case, there is only one player, who acts at different times. Because of his absent-mindedness, he had better coordinate his actions; this coordination can take place only before he starts out, at the planning stage. At that point, he should choose p*1. If indeed he chose p*1, there is no problem. If by mistake he chose p*2 or p*3, then that is what he should do at the action stage. (If he chose something else, or nothing at all, then at the action stage he will have some hard thinking to do.)

If Aumann et al. endorse using planning-optimality at the action stage, why would they say the driver has some hard thinking to do? Again, why not just recompute and apply the planning-optimal solution?

I also do not see how subgame perfection is relevant here. Can you explain?

Let me just point out that the reason it is true that "they never argued against it" is that they had already argued for it. Check out the implications of their footnote #4!

This footnote?

Formally, (p*, p*) is a symmetric Nash equilibrium in the (symmetric) game between "the driver at the current intersection" and "the driver at the other intersection" (the strategic form game with payoff functions h).

Since p* is the action-optimal solution, they are pointing out the formal relationship between their notion of action-optimality and Nash equilibrium. How is this footnote an argument for "it" (it being "recomputing the planning-optimal decision at each intersection and carrying it out")?

Comment author: Perplexed 21 August 2010 01:26:16AM 3 points [-]

I have taken a graduate level course in game theory (where I got a 4.0 grade, in case you suspect that I coasted through it), and have Fudenberg and Tirole's "Game Theory" and Joyce's "Foundations of Causal Decision Theory" as two of the few physical books that I own.

Ok, so it is me who is convicted of condescending without having the background to justify it. :( FWIW I have never taken a course, though I have been reading in the subject for more than 45 years.

My apologies. More to come on the substance.

Comment author: Perplexed 21 August 2010 02:19:09AM *  1 point [-]

Relevance of subgame perfection. Selten suggested subgame perfection as a refinement of Nash equilibrium which requires that decisions that seemed rational at the planning stage ought to still seem rational at the action stage. This at least suggests that we might want to consider requiring "subgame perfection" even if we only have a single player making two successive decisions.

Relevance of Footnote #4. This points out that one way to think of problems where a single player makes a series of decisions is to pretend that the problem has a series of players making the decisions - one decision per player, but that these fictitious players are linked in that they all share the same payoffs (but not necessarily the same information). This is a standard "trick" in game theory, but the footnote points out that in this case, since both fictitious players have the same information (because of the absent-mindedness) the game between driver-version-1 and driver-version-2 is symmetric, and that is equivalent to the constraint p1 = p2.

Does Footnote #4 really amount to "they had already argued for [just recalculating the planning-optimal solution]"? Well, no it doesn't really. I blew it in offering that as evidence. (Still think it is cool, though!)

Do they "argue for it" anywhere else? Yes, they do. Section 5, where they apply their methods to a slightly more complicated example, is an extended argument for the superiority of the planning-optimal solution to the action-optimal solutions. As they explain, there can be multiple action-optimal solutions, even if there is only one (correct) planning-optimal solution, and some of those action-optimal solutions are wrong even though they appear to promise a higher expected payoff than does the planning-optimal solution.

I can't see where they made this point. At the top of Section 4, they say "How, then, should the driver reason at the action stage?" and go on directly to describe action-optimality. If they said something like "One possibility is to just recompute and apply the planning-optimal solution. But if you insist ..." please point out where. See also page 108:

In our case, there is only one player, who acts at different times. Because of his absent-mindedness, he had better coordinate his actions; this coordination can take place only before he starts out at the planning stage. At that point, he should choose p1. If indeed he chose p1, there is no problem. If by mistake he chose p2 or p3, then that is what he should do at the action stage. (If he chose something else, or nothing at all, then at the action stage he will have some hard thinking to do.)

If Aumann et al. endorse using planning-optimality at the action stage, why would they say the driver has some hard thinking to do? Again, why not just recompute and apply the planning-optimal solution?

I really don't see why you are having so much trouble parsing this. "If indeed he chose p1, there is no problem" is an endorsement of the correctness of the planning-optimal solution. The sentence dealing with p2 and p3 asserts that, if you mistakenly used p2 for your first decision, then your best follow-up is to remain consistent and use p2 for your remaining choices. The paragraph you quote to make your case is one I might well choose myself to make my case.

Edit: There are some asterisks in variable names in the original paper which I was unable to make work with the italics rules on this site. So "p2" above should be read as "p <asterisk> 2"

Comment author: Wei_Dai 21 August 2010 02:27:43AM *  1 point [-]

It is a statement that the planning-optimal action is the correct one, but it's not an endorsement that it is correct to use the planning-optimality algorithm to compute what to do when you are already at an intersection. Do you see the difference?

ETA (edited to add): According to my reading of that paragraph, what they actually endorse is to compute the planning-optimal action at START, remember that, then at each intersection, compute the set of action-optimal actions, and pick the element of the set that coincides with the planning-optimal action.

BTW, you can use "\" to escape special characters like "*" and "_".

Comment author: JamesAndrix 17 August 2010 10:21:16PM *  3 points [-]

1A. It may well be a wrong problem. If so, it ought to be dissolved.

1B. If so, many theorists (including presumably Nobel prize winners) have missed it since 1969.

1C. Your intuition should not be considered a persuasive argument, even by you.

2 . Even ignoring any singularitarian predictions, given the degree to which knowledge acceleration has already advanced, you should expect to see cases where old standards are blown away with seemingly little effort.

Maybe this isn't one of those cases, but it should not surprise you if we learn that humanity as a whole has done more decision theory in the past few years than in all previous history.

Given that similar accelerations are happening in many fields, there are probably several past-Nobel-level advances by rank amateurs with no special genius.

Comment author: cousin_it 17 August 2010 09:00:33PM *  3 points [-]

In the comment section of Wei Dai's post in question, taw and pengvado completed his solution so conclusively that if you really take the time to understand the object level (instead of the meta level where some people are a priori smarter because they won a prize), you can't help but feel the snarking was justified :-)

Comment author: Perplexed 19 August 2010 02:49:06AM 2 points [-]

OK, I've got some big guns pointed at me, so I need to respond. I need to respond intelligently and carefully. That will take some time. Within a week at most.

Comment author: Wei_Dai 18 August 2010 10:34:34PM 1 point [-]

A couple more comments:

  1. For a long time I also didn't think that Newcomb's Problem was worth thinking about. Then I read something by Eliezer that pointed out the connection to Prisoner's Dilemma. (According to "Prisoners' Dilemma is a Newcomb Problem", others saw the connection as early as 1969.) See also my Newcomb's Problem vs. One-Shot Prisoner's Dilemma where I explored how they are different as well.
  2. I'm curious what you now think about my perspective on the Absent Minded Driver, on both the object level and meta level (assuming I convinced you that it wasn't meant to be a snark). You're the only person who has indicated actually having read Aumann et al.'s paper.
Comment author: Perplexed 20 August 2010 11:58:24PM 2 points [-]

The possible connection between Newcomb and PD is seen by anyone who considers Jeffrey's version of decision theory (EDT). So I have seen it mentioned by philosophers long before I had heard of EY. Game theorists, of course, reject this, unless they are analysing games with "free precommitment". I instinctively reject it too, for what that is worth, though I am beginning to realize that publishing your unchangeable source code is pretty much equivalent to free precommitment.

My analysis of your analysis of AMD is in my response to your comment below.

Comment author: Kevin 18 August 2010 01:16:37AM *  0 points [-]

Give me a scenario which doesn't require me to assign a non-zero prior to woo and in which a revisionist decision theory wins.

Omega is a perfect super-intelligence, existing in a computer simulation like universe that can be modeled by a set of physical laws and a very long string of random numbers. Omega knows the laws and the numbers.

Comment author: Kaj_Sotala 17 August 2010 07:55:30PM *  3 points [-]

Seriously, Omega is not just counterfactual, he is impossible. Why do you guys keep asking us to believe so many impossible things before breakfast?

Omega is not obviously impossible: in theory, someone could scan your brain and simulate how you react in a specific situation. If you're already an upload and running as pure code, this is even easier.

The question is particularly relevant when trying to develop a decision theory for artificial intelligences: there's nothing impossible about the notion of two adversarial AIs having acquired each others' source codes and basing their actions on how a simulated copy of the other would react. If you presume that this scenario is possible, and there seems to be no reason to assume that it wouldn't be, then developing a decision theory capable of handling this situation is an important part of building an AI.

Comment author: prase 17 August 2010 09:50:26AM 3 points [-]

So far, the only effect that all the Omega-talk has had on me is to make me honestly suspect that you guys must be into some kind of mind-over-matter quantum woo.

What on Earth gives you that impression? I agree that scenarios with Omega will probably have little impact on practical matters, at least in the near future, but quantum woo?

In every case I have seen so far where Eliezer has denigrated the standard game solution because it fails to win, he has been analyzing a game involving a physically and philosophically impossible fictional situation.

Why is Omega physically impossible? What is philosophically impossible, in general?

Comment author: Perplexed 17 August 2010 09:26:36PM -1 points [-]

So far, the only effect that all the Omega-talk has had on me is to make me honestly suspect that you guys must be into some kind of mind-over-matter quantum woo.

What on Earth gives you that impression?

Omega makes a decision to put the money in the box, or not. In my model of (MWI) reality, that results in a branching - there are now 2 worlds (one with money, one without). The only problem is, I don't know which world I am in. Next, I decide whether to one-box or to two-box. In my model, that results in 4 possible worlds now. Or more precisely, someone who knows neither my decision nor Omega's would count 4 worlds.

But now we are asked to consider some kind of weird quantum correlation between Omega's choice and my own. Omega's choice is an event within my own past light-cone. By the usual physical assumptions, my choice should not have any causal influence on his choice. But I am asked to believe that if I choose to two-box, then he will have chosen not to leave money, whereas if I just believe as Omega wishes me to believe, then my choice will make me rich by reaching back and altering the past (selecting my preferred history?). And you ask "What on Earth gives me the impression that this is quantum woo?"

Comment author: RobinZ 17 August 2010 09:32:35PM 5 points [-]

Omega makes a decision to put the money in the box, or not. In my model of (MWI) reality, that results in a branching - there are now 2 worlds (one with money, one without). The only problem is, I don't know which world I am in. Next, I decide whether to one-box or to two-box. In my model, that results in 4 possible worlds now. Or more precisely, someone who knows neither my decision nor Omega's would count 4 worlds.

Incorrect. Omega's decision is no more indeterministic than the output of a calculation. Asking (say) me "Does two plus two equal three?" does not create two worlds, one in which I say "yes" and one in which I say "no" - overwhelmingly I will tell you "no".

Comment author: JamesAndrix 17 August 2010 10:49:27PM 4 points [-]

Your model ought to be branching at every subatomic event, not at every conscious intelligent choice.

This makes reality (even humans) predictable.

Comment author: prase 18 August 2010 11:16:26AM 2 points [-]

As others have said, Omega-talk is possible in a purely classical world, and is clearer in one. Omega simply scans my brain and deterministically decides whether to put the money in or not. Then I decide whether to take one box or two. To say my choice should not have any causal influence on his choice is misleading at least. It may be true (depending on how exactly one defines causality), but it doesn't exclude correlations between the two choices, simply because both are consequences of a common cause (the state of my brain and the relevant portion of the world immediately before the scenario began).

There is no need to include quantumness or even MWI in this scenario, and no certain reason why quantum effects would prevent it from happening. That said, I don't claim that something similar is likely to happen soon.
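prase's common-cause point can be illustrated with a toy sketch (all names here are invented for illustration): the agent's choice and Omega's prediction are both pure functions of the same prior brain state, so they correlate perfectly without any backward causation.

```python
import random

def choice(brain_state):
    # The agent's decision is a pure function of its prior brain state.
    return "one-box" if brain_state % 2 == 0 else "two-box"

def omega_prediction(brain_state):
    # Omega scans the same state beforehand and runs the same function.
    return choice(brain_state)

# The prediction and the choice always agree, yet neither causes the other:
# both are downstream of the shared brain state.
agreement = all(
    omega_prediction(s) == choice(s)
    for s in (random.randrange(10**6) for _ in range(1000))
)
```

Here `agreement` comes out true on every run, purely because of the shared cause; no information travels backward in time.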

Comment author: FAWS 17 August 2010 09:37:57PM *  2 points [-]

That's the case only if you somehow manage to use a quantum coin in your decision. Your decision could be close enough to deterministic that the measure of the worlds where you decide differently is billions of times smaller or more, and can safely be neglected.

Comment author: Perplexed 17 August 2010 06:35:24PM -2 points [-]

Why is Omega physically impossible?

Because, in predicting my future decisions, he is performing Laplace demon computations based on Heisenberg demon measurements. And physics rules out such demons.

What is philosophically impossible, in general?

Anything which cannot consistently coexist with what is already known to exist.

Comment author: FAWS 17 August 2010 06:46:26PM 1 point [-]

One possibility: Omega is running this universe as a simulation, and has already run a large number of earlier identical instances.

There may be many less obvious possibilities, even if you require Omega to be certain rather than just very sure.

Comment author: Perplexed 17 August 2010 08:57:16PM 1 point [-]

One possibility: Omega is running this universe as a simulation, and has already run a large number of earlier identical instances.

Ok, that is possible, I suppose. Though it does conflict, in a sense, with the claim that he put the money in the box before I made the decision whether to one-box or two-box. Because, in some sense, I already made that decision in all(?) of those earlier identical simulations.

Comment author: prase 18 August 2010 10:59:05AM 0 points [-]

It is far from sure that the decisions made by human brains rely heavily on quantum effects, or that the relevant data can't be obtained by some non-destructive scanning, without Heisenberg-demonic measurements. The Laplace-demon aspect is in fact a matter of precision. If Omega needed to simulate the brain precisely (unfortunately, the formulations of the paradox here on LW and in the subsequent discussions suggest this), then yes, Omega would have to be a demon. But Newcomb's paradox needn't happen in its idealised version, with 100% success for Omega's predictions, to be valid and interesting. If Omega is right only 87% of the time, the paradox still holds, and I don't see any compelling reason why this should be impossible without postulating demonic abilities.
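The 87% figure is enough for the paradox to bite, which a quick expected-value calculation shows (assuming the standard payoffs of $1,000,000 in the opaque box and $1,000 in the transparent one; the function name is ours):

```python
# Expected payoffs in Newcomb's problem with an imperfect predictor.
# Standard payoffs assumed: $1,000,000 in the opaque box if one-boxing
# was predicted, $1,000 always in the transparent box.

def expected_value(accuracy, one_box):
    big, small = 1_000_000, 1_000
    if one_box:
        # The big box is full iff Omega correctly predicted one-boxing.
        return accuracy * big
    else:
        # The big box is full only when Omega's prediction missed.
        return (1 - accuracy) * big + small

ev_one = expected_value(0.87, one_box=True)    # about 870,000
ev_two = expected_value(0.87, one_box=False)   # about 131,000
```

With these payoffs, one-boxing has the higher expected value whenever the predictor's accuracy exceeds roughly 50.05%, so even a quite fallible Omega keeps the paradox alive.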

Comment author: Mitchell_Porter 17 August 2010 06:54:00AM *  3 points [-]

Have you read the original article? The payoff is less if you follow ordinary decision theory, and yet the whole point of decision theory is to maximize the payoff.

Comment author: thomblake 17 August 2010 03:36:13PM 2 points [-]

Impossible things need to have zero-probability priors.

0 and 1 are not probabilities. I certainly don't have a prior of 0 for Omega's existence; he's not defined in a contradictory fashion, and even if he was I harbor the tiniest bit of doubt that I'm wrong about how contradictions work.

Comment author: Perplexed 17 August 2010 06:41:43PM 1 point [-]

I am using sloppy language here, perhaps. But to illustrate my usage, I claim that the probability that 2+2=4 is 1. And that p(2+2=5)=0.

Comment author: thomblake 17 August 2010 06:45:54PM 3 points [-]

If you were a Bayesian and assigned 0 probability to 2+2=5, you'd be in unrecoverable epistemic trouble if you turned out to be wrong about that. See How to convince me 2+2=3.

Comment author: Perplexed 19 August 2010 02:04:08AM 1 point [-]

EY to the contrary, I remain smug in my evaluation p(2+2=5)=0. Of all the evidences that Eliezer offered, the only one to convince me was the one which demonstrated to me that I was confused about the meaning of the digit 5. Yes, by Cromwell's rule, I think it possible I might be mistaken about how to count. "1, 2, 3, 5, 6, 4, 7", I recite to myself. "Yes, I had been wrong about that. Thanks for correcting me."

I might then write down p(Eliezer Yupkowski is the guru of Less Wrong)=0.999999999. Once again, I would be mistaken. It is "Yudkowski", not "Yupkowski". But in neither case am I in unrecoverable epistemic trouble. Those were typos. Correcting them is a simple search-and-replace, not a Bayesian updating. Or so I understand.

Comment author: WrongBot 19 August 2010 02:21:38AM 3 points [-]

I might then write down p(Eliezer Yupkowski is the guru of Less Wrong)=0.999999999. Once again, I would be mistaken. It is "Yudkowski", not "Yupkowski". But in neither case am I in unrecoverable epistemic trouble. Those were typos. Correcting them is a simple search-and-replace, not a Bayesian updating. Or so I understand.

It's Yudkowsky. Might want to update your general confidence evaluations.

Comment author: timtyler 17 August 2010 05:26:40PM *  1 point [-]

If you run out of material, here's an academic paper, that claims to resolve many of the same problems as are being addressed on this site:

"DISPOSITION-BASED DECISION THEORY"

Comment author: Emile 17 August 2010 09:53:14AM 1 point [-]

Let me ask the question this way: What evidence do you have that the standard solution to the one-shot PD can be improved upon without creating losses elsewhere? My impression is that you are being driven by wishful thinking and misguided intuition.

For what it's worth, I have written programs that cooperate on the prisoner's dilemma if and only if their opponent will cooperate, without caring about the opponent's rituals of cognition, only about his behaviour.

Unfortunately, this margin is too small to contain them; I mean, they're not ready for prime time. I'll probably write up a post on that in the near future.
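Emile's programs were never posted, but the simplest degenerate sketch of the idea is a "clique bot" that cooperates exactly with perfect textual copies of itself, guaranteeing mutual cooperation among copies and defection against everything else. (This is our illustration, not Emile's code; Emile's version reportedly checks the opponent's behaviour rather than its source text, which is harder.)

```python
# Hypothetical "clique bot": cooperate iff the opponent is an exact
# textual copy of us. CLIQUE_SOURCE stands in for the bot's full source.

CLIQUE_SOURCE = 'return "C" if opponent_source == CLIQUE_SOURCE else "D"'

def clique_bot(opponent_source):
    # Mutual cooperation with exact copies; defect against anything else.
    return "C" if opponent_source == CLIQUE_SOURCE else "D"
```

The limitation is obvious: a bot that behaves identically but is written differently gets defected against, which is exactly why behaviour-based checking is the more interesting variant.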

Comment author: ocr-fork 18 August 2010 06:30:55AM 1 point [-]

CODT (Cop Out Decision Theory) : In which you precommit to every beneficial precommitment.

Comment author: timtyler 17 August 2010 05:03:34PM *  0 points [-]

Seriously, Omega is not just counterfactual, he is impossible. Why do you guys keep asking us to believe so many impossible things before breakfast?

This Omega is not impossible.

It says: "Omega has been correct on each of 100 observed occasions so far".

Not particularly hard - if you pick on decision theorists who had previously publicly expressed an opinion on the subject.

Comment author: Perplexed 17 August 2010 06:27:42PM 0 points [-]

Ah! So I need to assign priors to three hypotheses. (1) Omega is a magician (i.e. illusion artist) (2) Omega had bribed people to lie about his past success. (3) He is what he claims.

So I assign a prior of zero probability to hypothesis #3, and cheerfully one-box using everyday decision theory.

Comment author: timtyler 17 August 2010 06:40:49PM *  1 point [-]

First: http://lesswrong.com/lw/mp/0_and_1_are_not_probabilities/

You don't seem to be entering into the spirit of the problem. You are "supposed" to reach the conclusion that there's a good chance that Omega can predict your actions in this domain pretty well - from what he knows about you - after reading the premise of the problem.

If you think that's not a practical possibility, then I recommend that you imagine yourself as a deterministic robot - where such a scenario becomes more believable - and then try the problem again.

Comment author: Perplexed 17 August 2010 09:03:10PM 1 point [-]

If I imagine myself as a deterministic robot, who knows that he is a deterministic robot, I am no longer able to maintain the illusion that I care about this problem.

Comment author: cousin_it 17 August 2010 09:10:09PM *  4 points [-]

Do you think you aren't a deterministic robot? Or that you are, but you don't know it?

Comment author: Perplexed 19 August 2010 01:43:10AM 1 point [-]

It is a quantum universe. So I would say that I am a stochastic robot. And Omega cannot predict my future actions.

Comment author: timtyler 17 August 2010 09:56:51PM *  5 points [-]

...then you need to imagine that you made the robot, that it is meeting Omega on your behalf - and that it then gives you all its winnings.

Comment author: TobyBartels 18 August 2010 05:41:56AM 4 points [-]

I like this version! Now the answer seems quite obvious.

In this case, I would design the robot to be a one-boxer. And I would harbour the secret hope that a stray cosmic ray will cause the robot to pick both boxes anyway.

Comment author: timtyler 18 August 2010 06:11:55AM *  2 points [-]

Yes - but you would still give its skull a lead lining - and make use of redundancy to produce reliability...

Comment author: TobyBartels 18 August 2010 07:46:15AM 0 points [-]

Agreed.