All of linas's Comments + Replies

linas00

I will come, unless I utterly space it off and forget.

linas30

The FAQ states that Omega has/is a computer the size of the moon -- that's huge, but finite. I believe it's possible, with today's technology, to create a randomizer that an Omega of this size cannot predict. However smart Omega is, one can always build a randomizer that Omega cannot break.
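
A minimal sketch of the kind of randomizer being described, assuming the OS entropy pool (Python's `secrets` module) mixes in physical noise that Omega has not already measured:

```python
import secrets

def choose_boxes() -> str:
    """Decide one-box vs. two-box from fresh hardware entropy.

    secrets.token_bytes() draws from the OS entropy pool, which mixes in
    physical noise (interrupt timings, hardware RNGs).  A finite simulator
    that scanned the agent's state *before* this call would still have to
    predict entropy that did not yet exist when it took its snapshot.
    """
    return "one-box" if secrets.token_bytes(1)[0] & 1 else "two-box"

print(choose_boxes())
```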

-2MugaSofer
True, but just because such a randomizer is theoretically possible doesn't mean you have one to hand.
linas00

Yes. I was confused, and perhaps added to the confusion.

linas00

Hmm, the FAQ, as currently worded, does not state this. It simply implies that the agent is human, that Omega has made 1000 correct predictions, and that Omega has billions of sensors and a computer the size of the moon. That's large, but finite. One may assign some finite complexity to Omega -- say 100 bits per atom times the number of atoms in the moon, whatever. I believe that one may devise pseudo-random number generators that can defy this kind of compute power. The relevant point here is that Omega, while powerful, is still not "God" (i... (read more)
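
A quick back-of-envelope check of that bound (the lunar mass and mean atomic mass below are my own assumed figures, not the FAQ's):

```python
# Rough upper bound on Omega's memory, per the "100 bits per atom
# times the number of atoms in the moon" estimate above.
MOON_MASS_KG = 7.35e22                 # assumed lunar mass
MEAN_ATOMIC_MASS_KG = 20 * 1.66e-27    # assume ~20 u per atom (silicate rock)

atoms = MOON_MASS_KG / MEAN_ATOMIC_MASS_KG   # ~2e48 atoms
bits = 100 * atoms                           # ~2e50 bits of state

print(f"Omega's capacity: ~{bits:.1e} bits")
print(f"256-bit keyspace: ~{2.0**256:.1e} states")
# ~2e50 bits of machine vs. ~1e77 key candidates: exhaustive search is
# out of reach, which is the sense in which Omega is not "God".
```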

-1incogn
I do not want to make estimates of how, and with what accuracy, Omega can predict. There is not nearly enough context available for this; Wikipedia's version has no detail whatsoever on the nature of Omega. There seems to be enough discussion to be had even with the perhaps impossible assumption that Omega can predict perfectly, always, and that this can be known by the subject with absolute certainty.
linas00

Huh? Can you explain? Normally, one states that a mechanical device is "predictable": given its current state and some effort, one can discover its future state. Machines don't have the ability to choose. Normally, "choice" is something that only a system possessing free will can have. Is that not the case? Is there some other "standard usage"? Sorry, I'm a newbie here; I honestly don't know more about this subject than what I can deduce by my own wits.

0ArisKatsaris
Machines don't have preferences, by which I mean they have no conscious self-awareness of a preferred state of the world -- they can nonetheless execute "if, then, else" instructions. That such instructions do not follow their preferences (as they lack such) can perhaps be considered sufficient reason to say that machines don't have the ability to choose -- but the fact that they're deterministic doesn't settle it: "determining something" and "choosing something" are synonyms, not opposites, after all.
linas00

There needs to be an exploration of addiction and rationality. Gamblers are addicted; we know some of the brain mechanisms of addiction -- some neurotransmitter A is released in brain region B, causing C to deplete, causing a dependency on the reward that A provides. This particular neuro-chemical circuit derives great utility from the addiction, thus driving the behaviour. By this argument, perhaps one might argue that addicts are "rational", because they derive great utility from their addiction. But is this argument faulty?

A mechanistic ... (read more)

linas00

The collision I'm seeing is between formal, mathematical axioms and English-language usage. It's clear that Benelliot is thinking of the axiom in mathematical terms: dry and inarguable, much like the independence axioms of probability -- statements about abstract sets. This is correct: the proper formulation of VNM is abstract and mathematical.

Kilobug is right in noting that information has value and ignorance has a cost. But that doesn't subvert the axiom, as the axioms are, mathematically and by definition, correct; the way they were mapped to the example ... (read more)
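
For reference, the independence axiom in its dry mathematical form (the standard VNM statement, not necessarily the FAQ's wording): for lotteries L, M, N and any p in (0,1],

```latex
L \succeq M \iff pL + (1-p)N \;\succeq\; pM + (1-p)N
```

Any dispute about the Allais-style examples is then a dispute about the mapping from the story to the lotteries, not about this statement.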

linas40

Hmm. I just got a -1 on this comment ... I thought I posed a reasonable question, and would even have thought it a "commonly asked question", so why would it get a -1? Am I misunderstanding something, or am I being unclear?

0MugaSofer
Omega is, by definition, always truthful. EDIT: Sorry, thought this was in reply to a different comment.
linas-40

How many times in a row will you be mugged before you realize that Omega was lying to you?

1ArisKatsaris
Really, you probably need to start imagining Omega as a trustworthy process, e.g. a mathematical proof that tells you 'X' -- thinking of it as a person seems to trip you up, since you keep bringing up the possibility that it's lying when it says 'X'...
1MugaSofer
Omega is, by definition, always truthful.
linas-10

OK, but this can't be a "minor detail"; it's rather central to the nature of the problem. The back-and-forth with incogn above tries to deal with this. Put simply: either Omega is able to predict, in which case EDT is right, or Omega is not able to predict, in which case CDT is right.

The source of entropy need not be a fair coin: even fully deterministic systems can have behavior so complex that prediction is untenable. Either Omega can predict, and knows it can predict, or Omega cannot predict, and knows that it cannot. The possibility that it cannot predict, yet is erroneously convinced that it can, seems ridiculous.
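
A quick expected-value check of that dichotomy, using the standard Newcomb payoffs in thousands of dollars (my own parameterization, with p as Omega's predictive accuracy):

```python
# Payoffs in $1000s: big box holds 1000 if Omega predicted one-boxing,
# small box always holds 1.
def ev_one_box(p):
    return p * 1000                  # correct prediction -> big box full

def ev_two_box(p):
    return p * 1 + (1 - p) * 1001   # correct prediction -> big box empty

for p in (0.5, 0.5005, 0.6, 1.0):
    print(f"p={p}: one-box {ev_one_box(p):7.2f}, two-box {ev_two_box(p):7.2f}")
# At p = 0.5 (no predictive power) two-boxing dominates, as CDT says;
# one-boxing wins as soon as p > 1001/2000, as the EDT answer assumes.
```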

linas-20

I'm with incogn on this one: either there is predictability or there is choice; one cannot have both.

Incogn is right in saying that, from Omega's point of view, the agent is purely deterministic, i.e. more or less equivalent to a computer program. Incogn is slightly off the mark in conflating determinism with predictability: a system can be deterministic but still not predictable; this is the foundation of cryptography. Deterministic systems are either predictable or they are not. Unless Newcomb's problem explicitly allows the agent to be non-deterministic,... (read more)
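
A minimal illustration of that determinism/predictability gap: the generator below is fully deterministic, yet predicting its output without the seed is as hard as breaking SHA-256 (assuming, as in cryptographic practice, that the hash behaves like a pseudorandom function):

```python
import hashlib

def keystream_bit(seed: bytes, i: int) -> int:
    """Deterministic: the same (seed, i) always yields the same bit.
    Unpredictable: without the seed, guessing bit i beats 50/50 only
    by breaking the hash -- determinism is not predictability."""
    digest = hashlib.sha256(seed + i.to_bytes(8, "big")).digest()
    return digest[0] & 1

print([keystream_bit(b"secret seed", i) for i in range(16)])
```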

3ArisKatsaris
Think of real people making choices and you'll see it's the other way around. The carefully chosen paths are the predictable ones, if you know the variables involved in the choice. To be unpredictable, you need to think and choose less. Hell, the archetypal image of someone giving up on choice is them flipping a coin or throwing a dart with closed eyes -- in short, resorting to unpredictability in order to NOT choose for themselves.
2wedrifid
Either your claim is false or you are using a definition of at least one of those two words that means something different to the standard usage.
2scav
Newcomb's problem makes the stronger assumption that the agent is predictable and that, in fact, one action has already been predicted. In that specific situation it would be hard to argue against that one action being determined and immutable, even if in general there is debate about the relationship between determinism and predictability.
-2MugaSofer
If Omega cannot predict, TDT will two-box.
-1incogn
I think I agree, by and large, despite the length of this post.

Whether choice and predictability are mutually exclusive depends on what choice is supposed to mean. The word is not exactly well defined in this context. In some sense, if variable > threshold then A, else B is a choice. I am not sure where you think I am conflating. As far as I can see, perfect prediction is obviously impossible unless the system in question is deterministic. On the other hand, determinism does not guarantee that perfect prediction is practical or feasible. The computational complexity might be arbitrarily large, even if you have complete knowledge of an algorithm and its input. I cannot really see the relevance to my above post.

Finally, I am myself confused as to why you want two different decision theories (CDT and EDT) instead of two different models for the two different problems conflated into the single identifier "Newcomb's paradox". If you assume a perfect predictor, and thus full correlation between prediction and choice, then you have to make sure your model actually reflects that. Let's start out with a simple matrix; P/C/1/2 are shorthands for prediction, choice, one-box, two-box:

* P1 C1: 1000
* P1 C2: 1001
* P2 C1: 0
* P2 C2: 1

If the value of P is unknown, but independent of C: dominance principle, C=2, entirely straightforward CDT. If, however, the value of P is completely correlated with C, then the matrix above is misleading: P and C cannot be different and are really only a single variable, which should be wrapped in a single identifier. The matrix you are actually applying CDT to is the following one:

* (P&C)1: 1000
* (P&C)2: 1

The best choice is (P&C)=1, again by straightforward CDT. The only failure of CDT is that it gives different, correct solutions to different problems with a properly defined correlation of prediction and choice. The only advantage of EDT is that it is easier to cheat this information in without noticing it -- even when it wo…
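
A tiny sketch of that point -- the same max-over-actions rule picks differently once the correlation is folded into the state space (matrices copied from the comment above):

```python
# incogn's first matrix: P independent of C, so hold P fixed and
# maximize over C.  C2 dominates row by row.
independent = {("P1", "C1"): 1000, ("P1", "C2"): 1001,
               ("P2", "C1"): 0,    ("P2", "C2"): 1}
for p in ("P1", "P2"):
    best = max(("C1", "C2"), key=lambda c: independent[(p, c)])
    print(p, "-> best choice:", best)        # C2 both times (dominance)

# Collapsed matrix: P and C are one variable, so maximize over (P&C).
collapsed = {"(P&C)1": 1000, "(P&C)2": 1}
print("collapsed -> best choice:", max(collapsed, key=collapsed.get))
```
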
linas00

Yes, exactly, and in our modern marketing-driven culture, one almost expects to be gamed by salesmen or sneaky game-show hosts. In this culture, it's a prudent, even 'rational' response.

linas40

I'm finding the "counterfactual mugging" challenging. At this point, the rules of the game seem to be "design a thoughtless, inert, unthinking algorithm, such as CDT or EDT or BT or TDT, which will always give the winning answer." Fine. But for the entire range of Newcomb's problems, we are pitting this dumb-as-a-rock algorithm against a super-intelligence. By the time we get to the counterfactual mugging, we seem to have a scenario where Omega is saying "I will reward you only if you are a trusting rube who can be fleeced." N... (read more)

linas-20

The conclusion to section "11.1.3. Medical Newcomb problems" raises a question which remains unanswered: "So just as CDT "loses" on Newcomb's problem, EDT will "lose" on Medical Newcomb problems (if the tickle defense fails) or will join CDT and "lose" on Newcomb's problem itself (if the tickle defense succeeds)."

If I were designing a self-driving car and had to provide an algorithm for what to do during an emergency, I might choose to hard-code CDT or EDT into the system, as seems appropriate. However, as an intelligen... (read more)

4linas
Hmm. I just got a -1 on this comment ... I thought I posed a reasonable question, and would even have thought it a "commonly asked question", so why would it get a -1? Am I misunderstanding something, or am I being unclear?
linas30

The presentation of Newcomb's problem in section 11.1.1 seems faulty. What if the human flips a coin to determine whether to one-box or two-box (or uses any suitable source of entropy that is beyond the predictive powers of the super-intelligence)? What happens then?

This point is danced around in the next section, but never stated outright: EDT provides exactly the right answer if humans are fully deterministic and predictable by the super-intelligence. CDT gives the right answer if the human employs an unpredictable entropy source in their decision-making. It is the entropy source that makes the decision acausal with respect to the acts of the super-intelligence.

2wedrifid
If the FAQ left this out then it is indeed faulty. It should either specify that if Omega predicts the human will use that kind of entropy then it gets a "Fuck you" (gets nothing in the big box, or worse), or, at best, that Omega rewards that kind of randomization with a proportional payoff (i.e., if behavior is determined by a fair coin, then the big box contains half the money). This is a fairly typical (even "Frequent") question, so it needs to be included in the problem specification. But it can just be considered a minor technical detail.
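
A quick check of that proportional-payoff variant (payoffs in $1000s; q, the agent's probability of one-boxing, is my own parameterization):

```python
# Proportional filling: if Omega predicts you one-box with probability q,
# the big box holds q * 1000; the small box always holds 1.
def ev(q):
    big = q * 1000
    return q * big + (1 - q) * (big + 1)   # one-box w.p. q, else take both

for q in (0.0, 0.25, 0.5, 1.0):
    print(f"q={q}: EV = {ev(q):.2f}")
# EV simplifies to 999*q + 1, strictly increasing in q: under this rule
# randomizing buys nothing, and committing to one-box remains optimal.
```
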
linas-10

There is one rather annoying subtext that recurs throughout the FAQ: the very casual and carefree use of the words "rational" and "irrational", with the rather flawed idea that following some axiomatic system (e.g. VNM) and Bayes is "rational" and not doing so is "irrational". I think this is a disservice, and, what's more, it fails to look into the effects of intelligence, experience, training and emotion. The Allais paradox scratches the surface, as do various psych experiments. But ...

The real question is "wh... (read more)

linas00

There are numerous typos throughout the thing; someone needs to re-read it. The math in "8.6.3. The Allais paradox" is all wrong: option 2A is not actually 34% of 1A and 66% of nothing, etc.
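
For reference, the mixture relation the section presumably intends (numbers assumed from the usual LessWrong presentation of Allais; the FAQ's own figures may differ): with 1A = $24,000 with certainty and 1B = a 33/34 chance of $27,000,

```latex
2A = 0.34 \cdot 1A + 0.66 \cdot \delta_{\$0}, \qquad
2B = 0.34 \cdot 1B + 0.66 \cdot \delta_{\$0}
```

where \delta_{\$0} is the lottery paying nothing. The independence axiom then forces 1A > 1B iff 2A > 2B, which is the consistency the section's numbers should exhibit.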