Armok_GoB comments on The mathematics of reduced impact: help needed - Less Wrong

10 Post author: Stuart_Armstrong 16 February 2012 02:23PM




Comment author: paulfchristiano 18 February 2012 12:10:24PM *  1 point [-]

I've provided two responses, which I will try to make clearer. (Trace distance is just a precise way of measuring the distance between probability distributions; I was trying to commit to an actual mathematical claim that is either true or false, in the spirit of precision.)

  • The mathematical claim: if you have a chaotic system with many random inputs, and you then consider the distributions obtained by varying one input, they are very close together according to natural distance measures on probability distributions. If the inputs to the system are quantum events, the appropriate formalization of the statement remains true.

My sneezing may be causally connected to the occurrence of a hurricane. However, given that I sneezed, the total probability of a hurricane occurring wasn't changed. It was still equal to the background probability of a hurricane occurring, because many other contributing factors, which have a comparable contribution to the probability of a hurricane in Florida, are determined randomly. Perhaps it is helpful to think of the occurrence of a hurricane as an XOR of a million events, at least one of which is random. If you change one of those events it "affects" whether a hurricane occurs, but you would have to exert a very special influence to make the probability of a hurricane be anything other than 50%. Even if the universe were deterministic, if we define these things with respect to a bounded agent's beliefs then we can appeal to complexity-theoretic results like Yao's XOR lemma and get identical results. If you disagree, you can specify how your mathematical model of hurricane occurrence differs substantially.

  • Your particular example is also precluded by the coarse-graining I mentioned. Namely, define the distance between two worlds in terms of the total perturbation to the world outside the box over the next hour. After 30 minutes, extract some useful info from the box and incinerate it. Of course the box's insulation and flames let out some info, so I need both (1) and (2) to go through, but this gets rid of the intended large impact of things the AI says to you. Also, the information may be maliciously chosen, and you need more power to get the AI to minimize the impact of its answer. I don't think this is a realistic goal.
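The XOR toy model above can be checked numerically. Here is a minimal sketch (my own illustration, not a model from the discussion): a "hurricane" occurs iff the parity of all contributing events is 1, one event is uniformly random, and we compare the output distributions obtained by flipping one controlled input (the sneeze).

```python
import itertools

def hurricane(inputs):
    """Toy model: a hurricane occurs iff the XOR (parity) of all inputs is 1."""
    result = 0
    for b in inputs:
        result ^= b
    return result

def output_distribution(fixed_inputs, n_random):
    """Probability that hurricane() returns 1, averaging uniformly
    over the n_random unknown (random) inputs."""
    count = 0
    total = 0
    for rand in itertools.product([0, 1], repeat=n_random):
        count += hurricane(list(fixed_inputs) + list(rand))
        total += 1
    return count / total

# Vary one controlled input (the sneeze); one other input is random.
p_sneeze = output_distribution([1, 0, 1, 1], n_random=1)
p_no_sneeze = output_distribution([0, 0, 1, 1], n_random=1)

# Trace distance (here, total variation distance between two
# Bernoulli distributions) between the two resulting distributions:
tv = abs(p_sneeze - p_no_sneeze)
```

With even a single uniformly random input, both distributions come out at exactly 50% and the distance between them is zero, matching the claim that the sneeze leaves the total probability unchanged.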
Comment author: Armok_GoB 18 February 2012 01:40:43PM 0 points [-]

It's even better/worse, since we're operating under many-worlds quantum mechanics, and many of those random events happen after the AI has stopped having an influence... If you have the AI output a bit and then XOR it with a random bit, the bit the AI outputs has literally zero impact no matter how you count: you end up with one universe in which 1 was output and one in which 0 was output.

... I guess this is based on the assumption that there's no difference between "universe A sees 1 and universe B sees 0" and "universe A sees 0 and universe B sees 1"... but blobs of amplitude having indexical identities like that seems like an incredibly silly notion to me.
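The "literally zero impact" claim can be made concrete with a short sketch (my own illustration): XOR the AI's bit with a fair coin and the resulting distribution over the final bit is exactly the same whichever bit the AI chose.

```python
from fractions import Fraction

def output_dist(ai_bit):
    """Distribution over the final bit when ai_bit is XORed with a
    fair coin flip: maps final_bit -> probability (exact rationals)."""
    dist = {0: Fraction(0), 1: Fraction(0)}
    for coin in (0, 1):
        dist[ai_bit ^ coin] += Fraction(1, 2)
    return dist

# The AI's choice has no effect on the resulting distribution:
same = output_dist(0) == output_dist(1)
```

Both choices yield the uniform distribution {0: 1/2, 1: 1/2}, i.e. the two "branches" contain the same pair of outcomes either way, which is the indexical-identity point above.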