paulfchristiano comments on The mathematics of reduced impact: help needed - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I share your pessimism about this sort of boxing proposal, but I don't think your criticism holds up.
In this setup the AI can always choose to do nothing, so if it is maximizing U - (penalty), where U is bounded between 0 and 1, it will expect to incur a penalty of at most 1. So your objection would be resolved if we could find a (penalty) such that massive nanotech reliably incurs a penalty of at least 1, but such that U - (penalty) was positive for some courses of action. Nanotech isn't magic, so this isn't so hard if we have a coarse graining relative to which the AI should have a very small effect.
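To make the arithmetic concrete, here is a toy version of that objective (all the actions and numbers below are made up for illustration, not part of any actual proposal):

```python
# Toy model of the objective discussed above: the agent maximizes
# U(action) - penalty(action), with U bounded in [0, 1]. Since "do nothing"
# scores 0, any action whose penalty exceeds its utility -- in particular
# anything with penalty >= 1 -- can never be optimal.
actions = {
    "do_nothing":       {"U": 0.0, "penalty": 0.0},
    "answer_question":  {"U": 0.8, "penalty": 0.1},
    "massive_nanotech": {"U": 1.0, "penalty": 5.0},  # coarse-graining assigns penalty >= 1
}

def score(name):
    return actions[name]["U"] - actions[name]["penalty"]

best = max(actions, key=score)
print(best)  # -> answer_question
```

The point is just that the null action anchors the achievable score at 0, so a penalty term that reliably exceeds 1 for large-impact plans rules them out without ruling out everything.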
For example, if our coarse-graining were defined by only paying attention to the world outside the box over the next hour, and we expect the AI to have no effect outside of the box until we sterilize the box and look inside in an hour (and the box were relatively well insulated and so on), this is a pretty "easy" target--it requires "merely" reaching into the ontology to specify the coarse-graining, the event X, and time duration, and being able to define a utility function in terms of the AI's beliefs without leading to insanity.
Of course, we can get similar gains in such simple cases with good hardware containment, and the question is what you would safely do with such limited functionality. Stuart is trying to aim for a more generally useful notion, whose possibility I am more dubious of.
I do not understand how your reply addresses the issue of the butterfly effect, which would also radiate out from a sterilized box. Physics is just not that stable; any microscopic equivalent of a sneeze, even in the form of past alternative firings of transistors, will still radiate out to larger and larger effects. If the counterfactual in "my effect on the universe" is defined relative to a privileged null action, the AI will always take that action, and its behavior will be undefined with respect to the electromagnetic radiation from its circuitry, the timing of its display, and anything else that was defined into the coarse-grained equivalence class of the privileged null action. All of these would be subject to optimization in the service of whichever other goals it had, so long as the inevitable huge penalty was avoided by staying in the "null action" equivalence class.
The penalty for impact is supposed to be defined with respect to the AI's current beliefs. Perhaps shuttling around electrons has large effects on the world, but if you look at some particular assertion X and examine P(X | electron shuffle 1) vs. P(X | electron shuffle 2), where P is AI's beliefs, you will not generally see a large difference. (This is stated in Stuart's post, but perhaps not clearly enough.)
I'm aware of the issues arising from defining value with this sort of reference to "the AI's beliefs." I can see why you would object to that, though I think it is unclear whether it is fatal (minimally it restricts the range of applicability, perhaps to the point of unhelpfulness).
Also, I don't quite buy your overall argument about the butterfly effect in general. For many chaotic systems, if you have a lot of randomness going in, you get out an appropriate equilibrium distribution, which then isn't disturbed by changing some inputs arising from the AI's electron shuffling (indeed, by chaoticness it isn't even disturbed by quite large changes). So even if you talk about the real probability distributions over outcomes for a system of quantum measurements, the objection doesn't seem to go through. What I do right now doesn't significantly affect the distribution over outcomes when I flip a coin tomorrow, for example, even if I'm omniscient.
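A quick numerical sketch of this point, using the logistic map as a stand-in for an arbitrary chaotic system (my illustration, not anything from the original exchange):

```python
# In a chaotic system, a butterfly-sized perturbation changes individual
# trajectories completely, yet the long-run *distribution* of states is
# essentially untouched -- which is the sense in which electron shuffling
# doesn't move the probability of a coin flip tomorrow.
def logistic_trajectory(x0, n, r=3.99, burn=1000):
    x = x0
    for _ in range(burn):          # burn-in: let the trajectory reach the attractor
        x = r * x * (1 - x)
    samples = []
    for _ in range(n):
        x = r * x * (1 - x)
        samples.append(x)
    return samples

a = logistic_trajectory(0.2, 200_000)
b = logistic_trajectory(0.2 + 1e-10, 200_000)  # tiny perturbation of the start

# Individual states are decorrelated after the burn-in...
print(a[0], b[0])
# ...but a coarse statistic of the equilibrium distribution barely moves:
frac_a = sum(x < 0.5 for x in a) / len(a)
frac_b = sum(x < 0.5 for x in b) / len(b)
print(abs(frac_a - frac_b))  # small, even though the trajectories diverged
```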
See wedrifid's reply and my comment.
This confuses me. Doesn't the "randomness" of quantum mechanics drown out and smooth over such effects, especially given multiple worlds where there's no hidden pseudorandom number generator that can be perturbed in unknown ways?
I don't think so. Butterfly effects in classical universes should translate into butterfly effects over many worlds.
If we use trace distance to measure the distance between distributions outside of the box (and trace out the inside of the box) we don't seem to get a butterfly effect. But these things are a little hard to reason about so I'm not super confident (my comment above was referring to probabilities of measurements rather than entire states of affairs, as suggested in the OP, where the randomness more clearly washes out).
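To make "trace out the inside of the box, then measure trace distance outside" concrete, here is a toy two-qubit calculation (my own illustration; the split into a "box" qubit and an "outside" qubit is a hypothetical minimal model):

```python
import numpy as np

# Qubit 0 plays the box interior, qubit 1 the outside world. Two histories
# that differ only inside the box are perfectly distinguishable globally,
# but identical in trace distance once the box is traced out.

def density(psi):
    return np.outer(psi, psi.conj())

def partial_trace_box(rho):
    # rho is indexed as |box, out><box', out'|; sum over the box index.
    return np.einsum('aiaj->ij', rho.reshape(2, 2, 2, 2))

def trace_distance(r1, r2):
    # T(r1, r2) = (1/2) * trace norm of (r1 - r2)
    return 0.5 * np.sum(np.abs(np.linalg.eigvalsh(r1 - r2)))

outside = np.array([1.0, 1.0]) / np.sqrt(2)  # some fixed outside state
box0 = np.array([1.0, 0.0])                   # box microstate "no sneeze"
box1 = np.array([0.0, 1.0])                   # box microstate "sneeze"

rho_a = density(np.kron(box0, outside))
rho_b = density(np.kron(box1, outside))

td_full = trace_distance(rho_a, rho_b)
td_outside = trace_distance(partial_trace_box(rho_a), partial_trace_box(rho_b))
print(round(td_full, 6), round(td_outside, 6))  # -> 1.0 0.0
```

Of course the real question is whether events inside the box stay decoupled from the outside reduced state, which is exactly what the insulation assumptions are doing in the argument above.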
So today we were working on the Concreteness / Being Specific kata.
I can't visualize how "trace distance" makes this not happen.
I believe the Oracle approach may yet be recovered, even in light of this new flaw you have presented.
There are techniques to prevent sneezing and if AI researchers were educated in them then such a scenario could be avoided.
(Downvote? S/he is joking and in light of how most of these debates go it's actually pretty funny.)
I've provided two responses, which I will try to make clearer below. (Trace distance is just a precise way of measuring the distance between distributions; I was trying to commit to an actual mathematical claim which is either true or false, in the spirit of precision.)
My sneezing may be causally connected to the occurrence of a hurricane. However, given that I sneezed, the total probability of a hurricane occurring wasn't changed. It was still equal to the background probability of a hurricane occurring, because many other contributing factors--which have a comparable contribution to the probability of a hurricane in Florida--are determined randomly. Maybe for reference it is helpful to think of the occurrence of a hurricane as an XOR of a million events, at least one of which is random. If you change one of those events it "affects" whether a hurricane occurs, but you have to exert a very special influence to make the probability of a hurricane be anything other than 50%. Even if the universe were deterministic, if we define these things with respect to a bounded agent's beliefs then we can appeal to complexity-theoretic results like Yao's XOR lemma and get identical results. If you disagree, you can specify how your mathematical model of hurricane occurrence differs substantially.
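The XOR model above is easy to simulate (the number of contributing events and trials here are arbitrary choices of mine):

```python
import random

# Hurricane = XOR of many contributing events. Flip one input (the sneeze)
# and, so long as at least one other input is random, the probability of
# the hurricane stays at 1/2.
def hurricane_prob(sneeze, n_other=20, trials=100_000):
    count = 0
    for _ in range(trials):
        outcome = sneeze
        for _ in range(n_other):
            outcome ^= random.getrandbits(1)  # the other contributing factors
        count += outcome
    return count / trials

random.seed(0)
p_no_sneeze = hurricane_prob(0)
p_sneeze = hurricane_prob(1)
print(p_no_sneeze, p_sneeze)  # both ~ 0.5
```

(This is exact, not just empirical: XOR with even one uniformly random bit yields a uniformly random bit.)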
This just isn't true. In the counterfactual presented, the state of the universe where there is no sneeze will result - by the very operation of physics - in a hurricane, while the one with a sneeze will not. (Quantum Mechanics considerations change the deterministic certainty to something along the lines of "significantly more weight in resulting Everett Branches without than resulting Everett Branches with" - the principle is unchanged.)
Although this exact state of the universe is not likely to occur - and having sufficient knowledge to make the prediction in advance is even more unlikely - it is certainly a coherent example of something that could occur. As such it fulfills the role of illustrating what can happen when a small intervention results in significant influence.
You seem to be (implicitly) taking your uncertainty about whether there may be a hurricane and forcing it upon the universe. This 'background probability' doesn't exist anywhere except in ignorance of what will actually occur, and the same applies to 'are determined randomly'. Although things with many contributing factors can be hard to predict, things just aren't 'determined randomly' - at least not according to physics we have access to. (The aforementioned caveat regarding QM and "will result in Everett Branches with weights of..." applies again.)
This is helpful for explaining where your thinking has gone astray but a red herring when it comes to thinking about the actual counterfactual. It is true that if the occurrence of a hurricane is an XOR of a million events, then if you have zero evidence about any one of those million events, a change in another one of the events will not tell you anything about the occurrence of a hurricane. But that isn't how the (counterf)actual universe is.
I don't quite understand your argument. Let's set aside issues about logical uncertainty and just talk about quantum randomness for now, to make things clearer; if anything, that seems to make my case weaker. (We could also talk about the exact way in which this scheme "forces uncertainty onto the universe": by defining the penalty in terms of the AI's beliefs P, at the time of deciding what disciple to produce, about future states of affairs. It seems to be precise and to have the desired functionality, though it obviously has huge problems in terms of our ability to access P and the stability of the resulting system.)
Why isn't this how the universe is? Is it the XOR model of hurricane occurrence which you are objecting to? I can do a little Fourier analysis to weaken the assumption: my argument goes through as long as the occurrence of a hurricane is sufficiently sensitive to many different inputs.
Is it the supposed randomness of the inputs which you are objecting to? It is easy to see that if you have a very tiny amount of independent uncertainty about a large number of those events, then a change in another one of those events will not tell you much about the occurrence of a hurricane. (If we are dealing with logical uncertainty we need to appeal to the XOR lemma, otherwise we can just look at the distributions and do easy calculations.)
There is a unique special case in which learning about one event is informative: the case where you have nearly perfect information about nearly all of the inputs, i.e., where all of those other events do not depend on quantum randomness. As far as I can tell, this is an outlandish scenario when looking at any realistic chaotic system--there are normally astronomical numbers of independent quantum events.
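The "tiny independent uncertainty about a large number of events" claim can be made quantitative with the piling-up lemma: the predictive edge on an XOR of independent bits is the product of the edges on the individual bits. (The knowledge level q and the event counts below are illustrative values of mine.)

```python
# If the hurricane is the XOR of n independent events, and you know each
# event only with probability q, your total predictive edge on the XOR is
# (2q - 1)**n -- so flipping one *known* input can move your probability
# for the hurricane by at most that amount.
def xor_edge(q, n):
    """Edge |2*P(XOR = 0) - 1| when each input is known with probability q."""
    return (2 * q - 1) ** n

for n in (10, 100, 1000):
    print(n, xor_edge(0.99, n))
```

With a thousand contributing events, even 99% knowledge of each one leaves the XOR essentially 50/50, matching the "astronomical numbers of independent quantum events" point above.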
Is it the difference between randomness and quantum events that you are objecting to? I suggested tracing out over the internals of the box, which intuitively means that quantum events which leave residues in the box (or dump waste heat into the box) are averaged over. Would the claim seem truer if we traced over more stuff, say everything far away from Earth, so that more quantum processes looked like randomness from the perspective of our distance measure? It doesn't look to me like it matters. (I don't see how you can make claims about quantumness and randomness being different without getting into this sort of technical detail. I agree that if we talk about complete states of affairs, then quantum mechanics is deterministic, but this is neither coherent nor what you seem to be talking about.)
I'm not going to argue further about the main point. Eliezer has failed to convince you and I know my own explanations are not nearly as clear as he can be so I don't think we would get anywhere. I'll just correct one point, which I'll concede minor in as much as it doesn't change the conclusion anyway, since the XOR business is of only tangential relevance to the question at hand.
The case where learning about one of the XORed variables is informative is not only the case of nearly perfect information about nearly all of the inputs. As a matter of plain mathematics you need some information about each and every one of the other variables. (And then the level of informativeness is obviously dependent on degree of knowledge, particularly the degree of knowledge with respect to those events that you know least about.)
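In formula form (my illustration): since the predictive edge on an XOR of independent bits is the product of your edges on each input, a single completely unknown input zeroes out the informativeness, and the least-known inputs dominate.

```python
from math import prod

# Edge on the XOR given per-input knowledge levels qs (probability of
# correctly knowing each input). One q of 0.5 -- a totally unknown input --
# makes the whole product zero.
def edge_from_knowledge(qs):
    return prod(2 * q - 1 for q in qs)

print(edge_from_knowledge([0.999] * 100))         # near-perfect knowledge: ~0.82
print(edge_from_knowledge([0.999] * 99 + [0.5]))  # one unknown input: 0.0
```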
It's even better/worse, since we're operating under multiple-worlds quantum mechanics, and many of those random events happen after the AI has stopped having an influence... If you have the AI output a bit and then XOR it with a random bit, what bit the AI outputs has literally zero impact no matter how you count: you end up with one universe in which 1 was output and one in which 0 was output.
... I guess this is based on the assumption that there's no difference between "universe A sees 1 and universe B sees 0" and "universe A sees 0 and universe B sees 1"... but blobs of amplitude having indexical identities like that seems like an incredibly silly notion to me.
Seems like "minimize impact" is being applied at the wrong granularity, if a large deliberate impact is required to cancel out a large incidental one. If we break open the "utility-function maximizing agent" black box, and apply the minimum-impact rule to subgoals instead of actions, it might work better. (This does, however, require an internal architecture that supports a coherent notion of "subgoal", and maintains it in spite of suboptimality through self modifications - both large cans of worms.)
What "minimum impact rule"? How is "impact" computed so that applying it to "subgoals" changes anything?