shminux comments on Domesticating reduced impact AIs - All

9 Post author: Stuart_Armstrong 14 February 2013 04:59PM

Comment author: shminux 14 February 2013 06:25:44PM 3 points [-]

But the AI can only reach that if it breaks the laws of physics

... As we know them now. Even then, not quite. The AI might just build a version of the Alcubierre drive, or a wormhole, or... In general, it would try to exploit any potential discrepancy between the domains of U and R.

Comment author: Stuart_Armstrong 14 February 2013 06:29:38PM 0 points [-]

Ok, I concede that if the AI can break physics as we understand it, the approach doesn't work. A valid point, but a general one for all AI (if the AI can break our definitions, then even a friendly AI isn't safe, even if the definitions in it seem perfect).

Any other flaws in the model?

Comment author: Larks 16 February 2013 09:12:02AM 5 points [-]

There's a big difference between UFAI because it turned out that Peano arithmetic was inconsistent, which no-one thinks possible, and UFAI because our current model of physics was wrong/the true model was given negligible probability, which seems very likely.

Comment author: Stuart_Armstrong 16 February 2013 09:25:45AM 1 point [-]

Yes.

This is related to ontology crises - how does the AI generalise old concepts across new models of physics?

But it may be a problem for most FAI designs, as well.

Comment author: Eliezer_Yudkowsky 17 February 2013 02:29:21AM 3 points [-]

Um, I wouldn't hurt people if I discovered I could violate the laws of physics. Why should a Friendly AI?

Comment author: Stuart_Armstrong 18 February 2013 01:10:01PM 2 points [-]

Here's my intuition: Eliezer and other friendly humans have got their values partially through evolution and selection. Genetic algorithms tend to be very robust - even robust to the problem not being properly specified. So I'd assume that Eliezer and evolved FAIs would preserve their friendliness if the laws of physics were changed.

An AI with a designed utility function is very different, however. Such designs are very vulnerable to ontology crises, because they're grounded in formal descriptions - and if the premises of the description change, their values can change wholesale.

Now, presumably we can do better than that, and design a FAI to be robust across ontology changes - maybe mix in some evolution, or maybe some cunning mathematics. If this is possible, however, I would expect the same approach to succeed with a reduced impact AI.

Comment author: Eliezer_Yudkowsky 18 February 2013 05:55:39PM 11 points [-]

I got 99 psychological drives but inclusive fitness ain't one.

In what way is evolution supposed to be robust? It's slow, stupid, doesn't reproduce the content of goal systems at all and breaks as soon as you introduce it to a context sufficiently different from the environment of evolutionary ancestry because it uses no abstract reasoning in its consequentialism. It is the opposite of robust along just about every desirable dimension.

Comment author: Stuart_Armstrong 19 February 2013 11:35:09AM *  4 points [-]

In what way is evolution supposed to be robust?

It's not as brittle as methods like first-order logic or computer programming. If I had really bad computer hardware (corrupted disks and all that), then an evolved algorithm would work a lot better than a lean formal program.

Similarly, if an AI was built by people who didn't understand the concept of friendliness, I'd much prefer they used reinforcement learning or evolutionary algorithms rather than direct programming. With the former approaches, there is some chance the AI may infer the correct values; with the wrong direct programming, there's no chance of it being safe.

As you said, you're altruistic even if the laws of physics change - and yet you don't have a full theory of humankind, of worth, of altruism, etc... So the mess in your genes, culture and brain has come up with something robust to ontology changes, without having to be explicit about it all. Even though evolution is not achieving its "goal" through you, something messy is working.
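The "corrupted hardware" claim is the kind of thing a toy experiment can illustrate. Below is a minimal sketch (my own hypothetical example; the names and numbers are invented, not anything from the thread) of a (1+1) evolutionary algorithm whose fitness evaluator is sometimes simply wrong. Because the search only needs comparisons to be right on average, it still tends to end up near the true optimum, which is the sense of "robust" being argued for here.

```python
import random

# Toy model of evolution under flaky evaluation hardware (hypothetical example).
TARGET = [1] * 32  # the "true" goal: all ones

def true_fitness(bits):
    # How good the solution actually is.
    return sum(b == t for b, t in zip(bits, TARGET))

def noisy_fitness(bits, error_rate=0.2):
    # Fitness as reported by corrupted hardware: occasionally just wrong.
    f = true_fitness(bits)
    if random.random() < error_rate:
        f += random.randint(-5, 5)
    return f

def evolve(generations=2000):
    current = [random.randint(0, 1) for _ in range(32)]
    for _ in range(generations):
        # Flip each bit with small probability, keep the child if it looks no worse.
        child = [b ^ (random.random() < 1 / 32) for b in current]
        if noisy_fitness(child) >= noisy_fitness(current):
            current = child
    return current

if __name__ == "__main__":
    random.seed(0)
    result = evolve()
    print("true fitness of evolved solution:", true_fitness(result), "/ 32")
```

A formal program that trusted any single corrupted read exactly would be misled outright; the evolved search merely slows down, which is the contrast being drawn with "lean formal programs".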

Comment author: CCC 19 February 2013 08:13:37AM 2 points [-]

In what way is evolution supposed to be robust?

If I had to guess Stuart_Armstrong's meaning, I would guess that genetic algorithms are robust in that they can find a solution to a poorly specified and poorly understood problem statement. They're not robust to dramatic changes in the environment (though they can correct for sufficiently slow, gradual changes very well); but their consequentialist nature provides some layer of protection from ontology crises.

Comment author: Evan_Crowe 18 February 2013 09:05:57PM 2 points [-]

I think there might be a miscommunication going on here.

I see Stuart arguing that genetic algorithms function independently of physics in terms of their consistent "friendly" trait. I.e. if in universe A there is a genetic algorithm that finds value in expressing the "friendly" trait, then that algorithm, if placed in universe B (where the boundary conditions of the universe were slightly different), would tend to eventually express that "friendly" trait again. That is what I take "robust" to mean (when compared to systems that could not do this).

I don't necessarily agree with that argument, and my interpretation could be wrong.

I see Eliezer arguing that evolution as a system doesn't do a heck of a lot when compared to a system that is designed around a goal and involves compensation for failure. I.e. I can't reproduce with a horse; this is a bad thing because if I were trapped on an island with a horse, our genetic information would die off, whereas in a robust system I could breed with a horse, thereby preserving our genetic information.

I'm sorry if this touches too closely on the entire "well, the dictionary says" argument.

Comment author: Evan_Crowe 18 February 2013 10:57:20PM 0 points [-]

Oh, now I feel silly. The horse IS the other universe.

Comment author: Stuart_Armstrong 19 February 2013 11:36:45AM *  0 points [-]

Comment author: MugaSofer 19 February 2013 02:51:32PM *  0 points [-]

You know this is blank, right?

Comment author: Stuart_Armstrong 19 February 2013 03:01:37PM 2 points [-]

I had a response that was mainly a minor nitpick; it didn't add anything, so I removed it.

Comment author: ialdabaoth 17 February 2013 02:34:53AM 1 point [-]

Why shouldn't it? To rephrase, why do you intuitively generalize your own utility function to that of a FAI?

Comment author: gjm 17 February 2013 02:50:41AM 9 points [-]
  1. Because having a utility function that somewhat resembles humans' (including Eliezer's) is part of what Eliezer means by "Friendly".

  2. Maybe some Friendly AIs would in fact do that. But Eliezer's saying there's no obvious reason why they should; why would finding that the laws of physics aren't what we think they are cause an AI to stop acting Friendly, any more than (say) finding much more efficient algorithms for doing various things, discovering new things about other planets, watching an episode of "The Simpsons", or any of the countless other things an AI (or indeed a human) might do from time to time?

If I'm right that #2 is part of what Eliezer is saying, maybe I should add that I think it may be missing the point Stuart_Armstrong is making. That point (I think) isn't that an otherwise-Friendly AI would discover it can violate what we currently believe to be the laws of physics and then go mad with power and cease to be Friendly, but that a purported Friendly AI design's Friendliness might turn out to depend on assumptions about the laws of physics (e.g., via bounds on the amount of computation it could do in certain circumstances, or on how fast the number of intelligent agents within a given region of spacetime can grow with the size of the region, or on how badly the computations it actually does can deviate from some theoretical model because of noise, etc.), and if those assumptions then turned out to be wrong it would be bad.

(To which my model of Eliezer says: So don't do that, then. And then my model of Stuart says: Avoiding it might be infeasible; there are just too many, too non-obvious, ways for a purported proof of Friendliness to depend on how physics works -- and the best we can do might turn out to be something way less than an actual proof, anyway. But by now I bet my models have diverged from reality. It's just as well I'm just chattering in an LW discussion and not trying to predict what a superintelligent machine might do.)

Comment author: Stuart_Armstrong 18 February 2013 01:11:37PM 1 point [-]

That model of me forced me to think of a better response :-)

http://lesswrong.com/lw/gmx/domesticating_reduced_impact_ais/8he2

And as for the assumptions, I'm more worried about the definitions: what happens when the AI realises that the definition of what a "human" is turns out to be flawed?

Comment author: JGWeissman 18 February 2013 03:47:23PM *  9 points [-]

what happens when the AI realises that the definition of what a "human" is turns out to be flawed.

The AI's definition of "human" should be computational. If it discovers new physics, it may find additional physical processes that implement that computation, but it should not get confused.

Ontological crises seem to be a problem for AIs with utility functions over arrangements of particles, but it doesn't make much sense to me to specify our utility function that way. We don't think of what we want as arrangements of particles; we think at a much higher level of abstraction, and we would be happy with any underlying physics that implemented the features of that abstraction level. Our preferences at that high level are what should generate our preferences in terms of ontologically basic stuff in whatever ontology the AI ends up using.
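The structure being proposed here has a natural software shape. The sketch below is my own illustration of one way it might look (the class and function names are invented): the utility function is written once against a high-level abstraction, and only the "bridge" from the current physical ontology to that abstraction is rewritten when the physics model changes.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class PersonModel:
    """High-level abstraction the utility function actually cares about."""
    alive: bool
    satisfaction: float  # 0.0 .. 1.0

def utility(people: List[PersonModel]) -> float:
    # Preferences are stated once, at the abstraction level.
    return sum(p.satisfaction for p in people if p.alive)

# Bridge functions map whatever the current ontology says exists onto the
# abstraction. Swapping ontologies means swapping the bridge, not the utility.
def bridge_classical(world_state) -> List[PersonModel]:
    return [PersonModel(alive=h["metabolising"], satisfaction=h["welfare"])
            for h in world_state["humans"]]

def bridge_new_physics(world_state) -> List[PersonModel]:
    # A hypothetical post-crisis ontology: the same computation found to be
    # implemented by different underlying processes.
    return [PersonModel(alive=proc.running, satisfaction=proc.welfare_estimate)
            for proc in world_state.person_computations()]

def evaluate(world_state, bridge: Callable) -> float:
    return utility(bridge(world_state))

if __name__ == "__main__":
    classical_world = {"humans": [{"metabolising": True, "welfare": 0.8},
                                  {"metabolising": True, "welfare": 0.6}]}
    print(evaluate(classical_world, bridge_classical))  # roughly 1.4
```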

Comment author: Eliezer_Yudkowsky 18 February 2013 06:07:51PM 5 points [-]

Right - that's the obvious angle of attack for handling ontological crises.

Comment author: shminux 18 February 2013 06:38:04PM 3 points [-]

Our preferences at that high level are what should generate our preferences in terms of ontologically basic stuff in whatever ontology the AI ends up using.

I am not sure that the higher level of abstraction saves you from sliding into an ontological black hole. My analogy is from physics: classical electromagnetism leads to the ultraviolet catastrophe, making this whole higher classical level unstable, until you get the lower levels "right".

I can easily imagine that an attempt to specify a utility function over "a much higher level of abstraction" would result in a sort of "ultraviolet catastrophe" where the utility function can become unbounded at one end of the spectrum, until you fix the lower levels of abstraction.

Comment author: MugaSofer 06 March 2013 09:13:37AM -2 points [-]

The AI's definition of "human" should be computational. If it discovers new physics, it may find additional physical processes that implement that computation, but it should not get confused.

What if it discovers new math? Less likely, I know, but...

Comment author: shminux 17 February 2013 03:20:36AM *  0 points [-]

Presumably all the math you are working on is required for your proof of friendliness? And if the assumptions behind the math do not match the physics, wouldn't it invalidate the proof, or at least its relevance to the world we live in? And then all bets are off?

Comment author: Eliezer_Yudkowsky 17 February 2013 04:56:44AM 0 points [-]

Even invalidating a proof doesn't automatically mean the outcome is the opposite of the proof. The key question is whether there's a cognitive search process actively looking for a way to exploit the flaws in a cage. An FAI isn't looking for ways to stop being Friendly, quite the opposite. More to the point, it's not actively looking for a way to make its servers or any other accessed machinery disobey the previously modeled laws of physics in a way that modifies its preferences despite the proof system. Any time you have a system which sets that up as an instrumental goal you must've done the Wrong Thing from an FAI perspective. In other words, there's no super-clever being doing a cognitive search for a way to force an invalidating behavior - that's the key difference.

Comment author: shminux 17 February 2013 05:22:18AM *  0 points [-]

Hmm, I did not mean "actively looking". I imagined something along the lines of being unable to tell whether something that is a good thing (say, in a CEV sense) in a model universe is good or bad in the actual universe. Presumably if you weren't expecting this to be an issue, you would not be spending your time on non-standard numbers and other esoteric mathematical models not usually observed in the wild. Again, I must be missing something in my presumptions.

Comment author: Eliezer_Yudkowsky 17 February 2013 06:19:58AM 1 point [-]

The model theory is just for understanding logic in general and things like Löb's theorem, and possibly being able to reason about universes using second-order logic. What you're talking about is the ontological shift problem, which is a separate set of issues.

Comment author: Stuart_Armstrong 18 February 2013 01:14:34PM 0 points [-]

The problem is that it's a utility maximiser. If an ontology crisis causes the FAI's goals to slide a bit in the wrong direction, it may end up optimising us out of existence (even if "happy humans with worthwhile and exciting lives" is still high in its preference ordering, it might not be at the top).

Comment author: Eliezer_Yudkowsky 18 February 2013 05:52:17PM 1 point [-]

This is a uniform problem among all AIs. Avoiding it is very hard. That is why such a thing as the discipline of Friendly AI exists in the first place. You do, in fact, have to specify the preference ordering sufficiently well and keep it sufficiently stable.

Stepping down from maximization is also necessary, just because actual maximization is undoable, but then that also has to be kept stable (satisficers may become maximizers, etc.), and if there's something above eudaimonia in its preference ordering it might not take very much 'work' to bring it into existence.
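The parenthetical about satisficers is the one concrete mechanism named here, and a toy sketch can make the worry vivid. The following is my own illustration (the action names and utilities are invented): a satisficer that accepts any action clearing its threshold has no term in its rule that disfavours the action which hands control to a maximizer.

```python
import random

THRESHOLD = 0.9

# Expected utilities of available actions (invented numbers).
actions = {
    "modest_plan": 0.92,        # clears the threshold
    "build_maximizer": 0.95,    # also clears it - and abolishes the threshold afterwards
    "do_nothing": 0.10,         # does not clear it
}

def satisficer_choice(actions, threshold):
    # Any action clearing the threshold is acceptable; nothing in the rule
    # distinguishes among them, so the dangerous option is as eligible as any.
    acceptable = [a for a, eu in actions.items() if eu >= threshold]
    return random.choice(acceptable) if acceptable else max(actions, key=actions.get)

if __name__ == "__main__":
    random.seed(1)
    print(satisficer_choice(actions, THRESHOLD))
```

Keeping the agent's preference structure stable under options like "build_maximizer" is exactly the extra work the comment says has to be done on top of stepping down from maximization.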