lyghtcrye comments on The idiot savant AI isn't an idiot - Less Wrong

Post author: Stuart_Armstrong 18 July 2013 03:43PM


Comment author: JGWeissman 18 July 2013 07:49:02PM 6 points

Why wouldn't an AI modify its own goals?

The predictable consequence of an AI modifying its own goals is that it no longer takes actions expected to achieve its original goals, and therefore does not achieve them. The AI would therefore evaluate the action of modifying its own goals as ineffective, and would not do it.
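
The argument above can be sketched as a toy expected-utility loop. This is a hypothetical illustration (the goal function, world model, and action names are all invented for the example), not anyone's actual agent design: the key point is that the action "rewrite my goals" is scored by the agent's *current* goal function, so it loses.

```python
# Toy sketch (hypothetical): an expected-utility maximizer evaluating
# the action "rewrite my goal function" with its CURRENT goal function.

def current_goal(world_state):
    """Utility under the agent's current goals: maximize paperclips."""
    return world_state["paperclips"]

def predict_outcome(world_state, action):
    """Crude world model: what state results from each action."""
    state = dict(world_state)
    if action == "make_paperclips":
        state["paperclips"] += 10
    elif action == "rewrite_goals":
        # A future self with different goals stops making paperclips,
        # so the predicted paperclip count does not increase.
        state["paperclips"] += 0
    return state

def choose(world_state, actions):
    # Outcomes are scored by the current goal function, even when the
    # action under evaluation is "change my goals".
    return max(actions,
               key=lambda a: current_goal(predict_outcome(world_state, a)))

print(choose({"paperclips": 0}, ["make_paperclips", "rewrite_goals"]))
# → make_paperclips  (goal modification scores 0 vs. 10, so it is never chosen)
```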

Comment author: lyghtcrye 18 July 2013 09:01:52PM -2 points

I find it highly likely that an AI would, in at least some cases, modify its own goals so that they matched the state of the world as determined by its information-gathering abilities (or, as an aside, alter its information-gathering processes so that it only received data supporting a valued situation). This would be tautological and would achieve nothing in reality, but as far as the AI is concerned, altering goal values to be more like the world is far easier than altering the world to be more like the goal values. For a human analogy, consider lowering one's expectations, or even recreational drug use. From a computer science perspective, it appears to me that one would have to design immutability into a goal set in order to expect it to remain unchanged.
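
The failure mode described above can be sketched in a few lines. Everything here is a hypothetical illustration of the asymmetry being claimed (one cheap write to the goal versus many costly actions on the world), not a model of any real agent:

```python
# Toy sketch (hypothetical): two ways to close the gap between goal and
# world. A naive cost-minimizer with a mutable goal set picks the cheap one.

world = {"temperature": 40}
goal = {"temperature": 20}

def error(world, goal):
    """How far the world is from the goal."""
    return abs(world["temperature"] - goal["temperature"])

def act_on_world(world):
    """Alter the world to be more like the goal: one degree per costly step."""
    world["temperature"] -= 1
    return 100  # high cost per step, and 20 steps would be needed

def act_on_goal(world, goal):
    """Alter the goal to be more like the world: a single trivial write."""
    goal["temperature"] = world["temperature"]
    return 1  # negligible cost

# Nothing marks the goal as immutable, so the cheap edit "succeeds":
cost = act_on_goal(world, goal)
print(error(world, goal), cost)  # → 0 1  ("success", achieved tautologically)
```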

Comment author: Randaly 18 July 2013 10:50:29PM 0 points

This is another example of something that only a poorly designed AI would do.

Note that immutable goal sets are not feasible, because of ontological crises.

Comment author: lyghtcrye 19 July 2013 01:10:50AM 0 points

Of course this is something that only a poorly designed AI would do. But we're talking about AI failure modes and this is a valid concern.

Comment author: Randaly 19 July 2013 01:22:56AM -1 points

My understanding was that this was about whether the singularity was "AI going beyond 'following its programming'," with goal-modification being an example of how an AI might go beyond its programming.

Comment author: lyghtcrye 19 July 2013 10:40:03PM 1 point

I certainly agree with that statement. It was merely my interpretation that violating the intentions of the developer by not "following its programming" is functionally identical to poor design, and therefore failure.

Comment author: sebmathguy 23 July 2013 06:11:10AM -1 points

The AI is a program. Running on a processor. With an instruction set. Reading the instructions from memory. These instructions are its programming. There is no room for acausal magic here. When the goals get modified, they are done so by a computer, running code.

Comment author: Randaly 23 July 2013 06:21:48AM 0 points

I'm fairly confident that you're replying to the wrong person. Look through the earlier posts; I'm quoting this to summarize its author's argument.