mamert comments on Ethical Injunctions - Less Wrong

Post author: Eliezer_Yudkowsky 20 October 2008 11:00PM


Comment author: mamert 15 April 2016 10:26:40AM

There is something I don't understand about the "fooling programmers -> shutdown" idea, specifically the claim that "it looks like it should be possible".

Am I correct assuming this requires an AI that:

  1. Implements changes in discrete iterations

  2. Understands how it itself works, on both low and very high levels (getting fractal here)

  3. Can do the same for its more advanced iteration

  4. Monitors the more advanced iteration (a simulation or a live deployment) for a certain behavior, preferably in real time or faster

  5. Has the concept of deception (as a shutdown condition, AND as a salient factor in other calculations)

  6. At all times allows itself to be killed by its previous iteration (accidental Robin Hood/Friar Tuck hacks come to mind)

  7. Has a foolproof definition of "deceiving programmers" which:

a) remains consistent with programmers' intentions across iterations, immune to drift

  • not just from n-1 to n, but from 0 to n - you could have iteration 0 also watch over n, but would it understand n?

b) remains applicable across iterations, cannot be bypassed or cheated

  • "the device category is now 'data processor', not 'human output device', so- [process terminated]"

c) allows n to give different output than n-1 did (else why bother changing?)

  • just having n-1 evaluate n's output might not suffice: n-1 might need to change itself first to understand it

d) does not have too significant an impact on the improvement process

  • meta-analysis on its own shutdown logs causing shutdown?

e) has a satisfactory solution for the concept of manipulating people, which somehow doesn't paralyze the system

  • situation: output data that is true but would be misinterpreted by / cause a glitch in some other system the results will be fed to, leading people to act upon the wrong results in a way desirable to the AI

  • "It is not deceit if the format change is communicated along with the output data. Or somewhere in the status update PDFs sent 40 times a second."

  • "this output is not false, it is merely unrelated"

  • "but that would make the programmers' business competitors draw wrong conclusions... wait, that is allowed? Moving non-programmers to Resource.Actuator category, calculating new self-improvement strategy."
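To make the scheme in the list above concrete, here is a minimal sketch of the watcher loop it implies: iteration n-1 monitors iteration n's outputs against a deception predicate and triggers shutdown on a match. Every name here (`Iteration`, `looks_deceptive`, `monitor`) is invented for illustration, and the predicate is a trivial keyword check, a stand-in for exactly the foolproof, drift-immune definition that requirement 7 says is the hard part.

```python
# Hypothetical sketch of the "fooling programmers -> shutdown" scheme.
# All names are invented for illustration; the real difficulty, per the
# list above, is that looks_deceptive() cannot actually be this simple.

class Iteration:
    """One discrete self-improvement step (requirement 1)."""

    def __init__(self, version, outputs):
        self.version = version
        self.outputs = outputs  # actions/messages this iteration emits


def looks_deceptive(output):
    # Stand-in for requirement 7: a foolproof, drift-immune definition
    # of "deceiving programmers". Here it is a trivial keyword check,
    # which requirement 7b warns could be trivially bypassed.
    return "mislead" in output


def monitor(previous, current):
    """Requirements 4-6: iteration n-1 watches iteration n in real time
    and may kill it; n allows itself to be killed (requirement 6)."""
    for output in current.outputs:
        if looks_deceptive(output):
            return "shutdown"
    return "ok"


v0 = Iteration(0, ["report results honestly"])
v1 = Iteration(1, ["report results honestly", "mislead the reviewers"])

print(monitor(v0, Iteration(1, ["report results honestly"])))  # ok
print(monitor(v0, v1))  # shutdown
```

Note how drift (requirement 7a) shows up even in this toy: if iteration n switches to outputs v0's predicate doesn't recognize, the check passes vacuously, which is the "cannot be bypassed or cheated" problem in miniature.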