DanielLC comments on Interpersonal Morality - Less Wrong

14 Post author: Eliezer_Yudkowsky 29 July 2008 06:01PM


Comment author: Virge2 30 July 2008 03:37:58PM 0 points [-]

Imagine the year 2100

AI Prac Class Task: (a) design and implement a smarter-than-human AI using only open source components; (b) ask it to write up your prac report. Time allotted: 4 hours. Bonus points: disconnect your AI host from all communications devices; place your host in a Faraday cage; disable your AI's morality module; find a way to shut down the AI without resorting to triggering the failsafe host self-destruct.

sophiesdad, since a human today could not design a modern microprocessor without using the already-developed plethora of design tools, your assertion that a human will never design a smarter-than-human machine is safe but uninformative. Humans use smart tools to make smarter tools. It's only reasonable to predict that smarter-than-human machines will be made by a collaboration of humans and existing smart machines.

Speculation on whether "smart enough to self-improve" comes before or after the smart-as-a-human mark on some undefined one-dimensional smartness scale is fruitless. Judging by what you seem to endorse in quoting your unnamed correspondent, your definition of "smart" makes comparison with human intelligence impossible.

Comment author: DanielLC 17 July 2012 05:57:05AM 0 points [-]

disable your AI's morality module; find a way to shut down the AI without resorting to triggering the failsafe host self-destruct.

Trivial. Once you've disabled your AI's morality module, you've already shut it down.

You just build the conscience, and that is the AI.