orthonormal comments on Hedging our Bets: The Case for Pursuing Whole Brain Emulation to Safeguard Humanity's Future - Less Wrong

Post author: inklesspen 01 March 2010 02:32AM (11 points)




Comment author: orthonormal 03 March 2010 03:39:46AM 1 point [-]

You might say it's an FAI-complete problem, in the same way "building a transhuman AI you can interact with and keep boxed" is.

Comment author: timtyler 03 March 2010 08:48:00AM * 1 point [-]

You think building a machine that can be stopped is the same level of difficulty as building a machine that reflects the desires of one or more humans while it is left on?

I beg to differ - stopping on schedule or on demand is one of the simplest possible problems for a machine, while doing what humans want while it is switched on is much trickier.

Only the former problem needs to be solved to eliminate the spectre of a runaway superintelligence that fills the universe with its idea of utility against the wishes of its creator.

Comment author: LucasSloan 03 March 2010 06:55:06PM 1 point [-]
Comment author: wedrifid 03 March 2010 03:44:07AM 1 point [-]

Exactly, I like the terminology.