Armok_GoB comments on Supposing you inherited an AI project... - Less Wrong Discussion

-5 Post author: bokov 04 September 2013 08:07AM

Comments (23)
Comment author: Armok_GoB 20 October 2013 07:47:59PM *  2 points

Obvious problem 1: the video output or descriptions can contain basilisks, including ones that cause problem 2.

Obvious problem 2: Someone could ask it for something, verify the answer, and end up building a full UFAI without realizing it.

Obvious problem 3: A UFAI could arise inside the simulation used to produce the hypotheticals, and either hack its way out directly or cause problem 1 followed by problem 2.

And the most obvious problem of all: without being able to repeatedly take the highly dangerous and active step of modifying its own source code, it'll never get smart enough to be useful on 95% of queries.

Comment author: [deleted] 21 October 2013 01:02:53PM 1 point

Fair point. In that case, given an unknown, partially complete AI, if the first action you take is "Let me just start reading the contents of these files without running it, to see what it even does," then someone could say "A UFAI put a basilisk in the source code and used it to kill all of humanity; you lose."

That isn't entirely without precedent; take this as an example: http://boingboing.net/2012/07/10/dropped-infected-usb-in-the-co.html Sometimes malicious code really is left physically lying around, waiting for someone to pop it into a computer out of curiosity.