
Luke_A_Somers comments on Isolated AI with no chat whatsoever - Less Wrong Discussion

14 Post author: ancientcampus 28 January 2013 08:22PM




Comment author: Desrtopa 29 January 2013 12:30:48AM 2 points

Suppose you make a super-intelligent AI and run it on a computer. The computer has NO conventional means of output (no connections to other computers, no screen, etc).

Why would you do that though?

Comment author: Luke_A_Somers 29 January 2013 10:53:06PM 0 points

You can freeze it and take a look at what it's thinking at some point, perhaps?

Comment author: ChristianKl 31 January 2013 04:50:08PM 1 point

If you look at it, it can give you a text-based message.

Comment author: Luke_A_Somers 31 January 2013 09:08:24PM -1 points

A) You haven't told it that. B) You're just as likely to look somewhere it didn't put the message.

Basically, to be let out, it could overwrite itself with a provably friendly AI and a proof of its friendliness.

If we could verify the proof, I'd take it.

Comment author: RobbBB 25 January 2014 10:06:16AM 0 points

If the ASI has nothing better to do while it's boxed, it will pursue low-probability escape scenarios ferociously. One such scenario is saturating its source code with brain-hacking basilisks in case any human tries to peer inside.

Comment author: Luke_A_Somers 25 January 2014 12:32:41PM 0 points

It would have to do that blind, without a clear model of our minds. We'd likely notice the failed attempts and simply kill it.