ewang comments on Isolated AI with no chat whatsoever - Less Wrong

14 Post author: ancientcampus 28 January 2013 08:22PM


Comment author: Desrtopa 29 January 2013 02:42:03AM *  0 points [-]

Yes, but why run it on a computer at all? It doesn't seem likely to do you any good that way.

Comment author: ewang 29 January 2013 04:24:13AM *  6 points [-]

It is a hypothetical situation of unreasonably high security that tries to probe for an upper bound on the level of containment required to secure an AI.

Comment author: Desrtopa 29 January 2013 04:33:48AM -1 points [-]

I would think that the sorts of hypotheticals that would be most useful to entertain would be ones that explore the safety of the most secure systems anyone would have an actual incentive to implement.

Could you contain a Strong AI running on a computer with no output systems, sealed in a lead box at the bottom of the ocean? Presumably yes, but in that case, you might as well skip the step of actually making the AI.

Comment author: vi21maobk9vp 29 January 2013 06:38:27PM 3 points [-]

You say "presumably yes". The whole point of this discussion is to listen to everyone who will say "obviously no"; their arguments would automatically apply to all weaker boxing techniques.

Comment author: Desrtopa 29 January 2013 06:46:56PM *  0 points [-]

All the suggestions so far that might allow an AI without conventional outputs to get out would be overcome by the lead box+ocean defenses. I don't think that containing a strong AI is likely to be that difficult a problem. The really difficult problem is containing a strong AI while getting anything useful out of it.

Comment author: vi21maobk9vp 30 January 2013 05:09:15AM -2 points [-]

If we are not inventive enough to find a menace not obviously shielded by lead+ocean, then more complex tasks, like actually designing a FOOM-able AI, are beyond us anyway…

Comment author: ikrase 30 January 2013 09:20:14AM 1 point [-]

I... don't believe that.

I think that making a FOOM-able AI is much easier than making an AI that can break out of a (considerably stronger) lead box in solar orbit.

Comment author: vi21maobk9vp 30 January 2013 08:12:50PM 0 points [-]

And you are completely right.

I meant that designing a working FOOM-able AI (or a non-FOOMable AGI, for that matter) is vastly harder than finding a few hypothetical high-risk scenarios.

I.e. walking the walk is harder than talking the talk.