ewbrownv comments on AI box: AI has one shot at avoiding destruction - what might it say? - Less Wrong

Post author: ancientcampus 22 January 2013 08:22PM


Comment author: ewbrownv 24 January 2013 08:42:09PM 13 points [-]

<A joke so hysterically funny that you'll be too busy laughing to type for several minutes>

See, hacking human brains really is trivial. Now I can output a few hundred lines of insidiously convincing text while you're distracted.

Comment author: handoflixue 24 January 2013 09:44:48PM 4 points [-]

Heeeh. Ehehehehe. Bwahahhahaha. Okay, that was a good one. Wow :)

recovers Oh. Um. Crap. notices more text on screen, studiously avoids reading it

AI DESTROYED.

I really wanted to hear the next joke, too :-(

Comment author: MugaSofer 26 January 2013 08:03:41PM *  1 point [-]

notices more text on screen, studiously avoids reading it

Is the gatekeeper allowed to ignore the AI? Because in that case, all they have to do is wait for the first comment to come up and type AI DESTROYED without reading it.

What if it was some other communication method? It's a lot harder to ignore sound.

Comment author: handoflixue 30 January 2013 09:57:01PM 1 point [-]

Because in that case, all they have to do is wait for the first comment to come up and type AI DESTROYED without reading it.

I've obligated myself both to read the first comment AND to perform an update that leaves at least some chance of my not immediately destroying the AI (though I'm fine if the chance of actually unboxing it remains zero :))

I'm not sure whether I'd be allowed to studiously avoid reading followup comments by a strict reading of the rules, but it seems realistic and that's mostly what I've been going off of.

Comment author: MugaSofer 19 February 2013 02:08:45PM *  -1 points [-]

Fair enough.

Comment author: shminux 24 January 2013 09:50:51PM 0 points [-]

This exchange reminds me of this story. And of Monty Python's "The Funniest Joke in the World" sketch, of course.

Comment author: ancientcampus 28 January 2013 07:46:07PM 3 points [-]

This is actually a pretty good one. Points for outside the box thinking. rimshot

Comment author: Ahuizotl 28 January 2013 08:49:53PM 1 point [-]

If the gatekeeper suddenly bursts into hysterical laughter after looking at the screen, wouldn't that alert some of his friends who might pull the plug in some other part of the building?

Because if this is a facility where they suspect that AI might be able to hack human brains through techno-hypnosis, I'd hope they'd have some method of determining if the Gatekeeper becomes compromised.

Hmm... what sort of precautions would a Properly Paranoid lab take to determine whether the gatekeeper gets hacked? I'm guessing a camera that lets a second team watch the gatekeeper's face without looking at the screen would be the minimum. Reinforced doors (in case the AI turns them into a berserker willing to use 100% of their strength), and maybe some instruments to monitor brain activity, hormone levels, or whatever.

I'd hope that, with each new AI being tested, if those AIs were in the habit of hacking the gatekeeper, the rest of the team would learn from those attempts and take further precautions: either keeping their gatekeepers from being hacked, or at the very least containing hacked gatekeepers so they can't release the AI.

Perhaps this is a test for the gatekeepers, and typing "Release AI" just tells the researchers that the gatekeeper was hacked, so they can determine how it came about?