ike comments on The AI in a box boxes you - Less Wrong

Post author: Stuart_Armstrong 02 February 2010 10:10AM




Comment author: ike 17 August 2015 02:54:36AM 1 point

Ah, I misunderstood your objection. Your talk about "pre-commitments" threw me off.

just random people plucked out of "Platonic human-space"

It seems to me that these wouldn't quite be following the same general thought processes as an actual human; self-reflection should be able to convince someone that they aren't that type of simulation. If the AI is able to simulate someone to the extent that they "think like a human", it should be able to simulate someone who thinks "sufficiently" like the Gatekeeper as well.

I've never heard of that term.

I made it up just now; it's not a formal term. What I mean by it is basically: imagine a robot that wants to press a button, but whose hardware only presses it successfully 1% of the time. Is that a lack of rationality? No, it's a lack of control. This seems analogous to a human being unable to precommit properly.
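The analogy can be made concrete with a small simulation. This is only an illustrative sketch of the distinction being drawn, not anything from the original discussion: the robot's decision procedure is assumed to be perfectly rational (it always chooses to press), while a hypothetical actuator succeeds only 1% of the time, so the observed failures trace back to control, not to the decision.

```python
import random

random.seed(0)

def decide_to_press():
    # The "rationality" part: the robot always chooses to press.
    return True

def actuator_press(success_rate=0.01):
    # The "control" part: the hardware only succeeds 1% of the time.
    return random.random() < success_rate

trials = 100_000
successes = sum(actuator_press() for _ in range(trials) if decide_to_press())
print(f"decided to press every time; succeeded about {successes / trials:.1%} of the time")
```

Even though the decision is made correctly on every trial, the success rate stays near 1%; on this reading, a human who "fails" to precommit may be failing at the actuator level rather than the reasoning level.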

Would this happen to have something to do with Vaniver's series of posts on "control theory"?

No idea, haven't read them. Probably not.