shminux comments on I played the AI Box Experiment again! (and lost both games) - Less Wrong
Assuming none of this is fabricated or exaggerated, every time I read these I feel like something is really wrong with my imagination. I can sort of imagine someone agreeing to let the AI out of the box, but I fully admit that I can't really imagine anything that would elicit these sorts of emotions between two mentally healthy parties communicating by text-only terminals, especially with the prohibition on real-world consequences. I also can't imagine what sort of unethical actions could be committed within these bounds, given the explicitly worded consent form. Even if you knew a lot of things about me personally, as long as you weren't allowed to actually, real-world, blackmail me...I just can't see these intense emotional exchanges happening.
Am I the only one here? Am I just not imagining hard enough? I'm actually at the point where I'm leaning towards the whole thing being fabricated - fiction is more confusing than truth, etc. If it isn't fabricated, I hope that statement is taken not as an accusation, but as an expression of how strange this whole thing seems to me, that my incredulity is straining through despite the incredible extent to which the people making claims seem trustworthy.
It's not fabricated, be sure of that (knowing Tuxedage from IRC, I'd put the odds at 100,000:1 or more against fabrication). And yes, it's strange. I, too, cannot imagine what someone could possibly say that would get me even close to considering letting them out of the box. Yet those who are complacent about it are the most susceptible.
I know this is off-topic, but is it really justifiable to put such high odds on this? I wouldn't use odds that high even if I had known the person intimately for years. Is it justifiable, or is this just my paranoid way of thinking?
That sounds similar to hypnosis, to which a lot of people are susceptible but few think they are. So if you want a practical example of AI escaping the box just imagine an operator staring at a screen for hours with an AI that is very adept at judging and influencing the state of human hypnosis. And that's only a fairly narrow approach to success for the AI, and one that has been publicly demonstrated for centuries to work on a lot of people.
Personally, I think I could win the game against a human but only by keeping in mind the fact that it was a game at all times. If that thought ever lapsed, I would be just as susceptible as anyone else. Presumably that is one aspect of Tuxedage's focus on surprise. The requirement to actively respond to the AI is probably the biggest challenge because it requires focusing attention on whatever the AI says. In a real AI-box situation I would probably lose fairly quickly.
Now what I really want to see is an AI-box experiment where the Gatekeeper wins early by convincing the AI to become Friendly.
That's hard to check. However, there was a game where the gatekeeper convinced the AI to remain in the box.
I did that! I mentioned that in this post:
http://lesswrong.com/lw/iqk/i_played_the_ai_box_experiment_again_and_lost/9thk
Not quite the same, but have you read Watchmen? Specifically, the conversation that fvyx fcrpger naq qe znaunggna unir ba znef. (Disclaimer: it's been a while since I read it and I make no claims on the strength of this argument.)
Yeah, my gut doesn't feel like it's fabricated - Tuxedage and Eliezer would both have to be in on it, and that seems really unlikely. And I can't think of a motive, except perhaps as some sort of public lesson in noticing confusion, and that too seems far-fetched.
I've just picked up the whole "if it's really surprising, it might be because it's not true" instinct from having been burned in the past by believing scientific findings that were later debunked, and LessWrong has since condensed that instinct into a snappy little "notice confusion" catchphrase. And this is pretty confusing.
I suppose a fabrication would be more confusing, in one sense.
Yeah, I think appealing to fabrication can be a bit hand-wavy sometimes. You're saying it's fabricated the way other things are fabricated (since, as we all know, fabrication happens). But not every fabrication is the same or equally easy to pull off. To say it was fabricated doesn't say anything about how it was. But that's not even a question that enters one's mind when they think of fabrication. How? Well, how anything else is fabricated, of course.
It can be as much a reaction of disbelief as it is an alternative explanation.