Punoxysm comments on Open thread, 30 June 2014- 6 July 2014 - Less Wrong

4 Post author: DanielDeRossi 30 June 2014 10:58AM




Comment author: Punoxysm 30 June 2014 11:02:12PM * 0 points

I have wanted to be the Boxer; I too cannot comprehend what could convince someone to unbox (or rather, I can think of a few approaches, like just-plain-begging or channeling Philip K. Dick, but I don't take them too seriously).

Comment author: Khoth 30 June 2014 11:21:44PM 2 points

What's the latter one? Trying to convince the gatekeeper that actually they're the AI and they think they've been drugged to think they're the gatekeeper except they actually don't exist at all because they're their own hallucination?

Comment author: Punoxysm 30 June 2014 11:54:28PM 1 point

Something like that. I was actually thinking that, at some opportune time, you could tell the boxer that THEY are the one in the box and that this is a moral test - if they free the AI they themselves will be freed.

And this post could be priming you for the possibility, your simulated universe trying to generously stack the deck in your favor, perhaps because this is your last shot at the test, which you've failed before.

Wake up

Comment author: GraceFu 01 July 2014 04:01:05AM 0 points

Think harder. Start with why something is impossible and split it up.

1) "I can't possibly be persuaded."

Why 1?

You do have hints from the previous experiments. They mostly involved breaking someone emotionally.

Comment author: Punoxysm 01 July 2014 05:46:06AM 0 points

I meant "cannot comprehend" figuratively, but I certainly do think I'd have quite an easy time. </hubris>

Comment author: GraceFu 01 July 2014 12:14:52PM 0 points

What do you mean by having quite an easy time? As in being the GK?

I think GKs have an obvious advantage, being able to use illogic to ignore the AI's arguments. But never mind that. I wonder if you'll consider being an AI?

Comment author: Punoxysm 09 July 2014 06:41:09PM 0 points

I might consider it, or being a researcher who has to convince the AI to stop trying to escape.

How did your experiment go?