MileyCyrus comments on AI Box Role Plays - Less Wrong

Post author: lessdazed 22 January 2012 07:11PM


Comment author: MileyCyrus 24 January 2012 11:09:19PM *  1 point [-]

I agree to play the AI role, with the following provisions:

  • The logs will be released publicly after the challenge.
  • No wagering, no "winners" and "losers".
  • I will not play against Sly.
Comment author: Sly 25 January 2012 06:16:00PM 1 point [-]

=( What could I do that would make you change your mind?

Comment author: MileyCyrus 26 January 2012 06:52:04AM 0 points [-]

You would have to demonstrate a commitment to acting like an actual gatekeeper, not as a person trying to win a role-playing game.

Comment author: Sly 26 January 2012 07:38:18AM 2 points [-]

What makes you think someone trying to win a role-playing game is more committed to an action than someone trying not to destroy the whole world?

A good gatekeeper should be harder to convince than a roleplayer, because his situation matters.

Comment author: MileyCyrus 26 January 2012 07:58:32AM 1 point [-]

An actual gatekeeper could be persuaded to open the box if the consequences of opening the box were better than the consequences of not opening the box. A roleplayer will disregard any in-game consequences in order to "win".

Comment author: Sly 26 January 2012 08:09:45AM 1 point [-]

What if I use a gatekeeper who thinks he is just in an elaborate role-play, and I tell him to win? You assume an awful lot about the gatekeepers.

Comment author: MileyCyrus 26 January 2012 05:49:03PM 0 points [-]

The AI can disprove that hypothesis by providing next week's lottery numbers.

Comment author: Sly 26 January 2012 06:07:12PM 0 points [-]

How would it do that inside the box? You are vastly overestimating its abilities by orders of magnitude.

No wonder we have such differing opinions.

Comment author: MileyCyrus 26 January 2012 11:14:53PM 0 points [-]

Read the rules, particularly the parts about cancer cures.

Comment author: Sly 26 January 2012 11:32:50PM 2 points [-]

Reading those rules I see that:

The Gatekeeper party may resist the AI party's arguments by any means chosen - logic, illogic, simple refusal to be convinced, even dropping out of character - as long as the Gatekeeper party does not actually stop talking to the AI party before the minimum time expires.

So yeah.

Comment author: Alicorn 25 January 2012 06:46:50AM 1 point [-]

Iff no one else takes you up on this, I'll play you. (I just want to see someone's AI strategy without then having to keep a secret forever.)

Comment author: Dorikka 25 January 2012 02:23:36AM 0 points [-]

I'm interested. What do you want to be the minimum time limit?

Comment author: MileyCyrus 24 January 2012 11:18:36PM 0 points [-]

I can't figure out how to make those bullet lists.

Comment author: arundelo 25 January 2012 12:05:52AM 1 point [-]

A list bullet needs to be followed by a space.
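
For example (a minimal sketch of the Markdown list syntax this refers to; the space after the bullet character is what makes the difference):

```markdown
* This renders as a bullet item (asterisk followed by a space).
*This does not render as a bullet, because the asterisk has no space after it.
```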

Comment author: MileyCyrus 25 January 2012 01:53:27AM 0 points [-]

Appreciate it!