dlthomas comments on So You Want to Save the World - Less Wrong

41 Post author: lukeprog 01 January 2012 07:39AM


Comment author: lessdazed 28 December 2011 02:31:51AM 12 points

there may be a way to constrain a superhuman AI such that it is useful but not dangerous... Can a superhuman AI be safely confined, and can humans manage to safely confine all superhuman AIs that are created?

Does anyone think that no AI of uncertain Friendliness could convince them to let it out of its box?

I'm looking for a Gatekeeper.

Why doesn't craigslist have a section for this in the personals? "AI seeking human for bondage roleplay." Seems like it would be a popular category...

Comment author: Normal_Anomaly 30 December 2011 04:01:05AM 2 points

You're looking to play AI in an AI-box experiment? I've wanted to play Gatekeeper for a while now. I don't know if I'll be able to offer money, but I would be willing to bet a fair amount of karma.

Comment author: dlthomas 30 December 2011 04:09:45AM 6 points

Maybe bet with predictionbook predictions?

Comment author: Jayson_Virissimo 02 January 2012 12:57:00PM 2 points

Maybe bet with predictionbook predictions?

You, sir, are a man (?) after my own heart.

Comment author: Normal_Anomaly 30 December 2011 01:43:02PM 1 point

Sounds good. I don't have a PredictionBook account yet, but IIRC it's free.