DaFranker comments on I attempted the AI Box Experiment (and lost) - Less Wrong

47 Post author: Tuxedage 21 January 2013 02:59AM




Comment author: MugaSofer 23 January 2013 02:37:01PM -2 points

long-term emotional damage

Are you worried he'd be hacked back? Or just discover he's not as smart as he thinks he is?

Comment author: DaFranker 23 January 2013 02:51:14PM 4 points

I mostly think the vast majority of possible successful strategies involve lots of dark arts and massive mental effort, and I expect the backlash from failure to be proportional to the effort in question.

I find it extremely unlikely that Eliezer is smart enough to win a non-trivial percentage of the time using only safe, fuzzy, non-dark-arts methods. And repeatedly using nasty, unethical mind tricks to get people to do what you want, as I figure would be required here, is something human brains have an uncanny ability to turn into a compulsive, self-denying habit.

Basically, if my estimates are right, the whole exercise would most probably compromise the participant's mental heuristics and their ability to reason correctly about AI - or at least drag them in pretty much the opposite direction from the one the SIAI seems to be pushing for.