
Luke_A_Somers comments on Humans get different counterfactuals - Less Wrong Discussion

Post author: Stuart_Armstrong 23 March 2015 02:54PM




Comment author: Luke_A_Somers 23 March 2015 03:41:05PM (4 points)

I am not at all sure why the humans wouldn't just turn the AI on again anyway if it were only 99% probable.

Anyway, this reminds me of an oracle system I devised for a fantasy story I never got around to writing: the Oracle doesn't always respond, and if it does respond, it tells you what would have happened if it hadn't responded. One of the rules I quickly had to make for the Oracle was that if it said nothing, you didn't get to ask again.

I thought (at the time, some while ago) that the Oracle, seeking to be most helpful, would soon converge on answering only around 2/3 to 4/5 of the time, so that people wouldn't go and do stupid things in response to the extreme upset of not getting an answer.

Comment author: Stuart_Armstrong 23 March 2015 04:05:47PM (1 point)

I am not at all sure why the humans wouldn't just turn the AI on again anyway if it were only 99% probable.

That's a human institution problem, which seems more solvable (at least, we shouldn't run the AI unless it's solved).