Viliam_Bur comments on Thoughts on the Singularity Institute (SI) - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
So if the question is about the future (such as "will it rain tomorrow?"), does it essentially mean that the tool will model the counterfactual alternative future that would happen if it did not provide any answer?
This would be OK for situations where the AI's answer does not make a big difference (such as "will it rain tomorrow?").
It would be less OK for situations where mere knowledge of what the AI said would influence the result, such as asking the AI about important social or political topics, where the answer is likely to be published. (In those situations the question under consideration would be mixed up with specific events of the counterfactual world, such as a worldwide panic: "our superhuman AI seems to be broken, we are all doomed!")
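The feedback problem above can be sketched as a toy calculation. Everything here (function names, probabilities, the linear feedback) is an illustrative assumption of mine, not anything from the comment; it just shows how a counterfactual prediction stays accurate for a feedback-free question but drifts from reality once the published answer itself changes the outcome:

```python
# Toy model of a "tool AI" whose published answer may feed back into
# the event it is predicting. All numbers are illustrative assumptions.

def rain_probability(published_p):
    # Weather ignores what the AI says: a feedback-free question.
    return 0.3

def panic_probability(published_p):
    # A social outcome that reacts to the published forecast:
    # the gloomier the AI's answer, the more people panic.
    return 0.1 + 0.8 * published_p

def counterfactual_prediction(outcome_fn):
    # The tool models the world *as if it stayed silent*: evaluate the
    # outcome with no published prediction (0.0 stands in for "no answer").
    return outcome_fn(0.0)

def realized_outcome(outcome_fn, published_p):
    # What actually happens once the answer is published.
    return outcome_fn(published_p)

p_rain = counterfactual_prediction(rain_probability)
rain_drift = realized_outcome(rain_probability, p_rain) - p_rain

p_panic = counterfactual_prediction(panic_probability)
panic_drift = realized_outcome(panic_probability, p_panic) - p_panic

print(rain_drift)   # no drift: the counterfactual answer stays correct
print(panic_drift)  # nonzero drift: publishing the answer changed the event
```

In this sketch the rain prediction remains accurate after publication, while the panic prediction is already wrong the moment it is published, which is exactly the "mixed with the counterfactual world" worry.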
I think that you're describing a real hurdle, though it seems like a hurdle that could be overcome.