ciphergoth comments on Thoughts on the Singularity Institute (SI) - Less Wrong

256 Post author: HoldenKarnofsky 11 May 2012 04:31AM




Comment author: ciphergoth 12 May 2012 11:00:43AM 1 point

Wow, OK. Is it possible to rig the decision theory to rule out acausal trade?

Comment author: Will_Newsome 12 May 2012 11:28:55PM 1 point

IIRC you can make it significantly more difficult with certain approaches; e.g., there's an Oracle AI (OAI) approach that uses zero-knowledge proofs, and it seemed pretty sound on first inspection. But as far as I know, the current best answer is no. You might want to try to answer the question yourself, though; IMO it's fun to think about from a cryptographic perspective.
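The comment invokes zero-knowledge proofs without elaborating, and it doesn't specify which OAI proposal is meant. For readers unfamiliar with the primitive, here is a standard Schnorr-style identification sketch illustrating the core zero-knowledge idea: the prover convinces the verifier it knows a secret exponent x without revealing it. This is a generic textbook protocol, not the approach the comment refers to, and the group parameters are toy values chosen only for readability (real deployments use ~256-bit prime-order groups).

```python
import secrets

# Toy group parameters (hypothetical and insecure; for illustration only).
p = 467          # prime modulus, with p = 2q + 1
q = 233          # prime order of the subgroup generated by g
g = 4            # generator of the order-q subgroup of Z_p*

def keygen():
    """Prover's secret x and public key y = g^x mod p."""
    x = secrets.randbelow(q - 1) + 1
    return x, pow(g, x, p)

def prove_commit():
    """Step 1: prover picks a random nonce r and sends t = g^r mod p."""
    r = secrets.randbelow(q)
    return r, pow(g, r, p)          # r stays secret; t is sent

def prove_respond(x, r, c):
    """Step 3: prover answers challenge c without revealing x."""
    return (r + c * x) % q

def verify(y, t, c, s):
    """Verifier accepts iff g^s == t * y^c (mod p)."""
    return pow(g, s, p) == (t * pow(y, c, p)) % p

x, y = keygen()
r, t = prove_commit()
c = secrets.randbelow(q)            # step 2: verifier's random challenge
s = prove_respond(x, r, c)
assert verify(y, t, c, s)                 # honest prover is accepted
assert not verify(y, t, c, (s + 1) % q)   # a tampered response is rejected
```

The transcript (t, c, s) reveals nothing about x beyond the truth of the statement, because a verifier who knows c in advance can simulate valid-looking transcripts without x.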

Comment author: Vladimir_Nesov 13 May 2012 12:03:57AM 0 points

Probably (in practice; in theory it looks like a natural aspect of decision-making), though this is too poorly understood to say what specifically would be necessary. I expect that if we could safely run experiments, it would be relatively easy to find a well-behaved setup, in the sense of not generating predictions that are self-fulfilling to any significant extent (generating good/useful predictions is another matter). But that strategy isn't helpful when a failed experiment destroys the world.