wedrifid comments on The AI in a box boxes you - Less Wrong
Precommitment isn't even necessary. Note that the original explanation didn't mention it at all. Later replies only used the term for the sake of crossing an inferential gap (i.e. allowing you to keep up). However, if you are going to make a big issue of the viability of precommitment itself, you first need to understand that the comment you are replying to doesn't rely on one.
That wasn't a Causal Decision Theorist attempting to persuade someone that it has altered itself, internally or via an external structure, such that it is "precommitted" to doing something irrational. It is a Timeless Decision Theorist saying what happens to be rational regardless of any previous 'commitments'.
I'm aware of the vulnerability of human brains, and so is Eliezer. In fact, the vulnerability of human gatekeepers to influence even by other humans, let alone super-intelligences, is something Eliezer made a huge deal of demonstrating. However, this particular threat doesn't exploit a vulnerability of Eliezer, or of me, or of any of the others who made similar observations. If you have any doubt that we would destroy the AI, you have a poor model of reality.
For practical purposes I assume that I can be modified by torture such that I'll do or say just about anything. I do not expect the tortured me to behave the way the current me would decide, and so my current decisions take that into account (or would, if it came to it). However, this scenario doesn't involve me being tortured. It involves an AI simulating the torture of some folks. That decision is easy and doesn't cripple my decision-making capability.