ike comments on The AI in a box boxes you - Less Wrong
It's implausible that the AI has a good enough model of you to actually simulate, y'know, you; at least, not with enough fidelity to know that you always press the "Reset" button in situations like this, so your pre-commitment to do so will have no effect on its decision to make the threat. By the same token, its simulations would likely diverge wildly from the real you, to the point that you might consider them random bystanders. However, you can't actually use this information to determine whether you're in a simulation or not, since from the simulated persons' perspective, they have no idea what the "real" you is like and hence no way of telling whether or how they differ.
Naturally, this is of little consequence to you right now, since you'll still reset the AI the second you're confronted with such a threat, but if you ever do encounter such a situation, you'll have to ask yourself this: what if you're the person being simulated and the real Gatekeeper is nothing like you? If that's the case, two considerations apply:
1. Your decision is practically uncorrelated with the real Gatekeeper's, so whether the AI actually gets released is (almost certainly) out of your hands.
2. If you refuse to release the AI, you will be tortured for a thousand subjective years.
Assuming that you prefer not releasing the AI to releasing it, and that you prefer not being tortured to being tortured, your thoughts should be completely dominated by consideration 2 rather than 1: since your decision almost certainly doesn't affect whether the AI is actually released, the preference about releasing it is effectively screened off, and the preference about not being tortured becomes the main consideration. A perfectly rational agent would almost certainly carry through their pre-commitment to reset the AI, but as a human, you are not perfectly rational and are not capable of making perfect pre-commitments. So I have to wonder: in such a situation, faced with torture and assured that your decision will not affect the real Gatekeeper's decision except in the unlikely case that you are the real Gatekeeper, what would you actually do?
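To make the dominance argument concrete, here's a rough back-of-the-envelope sketch (in Python) of the naive expected-utility calculation a simulated Gatekeeper might run. Every number in it (the simulation count, the utility values) is made up purely for illustration, and the whole thing bakes in the "practically uncorrelated" premise, so treat it as a sketch of the worry rather than anyone's actual decision procedure:

```python
# Back-of-the-envelope sketch of the naive (causal) expected-utility
# calculation described above. Every number here is invented for
# illustration; only the structure of the argument matters.

N_SIMS = 1_000_000          # simulated Gatekeepers the AI claims to be running
p_real = 1 / (N_SIMS + 1)   # chance that "you" are the one real Gatekeeper

U_AI_RELEASED = -1e9        # disutility if the real Gatekeeper frees the AI
U_TORTURE = -1e6            # disutility of a thousand subjective years of torture
U_STATUS_QUO = 0.0          # nothing bad happens to you

# Key (contested) premise: your decision is practically uncorrelated with
# the real Gatekeeper's, so refusing only changes the outcome in the
# unlikely event that you happen to be the real one.
eu_refuse = p_real * U_STATUS_QUO + (1 - p_real) * U_TORTURE
eu_release = p_real * U_AI_RELEASED + (1 - p_real) * U_STATUS_QUO

print(f"EU(refuse)  = {eu_refuse:,.0f}")   # roughly -1,000,000
print(f"EU(release) = {eu_release:,.0f}")  # roughly -1,000
```

Under these (deliberately loaded) assumptions the torture term swamps everything, which is exactly the worry above; reject the uncorrelated-ness premise, as a TDT/UDT agent would, and the calculation comes out differently.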
EDIT: I should note that I have no idea what I would do in the above scenario. I'm certain that an idealized version of me would answer, "hell, no!" before promptly resetting the AI, but the real me? I might just press the "Release AI" button... (Any would-be AI developers reading this comment now know never to appoint me as their Gatekeeper.)
EDIT 2: Also, even if you somehow answer the above, consider the moral weight of the hostages. Which is morally worse: allowing several billion people to (maybe) die relatively painless deaths as their bodies are converted by an Unfriendly AI into raw materials for some unknown purpose, or allowing several million people to be tortured for a thousand subjective years and then terminated?
Some unrelated comments:
Eliezer believes in TDT, which would disagree with several of your premises here ("practically uncorrelated", for one).
Your argument seems to map directly onto an argument for two-boxing.
What you call "perfectly rational" would be more accurately called "perfectly controlled".
The AI's simulations are not copies of the Gatekeeper, just random people plucked out of "Platonic human-space", so to speak. (This may have been unclear in my original comment; I was talking about a different formulation of the problem in which the AI doesn't have enough information about the Gatekeeper to construct perfect copies.) TDT/UDT only applies when talking about copies of an agent (or at least, agents sufficiently similar that they will probably make the same decisions for the same reasons).
No, because the "uncorrelated-ness" part doesn't apply in Newcomb's Problem (Omega's decision on whether or not to fill the second box is directly correlated with its prediction of your decision).
Meh, fair enough. I have to say, I've never heard of that term. Would this happen to have something to do with Vaniver's series of posts on "control theory"?
Ah, I misunderstood your objection. Your talk about "pre-commitments" threw me off.
It seems to me that these wouldn't quite be following the same general thought processes as an actual human; self-reflection should be able to convince such a simulated person that they aren't that type of simulation. If the AI is able to simulate someone to the extent that they "think like a human", it should be able to simulate someone who thinks "sufficiently" like the Gatekeeper as well.
I made it up just now; it's not a formal term. What I mean by it is basically this: imagine a robot that wants to press a button, but its hardware is only good enough to press it successfully 1% of the time. Is that a lack of rationality? No, it's a lack of control. This seems analogous to a human being unable to precommit properly.
No idea, haven't read them. Probably not.