If you have to speak "carefully enough", then you're taking a big risk; you may luck out and get what you want, but they're not safe.
If your argument is that unless a powerful being is extremely safe, it's not extremely safe, then this is true by definition. Obviously, if a genie sometimes doesn't give you what you want, there is some risk that the genie won't give you what you want. I thought a more substantial argument was being made, though - it sounded like EY was claiming that saying "I wish for whatever I should wish for" i...
Examples of what? Of hypothetical intelligent minds? I feel like there are examples all over fiction; consider genies themselves, which often grant wishes in a dangerous way (but you can sometimes get around it by speaking carefully enough). Again, I agree that some genies are never safe and some are always safe, but it's easy to imagine a genie which is safe if and only if you specify your wish carefully.
Anyway, do you concede the point that EY's article contains no arguments?
I'm making 2 points:

1. His metaphor completely fails conceptually, because I'm perfectly capable of imagining genies that fall outside the three categories.

2. Perhaps the classification works in some other setting, such as AIs. However, the article never provided any arguments for this (or any arguments at all, really). Instead, there was one single example (seriously, just one example!) which was then extrapolated to all genies.
At age 5 you could safely wish for "I wish for you to do what I should wish for" and at worst you'd be a little disappointed if what she came up with wasn't as fun as you'd have liked.
I would have gotten the wrong flavor of ice cream. It was strictly better to specify the flavor of ice cream I preferred. Therefore, the statement about the 3 types of genies is simply false. It might be approximately true in some sense, but even if it is, the article never gives any arguments in favor of that thesis, it simply gives one example.
That sounds pretty similar to a Deist's God, which created the universe but does not interfere thereafter. Personally, I'd just shave it off with Occam's razor.
Also, it seems a little absurd to try to infer things about our simulators, even supposing they exist. After all, their universe can be almost arbitrarily different from ours.
Does the simulation hypothesis have any predictive power? If so, what does it predict? Is there any way to falsify it?
Oh, yes, me too. I want to engage in one-shot PD games with entirelyuseless (as opposed to other people), because he or she will give me free utility if I sell myself right. I wouldn't want to play one-shot PDs against myself, in the same way that I wouldn't want to play chess against Kasparov.
By the way, note that I usually cooperate in repeated PD games, and most real-life PDs are repeated games. In addition, my utility function takes other people into consideration; I would not screw people over for small personal gains, because I care about their happiness. In other words, defecting in one-shot PDs is entirely consistent with being a decent human being.
Cool, so in conclusion, if we met in real life and played a one-shot PD, you'd (probably) cooperate and I'd defect. My strategy seems superior.
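For concreteness, here is a minimal sketch of the payoff comparison I have in mind, using the conventional illustrative numbers (T=5, R=3, P=1, S=0); these values aren't from the discussion itself, and nothing hinges on them beyond the ordering T > R > P > S.

```python
# A minimal sketch of the one-shot prisoner's dilemma payoffs under discussion.
# The numbers (T=5, R=3, P=1, S=0) are conventional illustrative values; the
# argument only needs the ordering T > R > P > S.

PAYOFFS = {  # (my move, your move) -> my payoff
    ("D", "C"): 5,  # temptation: I defect against a cooperator
    ("C", "C"): 3,  # reward: mutual cooperation
    ("D", "D"): 1,  # punishment: mutual defection
    ("C", "D"): 0,  # sucker: I cooperate against a defector
}

def my_payoff(me: str, you: str) -> int:
    return PAYOFFS[(me, you)]

# If you cooperate regardless of who you are facing, then in this single
# interaction my defection strictly beats my cooperation:
assert my_payoff("D", "C") > my_payoff("C", "C")  # 5 > 3
```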
I never liked that article. It says "there are three types of genies", and then, rather than attempting to prove the claim or argue for it, it just provides an example of a genie for which no wish is safe. I mean, fine, I'm convinced that specific genie sucks. But there may well be other genies that don't know what you want but have the ability to give it to you if you ask (when I was 5 years old, my mom was such a genie).
But since you're making it clear that your code is quite different, and in a particular way, I would defect against you.
You don't know who I am! I'm anonymous! Whoever you'd cooperate with, I might be that person (remember, in real life I pretend to have a completely different philosophy on this matter). Unless you defect against ALL HUMANS, you risk cooperating when facing me, since you don't know what my disguise will be.
You can see which side of the room you are on, so you know which one you are.
If I can do this, then my clone and I can do different things. In that case, I can't be guaranteed that if I cooperate, my clone will too (because my decision might have depended on which side of the room I'm on). But I agree that the cloning situation is strange, and that I might cooperate if I'm actually faced with it (though I'm quite sure that I never will).
...People don't actually have the same code, but they have similar code. If the code in some case is similar enough tha...
Yes. The universe is deterministic. Your actions are completely predictable, in principle. That's not unique to this thought experiment. That's true for everything you do. You still have to make a choice. Cooperate or defect?
Um, what? First of all, the universe is not deterministic - quantum mechanics means there's inherent randomness. Secondly, as far as we know, it's consistent with the laws of physics that my actions are fundamentally unpredictable - see here.
Third, if I'm playing against a clone of myself, I don't think it's even a valid PD. Can th...
Well, there is no causal influence. Your opponent is deterministic. His choice may have already been made, and nothing you do will change it. And yet the best decision is still to cooperate.
If his choice is already made and nothing I do will change it, then by definition my choice is already made and nothing I do will change it. That's why my "decision" in this setting is not even well-defined - I don't really have free will if external agents already know what I will do.
The most obvious example of cooperating due to acausal dependence is making two atom-by-atom-identical copies of an agent and putting them in a one-shot prisoner's dilemma against each other. But two agents whose decision-making is 90% similar instead of 100% identical can cooperate on those grounds too, provided the utility of mutual cooperation is sufficiently large.
I'm not sure what "90% similar" means. Either I'm capable of making decisions independently from my opponent, or else I'm not. In real life, I am capable of doing so. The clone situa...
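To make the quoted claim concrete, here is a minimal sketch of the identical-copy case, assuming both copies literally execute the same decision procedure on the same inputs (payoff numbers are the same illustrative ones as in the earlier sketch):

```python
# A sketch of the atom-by-atom-identical-clone case discussed above. Both
# players run literally the same decision procedure on the same inputs, so
# only the symmetric outcomes (C,C) and (D,D) are reachable, and (C,C) pays
# more per player. The payoff values are illustrative.

R, P = 3, 1  # reward for mutual cooperation, punishment for mutual defection

def outcome_if_both_run(decision) -> int:
    """Payoff each copy gets when both copies execute the same `decision`."""
    my_move = decision()
    clone_move = decision()  # the identical copy computes the same answer
    assert my_move == clone_move  # same code, same inputs => same output
    return R if my_move == "C" else P

assert outcome_if_both_run(lambda: "C") > outcome_if_both_run(lambda: "D")

# The disputed case is when the two programs are merely similar ("90%"), or
# when one player can condition on something the other cannot (e.g. which
# side of the room they are on); then the outputs can differ and the
# symmetry argument above no longer pins down the outcome.
```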
I don't know enough about this to tell if (2) had more influence than (3) initially. I'm glad you agree that (2) had some influence, at least. That was the main part of my point.
How long did discussion of the Basilisk stay banned? Wasn't it many years? How do you explain that, unless the influence of (2) was significant?
Just FYI, if you want a productive discussion you should hold back on accusing your opponents of fallacies. Ironically, since I never claimed that you claimed Eliezer engages in habitual banning on LW, your accusation that I made a strawman argument is itself a strawman argument.
Anyway, we're not getting anywhere, so let's disengage.
I read all of your replies. What are you referring to? Also, this is uncharitable/insulting.
To be honest, I'm not sure what we're even disagreeing about. Like, sure, some genies are unsafe no matter how you phrase your wish. For other genies, you can just wish for ...