No, he says "you're the first person who etc..."
Is this a "failed utopia" because human relationships are too sacred to break up, or is it a "failed utopia" because the AI knows what it should really have done but hasn't been programmed to do it?
that can support the idea that the much greater incidence of men committing acts of violence is "natural male aggression" that we can't ever eliminate.
The whole point of civilisation is to defeat nature and all its evils.
... how isn't atheism a religion? It has to be accepted on faith, because we can't prove there isn't a magical space god that created everything.
I think there's a post somewhere on this site that makes the reasonable point that "is atheism a religion?" is not an interesting question. The interesting question is "what reasons do we have to believe or disbelieve in the supernatural?"
My issue with this is that we don't, actually, have a philosophical/rational/scientific vision of capital-T Truth yet, despite all of our efforts. (Descartes, Spinoza, Kant, etc.)
Truth is whatever describes the world the way it is.
Even the capital-T Truth believers will admit that we don't know how to achieve an understanding of that truth, they'll just say that it's possible because there really is this kind of truth.
Do you mean an understanding of the way the world is, or an understanding of what truth is?
Isn't it the case, then, that your embracing this kind of objective truth is itself a "true because it's useful" kind of thinking, not a "true because it's true" kind of thinking?
You can of course define "truth" however you like - it's just a word. If you're expecting some sort of actual relationship to hold between - say - ink on a page saying "Jupiter has four large moons" and the moons of Jupiter themselves, then of course there's no such thing; the universe is just made of protons, electrons, and such mundane objects.
But there still really are four large moons of Jupiter.
Paul, that's a good point.
Eliezer: If all I want is money, then I will one-box on Newcomb's Problem.
Mmm. Newcomb's Problem features the rather weird case where the relevant agent can predict your behaviour with 100% accuracy. I'm not sure what lessons can be learned from it for the more normal cases where this isn't true.
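To make the "predictor who isn't perfectly accurate" case concrete, here's a minimal evidential-expected-value sketch (the standard $1,000 / $1,000,000 payoffs; the function names and the framing are mine, not anything from the thread):

```python
def ev_one_box(p, big=1_000_000):
    # If the predictor is right (probability p), it predicted one-boxing,
    # so the opaque box holds $1,000,000; otherwise the box is empty.
    return p * big

def ev_two_box(p, big=1_000_000, small=1_000):
    # If the predictor is right (probability p), it predicted two-boxing,
    # so you get only the transparent $1,000; if it's wrong, you get
    # both boxes: $1,001,000.
    return p * small + (1 - p) * (big + small)
```

On this way of counting, one-boxing has the higher expected value whenever p > 0.5005 - i.e. as soon as the predictor is even slightly better than chance - so on the evidential view the 100%-accuracy assumption isn't doing much work. (Causal decision theorists would of course count differently.)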
If a serial killer comes to a confessional, and confesses that he's killed six people and plans to kill more, should the priest turn him in? I would answer, "No." If not for the seal of the confessional, the serial killer would never have come to the priest in the first place.
It's important to distinguish two ways this argument might work. The first is that the consequences of turning him in are bad, because future killers will be (or might be) less likely to seek advice from priests. That's a fairly straightforward utilitarian argument.
But the second is that turning him in is somehow bad, regardless of the consequences, because the world in which every "confessor" did as you do is a self-defeating, impossible world. This is more of a Kantian line of thought.
Eliezer, can you be explicit which argument you're making? I thought you were a utilitarian, but you've been sounding a bit Kantian lately. :)
Benja: But it doesn't follow that you should conclude that the other people are getting shot, does it?
I'm honestly not sure. It's not obvious to me that you shouldn't draw this conclusion if you already believe in MWI.
(Clearly you learned nothing about that, because whether or not they get shot does not affect anything you're able to observe.)
It seems like it does. If people are getting shot then you're not able to observe any decision by the guards that results in you getting taken away. (Or at least, you don't get to observe it for long - I don't think the slight time lag matters much to the argument.)
Benja: Allan, you are right that if the LHC would destroy the world, and you're a surviving observer, you will find yourself in a branch where the LHC has failed, and that if the LHC would not destroy the world and you're a surviving observer, this is much less likely. But contrary to almost everybody's naive intuition, it doesn't follow that if you're a surviving observer, the LHC has probably failed.
I don't believe that's what I've been saying; the question is whether the LHC failing is evidence for the LHC being dangerous, not whether surviving is evidence for the LHC having failed.
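For what it's worth, the "failure is evidence of danger" direction can be put in Bayesian terms. This is only a sketch under quantum-suicide-style anthropic assumptions; the function and the numbers below are illustrative, not anything anyone in the thread committed to:

```python
def posterior_dangerous(prior_dangerous, p_fail):
    """P(LHC is world-destroying | you survive and saw it fail).

    Assumes: in a "dangerous" world every surviving observer sees the
    LHC fail; in a "safe" world the LHC fails with probability p_fail
    for ordinary engineering reasons.
    """
    evidence = prior_dangerous + (1 - prior_dangerous) * p_fail
    return prior_dangerous / evidence
```

For example, with a 1% prior on danger and a 10% ordinary failure rate, observing a failure raises the posterior to roughly 9% - the failure is evidence for danger, which is the question at issue, quite apart from whether surviving is evidence that the LHC failed.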
... and stuns Akon (or everyone). He then opens a channel to the Superhappies, and threatens to detonate the star - thus preventing the Superhappies from "fixing" the Babyeaters, their highest priority. He uses this to blackmail them into fixing the Babyeaters while leaving humanity untouched.