retired urologist,
There's a distinction to be made between altruism (ethical theory) and altruism (social science). The sense of altruism you use seems closer to the former; it seems like Eliezer prefers the latter. To summarize, with a rough formalization below:
Altruism (ethical theory) is just like utilitarianism, except that good for oneself is entirely discounted.
Altruism (social science) is a 'selfless concern for others', in which one helps other people without conscious concern for one's personal interests (at least some of the time). It does not require that one aband...
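A rough formalization of the contrast, with the caveat that the notation is mine and both views are idealized as welfare-summing: utilitarianism ranks an act $a$ by

$$V_{\text{util}}(a) = \sum_{i} u_i(a),$$

summing every person $i$'s utility, the agent's included, while altruism (ethical theory) ranks it by

$$V_{\text{alt}}(a) = \sum_{i \neq \text{agent}} u_i(a),$$

the same sum with the agent's own term discounted entirely. Altruism (social science) is a claim about motivation rather than a ranking rule, so it gets no formula.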
EY, but you are a moral realist (or at least a moral objectivist, which ought to refer to the same thing). There's a fact about what's right, just like there's a fact about what's prime or what's baby-eating. It's a fact about the universe, independent of what anyone has to say about it. If we were human', we'd be moral' realists talking about what's right'. Ne?
Anonymous, that sound you hear is probably people rushing to subscribe. http://www.rifters.com/crawl/?p=266 - note the comments.
Nick,
There is a tendency for some folks to distinguish between descriptive and normative statements, in the sense of 'one cannot derive an ought from an is' and whatnot. A lot of this comes from hearing about the "naturalistic fallacy" and believing this to mean that naturalism in ethics is dead. Naturalists in turn refer to this line of thinking as the "naturalistic fallacy fallacy", as the strong version of the naturalistic fallacy does not imply that naturalism in ethics is wrong.
As for the fallacy you mention, I disagree that it's...
Nick,
Behavior isn't an argument (except when it is), but it is evidence. And it's akrasia when you say, "Man, I really think spending this money on saving lives is the right thing to do, but I just can't stop buying ice cream" - not when you say "buying ice cream is the right thing to do". Even if you are correct in your disagreement with Simon about the value of ice cream, that would be a case of Simon being mistaken about the good, not a case of Simon suffering from akrasia. And I think it's pretty clear from context that Simon believes he values ice cream more.
And it sounds like that first statement is an attempt to invoke the naturalistic fallacy fallacy. Was that it?
I prefer the ending where we ally ourselves with the babyeaters to destroy the superhappies. We realize that we have more in common with the babyeaters, since they have notions of honor and justified suffering and whatnot, and encourage the babyeaters to regard the superhappies as flawed. The babyeaters will gladly sacrifice themselves blowing up entire star systems controlled by the superhappies to wipe them out of existence due to their inherently flawed nature. Then we slap all of the human bleeding-hearts that worry about babyeater children, we come...
Doug S,
Indeed. The AI wasn't paying attention if he thought bringing me to this place was going to make me happier. My stuff is part of who I am; without my stuff he's quite nearly killed me. Even more so when 'stuff' includes wife and friends.
But then, he was raised by one person, so there's no reason to think he wouldn't hold a mistaken metaphysics of self.
I don't find this surprising at all, other than that it occurred to a consequentialist. As a virtue ethicist and something of a Romantic, I think the best world will be one of great and terrible events, where a person has the chance to be truly and tragically heroic. And no, that doesn't sound comfortable to me, or like a place where I'd particularly thrive.
Tilden is another roboticist who's gotten rich and famous off of unintelligent robots: BEAM robotics.
Interesting idea... though I still think you're wrong to step away from anthropomorphism, and 'necessary and sufficient' is a phrase that should probably be corralled into the domain of formal logic.
And I'm not sure this adds anything to Sternberg and Salter's definition: 'goal-directed adaptive behavior'.
I've yet to hear of anyone turning back successfully, though I think some have tried, or wished they could.
It seems to be one interpretation of the Buddhist project.
Regarding self, I tend to include much more than my brain in "I" - but then, I'm not one of those who thinks being 'uploaded' makes a whole lot of sense.
Anonymous: torture's inefficacy was well known by the fourteenth century; Bernardo Gui, a famous inquisitor who supervised many tortures, argued against using it because it is only good at getting the tortured to say whatever will end the torture. I can't seem to find the citation, but here is someone who refers to it: http://www.ewtn.com/library/ANSWERS/INQUIS2.htm
Toby,
You should never, ever murder an innocent person who's helped you, even if it's the right thing to do
You should never, ever do X, even if you are exceedingly confident that it is the right thing to do
I believe a more sensible interpretation would be, "You should have an unbreakable prohibition against doing X, even in cases where X is the right thing to do" - the issue is not that you might be wrong about it being the right thing to do, but rather that not having the prohibition is a bad thing.
Russell, I don't think that necessarily specifies a 'cheap trick'. If you start with a rock on the "don't let the AI out" button, then the AI needs to start by convincing the gatekeeper to take the rock off the button. "This game has serious consequences and so you should really play rather than just saying 'no' repeatedly" seems to be a move in that direction that keeps with the spirit of the protocol, and is close to Silas's suggestion.
This doesn't seem to mesh with the Friendly AI goal of getting it perfectly right on the first try.
Do we accept some uncertainty and risk to do something extraordinary now, or do we take the slow, calm, deliberative course that stands a chance of achieving perfection?
Is there any chance of becoming a master of the blade without beginning to cut?
If history remembers you, I'd bet it will be for the journey more than for its end. If the interesting introspective bits get published in a form that gets read, then I'd bet it will be memorable in the way that Laozi or Sunzi is memorable. In case the Singularity / Friendly AI stuff doesn't work out, please keep up the good work anyway.
But if there are repeatable psi experiments, then why hasn't anyone won the million dollars (or even passed the relatively easy first round)?
@Eliezer I mostly agree with Caledonian here. I disagree with much of what you say, and it has nothing to do with being 'fooled'. Censoring the few dissenters who actually comment is not a good idea if you have any interest in avoiding an echo chamber. You're already giving off the Louis Savain vibe pretty hard.
Aristotelians may not be teaching physics courses (though I know of no survey showing that), but they do increasingly teach ethics courses. It makes sense to think about what qualities are good for a fox or good for a rabbit, and so one can speak about such creatures with respect to ethics.
However, there is no reason to think that they disagree about ethics, since disagreement is a social activity that is seldom shared between species, and ethics requires actually thinking about what one has most reason to do or want. While it makes sense to attribute the intentional ...
For the record, Thom_Blake is thomblake.