blacktrance comments on AALWA: Ask any LessWronger anything - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Ethics, I suppose. Most of my other interests are either probably too mindkilling for LW or are written about in the Sequences already, more clearly than I could write about them.
What are your Sequence-superseded interests? Would you please name three points from anywhere within them where your opinion differs (even if only slightly) from EY (or from the author of the most relevant sequence, if different)?
My sequence-superseded interests include the nature of free will, self-improvement (in the sense of luminosity, not productivity), and general interest in rational thinking.
Three areas where I disagree with the Sequences:
Fake Selfishness. EY mistakenly treats "selfishness" as something like wealth maximization, or at least as something that excludes caring about others. Selfishness means acting in one's self-interest. There are three major philosophical views as to what people's interests are: hedonism (pleasure), preference satisfaction, and objective-list (i.e. if a person has the things on this list, their interests are being fulfilled). Wealth maximization is only a plausible manifestation of self-interest for a person with very atypical preferences or for an unusual list. There is no reason why egoism would automatically exclude caring about others - in fact, caring about others often makes people happy and fulfills their preferences. As for the assumption in the sentence "Shouldn't you be trying to persuade me to be more altruistic, so you can exploit me?", that ignores virtue-ethical egoism, as in the Epicurean tradition - that is, the view that exploiting people (in the sense in which exploitation is bad) is not conducive to happiness, and that being honest, just, benevolent, etc., is actually in one's self-interest.
Not for the Sake of Happiness Alone. EY fails to apply reductionism to human values. He says, "I care about terminal values X, Y, and Z", but when it comes down to it, people would really like pleasure more than anything else. To indulge in a bit of psychologizing, I think that trying to depict multiple values as irreducible comes from an aversion to wireheading - because if you conclude that all values reduce to happiness/pleasure, you must also conclude that wireheading is the ideal state. But I don't share this aversion - wireheading is the ideal state.
Because of the above, I disagree with basically the entirety of the Fun Theory sequence. It seems to be an attempt to reconcile Transhumanism as Simplified Humanism with not wanting to wirehead, and the two really aren't reconcilable - and Transhumanism as Simplified Humanism is correct.
Is there anyone in the world whose well-being you care strongly about?
Yes, myself and others, though the well-being of others is an instrumental value.
To confirm: you're the only person whose well-being you care about "terminally"?
Yes.
(nods) OK. Accepting that claim as true, I agree that you should endorse wireheading.
(Also that you should endorse having everyone in the world suffer for the rest of their lives after your death, in exchange for you getting a tuna fish sandwich right now, because hey, a tuna fish sandwich is better than nothing.)
Do you believe that nobody else in the world "terminally" cares about the well-being of others?
No, because I care (instrumentally) about the well-being of others in the future as well, and knowing that they'll be tortured, especially because of me, would reduce my happiness now by significantly more than a tuna sandwich would increase it.
That's a difficult question to answer because of the difficulties surrounding what it means for someone to care. People's current values can change in response to introspection or empirical information - and not just instrumental values, but seemingly terminal values as well. This makes me question whether their seemingly terminal values were actually their terminal values to begin with. Certainly, people believe that they terminally care about the well-being of others, and if believing that you care qualifies as actually caring, then yes, they do care. But I don't think that someone who'd experience ideal wireheading would like anything else more.
What is the terminal goal which the well-being of people after your death achieves?
Oh, sure, you shouldn't endorse knowing about it. But it would be best, by your lights, if I set things up that way in order to give you a tuna-fish sandwich, and kept you in ignorance. And you should agree to that in principle... right?
(nods) In the face of that uncertainty, how confident are you that your seemingly terminal values are actually your terminal values?
(nods) I'm inclined to agree.
What objection do you have to the argument "Most humans object to wireheading in principle, therefore wireheading is not the ideal state"?
That is, a state most people would not choose voluntarily is not an ideal state.
Many humans have objected to many things in the past that are now widely accepted, so I don't find that to be a convincing objection.
Many people wouldn't choose it voluntarily, but some drug addicts wouldn't voluntarily choose to quit, either. Even if they didn't want it, they'd like it, and liking is what ultimately matters.