Eliezer_Yudkowsky comments on Raising the Sanity Waterline - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
By "person I aspire to be" I mean that my present self has this property and my present self wants my future self to have this property. I originally wrote "person I define as me" but that seemed like too much of a copout.
Yes, I'm repulsed by imagining the alternative Eliezer who feels no pain when his friends, family, or a stranger in another country dies. It is not clear to me why you feel this is irrational. Nor is it based on any particular emotional experience of mine of having ever been a sociopath.
It seems to me that you are verging here on the failure mode of having psychoanalysis the way that some people have bad breath. If you don't like my arguments, argue otherwise. Just casting strange hints of childhood trauma is... well, it's having psychoanalysis the way some people have bad breath.
So far as I can tell, being a person who hurts when other people hurt is part of that which appears to me from the inside as shouldness.
Okay, let me rephrase. Why is it better to be a person who hurts when other people hurt, than a person who is happier when people don't hurt?
While EY might not put it this way, this line:
answered your question
since Eliezer was making a moral observation. The answer: It is obviously so. Do you have conflicting observational data?
How is it rational to treat a "moral observation" as "obviously so"? That's how religion works, isn't it?
I'm not aware of religions that work that way.
However, that's how observation works.
How is it rational to treat an observation as not obviously so? I'm pretty sure that's inconsistent, if not contradictory.
This discussion is now about
my view on which is summarized in Joy in the Merely Good.
My question is about the implementation of meta-ethics in the human brain. If I were going to write a program to simulate Eliezer Yudkowsky, what rules (other than "be unhappy when others are unhappy") would I need to program in for you to arrive at this "obvious" conclusion?
In my personal experience, the morality that people arrive at by avoiding negative consequences is substantially different from the morality they arrive at by seeking positive ones.
In other words, a person who does good because they will otherwise be a bad person, is not the same as a person who does good because it brings good. Their actions and attitudes differ in substantive ways, besides the second person being happier. For example, the second person is far more likely to actually be generous and warm towards other people -- especially living, present, individual people, rather than "people" as an abstraction.
So which of these two is really the "good" person, from your moral perspective?
(On another level, by the way, I fail to see how contagious, persistent unhappiness is a moral good, since it greatly magnifies the total amount of unhappiness in the universe. But that's a separate issue from the implementation question.)
It seems to me that when you say 'meta-ethics' you simply mean 'ethics'. I don't know why you'd think meta-ethics would need to be implemented in the human brain. Ethics is in the world; meta-ethics doubly so. There's a fact about what's right, just like there's a fact about what's prime. You could ask why we care about what's right, but that's neither an ethical question nor a meta-ethical one. The ethical question is 'what's right?' and the meta-ethical question is 'what makes something a good answer to an ethical question?'. Both of those questions can be answered without reference to humans, though humans are the only reason why anyone would care.
Unless Eliezer has some supernatural entity to do his thinking for him, his ethics and meta-ethics require some physical implementation. Where else are you proposing that he store and process them, besides physical reality?
I think you're shifting between 'ethics' and 'what Eliezer thinks about ethics'. While it's possible that ideas are not real save via some implementation, I don't think it would therefore have to be in a particular human; systems know things too.
You seem to frequently shift the focus of conversation as it happens, hurting the potential for rational discourse in favor of making emotively positive statements that loosely correlate with the topic at hand. Would you be the same pjeby that writes those reprehensible self-help books?
I don't see how I can separate "ethics" from "what Eliezer thinks about ethics" and still have a meaningful conversation with him on the topic.
Meanwhile, reading back through the thread, the only digressions I see in my comments are those made in response to those raised by you or Eliezer. Perhaps you could point to some specific examples of these shifted foci and emotively positive statements? I do not see them.
As for my "reprehensible" books, I trust you formed that judgment by actually reading them, yes? If so, then yes, I'm that person. But if you didn't read them, then clearly your judgment isn't about the books I actually wrote... and thus, I could not have been the person who wrote the (imaginary) ones you'd therefore be talking about. ;-)
That seemed a bit ad hominem. The commenter pjeby (I know nothing else about him) seems like someone who might be unfamiliar with part of the LW/OB background corpus but is reasoning pretty well under those conditions.