pjeby comments on Raising the Sanity Waterline - Less Wrong

112 Post author: Eliezer_Yudkowsky 12 March 2009 04:28AM




Comment author: pjeby 12 March 2009 09:16:32PM 0 points

Okay, let me rephrase. Why is it better to be a person who hurts when other people hurt, than a person who is happier when people don't hurt?

Comment author: thomblake 12 March 2009 09:22:16PM 1 point

While EY might not put it this way, this line:

So far as I can tell, being a person who hurts when other people hurt is part of that which appears to me from the inside as shouldness.

answered your question

Okay, let me rephrase. Why is it better to be a person who hurts when other people hurt, than a person who is happier when people don't hurt?

since Eliezer was making a moral observation. The answer: It is obviously so. Do you have conflicting observational data?

Comment author: pjeby 12 March 2009 09:27:40PM -1 points

How is it rational to treat a "moral observation" as "obviously so"? That's how religion works, isn't it?

Comment author: Eliezer_Yudkowsky 12 March 2009 09:30:10PM 2 points

This discussion is now about

 NATURALISTIC METAETHICS

my view on which is summarized in Joy in the Merely Good.

Comment author: pjeby 12 March 2009 09:52:21PM 2 points

My question is about the implementation of meta-ethics in the human brain. If I were going to write a program to simulate Eliezer Yudkowsky, what rules (other than "be unhappy when others are unhappy") would I need to program in for you to arrive at this "obvious" conclusion?

In my personal experience, the morality that people arrive at by avoiding negative consequences is substantially different from the morality they arrive at by seeking positive ones.

In other words, a person who does good because they would otherwise be a bad person is not the same as a person who does good because it brings good. Their actions and attitudes differ in substantive ways, beyond the second person being happier. For example, the second person is far more likely to actually be generous and warm towards other people -- especially living, present, individual people, rather than "people" as an abstraction.

So which of these two is really the "good" person, from your moral perspective?

(On another level, by the way, I fail to see how contagious, persistent unhappiness is a moral good, since it greatly magnifies the total amount of unhappiness in the universe. But that's a separate issue from the implementation question.)

Comment author: thomblake 12 March 2009 10:02:02PM 1 point

It seems to me that when you say 'meta-ethics' you simply mean 'ethics'. I don't know why you'd think meta-ethics would need to be implemented in the human brain. Ethics is in the world; meta-ethics doubly so. There's a fact about what's right, just like there's a fact about what's prime. You could ask why we care about what's right, but that's neither an ethical question nor a meta-ethical one. The ethical question is 'what's right?' and the meta-ethical question is 'what makes something a good answer to an ethical question?'. Both of those questions can be answered without reference to humans, though humans are the only reason why anyone would care.

Comment author: pjeby 12 March 2009 10:28:07PM 0 points

Unless Eliezer has some supernatural entity to do his thinking for him, his ethics and meta-ethics require some physical implementation. Where else are you proposing that he store and process them, besides physical reality?

Comment author: thomblake 12 March 2009 10:41:42PM 0 points

I think you're shifting between 'ethics' and 'what Eliezer thinks about ethics'. While it's possible that ideas are not real save via some implementation, I don't think it would therefore have to be in a particular human; systems know things too.

You seem to shift the focus of conversation frequently as it happens, hurting the potential for rational discourse in favor of making emotively positive statements that loosely correlate with the topic at hand. Would you be the same pjeby that writes those reprehensible self-help books?

Comment author: Eliezer_Yudkowsky 12 March 2009 10:49:38PM 4 points

That seemed a bit ad hominem. The commenter pjeby (I know nothing else about him) seems like someone who might be unfamiliar with part of the LW/OB background corpus but is reasoning pretty well under those conditions.

Comment author: pjeby 12 March 2009 11:02:04PM 1 point

Actually, I'm quite familiar with a large segment of the OB corpus -- it's been highly influential on my work. However, I also see what appear to be a few holes or incoherences within the OB corpus... some of which appear to stem from precisely the issue I've been asking you about in this thread (i.e., the role of negative utilities in creating bias).

In my personal experience, negative utilities create bias because they cut off consideration of possibilities. This is useful in an emergency -- but not of much use anywhere else. If human beings had platonically perfect minds, there would be no difference between a uniform utility scale and a dual positive/negative one... but as far as I can tell (and research strongly suggests), we do have two different systems.

So, although you're wary of Robin's "cynicism" and my "psychological explanations", this is inconsistent with your own statements, such as:

There is no perfect argument that persuades the ideal philosopher of perfect emptiness to attach a perfectly abstract label of 'good'. The notion of the perfectly abstract label is incoherent, which is why people chase it round and round in circles. What would distinguish a perfectly empty label of 'good' from a perfectly empty label of 'bad'? How would you tell which was which?

See, I'm as puzzled by your ability to write something like that, and then turn around and argue an absolute utility for unhappiness, as you are puzzled by that Nobel-winning Bayesian dude who still believes in God. From my POV, it's just as inconsistent.

There must be some psychology that creates your position, but if your position is "truly" valid (assuming there were such a thing), then the psychology wouldn't matter. You should be able to destroy the position, and then reconstruct it from more basic principles, once the original influence is removed, no? (This idea is also part of the corpus.)

Comment author: thomblake 12 March 2009 10:55:45PM 1 point

It was deliberately ad hominem, of course - just not the fallacious kind. We seriously need profile pages of some sort. Wish I had the stomach for Python.

I don't expect anyone to be familiar with the LW/OB background corpus - I expect my education and training is quite different from yours, for example. However, I still expect one to follow rules of conduct with respect to reasonable discourse, for example avoiding equivocation and its related vices.

Or maybe I'm just viscerally angered by the winky smileys. Who knows.

Comment author: pjeby 12 March 2009 10:56:27PM 0 points

I don't see how I can separate "ethics" from "what Eliezer thinks about ethics" and still have a meaningful conversation with him on the topic.

Meanwhile, reading back through the thread, the only digressions I see in my comments are those made in response to those raised by you or Eliezer. Perhaps you could point to some specific examples of these shifted foci and emotively positive statements? I do not see them.

As for my "reprehensible" books, I trust you formed that judgment by actually reading them, yes? If so, then yes, I'm that person. But if you didn't read them, then clearly your judgment isn't about the books I actually wrote... and thus, I could not have been the person who wrote the (imaginary) ones you'd therefore be talking about. ;-)

Comment author: thomblake 12 March 2009 11:01:27PM 1 point

Perhaps you could point to some specific examples of these shifted foci and emotively positive statements? I do not see them.

I was not referring only to this thread, but to several ongoing discussions. If you'd like clear examples, feel free to contact me via http://thomblake.com or http://thomblake.mp

As Eliezer has kind of pointed out, I'm weary enough from this discussion to be on the verge of irrationality, so I shall retire from it (if only because this forum is devoted to rationality!).

Comment author: thomblake 12 March 2009 09:30:12PM 1 point

I'm not aware of religions that work that way.

However, that's how observation works.

How is it rational to treat an observation as not obviously so? I'm pretty sure that's inconsistent, if not contradictory.