
Lukas_Gloor comments on Eudaimonic Utilitarianism - Less Wrong Discussion

7 Post author: Darklight 04 September 2013 07:43PM



Comment author: Lukas_Gloor 09 September 2013 06:36:12PM *  -1 points [-]

Sure. I responded to this post originally not because I think wireheading is something I want to be done, but rather because I wanted to voice the position of it being fine in theory.

I also take moral disagreement seriously, even though I basically agree with EY's meta-ethics. My terminal value is about doing something that is coherent/meaningful/altruistic, and I might be wrong about what this implies. I have a very low credence in views that want to increase the amount of sentience, but for these views, much more is at stake.

In addition, I think avoiding zero-sum games and focusing on ways to cooperate likely leads to the best consequences. For instance, increasing the probability of a good (little suffering plus happiness in the ways people want it) future conditional on humanity surviving seems to be something lots of altruistically inclined people can agree on being positive and (potentially) highly important.

Comment author: TheOtherDave 09 September 2013 06:53:40PM 0 points [-]

Ah, OK. Thanks for clarifying.

Sure, I certainly agree that if the only valuable thing is eliminating suffering, wireheading is fine... as is genocide, though genocide is preferable all else being equal.

I'm not quite sure what you mean by taking moral disagreement seriously, but I tentatively infer something like: you assign value to otherwise-valueless things that other people assign value to, within limits. (Yes? No?) If that's right, then sure, I can see where wireheading might be preferable to genocide conditional on other people valuing not-being-genocided more than not-being-wireheaded.

Comment author: Lukas_Gloor 09 September 2013 09:25:45PM -1 points [-]

Not quite, but something similar. I acknowledge that my views might be biased, so I assign some weight to the views of other people, especially if they are well informed, rational, intelligent, and trying to answer the same "ethical" questions I'm interested in.

So it's not that I have other people's values as a terminal value among others, but rather that my terminal value is some vague sense of doing something meaningful/altruistic where the exact goal isn't yet fixed. I have changed my views many times in the past after considering thought experiments and arguments about ethics and I want to keep changing my views in future circumstances that are sufficiently similar.

Comment author: TheOtherDave 09 September 2013 09:42:53PM 0 points [-]

Let me echo that back to you to see if I get it.

We posit some set S1 of meaningful/altruistic acts.
You want to perform acts in S1.
Currently, the metric you use to determine whether an act is meaningful/altruistic is whether it reduces suffering or not. So there is some set (S2) of acts that reduce suffering, and your current belief is that S1 = S2.
For example, wireheading and genocide reduce suffering (i.e., are in S2), so it follows that wireheading and genocide are meaningful/altruistic acts (i.e., are in S1), so it follows that you want wireheading and genocide.

And when you say you take moral disagreement seriously, you mean that you take seriously the possibility that in thinking further about ethical questions and discussing them with well-informed, rational, intelligent people, you might have some kind of insight that brings you to understand that in fact S1 != S2. At which point you would no longer want wireheading and genocide.

Did I get that right?

Comment author: Lukas_Gloor 09 September 2013 09:59:43PM -1 points [-]

Yes, that sounds like it. Of course I have to specify what exactly I mean by "altruistic/meaningful", and as soon as I do this, the question of whether S1 = S2 might become very trivial, i.e. a deductive one-line proof. So I'm not completely sure whether the procedure I use makes sense, but it seems to be the only way to make sense of my past selves changing their ethical views. The alternative would be to look at each instance of changing my views as a failure of goal preservation, but that's not how I want to see it and not how it felt.

Comment author: TheOtherDave 10 September 2013 01:11:30AM 0 points [-]

OK. Thanks.