
Comment author: Lukas_Gloor 13 January 2016 06:55:54AM *  2 points [-]

Ethically, I think one could justify all this. It is hard to argue, for example, that we (real human beings) have been harmed by being brought into existence in a universe without a God who is listening; almost all of us would prefer to be alive rather than not. The same would go for them: surely, their simulated existence, imperfect as it is, is not worse than not having been brought into the world in the first place?

At least some of them will tell you they would rather not have been born. But maybe you'll want to equip these orcs with an even stronger drive for existence, so they never choose death over life even if you torture them; would that make it more ok? I suspect not, so something about the "Do they complain about having been created?" approach seems flawed, imo. Creating beings with a strong preference for existence would make it too easy to legitimize doing whatever you want with them.

How about imagining beings who at any moment are intrinsically indifferent to whether they exist or not? They won't complain as long as they don't suffer. Perhaps that's too extreme as well, but if it's only simple/elegant rules you're looking for, this one seems more acceptable to me than the torture-bots above.

Comment author: OrphanWilde 27 October 2015 01:33:29PM 2 points [-]

See, the issue is that you think the downvotes were because of your views. I can't speak for other people, but I downvoted you because you were engaging in behaviors I prefer to discourage; namely, ignoring the substantive thrust of a post to nitpick at a relatively insignificant comment made in the middle, whose absence wouldn't affect the post as a whole. And, as we see here, you made that comment not because it was substantive or seriously detracted from the post, but because it was an ideological matter on which you disagreed with the author. Hence my comment to you: "I found it concern-trolling at worst, and irrelevant at best".

Because, as Dagon pointed out, using your criteria, the progress is -still- a positive thing. That's the point of this post. Taking it as an opportunity to try to start an ideological fight is just bad manners.

See, downvotes here don't mean Less Wrong disagrees with you (although that's how some people use it, it's not the cultural standard). Downvotes mean people want to see less of the kind of post/comment that was downvoted.

I honestly don't give a tinker's cuss about the intra-movement arguments within EA, and if this is how EA behaves, I'd like to see less of it as a whole. You're not representing your movement very well.

Comment author: Lukas_Gloor 27 October 2015 02:22:36PM -1 points [-]

you made that comment not because it was substantive or seriously detracted from the post, but because it was an ideological matter on which you disagreed with the author

I generally dislike it when people talk about moral views that way, even if they mention views I support. I might be less inclined to call it out in a case where I intuitively strongly agree, but I still do it some of the time. I agree it wasn't the main point of his post; I never denied that. In fact I wrote that I agree the developments are impressive. By that, I meant the graphs. Since when is it discouraged to point out a minor criticism of a post? That I singled out this particular post for a comment that would fit just as well elsewhere is simply a coincidence.

Taking it as an opportunity to try to start an ideological fight is just bad manners.

No one is even talking about arguments or intuition-pumps for or against any of the moral views mentioned. I wasn't "starting an ideological fight"; I was making a meta remark about the way people present moral views. If anything, I'd be starting an ideological fight about my metaethical views and what I consider to be a productive norm of value-related discourse on this site.

Comment author: Luke_A_Somers 26 October 2015 11:12:40PM 2 points [-]

You imply that doubling extreme poverty would be a good thing if it comes with a doubling of the rest of the population.

Kind of? The point of the second plot is to show that we didn't get where we are in fractional terms by murdering the poor, which would be bad, I think, regardless of whether one holds that doubling the overall population is good or bad. And if we got where we are in fractional terms by adding rich people without actually cutting into the number of poor people, that would be bad too, though not as bad as murdering them.

Of course, the plots can't show that we didn't grow the rich population while also killing the poor, but, well, that's not what happened either.

Comment author: Lukas_Gloor 26 October 2015 11:46:26PM -2 points [-]

I at one point phrased it "comes with a doubling of the (larger) rest of the population" to make it more clear, but deleted it for a reason I have no introspective access to.

And if we got where we are in fractional terms by adding rich people without actually cutting into the number of poor people, that would be bad too, though not as bad as murdering them.

It would, obviously, if there are better alternatives. In consequentialism, anything for which you have better viable alternatives is bad to some extent. What I meant is: if the only way to double the rest of the population is by also doubling the part that's in extreme poverty, then the OP's values imply that it would be a good thing. I'm not saying this view is crazy, I'm just saying that creating the impression that it's some sort of LW consensus is mistaken. And in a later point I added that it makes me, and probably also other people with different values, feel unwelcome. It's bad for an open dialogue on values.

Comment author: Dagon 26 October 2015 03:44:43PM 7 points [-]

I think he's showing the opposite. The first graph does imply what you say. The second graph shows that EVEN if we look at number of people in extreme poverty as an absolute, rather than a ratio, we've been making steady progress since 1971 and are now below 1820 levels of poverty.

It's not judgement-free, as nothing on this topic can or should be. However, it's showing that the positive results are robust to multiple dimensions that people are likely to judge on.

To be specific: what normative judgement do you prefer for which this graph is misleading? Or are you saying "there are important things not covered in either graph", which is true of pretty much any such summary?

Comment author: Lukas_Gloor 26 October 2015 06:33:22PM *  1 point [-]

I'm referring to the text, not the graph(s). The two paragraphs between the graphs imply

that doubling extreme poverty would be a good thing if it comes with a doubling of the rest of the population.

He does not preface any of it by saying "I think"; he just presents it as obvious. Well, I know for a fact that there are many people who self-identify as rationalists to whom this is not obvious at all. It also alienates me that people here, judging by the karma distribution, don't seem to get my point.

Comment author: Lukas_Gloor 26 October 2015 12:47:36PM *  -1 points [-]

The developments you highlight are impressive indeed. But you're making it sound as though everyone should agree with your normative judgments. You imply that doubling extreme poverty would be a good thing if it comes with a doubling of the rest of the population. This view is not uncontroversial and many EAs would disagree with it. Please respect that other people will disagree with your value judgments.

Comment author: [deleted] 09 July 2015 01:13:09AM -1 points [-]

Thanks for your comment.

How do you conclude from this that e.g. the effective altruists focused on AI safety are being inefficient?

Yes, as you stated, I was working with the visible sample of EAs who aren't focused on existential risk. I feel the term is redundant in relation to existential risk, since effective thinking about existential risk already happens on Lesswrong.

And even if you're talking about e.g. donations to GiveWell's recommended charities, how does the first link establish that it's inefficient?

The crowding-out effect occurs not just at the individual level (which isn't applicable to individual EAs, given room-for-more-funding considerations), but also at the movement level. Because EAs act en bloc, and factor into their considerations 'what are other people not funding', they compete over the supply of and demand for donations with established institutional donors like the Gates Foundation. One might wonder, if that were true, why those foundations don't close the funding gaps as a priority - and it looks like someone is trying to answer that here. Admittedly, I haven't gotten around to reading the article fully, but from a quick skim it looks like the magnitude of high-impact philanthropists' donations is such that it compensates for the 'ineffectiveness of their cause', since the charities GiveWell recommends have less room for more funding - which becomes a higher-order consideration at that scale. The obvious counterexample to this is GiveDirectly, but I wouldn't be surprised if the reason philanthropists don't like them is fear of setting a precedent against productive mutualistic exchange.

"human values are complex". That's misleading, what's complex is human moral intuitions. When you define your goal in life, no one forces you to incorporate every single intuition that you have. You may instead choose to regard some of your intuitions as more important than others, and thereby end up with a utility function of low complexity. Your terminal values are not discovered somewhere within you (how would that process work, exactly?), they are chosen. As EY would say, "the buck has to stop somewhere".

I can't find the original post about the buck stopping after a bit of Googling. I'd like to keep looking into this!

In response to comment by [deleted] on Effective Altruism from XYZ perspective
Comment author: Lukas_Gloor 09 July 2015 09:14:13AM 2 points [-]

I can't find the original post about the buck stopping after a bit of Googling. I'd like to keep looking into this!

The post I'm referring to is here, but I should note that EY used the phrase in a different context, and my view on terminal values does not reflect his view. My critique of the idea that all human values are complex is that it presupposes too narrow an interpretation of "values". Let's talk about "goals" instead, defined as follows:

Imagine you could shape yourself and the world any way you like, unconstrained by the limits of what is considered feasible and what isn't: what would you do? Which changes would you make? The result describes your ideal world; it describes everything that is at all important to you. However, it does not yet describe how important these things are in relation to other things you consider important. So imagine that you had the same super-powers, but this time they are limited: you cannot make every change you had in mind, and you need to prioritize some changes over others. Which changes would be most important to you? The outcome of this thought experiment approximates your goals. (This question is of course a very difficult one, and what someone says after thinking about it for five minutes might be quite different from what she would choose if she had heard all the ethical arguments in the world and thought about the matter for a very long time. If you care about making decisions for good/informed reasons, you might want to refrain from committing too much to specific answers and instead give weight to what a better-informed version of yourself would say after longer reflection.)

I took the definition from this blogpost I wrote a while back. The comment section there contains a long discussion on a similar issue where I elaborate on my view of terminal values.

Anyway, the way my definition of "goals" seems to differ from the interpretation of "values" in the phrase "human values are complex" is that "goals" allow for self-modification. If I could, I would self-modify into a utilitarian super-robot, regardless of whether it was still conscious or not. According to "human values are complex", I'd be making a mistake in doing so. What sort of mistake would I be making?

The situation is as follows: unlike some conceivable goal-architectures we might choose for artificial intelligence, humans do not have a clearly defined goal. When you ask people on the street what their goals in life are, they usually can't tell you, and if they do tell you something, they'll likely revise it as soon as you press them with an extreme thought experiment. Many humans are not agenty. Learning about rationality and thinking about personal goals can turn people into agents. How does this transition happen? The "human values are complex" theory seems to imply that we introspect, find out that we care about/have intuitions about 5+ different axes of value, and end up accepting all of them as our goals. This is probably how quite a few people do it, but they're victims of a gigantic typical mind fallacy if they think that's the only way to do it. Here's what happened to me personally (and incidentally, to about 20+ agents I know personally and to all the hedonistic utilitarians who are familiar with Lesswrong content and still keep their hedonistic utilitarian goals):

I started out with many things I like (friendship, love, self-actualization, non-repetitiveness, etc.) plus some moral intuitions (anti-harm, fairness). I then got interested in ethics and in figuring out the best ethical theory. I soon became a moral anti-realist, but still wanted to find a theory that incorporates my most fundamental intuitions. I realized that I don't care intrinsically about "fairness" and became a utilitarian in terms of my other-regarding/moral values. I then had to decide to what extent I should invest in utilitarianism/altruism, and how much in values that are more about me specifically. I chose altruism because I have a strong, OCD-like tendency to do things either fully or not at all, and I thought saving for retirement, eating healthy, etc. is just as bothersome as trying to be altruistic; because I don't strongly self-identify with a 100-year-old version of me anyway, I might as well try to make sure that all future sentience will be suffering-free. I still care a lot about my long-term happiness and survival, but much less so than if I had the goal of living forever, and as I said, I would instantly press the "self-modify into utilitarian robot" button if there was one. I'd be curious to hear whether I am being "irrational" somewhere, whether there was a step involved that was clearly mistaken. I cannot imagine how that would be the case, and the matter seems obvious to me. So every time I read the link "human values are complex", it seems like an intellectually dishonest discussion-stopper to me.

Comment author: [deleted] 08 July 2015 04:35:10AM *  2 points [-]

Could charity distort market signals, crippling the ability of sponsored economies to develop sustainably and leading to negative utility in the long term?

Hikma and Norbrook are examples of ethical UK/worldwide pharmaceutical companies. I've worked for and can vouch for both.

In response to comment by [deleted] on Effective Altruism from XYZ perspective
Comment author: Lukas_Gloor 08 July 2015 11:14:07AM *  3 points [-]

I get the impression that you're not well informed about EA and the diverse stances EAs have, and that you're singling out an idiosyncratic interpretation and giving it an unfair treatment.

Effective altruism is inefficient and socially suboptimal.

The first link you cite talks about public good provision within the current economy. How do you conclude from this that e.g. the effective altruists focused on AI safety are being inefficient? And even if you're talking about e.g. donations to GiveWell's recommended charities, how does the first link establish that it's inefficient? Sick people in Africa usually aren't included in calculations about economic common goods, but EAs care about more than just their country's economy.

Effective Altruism isn’t utilitarian. It’s explicitly welfarist and given the complexity of individual value, probably undermines overall utility, including your own.

FYI, you're using highly idiosyncratic terminology here. Outside of LW, "utilitarianism" is the name for a family of consequentialist views that also include solely welfare-focused varieties like negative hedonistic utilitarianism or classical hedonistic utilitarianism.

In addition, you repeat the mantra that it's an objective fact that "human values are complex". That's misleading: what's complex is human moral intuitions. When you define your goal in life, no one forces you to incorporate every single intuition that you have. You may instead choose to regard some of your intuitions as more important than others, and thereby end up with a utility function of low complexity. Your terminal values are not discovered somewhere within you (how would that process work, exactly?); they are chosen. As EY would say, "the buck has to stop somewhere".

EA is prioritarian.

This claim is wrong: only about 5% of the EAs I know are prioritarians (I have met close to 100 EAs personally). And the link you cite doesn't support that EAs are prioritarians either; it just argues that you get more QALYs from donating to AMF than from doing other things.

Comment author: OrphanWilde 02 May 2015 05:57:10PM 1 point [-]

And if making people more informed in this manner makes them worse off?

Comment author: Lukas_Gloor 02 May 2015 11:29:35PM *  0 points [-]

The sad thing is it probably will (the rationalist's burden: aspiring to be more rational makes rationalizing harder, and you can't just tweak your moral map and your map of the just world/universe to fit your desired (self-)image).

What is it that counts: revealed preferences, stated preferences, or preferences that are somehow idealized (what the person would want if she knew more, were smarter, etc.)? I'm not sure the last option can be pinned down in a non-arbitrary way. This would leave us with revealed preferences and stated preferences, even though stated preferences are often contradictory or incomplete. It would be confused to think that one type of preference is correct whereas the others aren't. There are simply different things going on, and you may choose to focus on one or the other. Personally I don't intrinsically care about making people more agenty, but I care about it instrumentally, because it turns out that making people more agenty often increases their (revealed) concern for reducing suffering.

What does this make of the claim under discussion, that deontology could sometimes/often be a form of moral rationalizing? The point still stands, but it is qualified with a caveat, namely that it is only rationalizing if we are talking about (informed/complete) stated preferences. For whatever that's worth. On LW, I assume it is worth a lot to most people, but there's no mistake being made if it isn't for someone.

Comment author: TheAncientGeek 02 May 2015 01:03:30PM 0 points [-]

to give more support to my position: Joshua Greene has done a lot of interesting work that suggests that deontological judgments rely on system-1 thinking, whereas consequentialist judgments rely on system-2 thinking. In non-ethical contexts, these results would strongly suggest the presence of biases, especially if we consider situations where evolved heuristics are not goal-tracking.

Biases are only unconditionally bad in the case of epistemic rationality, and ethics is about action in the world, not passive truth-tracking. To expand:

Rationality is (at least) two different things called by one name. Moreover, while there is only one epistemic rationality, the pursuit of objective truth, there are many instrumental rationalities aiming at different goals.

Biases are regarded as obstructions to rationality ... but which rationality? Any bias is a stumbling block to epistemic rationality ... but in what way would, for instance, egoistic bias be an impediment to the pursuit of selfish aims? The goal, in that case, is the bias, and the bias the goal. But egoism is still a stumbling block to epistemic rationality, and to the pursuit of incompatible values, such as altruism.

That tells us two things: one is that what counts as a bias is relative, or context-dependent. The other -- in conjunction with the reasonable supposition that humans don't follow a single set of values all the time -- is where bias comes from.

If humans are a messy hack with multiple value systems, and with a messy, leaky way of switching between them, then we would expect to see something like egotistical bias as a kind of hangover when switching to altruistic mode, and so on.

Comment author: Lukas_Gloor 02 May 2015 01:39:33PM 0 points [-]

I think if you read all my comments here again, you will see enough qualifications in my points to suggest that I'm aware of, and agree with, the point you just made. My point on top of that is simply that often, people would consider these things to be biases upon reflection, after they learn more.

Comment author: OrphanWilde 01 May 2015 07:29:47PM 3 points [-]

You're making a mistake, in assuming that ethical systems are intended to do what you think they're intended to do. I'm going to make some complete unsubstantiated claims; you can evaluate them for yourself.

Point 1: The ethical systems aren't designed to be followed by the people you're talking to.

Normal people operate by internal guidance through implicit and internal ethics, primarily guilt; ethics are largely and -deliberately- a rationalization game. That's not an accident. Being a functional person means being able to manipulate the ethical system as necessary, and justify the actions you would have taken anyways.

Point 2: The ethical systems aren't just there to be followed, they're there to see who follows them.

People who -do- need the ethical systems are, from a social perspective, dangerous and damaged. Ethical systems are ultimately a fallback for these kinds of people, but also a marker; "normal" people don't -need- ethics. As a rule of thumb, anybody who has strict adherence to a code of ethics is some variant of sociopath. And also as a rule of thumb, some mechanism of taking advantage of these people, who can't know any better, is going to be built into these ethical systems. It will generally take some form akin to "altruism", and is most recognizable when ethical behavior begins to be labeled as selfishness, such as variants of Buddhism where personal enlightenment is treated as selfish, or Comtean altruism.

Point 3: The ethical systems are designed to be flexible

People who have internal ethical systems -do- need something to deal with situations which have no ethical solutions, but nonetheless are necessary to solve. Ethical systems which don't permit considerable flexibility in dealing with these situations aren't useful. But because of sociopaths, who still need ethical systems to be kept in line, you can't just permit anything. This is where contradiction is useful; you can use mutually exclusive rules to justify whatever action you need to take, without worrying about any ordinary crazy person using the same contradictions to their advantage, since they're trying to follow all the rules all the time.

Point 4: Ethical systems were invented by monkeys trying to out-monkey other monkeys

Finally, ethical systems provide a framework by which people can assert or prove their superiority, thereby improving their perceived social rank (what, you think most people here are arguing with an interest in actually getting the right answer?). A good ethical framework needs to provide room for disagreement; ambiguity and contradiction are useful here, as well, especially because a large point of ethical systems is to provide a framework to justify whatever action you happened to take. This is enhanced by perceptions of the ethical framework itself, which is why mathematicians will tend to claim utilitarianism is a great ethical system, in spite of it being a perfectly ambiguous "ethical system"; it has a superficially mathematical rigor to it, so appears more scientific, and lends itself to mathematics-based arguments.

See all the monkeys correcting you on trivial issues? Raising meaningless points that contribute nothing to anybody's understanding of anything while giving them a basis to prove their intelligence in thinking about things you hadn't considered? They're just trying to elevate their social status, here measured by karma points. On a site called Less Wrong, descended from a site called Overcoming Bias, the vast majority of interactions are still ultimately driven by an unconscious bias for social status. Although I admit the quality of the monkey-games here is at times somewhat better than elsewhere.

If you want an ethical system that is actually intended to be followed as-is, try Objectivism. There may be other ethical systems designed for sociopaths, but as a rule, most ethical systems are ultimately designed to take advantage of the people who actually try to follow them, as opposed to pay lip service to them.

Comment author: Lukas_Gloor 02 May 2015 10:53:52AM *  0 points [-]

Good points. My entire post assumes that people are interested in figuring out what they would want to do in every conceivable decision-situation. That's what I'd call "doing ethics", but you're completely correct that many people do something very different. Now, would they keep doing what they're doing if they knew exactly what they're doing and not doing, i.e. if they were aware of the alternatives? If they were aware of concepts like agentyness? And if yes, what would this show?

I wrote down some more thoughts on this in this comment. As a general reply to your main point: just because people act as though they are interested in x rather than y doesn't mean that they wouldn't rather choose y if they were more informed. And to me, choosing something because one is not optimally informed seems like a bias, which is why I thought the comparison/the term "moral anti-epistemology" has merit. However, under a more Panglossian interpretation of ethics, you could just say that people want to do what they do, and that this is perfectly fine. It depends on how much you value ethical reflection (there is quite a rabbit hole to go down here, actually, having to do with the question of whether terminal values are internal or chosen).
