
Comment author: Lumifer 15 November 2016 03:34:52PM 2 points [-]

Keep in mind that all the empirical data on the basis of which we conclude that democracy is an okay political system comes from reality which includes stupid and ignorant electorates.

Comment author: Carinthium 15 November 2016 11:10:51PM 0 points [-]

A good question to keep in mind is how much real power the electorate has, as opposed to entrenched bureaucrats or de facto oligarchies.

Comment author: Sable 14 November 2016 11:03:15PM 4 points [-]


Unless I am much mistaken, the reason that no one has yet used Nuclear Weapons is Mutually Assured Destruction, the idea that there can be no victor in a nuclear war. MAD holds so long as the people in control of nuclear weapons have something to lose if everything gets destroyed, and Trump has grandchildren.

Grandchildren who would burn in nuclear fire if he ever started a nuclear war.

So I am in no way sympathetic to any argument that he's stupid enough to start one. He has far too much to lose.


I believe that the sets of skills necessary to be a good president, and to be elected president, are two entirely separate things. They may be correlated, but I doubt they're correlated that highly; a popularity contest selects for popularity, after all.

So far, we have information on Trump's skill set as a businessman: immoral and unethical perhaps, but ultimately very successful.

And we have information on Trump's skill set as a Presidential Candidate: bombastic, brash, witty, politically incorrect and able to motivate large numbers of people to vote for him.

We have no information on what Trump will be like as President; that's the gamble. We can guess, but trends don't always continue, and I suspect, based on more recent data, that Trump has an inkling that now is not the time to do anything drastic.


Aside from the usual LW topics concerning existential risk (e.g. AI, climate change), my biggest concern is Islam. Mutually Assured Destruction only works when those with the nuclear weapons have something to lose, and if someone with such weapons genuinely believes that they and their family will go to heaven for using them, then MAD no longer applies.

From what meager evidence I can gather, I believe that Trump lowers the chance of such a war breaking out compared to Clinton. We've had a chance to see what Clinton's foreign policy looks like, and so far as I can tell, it isn't lowering the risk of nuclear war. It's heightening it.

Assuming other existential risks would be equal under either administration (which is a very questionable assumption, granted, and I would be happy to discuss it), that makes Trump look at the very least no worse than Clinton when it comes to existential risk.

I'd also like to note that I've been told plenty of people thought that Ronald Reagan would start a nuclear war with Russia, and he did nothing of the sort. Granted, I wasn't around then, so it's secondhand information, but there you go.


I don't know about the rest of you, but I am sick of having to expend copious amounts of mental energy trying to remain as rational as I can throughout this election cycle. I've been glad to see in this thread that we LW's do, in fact, put our money where our mouths are when it comes to trying to navigate, circumvent, or otherwise evade the Mindkiller.

If you disagree with anything I have to say, please respond - if my thinking is wrong, I want your help to make it better, to make it closer to correct.

Comment author: Carinthium 15 November 2016 10:59:34PM 0 points [-]

Question. I admit I have a low EQ here, but I'm not sure if 4) is sarcasm or not. It would certainly make a lot of sense if "I've been glad to see in this thread that we LW's do, in fact, put our money where our mouths are when it comes to trying to navigate, circumvent, or otherwise evade the Mindkiller." were sarcasm.

I would have said we had information on 2), but I've made so many wrong predictions about Donald Trump privately that I think my private opinion has lost all credibility there. 1) makes sense.

I can see why you might be afraid of war breaking out with Russia, but why do you consider Islam a major threat? Maybe you don't and I'm misinterpreting you, but given how little damage terrorist attacks actually do, isn't Islam a regional problem to which the West massively overreacts?

Comment author: Lumifer 15 November 2016 03:35:56PM 0 points [-]

the Romans ... were under misconceptions about the nature of male and female minds

And what evidence do you have that they laboured under such major misconceptions which we successfully overcame?

Comment author: Carinthium 15 November 2016 10:50:54PM 0 points [-]

I was trying to say with my second paragraph that we specifically cannot be sure about that. My first paragraph was simply my best effort at interpreting what I think hairyfigment thinks, not a statement of what I believe to be true.

From my vague recollections I think the idea is worth looking up one way or the other. After all, a massive portion of modern culture is under the impression there are no gender differences, and there are other instances of clear major misconceptions I actually can attest to throughout history. But I don't have any idea about the Romans.

Comment author: TheAncientGeek 26 October 2016 09:36:09AM *  0 points [-]

You are currently saying that the good is what people fundamentally value, and what people fundamentally value is good....for them. To escape vacuity, the second phrase would need to be cashed out as something like "side survival".

But whose survival? If I fight for my tribe, I endanger my own survival; if I dodge the draft, I endanger my tribe's.

Real world ethics has a pretty clear answer: the group wins every time. Bravery beats cowardice, generosity beats meanness... these are human universals. If you reverse engineer that observation back into a theoretical understanding, you get the idea that morality is something programmed into individuals by communities to promote the survival and thriving of communities.

But that is a rather different claim to The Good is the Good.

Comment author: Carinthium 15 November 2016 12:41:30AM 0 points [-]

Clarification please. How do you avoid this supposed vacuity applying to basically all definitions? Taking a quick definition from a Google Search: A: "I define a cat as a small domesticated carnivorous mammal with soft fur, a short snout, and retractile claws." B: "Yes, but is that a cat?"

Which could eventually lead back to A saying that:

A: "Yes you've said all these things, but it basically comes back to the claim a cat is a cat."

Comment author: TheAncientGeek 20 October 2016 01:20:04PM *  1 point [-]

The difference is not between two cars, yours and mine, but between a passenger ship and a cargo ship, built for two different purposes and two different classes of users.

That seems different to what you were saying before.

This is well explored in "Three Worlds Collide". Yudkowsky's vision of morality is such that it assigns different moralities to different aliens, and the same morality to the same species (I'm using your convention). When different worlds collide, it is moral for us to stop the babyeaters from eating babies, and it is moral for the superhappies to happify us. I think Eliezer is correct in showing that the only solution is avoiding contact at all.

There's not much objectivity in that.

Why is it so important that our morality is the one that motivates us? People keep repeating it as though it's a great revelation, but it's equally true that babyeater morality motivates babyeaters, so the situation comes out looking symmetrical and therefore relativistic.

Comment author: Carinthium 15 November 2016 12:27:16AM *  0 points [-]

Maybe we should be abandoning the objectivity requirement as impossible. As I understand it this is in fact core to Yudkowsky's theory: an "objective" morality would be the tablet he refers to as something to ignore.

I'm not entirely on Yudkowsky's side in this. My view is that moral desires, whilst psychologically distinct from selfish desires, are not logically distinct and so the resolution to any ethical question is "What do I want?". There is the prospect of coordination through shared moral wants, but there is the prospect of coordination through shared selfish wants as well. Ideas of "the good of society" or "objective ethical truth" are simply flawed concepts.

But I do think Yudkowsky has a good point both of you have been ignoring. His stone tablet analogy, if I remember correctly, sums it up.

"I think Eliezer is correct in showing that the only solution is avoiding contact at all.": Assumes that there is such a thing as an objective solution, if implicitly.

"The difference is not between two cars, yours and mine, but between a passenger ship and a cargo ship, built for two different purposes and two different classes of users.": Passenger and cargo ships both have purposes within human morality. Alien moralities are likely to contradict each other.

"There's not much objectivity in that.": What if objectivity in the sense you describe is impossible?

"Why is it so important that our morality is the one that motivates us? People keep repeating it as though it's a great revelation, but it's equally true that babyeater morality motivates babyeaters, so the situation comes out looking symmetrical and therefore relativistic.": If it isn't, then it comes back to the amoralist challenge. Why should we even care?

Comment author: TheAncientGeek 12 October 2016 04:05:54PM *  1 point [-]

Unpacking "should" as "morally obligated to" is potentially helpful, but only insofar as you can give separate accounts of "moral" and "obligatory".

The elves are not moral. Not just because I, and humans like me happen to disagree with them, no, certainly not. The elves aren’t even trying to be moral. They don’t even claim to be moral. They don’t care about morality. They care about “The Christmas Spirit,” which is about eggnog and stuff

That doesn't generalise to the point that non-humans have no morality. You have made things too easy on yourself by having the elves concede that the Christmas Spirit isn't morality. You need to put forward some criteria for morality and show that the Christmas Spirit doesn't fulfil them. (One of the odd things about the Yudkowskian theory is that he doesn't feel the need to show that human values are the best match to some pretheoretic notion of morality; he instead jumps straight to the conclusion.)

The hard case would be some dwarves, say, who have a behavioural code different from our own, and who haven't conceded that they are amoral. Maybe they have a custom whereby any dwarf who hits a rich seam of ore has to raise a cry to let other dwarves have a share, and any dwarf who doesn't do this is criticised and shunned. If their code of conduct passes the duck test... is regarded as obligatory, involves praise and blame, and so on... why isn't that a moral system?

This is so weird to them that they’d probably just think of it as…ehh, what? Just weird. They couldn’t care less. Why on earth would they give food to millions of starving children? What possible reason…who even cares?


If they have failed to grasp that morality is obligatory, have they understood it at all? They might continue caring more about eggnog, of course. That is beside the point... morality means what you should care about, not what you happen to do.

Morality needs to be motivating, and rubber stamping your existing values as moral achieves that, but being motivating is not sufficient. A theory of morality also needs to be able to answer the Open Question objection, meaning in this case, the objection that it is not obvious that you should value something just because you do.

So, to say the elves have their own “morality,” is not quite right. The elves have their own set of things that they care about instead of morality

That is arguing from the point that morality is a label for whatever humans care about, not toward it.

This helps us see the other problem, when people say that "different people at different times in history have been okay with different things, who can say who's really right?"

There are many ways of refuting relativism, and most don't involve the claim that humans are uniquely moral.

Morality is a fixed thing. Frozen, if you will. It doesn’t change.

It is human value, or it is fixed: choose one. Humans have valued many different things. One of the problems with the rubber-stamping approach is that things the audience will see as immoral, such as slavery and the subjugation of women, have been part of human value.

Rather, humans change. Humans either do or don't do the moral thing. If they do something else, that doesn't change morality, but rather, it just means that that human is doing an immoral thing.

If that is true, then you need to stop saying that morality is human values, and start saying morality is human values at time T. And justify the selection of time, etc. And even at that, you won't support your other claims, because what you need to prove is that morality is unique, that only one thing can fulfil the role.

Rather, humans happen to care about moral things. If they start to care about different things, like slavery, that doesn’t make slavery moral, it just means that humans have stopped caring about moral things.

If it is possible for human values to diverge from morality, then something else must define morality, because human values can't diverge from human values. So you are not using a stipulative definition here... although you are when you argue that elves can't be moral. Here, you and Yudkowsky have noticed that your theory entails the same problem as relativism: if morality is whatever people value, and if what people happen to value is intuitively immoral (slavery, torture, whatever), then there's no fixed standard of morality. The label "moral" has been placed on a moving target. (Standard relativism usually has this problem synchronously, i.e. different communities are said to have different but equally valid moralities at the same time, but it makes little difference if you are asserting that the global community has different but equally valid moralities at different times.)

So, when humans disagree about what’s moral, there’s a definite answer.

There is from many perspectives, but given that human values can differ, you get no definite answer by defining morality as human value. You can avoid the problems of relativism by setting up an external standard, and there are many theories of that type, but they tend to have the problem that the external standard is not naturalistic: God's commands, the Form of the Good, and so on. I think Yudkowsky wants a theory that is non-arbitrary and also naturalistic. I don't think he arrives at a single theory that does both. If the Moral Equation is just a label for human intuition, then it suffers from all the vagaries of labelling values as moral that beset the original theory.

How do we find that moral answer, then? Unfortunately, there is no simple answer

Why doesn't that constitute an admission that you don't actually have a theory of morality?

You see, we don’t know all the pieces of morality, not so we can write them down on paper. And even if we knew all the pieces, we’d still have to weigh which ones are worth how much compared to each other.

On the assumption that all human value gets thrown into the equation, it certainly would be complex. But not everyone has that problem, since some people have criteria for some things being moral and others not, which simplify the equation and allow you to answer the questions you were struggling with above. You know, you don't have to pursue assumptions to their illogical conclusions.

Humans all care about the same set of things (in the sense I’ve been talking about). Does this seem contradictory? After all, we all know humans do not agree about what’s right and wrong; they clearly do not all care about the same things.

On the face of it, it's contradictory. There may be something else that smooths out the contradictions, such as the Moral Equation, but that needs justification of its own.

Well, they do. Humans are born with the same Morality Equation in their brains, with them since birth.

Is that a fact? It's eminently naturalistic, but the flip side to that is that it is, therefore, empirically refutable. If an individual's Morality Equation is just how their moral intuition works, then the evidence indicates that intuitions can vary enough to start a war or two. So the Morality Equation appears not to be conveniently the same in everybody.

How then all their disagreements? There are three ways for humans to disagree about morals, even though they're all born with the same morality equation in their heads ((1) don't do it, (2) don't do it right, (3) don't want to do it).

What does it mean to do it wrong, if the moral equation is just a label for black-box intuitive reasoning? If you had an external standard, as utilitarians and others do, then you could determine whose use of intuition is right according to it. But in the absence of an external standard, you could have a situation where both parties intuit differently, and both swear they are taking all factors into account. Given such a stalemate, how do you tell who is right? It would be convenient if the only variations in the output of the Morality Equation were caused by variations in the input, but you cannot assume something is true just because it would be convenient.

If the Moral Equation is something ideal and abstract, why can't aliens partake? That model of ethics is just what is needed to explain how you can have multiple varieties of object-level morality that actually all are morality: different values fed into the same equation produce different results, so object-level morality varies although the underlying principle is the same.

Comment author: Carinthium 14 November 2016 11:52:51PM -1 points [-]

The Open Question argument is theoretically flawed because it relies too much on definitions (see this website's articles on how definitions don't work that way, more specifically http://lesswrong.com/lw/7tz/concepts_dont_work_that_way/).

The truth is that humans have an inherent instinct towards seeing "Good" as an objective thing, that corresponds to no reality. This includes an instinct towards doing what, thanks to both instinct and culture, humans see as "good".

But although I am not a total supporter of Yudkowsky's moral theory, he is right in that humans want to do good regardless of some "tablet in the sky". Those who define "good" in such terms try to resolve ethical questions by bypassing this instinct and referencing instead what humans actually want to do. This contradicts human instinct, hence the philosophical force of the Open Question argument, but it is the only way to have a coherent moral system.

The alternative, as far as I can tell, would be that ANY coherent formulation of morality whatsoever could be countered with "Is it good?".

Comment author: Lumifer 24 October 2016 03:08:20PM 2 points [-]

having them read a lot of women's minds

I don't understand what that means.

You think no male Roman actually knew what women think? The Roman matrons were entirely voiceless?

Comment author: Carinthium 14 November 2016 11:00:31PM 0 points [-]

I think hairyfigment is of the belief that the Romans (and in the most coherent version of his claim you would have to say male and female) were under misconceptions about the nature of male and female minds, and believes that "a sufficiently deep way" would mean correcting all these misconceptions.

My view is that we really can't say that as things stand. We'd have to know a lot more about the Roman beliefs about the male and female minds, and compare them against what we know to be accurate about male and female minds.

Comment author: Lumifer 14 November 2016 07:24:56PM *  3 points [-]

Well then, is there someone or someones you could trust to make such a decision? And what do you base your trust on?

Comment author: Carinthium 14 November 2016 10:13:20PM *  0 points [-]

On a purely theoretical level (which is fun to talk about, so I think worth talking about) I would like to see one of the high-status and respected members of the rationalist movement (Yudkowsky, Hanson, etc.) in power. They'd become corrupt eventually, but do a lot of good before they did.

On a practical level, our choices are the traditional establishment (which has shown its major flaws), backing Trump, or possibly some time in the future backing Sanders. Unless somebody here has a practical way to achieve something different, that's all we have.

(EDIT: For what it's worth, I base my trust on their works, somewhat on their theories of rationality, and the fact that reviewing ideas in far mode for so long has them "nailed" to policies. Without, say, an implacable Congress in their way, I think they'd do enough good to outweigh their inevitable corruption.)

Comment author: Carinthium 14 November 2016 07:01:41AM 5 points [-]

What is this even? I don't get it.

Comment author: TheAncientGeek 12 November 2016 12:33:49PM 1 point [-]

Why Massimo Pigliucci thinks something like that

But then I noticed that the post was a follow up to two more, one entitled “If many-worlds had come first,” the other “The failures of Eld science.” Oh crap, now I had to go back and read those before figuring out what Yudkowsky was up to. (And before you ask, yes, those posts too linked to previous ones, but by then I had had enough.)

Except that that didn’t help either. Both posts are rather bizarre, if somewhat amusing, fictional dialogues, one of which doesn’t even mention the word “Bayes” (the other refers to it tangentially a couple of times), and that certainly constitute no sustained argument at all. (Indeed, “The failures of Eld science” sounds a lot like the sort of narrative you find in Atlas Shrugged, and you know that’s not a compliment coming from me.)


Comment author: Carinthium 13 November 2016 12:37:37AM 1 point [-]

Got it. Thanks.
