Comment author: Lumifer 14 October 2016 02:28:44PM 0 points [-]

white supremacism

That's actually Sino-Judaic supremacism, you white gweilo untermenschen!

Comment author: Lumifer 14 October 2016 02:26:41PM 0 points [-]

What are you even trying to say?

I'm saying that if you can't recognize Friendliness (and I don't think you can), trying to build a FAI is pointless as you will not be able to answer "Is it Friendly?" even when looking at it.

I think an AI will easily be able to learn human values from observations.

So if you can't build a supervised model, you think going to unsupervised learning will solve your problems? The quote I gave you is part of human values -- humans do value triumph over their enemies. Evolution taught humans to eliminate competition, it taught them to be aggressive and greedy -- all human values. Why do you think your values will be preferred by the AI to values of, say, ISIS or third-world Maoist guerrillas? They're human, too.

Comment author: scarcegreengrass 14 October 2016 02:11:15PM 0 points [-]

Both of those Ito remarks referenced supposedly widespread perspectives. But personally, i have almost never encountered these perspectives before.

Comment author: scarcegreengrass 14 October 2016 02:03:01PM 0 points [-]

No, i don't. One possible explanation for the bug is that, on the successful attempt, i used the dropdown to post the link directly to Discussion rather than first to Drafts.

Comment author: hairyfigment 14 October 2016 11:09:12AM 0 points [-]

I'm getting really sick of this claim that Eliezer says all humans would agree on some morality under extrapolation. That claim is how we get garbage like this. At no point do I recall Eliezer saying psychopaths would definitely become moral under extrapolation. He did speculate about them possibly accepting modification. But the paper linked here repeatedly talks about ways to deal with disagreements which persist under extrapolation:

In poetic terms, our coherent extrapolated volition is our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted. (emphasis added)

Coherence is not a simple question of a majority vote. Coherence will reflect the balance, concentration, and strength of individual volitions. A minor, muddled preference of 60% of humanity might be countered by a strong, unmuddled preference of 10% of humanity. The variables are quantitative, not qualitative.

(Naturally, Eugine Nier as "seer" downvoted all of my comments.)

The metaethics sequence does say IMNSHO that most humans' extrapolated volitions (maybe 95%) would converge on a cluster of goals which include moral ones. It furthermore suggests that this would apply to the Romans if we chose the 'right' method of extrapolation, though here my understanding gets hazier. In any case, the preferences that we would loosely call 'moral' today, and that also survive some workable extrapolation, are what I seem to mean by "morality".

One point about the ancient world: the Bhagavad Gita, produced by a warrior culture though seemingly not by the warrior caste, tells a story of the hero Arjuna refusing to fight until his friend Krishna convinces him. Arjuna doesn't change his mind simply because of arguments about duty. In the climax, Krishna assumes his true form as a god of death with infinitely many heads and jaws, saying, 'I will eat all of these people regardless of what you do. The only deed you can truly accomplish is to follow your warrior duty or dharma.' This view seems plainly environment-dependent.

In response to comment by MrMind on Quantum Bayesianism
Comment author: hairyfigment 14 October 2016 10:15:08AM 0 points [-]

No, that is not the question I asked. The question I asked was what the god-damned imaginary numbers mean, if they aren't describing reality. Because they don't look like subjective probability.

Comment author: MrMind 14 October 2016 09:13:26AM 0 points [-]

That's the basic, some say the only, mystery of MWI: why does the world operate according to subjective probability?
You'll find this question posed in several places in the Sequences.

Comment author: MrMind 14 October 2016 09:11:09AM *  0 points [-]

As far as I know, neoEverett is the smallest realist interpretation: Eliezer argued not only against anti-realism, but also in favor of the smallest theory that falls out of the formalism.

Comment author: MrMind 14 October 2016 08:24:02AM *  0 points [-]

Ah, as it happens, I have none of those conflicts. I asked because I'm preparing an article on utilitarianism, and I hit upon the question I posted as a good proxy for the hard problems in adopting it as a moral theory.
But I can understand that someone who believes this might have a lot of internal struggles.

Full disclosure: I'm a Duster, not a Torturer. But I'm trying to steelman Torture.

Comment author: Houshalter 14 October 2016 06:23:52AM 0 points [-]

Do you expect me to give you the complete solution to AI right here, right now? What are you even trying to say? You seem to be arguing that FAI is impossible. How can you possibly know that? Just because you can't immediately see a solution to the problem, doesn't mean a solution doesn't exist.

I think an AI will easily be able to learn human values from observations. It will be able to build a model of humans, and predict what we will do and say. It certainly won't base all its understanding on a stupid movie quote. The AI will know what you want.

Comment author: waveman 14 October 2016 03:28:42AM 1 point [-]

How does low IQ directly cause crime?

See any criminology textbook. Low IQ is a strong predictor of criminal behavior.

Why? This is more speculative:

  1. Inability to foresee consequences of actions.

  2. Opportunity cost is lower - if you have a good chance to enjoy a good income through talent and hard work, then the alternative is less appealing.

  3. Low IQ people are more likely to be at Kegan Development Level 2, which impairs empathy.

Comment author: entirelyuseless 14 October 2016 03:23:22AM 0 points [-]

What I am saying is that being enormously powerful and useful does not determine the meaning of a word. Yes, something that optimizes can be enormously useful. That doesn't make it intelligent, just like it doesn't make it blue or green. And for the same reason: neither "intelligent" nor "blue" means "optimizing." And your case of evolution proves that; evolution is not intelligent, even though it was enormously useful.

"This claim doesn't follow from your premise at all." Not as a logical deduction, but in the sense that if you pay attention to what I was talking about, you can see that it would be true. For example, precisely because they have general knowledge, human beings can pursue practically any goal, whenever something or someone happens to persuade them that "this is good." AIs will have general knowledge, and therefore they will be open to pursuing almost any goal, in the same way and for the same reasons.

Comment author: Brillyant 14 October 2016 01:17:56AM 0 points [-]

low IQ

How does low IQ directly cause crime?

properly police black neighborhoods

What does this entail?

Comment author: chron 14 October 2016 12:40:40AM 0 points [-]

What is its cause in your view?

A combination of low IQ and the fact that the political will to properly police black neighborhoods doesn't exist due to the type of "anti-racism" you support.

Comment author: Brillyant 14 October 2016 12:37:47AM -1 points [-]

Not to mention that we'll never solve the problem of large amounts of black-on-black crime if we can't admit its cause.

What is its cause in your view? How do we solve it?

Comment author: username2 13 October 2016 11:42:29PM *  1 point [-]

But, if we cut to what I believe is the heart of your point, then yes, she absolutely should. Let's scale the problem up for a moment. Say instead of 5 it's 500. Or 5 million. Or the entire rest of humanity aside from the mother and her baby. At what point does sacrificing her child become the right decision? Really, this boils down to the idea of shut up and multiply.

Never, in my opinion. Put every other human being on the tracks (excluding other close family members to keep this from being a Sophie's choice "would you rather..." game). The mother should still act to protect her child. I'm not joking.

You can post-facto rationalize this by valuing the kind of societies where mothers are ready to sacrifice their kids, and are indeed encouraged to do so to save other lives, vs. the world where mothers simply always protect their kids no matter what.

But I don't think this is necessary -- you don't need to validate it on utilitarian grounds. Rather it is perfectly okay for one person to value some lives more than others. We shouldn't want to change this, IMHO. And I think the OP's question about donating 100% to charity, to their own detriment, is symptomatic of the problems that arise from utilitarian thinking. After all, if the OP were not having an internal conflict between internal morals and supposedly rational utilitarian thinking, he wouldn't have asked the question...

Comment author: Tem42 13 October 2016 11:23:22PM 0 points [-]

If it is severe enough that you are posting here about it making you feel bad, it is worth trying to replace it with a mental habit that works equally well to prevent future errors but feels better.

It is good to gain control over your mental habits in general, and this sounds like a good place to start.

If those statements appear true to you, no other analysis of this behavior is likely necessary.

Comment author: DanArmak 13 October 2016 11:19:20PM 3 points [-]

Joi Ito said several things that are unpleasant but are probably believed by most people, and so I am glad for the reminder.

JOI ITO: This may upset some of my students at MIT, but one of my concerns is that it’s been a predominately male gang of kids, mostly white, who are building the core computer science around AI, and they’re more comfortable talking to computers than to human beings. A lot of them feel that if they could just make that science-fiction, generalized AI, we wouldn’t have to worry about all the messy stuff like politics and society. They think machines will just figure it all out for us.

Yes, you would expect non-white, older, women who are less comfortable talking to computers to be better suited dealing with AI friendliness! Their life experience of structural oppression helps them formally encode morals!

ITO: [Temple Grandin] says that Mozart and Einstein and Tesla would all be considered autistic if they were alive today. [...] Even though you probably wouldn’t want Einstein as your kid, saying “OK, I just want a normal kid” is not gonna lead to maximum societal benefit.

I should probably get a daily reminder that most people would not, in fact, want their kid to be as smart, impactful and successful in life as Einstein, and would prefer "normal", not-too-much-above-average kids.

Comment author: chron 13 October 2016 11:02:36PM *  1 point [-]

If there truly are meaningful genetic differences between races, then so be it. But that seems to be the justification for the portion of "white supremacist" Trump supporters I mentioned above. It's an angry racism that seems likely to be problematic.

Well, compared to the hypothetical problems this "racism" or "white supremacism" might supposedly cause in the future, the type of "police and all whites are racist" anti-racism you are promoting is having problematic consequences right now, in the form of anti-police and generally anti-white rioting by blacks in places like Ferguson, Baltimore, Charlotte, etc. Not to mention that we'll never solve the problem of large amounts of black-on-black crime if we can't admit its cause.

Comment author: chron 13 October 2016 10:43:36PM 0 points [-]

However, not necessarily extreme in terms of what some immigrant groups experienced before arriving in the US.

Comment author: Brillyant 13 October 2016 08:57:20PM -1 points [-]

Under a common definition of racism as belief in meaningful differences between races, these views are racism. So?

I mean "racism" in a way that is significantly consequential for those who are discriminated against. An active racism.

If there truly are meaningful genetic differences between races, then so be it. But that seems to be the justification for the portion of "white supremacist" Trump supporters I mentioned above. It's an angry racism that seems likely to be problematic.

Anyway, thanks for your thoughts.

Comment author: Brillyant 13 October 2016 08:30:33PM 0 points [-]

Extreme in terms of U.S. then? Extreme is relative, right? That's what you're saying?

Comment author: Lumifer 13 October 2016 08:20:51PM *  2 points [-]

These views seem very likely to lead to racism.

LOL. "Could lead to dancing".

Under a common definition of racism as belief in meaningful differences between races, these views are racism. So?

Comment author: chron 13 October 2016 08:18:44PM 0 points [-]

In regard to social issues, such as the murder rate by race you cited earlier, I'm not compelled to believe blacks are genetically wired to behave poorly and kill more often. Rather, as I've said, I believe there has been an extreme set of circumstances in the U.S. that have led to lots of problems.

You and Lumifer have two different theories to explain the difference in murder rate. The rational way to resolve this dispute is to look at areas where the two theories make different predictions and see which set of predictions is correct. This is more or less what Lumifer has been doing in this thread. You have been coming up with increasingly flimsy rationalizations to avoid coming to the obvious conclusions. Furthermore, the only prediction you've made using your theory, the continued existence of "racist attitudes" against blacks, is something Lumifer's theory also predicts.

Comment author: chron 13 October 2016 08:09:45PM 0 points [-]

In the case of African Americans' treatment in U.S. history, it's an extreme set of "nurture" circumstances

No it's not. It only seems that way to you because you know almost no non-US history.

Comment author: chron 13 October 2016 08:04:56PM *  1 point [-]

These views seem very likely to lead to racism.

What do you mean by "racism"? If you mean "the belief that people of different races differ in ability", then yes. Of course, in that case being "racist" is in fact rational.

As Eliezer likes to say "that which can be destroyed by the truth should be".

Comment author: Brillyant 13 October 2016 07:54:58PM -1 points [-]

Hm. These views seem very likely to lead to racism.

I've read Breitbart frequently since Steve Bannon was added to Trump's campaign because I'm fascinated with how Trump (an obvious hustler/fraud/charlatan in my view) has managed to get this close to the Oval Office. It's been illuminating (in a disturbing way) in understanding where I now believe a lot of the Trump support is coming from.

I'm confident a portion of his support is just Red-Team-no-matter-what Repubs. And some are one-issue Pro-Life Christians. And some are fiscal conservatives who are sincerely just concerned about the debt and spending. And some are blue collar workers in areas (Ohio, Pennsylvania, etc.) where the global economy/technology caused manufacturing to dry up decades ago, and they are mad as hell about the facts of the world and will just keep voting to change something, anything, until the day they die...

But there is also this (disturbingly large) element of the movement that think non-white people are less than white people. Like, this group of Trump supporters are literally white supremacists—they believe white people are better suited for civilization. And, of course, no one can say that and politically get away with it in 2016, so they use all sorts of dog whistle-y language to imply it—including the main Trumpian slogan, "Make America Great Again™"

Comment author: ChristianKl 13 October 2016 07:03:42PM 0 points [-]

Is it because of the IQ difference you believe exists between black and whites?

Lumifer likely believes that IQ predicts school performance and there are many studies that back this claim. He quite specifically said that you can calculate outcomes.

However, not all white/black people are the same. Statements about the average IQ are statements about averages: not all whites have the same IQ, and not all black people have the same IQ. Low-IQ white people have low-IQ children.

In Germany a white child named "Kevin" is likely to have a lower IQ than a child named "Jakob" and if you run your implicit bias tests you find that there's bias against the child named "Kevin".

Comment author: Lumifer 13 October 2016 07:00:24PM 1 point [-]

How can this not matter much?

Stupid people are still people. They have rights. Their propensity to make stupid decisions is not sufficient to take away from them the power to make decisions.

Is it because of the IQ difference you believe exists between black and whites?

Yes.

your reaction to those who are?

Is a shrug :-) People have all kinds of political beliefs, I don't find the white nationalists to be extraordinary.

As to re-colonising Africa, see the first paragraph :-)

Comment author: TheAncientGeek 13 October 2016 06:30:32PM *  0 points [-]

it seems you have not understood the idea. Were there any parts of the post that seemed unclear that you think I might make clearer?

Almost everything. You explain morality by putting forward one theory. Under those circumstances, most people would expect to see some critique of other theories, and an explanation of why your theory is the One True Theory. You don't do the first, and it is not clear that you are even trying to do the second.

Because the whole point is that to say something is moral = you should do it = it is valued according to the morality equation.

And to say that only humans have morality. But if there is something the Elves should do, then morality applies to them, contradicting that claim.

For an Elf to agree something is moral is also to agree that they should do it. When I say they agree it's moral and don't care, that also means they agree they should do it and don't care.

That doesn't help. For one thing, humans don't exactly want to be moral...their moral fibre has to be buttressed by various punishments and rewards. For another, "should" and "want to" are not synonyms...but "moral" and "what you should do" are. So if there is something the Elves should do, at that point you have established that morality applies to the Elves, and the fact that they don't want to do it is a side-issue. (And of course they could tweak their own motivations by constructing punishments and rewards).

Something being Christmas Spiritey = you Spiritould do it. Humans might agree that something is Christmas Spirit-ey, and agree that they spiritould do it, they just don't care about what they spiritould do, they only care about what they should do.

OK. Now you seem to be saying, without quite making it explicit, that morality is by definition unique to humans, because the word "moral" just labels what motivates humans, in the way that "Earth" or "Terra" labels the planet where humans live. That claim isn't completely incomprehensible, it's just strange and arbitrary, and what is considerably stranger is the way you feel no need to defend it against alternative theories -- the main alternative being that morality is multiply instantiable, that other civilisations could have their own versions of it, in the way they could have their own versions of houses or money.

You state it as though it is obvious, yet it has gone unnoticed for thousands of years.

Suppose I were to announce that dark matter is angels' tears. Doesn't it need some expansion? That's how your claim reads; that's the outside view.

Obligatory is just a kind of "should." Elves agree that some things are obligatory, and don't care, they care about what's ochristmastory.

Obligatory is a kind of "should" that shouldn't be overridden by other considerations. (A failure to do what is obligatory is possible, of course, but it is important to remember that it is seen as a lapse, as something wrong, not a valid choice.) Yet the Elves are overriding it, casting doubt on whether they have actually understood the concept of "obligatory".

Likewise, to say that today's morality equation is the "best" is to say that today's morality equation is the equation which is most like today's morality equation. Tautology.

Since anyone can say that at any time, that breaks the meaning of "best", which is supposed to pick out something unique. That would be a reductio ad absurdum of your own theory.

Comment author: Brillyant 13 October 2016 06:17:25PM *  -2 points [-]

There are smart people, there are stupid people, and the correlation to some outwardly visible feature like the colour of the skin doesn't matter much.

How do you mean? You're saying you believe it to be true that, generally, people with black skin color are more likely to have a significantly lower IQ than people with white skin color... And you believe that IQ is correlated with life outcomes. How can this not matter much?

I find affirmative action counter-productive.

I also have the sense this may be true in many instances. The theory seems solid, but I'm not sure it works as intended in practice.

For another example, I don't believe the claims that inner-city schools (read: black) lag behind suburban schools (read: not black) because of lack of funding or because of surrounding poverty.

Why do they lag behind? Is it because of the IQ difference you believe exists between black and whites?

...

You say you're not a white nationalist...I'm curious about your reaction to those who are? In regard to segregation, for instance... You say you don't think the Europeans should re-colonise Africa for the natives' own good—Why not?

Comment author: TheAncientGeek 13 October 2016 05:06:17PM 0 points [-]

Technically, you could believe that people are equally allowed to be enslaved. All people equal + it's wrong to make me a slave = it's wrong to make anyone a slave.

You realise that's a reinvention of Kant?

Comment author: SithLord13 13 October 2016 03:37:16PM 0 points [-]

There are a lot of conflicting aspects to consider here outside of a vacuum. Discounting the unknown unknowns, which could factor heavily here since it's an emotionally biasing topic, you've got the fact that the baby is going to be raised by a presumably attentive mother, as opposed to the 5 who wound up in that situation once, showing at least some increased risk of falling victim to such a situation again. Then you have the psychological damage to the mother, which is going to be even greater because she had to do the act herself. Then you've got the fact that a child raised by a mother who is willing to do it has a greater chance of being raised in such a way as to have a net positive impact on society. Then you have the greater potential for preventing the situation in the future, caused by the increased visibility of the higher death toll. I'm certain there are more aspects I'm failing to note.

But, if we cut to what I believe is the heart of your point, then yes, she absolutely should. Let's scale the problem up for a moment. Say instead of 5 it's 500. Or 5 million. Or the entire rest of humanity aside from the mother and her baby. At what point does sacrificing her child become the right decision? Really, this boils down to the idea of shut up and multiply.

In response to comment by qmotus on Quantum Bayesianism
Comment author: TheAncientGeek 13 October 2016 03:07:55PM 0 points [-]

There are modified theories, there is no unequivocal "face value".

In response to comment by MrMind on Quantum Bayesianism
Comment author: TheAncientGeek 13 October 2016 02:59:54PM 0 points [-]

In the Sequence, Eliezer made a strong case for the realist interpretation of QM (neo-Everettian many worlds), based on decoherence and Occam's razor.

It's tendentious to call MWI the only realist interpretation.

EY makes a case against CI, which in most circumstances would be a case against anti-realism. However, his version of CI is actually OR, another realist theory. So he never makes a case for realism against irrealism.

Comment author: Lumifer 13 October 2016 02:47:08PM 2 points [-]

just looking for a rough sketch

Well, you can probably go about it in the following way. IQ is and was a controversial concept. One of the lines of attack against it was that it is meaningless, that the number coming out of the IQ test does not correspond to anything in real life. This is often expressed as "IQ measures the skill of taking IQ tests".

To deal with this objection people ran a number of studies. Typically you take a set of young people and either give them a proper IQ test or rely on another test which is a decent IQ proxy -- usually the SAT in the US or one of the tests that the military gives to all its drafted or enlisted men. After that you follow that set of people and collect their life outcomes, from income to criminal records. Once you've done that you can see whether the measured IQ actually correlates to life outcomes. And yes, it does.

I don't have links to actual studies handy, but you can easily google them up, and you can take a look at a not-fully-rigorous description of the various tiers of IQ and what they mean in real-life terms.

Basically what these studies give you is the cost of an IQ point, cost in terms of a lot of things -- income, chance to end up in prison, longevity (high-IQ people are noticeably healthier), etc.

Given this, you can calculate the expected outcomes for the US black population. If their average IQ is 10-15 points lower, you can translate this into expected income (lower than the US mean), expected chance of a criminal conviction (higher than the US mean) and other things you're interested in. Once you've done that, you can compare your expected values with ones empirically observed. Any remaining gap will be due to something other than the IQ differential.
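The comparison described in the last paragraph can be sketched in a few lines. Every number below is made up purely for illustration (they are not estimates from any study); the point is only the shape of the calculation: predict the outcome gap from the IQ differential alone, then treat the leftover as "something other than IQ".

```python
# Sketch of the predicted-vs-observed comparison described above.
# All figures are hypothetical placeholders, not real estimates.

def expected_gap(effect_per_iq_point, iq_differential):
    # Outcome gap predicted by the IQ differential alone, assuming a
    # linear "cost per IQ point" as in the cohort studies described.
    return effect_per_iq_point * iq_differential

effect_per_point = 500.0   # hypothetical: dollars of annual income per IQ point
iq_diff = -12.0            # hypothetical group IQ differential in points
observed_gap = -9000.0     # hypothetical observed income gap vs. the mean

predicted = expected_gap(effect_per_point, iq_diff)  # -6000.0
residual = observed_gap - predicted                  # -3000.0: due to other factors
```

The `residual` line is the whole argument: whatever gap remains after the IQ-predicted portion is subtracted is, on this methodology, attributed to non-IQ causes.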

informs your politics

On a macro level it does not. There are smart people, there are stupid people, and the correlation to some outwardly visible feature like the colour of the skin doesn't matter much. I am not a white nationalist, I do not think the Europeans should re-colonise Africa for the natives' own good, etc.

On a micro level it does. For example, I find affirmative action counter-productive. For another example, I don't believe the claims that inner-city schools (read: black) lag behind suburban schools (read: not black) because of lack of funding or because of surrounding poverty. Throwing money at the problem will achieve nothing.

Comment author: qmotus 13 October 2016 02:34:07PM 0 points [-]

I'm certainly not an instrumentalist. But the argument that MWI supporters (and some critics, like Penrose) generally make, and which I've found persuasive, is that MWI is simply what you get if you take quantum mechanics at face value. Theories like GRW have modifications to the well-established formalism that we, as far as I know, have no empirical confirmation of.

Comment author: Lumifer 13 October 2016 02:24:26PM *  0 points [-]

if they were shown in a sufficiently deep way everything we know, they would be moved by it

That doesn't seem obvious to me at all.

Let's try it on gay marriage. Romans certainly knew and practiced homosexuality, same for marriage. What knowledge exactly do you want to convey to them to persuade them that gay marriage is a good thing?

I'm talking metaethics, what makes something moral

So, prescriptive. I am not sure in which way you consider the theories "failed" -- in the sense that they have not risen to the status of physics, meaning being able to empirically prove all their claims? That doesn't look to be a viable criterion. In the sense of not having taken over the world? I don't know, the divine command theory is (or, at least, has been) pretty good at that. You probably wouldn't want a single theory to take over the world, anyway.

Comment author: Lumifer 13 October 2016 02:22:08PM *  0 points [-]

If I train a neural network to recognize dogs, I have no way of knowing if it learned correctly.

Of course you do. You test it. You show it a lot of images (that it hasn't seen before) of dogs and not-dogs and check how good it is at differentiating them.
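The testing loop described here is just held-out evaluation: score the model on labeled examples it never saw during training. A minimal sketch, where `classify` and the toy `test_set` are stand-ins invented for illustration:

```python
# Held-out evaluation: the model never sees the test set during training.
def accuracy(classify, test_set):
    # Fraction of held-out (image, label) pairs the classifier gets right.
    correct = sum(1 for image, label in test_set if classify(image) == label)
    return correct / len(test_set)

# Toy stand-in classifier: images are feature vectors; "dog" iff the
# first feature exceeds 0.5. A real model would replace this lambda.
classify = lambda image: "dog" if image[0] > 0.5 else "not-dog"
test_set = [((0.9, 0.1), "dog"), ((0.2, 0.3), "not-dog"), ((0.7, 0.4), "dog")]

print(accuracy(classify, test_set))  # → 1.0
```

The open question in the exchange is exactly what such a labeled test set would even look like when the target concept is "human values" rather than "dog".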

How would that process work for an AI and human values?

the principle of letting AIs learn models from real world data

Right, human values: “A man's greatest pleasure is to defeat his enemies, to drive them before him, to take from them that which they possessed, to see those whom they cherished in tears, to ride their horses, and to hold their wives and daughters in his arms.”

Comment author: ChristianKl 13 October 2016 01:10:09PM 2 points [-]

tl;dr Obama doesn't really know what he's talking about but tries to use talking points to make sense of the new project.

In response to comment by qmotus on Quantum Bayesianism
Comment author: TheAncientGeek 13 October 2016 01:02:50PM *  0 points [-]

I guess that's possible, but how seriously should we take those when we have no empirical reasons to prefer them?

Doesn't that rebound on the argument for MWI?

Sincere and consistent instrumentalists may exist, but I think they are rare. What is much more common is for people to compartmentalise: to take an irrealist or instrumentalist stance about things that make them feel uncomfortable, while remaining cheerfully realist about other things.

At the end of the day, being able to predict phenomena isn’t that exciting. People generally do science because they want to find out about the world. And “rationalists”, internet atheists and so on generally do have ontological commitments: to the non-existence of gods and ghosts, some view about whether or not we are in a matrix, and so on.

Comment author: scarcegreengrass 13 October 2016 11:57:11AM 1 point [-]

Oh, this is much more complete, thanks.

Wow, it's surreal to hear Obama talking about Bostrom, Foom, and biological x risk.

Comment author: BiasedBayes 13 October 2016 11:45:12AM 0 points [-]

Morality binds and blinds. People derive moral claims from emotional and intuitive notions. It can feel good and moral to do immoral things. Objective morality has to be tied to evidence about what human wellbeing really is, not to moral intuitions that are adaptations to the benefit of one's ingroup, or to post hoc thought experiments about knowledge.

Comment author: DanArmak 13 October 2016 09:47:14AM 1 point [-]

Technically, you could believe that people are equally allowed to be enslaved.

In a sense, the ancient Romans did believe this. Anyone who ended up in the same situation - either taken as a war captive or unable to pay their debts - was liable to be sold as a slave. So what makes you think your position is objectively better than theirs?

"All men are created equal" emerges from two or more basic principles people are born with. You might say: "Look, you have value, yah? And your loved ones? Would they stop having value if you forgot about them? No? They have value whether or not you know them? How did you conclude they have value? Could that have happened with other people, too? Would you then think they had value? Would they stop having value if you didn't know them? No? Well, you don't know them; do they have value?"

This assumes without argument that "value" is something people intrinsically have or can have. If instead you view value as value-to-someone, i.e. I value my loved ones, but someone else might not value them, then there is no problem.

And it turns out that yes, most people did not have an intuition that anyone has intrinsic value just by virtue of being human. Most people throughout history assigned value only to ingroup members, to the rich and powerful, and to personally valued individuals. The idea that people are intrinsically valuable is historically very new, still in the minority today globally, and for both these reasons doesn't seem like an idea everyone should naturally arrive at if they only try to universalize their intuitions a bit.

In response to comment by qmotus on Quantum Bayesianism
Comment author: MrMind 13 October 2016 07:57:45AM *  1 point [-]

it's just that it's unclear to me how seriously we should take them at this stage

Well, categorical quantum mechanics is a program under development since 2008, and it gives you a quantum framework in any computational theory with enough symmetries (databases, linguistics, etc.).
It spawned quantum programming languages and a graphical calculus. So I think it's pretty successful and has to be taken seriously, albeit it's far from complete (it lacks a unified treatment of infinite systems, for example).

Comment author: Houshalter 13 October 2016 05:00:45AM 0 points [-]

since by that argument dogs and cats are optimization, and blue and green are optimization, and everything is optimization

I have no idea what you are talking about. Optimization isn't that vague of a word, and I tried to give examples of what I meant by it. The ability to solve problems and design technologies. Dogs and cats can't design technology. Blue and green can't design technology. Call it what you want, but to me that's what intelligence is.

And that's all that really matters about intelligence, is it's ability to do that. If you gave me a computer program that could solve arbitrary optimization problems, who cares if it can't speak language? Who cares if it isn't an agent? It would be enormously powerful and useful.

That is also why when AI is actually programmed, people will do it by trying to get something to understand language, and that will in fact result in the kind of AI that I was talking about, namely one that aims at vague goals that can change from day to day, not at paperclips.

Again this claim doesn't follow from your premise at all. AIs will be programmed to understand language... therefore they won't have goals? What?

Humans definitely have goals. We have messy goals. Nothing explicit like maximizing paperclips, but a hodge podge of goals that evolution selected for, like finding food, getting sex, getting social status, taking care of children, etc. Humans are also more reinforcement learners than pure goal maximizers, but it's the same principle.
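The distinction in that last sentence can be sketched with a toy two-action world (all names and numbers below are invented for illustration): a goal maximizer reads its objective directly, while a reinforcement learner has to estimate action values from observed rewards.

```python
import random

random.seed(1)

# A hypothetical world with two actions and fixed rewards.
rewards = {"forage": 1.0, "idle": 0.0}

# Explicit goal maximizer: knows the objective, picks its argmax directly.
goal_choice = max(rewards, key=rewards.get)

# Reinforcement learner: never sees the reward table, only sampled rewards.
estimates = {a: 0.0 for a in rewards}
counts = {a: 0 for a in rewards}
for _ in range(100):
    if random.random() < 0.1:                    # explore occasionally
        a = random.choice(list(rewards))
    else:                                        # exploit current estimates
        a = max(estimates, key=estimates.get)
    counts[a] += 1
    # incremental mean update toward the observed reward
    estimates[a] += (rewards[a] - estimates[a]) / counts[a]

learned_choice = max(estimates, key=estimates.get)
```

In this simple world both agents converge on the same behavior, which is the comment's point: "it's the same principle", just arrived at by estimation rather than by an explicit objective.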

Comment author: Houshalter 13 October 2016 04:13:14AM 0 points [-]

If I train a neural network to recognize dogs, I have no way of knowing if it learned correctly. I can't look at the weights and see if they are correct dog image recognizing weights and not something else. But I can trust the process of training and validation, that the AI has learned to recognize what dogs look like.

It's a similar principle with learning human values. Of course it's more complicated than just feeding it images of dogs, but the principle of letting AIs learn models from real world data is the important part.
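That train-and-validate idea can be sketched in a few lines. The data, the "model" (a nearest-centroid classifier), and all numbers below are hypothetical stand-ins; the point is only that we judge the learner by held-out accuracy, never by inspecting its internal parameters.

```python
import random

random.seed(0)

# Stand-in for "images of dogs vs. not-dogs": two noisy 2-D clusters.
def make_point(label):
    cx, cy = (2.0, 2.0) if label == 1 else (-2.0, -2.0)
    return ([cx + random.gauss(0, 1), cy + random.gauss(0, 1)], label)

data = [make_point(i % 2) for i in range(200)]
random.shuffle(data)
train, validation = data[:150], data[150:]   # hold out data never trained on

def centroid(points):
    return (sum(p[0] for p in points) / len(points),
            sum(p[1] for p in points) / len(points))

# "Training": the learned parameters are just the two class centroids.
c1 = centroid([x for x, y in train if y == 1])
c0 = centroid([x for x, y in train if y == 0])

def predict(x):
    d1 = (x[0] - c1[0]) ** 2 + (x[1] - c1[1]) ** 2
    d0 = (x[0] - c0[0]) ** 2 + (x[1] - c0[1]) ** 2
    return 1 if d1 < d0 else 0

# We never ask whether c0 and c1 "look right"; we trust held-out accuracy.
accuracy = sum(predict(x) == y for x, y in validation) / len(validation)
```

The same logic scales to real models: the weights stay opaque, but a validation score tells you whether the learned concept generalises.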

Comment author: WalterL 13 October 2016 03:52:56AM 0 points [-]

Yeah, I agree that people being able to travel freely and choose where they live is good.

Comment author: WhySpace 13 October 2016 03:42:41AM *  1 point [-]

Related:

There's a cheesy children's book called Death is Wrong

Also, in this thread several people recount precisely what events in their lives caused them to become rationalists. Influential books include:

It's not clear to me why many efforts to raise the sanity water line are focused on adults. It seems like it would be more effective to try to teach children to think more like programmers, engineers, and scientists. I intend to use this sort of thing whenever I owe obligatory Christmas/birthday gifts to satisfy cultural norms.

Comment author: Daniel_Burfoot 13 October 2016 02:27:02AM 2 points [-]

Okay, I obviously don't mean that we should value-segregate people at the point of a gun. I mean that if people naturally want to migrate towards geopolitical communities that better fit their particular value system, this is probably a good thing.

Comment author: entirelyuseless 13 October 2016 02:15:44AM 0 points [-]

I think it is perfectly obvious that this usage of "should" and so on is wrong. A paperclipper believes that it should make paperclips, and it means exactly the same thing by "should" that I do when I say I should not murder.

And when I say it is obvious, I mean it is obvious in the same way that it is obvious that you are using the word "hat" wrong if you use it for a coat.

Comment author: torekp 13 October 2016 12:36:16AM 0 points [-]

Well, unless you're an outlier in rumination and related emotions, you might want to consider how the evolutionary ancestral environment compares to the modern one. It was healthy in the former.

Comment author: torekp 13 October 2016 12:27:16AM 0 points [-]

The linked paper is only about current practices, their benefits and harms. You're right though, about the need to address ideal near-term achievable biofuels and how they stack up against the best (e.g.) near-term achievable solar arrays.

Comment author: Bound_up 12 October 2016 11:16:29PM 0 points [-]

This is an explanation of Yudkowsky's idea from the metaethics sequence. I'm just trying to make it accessible in language and length with lots of concept handles and examples.

Technically, you could believe that people are equally allowed to be enslaved. All people equal + it's wrong to make me a slave = it's wrong to make anyone a slave.

"All men are created equal" emerges from two or more basic principles people are born with. You might say: "Look, you have value, yah? And your loved ones? Would they stop having value if you forgot about them? No? They have value whether or not you know them? How did you conclude they have value? Could that have happened with other people, too? Would you then think they had value? Would they stop having value if you didn't know them? No? Well, you don't know them; do they have value?

You take "people I care about have value" (born with it) and combine it with "be consistent" (also born with), and you get "everyone has value."

That's the idea in principle, anyway. You take some things people are all born with, and they combine to make the moral insights people can figure out and teach each other, just like we do with math.

Comment author: Bound_up 12 October 2016 10:18:19PM 0 points [-]

Mmm, that's not quite the right abstraction. You're probably against innocents going to jail in general, no?

Whereas some Roman might not care, as long as it's no one they care about.

All I'm getting at is that the Romans didn't think certain things were wrong, but if they were shown in a sufficiently deep way everything we know, they would be moved by it, whereas if we were shown everything they know, we would not find it persuasive of their position. Neither would they, after they had seen what we've seen.

I'm talking metaethics, what makes something moral, what it means for something to be moral. Failed ones include divine command theory, the "whatever contributes to human flourishing" idea, whatever makes people happy, whatever matches some platonic ideals out there somehow, whatever leads to selfish interest, etc.

Comment author: DanArmak 12 October 2016 10:13:13PM 0 points [-]

Do you think the idea is sufficiently coherent and non-self-contradictory that the way to find out if it's right or wrong is to look for evidence?

Yes, I think it is coherent.

Ideological Turing test: I think your theory is this: there is some set of values, which we shall call Morals. All humans have somewhat different sets of lower-case morals. When people make moral mistakes, they can be corrected by learning or internalizing some relevant truths (which may of course be different in each case). These truths can convince even actual humans to change their moral values for the better (as opposed to values changing only over generations), as long as these humans honestly and thoroughly consider and internalize the truths. Over historical time, humans have approached closer to true Morals, and we can hope to come yet closer, because we generally collect more and more truths over time.

the way to find out if it's right or wrong is to look for evidence?

If you mean you don't have any evidence for your theory yet, then how or why did you come by this theory? What facts are you trying to explain or predict with it?

Remember that by default, theories with no evidence for them (and no unexplained facts we're looking for a theory about) shouldn't even rise to the level of conscious consideration. It's far, far more likely that if a theory like that comes to mind, it's due to motivated reasoning. For example, wanting to claim your morality is better by some objective measure than that of other people, like slavers.

by the way, understanding slavery might be necessary, but not sufficient to get someone to be against it. They might also need to figure out that people are equal, too.

That's begging the question. Believing that "people are equal" is precisely the moral belief that you hold and ancient Romans didn't. Not holding slaves is merely one of many results of having that belief; it's not a separate moral belief.

But why should Romans come to believe that people are equal? What sort of factual knowledge could lead someone to such a belief, despite the usually accepted idea that should cannot be derived from is?

Comment author: DanArmak 12 October 2016 09:46:58PM 1 point [-]

IIRC that's what happened to me as well. I had a working post, then edited the description, and the link was gone and I couldn't bring it back.

In response to comment by MrMind on Quantum Bayesianism
Comment author: qmotus 12 October 2016 09:08:30PM 0 points [-]

Fair enough. I feel like I have a fairly good intuitive understanding of quantum mechanics, but it's still almost entirely intuitive, and so is probably entirely inadequate beyond this point. But I've read speculations like this, and it sounds like things can get interesting: it's just that it's unclear to me how seriously we should take them at this stage, and also some of them take MWI as a starting point, too.

Regarding QBism, my idea of it is mostly based on a very short presentation of it by Rüdiger Schack at a panel, and the thing that confuses me is that if quantum mechanics is entirely about probability, then what do those probabilities tell us about?

Comment author: qmotus 12 October 2016 08:57:14PM 0 points [-]

I'm not sure what you mean by OR, but if it refers to Penrose's interpretation (my guess, because it sounds like Orch-OR), then I believe that it indeed changes QM as a theory.

Comment author: qmotus 12 October 2016 08:55:19PM 0 points [-]

Guess I'll have to read that paper and see how much of it I can understand. Just at a glance, it seems that in the end they propose that one of the modified theories, like the GRW interpretation, might be the right way forward. I guess that's possible, but how seriously should we take those when we have no empirical reasons to prefer them?

Comment author: Brillyant 12 October 2016 08:36:40PM *  -1 points [-]

Will it? I agree that it will cause some harm, but I'm not sure about "significant".

I'd submit it's a matter of definition.

Note that race-based discrimination is explicitly illegal and agencies such as EEOC do prosecute. Moreover, EEOC uses the concept of "disparate impact" which basically means that if you statistically discriminate regardless of your intent, you are in trouble.

Great point. I didn't know this. I'll have to do more reading. Generally though, I'd concede anti-discrimination laws have an impact.

Also, did a bias against those-not-like-me cause employment problems for, say, the Chinese? Why not?

Well, the Chinese weren't enslaved. And it's my experience there is not nearly as much racism against Asians as against blacks in America, but that is just my anecdotal experience.

I am saying people with African ancestry (regardless of their citizenship) belong to a gene pool which has average IQ lower than that of people with European ancestry. Lest you think that the whites are the pinnacle of evolution, the European gene pool has lower average IQ than, say, Han Chinese.

I've looked into this only briefly, and I'll take your word for it.

There are two separable questions here. The first one is do you agree that people with African ancestry have lower average IQ (by about one standard deviation) than people with European ancestry? That question has nothing to do with slavery and segregation. If you do not, we hit a major disagreement right here and there's not much point in discussing why contemporary black Americans have different outcomes than whites or Asians. If you do, we can move on to the second question: what is the relative role of various factors which determine the current state of the black Americans?

It makes sense to me to separate this into two questions like you propose. As I said, I'll defer to your research and knowledge on the first point (and suspend my skepticism in the process), and move to your second question.

As to that second question—what is the relative role of various factors which determine the current state of the black Americans—I'm interested to know what you think, given your view that people with African ancestry have lower IQs...

I might suggest the following approach. If you agree that the average IQ of blacks is lower, then let's estimate the effect of that on social outcomes. It might be that this cause will explain a great deal of what we observe. If so, there's no need to bring in the history of slavery and segregation as a major factor because there wouldn't be much left to explain.

...You've stated it's complex, but roughly, what percentage of contemporary social outcomes experienced by blacks in America are a result of genetic differences ("nature"), and what percentage are a result of environmental factors (nurture)? Of that percentage that you deem to be the result of environmental factors, what portion is a result of slavery/segregation/discrimination? Again, just looking for a rough sketch from your mind here, as I recognize you have stated it's complex and difficult to parse.

Also, I'm wondering how the idea people with African ancestry have lower average IQ than people with European ancestry informs your politics?

Comment author: username2 12 October 2016 08:18:51PM 1 point [-]

Hey thanks for this. I had some time and I compiled this chronologically ordered list of links from those threads for personal use. https://my.mixtape.moe/nrbmyr.html

Comment author: WalterL 12 October 2016 07:53:10PM 1 point [-]

Here's a more serious response.

  1. Segregating the world, period, based on whatever, is impossible without a coercive power that the existing nations of earth would consider illegal. Before you could forcefully migrate a large percentage of the world's humans you'd have to win a war with whatever portion of the UN stood against you.
  2. If you could do it, no one would admit to having any values other than those which got to live in/own the nicest places/stuff/be with their family / not be with their competitors/whatever. The technology to determine everyone's values does not exist.
  3. If you somehow derived everyone's values and split them by these, you would probably be condemning large segments of the population to misery (Lots of people's values are built around living around people who don't share them.), and there would be widespread resentment. The invincible force you used to overcome objection 1 would be tested within a generation.
Comment author: Lumifer 12 October 2016 07:07:50PM 1 point [-]

Someone could be against slavery for THEM personally without being against slavery in general if they didn't realize that what was wrong for them was also wrong for others.

Huh? I'm against going to jail personally without being against the idea of jail in general. In any case, wasn't your original argument that ancient Greeks and Romans just didn't understand what it means to be a slave? That clearly does not hold.

most moral theories are so bad you don't even need to talk about evidence. You can show them to be wrong just because they're incoherent or self-contradictory.

Do you mean descriptive or prescriptive moral theories? If descriptive, humans are incoherent and self-contradictory.

Which moral theories do you have in mind? A few examples will help.

Comment author: Daniel_Burfoot 12 October 2016 06:49:03PM 1 point [-]

Downvoted for making a flippant, argument-based-on-fiction response to serious comment.

Comment author: Bound_up 12 October 2016 06:32:26PM 0 points [-]

Right. Someone could be against slavery for THEM personally without being against slavery in general if they didn't realize that what was wrong for them was also wrong for others. That's all I'm getting at, there.

Or do you mean that they should have opposed slavery for everybody as a sort of game theory move to reduce their chance of ever becoming a slave?

"You do understand that debates about objective vs relative morality have been going on for millennia?"

What I'm getting at here is that most moral theories are so bad you don't even need to talk about evidence. You can show them to be wrong just because they're incoherent or self-contradictory.

It's a pretty low standard, but I'm asking if this theory is at least coherent and consistent enough that you have to look at evidence to know if it's wrong, instead of just pointing at its self-defeating nature to show it's wrong. If so, yay, it might be the best I've ever seen. :)

Comment author: Lumifer 12 October 2016 06:05:49PM *  1 point [-]

Do you think the idea is sufficiently coherent and non-self-contradictory that the way to find out if it's right or wrong is to look for evidence?

You do understand that debates about objective vs relative morality have been going on for millennia?

They might also need to figure out that people are equal, too

No, they don't if they themselves are in danger of becoming slaves. Notably, a major source of slaves in the Ancient world was defeated armies. Slaves weren't clearly different people (like the blacks were in America), anyone could become a slave if his luck turned out to be really bad.

Comment author: Bound_up 12 October 2016 06:02:16PM 0 points [-]

Okay. By saying "If they have failed to grasp that morality is obligatory, have they understood it at all? They might continue caring more about eggnog, of course. That is beside the point... morality means what you should care about, not what you happen to do."

it seems you have not understood the idea. Were there any parts of the post that seemed unclear, that you think I might make clearer?

Because the whole point is that to say something is moral = you should do it = it is valued according to the morality equation.

For an Elf to agree something is moral is also to agree that they should do it. When I say they agree it's moral and don't care, that also means they agree they should do it and don't care.

Something being Christmas Spiritey = you Spiritould do it. Humans might agree that something is Christmas Spirit-ey, and agree that they spiritould do it, they just don't care about what they spiritould do, they only care about what they should do.

moral is to Christmas spiritey what "should" is to (make up a word like) "spiritould"

Obligatory is just a kind of "should." Elves agree that some things are obligatory, and don't care, they care about what's ochristmastory.

.

Likewise, to say that today's morality equation is the "best" is to say that today's morality equation is the equation which is most like today's morality equation. Tautology.

Best = most good, and good = valued by the morality equation.

Comment author: Bound_up 12 October 2016 05:54:11PM 0 points [-]

You're right; I've provided no evidence.

Do you think the idea is sufficiently coherent and non-self-contradictory that the way to find out if it's right or wrong is to look for evidence?

If it was incoherent or contradicted itself, it wouldn't even need evidence to be disproven; we would already know it's wrong. Have I avoided being wrong in that way?

(by the way, understanding slavery might be necessary, but not sufficient to get someone to be against it. They might also need to figure out that people are equal, too. Good point, I might need to add that note into the post).

Comment author: Lumifer 12 October 2016 04:56:57PM *  1 point [-]

a bias against those-not-like-me would be sufficient in this case to cause blacks a significant deficit in opportunity for employment in a historically majority white nation.

Will it? I agree that it will cause some harm, but I'm not sure about "significant". Note that race-based discrimination is explicitly illegal and agencies such as EEOC do prosecute. Moreover, EEOC uses the concept of "disparate impact" which basically means that if you statistically discriminate regardless of your intent, you are in trouble.

Also, did a bias against those-not-like-me cause employment problems for, say, the Chinese? Why not?

You are saying black Americans have a genetic deficit in the form of lower average IQ.

I am saying people with African ancestry (regardless of their citizenship) belong to a gene pool which has average IQ lower than that of people with European ancestry. Lest you think that the whites are the pinnacle of evolution, the European gene pool has lower average IQ than, say, Han Chinese.

I don't know if "deficit" is a useful word -- there is no natural baseline and the fact that the IQ scale has the average IQ of Europeans as the "norm" (100) is just a historical accident. I think it's more correct to just say that different gene pools have different IQ distributions.

There are two separable questions here. The first one is do you agree that people with African ancestry have lower average IQ (by about one standard deviation) than people with European ancestry? That question has nothing to do with slavery and segregation. If you do not, we hit a major disagreement right here and there's not much point in discussing why contemporary black Americans have different outcomes than whites or Asians. If you do, we can move on to the second question: what is the relative role of various factors which determine the current state of the black Americans?

I might suggest the following approach. If you agree that the average IQ of blacks is lower, then let's estimate the effect of that on social outcomes. It might be that this cause will explain a great deal of what we observe. If so, there's no need to bring in the history of slavery and segregation as a major factor because there wouldn't be much left to explain.
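As a purely statistical aside on how one would start such an estimate (the arithmetic behind any "one standard deviation" claim, with illustrative numbers only and no endorsement of the premise), a shift in the mean of a normal distribution changes tail fractions disproportionately:

```python
import math

def frac_above(threshold, mean, sd=15.0):
    """Fraction of a Normal(mean, sd) distribution lying above threshold."""
    z = (threshold - mean) / sd
    return 0.5 * math.erfc(z / math.sqrt(2))

# Two hypothetical populations whose means differ by one SD (15 points),
# compared at an arbitrary illustrative cutoff of 130:
ratio = frac_above(130, 100) / frac_above(130, 85)   # roughly 17x
```

This is why arguments about group means so often turn into arguments about tails: small mean differences are magnified far from the mean, whichever way the premises run.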

I'd hypothesize slavery/segregation/discrimination has been consequential to the extent that even if blacks had a higher average IQ than whites, they would still be in a similar situation.

Ashkenazi Jews have higher average IQ than whites and were segregated and discrimated against. Are they in a similar situation? Were they in a similar situation at the time when the segregation was just ending?

Besides, you're forgetting that one can just go and measure IQ. There is a lot of data on the average IQ of racial groups in the US. Hint: American blacks do not have higher IQ.

Plainly, advanced IQ (or other genetic advantages) aren't enough to overcome significant discrimination in all cases.

Yes, but we're not talking about "all cases". We are talking about the very specific case of the United States of America.

Things can change. Slowly.

Um, things have changed. Already.

Comment author: Lightwave 12 October 2016 04:48:07PM 4 points [-]
Comment author: Lumifer 12 October 2016 04:37:31PM *  0 points [-]

The two kinds of discrimination -- (1) because I prefer people-like-me, and (2) because I have informative priors about groups -- can perfectly well co-exist.

Comment author: Lumifer 12 October 2016 04:35:05PM 1 point [-]

But if you don't know what human values are, how can you be sure that the AI will learn them correctly?

So you make an AI and tell it: "Go forth and learn human values!" It goes and in a while comes back and says "Behold, I have learned them". How do you know this is true?

Comment author: TheAncientGeek 12 October 2016 04:05:54PM *  1 point [-]

Unpacking "should" as "morally obligated to" is potentially helpful, but only insofar as you can give separate accounts of "moral" and "obligatory".

The elves are not moral. Not just because I, and humans like me happen to disagree with them, no, certainly not. The elves aren’t even trying to be moral. They don’t even claim to be moral. They don’t care about morality. They care about “The Christmas Spirit,” which is about eggnog and stuff

That doesn't generalise to the point that non-humans have no morality. You have made things too easy on yourself by having the elves concede that the Christmas Spirit isn't morality. You need to put forward some criteria for morality and show that the Christmas Spirit doesn't fulfil them. (One of the odd things about the Yudkowskian theory is that he doesn't feel the need to show that human values are the best match to some pretheoretic notion of morality; he instead jumps straight to the conclusion.)

The hard case would be some dwarves, say, who have a behavioural code different from our own, and who haven't conceded that they are amoral. Maybe they have a custom whereby any dwarf who hits a rich seam of ore has to raise a cry to let other dwarves have a share, and any dwarf who doesn't do this is criticised and shunned. If their code of conduct passes the duck test... is regarded as obligatory, involves praise and blame, and so on... why isn't that a moral system?

This is so weird to them that they’d probably just think of it as…ehh, what? Just weird. They couldn’t care less. Why on earth would they give food to millions of starving children? What possible reason…who even cares?

If they have failed to grasp that morality is obligatory, have they understood it at all? They might continue caring more about eggnog, of course. That is beside the point... morality means what you should care about, not what you happen to do.

Morality needs to be motivating, and rubber stamping your existing values as moral achieves that, but being motivating is not sufficient. A theory of morality also needs to be able to answer the Open Question objection, meaning in this case, the objection that it is not obvious that you should value something just because you do.

So, to say the elves have their own “morality,” is not quite right. The elves have their own set of things that they care about instead of morality

That is arguing from the point that morality is a label for whatever humans care about, not toward it.

This helps us see the other problem, when people say that "different people at different times in history have been okay with different things, who can say who's really right?"

There are many ways of refuting relativism, and most don't involve the claim that humans are uniquely moral.

Morality is a fixed thing. Frozen, if you will. It doesn’t change.

It is human value, or it is fixed; choose one. Humans have valued many different things. One of the problems with the rubber-stamping approach is that things the audience will see as immoral, such as slavery and the subjugation of women, have been part of human value.

Rather, humans change. Humans either do or don't do the moral thing. If they do something else, that doesn't change morality, but rather, it just means that that human is doing an immoral thing.

If that is true, then you need to stop saying that morality is human values, and start saying morality is human values at time T. And justify the selection of time, etc. And even then, you won't support your other claims, because what you need to prove is that morality is unique, that only one thing can fulfil the role.

Rather, humans happen to care about moral things. If they start to care about different things, like slavery, that doesn’t make slavery moral, it just means that humans have stopped caring about moral things.

If it is possible for human values to diverge from morality, then something else must define morality, because human values can't diverge from human values. So you are not using a stipulative definition here, although you are when you argue that elves can't be moral. Here, you and Yudkowsky have noticed that your theory entails the same problem as relativism: if morality is whatever people value, and if what people happen to value is intuitively immoral (slavery, torture, whatever), then there's no fixed standard of morality. The label "moral" has been placed on a moving target. (Standard relativism usually has this problem synchronously, i.e. different communities are said to have different but equally valid moralities at the same time, but it makes little difference if you are asserting that the global community has different but equally valid moralities at different times.)

So, when humans disagree about what’s moral, there’s a definite answer.

There is from many perspectives, but given that human values can differ, you get no definite answer by defining morality as human value. You can avoid the problems of relativism by setting up an external standard, and there are many theories of that type, but they tend to have the problem that the external standard is not naturalistic: God's commands, the Form of the Good, and so on. I think Yudkowsky wants a theory that is non-arbitrary and also naturalistic. I don't think he arrives at a single theory that does both. If the Moral Equation is just a label for human intuition, then it suffers from all the vagaries of labelling values as moral, i.e. of the original theory.

How do we find that moral answer, then? Unfortunately, there is no simple answer

Why doesn't that constitute an admission that you don't actually have a theory of morality?

You see, we don’t know all the pieces of morality, not so we can write them down on paper. And even if we knew all the pieces, we’d still have to weigh which ones are worth how much compared to each other.

On the assumption that all human value gets thrown into the equation, it certainly would be complex. But not everyone has that problem, since people have criteria for something's being moral and other things' not being, which simplify the equation and allow you to answer the questions you were struggling with above. You know, you don't have to pursue assumptions to their illogical conclusions.

Humans all care about the same set of things (in the sense I’ve been talking about). Does this seem contradictory? After all, we all know humans do not agree about what’s right and wrong; they clearly do not all care about the same things.

On the face of it, it's contradictory. There may be something else that smooths out the contradictions, such as the Moral Equation, but that needs justification of its own.

Well, they do. Humans are born with the same Morality Equation in their brains, with them since birth.

Is that a fact? It's eminently naturalistic, but the flip side to that is that it is, therefore, empirically refutable. If an individual's Morality Equation is just how their moral intuition works, then the evidence indicates that intuitions can vary enough to start a war or two. So the Morality Equation appears not to be conveniently the same in everybody.

How then all their disagreements? There are three ways for humans to disagree about morals, even though they’re all born with the same morality equation in their heads (1 Don't do it, 2 don't do it right, 3 don't want to do it)

What does it mean to do it wrong, if the moral equation is just a label for black-box intuitive reasoning? If you had an external standard, as utilitarians and others do, then you could determine whose use of intuition is right according to it. But in the absence of an external standard, you could have a situation where both parties intuit differently, and both swear they are taking all factors into account. Given such a stalemate, how do you tell who is right? It would be convenient if the only variations in the output of the Morality Equation were caused by variations in the input, but you cannot assume something is true just because it would be convenient.

If the Moral Equation is something ideal and abstract, why can't aliens partake? That model of ethics is just what is needed to explain how you can have multiple varieties of object-level morality that really are all morality: different values fed into the same equation produce different results, so object-level morality varies although the underlying principle is the same.
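A toy rendering of that last point, with the "equation", its inputs, and every number invented purely for illustration:

```python
# One fixed "equation" (here, a weighted sum) applied to different agents'
# value-weights yields different object-level verdicts. All names and
# numbers are hypothetical stand-ins for the abstract idea in the comment.

def moral_score(outcome, weights):
    """The shared, fixed 'equation': a weighted sum over outcome features."""
    return sum(weights.get(k, 0.0) * v for k, v in outcome.items())

outcome = {"suffering_reduced": 1.0, "eggnog_produced": 0.2}

human_weights = {"suffering_reduced": 10.0, "eggnog_produced": 0.1}
elf_weights   = {"suffering_reduced": 0.0,  "eggnog_produced": 10.0}

# Same equation, different inputs, different verdicts:
human_verdict = moral_score(outcome, human_weights)
elf_verdict   = moral_score(outcome, elf_weights)
```

The underlying principle (the function) is constant; only the value-weights vary, which is exactly the structure that would let aliens or elves "partake" while still disagreeing at the object level.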

Comment author: DanArmak 12 October 2016 03:50:56PM 1 point [-]

Do you know what went wrong or what's the difference in making a working link post?

Comment author: scarcegreengrass 12 October 2016 03:49:07PM 0 points [-]
Comment author: scarcegreengrass 12 October 2016 03:45:16PM 0 points [-]

What?? Weird!

Maybe it was lost when i edited the draft.

Comment author: Lightwave 12 October 2016 03:32:19PM *  1 point [-]
Comment author: turchin 12 October 2016 03:25:20PM 1 point [-]
Comment author: DanArmak 12 October 2016 02:59:06PM 1 point [-]

I don't see a link. Was it lost like in my link post on a different subject? I still don't know how to post links correctly.
