
rkyeun comments on Magical Categories - Less Wrong

Post author: Eliezer_Yudkowsky 24 August 2008 07:51PM


Comment author: rkyeun 11 November 2016 02:37:56PM *  0 points

No spooky or supernatural entities or properties are required to explain ethics (naturalism is true)

There is no universally correct system of ethics. (Strong moral realism is false)

I believe that iff naturalism is true then strong moral realism is as well. If naturalism is true then there are no additional facts needed to determine what is moral beyond the positions of particles and the outcomes of arranging those particles differently. Any meaningful question that can be asked of how to arrange those particles or rank certain arrangements compared to others must have an objective answer because under naturalism there are no other kinds and no incomplete information. For the question to remain unanswerable at that point would require supernatural intervention and divine command theory to be true.

If there can't be an objective answer to morality, then FAI is literally impossible. Do remember that your thoughts and preference on ethics are themselves an arrangement of particles to be solved. Instead I posit that the real morality is orders of magnitude more complicated, and more difficult to find, than real physics, real neurology, real social science, or real economics, and can only be solved once those other fields are unified.

If we were uncertain about the morality of stabbing someone, we could hypothetically stab someone to see what happens. When the particles of the knife rearrange the particles of their heart into a form that harms them, we'll know it isn't moral. When a particular subset of people with extensive training use their knife to very carefully and precisely rearrange the particles of the heart to help people, we call those people doctors and pay them lots of money because they're doing good. But without a shitload of facts about how to exactly stab someone in the heart to save their life, that moral option would be lost to you. And the real morality is a superset that includes that action along with all others.

Comment author: TheAncientGeek 11 November 2016 07:02:59PM *  1 point

I believe that iff naturalism is true then strong moral realism is as well. If naturalism is true then there are no additional facts needed to determine what is moral beyond the positions of particles and the outcomes of arranging those particles differently. Any meaningful question that can be asked of how to arrange those particles or rank certain arrangements compared to others must have an objective answer because under naturalism there are no other kinds and no incomplete information. For the question to remain unanswerable at that point would require supernatural intervention and divine command theory to be true.

You need to refute non-cognitivism, as well as asserting naturalism.

Naturalism says that all questions that have answers have naturalistic answers, which means that if there are answers to ethical questions, they are naturalistic answers. But there is no guarantee that ethical questions mean anything, or that they have answers.

For the question to remain unanswerable at that point would require supernatural intervention and divine command theory to be true.

No, only non-cognitivism, the idea that ethical questions just don't make sense, like "how many beans make yellow?".

If there can't be an objective answer to morality, then FAI is literally impossible.

Not unless the "F" is standing for something weird. Absent objective morality, you can possibly solve the control problem, ie achieving safety by just making the AI do what you want; and absent objective morality, you can possibly achieve AI safety by instilling a suitable set of arbitrary values. Neither is easy, but you said "impossible".

Do remember that your thoughts and preference on ethics are themselves an arrangement of particles to be solved.

That's not an argument for cognitivism. When I entertain the thought "how many beans make yellow?", that's an arrangement of particles.

Instead I posit that the real morality is orders of magnitude more complicated, and more difficult to find, than real physics, real neurology, real social science, or real economics, and can only be solved once those other fields are unified.

Do you have an argument for that proposal? Because I am arguing for something much simpler, that morality only needs to be grounded at the human level, so reductionism is neither denied nor employed.

If we were uncertain about the morality of stabbing someone, we could hypothetically stab someone to see what happens. When the particles of the knife rearrange the particles of their heart into a form that harms them, we'll know it isn't moral. When a particular subset of people with extensive training use their knife to very carefully and precisely rearrange the particles of the heart to help people, we call those people doctors and pay them lots of money because they're doing good. But without a shitload of facts about how to exactly stab someone in the heart to save their life, that moral option would be lost to you. And the real morality is a superset that includes that action along with all others.

It's hard to see what point you are making there. The social and evaluative aspects do make a difference to the raw physics, so much so that the raw physics counts for very little. Yet previously you were insisting that a reduction to fundamental particles was what underpinned the objectivity of morality.

Comment author: g_pepper 11 November 2016 07:17:57PM 1 point

If naturalism is true then there are no additional facts needed to determine what is moral beyond the positions of particles and the outcomes of arranging those particles differently. Any meaningful question that can be asked of how to arrange those particles or rank certain arrangements compared to others must have an objective answer because under naturalism there are no other kinds and no incomplete information.

Even if it were true that under naturalism we could determine the outcome of various arrangements of particles, wouldn't we still be left with the question of which final outcome was the most morally preferable?

Do remember that your thoughts and preference on ethics are themselves an arrangement of particles to be solved.

But, you and I might have different moral preferences. How (under naturalism) do we objectively decide between your preferences and mine? And isn't it also possible that neither your preferences nor my preferences are objectively moral?

Comment author: BrianPansky 12 November 2016 12:14:57AM *  -1 points

Even if it were true that under naturalism we could determine the outcome of various arrangements of particles, wouldn't we still be left with the question of which final outcome was the most morally preferable?

Yup.

But that's sort-of contained within "the positions of particles" (so long as all their other properties are included, such as temperature and chemical connections and so on... might need to include rays of light and non-particle stuff too!). The two are just different ways of describing the same thing. Just like every object around you could be described either with their usual names ("keyboard", "desk", etc.) or with an elaborate molecule-by-molecule description. Plenty of other descriptions are possible too (like "rectangular black colored thing with a bunch of buttons with letters on it" describes my keyboard, kinda).

How (under naturalism) do we objectively decide between your preferences and mine?

You don't. True preferences (as opposed to mistaken preferences) aren't something you get to decide. They are facts.

Comment author: TheAncientGeek 12 November 2016 12:37:03PM *  1 point

But that's sort-of contained within "the positions of particles" (so long as all their other properties are included, such as temperature and chemical connections and so on... might need to include rays of light and non-particle stuff too!). The two are just different ways of describing the same thing. Just like every object around you could be described either with their usual names ("keyboard", "desk", etc.) or with an elaborate molecule-by-molecule description. Plenty of other descriptions are possible too (like "rectangular black colored thing with a bunch of buttons with letters on it" describes my keyboard, kinda).

That's an expression of ethical naturalism, not a defence of ethical naturalism.

How (under naturalism) do we objectively decide between your preferences and mine?

You don't. True preferences (as opposed to mistaken preferences) aren't something you get to decide. They are facts.

Missing the point. Ethics needs to sort good actors from bad--decisions about punishments and rewards depend on it.

PS are you the same person as rkyeun? If not, to what extent are you on the same page?

Comment author: BrianPansky 12 November 2016 06:27:19PM 0 points

Missing the point. Ethics needs to sort good actors from bad--decisions about punishments and rewards depend on it.

(I'd say it needs to sort good choices from bad, which includes the choice to punish or reward.) Discovering which choices are good and which are bad is a fact finding mission. Because:

  • 1) it's a fact whether a certain choice will successfully fulfill a certain desire or not

  • And 2) that's what "good" literally means: desirable.

So that's what any question of goodness will be about: what will satisfy desires.

PS are you the same person as rkyeun? If not, to what extent are you on the same page?

No I'm not rkyeun. As for being on the same page...well I'm definitely a moral realist. I don't know about their first iff-then statement though. Seems to me that strong moral realism could still exist if supernaturalism were true. Also, talking in terms of molecules is ridiculously impractical and unnecessary. I only talked in those terms because I was replying to a reply to those terms :P

Comment author: g_pepper 13 November 2016 04:56:23AM *  0 points

Discovering which choices are good and which are bad is a fact finding mission... So that's what any question of goodness will be about: what will satisfy desires.

But, what if two different people have two conflicting desires? How do we objectively find the ethical resolution to the conflict?

Comment author: BrianPansky 14 November 2016 11:14:55PM *  0 points

But, what if two different people have two conflicting desires? How do we objectively find the ethical resolution to the conflict?

Basically: game theory.

In reality, I'm not sure there ever are precise conflicts of true foundational desires. Maybe it would help if you had some real example or something. But the best choice for each party will always be the one that maximizes their chances of satisfying their true desire.

Comment author: g_pepper 15 November 2016 05:13:59AM 0 points

I was surprised to hear that you doubt that there are ever conflicts in desires. But, since you asked, here is an example:

A is a sadist. A enjoys inflicting pain on others. A really wants to hurt B. B wishes not to be hurt by A. (For the sake of argument, let's suppose that no simulation technology is available that would allow A to hurt a virtual B, and that A can be reasonably confident that A will not be arrested and brought to trial for hurting B.)

In this scenario, since A and B have conflicting desires, how does a system that defines objective goodness as that which will satisfy desires resolve the conflict?

Comment author: BrianPansky 23 November 2016 05:51:08AM 0 points

I was surprised to hear that you doubt that there are ever conflicts in desires.

Re-read what I said. That's not what I said.

First get straight: good literally objectively does mean desirable. You can't avoid that. Your question about conflict can't change that (thus it's a red herring).

As for your question: I already generally answered it in my previous post. Use game theory. Find the actions that will actually be best for each agent. The best choice for each party will always be the one that maximizes their chances of satisfying their true desires.

I might finish a longer response to your specific example, but that takes time. For now, Richard Carrier's Goal Theory Update probably covers a lot of that ground.

http://richardcarrier.blogspot.ca/2011/10/goal-theory-update.html

Comment author: CCC 23 November 2016 08:57:23AM 1 point

First get straight: good literally objectively does mean desirable.

It does not.

Wiktionary states that it means "Acting in the interest of good; ethical." (There are a few other definitions, but I'm pretty sure this is the right one here). Looking through the definitions of 'ethical', I find "Morally approvable, when referring to an action that affects others; good. " 'Morally' is defined as "In keeping of requirements of morality.", and 'morality' is "Recognition of the distinction between good and evil or between right and wrong; respect for and obedience to the rules of right conduct; the mental disposition or characteristic of behaving in a manner intended to produce morally good results. "

Nowhere in there do I see anything about "desirable" - it seems to simplify down to "following a moral code". I therefore suspect that you're implicitly assuming a moral code which equates "desirable" with "good" - I don't think that this is the best choice of a moral code, but it is a moral code that I've seen arguments in favour of before.

But, importantly, it's not the only moral code. Someone who follows a different moral code can easily find something that is good but not desirable; or desirable but not good.

Comment author: g_pepper 25 November 2016 06:05:55PM *  0 points

I was surprised to hear that you doubt that there are ever conflicts in desires.

Re-read what I said. That's not what I said.

Right. You said:

In reality, I'm not sure there ever are precise conflicts of true foundational desires.

Do you have an objective set of criteria for differentiating between true foundational desires and other types of desires? If not, I wonder if it is really useful to respond to an objection arising from the rather obvious fact that people often have conflicting desires by stating that you doubt that true foundational desires are ever in precise conflict.

First get straight: good literally objectively does mean desirable.

As CCC has already pointed out, no, it is not apparent that (morally) good and desirable are the same thing. I won’t spend more time on this point since CCC addressed it well.

Your question about conflict can't change that (thus it's a red herring).

The issue that we are discussing is objective morals. Your equating goodness and desirability leads (in my example of the sadist) A to believe that hurting B is good, and B to believe that hurting B is not good. But moral realism holds that moral valuations are statements that are objectively true or false. So, conflicting desires is not a red herring, since conflicting desires leads (using your criterion) to subjective moral evaluations regarding the goodness of hurting B. Game theory on the other hand does appear to be a red herring – no application of game theory can change the fact that A and B differ regarding the desirability of hurting B.

One additional problem with equating moral goodness with desirability is that it leads to moral outcomes that are in conflict with most people’s moral intuitions. For example, in my example of the sadist A desires to hurt B, but most people’s moral intuition would say that A hurting B just because A wants to hurt B would be immoral. Similarly, rape, murder, theft, etc., could be considered morally good by your criterion if any of those things satisfied a desire. While conflicting with moral intuition does not prove that your definition is wrong, it seems to me that it should at a minimum raise a red flag. And, I think that the burden is on you to explain why anyone should reject his/her moral intuition in favor of a moral criterion that would adjudge theft, rape and murder to be morally good if they satisfy a true desire.

Comment author: TheAncientGeek 26 November 2016 08:29:28AM 0 points

First get straight: good literally objectively does mean desirable.

It's not at all clear that morally good means desirable. The idea that the good is the desirable gets what force it has from the fact that "good" has a lot of nonmoral meanings. Good ice cream is desirable ice cream, but what's that got to do with ethics?

Comment author: rkyeun 26 December 2017 10:28:03AM *  0 points

I would be very surprised to find that a universe whose particles are arranged to maximize objective good would also contain unpaired sadists and masochists. You seem to be asking a question of the form, "But if we take all the evil out of the universe, what about evil?" And the answer is "Good riddance." Pun intentional.

Comment author: g_pepper 26 December 2017 11:08:26PM *  0 points

I would be very surprised to find that a universe whose particles are arranged to maximize objective good would also contain unpaired sadists and masochists.

The problem is that neither you nor BrianPansky has proposed a viable objective standard for goodness. BrianPansky said that good is that which satisfies desires, but proposed no objective method for mediating conflicting desires. And here you said “Do remember that your thoughts and preference on ethics are themselves an arrangement of particles to be solved” but proposed no way for resolving conflicts between different people’s ethical preferences. Even if satisfying desires were an otherwise reasonable standard for goodness, it is not an objective standard, since different people may have different desires. Similarly, different people may have different ethical preferences, so an individual’s ethical preference would not be an objective standard either, even if it were otherwise a reasonable standard.

You seem to be asking a question of the form, "But if we take all the evil out of the universe, what about evil?"

No, I am not asking that. I am pointing out that neither your standard nor BrianPansky’s standard is objective. Therefore neither can be used to determine what would constitute an objectively maximally good universe nor could either be used to take all evil out of the universe, nor even to objectively identify evil.

Comment author: TheAncientGeek 13 November 2016 09:22:21AM 2 points

I'd say it needs to sort good choices from bad, which includes the choice to punish or reward. Discovering which choices are good and which are bad is a fact finding mission. Because:

1) it's a fact whether a certain choice will successfully fulfill a certain desire or not

And 2) that's what "good" literally means: desirable.

So that's what any question of goodness will be about: what will satisfy desires.

Whose desires? The murderer wants to murder the victim, the victim doesn't want to be murdered. You have realism without objectivism. There is a realistic fact about people's preferences, but since the same act can increase one person's utility and reduce another's, there is no unambiguous way to label an arbitrary outcome.

Comment author: BrianPansky 15 November 2016 12:05:00AM *  0 points

The murderer wants to murder the victim, the victim doesn't want to be murdered.

Murder isn't a foundational desire. It's only a means to some other end. And it usually isn't even a good way to accomplish its ultimate end! It's risky, for one thing. So usually it's a false desire: if they knew the consequences of this murder compared to all other choices available, and they were correctly thinking about how to most certainly get what they really ultimately want, they'd almost always see a better choice.

(But even if it were foundational, not a means to some other end, you could imagine some simulation of murder satisfying both the "murderer"'s need to do such a thing and everyone else's need for safety. Even the "murderer" would have a better chance of satisfaction, because they would be far less likely to be killed or imprisoned prior to satisfaction.)

since the same act can increase one person's utility and reduce another's, there is no unambiguous way to label an arbitrary outcome.

Well first, in the most trivial way, you can unambiguously label an outcome as "good for X", if it really is (it might not be; after all, the consequences of achieving or attempting murder might be more terrible for the would-be murderer than choosing not to attempt murder).

It works the same with (some? all?) other adjectives too. For example: soluble. Is sugar objectively soluble? Depends what you try to dissolve it in, and under what circumstances. It is objectively soluble in pure water at room temperature. It won't dissolve in gasoline.

Second, in game theory you'll find that sometimes there are options that are best for everyone. But even when there aren't, you can still determine which choices for the individuals maximize their chance of satisfaction and such. Objectively speaking, those will be the best choices they can make (again, that's what it means for something to be a good choice). And morality is about making the best choices.

Comment author: TheAncientGeek 26 November 2016 12:59:25PM *  1 point

Murder isn't a foundational desire.

It can be instrumental or terminal, as can most other criminal impulses.

But even if it were foundational, not a means to some other end, you could imagine some simulation of murder satisfying both the "murderer"'s need to do such a thing and everyone else's need for safety. Even the "murderer" would have a better chance of satisfaction, because they would be far less likely to be killed or imprisoned prior to satisfaction

You can't solve all ethical problems by keeping everyone in permanent simulation.

Well first, in the most trivial way, you can unambiguously label an outcome as "good for X", if it really is

That's no good. You can't arrive at workable ethics by putting different weightings on the same actions from different perspectives. X stealing money from Y is good for X and bad for Y, so why disregard Y's view? An act is either permitted or forbidden, punished or praised. You can't say it is permissible-for-X but forbidden-for-Y if it involves both of them.

It works the same with (some? all?) other adjectives too.

No, there's no uniform treatment of all predicates. Some are one-place, some are two-place. For instance, aesthetic choices can usually be fulfilled on a person-by-person basis.

Second, in game theory you'll find that sometimes there are options that are best for everyone.

To be precise, you sometimes find solutions that leave everyone better off, and more often find solutions that leave the average person better off.
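To make that distinction concrete, here is a minimal Python sketch. The outcomes and payoff numbers are invented purely for illustration (nothing above specifies them); it just checks whether an outcome is a Pareto improvement over a baseline versus merely raising the average payoff:

    # Minimal sketch: Pareto improvement vs. average improvement.
    # The outcomes and payoff numbers below are purely illustrative.
    # Each outcome maps to a tuple (payoff_for_A, payoff_for_B).

    outcomes = {
        "status quo":   (2, 2),
        "cooperate":    (3, 3),   # better for both parties
        "A exploits B": (5, 0),   # raises the average, but B is worse off
    }

    def pareto_improves(new, old):
        """True if nobody is worse off and at least one party is better off."""
        return all(n >= o for n, o in zip(new, old)) and any(n > o for n, o in zip(new, old))

    def average(payoffs):
        return sum(payoffs) / len(payoffs)

    baseline = outcomes["status quo"]
    for name, payoffs in outcomes.items():
        if name == "status quo":
            continue
        print(name,
              "| Pareto improvement over the status quo:", pareto_improves(payoffs, baseline),
              "| average payoff:", average(payoffs), "vs baseline", average(baseline))

    # "cooperate" leaves everyone better off; "A exploits B" raises the
    # average while making B worse off, so the two notions come apart.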

Objectively speaking, those will be the best choices they can make (again, that's what it means for something to be a good choice). And morality is about making the best choices.

Too vague. For someone who likes killing, to kill a lot of people is the best choice for them, but not the best ethical choice.