MugaSofer comments on Welcome to Less Wrong! (5th thread, March 2013) - Less Wrong

Post author: orthonormal 01 April 2013 04:19PM




Comment author: MugaSofer 14 April 2013 04:43:38PM -2 points [-]

How is that example any different, how is it not also a matter of your individual moral preferences? Again, you can imagine a society or species of rational agents that regard homosexuality as moral, just as you can imagine one that regards it as immoral.

We seem to be using "moral" differently. You're using it to refer to any preference, whereas I'm using it to refer to human ethical preferences specifically. I find this more useful, for the reasons EY puts forth in the Sequences.

By objectively right or wrong I meant right or wrong regardless of the frame of reference (as it's usually interpreted as far as I know). Of course you can be mistaken about your own preferences, and other agents can be mistaken when describing your preferences.

If you can be mistaken - objectively mistaken - then you are in a state known as "objectively wrong", yes?

Of course "I think abortion is moral" can widely differ from rational agent to rational agent. Clippy talking to AbortAI (the abortion-maximizing AI) could easily agree about what constitutes an abortion, or how that procedure is usually done. Yet they wouldn't need to agree about the morality each of them ascribes to that procedure. They would need to agree on how others ("this human in 21st century America") morally judge abortion, but they could still judge it differently. It is like "I prefer a ball in the box over no ball in the box", not like "There is a ball in the box".

Again, I think we're arguing over terminology rather than meaning here.

I forgive you, though I won't die for your sins.

Zing!

It is ... an argument ... strictly formally speaking. What else could explain some eyewitness testimony of an empty grave, if not divine intervention?

Because that's the only eyewitness testimony contained in the Bible.

Only when some nonsense about "that cause must be a non-physical mind" (without defining what a non-physical mind is, and reaching that conclusion by saying "either numbers or a mind could be first causes, and it can't be numbers") is dragged in - and even then, the effect on the prior of some particular holy text on some planet in some galaxy in some galactic cluster would be negligible.

Well, since neither of us actually has a solution to the First Cause argument (unless you're holding out on me), that's impossible to say. However, yes, if you believed that the solution involved an extra-universal superintelligence, it would raise the prior of someone claiming to be such a superintelligence and exhibiting apparently supernatural power being correct in those claims.

"I can confirm that it is indeed annoying", although I of course admit that this is branching out on a tangent. But why shouldn't we? It's a good place for branching out without having to start a new topic or resort to PMs.

What does the relative strength of evidence required for various "godlike" hypotheses have to do with the annoyance of seeing a group you identify with held up as an example of something undesirable?

Not everything I write needs to be controversial between us; it can be related to a comment I respond to, and you can agree or disagree, engage or disengage at your leisure.

Uh ... sure ... I don't exactly reply to most comments you make.

What do you mean, protected off in the sense of compartmentalized / cordoned off?

Yup.

Comment author: Kawoomba 18 April 2013 09:56:49AM 1 point [-]

You're using it to refer to any preference, whereas I'm using it to refer to human ethical preferences specifically.

Which humans? Medieval peasants? Martyrs? Witch-torturers? Mercenaries? Chinese? US-Americans? If so, which party, which age-group?

If you can be mistaken - objectively mistaken - then you are in a state known as "objectively wrong", yes?

The term is overloaded. I was referring to ideas such as moral universalism. An alien society - or really just different human societies - will have their own ethical preferences, and while they or you can be wrong in describing those preferences, they cannot be wrong in having them, other than their preferences being incompatible with someone else's preferences. There is no universal reference frame; even if a god existed, his preferences would just amount to an argument from authority.

However, yes, if you believed that the solution involved extra-universal superintelligence, it would raise the prior of someone claiming to be such a superintelligence and exhibiting apparently supernatural power being correct in these claims.

Negligibly so, especially if it's non-verifiable second-hand stories passed down through the ages, and when the whole system is ostentatiously based on non-falsifiability in an empirical sense.

You realize that your fellow Christians from a few centuries back would burn you for heresy if you told them that many of the supernatural magic tricks were just meant as metaphors. Copernicus didn't doubt Jesus Christ was a god-alien-human. They may not even have considered you to be a Christian. Never mind that; the current iteration has gotten it right, hasn't it? Your version, I mean.

Because that's the only eyewitness testimony contained in the Bible.

There are three little pigs who saw the big bad wolf blowing away their houses, that's three eyewitnesses right there.

Do Adam and Eve count as eyewitnesses for the Garden of Eden?

Comment author: PrawnOfFate 18 April 2013 10:31:24AM *  1 point [-]

The term is overloaded. I was referring to ideas such as moral universalism. An alien society - or really just different human societies - will have their own ethical preferences, and while they or you can be wrong in describing those preferences, they cannot be wrong in having them, other than their preferences being incompatible with someone else's preferences. There is no universal reference frame; even if a god existed, his preferences would just amount to an argument from authority.

OK. So moral realism is false, and moral relativism is true, and that's provable in a paragraph. Hmmm. Aliens and other societies might have all sorts of values, but that does not necessarily mean they have all sorts of ethical values. "Murder is good" might not be a coherent ethical principle, any more than "2+2=5" is a coherent mathematical one. The say-so of authorities, or Authorities, is not the only possible source of objectivity.

Comment author: Kawoomba 18 April 2013 10:53:14AM 0 points [-]

So if you constructed an artificial agent, you would somehow be stopped from encoding certain actions and/or goals as desirable? Or would that agent just be wrong in describing his own preferences when he tells you "killing is good"?

Certain headwear must be worn by pious women. Light switches must not be used on certain days by god-abiding men. Infidels must be killed. All of those are ethical from some humans' frames of reference. Seems pretty variable.

Comment author: PrawnOfFate 18 April 2013 10:56:58AM *  1 point [-]

Or would that agent just be wrong in describing his own preferences when he tells you "killing is good"?

It would be correctly describing its preferences, and its preferences would not be ethically correct. You could construct an AI that firmly believed 2+2=5. And it would be wrong. As before, you are glibly assuming that the word "ethical" does no work, and can be dropped from the phrase "ethical value".

Certain headwear must be worn by pious women. Light switches must not be used on certain days by god-abiding men. Infidels must be killed. All of those are ethical from some humans' frames of reference.

All of those are believed ethical. It's very shallow to argue for relativism by ignoring the distinction between believed-to-be-true and true.

Comment author: Kawoomba 18 April 2013 11:12:24AM 1 point [-]

Imagine a mirror world, inhabited by our "evil" (from our perspective) twins. Now they all go around being all unethical, yet believing themselves to act ethically. They have the same model of physics, the same technological capabilities, they'd just be mistaken about being ethical.

Could it be that it turns out that we're that unethical mirror world, and our supposedly evil twins do in fact have it right? Do you think to know at least some of what's universally ethical, or could you unknowingly be the evil twin believing to be ethical?

Or could both us and our mirror world be unethical, and really only a small cluster of sentient algae somewhere in the UDFy-38135539 galaxy has by chance gotten it right, and is acting ethically?

All advanced societies will agree about 2+2!=5, because that's falsifiable. Who gets to set the axioms and rules for ethicality? Us, the mirror world, the algae, god?

Comment author: ArisKatsaris 18 April 2013 01:03:27PM 1 point [-]

Who gets to set the axioms and rules for ethicality?

Axioms are what we use to logically pinpoint what it is we are talking about. If our world and theirs has different axioms for "ethicality", then they simply don't have what we mean by "ethicality" -- and we don't have what they mean by the word "ethicality".

Our two worlds would then not actually disagree about ethics the concept; they would instead disagree about "ethics" the word, much like 'tier' means one thing in English and another thing in German.

Comment author: Creutzer 19 April 2013 12:35:00PM *  0 points [-]

Unfortunately, words of natural language have the annoying property that it's often very hard to tell if people are disagreeing about the extension or the meaning. It's also hard to tell what disagreement about the meaning of a word actually is.

Our two worlds would then not actually disagree about ethics the concept; they would instead disagree about "ethics" the word, much like 'tier' means one thing in English and another thing in German.

The analogy is flawed. German and English speakers don't disagree about the word (conceived as a string of phonemes; otherwise "tier" and "Tier" are not identical), and it's not at all clear that disagreement about the meaning of words is the same thing as speaking two different languages. It's certainly phenomenologically pretty different.

I do agree that reducing it to speaking different languages is one way to dissolve disagreement about meaning. But I'm not convinced that this is the right approach. Some words are in acute danger of being dissolved with the question, in that it will turn out that almost everyone has their own meaning for the word, and everybody is talking past each other. It also leaves you needing to explain where the persistent illusion of disagreement comes from: people seem to disagree when they're in fact just talking past each other, and the illusion persists even when you explain to them that they're speaking two different languages - they'll often say no, they're not, they're speaking the same language but the other person is using the word wrongly.

Of course, all of this is connected to the problem that nobody seems to know what kind of thing a meaning is.

Comment author: Kawoomba 18 April 2013 01:38:01PM 0 points [-]

So there is an objective measure for what's "right" and "wrong" regardless of the frame of reference, there is such a thing as correct, individual independent ethics, but other people may just decide not to give a hoot, using some other definition of ethics?

Well, let's define a series of ethics, from ethics1 to ethicsn. Let's call your system of ethics, which contains a "correct" conclusion such as "murder is WRONG", say, ethics211412312312.

Why should anyone care about ethics211412312312?

(If you don't mind, let's consolidate this into the other sub-thread we have going.)

Comment author: PrawnOfFate 18 April 2013 02:22:25PM 1 point [-]

but other people may just decide not to give a hoot, using some other definition of ethics

If what they have can't do what ethics is supposed to do, why call it ethics?

Comment author: Kawoomba 18 April 2013 02:23:16PM 0 points [-]

What is ethics supposed to do?

Comment author: nshepperd 18 April 2013 01:47:45PM *  1 point [-]

Why should anyone care about ethics211412312312?

"Should" is an ethical word. To use your (rather misleading) naming convention, it refers to a component of ethics211412312312.

Of course one should not confuse this with "would". There's no reason to expect an arbitrary mind to be compelled by ethics.

Comment author: PrawnOfFate 18 April 2013 02:23:22PM *  1 point [-]

"Should" is an ethical word

No, it's much wider than that. There are rational and instrumental shoulds.

ETA:

There's no reason to expect an arbitrary mind to be compelled by ethics.

Depends how arbitrary. Many philosophers think a rational mind could be compelled by ethical arguments...that ethical-should can be built out of rational-should.

Comment author: Kawoomba 18 April 2013 02:02:25PM -1 points [-]

There's no reason to expect an arbitrary mind to be compelled by ethics.

Indeed, one should not expect an arbitrary mind with its own notions of "right" and "wrong" to yield to any human's proselytizing about objectively correct ethics ("murder is bad"), or to attempts at providing a "correct" solution for that arbitrary mind to adopt.

The ethics as defined by China, or by an arbitrary mind, have as much claim to be correct as ours. There is no axiom-free metaethical framework which would provide the "should" in "you should choose ethics211412312312"; that was my point. Calling some church's (or other group's) ethical doctrine objectively correct for all minds doesn't make a dent of difference, and doesn't go beyond "my ethics are right! no, mine are!"

Comment author: PrawnOfFate 18 April 2013 02:21:11PM -1 points [-]

Axioms are what we use to logically pinpoint what it is we are talking about.

Axioms have a lot to do with truth, and little to do with meaning.

Comment author: ArisKatsaris 18 April 2013 02:35:23PM *  1 point [-]

Axioms have a lot to do with truth, and little to do with meaning.

Would that make the Euclidean axioms just "false" according to you, instead of meaningfully defining the concept of a Euclidean space that turned out not to be completely corresponding to reality, but is still both quite useful and certainly meaningful as a concept?

I first read about the concept of axioms as a means of logical pinpointing in this, and it struck me as a brilliant insight which may dissolve a lot of confusions.

Comment author: PrawnOfFate 18 April 2013 02:36:34PM 0 points [-]

Corresponding to reality is physical truth, not mathematical truth.

Comment author: MugaSofer 19 April 2013 12:24:09PM -2 points [-]

Cannot upvote enough.

Also, pretty sure I've made this exact argument to Kawoomba before, but I didn't phrase it as well, so good luck!

Comment author: PrawnOfFate 18 April 2013 02:19:34PM 0 points [-]

Could it be that it turns out that we're that unethical mirror world, and our supposedly evil twins do in fact have it right? Do

If relativism is true, yes. If realism is true, no. So?

Or could both us and our mirror world be unethical, and really only a small cluster of sentient algae somewhere in the UDFy-38135539 galaxy has by chance gotten it right, and is acting ethically?

If realism is true, they could have got it right by chance, although whoever is right is more likely to be right by approaching it systematically.

All advanced societies will agree about 2+2!=5, because that's falsifiable.

Inasmuch as it is disprovable from non-arbitrary axioms. You are assuming that maths has non-arbitrary axioms, but morality doesn't. Is that reasonable?

Who gets to set the axioms and rules for ethicality? Us, the mirror world, the algae, god?

Axioms aren't true or false because of who is "setting" them. Maths is supposed to be able to do certain things: it is supposed to allow you to prove theorems, it is supposed to be free from contradiction, and so on. That considerably constrains the choice of axioms. Non-euthyphric moral realism works the same way.

Comment author: ArisKatsaris 18 April 2013 12:43:52PM *  0 points [-]

Imagine a mirror world, inhabited by our "evil" (from our perspective) twins. Now they all go around being all unethical, yet believing themselves to act ethically. They have the same model of physics, the same technological capabilities, they'd just be mistaken about being ethical.

Okay, let's try to figure out how that would work. A world where preferences are the same (e.g. everyone wants to live as long as possible, and wants other people to live as well), but the ethics are reversed (saving lives is considered morally wrong, murdering other people at random is morally right).

Don't you see an obvious asymmetry here between their world and ours? Their so-called ethics about murder (murder=good) would end up harming their preferences, in a way that our ethics about murder (murder=bad) does not?

Comment author: Kawoomba 18 April 2013 01:33:44PM 0 points [-]

So is it a component of the "correct" ethical preferences that they satisfy the preferences of others? It seems this way, since you use this to hold "our" ethics about murder over those of the mirror world (in actuality there'd be vast swaths of peaceful coexistence in the mirror world, e.g. in Rwanda).

But hold on, our ethical preferences aren't designed to maximize other sapients' preferences. Wouldn't it be more ethical still to not want anything for yourself, or to be happy to just stare at the sea floor, and orient those around you to look at the sea floor as well? Seems like those algae win, after all! God's chosen seaweed!

What about when a quadrillion bloodthirsty but intelligent killer-algae (someone sent them a Bible, which turned them violent) invade us - wouldn't it be more ethical for us to roll over, since that satisfies total preferences more effectively?

I see the asymmetry. But I don't see the connection to "there is a correct morality for all sentients". On the contrary, a more aggressive civilization might even out-colonize the peaceniks, and so overall satisfy the preferences of even more slaves, I mean, esteemed citizens.

Comment author: PrawnOfFate 18 April 2013 02:35:05PM 0 points [-]

On the contrary, a more aggressive civilization might even out-colonize the peaceniks, and so overall satisfy the preferences of even more slaves, I mean, esteemed citizens.

It clearly wouldn't satisfy their preference not to be slaves.

Comment author: Kawoomba 18 April 2013 02:44:21PM -1 points [-]

It clearly wouldn't satisfy their preference not to be slaves.

Slip of the tongue; you must have meant esteemed citizens.

You're concerned with the average preference satisfaction of other agents, then? Why not total average preference satisfaction, which you just rejected? Which is ethical, and who decides? Where are the axioms?

We're probably talking about different ethics, since I don't even know your axioms or priorities. Something about trying to satisfy the preferences of others, or at least taking that into account. What does that mean? To what degree? If one says "to this degree", and another says "to that degree", who's ethical? Neither, both? Who decides? There's no math that tells you to what degree satisfying others is ethical.

Is there an ethical component to flushing my toilet? To killing my goldfish? All my actions impact the world (that's the definition of an action), yet some are ethical (or unethical), whereas some are ethically undefined? How does that work?

Can I find it all written in an ancient scroll, by chance?

Comment author: PrawnOfFate 18 April 2013 02:31:48PM 0 points [-]

So is it a component of the "correct" ethical preferences that they satisfy the preferences of others?

Take into account, at least. In which case: of course. An "ethics" that was all about your own preferences would be vacuous--it would just be a duplicate of instrumental rationality.

Comment author: PrawnOfFate 18 April 2013 02:47:08PM -2 points [-]

But hold on, our ethical preferences aren't designed to maximize other sapients' preferences. Wouldn't it be more ethical still to not want anything for yourself

Not necessarily. Ethics uncontentiously includes fairness. Treating an arbitrary person's preferences as being unimportant would be unfair, so treating your own preferences as unimportant would be unfair.

Comment author: Kawoomba 18 April 2013 02:48:48PM 1 point [-]

No, no. Wouldn't it be more ethical if your preferences were "I want nothing above strict subsistence".

You can take those preferences to be as serious and important as any others.

More ethical, no?

Comment author: TheOtherDave 18 April 2013 01:12:17PM 0 points [-]

Can you expand on how you got the "preferences are the same" part?

Comment author: ArisKatsaris 18 April 2013 01:20:34PM 1 point [-]

I thought we were keeping everything else the same, and reversing only the ethics.

In a world where everyone preferred to be murdered as soon as possible, I can agree that murder may very well be ethical.

Comment author: TheOtherDave 18 April 2013 03:02:32PM 1 point [-]

What do you want to say about a world where everyone agreed that there were some people who they preferred be murdered, and some people they preferred not be murdered, and that it's ethical to murder people you prefer to be murdered, even if everyone doesn't necessarily agree on which people fall into which category?

Comment author: Estarlio 18 April 2013 01:13:53PM -1 points [-]

It would be correctly describing its preferences, and its preferences would not be ethically correct. You could construct an AI that frimly believed 2+2=5. And it would be wrong. As before, you are glibly assuming that the word "ethical" does no work, and can be dropped from the phrase "ethical value".

Well, what work does it do? You haven't pointed to or defined "ethically"; it's difficult to see how your statement is expected to parse:

"Their values wouldn't be [untranslatable 1] correct." is more or less what I'm getting at the moment.

What are you actually talking about? Where's your information for this idea that some values are 1+1=3-style incorrect coming from?

Comment author: MugaSofer 19 April 2013 12:19:20PM 0 points [-]

It's worth noting that they would definitely be "unethical" if we define "ethical" in terms of our own preferences. It's a rigid designator, just not one inscribed on a stone tablet at the center of the universe.

Comment author: PrawnOfFate 18 April 2013 03:11:23PM -2 points [-]

I didn't define any of the other words I used either. "Ethics" isn't a word I invented.

Where's your information for this idea that some values are 1+1=3-style incorrect coming from?

Moral realism. Shelves full of books have been written about it over many centuries. Why has no-one here heard of it?

Comment author: Estarlio 18 April 2013 04:30:15PM 0 points [-]

Moral realism. Shelves full of books have been written about it over many centuries. Why has no-one here heard of it?

Moral realism has been formulated in a great number of ways over the years. In my opinion never convincingly. A guy further up the thread mentioned the form of it you seem to be using.

Perhaps I was unclear. Where is your second correlate? What are you mapping onto? Where's your information coming from that you're right or wrong in light of?

If you just mean something to the effect of one should always act in a way that favours one's most dominant long-term interests, that seems to be the typical situational pragmatism account of normative ethics. As such:

A) A matter of pragmatism rather than what people would generally mean by ethics. To roughly paraphrase some guy whose name I can't remember, 'As soon as they can get away with doing otherwise they become justified in doing so.'

&

B) Massively unactionable for most people. It's not clear that my higher-order goals always outweigh a combination of lower-order goals, or even that they should, considering that rewards are going to vary over time.

I suppose you might formulate the idea that one should always act in the present such that one will have cause for the least regret in the future. That you would choose the same course of action for your past self looking back from the future as you would for your future self looking forwards from the past. Ethics would in other words be anti-akrasia.

And fair enough, maybe so. But now relating that back to discussion that you responded to I don't see how it serves one way or the other with respect to homosexuality and religion as preference choices, nor how it serves as a response to a refutation of moral universalism that arose in that discussion which you seemed to be replying to.

So - is that actually what you mean; how do you resolve the issues of relative weighting of preferences and changing situations; and if you resolve that, how do you apply it to the case in hand?

Comment author: PrawnOfFate 18 April 2013 04:47:19PM *  0 points [-]

Where's your information coming from that you're right or wrong in light of?

The functional role of ethics places constraints on metaethical axioms or maxims, which, when combined with facts about preferences, can be concretised into an object level ethics.

So - is that actually what you mean; how do you resolve the issues of relative weighting of preferences and changing situations; and if you resolve that, how do you apply it to the case in hand?

I don't know what the One True Ethics is. I don't know what the One True Physics is either. The latter doesn't refute physical realism, and the former doesn't refute metaethical realism. I am only arguing that realism is not obviously false, not that relativism is obviously true.

Comment author: MugaSofer 19 April 2013 12:17:22PM -2 points [-]

It's a real position, if one based on rather questionable arguments.

OTOH, there really are some "values" that (sufficiently advanced) consequentialists will hold, for instrumental reasons, unless they specifically value not doing so.