MugaSofer comments on Welcome to Less Wrong! (5th thread, March 2013) - Less Wrong
Well, this explains the mystery of why that got downvoted by someone.
Firstly, you're replying to an old version of my comment: the section you're replying to is part of a quote that had a formatting error, which is why it reads as a complete non-sequitur when taken as a reply. I did not write that; I merely replied to it.
You know, I agree with you, homosexuality isn't a great example there. However, it's trivially easy to ironman as "homosexuality is moral" or some other example involving the rationality skills of the general populace.
The fact that something is true only relative to a frame of reference does not mean it "can by definition not be objectively right or wrong". For example, if I believe it is correct (by my standards) to fly a plane into a building full of people, I am objectively wrong - this genuinely, verifiably doesn't satisfy my preferences. I may have been persuaded that a Friendly superintelligence has concluded that it is, or that it will cause me to experience subjective bliss (OK, this one is harder to disprove outright, since we could be in a simulation run by some very strange people. It is, however, irrational to believe it based on the available evidence).
Ayup.
As I said earlier, it's trivially easy to ironman that reference to mean one of the political positions regarding the sexual preference. If he had said "abortion", would you tell him that a medical procedure is a completely different thing to an empirical claim?
Forgive me if I disagree with that particular empirical claim about how our community thinks.
"The Bible is right because of the evidence of a supernatural resurrection" is an argument in itself, not something one derives from the First Cause. However, the prior of supernatural resurrections might be raised by a particular solution to the First Cause problem, I suppose, requiring that argument to be made first.
I guess I can follow that analogy - you require more evidence to postulate a specific First Mover than the existence of a generalized First Cause - but I have no idea how it bears on your misreading of my comment.
Source? I find most rationalists encounter more irrational beliefs being protected off from rational ones than the inverse.
How is that example any different, how is it not also a matter of your individual moral preferences? Again, you can imagine a society or species of rational agents that regard homosexuality as moral, just as you can imagine one that regards it as immoral.
By objectively right or wrong I meant right or wrong regardless of the frame of reference (as it's usually interpreted as far as I know). Of course you can be mistaken about your own preferences, and other agents can be mistaken when describing your preferences.
"Agent A has preference B" can be correct or incorrect / right or wrong / accurate or inaccurate, but "Preference B is moral, period, for all agents" would be a self-contradictory nonsense statement.
Of course "I think abortion is moral" can widely differ from rational agent to rational agent. Clippy talking to AbortAI (the abortion maximizing AI) could easily agree about what constitutes an abortion, or how that procedure is usually done. Yet they wouldn't need to agree about the morality each of them ascribes to that procedure. They would need to agree on how others ("this human in 21th century America") morally judge abortion, but they could still judge it differently. It is like "I prefer a ball in the box over no ball in the box", not like "There is a ball in the box".
I forgive you, though I won't die for your sins.
It is ... an argument ... strictly formally speaking. What else could explain some eyewitness testimony of an empty grave, if not divine intervention?
Only when some nonsense about "that cause must be a non-physical mind" is dragged in (without defining what a non-physical mind is, and reaching that conclusion by saying "either numbers or a mind could be first causes, and it can't be numbers") - and even then, the effect on the prior of some particular holy text on some planet in some galaxy in some galactic cluster would be negligible.
"I can confirm that it is indeed annoying", although I of course admit that this is branching out on a tangent - but why shouldn't we, it's a good place for branching out without having to start a new topic, or PMs.
Not everything I write needs to be controversial between us, it can be related to a comment I respond to, and you can agree or disagree, engage or disengage at your leisure.
What do you mean, protected off in the sense of compartmentalized / cordoned off?
We seem to be using "moral" differently. You're using it to refer to any preference, whereas I'm using it to refer to human ethical preferences specifically. I find this is more useful, for the reasons EY puts forth in the sequences.
If you can be mistaken - objectively mistaken - then you are in a state known as "objectively wrong", yes?
Again, I think we're arguing over terminology rather than meaning here.
Zing!
Because that's the only eyewitness testimony contained in the Bible.
Well, since neither of us actually has a solution to the First Cause argument (unless you're holding out on me), that's impossible to say. However, yes, if you believed that the solution involved an extra-universal superintelligence, it would raise the prior of someone claiming to be such a superintelligence and exhibiting apparently supernatural power being correct in these claims.
What does the relative strength of evidence required for various "godlike" hypotheses have to do with the annoyance of seeing a group you identify with held up as an example of something undesirable?
Uh ... sure ... I don't exactly reply to most comments you make.
Yup.
Which humans? Medieval peasants? Martyrs? Witch-torturers? Mercenaries? Chinese? US-Americans? If so, which party, which age-group?
The term is overloaded. I was referring to ideas such as e.g. moral universalism. An alien society - or really just a different human society - will have its own ethical preferences, and while they or you can be wrong in describing those preferences, they cannot be wrong in having them, other than their preferences being incompatible with someone else's. There is no universal reference frame; even if a god existed, his preferences would just amount to an argument from authority.
Negligibly so, especially if it's non-verifiable, second-hand stories passed down through the ages, and when the whole system is ostentatiously based on non-falsifiability in an empirical sense.
You realize that your fellow Christians from a few centuries back would have burned you for heresy if you had told them that many of the supernatural magic tricks were just meant as metaphors? Copernicus didn't doubt that Jesus Christ was a god-alien-human. They might not even have considered you a Christian. Never mind that - the current iteration has gotten it right, hasn't it? Your version, I mean.
There are three little pigs who saw the big bad wolf blowing away their houses, that's three eyewitnesses right there.
Do Adam and Eve count as eyewitnesses for the Garden of Eden?
OK. So moral realism is false, and moral relativism is true, and that's provable in a paragraph. Hmmm. Aliens and other societies might have all sorts of values, but that does not necessarily mean they have all sorts of ethical values. "Murder is good" might not be a coherent ethical principle, any more than "2+2=5" is a coherent mathematical one. The says-so of authorities, or Authorities, is not the only possible source of objectivity.
So if you constructed an artificial agent, you would somehow be stopped from encoding certain actions and/or goals as desirable? Or would that agent just be wrong in describing its own preferences when it then tells you "killing is good"?
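For instance (a toy sketch in Python, purely hypothetical and not a claim about any real AI design): nothing in the construction step rejects a "killing is good" goal, and the resulting agent reports its encoded preference accurately.

```python
# A constructed agent with an arbitrary, designer-chosen preference encoding.
class Agent:
    def __init__(self, preferences):
        self.preferences = preferences  # whatever the designer encodes

    def report(self, action):
        # The agent accurately describes its own encoded preferences.
        return "good" if self.preferences.get(action, 0) > 0 else "bad"

killer = Agent({"killing": +1})   # nothing stops this encoding
print(killer.report("killing"))   # -> "good": an accurate self-report
```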
Certain headwear must be worn by pious women. Light switches must not be used on certain days by god-abiding men. Infidels must be killed. All of those are ethical from even some human's frame of reference. Seems pretty variable.
It would be correctly describing its preferences, and its preferences would not be ethically correct. You could construct an AI that firmly believed 2+2=5, and it would be wrong. As before, you are glibly assuming that the word "ethical" does no work and can be dropped from the phrase "ethical value".
All of those are believed ethical. It's very shallow to argue for relativism by ignoring the distinction between believed-to-be-true and true.
Imagine a mirror world, inhabited by our "evil" (from our perspective) twins. Now they all go around being all unethical, yet believing themselves to act ethically. They have the same model of physics, the same technological capabilities, they'd just be mistaken about being ethical.
Could it be that it turns out that we're that unethical mirror world, and our supposedly evil twins do in fact have it right? Do you claim to know at least some of what's universally ethical, or could you unknowingly be the evil twin who believes himself to be ethical?
Or could both us and our mirror world be unethical, and really only a small cluster of sentient algae somewhere in the UDFy-38135539 galaxy has by chance gotten it right, and is acting ethically?
All advanced societies will agree that 2+2 != 5, because that's falsifiable. Who gets to set the axioms and rules for ethicality? Us, the mirror world, the algae, god?
Axioms are what we use to logically pinpoint what it is we are talking about. If our world and theirs have different axioms for "ethicality", then they simply don't have what we mean by "ethicality" -- and we don't have what they mean by the word "ethicality".
Our two worlds would then not actually disagree about ethics the concept; they instead disagree about "ethics" the word, much like 'tier' means one thing in English and another thing in German.
Unfortunately, words of natural language have the annoying property that it's often very hard to tell if people are disagreeing about the extension or the meaning. It's also hard to tell what disagreement about the meaning of a word actually is.
The analogy is flawed. German and English speakers don't disagree about the word (conceived as a string of phonemes; otherwise "tier" and "Tier" are not identical), and it's not at all clear that disagreement about the meaning of words is the same thing as speaking two different languages. It's certainly phenomenologically pretty different.
I do agree that reducing it to speaking different languages is one way to dissolve disagreement about meaning. But I'm not convinced that this is the right approach. Some words are in acute danger of being dissolved along with the question, in that it will turn out that almost everyone has their own meaning for the word, and everybody is talking past each other. It also leaves you needing to explain where the persistent illusion that people are disagreeing, when they're in fact just talking past each other, comes from - an illusion which persists even when you explain to them that they're just speaking two different languages; they'll often say no, they're not, they're speaking the same language but the other person is using the word wrongly.
Of course, all of this is connected to the problem that nobody seems to know what kind of thing a meaning is.
So there is an objective measure for what's "right" and "wrong" regardless of the frame of reference, there is such a thing as correct, individual-independent ethics, but other people may just decide not to give a hoot, using some other definition of ethics?
Well, let's define a series of ethics, from ethics_1 to ethics_n. Let's call your system of ethics, the one which contains a "correct" conclusion such as "murder is WRONG", say, ethics_211412312312.
Why should anyone care about ethics_211412312312?
(If you don't mind, let's consolidate this into the other sub-thread we have going.)
Axioms have a lot to do with truth, and little to do with meaning.
Cannot upvote enough.
Also, pretty sure I've made this exact argument to Kawoomba before, but I didn't phrase it as well, so good luck!
If relativism is true, yes. If realism is true, no. So?
If realism is true, they could have got it right by chance, although whoever is right is more likely to be right by approaching it systematically.
Inasmuch as it is disprovable from non-arbitrary axioms. You are assuming that maths has non-arbitrary axioms, but morality doesn't. Is that reasonable?
Axioms aren't true or false because of who is "setting" them. Maths is supposed to be able to do certain things: it is supposed to allow you to prove theorems, it is supposed to be free from contradiction, and so on. That considerably constrains the choice of axioms. Non-Euthyphric moral realism works the same way.
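To illustrate just the maths half of that claim (a two-line Lean 4 sketch; nothing here bears on the ethics half, which is exactly what's in dispute), the standard axioms for the natural numbers settle both questions:

```lean
-- Under Lean's standard (Peano-style) axioms for the natural numbers,
-- "2 + 2 = 4" is provable and "2 + 2 = 5" is refutable.
example : 2 + 2 = 4 := rfl
example : 2 + 2 ≠ 5 := by decide
```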
Okay, let's try to figure out how that would work: a world where preferences are the same (e.g. everyone wants to live as long as possible, and wants other people to live as well), but the ethics are reversed (saving lives is considered morally wrong, murdering other people at random is morally right).
Don't you see an obvious asymmetry here between their world and ours? Their so-called ethics about murder (murder=good) would end up harming their preferences, in a way that our ethics about murder (murder=bad) do not.
So is it a component of the "correct" ethical preferences that they satisfy the preferences of others? It seems this way, since you use this to hold "our" ethics about murder over those of the mirror world. (In actuality there'd be vast swaths of peaceful coexistence in the mirror world, e.g. in Rwanda.)
But hold on, our ethical preferences aren't designed to maximize other sapients' preferences. Wouldn't it be more ethical still to not want anything for yourself, or to be happy to just stare at the sea floor, and orient those around you to look at the sea floor as well? Seems like those algae win, after all! God's chosen seaweed!
What about when a quadrillion bloodthirsty but intelligent killer-algae (someone sent them a Bible, which turned them violent) invade us - wouldn't it be more ethical for us to roll over, since that satisfies total preferences more effectively?
I see the asymmetry. But I don't see the connection to "there is a correct morality for all sentients". On the contrary, a more aggressive civilization might even out-colonize the peaceniks, and so overall satisfy the preferences of even more slaves, I mean, esteemed citizens.
Can you expand on how you got the "preferences are the same" part?
Well, what work does it do? You haven't pointed to or defined "ethically", so it's difficult to see how your statement is expected to parse:
"Their values wouldn't be [untranslatable 1] correct." is more or less what I'm getting at the moment.
What are you actually talking about? Where's your information for this idea that some values are 1+1=3 style incorrect coming from?
It's worth noting that they would definitely be "unethical" if we define "ethical" in terms of our own preferences. It's a rigid designator, just not one inscribed on a stone tablet at the center of the universe.
I didn't define any of the other words I used either. "Ethics" isn't a word I invented.
Moral realism. Shelves full of books have been written about it over many centuries. Why has no-one here heard of it?
Moral realism has been formulated in a great number of ways over the years. In my opinion never convincingly. A guy further up the thread mentioned the form of it you seem to be using.
Perhaps I was unclear. Where is your second correlate? What are you mapping onto? Where's your information coming from that you're right or wrong in light of?
If you just mean something to the effect of "one should always act in a way that favours one's most dominant long-term interests", that seems to be the typical situational-pragmatism account of normative ethics. As such, it's:
A) A matter of pragmatism rather than what people would generally mean by ethics. To roughly paraphrase some guy whose name I can't remember, 'As soon as they can get away with doing otherwise they become justified in doing so.'
&
B) Massively unactionable for most people. It's not clear that my higher-order goals always outweigh a combination of lower-order goals, or even that they should, considering that rewards are going to vary over time.
I suppose you might formulate the idea that one should always act in the present such that one will have cause for the least regret in the future. That you would choose the same course of action for your past self looking back from the future as you would for your future self looking forwards from the past. Ethics would in other words be anti-akrasia.
And fair enough, maybe so. But now, relating that back to the discussion you responded to, I don't see how it serves one way or the other with respect to homosexuality and religion as preference choices, nor how it serves as a response to the refutation of moral universalism that arose in that discussion, which you seemed to be replying to.
So - is that actually what you mean; how do you resolve the issues of relative weighting of preferences and changing situations; and if you resolve that, how do you apply it to the case in hand?
It's a real position, if one based on rather questionable arguments.
OTOH, there really are some "values" that (sufficiently advanced) consequentialists will hold for instrumental reasons, unless they specifically value not doing so.