I don't see the theism/atheism debate as a policy debate. There is a factual question underlying it, and that factual question is "does God exist?" I find it very hard to imagine a universe where the answer to that question is neither 'yes' nor 'no'.
Scenario:
1) You wake up in a bright box of light, no memories. You are told you'll presently be born into an absolute monarchy, your role randomly chosen. You may choose any moral principles that should govern that society. The Categorical Imperative would on average give you the best result.
2) You are the monarch in that society; you do not need to guess which role you're being born into, because you have that information. You don't need to make all the slaves happy to further your goals; you can just maximize your goals directly. You may choose any moral principle you want to govern your actions. The Categorical Imperative would not give you the best result.
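To make the contrast concrete, here is a minimal sketch with made-up payoff numbers (nothing in the scenario fixes them) of why a rule chosen behind the veil of ignorance differs from a rule chosen by someone who already knows they are the monarch:

```python
# Toy numbers, purely for illustration: compare an egalitarian (CI-like) rule
# with a monarch-favouring rule, first behind the veil of ignorance (role
# assigned at random), then with the monarch role already known.

payoffs = {
    # rule: (payoff to the monarch, payoff to each other subject)
    "egalitarian (CI-like)": (6, 6),
    "monarch-favouring": (10, 1),
}

n_roles = 100  # one monarch, 99 subjects

for rule, (monarch_u, subject_u) in payoffs.items():
    # Scenario 1: you might be born into any role with equal probability
    expected_behind_veil = (monarch_u + (n_roles - 1) * subject_u) / n_roles
    # Scenario 2: you already know you are the monarch
    as_monarch = monarch_u
    print(f"{rule:22s}  behind veil: {expected_behind_veil:5.2f}  as monarch: {as_monarch}")

# Behind the veil the egalitarian rule has the higher expectation;
# once the role is known, the monarch-favouring rule does.
```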
A different scenario: Clippy and Anti-Clippy sit in a room. Why can they not agree on epistemic facts about the most accurate laws of physics and other Aumann-mandated agreements, yet then go out and each optimize/reshape the world according to their own goals? Why would that make them not rational?
Lastly, whatever Kant's justification, why can you not optimize for a different principle - peak happiness versus average happiness? What makes any particular justifying principle correct across all - rational - agents? Here come my algae!
You are the monarch in that society; you do not need to guess which role you're being born into, because you have that information. You don't need to make all the slaves happy to further your goals; you can just maximize your goals directly. You may choose any moral principle you want to govern your actions. The Categorical Imperative would not give you the best result.
For what value of "best"? If the CI is the correct theory of morality, it will necessarily give you the morally best result. Maybe your complaint is that it wouldn't maximise your personal utility. But I don't see why you would expect that. Things like utilitarianism, which seek to maximise group utility, don't promise to make everyone blissfully happy individually. Some will lose out.
A different scenario: Clippy and Anti-Clippy sit in a room. Why can they not agree on epistemic facts about the most accurate laws of physics and other Aumann-mandated agreements, yet then go out and each optimize/reshape the world according to their own goals? Why would that make them not rational?
It would be irrational for Clippy to sign up to an agreement with Beady according to which Beady gets to turn Clippy and all his clips into beads. It is irrational for agents to sign up to anything which is not in their interests, and it is not in their interests to have no contract at all. So rational agents, even if they do not converge on all their goals, will negotiate contracts that minimise their disutility. Clippy and Beady might take half the universe each.
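As a rough sketch of why that half-each contract can be rational for both, assuming (hypothetically) a 50/50 chance of winning an all-out conflict and some fraction of the universe destroyed by fighting:

```python
# All numbers are hypothetical; the point is only the structure of the bargain.

universe = 1.0        # total resources at stake
p_clippy_wins = 0.5   # assumed probability Clippy wins an open conflict
conflict_waste = 0.4  # assumed fraction of resources destroyed by fighting

# Expected share for each agent if they fight it out
clippy_if_fight = p_clippy_wins * universe * (1 - conflict_waste)
beady_if_fight = (1 - p_clippy_wins) * universe * (1 - conflict_waste)

# Negotiated contract: split the universe, nothing destroyed
clippy_if_deal = beady_if_deal = universe / 2

print(f"Clippy: fight {clippy_if_fight:.2f}  vs  deal {clippy_if_deal:.2f}")
print(f"Beady:  fight {beady_if_fight:.2f}  vs  deal {beady_if_deal:.2f}")
# With any positive waste from conflict, both agents prefer the half-each contract.
```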
Lastly, whatever Kant's justification, why can you not optimize for a different principle - peak happiness versus average happiness? What makes any particular justifying principle correct across all - rational - agents?
If you think RAs can converge on an ultimately correct theory of physics (which we don't have), what is to stop them converging on the correct theory of morality, which we also don't have?
First of all, thanks for the comment. You have really motivated me to read and think about this more -- starting with getting clearer on the meanings of "objective", "subjective", and "intrinsic". I apologise for any confusion caused by my incorrect use of terminology. I guess that is why Eliezer likes to taboo words. I hope you don't mind me persisting in trying to explain my view and using those "taboo" words.
Since I was talking about meta-ethical moral relativism, I hope that it was sufficiently clear that I was referring to moral values. What I meant by "objective values" was "objectively true moral values" or "objectively true intrinsic values".
The second sentence doesn't follow from the first.
The second sentence was an explanation of the first: not logically derived from the first sentence, but a part of the argument. I'll try to construct my arguments more linearly in future.
If I had to rephrase that passage I'd say:
If there are no agents to value something, intrinsically or extrinsically, then there is also nothing to act on those values. In the absence of agents to act, values are effectively meaningless. Therefore, I'm not convinced that there is objective truth in intrinsic or moral values.
However, the lack of meaningful values in the absence of agents hints at agents themselves being valuable. If value can only have meaning in the presence of an agent, then that agent probably has, at the very least, extrinsic/instrumental value. Even a paperclip maximiser would probably consider itself to have instrumental value, right?
If rational agents converge on their values, that is objective enough.
I think there is a difference between it being objectively true that, in certain circumstances, the values of rational agents converge, and it being objectively true that those values are moral. A rational agent can do really "bad" things if the beliefs and intrinsic values on which it is acting are "bad". Why else would anyone be scared of AI?
Analogy: one can accept that mathematical truth is objective (mathematicians will converge) without being a Platonist (mathematical truths have an existence separate from humans).
I accept the possibility of objective truth values. I'm not convinced that it is objectively true that the convergence of subjectively true moral values indicates objectively true moral values. As far as values go, moral values don't seem to be as amenable to rigorous proofs as formal mathematical theorems. We could say that intrinsic values seem to be analogous to mathematical axioms.
I find that hard to follow. If the test is rationally justifiable, and leads to uniform results, how is that not objective?
I'll have a go at clarifying that passage with the right(?) terminology:
Without the objective truth of intrinsic values, it might just be a matter of testing different sets of assumed intrinsic values until we find an "optimal" or acceptable convergent outcome.
Morality might be somewhat like an NP-hard optimisation problem. It might be objectively true that we get a certain result from a test. It's more difficult to say that it is objectively true that we have solved a complex optimisation problem.
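A loose sketch of that verification-versus-optimisation gap, using a toy scoring function invented purely for illustration:

```python
# Checking the outcome of one candidate value-set is cheap; certifying that it
# is the best one means searching a space that grows exponentially.

from itertools import product

def outcome_score(weights):
    # Stand-in for "run society under these assumed intrinsic values and measure the result"
    w1, w2, w3 = weights
    return 3 * w1 + 2 * w2 - 4 * w1 * w3 + w2 * w3

candidate = (1, 0, 1)
print("This candidate scores:", outcome_score(candidate))  # easy to verify

# Claiming it is optimal requires (in the worst case) examining every combination
best = max(product((0, 1), repeat=3), key=outcome_score)
print("Brute-force best:", best, "score:", outcome_score(best))
```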
You seem to be using "objective" (having a truth value independent of individual humans) to mean what I would mean by "real" (having existence independent of humans).
Thanks for informing me that my use of the term "objective" was confused/confusing. I'll keep trying to improve the clarity of my communication and understanding of the terminology.
First of all, thanks for the comment. You have really motivated me to read and think about this more
That's what I like to hear!
If there are no agents to value something, intrinsically or extrinsically, then there is also nothing to act on those values. In the absence of agents to act, values are effectively meaningless. Therefore, I'm not convinced that there is objective truth in intrinsic or moral values.
But there is no need for morality in the absence of agents. When agents are there, values will be there; when agents are not there, the absence of values doesn't matter.
I think there is a difference between it being objectively true that, in certain circumstances, the values of rational agents converge, and it being objectively true that those values are moral. A rational agent can do really "bad" things if the beliefs and intrinsic values on which it is acting are "bad". Why else would anyone be scared of AI?
I don't require their values to converge, I require them to accept the truths of certain claims. This happens in real life. People say "I don't like X, but I respect your right to do it". The first part says X is a disvalue, the second is an override coming from rationality.
I'm not disputing that there are goals/ethics which may be best suited to take humanity along a certain trajectory, towards a previously defined goal (space exploration!). Given a different predefined goal, the optimal path there would often be different. Say, ruthless exploitation may have certain advantages in empire building, under certain circumstances.
The Categorical Imperative in all its variants may be a decent system for humans (not that anyone really uses it).
But is the justification for its global applicability that "if everyone lived by that rule, average happiness would be maximized"? That (or any other such consideration) itself is not a mandatory goal, but a chosen one. Choosing different criteria to maximize (e.g. no one less happy than x) would yield different rules - rules different from the Categorical Imperative. If you find yourself to be the worshipped god-king in some ancient Mesopotamian culture, there may be many more effective ways to make yourself happy, other than the Categorical Imperative. How can it still be said to be "correct"/optimal for the king, then?
So I'm not saying there aren't useful ethical systems (as judged in relation to some predefined course), but that, because the ultimate goals of various rational agents (happiness, paperclips, replicating yourself all over the universe) and the associated optimal ethics vary, there cannot be one system that optimizes for all conceivable goals.
My argument against moral realism and related positions is that if you had an axiomatic system from which it followed that strawberry is the best flavor of ice cream, but other agents which are just as intelligent, with just as much optimizing power, could use different axiomatic systems leading to different conclusions, how could one such system possibly be taken to be globally correct and compelling-to-adopt across agents with different goals?
Gandhi wouldn't take a pill which may transform him into a murderer. Clippy would not willingly modify itself such that it suddenly had different goals. Once you've taken a rational agent apart and know its goals and, as a component, its ethical subroutines, there is no further "core spark" which really yearns to adopt the Categorical Imperative. Clippy may choose to use it, for a time, if it serves its ultimate goals. But any given ethical code will never be optimal for arbitrary goals, in perpetuity (proof by example). Why then would a particular code following from particular axioms be adopted by all rational agents?
But is the justification for its global applicability that "if everyone lived by that rule, average happiness would be maximized"?
Well, no, that's not Kant's justification!
That (or any other such consideration) itself is not a mandatory goal, but a chosen one.
Why would a rational agent choose unhappiness?
If you find yourself to be the worshipped god-king in some ancient Mesopotamian culture, there may be many more effective ways to make yourself happy, other than the Categorical Imperative.
Yes, but that wouldn't count as ethics. You wouldn't want a Universal Law that one guy gets the harem, and everyone else is a slave, because you wouldn't want to be a slave, and you probably would be. This is brought out in Rawls' version of Kantian ethics: you pretend to yourself that you are behind a veil that prevents you knowing what role in society you are going to have, and choose rules that you would want to have if you were to enter society at random.
My argument against moral realism and related positions is that if you had an axiomatic system from which it followed that strawberry is the best flavor of ice cream, but other agents which are just as intelligent, with just as much optimizing power, could use different axiomatic systems leading to different conclusions,
You don't have object-level stuff like ice cream or paperclips in your axioms (maxims); you have abstract stuff, like the Categorical Imperative. You then arrive at object-level ethics by plugging in details of actual circumstances and values. These will vary, but not in an arbitrary way, as is the disadvantage of anything-goes relativism.
how could one such system possibly be taken to be globally correct and compelling-to-adopt across agents with different goals?
The idea is that things like the CI have rational appeal.
Once you've taken a rational agent apart and know its goals and, as a component, its ethical subroutines, there is no further "core spark" which really yearns to adopt the Categorical Imperative.
Rational agents will converge on a number of things because they are rational. None of them will think 2+2=5.
Yeah, honestly I've never seen the exact distinction between goals which have an ethics-rating and goals which do not. I understand that humans share many ethical intuitions, which isn't surprising given our similar hardware. Also, that it may be possible to define some axioms for "medieval Han Chinese ethics" (or some subset thereof), and then say we have an objectively correct model of their specific ethical code. Among the intuitions shared by most humans could be, e.g., "murdering your parents is wrong" (not even "murder is wrong", since that varies across cultures and circumstances). I'd still call those systems different, just as different cars can have the same type of engine.
Also, I understand that different alien cultures, using different "ethical axioms", or whatever they base their goals on, do not invalidate the medieval Han Chinese axioms, they merely use different ones.
My problem with "objectively correct ethics for all rational agents" is, you could say, where the compellingness of any particular system comes in. There is reason to believe an agent such as Clippy could exist, and its very existence would contradict some "'rational' corresponds to a fixed set of ethics" rule. If someone would say "well, Clippy isn't really rational then", that would just be torturously warping the definition of "rational actor" to "must also believe in some specific set of ethical rules".
If I remember correctly, you say that at least for humans there is a common ethical basis which we should adopt (correct me if I'm wrong). I guess I see more variance and differences where you see common elements, especially going into the future. Should some bionically enhanced human, or an upload on a space station which doesn't even have parents, still share all the same rules for "good" and "bad" as an Amazon tribe living in an enclosed reservation? "Human civilization" is more of a loose umbrella term, and while there certainly can be general principles which some still share, I doubt there's that much in common in the ethical code of an African child soldier and Donald Trump.
Yeah, honestly I've never seen the exact distinction between goals which have an ethics-rating and goals which do not
A number of criteria have been put forward. For instance, do as you would be done by. If you don't want to be murdered, murder is not an ethical goal.
My problem with "objectively correct ethics for all rational agents" is, you could say, where the compellingness of any particular system comes in. There is reason to believe an agent such as Clippy could exist, and its very existence would contradict some "'rational' corresponds to a fixed set of ethics" rule. If someone would say "well, Clippy isn't really rational then", that would just be torturously warping the definition of "rational actor" to "must also believe in some specific set of ethical rules".
The argument is not that rational agents (for some value of "rational") must believe in some rules; it is rather that they must not adopt arbitrary goals. Also, the argument only requires a statistical majority of rational agents to converge, because of the P<1.0 thing.
Should some bionically enhanced human, or an upload on a space station which doesn't even have parents, still share all the same rules for "good" and "bad" as an Amazon tribe living in an enclosed reservation?
Maybe not. The important thing is that variations in ethics should not be arbitrary--they should be systematically related to variations in circumstances.
That traditional anecdote (and its modified forms) only illustrate how little the pro-qualia advocates understand the arguments against the idea.
Dismissing 'qualia' does not, as many people frequently imply, require dismissing the idea that sensory stimuli can be distinguished and grouped into categories. That would be utterly absurd - it would render the senses useless and such a system would never have evolved.
All that's needed is to reject the idea that there are some mysterious properties of sensation which somehow violate basic logic and the principles of information theory.
All that's needed is to reject the idea that there are some mysterious properties of sensation which somehow violate basic logic and the principles of information theory.
Blatant strawman.
Maths isn't very relevant to Rand's philosophy. What's more relevant about her Aristotelianism is her attitude to modern science; she was fairly ignorant, and fairly sceptical, of evolution, QM, and relativity.
"...getting them to admit that Scandinavia is not doing something inherently wrong with it's high tax system, given that they have relatively high happiness and quality of life."
There is another conservative argument against this: To acknowledge that it might actually be true that the average happiness is increased, but to reject the morality of it.
To see why someone might think that, imagine the following scenario: you find scientific evidence that forcing the minority of the best-looking young women of a society, at gunpoint, to be of sexual service to whomever wishes to be pleased (there will be a government office regulating this) increases the average happiness of the country.
In other words, my argument questions that the happiness (needs/wishes/etc.) of a majority is at all relevant. This position is also known as individualism and at the root of (American) conservatism.
To see why someone might think that, imagine the following scenario: you find scientific evidence that forcing the minority of the best-looking young women of a society, at gunpoint, to be of sexual service to whomever wishes to be pleased (there will be a government office regulating this) increases the average happiness of the country.
If you disregard the happiness of the women, anyway
In other words, my argument questions that the happiness (needs/wishes/etc.) of a majority is at all relevant. This position is also known as individualism and at the root of (American) conservatism.
This can be looked at as a form of deontology: govts don't have the right to tax anybody, and the outcomes of wisely spent taxation don't affect that.
Getting rid of religion is a bit like getting rid of the economy or government. Yes, the whole business of ritual (and most other cultural stuff religion claims) can be changed, eliminating religion as we know it today, but simply declaring one day that "religion doesn't exist" will lead to other problems, which may actually be WORSE than some people holding a usually non-harmful belief, or belief-in-belief. Cults, of personality and otherwise, come up as a terrifying option...
Changing religion is a Long Game.
A far more constructive use of one's time, to increase rationality in the population, is to encourage rational thinking among the majority of mankind (who are religious, anyway, so you give them the option of thinking about religion better, thus playing the Long Game).
Uncomfortable truth warning:
Atheists have to concede that religion is widespread because people are in some sense wired up for it. Getting rid of religion, therefore, does not get rid of religious thinking, feeling and behaviour. This can be seen in the prevalence of quasi-religious rituals, such as going to concerts to worship "rock gods", regarding charismatic politicians as "saviours of the nation", and various other phenomena hiding in plain sight.
A further step, and one that is rarely taken, is realising that atheists and rationalists aren't immune. People who identify as atheists don't want to concede that they might still have some baggage of religious behaviour, because that would mean they are no longer firmly in the Tribe of Good People... but that is itself a religious pattern.
I broadly agree. My thinking ties shoulds and musts to rules and payoffs. Wherever you are operating a set of rules (which might be as localised as playing chess), you have certain localised "musts".
I'm very resistant to the idea, promoted by EY in the thread you referenced, that the meaning of "should" changes. Does he think chess players have a different concept of "rule" to poker players?