My position is something like "I haven't yet seen anyone compellingly both define and argue for moral realism, so until then the whole notion seems confused to me".
It is unclear to me what it would even mean for a moral claim to actually be objectively true or false. At the same time, there are many evolutionary and game-theoretic reasons why various moral claims would feel objectively true or false to human minds, and that seems sufficient for explaining why many people have an intuition of moral realism being true. I have also personally found some of my moral beliefs changing as a result of psychological work - see the second example here - which makes me further inclined to believe that moral beliefs are all psychological (and thus subjective, as I understand the term).
So my argument is simply that there doesn't seem to be any reason for me to believe in moral realism, somewhat analogous to how there doesn't seem to be any reason for me to believe in a supernatural God.
I think a simpler way to state the objection is to say that "value" and "meaning" are transitive verbs. I can value money; Steve can value cars; Mike can value himself. It's not clear what it would even mean for objective reality to value something. Similarly, a subject may "mean" a referent to an interpreter, but nothing can just "mean" or even "mean something" without an implicit interpreter, and "objective reality" doesn't seem to be the sort of thing that can interpret.
How do you feel about:
1. There is a procedure/algorithm which doesn't seem biased towards a particular value system, such that a class of AI systems that implement it end up having a common set of values, and they endorse the same values upon reflection.
2. This set of values might have something in common with what we, humans, call values.
If 1 and 2 seem at least plausible or conceivable, why can't we use them as a basis to design aligned AI? Is it because of skepticism towards 1 or 2?
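For what it's worth, here is a toy sketch (in Python, with made-up value names and a made-up reflection rule) of the kind of situation claim 1 describes: agents start from different value weights, all run the same procedure, and end up endorsing the same weights on reflection. It is not evidence that a realistic, genuinely unbiased procedure has this property; that is exactly what a skeptic about 1 would deny.

```python
# Toy sketch of claim 1: agents with different starting values all run the same
# value-agnostic reflection procedure and converge on a common set of weights.
# Everything here (value names, the reflection rule) is invented for illustration;
# it does not show that a realistic unbiased procedure would behave this way.
import random

VALUE_NAMES = ["honesty", "welfare", "autonomy"]

def reflect(own_values, all_values):
    """One reflection step: move each weight halfway toward the group average.
    The rule treats every value dimension identically, so it does not privilege
    any particular value system."""
    return {
        name: 0.5 * own_values[name]
        + 0.5 * sum(v[name] for v in all_values) / len(all_values)
        for name in VALUE_NAMES
    }

random.seed(0)
agents = [{name: random.random() for name in VALUE_NAMES} for _ in range(5)]

for _ in range(50):
    agents = [reflect(a, agents) for a in agents]

# After many steps every agent holds (nearly) the same weights, and a further
# reflection step no longer changes them: a cartoon of "endorsed upon reflection".
print(agents[0])
print(agents[1])
```

Note that the convergence here is baked into the averaging rule, which is precisely the question-begging step a critic of 1 would point at.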
There's a counterargument-template which roughly says "Suppose the ground-truth source of morality is X. If X says that it's good to torture babies (not in exchange for something else valuable, just good in its own right), would you then accept that truth and spend your resources to torture babies? Does X saying it's good actually make it good?"
Applied to the most strawmannish version of moral realism, this might say something like "Suppose the ground-truth source of morality is a set of stone tablets inscribed with rules. If one day someone finds the tablets, examines them, and notices some previously-overlooked text at the bottom saying that it's good to torture babies, would you then accept this truth and spend your resources to torture babies? Does the tablets saying it's good actually make it good?"
Applied to a stronger version of moral realism, it might say something like "Suppose the ground-truth source of morality is game-theoretic cooperation. If it turns out that, in our universe, we can best cooperate with most other beings by torturing babies (perhaps as a signal that we are willing to set aside our own preferences in order to cooperate), would you then accept this truth and spend your resources to torture babies? Does the math saying it's good actually make it good?"
The point of these templated examples is not that the answer is obviously "no". (Though "no" is definitely my answer.) A true moral realist will likely respond by saying "yes, but I do not believe that X would actually say that". That brings us to the real argument: why does the moral realist believe this? "What do I think I know, and how do I think I know it?" What causal, physical process resulted in that belief?
(Often, the reasoning goes something like "I'm fairly confident that torturing babies is bad, therefore I'm fairly confident that the ground-truth source of morality will say it's bad". But then we have to ask: why are my beliefs about morality evidence for the ground-truth? What physical process entangled these two? If the ground-truth source had given the opposite answer, would I currently believe the opposite thing?)
In the strawmannish case of the stone tablets, there is pretty obviously no causal link. Humans' care for babies' happiness seems to have arisen for evolutionary fitness reasons; it would likely be exactly the same if the stone tablets said something different.
In the case of game-theoretic cooperation, one could argue that evolution itself is selecting according to the game-theoretic laws in question. On the other hand, thou art godshatter, and also evolution is entirely happy to select for eating other people's babies in certain circumstances. The causal link between game-theoretic cooperation and our particular evolved preferences is unreliable at best.
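As a concrete, textbook-level illustration of how much the game-theoretic verdict depends on contingent parameters, here is a small Python sketch of the repeated prisoner's dilemma: with grim-trigger strategies, mutual cooperation is an equilibrium only when the discount factor is at least (T - R) / (T - P), so merely changing the payoffs or the players' patience flips what the math "endorses". The payoff numbers below are arbitrary.

```python
# Standard repeated prisoner's dilemma calculation, included only to illustrate
# the point above: what game theory "endorses" depends on contingent parameters.
# With grim-trigger strategies, mutual cooperation is an equilibrium only when
# the discount factor delta is at least (T - R) / (T - P), where T > R > P are
# the temptation, reward, and punishment payoffs (the sucker payoff does not
# enter this particular threshold).

def cooperation_sustainable(T: float, R: float, P: float, delta: float) -> bool:
    """True if grim trigger sustains mutual cooperation at discount factor delta."""
    return delta >= (T - R) / (T - P)

# Same payoff structure, different patience: the "recommended" behaviour flips.
print(cooperation_sustainable(T=5, R=3, P=1, delta=0.9))  # True: cooperate
print(cooperation_sustainable(T=5, R=3, P=1, delta=0.3))  # False: defect
```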
At this point, one could still self-consistently declare that the ground-truth source is still correct, even if one's own intuitions are an unreliable proxy. But I think most moral realists would update away from the position if they understood, on a gut level, just how often their preferred ground-truth source diverges from their moral intuitions. Most just haven't really attacked the weak points of that belief. (And in fact, if they would update away upon learning that the two diverge, then they are not really moral realists, regardless of whether the two do diverge much.)
Side-note: Three Worlds Collide is a fun read, and is not-so-secretly a great thinkpiece on moral realism.
There's a counterargument-template which roughly says "Suppose the ground-truth source of morality is X. If X says that it's good to torture babies (not in exchange for something else valuable, just good in its own right), would you then accept that truth and spend your resources to torture babies? Does X saying it's good actually make it good?"
I'm not sure if I'm able to properly articulate my thoughts on this but I'd be interested to know if it's understandable and where it might fit. Sorry if I repeat myself.
From my perspective it's like if you appli...
Thank you for the detailed answer! I'll read Three Worlds Collide.
That brings us to the real argument: why does the moral realist believe this? "What do I think I know, and how do I think I know it?" What causal, physical process resulted in that belief?
I think a world full of people who are always blissed out is better than a world full of people who are always depressed or in pain. I don't have a complete ordering over world-histories, but I am confident in this single preference, and if someone called this "objective value" or "moral truth" I wouldn't s...
If you think nothing is "valuable in itself" / "objectively valuable", why do you think so?
I think that's the wrong way round. If you want to claim things have some property, then you have to put forward evidence they do. My strongest argument that things do not objectively have value is, why on earth would you think they do?
It's also clear that this discussion is fruitless. The only way to make progress will be to give some sort of definition for "objective value" at which point this will degenerate into an argument about semantics.
I didn't want to start a long discussion. My idea was to get some feedback and see whether there were important ideas I hadn't considered.
The strongest argument I know of for this is "that's the default, simplest explanation". My prior is quite low that there's any external force which values or judges things. I have yet to see any evidence that there is such a thing.
Mostly: it's hard to prove a negative, but you shouldn't have to, unless there some positive evidence to explain otherwise.
My own argument, see https://www.lesswrong.com/posts/zm3Wgqfyf6E4tTkcG/the-short-case-for-verificationism and the post it links back to.
It seems that if external reality is meaningless, then it's difficult to ground any form of morality that says actions are good or bad insofar as they have particular effects on external reality.
That is an interesting point. More or less, I agree with this sentence in your first post:
As far as I can tell, we can do science just as well without assuming that there's a real territory out there somewhere.
in the sense that one can do science by speaking only about their own observations, without making a distinction between what is observed and what "really exists".
On the other hand, when I observe that other nervous systems are similar to my own nervous system, I infer that other people have subjective experiences similar to mine. How does this fit in your framework? (Might be irrelevant, sorry if I misunderstood)
Moral realism:
I think determinism qualifies. Morality implies right versus wrong, which implies the existence of errors. If everything is predetermined by initial conditions, the concept of error becomes meaningless. You can't correct your behavior any more than an atom on Mars can; que sera, sera. Everything becomes a consequence of the initial conditions of the universe at large, and so morality becomes inconsequential. You can't even change your mind on this topic, because the only change possible is that dictated by initial conditions. If you imagine that you can, you do so because of the causal chain of events that necessitated it.
There's no rationality or irrationality either because these concepts imply, once again, the possibility of errors in a universe that can't err.
You're an atheist? Not your choice. You're a theist? Not your choice. You disagree with this sentiment? Again: que sera, sera.
How can moral realism be defended in a universe where no one is responsible for anything?
I disagree. Determinism doesn't make the concepts of "control" or "causation" meaningless. It makes sense to say that, to a certain degree, you often can control your own attention, while in other circumstances you can't: if there's a really loud sound near you, you are somewhat forced to pay attention to it.
From there you can derive a concept of responsibility, which is used e.g. in law. I know that the book Actual Causality focuses on these ideas (but there might be other books on the same topics that are easier to read or simply better in their exposition).
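For a flavour of the formal machinery that Actual Causality builds on, here is a minimal Python sketch of a structural-equation model with a naive but-for test. The attention/loud-noise scenario and the variable names are made up to echo the example above, and Halpern's actual definitions of causation and responsibility are considerably more refined than this.

```python
# Minimal sketch of the kind of structural-equation reasoning behind
# "Actual Causality": model the situation as equations over variables, then test
# whether flipping a candidate cause changes the outcome (a naive but-for check;
# Halpern's definitions are more refined). Scenario and names are made up.

def attention_on_noise(loud_noise: bool, chose_to_look: bool) -> bool:
    """Structural equation: attention ends up on the noise source if the noise
    is overwhelming or if the person chose to direct attention there."""
    return loud_noise or chose_to_look

def is_but_for_cause(candidate: str, actual: dict) -> bool:
    """Flip only the candidate variable; did the outcome change?"""
    counterfactual = dict(actual, **{candidate: not actual[candidate]})
    return attention_on_noise(**actual) != attention_on_noise(**counterfactual)

# Quiet room, deliberate choice to pay attention: the choice is the
# difference-maker, which is the kind of fact that a notion of responsibility
# can latch onto even in a deterministic model.
actual_world = {"loud_noise": False, "chose_to_look": True}
print(is_but_for_cause("chose_to_look", actual_world))  # True
print(is_but_for_cause("loud_noise", actual_world))     # False
```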
If you're just looking for the arguments, this is what you're looking for:
https://plato.stanford.edu/entries/moral-anti-realism
How do you know that disinterested (not game-theoretic or instrumental) altruism is irrational / doesn't make any sense?
What is "disinterested altruism"? And why do you think it's connected to moral anti-realism?
I can't say I am an expert on realism and antirealism, but I have already spent time on metaethics textbooks and on learning about metaethics in general. With this question I wanted to get an idea of what the main arguments on LW are, and maybe find new ideas I hadn't considered.
What is "disinterested altruism"? And why do you think it's connected to moral anti-realism?
I see a relation with realism. If certain pieces of knowledge about the physical world (how human and animal cognition works) can motivate altruistic behaviour in a class of agents that we would also recognise as unbiased and rational, that would be a form of altruism that is not instrumental and not related to game theory.
If you think nothing is “valuable in itself” / “objectively valuable”, why do you think so?
Value isn't a physical property, even an emergent one.
How do you know that disinterested (not game-theoretic or instrumental) altruism is irrational / doesn’t make any sense?
That's a different question. Rationality is defined in terms of values, but they don't have to be objective values. There can even be facts about how ethics should work, but, in view of the above, they would be facts of a game-theoretic sort, not facts by virtue of correspondence to moral properties. If you want to be altruistic, then the buck stops there, and it makes sense for you, where "makes sense" means instrumental rationality. But altruism isn't a fact about the world that you are compelled to believe by epistemic rationality.
Thanks for your answer, but I am looking for arguments, not just statements or opinions. How do you know that value is not a physical property? What do you mean when you say that altruism is not a consequence of epistemic rationality, and how do you know?
What is the strongest argument you know for antirealism?
From Aella; the external world is a meaningless hypothesis; given a set of experiences and a consistent set of expectations about what form those experiences will take in the future, positing an external world doesn't add any additional information. That is, the only thing that "external world" would add would be an expectation of a particular kind of consistency to those experiences; you can simply assume the consistency, and then the external world adds no additional informational content or predictive capacity.
What is the strongest argument against moral realism?
Just as an external world changes nothing about your expectations of what you will experience, moral realism, the claim that morality exists as a natural feature of the external world, changes nothing about your expectations of what you will experience.
If you think nothing is "valuable in itself" / "objectively valuable", why do you think so?
Consider a proposal to replace all the air around you with something valuable. Consider a proposal to replace some percentage of the air around you with something valuable.
The ideal proposal replaces neither all of the air, nor none of the air. In the limit of all of the air being replaced, the air achieves infinite relative value. In the limit of none of the air being replaced, the air has, under normal circumstances, no value.
Consider the value of a vacuum tube; vacuum, the absence of anything, has particular value in that case.
Which is all to say: value is strictly relative, and it is unfixed. The case of the vacuum tube demonstrates that there are cases where having nothing at all in a given region is more valuable than having anything there. If the vacuum tube is part of a mechanical contraption that is keeping you alive, there is nothing you want in that vacuum tube more than vacuum itself. Thus, in that specific situation, nothing has objective value: the only sense we can make of objective value is by comparison to nothing, and in that particular case nothing is more valuable than the something.
How do you know that disinterested (not game-theoretic or instrumental) altruism is irrational / doesn't make any sense?
Because you've tautologically defined it to be so when you said the altruism is disinterested. If I have no interest in a thing, it makes no sense to behave as if I have an interest in that thing. Any sense in which it would make sense for me to have an interest in a thing, is a claim that I have an interest in that thing.
From Aella; the external world is a meaningless hypothesis; given a set of experiences and a consistent set of expectations about what form those experiences will take in the future, positing an external world doesn’t add any additional information. That is, the only thing that “external world” would add would be an expectation of a particular kind of consistency to those experiences; you can simply assume the consistency, and then the external world adds no additional informational content or predictive capacity.
You can assume inexplicable consistency,...
Other questions you can answer:
What is the strongest argument against moral realism?
If you think nothing is "valuable in itself" / "objectively valuable", why do you think so?
How do you know that disinterested (not game-theoretic or instrumental) altruism is irrational / doesn't make any sense?
I am interested in these arguments because I am trying to guess the behaviour of an AI system that, roughly speaking:
1) knows a lot about the physical world;
2) has some degree of control over its own actions and what goals to pursue—something like the human brain.
(See this if you want more details.)
If you could also write the precise statement about realism/antirealism that you are arguing against/for, that would be great. Thanks!