Roko comments on Complexity of Value ≠ Complexity of Outcome - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
There are a lot of posts here that presuppose some combination of moral anti-realism and value complexity. These views go together well: if value is not fundamental, but dependent on characteristics of humans, then it can derive complexity from this and not suffer due to Occam's Razor.
Another pair of views also go together well: moral realism and value simplicity. Many posts here strongly dismiss these views, effectively assigning them near-zero probability. I want to point out that this is a case of non-experts being very much at odds with expert opinion, and being clearly overconfident. In the PhilPapers survey, for example, 56.3% of philosophers accept or lean towards realism, while only 27.7% accept or lean towards anti-realism.
http://philpapers.org/surveys/results.pl
Given this, and given testimony from people like me at the intersection of the philosophical and LW communities, who can attest that it isn't a case of stupid philosophers supporting realism while all the really smart ones support anti-realism, there is no way the LW community should have anything like the confidence it does on this point.
Moreover, I should point out that most of the realists lean towards naturalism, which allows a form of realism very different from the one that Eliezer critiques. I should also add that within philosophy, the trend is probably not towards anti-realism but towards realism. The high tide of anti-realism was probably in the middle of the 20th century; since then it has lost its shiny newness, and people have come up with good arguments against it (which are never discussed here...).
Even for experts in meta-ethics, I can't see how their confidence could get outside the 30%-70% range given the expert disagreement. For non-experts, I really can't see how one could even reach 50% confidence in anti-realism, much less the kind of 98% confidence that is typically expressed here.
I strongly agree with Roko that something like his strong version is the interesting version. What matters is what range of creatures will come to agree on outcomes; it matters much less what range of creatures think their desires are "right" in some absolute sense, if they don't think that will eventually be reflected in agreement.
In the context of this comment, the goal of FAI can be said to be to constrain the world by "moral facts", just like laws of physics constrain the world by "physical facts". This is the sense in which I mean "FAI=Physical Laws 2.0".
I'm describing the sense of post-FAI world.
Roko, you make a good point that it can be quite murky just what realism and anti-realism mean (in ethics or in anything else). However, I don't agree with what you write after that. Your Strong Moral Realism is a claim outside the domain of philosophy: it is an empirical claim in the domain of exo-biology or exo-sociology or something like that. No matter what the truth of a meta-ethical claim, smart entities might refuse to believe it (the same goes for other philosophical claims, or for mathematical claims).
Pick your favourite philosophical claim. I'm sure there are very smart possible entities that don't believe this and very smart ones that do. There are probably also very smart entities without the concepts needed to consider it.
I understand why you introduced Strong Moral Realism: you want to be able to see why the truth of realism would matter and so you came up with truth conditions. However, reducing a philosophical claim to an empirical one never quite captures it.
For what it's worth, I think that the empirical claim Strong Moral Realism is false, but I wouldn't be surprised if there were considerable agreement among radically different entities on how to transform the world.