CarlShulman comments on Complexity of Value ≠ Complexity of Outcome - Less Wrong
There are a lot of posts here that presuppose some combination of moral anti-realism and value complexity. These views go together well: if value is not fundamental, but dependent on characteristics of humans, then it can derive its complexity from them without being penalized by Occam's Razor.
There is another pair of views that go together well: moral realism and value simplicity. Many posts here strongly dismiss these views, effectively allocating near-zero probability to them. I want to point out that this is a case of non-experts being very much at odds with expert opinion, and being clearly overconfident. In the PhilPapers survey, for example, 56.3% of philosophers accept or lean towards moral realism, while only 27.7% accept or lean towards anti-realism.
http://philpapers.org/surveys/results.pl
Given this, and given comments from people like me in the intersection of the philosophical and LW communities who can point out that it isn't a case of stupid philosophers supporting realism and all the really smart ones supporting anti-realism, there is no way that the LW community should have anything like the confidence that it does on this point.
Moreover, I should point out that most of the realists lean towards naturalism, which allows a form of realism that is very different to the one that Eliezer critiques. I should also add that within philosophy, the trend is probably not towards anti-realism, but towards realism. The high tide of anti-realism was probably in the middle of the 20th Century, and since then it has lost its shiny newness and people have come up with good arguments against it (which are never discussed here...).
Even for experts in meta-ethics, I can't see how their confidence can get outside the 30%-70% range given the expert disagreement. For non-experts, I really can't see how one could even get to 50% confidence in anti-realism, much less the kind of 98% confidence that is typically expressed here.
Among target faculty listing meta-ethics as their area of study, moral realism's lead is much smaller: 42.5% for moral realism versus 38.2% against.
Looking further through the PhilPapers data, a big chunk of the belief in moral realism seems to be coupled with theism, while anti-realism is coupled with atheism and knowledge of science. The more a field is taught at Catholic or other religious colleges (medieval philosophy, bread-and-butter courses like epistemology and logic), the more moral realism; philosophers of science go the other way. Philosophers of religion are 87% moral realist, while philosophers of biology are 55% anti-realist.
In general, only 61% of respondents "accept" rather than merely lean towards atheism, and a quarter don't even lean towards atheism. Among meta-ethics specialists, 70% accept atheism, which suggests that atheism and subject knowledge both predict moral anti-realism. If we restricted ourselves to the 70% of meta-ethics specialists who also accept atheism, I would bet at least 3:1 odds that moral anti-realism comes out on top.
Since the PhilPapers team will be publishing correlations between questions, such a bet should be susceptible to objective adjudication within a reasonable period of time.
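For concreteness, here's a back-of-the-envelope sketch of why conditioning on atheism could flip the plurality. The marginal figures are the ones quoted above; the strength of the theism/realism correlation is an illustrative assumption, not PhilPapers data.

```python
# Hypothetical illustration: marginal splits (42.5% realist, 38.2% anti-realist,
# 70% atheist among meta-ethics specialists) are the figures quoted in this
# thread; how strongly theism and realism co-occur is an assumption.

n = 1000                          # hypothetical pool of meta-ethics specialists
realists = 0.425 * n              # 425
anti_realists = 0.382 * n         # 382
non_atheists = 0.30 * n           # 300 (the 30% who don't accept atheism)

# Assumption: the large majority (say 90%) of non-atheist specialists are
# realists, and essentially all anti-realist specialists are atheists.
realists_who_are_atheists = realists - 0.9 * non_atheists      # 425 - 270 = 155
anti_realists_who_are_atheists = anti_realists                 # 382

print(realists_who_are_atheists, anti_realists_who_are_atheists)
# -> 155.0 382.0: among atheist specialists, anti-realism comes out well ahead,
#    which is the direction of the proposed 3:1 bet.
```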
A similar pattern shows up for physicalism.
In general, those interquestion correlations should help pinpoint any correct contrarian cluster.
This is why I put more weight on Toby's personal position than on the majority expert position. As far as I know, Toby is in the same contrarian cluster as me, yet he seems to give much more weight to moral realism (and presumably not the Yudkowskian kind either) than I do. Like ciphergoth, I wish he would tell us which arguments in favor of realism, or against anti-realism, he finds persuasive.
That seems more likely if some people put visible effort into wanting to learn more about moral realism, or read and presented some of the arguments charitably to LW.
Thanks for looking that up, Carl -- I didn't know they had the breakdowns. This is the more relevant result for this discussion, but it doesn't change my point much. Unless it was 80% or so in favour of anti-realism, I think holding something like 95% credence in anti-realism is far too high for non-experts.
Atheism doesn't get 80% support among philosophers either, and most philosophers of religion reject it because of a selection effect: few wish to study what they believe to be a non-subject (just as normative and applied ethicists are more likely to reject anti-realism).
You are correct that it is reasonable to assign high confidence to atheism even though it doesn't have 80% support, but we must be very careful here. Atheism is presumably the strongest example of such a claim here on Less Wrong (i.e. one for which we can tell a good story about why so many intelligent people would disagree, and so hold high confidence in the face of that disagreement). However, this does not mean that we can say any other given view is just like atheism in this respect and thus hold beliefs in the face of expert disagreement; that would be far too convenient.
Strong agreement about not overgeneralizing. It does appear, however, that libertarianism about free will, non-physicalism about the mind, and a number of sorts of moral realism form a cluster, sharing the feature of reifying certain concepts in our cognitive algorithms even when they can be 'explained away.' Maybe we can discuss this tomorrow night.
Of course not; the substance of one's reasons for disagreeing matters greatly. In this case, I suspect there's probably a significant amount of correlation/non-independence between the reasons for believing atheism and believing something like moral non-realism.
One thing we should take away from cases like atheism is that surveys probably shouldn't be interpreted naively, but rather as somewhat noisy information. I think my own heuristic (on binary questions where I already have a strong opinion) is basically to look at which side of 50% my position falls on: if the majority agrees with me (or, say, the average confidence in my position is over 50%), I tend to regard that as (more) evidence in my favor, with the strength increasing as the percentage increases.
(This, I think, would be part of how I would answer Yvain.)
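A minimal sketch of that heuristic, with made-up likelihood ratios standing in for how much weight the survey signal would actually deserve (it captures only the binary majority-agrees/disagrees version, not the strength-increases-with-percentage part):

```python
# Treat "the expert majority is on my side / against me" as a noisy binary
# signal and update prior odds accordingly. lr_agree and lr_disagree are
# illustrative assumptions, not estimates from any data.

def update_on_survey(prior_prob, majority_agrees, lr_agree=2.0, lr_disagree=0.5):
    """Return posterior probability after seeing which side the majority takes."""
    prior_odds = prior_prob / (1.0 - prior_prob)
    likelihood_ratio = lr_agree if majority_agrees else lr_disagree
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# E.g. someone at 95% credence in anti-realism who learns that the overall
# philosopher majority leans the other way should move down somewhat:
print(update_on_survey(0.95, majority_agrees=False))   # ~0.90
```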
I think the arguments you're developing here go a long way towards answering Toby's point, but what safeguards can we use to ensure they don't become a generalized anti-expert defence?
The prerequisite for this heuristic is coming to a conclusion with near-certainty at an amateur level. The safeguard has to be a general ability to avoid acquiring that much unjustified confidence in the first place.
Are you proposing a safeguard here or setting out what the safeguard has to achieve?
I'm pointing out that there is already a generally applicable set of safeguards, adequate or not, that covers this case in particular. That is, this heuristic doesn't automatically lead us astray.
I don't think I've understood you properly; it reads like you're saying that we can be confident in rejecting expert advice if we've already reached a contrary position with high confidence. That doesn't sound Bayesian. I suspect the error is mine, but I'd appreciate your help in finding and fixing it!
EDIT: I [not Vladimir] would say that if we have one position that we can be confident in (atheism) we can use it as an indicator of expert quality, and pay more attention to those experts on other issues (e.g. moral realism as philosophers define it).
And with respect to the selection effect among philosophers of religion, there's overwhelming direct evidence in the form of the Catholic Church's push on this front.
I agree with this interpretation.
Zack is making basically the same point here.
(This discussion is about the meta-level mechanism for agreement, where you accept a conclusion; experts might well have persuasive arguments that reverse one's confidence.)
Perhaps we shouldn't look for professional consensus on things we accept with near-certainty, because things that amateurs can correctly accept with near-certainty will not be professionally studied, except by people who are systematically confused. Instead, we should ask the non-professional opinion of people who are in the best position to know about the subject but don't study it professionally.