
Is my view contrarian?

Post author: lukeprog, 11 March 2014 05:42PM (22 points)

Previously: Contrarian Excuses, The Correct Contrarian Cluster, What is bunk?, Common Sense as a Prior, Trusting Expert Consensus, Prefer Contrarian Questions.

Robin Hanson once wrote:

On average, contrarian views are less accurate than standard views. Honest contrarians should admit this, that neutral outsiders should assign most contrarian views a lower probability than standard views, though perhaps a high enough probability to warrant further investigation. Honest contrarians who expect reasonable outsiders to give their contrarian view more than normal credence should point to strong outside indicators that correlate enough with contrarians tending more to be right.

I tend to think through the issue in three stages:

  1. When should I consider myself to be holding a contrarian[1] view? What is the relevant expert community?
  2. If I seem to hold a contrarian view, when do I have enough reason to think I’m correct?
  3. If I seem to hold a correct contrarian view, what can I do to give other people good reasons to accept my view, or at least to take it seriously enough to examine it at length?

I don’t yet feel that I have “answers” to these questions, but in this post (and hopefully some future posts) I’d like to organize some of what has been said before,[2] and push things a bit further along, in the hope that further discussion and inquiry will contribute toward significant progress in social epistemology.[3] Basically, I hope to say a bunch of obvious things, in a relatively well-organized fashion, so that less obvious things can be said from there.[4]

In this post, I’ll just address stage 1. Hopefully I’ll have time to revisit stages 2 and 3 in future posts.

 

Is my view contrarian?

World model differences vs. value differences

Is my effective altruism a contrarian view? It seems to be more of a contrarian value judgment than a contrarian world model,[5] and by “contrarian view” I tend to mean “contrarian world model.” Some apparently contrarian views are probably actually contrarian values.

 

Expert consensus

Is my atheism a contrarian view? It’s definitely a world model, not a value judgment, and only 2% of people are atheists.

But what’s the relevant expert population, here? Suppose it’s “academics who specialize in the arguments and evidence concerning whether a god or gods exist.” If so, then the expert population is probably dominated by academic theologians and religious philosophers, and my atheism is a contrarian view.

We need some heuristics for evaluating the soundness of the academic consensus in different fields.[6]

For example, we should consider the selection effects operating on communities of experts. If someone doesn’t believe in God, they’re unlikely to spend their career studying arcane arguments for and against God’s existence. So most people who specialize in this topic are theists, but nearly all of them were theists before they knew the arguments.

Perhaps instead the relevant expert community is “scholars who study the fundamental nature of the universe” — philosophers and physicists, say? They’re mostly atheists.[7] This is starting to get pretty ad hoc, but maybe that’s unavoidable.

What about my view that the overall long-term impact of AGI will be, most likely, extremely bad? A recent survey of the top 100 authors in artificial intelligence (by citation index)[8] suggests that my view is somewhat out of sync with the views of those researchers.[9] But is that the relevant expert population? My impression is that AI experts know a lot about contemporary AI methods, especially within their subfield, but usually haven’t thought much about, or read much about, long-term AI impacts.

Instead, perhaps I’d need to survey “AGI impact experts” to tell whether my view is contrarian. But who is that, exactly? There’s no standard credential.

Moreover, the most plausible candidates around today for “AGI impact experts” are — like the “experts” of many other fields — mere “scholastic experts,” in that they[10] know a lot about the arguments and evidence typically brought to bear on questions of long-term AI outcomes.[11] They generally are not experts in the sense of “Reliably superior performance on representative tasks” — they don’t have uniquely good track records on predicting long-term AI outcomes, for example. As far as I know, they don’t even have uniquely good track records on predicting short-term geopolitical or sci-tech outcomes — e.g. they aren’t among the “super forecasters” discovered in IARPA’s forecasting tournaments.

Furthermore, we might start to worry about selection effects, again. E.g. if we ask AGI experts when they think AGI will be built, they may be overly optimistic about the timeline: after all, if they didn’t think AGI was feasible soon, they probably wouldn’t be focusing their careers on it.

Perhaps we can salvage this approach for determining whether one has a contrarian view, but for now, let’s consider another proposal.

 

Mildly extrapolated elite opinion

Nick Beckstead instead suggests that, at least as a strong prior, one should believe what one thinks “a broad coalition of trustworthy people would believe if they were trying to have accurate views and they had access to [one’s own] evidence.”[12] Below, I’ll propose a modification of Beckstead’s approach which aims to address the “Is my view contrarian?” question, and I’ll call it the “mildly extrapolated elite opinion” (MEEO) method for determining the relevant expert population.[13]

First: which people are “trustworthy”? With Beckstead, I favor “giving more weight to the opinions of people who can be shown to be trustworthy by clear indicators that many people would accept, rather than people that seem trustworthy to you personally.” (This guideline aims to avoid parochialism and self-serving cognitive biases.)

What are some “clear indicators that many people would accept”? Beckstead suggests:

IQ, business success, academic success, generally respected scientific or other intellectual achievements, wide acceptance as an intellectual authority by certain groups of people, or success in any area where there is intense competition and success is a function of ability to make accurate predictions and good decisions…

Of course, trustworthiness can also be domain-specific. Very often, elite common sense would recommend deferring to the opinions of experts (e.g., listening to what physicists say about physics, what biologists say about biology, and what doctors say about medicine). In other cases, elite common sense may give partial weight to what putative experts say without accepting it all (e.g. economics and psychology). In other cases, they may give less weight to what putative experts say (e.g. sociology and philosophy).

Hence MEEO outsources the challenge of evaluating academic consensus in different fields to the “generally trustworthy people.” But in doing so, it raises several new challenges. How do we determine which people are trustworthy? How do we “mildly extrapolate” their opinions? How do we weight those mildly extrapolated opinions in combination?

This approach might also be promising, or it might be even harder to use than the “expert consensus” method.

 

My approach

In practice, I tend to do something like this:

  • To determine whether my view is contrarian, I ask whether there’s a fairly obvious, relatively trustworthy expert population on the issue. If there is, I try to figure out what their consensus on the matter is. If it’s different than my view, I conclude I have a contrarian view.
  • If there isn’t an obvious trustworthy expert population on the issue from which to extract a consensus view, then I basically give up on step 1 (“Is my view contrarian?”) and just move to the model combination in step 2 (see below), retaining pretty large uncertainty about how contrarian my view might be.
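To make the “model combination” in step 2 a bit more concrete, here is a toy sketch — my own illustration, not anything specified in this post, and all the probabilities and weights are made up. It averages an inside-view credence with an estimated expert-consensus credence in log-odds space:

```python
import math

def logit(p):
    """Convert a probability to log-odds."""
    return math.log(p / (1 - p))

def sigmoid(x):
    """Convert log-odds back to a probability."""
    return 1 / (1 + math.exp(-x))

def combine(p_inside, p_experts, w_experts):
    """Weighted average of two credences in log-odds space.

    w_experts in [0, 1] is the weight given to the expert consensus;
    the inside view gets the remainder.
    """
    x = (1 - w_experts) * logit(p_inside) + w_experts * logit(p_experts)
    return sigmoid(x)

# Made-up numbers: inside view says 0.8, my best guess at the expert
# consensus is 0.3, and the experts get 60% of the weight.
p = combine(0.8, 0.3, 0.6)  # roughly 0.51
```

Averaging in log-odds rather than probability space keeps the result well-behaved near 0 and 1; the hard part, of course, is choosing `w_experts`, which is exactly the judgment call at issue here.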


When do I have good reason to think I’m correct?

Suppose I conclude I have a contrarian view, as I plausibly have about long-term AGI outcomes,[14] and as I might have about the technological feasibility of preserving myself via cryonics.[15] How much evidence do I need to conclude that my view is justified despite the informed disagreement of others?

I’ll try to tackle that question in a future post. Not surprisingly, my approach is a kind of model combination and adjustment.

 

 


  1. I don’t have a concise definition for what counts as a “contrarian view.” In any case, I don’t think that searching for an exact definition of “contrarian view” is what matters. In an email conversation with me, Holden Karnofsky concurred, making the point this way: “I agree with you that the idea of ‘contrarianism’ is tricky to define. I think things get a bit easier when you start looking for patterns that should worry you rather than trying to Platonically define contrarianism… I find ‘Most smart people think I’m bonkers about X’ and ‘Most people who have studied X more than I have plus seem to generally think like I do think I’m wrong about X’ both worrying; I find ‘Most smart people think I’m wrong about X’ and ‘Most people who spend their lives studying X within a system that seems to be clearly dysfunctional and to have a bad track record think I’m bonkers about X’ to be less worrying.”  ↩

  2. For a diverse set of perspectives on the social epistemology of disagreement and contrarianism not influenced (as far as I know) by the Overcoming Bias and Less Wrong conversations about the topic, see Christensen (2009); Ericsson et al. (2006); Kuchar (forthcoming); Miller (2013); Gelman (2009); Martin & Richards (1995); Schwed & Bearman (2010); Intemann & de Melo-Martin (2013). Also see Wikipedia’s article on scientific consensus.  ↩

  3. I suppose I should mention that my entire inquiry here is, à la Goldman (1998), premised on the assumptions that (1) the point of epistemology is the pursuit of correspondence-theory truth, and (2) the point of social epistemology is to evaluate which social institutions and practices have instrumental value for producing true or well-calibrated beliefs.  ↩

  4. I borrow this line from Chalmers (2014): “For much of the paper I am largely saying the obvious, but sometimes the obvious is worth saying so that less obvious things can be said from there.”  ↩

  5. Holden Karnofsky seems to agree: “I think effective altruism falls somewhere on the spectrum between ‘contrarian view’ and ‘unusual taste.’ My commitment to effective altruism is probably better characterized as ‘wanting/choosing to be an effective altruist’ than as ‘believing that effective altruism is correct.’”  ↩

  6. Without such heuristics, we can also rather quickly arrive at contradictions. For example, the majority of scholars who specialize in Allah’s existence believe that Allah is the One True God, and the majority of scholars who specialize in Yahweh’s existence believe that Yahweh is the One True God. Consistency isn’t everything, but contradictions like this should still be a warning sign.  ↩

  7. According to the PhilPapers Surveys, 72.8% of philosophers are atheists, 14.6% are theists, and 12.6% categorized themselves as “other.” If we look only at metaphysicians, atheism remains dominant at 73.7%. If we look only at analytic philosophers, we again see atheism at 76.3%. As for physicists: Larson & Witham (1997) found that 77.9% of physicists and astronomers are disbelievers, and Pew Research Center (2009) found that 71% of physicists and astronomers did not believe in a god.  ↩

  8. Müller & Bostrom (forthcoming). “Future Progress in Artificial Intelligence: A Poll Among Experts.”  ↩

  9. But, this is unclear. First, I haven’t read the forthcoming paper, so I don’t yet have the full results of the survey, along with all its important caveats. Second, distributions of expert opinion can vary widely between polls. For example, Schlosshauer et al. (2013) reports the results of a poll given to participants in a 2011 quantum foundations conference (mostly physicists). When asked “When will we have a working and useful quantum computer?”, 9% said “within 10 years,” 42% said “10–25 years,” 30% said “25–50 years,” 0% said “50–100 years,” and 15% said “never.” But when the exact same questions were asked of participants at another quantum foundations conference just two years later, Norsen & Nelson (2013) report, the distribution of opinion was substantially different: 9% said “within 10 years,” 22% said “10–25 years,” 20% said “25–50 years,” 21% said “50–100 years,” and 12% said “never.”  ↩

  10. I say “they” in this paragraph, but I consider myself to be a plausible candidate for an “AGI impact expert,” in that I’m unusually familiar with the arguments and evidence typically brought to bear on questions of long-term AI outcomes. I also don’t have a uniquely good track record on predicting long-term AI outcomes, nor am I among the discovered “super forecasters.” I haven’t participated in IARPA’s forecasting tournaments myself because it would just be too time consuming. I would, however, very much like to see these super forecasters grouped into teams and tasked with forecasting longer-term outcomes, so that we can begin to gather scientific data on which psychological and computational methods result in the best predictive outcomes when considering long-term questions. Given how long it takes to acquire these data, we should start as soon as possible.  ↩

  11. Weiss & Shanteau (2012) would call them “privileged experts.”  ↩

  12. Beckstead’s “elite common sense” prior and my “mildly extrapolated elite opinion” method are epistemic notions that involve some kind of idealization or extrapolation of opinion. One earlier such proposal in social epistemology was Habermas’ “ideal speech situation,” a situation of unlimited discussion between free and equal humans. See Habermas’ “Wahrheitstheorien” in Schulz & Fahrenbach (1973) or, for an English description, Geuss (1981), pp. 65–66. See also the discussion in Tucker (2003), pp. 502–504.  ↩

  13. Beckstead calls his method the “elite common sense” prior. I’ve named my method differently for two reasons. First, I want to distinguish MEEO from Beckstead’s prior, since I’m using the method for a slightly different purpose. Second, I think “elite common sense” is a confusing term even for Beckstead’s prior, since there’s some extrapolation of views going on. But also, it’s only a “mild” extrapolation — e.g. we aren’t asking what elites would think if they knew everything, or if they could rewrite their cognitive software for better reasoning accuracy.  ↩

  14. My rough impression is that among the people who seem to have thought long and hard about AGI outcomes, and seem to me to exhibit fairly good epistemic practices on most issues, my view on AGI outcomes is still an outlier in its pessimism about the likelihood of desirable outcomes. But it’s hard to tell: there haven’t been systematic surveys of the important-to-me experts on the issue. I also wonder whether my views about long-term AGI outcomes are more a matter of seriously tackling a contrarian question rather than being a matter of having a particularly contrarian view. On this latter point, see this Facebook discussion.  ↩

  15. I haven’t seen a poll of cryobiologists on the likely future technological feasibility of cryonics. Even if there were such polls, I’d wonder whether cryobiologists also had the relevant philosophical and neuroscientific expertise. I should mention that I’m not personally signed up for cryonics, for these reasons.  ↩

Comments (94)

Comment author: elharo 11 March 2014 11:48:18PM 14 points

Not only

If someone doesn’t believe in God, they’re unlikely to spend their career studying arcane arguments for and against God’s existence. So most people who specialize in this topic are theists, but nearly all of them were theists before they knew the arguments.

but also

If someone doesn’t believe in UFAI, they’re unlikely to spend their career studying arcane arguments about AGI impact. So most people who specialize in this topic are UFAI believers, but nearly all of them were UFAI believers before they knew the arguments.

Thus I do not think you should rule out the opinions of the large community of AI experts who do not specialize in AGI impact.

Comment author: brainoil 31 March 2014 02:52:28AM 1 point

This is a false analogy. You can be a believer in God when you're five years old and haven't read any relevant arguments due to childhood indoctrination that happens in every home. You might even believe in income redistribution when you're five years old if your parents tell you that it's the right thing to do. I'm pretty sure nobody teaches their children about UFAI that way. You'd have to know the arguments for or against UFAI to even know what that means.

Comment author: RichardKennaway 31 March 2014 07:29:46PM 1 point

You'd have to know the arguments for or against UFAI to even know what that means.

You just have to watch the Terminator movies, or the Matrix, or read almost any science fiction with robots in it. The UFness of AI is a default assumption in popular culture.

Comment author: Lumifer 31 March 2014 07:51:02PM 2 points

The UFness of AI is a default assumption in popular culture.

This is true. On the other hand, the default is for the AI to be both unfriendly and stupid. Notice, for example, the complete inability of the Matrix overlords to make their agents hit anything they're shooting at :-D

Comment author: christopherj 01 April 2014 04:18:35AM 1 point

It's more complicated than that. We use (relatively incompetent) AIs all over the place, and there is no public outcry, even as we develop combat AI for our UAVs and ground based combat robots, most likely because everyone thinks of AIs as merely idiot-savant servants or computer programs. Few people think much about the distinction between specialized AIs and general AI, probably because we don't actually have any general AI, though no doubt they understand that the simpler AI "can't become self-aware".

People dangerously anthropomorphize AI, expecting it by default to assign huge values to human life (huge negative values in the case of "rogue AI"), with a common failure mode of immediate and incompetent homicidal rampage while being plagued by various human failings. Even general AIs are viewed as being inferior to humans in several aspects.

Overall, there is not a general awareness that a non-friendly general AI might cause a total extinction of human life due to apathy.

Comment author: ThrustVectoring 11 March 2014 07:20:59PM 5 points

Standard beliefs are only more likely to be correct when the cause of their standard-ness is causally linked to its correctness.

That takes care of things like, say, pro-American patriotism and pro-Christian religious fervor. Specifically, these ideas are standard not because contrary views are wrong, but because expressing contrary views makes you lose status in the eyes of a powerful in-group. Furthermore, it does not exclude beliefs like "classical physics is an almost entirely accurate description of the world at a macro scale" - inaccurate models would contradict observations of the world and get replaced with more accurate ones.

Granted, standard opinions often are standard because they are right. But, the more you can separate out the standard beliefs into ones with stronger and weaker links to correctness, the more this effect shows up in the former and not the latter.

To determine whether my view is contrarian, I ask whether there’s a fairly obvious, relatively trustworthy expert population on the issue.

I think that's on the same page as my initial thoughts on the matter. At least, it is a useful heuristic that applies more to correct standard beliefs than incorrect ones.

Comment author: Pablo_Stafforini 14 March 2014 10:57:34AM 0 points

I'm sympathetic to your position. Note, however, that the causal origin of a belief is itself a question about which there can be disagreement. So the same sort of considerations that make you give epistemic weight to majoritarian opinion should sometimes make you revise your decision to dismiss some of those majorities on the grounds that their beliefs do not reliably track the truth. For example, do most people agree with your causal explanation of pro-Christian religious fervor? If not, that may itself give you a reason to distrust those explanations, and consequently increase the evidential value you give to the beliefs of Christians. Of course, you can try to debunk the beliefs of the majority of people who disagree with your preferred causal explanation, but that just shifts the dispute to another level, rather than resolving it conclusively. (I'm not saying that, in the end, you can't be justified in dismissing the opinions of some people; rather, I'm saying that doing this may be trickier than it might at first appear. And, for the record, I do think that pro-Christian religious fervor is crazy.)

Comment author: fezziwig 11 March 2014 06:04:31PM 17 points

I've not read all your references yet, so perhaps you can just give me a link: why is it useful to classify your beliefs as contrarian or not? If you already know that e.g. most philosophers of religion believe in God but most physicists do not, then it seems like you already know enough to start drawing useful conclusions about your own correctness.

In other words, I guess I don't see how the "contrarianism" concept, as you've defined it, helps you believe only true things. It seems...incidental.

Comment author: katydee 11 March 2014 08:54:17PM 4 points

One reason to classify beliefs as contrarian is that it helps you discuss them more effectively. My presentation of an idea that I expect will seem contrarian is going to look very different from my presentation of an idea that I expect will seem mainstream, and it's useful to know what will and won't be surprising or jarring to people.

This seems most relevant to stage 3-- if you hold what you believe to be a correct contrarian view (as opposed to a correct mainstream view), this has important ramifications as to how to proceed in conversation. Thus, knowing which of your views are contrarian has instrumental value.

Comment author: lukeprog 11 March 2014 06:36:17PM 1 point

I agree: see footnote 1.

The 'My Approach' summary was supposed to make clear that in the end it always comes down to a "model combination and adjustment" anyway, but maybe I didn't make that clear enough.

Comment author: fezziwig 12 March 2014 01:41:47PM 1 point

Mm, fair enough. Maybe I'm just getting distracted by the word "contrarian".

Would another reasonable title for this sequence be "How to Correctly Update on Expert Opinion", or would that miss some nuance?

Comment author: Error 11 March 2014 09:33:38PM 4 points

Some apparently contrarian views are probably actually contrarian values.

It doesn't help that a lot of people conflate beliefs and values. That issue has come up enough that I now almost instinctively respond to "how in hell can you believe that??!" by double-checking whether we're even talking about the same thing.

It also does not help that "believing" and "believing in" are syntactically similar but have radically different meanings.

Is my atheism a contrarian view? It’s definitely a world model, not a value judgment

Is it, though? I suspect that to most people discussing the subject, it's actually a disguised value judgement.

Comment author: Protagoras 11 March 2014 09:45:19PM 1 point

Most people discussing the subject from which side? If you think those who profess atheism are mostly expressing disguised value judgments, I'd like to know which ones you suspect they're expressing; the theory doesn't sound very plausible to me. I do grant that I find it more plausible that religious claims are mostly disguised value judgments, no doubt including religious objections to atheism, but I don't think it's the case that atheists are commonly taking the other side of those value questions; rather, I'm inclined to take them at face value as rejecting the factual disguise. Do you have reasons for thinking otherwise? Or did you just mean to comment on the value judgment aspect of the religious majority side of the dispute?

Comment author: christopherj 29 March 2014 06:57:59PM 0 points

I'd say atheism correlates strongly with various value judgements. For example, almost everyone who doesn't believe in a god, also doesn't approve of that god's morality. Few people believe that a given god has excellent morals but does not exist. And many people lose their religion when they get upset at their god/gods. Part of this is likely to be because said god is used as the source or justification for a morality, so that rejecting one will result in rejecting the other. I believe there was also research indicating that whatever a person believes, they're likely to believe their god believes the same.

Comment author: jimrandomh 12 March 2014 05:22:23PM 2 points

If relevant experts seem to disagree with a position, this is evidence against it. But this evidence is easily screened off, if:

  • The standard position is fully explained by a known bias
  • The position is new enough that newness alone explains why it's not widespread (eg nutrition)
  • The nonstandard position is more complicated than the standard one, and exceeds the typical expert's complexity limit (AGI)
  • The nonstandard position would destroy concepts in which experts have substantial sunk costs (Bayesianism)

Comment author: V_V 13 March 2014 01:50:54PM 2 points

The position is new enough that newness alone explains why it's not widespread (eg nutrition)

Nutrition what?

The nonstandard position is more complicated than the standard one, and exceeds the typical expert's complexity limit (AGI)

And the non-experts arguing the non-standard position are supposed to be smarter than typical experts?

The nonstandard position would destroy concepts in which experts have substantial sunk costs (Bayesianism)

What do you mean by Bayesianism? Bayesian statistics or Bayesian epistemology? How would they destroy concepts in which experts have substantial sunk costs?

Comment author: jimrandomh 13 March 2014 06:55:43PM 0 points

Nutrition what?

This was shorthand for "I hold several contrarian beliefs about nutrition which seem to fit this pattern but don't really belong in this comment".

And the non-experts arguing the non-standard position are supposed to be smarter than typical experts?

Sometimes. To make a good decision about whether to copy a contrarian position, you generally have to either be smarter (but perhaps less domain-knowledgeable) than typical experts, or else have a good estimate of some other contrarian's intelligence and rationality and judge them to be high. (If you can't do either of these things, then you have little hope of choosing correct contrarian beliefs.)

What do you mean by Bayesianism? Bayesian statistics or Bayesian epistemology? How would them destroy concepts in which experts have substantial sunk costs?

I mean Bayesian statistical methods, as opposed to Frequentist ones. (This isn't a great example because there's not actually such a clean divide, and the topic is tainted by prior use as a Less Wrong shibboleth. Luke's original example - theology - illustrates the point pretty well).

Comment author: Lumifer 13 March 2014 07:01:05PM 1 point

If you can't do either of these things, then you have little hope of choosing correct contrarian beliefs.

Notably, even if you can't do either of these things, sometimes you can rationally reject the mainstream position if you can conclude that the incentive structure for the "typical experts" makes them hopelessly biased in a particular direction.

Comment author: Nick_Tarleton 14 March 2014 02:21:45AM 2 points

This shouldn't lead to rejection of the mainstream position, exactly, but rejection of the evidential value of mainstream belief, and reversion to your prior belief / agnosticism about the object-level question.

Comment author: elharo 13 March 2014 12:17:28PM 1 point

I doubt that in the particular case of AGI, the nonstandard position's complexity exceeds the typical AI expert's complexity limit. I know a few AI experts, and they can handle extreme complexity. For that matter, I think AGI is well within the complexity limits of general computer science/mathematics/physics/science experts and at least some social science experts (e.g. Robin Hanson).

In fact, the number of such experts that have looked seriously at AGI, and come to different conclusions, strongly suggests to me that the jury is still out on this one. The answers, whatever they are, are not obvious or self-evident.

Repeat the same but s/AGI/Bayesianism. Bayesianism is routinely and quickly adopted within the community of mathematicians/scientists/software developers when it is useful and produces better answers. The conflict between Bayesianism and frequentism that is sometimes alluded to here is simply not an issue in every day practical work.

Comment author: jpaulson 13 March 2014 05:04:43AM 2 points

Doesn't "contrarian" just mean "disagrees with the majority"? Any further logic-chopping seems pointless and defensive.

The fact that 98% of people are theists is evidence against atheism. I'm perfectly happy to admit this. I think there is other, stronger evidence for atheism, but the contrarian heuristic definitely argues for belief in God.
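That heuristic can be made concrete with a toy odds-form Bayes update; the likelihood ratio below is entirely made up for illustration:

```python
def update(prior, likelihood_ratio):
    """Bayes update in odds form: posterior odds = prior odds * LR."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Hypothetical: suppose near-universal theism is 3x as likely in worlds
# where theism is true as in worlds where it is false. Starting from a
# 50% prior, observing it shifts us to 75%.
posterior = update(0.5, 3.0)  # 0.75
```

The other, stronger evidence I mentioned then enters the same way, as further likelihood ratios multiplied into the odds.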

Similarly, believing that cryonics is a good investment is obviously contrarian. AGI is harder to say; most people probably haven't thought about it.

It seems like the question you're really trying to answer is "what is a good prior belief for things I am not an expert on?"

(I'm sorry about arguing over terminology, which is usually stupid, but this case seems egregious to me).

Comment author: elharo 13 March 2014 12:06:28PM 2 points

I wonder. Perhaps that 98% of people are theists is better evidence that theism is useful than that it's correct. In fact, I think the 98%, or even an 80% figure, is pretty damn strong evidence that theism is useful; i.e. instrumentally rational. It's basic microeconomics: if people didn't derive value from religion, they'd stop doing it. To cite just one example, lukeprog has written previously about joining Scientology because they had the best Toastmasters group. There are many other benefits to be had by professing theism.

However I'm not sure that this strong majority belief is particularly strong evidence that theism is correct, or epistemically rational. In particular if it were epistemically rational, I'd expect religions would be more similar than they are. To say that 98% of people believe in God, requires that one accept Allah, the Holy Trinity, and Hanuman as instances of "God". However, adherents of various religions routinely claim that others are not worshipping God at all (though admittedly this is less common than it used to be). Is there some common core nature of "God" that most theists believe in? Possibly, but it's a lot hazier. I've even heard some professed "theists" define God in such a way that it's no more than the physical universe, or even one small group of actual, currently living, not-believed-to-be-supernatural people. (This happens on occasion in Alcoholics Anonymous, for members who don't like accepting the "Higher Power".)

At the least, majority beliefs and practice are stronger evidence of instrumental rationality than epistemic rationality.

Are there other cases where we have evidence that epistemic and instrumental rationality diverge? Perhaps the various instances of Illusory Superiority; for instance where the vast majority of people think they're an above average driver, or the Dunning–Kruger effect. Such beliefs may persist in the face of reality because they're useful to people who hold these beliefs.

Comment author: Articulator 25 March 2014 02:25:31AM 0 points [-]

I don't think it is so much that it suggests Theism is useful - rather that Theism is a concept which tends to propagate itself effectively, of which usefulness is one example. Effectively brainwashing participants at an early age is another. There are almost certainly several factors, only some of which are good.

Comment author: JQuinton 14 March 2014 07:38:16PM 0 points [-]

On the face of it, I also think that the fact that the majority believes something is evidence for that something. But then what about the fact that consensus belief is also a function of time period?

How many times over the course of human history has the consensus of average people been wrong about some fact about the universe? The consensus about, say, what causes disease back in 1400 BCE is different from the consensus about the same question today. What's to say that this same consensus won't point to something different 3400 years in the future?

It seems that looking at how many times the consensus has been wrong over the course of human history is actually evidence that "consensus" -- without qualification (e.g. consensus of doctors, etc.) -- is more likely to be wrong than right; the consensus seems to be weak evidence against said position.
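Put in likelihood-ratio terms (all the numbers below are made up purely for illustration), the question is how much more often a bare consensus forms around true claims than around false ones:

```python
def posterior_probability(prior, p_consensus_if_true, p_consensus_if_false):
    """Bayesian update on the observation 'a consensus exists', in odds form."""
    prior_odds = prior / (1 - prior)
    likelihood_ratio = p_consensus_if_true / p_consensus_if_false
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Illustrative assumption: consensus forms 90% of the time on true claims,
# but also 60% of the time on false ones (miasma theory, geocentrism, ...).
p = posterior_probability(prior=0.5,
                          p_consensus_if_true=0.9,
                          p_consensus_if_false=0.6)
print(round(p, 2))  # 0.6
```

On these assumed numbers, a bare consensus moves a 50% prior only to 60%; whether consensus counts as evidence for or against a claim depends entirely on those two conditional probabilities, which is why qualifying the consensus (consensus of doctors, etc.) matters so much.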

Comment author: Nornagest 14 March 2014 09:15:47PM 0 points [-]

Seems to me that we're more likely to remember instances where the expert consensus was wrong than instances where it wasn't. The consensus among Classical Greek natural philosophers in 300 BC was that the earth was round, and it turns out they were absolutely right.

And I can only pick that out as an example because of the later myth regarding Christopher Columbus et al. There are probably hundreds of cases where consensus, rather than being overturned by some new paradigm, withstood all challenges and slowly fossilized into common knowledge.

Comment author: XiXiDu 11 March 2014 06:32:39PM *  2 points [-]

If you are already unable to determine the relevant expert community, you should maybe ask how accurate the people who started a new research field have been, compared to people decades after the field was established.

If it turns out that most people who founded a research field should expect their models to be radically revised at some point, then you should probably focus on verifying your models rather than prematurely drawing action relevant conclusions.

Comment author: Will_Sawin 11 March 2014 09:33:23PM 2 points [-]

No, you should focus on founding a research field, which mainly requires getting other people interested in the research field.

Comment author: TheAncientGeek 11 March 2014 07:59:04PM 1 point [-]

Maybe the easy answer is to turn "contrarian" into a two place predicate.

Comment author: JoshuaZ 17 March 2014 01:13:58AM 0 points [-]

What would the two places be?

Comment author: Articulator 25 March 2014 03:27:36AM 1 point [-]

With all due respect, I feel like this subject is somewhat superfluous. It seems to be trying to chop part of a general concept off into its own discrete category.

This can all be simplified into accepting that Expert and Common majority opinion are both types of a posteriori evidence that can support an argument, but can be overturned by better a posteriori or a priori evidence.

In other words, they are pretty good heuristics, but like any heuristics, they can fail. Making anything more out of it seems artificial, and only necessary if the basic concept proves too difficult to understand.

Comment author: itaibn0 11 March 2014 09:11:06PM 1 point [-]

Warning: Reference class tennis below.

I think you're neglecting something when trying to determine the right group of experts for judging AGI risk. You consider experts on AI, but the AGI risk thesis is not just a belief about the behavior of AI; it is also a belief about our long-term future, and it is incompatible with many other beliefs intelligent people hold about our long-term future. Therefore I think you should also consider experts on humanity's long-term future as relevant. As an analogy, if the question you want to answer is "Is the Bible a correct account of the nature of God?", you should not just ask experts on the Bible. Instead, you should broaden your question to "What religion is true?" and ask experts on religion in general.

A difficult aspect is that it's not clear who is an expert on the long-term future, seeing as almost everybody is interested in the issue. Politicians, historians, economists, and philosophers are all candidates. Perhaps the best candidates are the super forecasters you mentioned.

Comment author: fubarobfusco 12 March 2014 01:53:28AM 0 points [-]

As an analogy, if the question you want to answer is "Is the Bible a correct account of the nature of God?", you should not just ask experts on the Bible. Instead, you should broaden your question to "What religion is true?" and ask experts on religion in general.

One thing we notice when we look at differences among religious views in the population is that there is a substantial geographic component. People in Germany are a lot more likely than people in Japan to be Roman Catholics. People in Bangkok, Thailand are a lot more likely than people in Little Rock, Arkansas to be Buddhists.

Comment author: Brian_Tomasik 12 March 2014 09:03:22AM 2 points [-]

There are also clusters of opinion in many other fields based on location (continental philosophy, Chicago school of economics, Washington consensus, etc.).

Comment author: Jayson_Virissimo 12 March 2014 06:01:25PM *  1 point [-]

One thing we notice when we look at differences among religious views in the population is that there is a substantial geographic component. People in Germany are a lot more likely than people in Japan to be Roman Catholics. People in Bangkok, Thailand are a lot more likely than people in Little Rock, Arkansas to be Buddhists.

If I am not mistaken, Newtonian physics was accepted almost instantly in the Anglosphere, while Cartesian physics remained dominant on the Continent for an extended period thereafter. Also, interpretations of probability (like frequentism and Bayesianism) have often clustered by university (or even by particular faculties within an individual university). So physics and statistics seem to suffer from the same problem.

Comment author: fubarobfusco 13 March 2014 06:56:54AM *  2 points [-]

For religion, the difference seems to persist over millennia, even under reasonably close contact, except in specific circumstances of authority and conquest — whereas for science and technology, adoption spreads over years or decades whenever there's close contact and comparison.

Religious conquests such as the Catholic conquest of South America don't seem more common worldwide than persistent religious differences such as in India, where Christians have remained a 2% minority despite almost 2000 years of contact including trade, missionary work, and even occasional conquest (the British Raj).

With religion, whenever there aren't authorities with the political power to expel heretics, syncretism and folk-religion develop — the blending of religious traditions, rather than the inexorable disproof and overturning of one by another.

This suggests that differences in religious practice do not reliably bring the socioeconomic and geopolitical advantages that differences in scientific and technological practice bring. If one of the major religions brought substantial advantages, we would not expect to see the persistence of religious differences over millennia that we do.

(IIRC, Newtonian physics spread to the Continent beginning with du Châtelet's translation of the Principia into French, some sixty years after its original publication.)

Comment author: Jiro 12 March 2014 06:45:45PM 1 point [-]

First of all, it seems like the mechanisms for this happening are different for science. One rarely sees scientific ideas spread by conquest, or by the kind of social pressure brought to bear by religions. Nobody gets told by their mother to believe Newton's Laws every week at college in the way they might be told to go to Mass.

Second, everyone is (according to religion) supposed to believe in and live by religion, whether they are educated or not. I'd have greater expectations that beliefs associated with education are geographically limited.

Comment author: Lumifer 12 March 2014 08:00:36PM 3 points [-]

One rarely sees scientific ideas spread by conquest

Except when a more-scientific society kicks the hell out of the less-scientific society and takes it over (or, maybe, just takes over its land and women). Science is very helpful for building weapons.

Comment author: Strange7 13 March 2014 05:26:45AM 2 points [-]

In those situations, though, the conquering force usually makes a point to avoid letting their core scientific insights spread, lest the tables turn.

Comment author: Lumifer 13 March 2014 03:07:05PM 0 points [-]

the conquering force usually makes a point to avoid letting their core scientific insights spread, lest the tables turn.

Only if the conquered society survives. Often it doesn't -- look at what happened in the Americas.

Comment author: Strange7 15 March 2014 08:23:42AM 3 points [-]

That seems like a subset, not an exception. A conquered society which doesn't survive is certainly not going to be adopting higher-tech weapons.

Comment author: Eugine_Nier 18 March 2014 03:16:28AM 0 points [-]

Well, that didn't stop Japan from getting Western advisers to help with its modernization after Perry opened it.

Comment author: newerspeak 18 March 2014 05:33:28AM 1 point [-]
Comment author: TheAncientGeek 12 March 2014 08:35:16PM 0 points [-]

Science =/= technology. Profoundly anti-scientific movements can make their point using tech bought on the open market.

Comment author: Eugine_Nier 13 March 2014 03:35:07AM 0 points [-]

One rarely sees scientific ideas spread by conquest,

The spread of scientific ideas from Europe was almost entirely by conquest.

or by the kind of social pressure brought to bear by religions.

Try being openly creationist at a major university, you'll quickly discover the kind of social pressure science can bring to bear.

Nobody gets told by their mother to believe Newton's Laws every week at college in the way they might be told to go to Mass.

Although they might get told to go to the doctor and not the New Age healer.

Comment author: EHeller 13 March 2014 03:54:28AM *  6 points [-]

Try being openly creationist at a major university, you'll quickly discover the kind of social pressure science can bring to bear.

There were several people in my physics phd program who were openly creationist, and they were politely left alone. I don't know of an environment more science-filled, and honestly I've never known a higher density of creationists.

Comment author: Jiro 13 March 2014 08:19:23AM *  2 points [-]

The spread of scientific ideas from Europe was almost entirely by conquest.

Only in a very literal sense. Nobody said "let's conquer everyone in order to spread the atomic theory of matter", and once they conquered, they didn't execute people who didn't believe in atoms or decide that people who don't believe in atoms are not permitted to testify in court.

Try being openly creationist at a major university, you'll quickly discover the kind of social pressure science can bring to bear.

That's not the same kind of social pressure. I'm referring to one's personal life, not one's professional life.

Comment author: Bugmaster 13 March 2014 04:58:23AM 2 points [-]

Although they might get told to go to the doctor and not the New Age healer.

Where do you draw the line, though ? Kids also get told to brush their teeth and to never play in traffic...

Comment author: Eugine_Nier 18 March 2014 03:17:15AM 0 points [-]

Ask Jiro, I'm not convinced a line exists.

Comment author: Jiro 18 March 2014 08:39:28PM *  0 points [-]

The question dealt with beliefs. Brushing one's teeth (or going to a doctor instead of a healer) is an action. Going to Mass is technically an action, but its primary effect is to instill a belief system in the child.

Comment author: christopherj 01 April 2014 06:47:21AM 0 points [-]

It seems to me that having some contrarian views is a necessity, despite the fact that most contrarian views are wrong. "Not every change is an improvement, but every improvement is a change." As such I'd recommend going meta, teaching other people the skills to recognize correct contrarian arguments. This of course will synergize with recognizing whether your own views are probable or suspect, as well as with convincing others to accept your contrarian views.

  1. Determine levels of expertise in the subject. Not a binary distinction between "expert" and "non-expert" that would put nutritionists, theologians, and futurists in the same category as physicists, materials scientists, and engineers.

    1a. The main determinants would be how easy it is to test things, and how much testing has been done.

    1b. What's the level of consensus? I'd say less than 90% consensus is suspicious; probably indicative of a difficult profession (the experts cannot give definitive, well-tested answers).

  2. What's the experts' reaction to the contrarian view? Do the experts have good reason for rejecting the view, or do they become converts upon hearing it?

  3. What's the epistemic basis of the views? Are we talking about empirical tests, logical deduction, educated guesses, or wild speculation?

  4. Look for conflicts of interest. Don't exclude your own. Look for monetary interests, political interests, moral/values interests, emotional interests, aesthetic interests. Subjects like climate change and economic policy are so interest-laden that besides the difficulties in testing it becomes difficult to find honest, actual experts. Conversely, some ideas are accepted despite interests; dentists advise you against sugar despite their monetary interests, and quantum mechanics is accepted despite being unintuitive.

  5. Consider how you'd convince an honest, intelligent, well-educated expert to accept your contrarian view. If you don't think you can, odds are you don't have cause to believe it yourself.

  6. Test point 5. Remember, you're making a difference in the world, so don't make excuses.

Comment author: [deleted] 12 March 2014 12:55:22PM 0 points [-]

But what’s the relevant expert population, here? Suppose it’s “academics who specialize in the arguments and evidence concerning whether a god or gods exist.” If so, then the expert population is probably dominated by academic theologians and religious philosophers, and my atheism is a contrarian view.

Have you actually checked whether most theologians and philosophers of religion believe in God? Have you picked out which God they believe in?

A priori, academics usually believe in God less than the general population.

Comment author: Alejandro1 12 March 2014 11:31:38PM *  3 points [-]

Here are the results of a survey of philosophers of religion specifically. It has lots of interesting data, among them:

  • Most philosophers of religion are committed Christians.

  • The most common reasons given for specializing in philosophy of religion presupposed a previous belief in religion. (E.g. "Faith seeking understanding"; "Find arguments in order to witness", etc.)

  • Most belief revisions due to studying philosophy of religion were in the direction of greater atheism rather than the opposite. However, this seems to be explained by a combination of two separate facts: most philosophers of religion begin as theists, as mentioned above, and most (on both sides) become less dogmatic and more appreciative of arguments for the opposing view over time.

Comment author: pragmatist 12 March 2014 04:52:45PM 2 points [-]

According to the PhilPapers survey, around 70% of philosophers of religion are theists.

Comment author: [deleted] 12 March 2014 05:21:54PM 2 points [-]

Has someone asked them why? Perhaps we should be theists.

Comment author: pragmatist 12 March 2014 10:05:34PM *  11 points [-]

I will admit to not being all that familiar with contemporary arguments in the philosophy of religion. However, there are other areas of philosophy with which I am quite familiar, and where I regard the debates as basically settled. According to the PhilPapers survey, pluralities of philosophers of religion line up on the wrong side of those debates. For example, philosophers of religion are much more likely (than philosophers in general) to believe in libertarian free will, non-physicalism about the mind, and the A-theory of time (a position that has, for all intents and purposes, been refuted by the theory of relativity). These are not, by the way, issues that are incidental to a philosopher of religion's area of expertise. I imagine views about the mind, the will and time are integral to most intellectual theistic frameworks.

The fact that these philosophers get things so wrong on these issues considerably reduces my credence that I will find their arguments for God convincing. And this is not just a facile "They're wrong about these things, so they're probably wrong about that other thing too" kind of argument. Their views on those issues are indicative of a general philosophical approach -- one that takes our common-sense conceptual scheme and our pre-theoretic intuitions as much stronger evidence than I think they actually are, and correspondingly takes the deliverances of our best scientific theories much less seriously than I do. I very strongly suspect that their arguments for theism will fit this pattern (reliance on a priori "common-sense" principles like the Principle of Sufficient Reason, for example).

I am familiar with the kinds of arguments made by people who adopt this philosophical outlook -- not in the case of theism specifically, but in other domains of philosophy -- and I don't find them all that compelling. In fact, I think they represent much of what is pathological about contemporary philosophy. So I think there is sound evidence that philosophers of religion tend to practice a mode of philosophy which, although quite sophisticated and intellectually challenging, is not particularly truth-conducive.

Comment author: [deleted] 13 March 2014 11:33:06PM *  0 points [-]

Their views on those issues are indicative of a general philosophical approach -- one that takes our common-sense conceptual scheme and our pre-theoretic intuitions as much stronger evidence than I think they actually are, and correspondingly takes the deliverances of our best scientific theories much less seriously than I do. I very strongly suspect that their arguments for theism will fit this pattern (reliance on a priori "common-sense" principles like the Principle of Sufficient Reason, for example).

So you're saying they practice non-naturalized philosophy? Are you sure these are philosophers of religion we're dealing with and not AIXI instances incentivized by professorships?

Comment author: Protagoras 12 March 2014 10:44:41PM *  -1 points [-]

Very good diagnosis. While I don't recall where the philosophers of religion came out on this issue in the survey, some of the popular arguments for the existence of God seem to rely on a very strong version of mathematical Platonism: a belief that there is one true logic/mathematics, and that strong conclusions about the world can be drawn by the proper employment of that logic (the use of PSR that you mention is a common, but not the only, example of this). Since I reject that kind of One True Logic (I'm a Carnapian, "in logic there are no morals!" guy), I tend to think that any logical proof of the existence of God (or anything else) serves only to establish that you are using a logic which has a built-in "God exists" (or whatever) assumption. For example, consider the simple modal ontological argument, which says: God possibly exists; God is a necessary being; so by a few simple steps in S5, God exists. If you're using S5, then once you've made one of the assumptions about God, making the other one just amounts to assuming God exists, and so in effect committing yourself to reasoning only about God-worlds. Such a logic may have its uses, but it is incapable of reasoning about the possibility that God might or might not exist; for such purposes a neutral logic would be required.
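For readers unfamiliar with it, the modal ontological argument mentioned above (writing G for "God exists") is standardly reconstructed along roughly these lines:

1. ◇G (premise: possibly, God exists)
2. □(G → □G) (premise: God, if He exists at all, exists necessarily)
3. ◇□G (from 1 and 2, by the K principle □(p → q) → (◇p → ◇q))
4. □G (from 3, by the characteristic S5 axiom ◇□p → □p)
5. G (from 4, by the T axiom □p → p)

Step 4 is the one that only goes through in S5 (or another logic validating ◇□p → □p), which is the sense in which the choice of logic is doing the real work.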

Comment author: Protagoras 12 March 2014 06:24:56PM *  1 point [-]

I read some of the discussion on philosophy of religion blogs after the survey came out. One of the noteworthy results of the survey was that philosophers who don't specialize in philosophy of religion were about 3/4 atheists. One or two of the philosophy of religion bloggers claimed that their non-specialist colleagues weren't familiar with some of the recent literature presenting new arguments for the existence of God. As a philosopher who doesn't specialize in philosophy of religion, I thought they underestimated how familiar people like me are with the arguments concerning theism. However, I admit for people like me it comes up especially in history, so I followed up on that and looked at some of the recommended recent papers. I was unable to find anything that looked at all compelling, or really even new, but perhaps I didn't try hard enough.

Comment author: [deleted] 12 March 2014 07:48:40PM *  1 point [-]

So, sum total, you're saying that philosophers of religion believe because they engage in special pleading to get separate epistemic standards for God?

(Please note that my actual current position is strong agnosticism: "God may exist, but if so, He's plainly hiding, and no God worthy of the name would be incapable of hiding from me, so I cannot know if such a God exists or not.")

Comment author: Strange7 13 March 2014 05:45:33AM 0 points [-]

I've got a variant of that. "Assuming God exists, He seems to be going to some trouble to hide. Either He doesn't want to be found, in which case the polite thing is to respect that, or He's doing some screwy reverse-psychology thing, in which case I have better things to do with my time than engage an omnipotent troll."

Comment author: Protagoras 12 March 2014 08:28:34PM 0 points [-]

Well, I didn't want to go into detail, because I don't remember all the details and didn't feel like wasting time going and looking it up again, but yes, essentially. The usual form is "if you make these seemingly reasonable assumptions, you get to God, so God!", and usually the assumptions actually didn't look that reasonable to me to begin with, and of course an alternative response to the arguments is always that they provide evidence that the assumptions are much more powerful than they seem and so need much closer examination.

Comment author: chaosmage 12 March 2014 05:32:01PM 0 points [-]

Would you believe their self-reported answer to be the actual reason?

Comment author: [deleted] 12 March 2014 05:47:57PM 0 points [-]

I would believe that their self-reported answer is at least partially the real reason. I would also want to know about any mitigating factors, of course, but saying something about the reasons for their beliefs implies some evidence.

Comment author: Jayson_Virissimo 12 March 2014 11:05:10PM *  0 points [-]

Has someone asked them why? Perhaps we should be theists.

Yes, they say something like this, this, or this. Whether that is their True Reason for Believing...well, I don't think so in most cases, but that is just my intuition.

BTW, I'm also a theist (some might say on a technicality) for the fairly boring and straightforward reason that I affirm Bostrom's trilemma and deny its first two disjuncts along with accepting having created our universe as a sufficient (but not necessary) condition for godhood.

Comment author: TheAncientGeek 14 March 2014 11:09:10AM *  2 points [-]

The trilemma should be a tetralemma: you also need to deny that consciousness, qualia and all, is unsimulable.

Comment author: [deleted] 14 March 2014 07:05:14AM 0 points [-]

BTW, I'm also a theist (some might say on a technicality) for the fairly boring and straightforward reason that I affirm Bostrom's trilemma and deny its first two disjuncts along with accepting having created our universe as a sufficient (but not necessary) condition for godhood.

So you believe there exists a Matrix Lord? Basically, computational Deism?

Comment author: tom_cr 11 March 2014 09:55:11PM 0 points [-]

A minor point in relation to this topic, but an important point, generally:

It seems to be more of a contrarian value judgment than a contrarian world model

Correct me if I'm wrong, but isn't a value judgement necessarily part of a world model? You are a physical object, and your values necessarily derive from the arrangement of the matter that composes you.

Many tell me (effectively) that what I've just expressed is a contrarian view. Certainly, for many years I would have happily agreed with the non-overlapping-ness of value judgements and world views. But then I started to think about it. I thought about it all the more carefully because it seemed the conclusion I was reaching was a contrarian position. I thought about it so much, in fact, that it's now quite obvious to me that I'm right, regardless of how large the majority who profess to disagree with me.

Perhaps this illustrates the utility of recognizing an idea's contrarian nature (and conversely, the danger of not pursuing ideas simply because consensus is deemed to have been already reached).

Comment author: nshepperd 12 March 2014 12:30:01AM 0 points [-]

Correct me if I'm wrong, but isn't a value judgement necessarily part of a world model? You are a physical object, and your values necessarily derive from the arrangement of the matter that composes you.

That's confusing levels. A world model that makes some factual assertions, some of which imply "my values are X", is a distinct thing from your values actually being X. To begin with, it's entirely possible for your world model to imply "my values are X" when your values are actually Y, in which case your world model is wrong.

Comment author: tom_cr 12 March 2014 03:47:11AM -1 points [-]

What levels am I confusing? Are you sure it's not you that is confused?

Your comment bears some resemblance to that of Lumifer. See my reply above.

Comment author: nshepperd 12 March 2014 04:22:34AM 0 points [-]

To put it simply, what I am saying is that a value judgement is about whatever it is you are in fact judging. While a factual assertion such as you would find in a "model of the world" is about the physical configuration of your brain. This is similar to the use/mention distinction in linguistics. When you make a value judgement you use your values. A model of your brain mentions them.

An argument like this

You are a physical object, and your values necessarily derive from the arrangement of the matter that composes you [therefore a value judgement is necessarily part of a world model].

could be equally well applied to claim that the act of throwing a ball is necessarily part of a world model, because your arm is physical. In fact, they are completely different things (for one thing, simply applying a model will never result in the ball moving), even though a world model may well describe the throwing of a ball.

Comment author: tom_cr 12 March 2014 03:52:32PM 0 points [-]

A value judgement both uses and mentions values.

The judgement is an inference about values. The inference derives from the fact that some value exists. (The existing value exerts a causal influence on one's inferences.)

This is how it is with all forms of inference.

Throwing a ball is not an inference (note that 'inference' and 'judgement' are synonyms), thus throwing a ball is in no way necessarily part of a world model, and for our purposes, in no way analogous to making a value judgement.

Comment author: nshepperd 12 March 2014 11:27:08PM *  0 points [-]

Here is a quote from the article:

Is my effective altruism a contrarian view? It seems to be more of a contrarian value judgment than a contrarian world model, and by “contrarian view” I tend to mean “contrarian world model.”

Lukeprog thinks that effective altruism is good, and this is a value judgement. Obviously, most of mainstream society doesn't agree—people prefer to give money to warm fuzzy causes, like "adopt an endangered panda". So that value judgement is certainly contrarian.

Presumably, lukeprog also believes that "lukeprog thinks effective altruism is good". This is a fact in his world model. However, most people would agree with him when asked if that is true. We can see that lukeprog likes effective altruism. There's no reason for anyone to claim "no, he doesn't think that" when he obviously does. So this element of his world model is not contrarian.

Comment author: tom_cr 13 March 2014 02:40:24AM 0 points [-]

I guess Lukeprog also believes that Lukeprog exists, and that this element of his world view is also not contrarian. So what?

One thing I see repeatedly in others is a deep-rooted reluctance to view themselves as blobs of perfectly standard physical matter. One of the many ways this manifests itself is a failure to consider inferences about one's own mind as fundamentally similar to any other form of inference. There seems to be an assumption of some kind of non-inferable magic when many people think about their own motivations. I'm sure you appreciate how fundamentally silly this is, but maybe you could take a little time to meditate on it some more.

Sorry if my tone is a little condescending, but understand that you have totally failed to support your initial claim that I was confused.

Comment author: nshepperd 13 March 2014 12:05:03PM *  1 point [-]

That's not at all what I meant. Obviously minds and brains are just blobs of matter.

You are conflating the claims "lukeprog thinks X is good" and "X is good". One is an empirical claim, one is a value judgement. More to the point, when someone says "P is a contrarian value judgement, not a contrarian world model", they obviously intend "world model" to encompass empirical claims and not value judgements.

Comment author: tom_cr 13 March 2014 04:45:47PM 0 points [-]

I'm not conflating anything. Those are different statements, and I've never implied otherwise.

The statement "X is good," which is a value judgement, is also an empirical claim, as was my initial point. Simply restating your denial of that point does not constitute an argument.

"X is good" is a claim about the true state of X, and its relationship to the values of the person making the claim. Since you agree that values derive from physical matter, you must (if you wish to be coherent) also accept that "X is good" is a claim about physical matter, and therefore part of the world model of anybody who believes it.

If there is some particular point or question I can help with, don't hesitate to ask.

Comment author: nshepperd 13 March 2014 09:23:43PM -1 points [-]

If "X is good" was simply an empirical claim about whether an object conforms to a person's values, people would frequently say things like "if my values approved of X, then X would be good" and would not say things like "taking a murder pill doesn't affect the fact that murder is bad".

Alternative: what if "X is good" was a mathematical claim about the value of a thing according to whatever values the speaker actually holds?

Comment author: Strange7 13 March 2014 05:35:27AM 0 points [-]

My theory is that the dualistic theory of mind is an artifact of the lossy compression algorithm which, conveniently, prevents introspection from turning into infinite recursion. Lack of neurosurgery in the environment of ancestral adaptation made that an acceptable compromise.

Comment author: tom_cr 13 March 2014 05:11:29PM 0 points [-]

I quite like Bob Trivers' self-deception theory, though I only have tangential acquaintance with it. We might anticipate that self deception is harder if we are inclined to recognize the bit we call "me" as caused by some inner mechanism, hence it may be profitable to suppress that recognition, if Trivers is on to something.

Wild speculation on my part, of course. There may simply be no good reason, from the point of view of historic genetic fitness, to be good at self analysis, and you're quite possibly on to something, that the computational overhead just doesn't pay off.

Comment author: Lumifer 12 March 2014 01:06:05AM 0 points [-]

but isn't a value judgement necessarily part of a world model? You are a physical object, and your values necessarily derive from the arrangement of the matter that composes you.

The issue is, whose world model? Your world model does not necessarily include values even if they were to be deterministically derived from "the arrangement of the matter".

The map is not the territory. Models are imperfect, and many different models can be built on the basis of the same reality.

Comment author: tom_cr 12 March 2014 02:19:03AM *  0 points [-]

whose world model?

Trivially, it is the world model of the person making the value judgement I'm talking about. I'm trying hard, but I'm afraid I really don't understand the point of your comment.

If I make a judgement of value, I'm making an inference about an arrangement of matter (mostly in my brain), which (inference) is therefore part of my world model. This can't be otherwise.

Furthermore, any entity capable of modeling some aspect of reality must be, by definition, capable of isolating salient phenomena, which amounts to making value judgements. Thus, I'm forced to disagree when you say "your world model does not necessarily include values..."

Your final sentence is trivially correct, but its relevance is beyond me. Sorry. If you mean that my world model may not include values I actually possess, this is correct of course, but nobody stipulated that a world model must be correct.

Comment author: Lumifer 12 March 2014 04:16:12AM 1 point [-]

I don't think we understand each other. Let me try to unroll.

A model (of the kind we are talking about) is some representation of reality. It exists in a mind.

Let's take Alice. Alice holds an apple in her hand. Alice believes that if she lets go of the apple it will fall to the ground. This is an example of a simple world model that exists inside Alice's mind: basically, that there is such a thing as gravity and that it pulls objects towards the ground.

You said "isn't a value judgement necessarily part of a world model?" I don't see a value judgement in this particular world model inside Alice's mind.

You also said "You are a physical object, and your values necessarily derive from the arrangement of the matter that composes you." That is a claim about how Alice's values came to be. But I don't see why Alice's values must necessarily be part of all world models that exists inside Alice's mind.

Comment author: tom_cr 12 March 2014 03:42:29PM 0 points [-]

I never said anything of the sort that Alice's values must necessarily be part of all world models that exist inside Alice's mind. (Note, though, that if we are talking about 'world model,' singular, as I was, then world model necessarily includes perception of some values.)

When I say that a value judgement is necessarily part of a world model, I mean that if I make a value judgement, then that judgement is necessarily part of my world model.

Comment author: Lumifer 12 March 2014 03:55:33PM 0 points [-]

When I say that a value judgement is necessarily part of a world model, I mean that if I make a value judgement, then that judgement is necessarily part of my world model.

So, Alice likes cabernet and dislikes merlot. Alice says "I value cabernet more than merlot". This is a value judgement. How is it a part of Alice's world model and which world model?

By any chance, are you calling "a world model" the totality of a person's ideas, perceptions, representations, etc. of external reality?

Comment author: tom_cr 12 March 2014 04:14:27PM 1 point [-]

Alice is part of the world, right? So any belief about Alice is part of a world model. Any belief about Alice's preference for cabernet is part of a world model - specifically, the world model of who-ever holds that belief.

By any chance....?

Yes. (The phrase "the totality of" could, without any impact on our current discussion, be replaced with "elements of". )

Is there something wrong with that? I inferred that to also be the meaning of the original poster.

Comment author: Lumifer 12 March 2014 04:20:39PM 0 points [-]

specifically, the world model of who-ever holds that belief

Not "whoever", we are talking specifically about Alice. Is Alice's preference for cabernet part of Alice's world model?

I have a feeling we're getting into the snake-eating-its-own-tail loops. If Alice's preferences are part of Alice's world model, then Alice's world model is part of Alice's world model as well. Recurse until you are reduced to praying to the Holy Trinity of Godel, Escher, and Bach :-)

The phrase "the totality of" could, without any impact on our current discussion, be replaced with "elements of".

Could it? You are saying that value judgments must be a part of. Are there "elements of" which do not contain value judgements?

Comment author: tom_cr 12 March 2014 04:41:20PM 1 point [-]

Are there "elements of" which don't contain value judgements?

That strikes me as a question for dictionary writers. If we agree that Newton's laws of motion constitute such an element, then clearly, there are such elements that do not contain value judgements.

Is Alice's preference for cabernet part of Alice's world model?

iff she perceives that preference.

If Alice's preferences are part of Alice's world model, then Alice's world model is part of Alice's world model as well.

I'm not sure this follows by logical necessity, but how is this unusual? When I mention Newton's laws, am I not implicitly aware that I have this world model? Does my world model, therefore, not include some description of my world model? How is this relevant?

Comment author: Dagon 11 March 2014 08:11:25PM 0 points [-]

More recent (last week) Hanson writing on contrarianism: http://www.overcomingbias.com/2014/03/prefer-contrarian-questions-vs-answers.html. He takes a tack similar to your "value contrarianism": on certain topics (values for you, important questions for him), you both think the consensus (whichever one you're contradicting) is less likely to be correct.

I wonder if some topics, especially far-mode ones, don't have truth, or if truth is less important to actions. Those topics would be the ones to choose for contrarian signaling.

Comment author: common_law 31 March 2014 06:48:00PM 0 points [-]

You're mistaken in applying the same standards to personal and deliberative decisions. The decision to enroll in cryonics is different in kind from the decision to promote safe AI for the public good. The first should be based on the belief that cryonics claims are true; the second should be based (ultimately) on the marginal value of advocacy in advancing the discussion. The failure to understand this distinction is a major failing in public rationality. For elaboration, see The distinct functions of belief and opinion.

Comment author: Filipe 12 March 2014 05:58:16PM *  -1 points [-]

Garth Zietsman, who, according to himself, "Scored an IQ of 185 on the Mega27 and has a degree in psychology and statistics and 25 years experience in psychometrics and statistics", proposed the statistical concept of The Smart Vote, which seems to resemble your "Mildly extrapolate elite opinion". There are many applications of his idea to relevant topics on his blog.

It's not simply choosing the most popular answer among smart people in any (aggregation of) poll(s), but rather comparing the proportion of more intelligent to less intelligent respondents giving each answer, and taking as The Smart Vote the answer with the largest such ratio, after controlling for possible interests.
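The ratio method described above can be sketched roughly as follows. This is a minimal illustration of the idea, not Zietsman's actual procedure: the IQ threshold, the poll data, and the function names are all hypothetical, and the "controlling for possible interests" step is omitted.

```python
# Rough sketch of the "Smart Vote" ratio idea: for each answer, compare the
# fraction of high-scoring respondents choosing it with the fraction of
# low-scoring respondents choosing it, and pick the answer with the largest
# ratio. Data and threshold below are invented for illustration.
from collections import Counter

def smart_vote(responses, iq_threshold=120):
    """responses: list of (iq, answer) pairs.
    Returns the answer whose share among high-IQ respondents most exceeds
    its share among low-IQ respondents."""
    high = Counter(a for iq, a in responses if iq >= iq_threshold)
    low = Counter(a for iq, a in responses if iq < iq_threshold)
    n_high = sum(high.values()) or 1  # avoid division by zero
    n_low = sum(low.values()) or 1

    def ratio(answer):
        p_high = high[answer] / n_high
        p_low = low[answer] / n_low
        return p_high / p_low if p_low > 0 else float("inf")

    return max(set(high) | set(low), key=ratio)

# Hypothetical poll: "A" is more popular overall (4 votes to 3), but "B" is
# chosen disproportionately by the high-IQ group, so "B" is the Smart Vote.
poll = [(130, "B"), (135, "B"), (125, "A"),
        (110, "A"), (108, "A"), (105, "B"), (100, "A")]
print(smart_vote(poll))  # → B
```

Note that the winner need not be the plurality answer: in the toy poll, "A" gets more total votes, but "B" wins because two thirds of the high-IQ group chose it versus only a quarter of the low-IQ group.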

Comment author: Jayson_Virissimo 12 March 2014 06:10:31PM 1 point [-]

J. S. Mill had a similar idea:

...one might also mention his acceptance of the principle of multiple votes, in which educated and more responsible persons would be made more influential by giving them more votes than the uneducated.

-- Wilson, Fred, "John Stuart Mill", The Stanford Encyclopedia of Philosophy

Comment author: Jiro 12 March 2014 06:16:38PM 5 points [-]

There is nobody whom I'd trust to decide who counts as educated and more responsible, and therefore who would get more votes. And the historical record on similar subjects is pretty bad.

I would also expect a feedback loop where people who can vote more vote to give more voting power to people like themselves.

(And I also find it odd that most people who contemplate such things assume they would be considered educated and more responsible. Imagine a world where, say, the socially responsible get more voting power, and that (for instance) thinking that there are innate differences between races or sexes disqualifies one from being socially responsible.)

Comment author: jpaulson 13 March 2014 05:09:59AM 1 point [-]

The pervasive influence of money in politics sort of functions as a proxy for this. YMMV for whether it's a good thing...

Comment author: Filipe 12 March 2014 06:18:34PM *  -1 points [-]

Even though he calls it "The Smart Vote", the concept is a way to figure out the truth, not to challenge current democratic notions (I think), and is quite a bit more sophisticated than merely giving greater weight to smarter people's opinions.