by [anonymous]
5 min read · 8th May 2010 · 108 comments

Related: http://lesswrong.com/lw/1kh/the_correct_contrarian_cluster/, http://lesswrong.com/lw/1mh/that_magical_click/, http://lesswrong.com/lw/18b/reason_as_memetic_immune_disorder/

Given a claim, and assuming that its truth or falsehood would be important to you, how do you decide if it's worth investigating?  How do you identify "bunk" or "crackpot" ideas?

Here are some examples to give an idea. 

"Here's a perpetual motion machine": bunk.  "I've found an elementary proof of Fermat's Last Theorem": bunk.  "9-11 was an inside job": bunk.  

 "Humans did not cause global warming": possibly bunk, but I'm not sure.  "The Singularity will come within 100 years": possibly bunk, but I'm not sure.  "The economic system is close to collapse": possibly bunk, but I'm not sure.

"There is a genetic difference in IQ between races": I think it's probably false, but not quite bunk.  "Geoengineering would be effective in mitigating global warming": I think it's probably false, but not quite bunk. 

(These are my own examples.  They're meant to be illustrative, not definitive.  I imagine that some people here will think "But that's obviously not bunk!"  Sure, but you probably can think of some claim that *you* consider bunk.)

A few notes of clarification: I'm only examining factual, not normative, claims.  I am also not looking at well-established claims (say, special relativity) which are obviously not bunk.  Neither am I looking at claims where it's easy to pull data that obviously refutes them (for example, "There are 10 people in the US population").  I'm concerned with claims that look unlikely, but not impossible.

Also, "Is this bunk?" is not the same question as "Is this true?"  A hypothesis can turn out to be false without being bunk -- for example, the claim that geological formations were created by gradual processes.  That was a respectable position for 19th-century geologists to take, and a claim worth investigating, even if subsequent evidence did show it to be false.  The question "Is this bunk?" arises when someone makes an unlikely-sounding claim, but I don't actually have the knowledge right now to effectively refute it, and I want to know if the claim is a legitimate subject of inquiry or the work of a conspiracy theorist/hoaxer/cultist/crackpot.  In other words, is it a scientific or a pseudoscientific hypothesis?  Or, in practical terms, is it worth it for me or anybody else to investigate it?

This is an important question, especially for this community.  People involved in artificial intelligence, the Singularity, or existential risk are on the edge of the scientific mainstream, and it's particularly crucial to distinguish an interesting hypothesis from a bunk one.  Distinguishing an innovator from a crackpot is vital in fields where there are both innovators and crackpots.

I claim bunk exists. That is, there are claims so cracked that they aren't worth investigating. "I was abducted by aliens" has such a low prior that I'm not even going to go check up on the details -- I'm simply going to assume the alleged alien abductee is a fraud or nut.  Free speech and scientific freedom do not require us to spend resources investigating every conceivable claim.  Some claims are so likely to be nonsense that, given limited resources, we can justifiably dismiss them.
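To make the "low prior" point concrete, here is a toy Bayesian update in Python; the numbers are illustrative assumptions, not anything from the post:

```python
# Toy Bayes update: why a tiny prior can justify dismissal without investigation.
# All numbers below are illustrative assumptions.
prior = 1e-8               # P(this abduction claim is true)
p_report_if_true = 0.9     # P(the claim gets made | it is true)
p_report_if_false = 1e-3   # P(the claim gets made anyway | it is false): frauds, nuts

posterior = (prior * p_report_if_true) / (
    prior * p_report_if_true + (1 - prior) * p_report_if_false
)
print(posterior)  # ~9e-6: the report barely moves the needle; not worth resources
```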

But how do we determine what's likely to be nonsense?  "I know it when I see it" is a pretty bad guide.

First idea: check if the proposer uses the techniques of rationality and science.  Does he support claims with evidence?  Does he share data and invite others to reproduce his experiments? Are there internal inconsistencies and logical fallacies in his claim?  Does he appeal to dogma or authority?  If there are features in the hypothesis itself that mark it as pseudoscience, then it's safely dismissed; no need to look further.

But what if there aren't such clear warning signs?  Our gracious host Eliezer Yudkowsky, for example, does not display those kinds of obvious tip-offs of pseudoscience -- he doesn't ask people to take things on faith, he's very alert to fallacies in reasoning, and so on.  And yet he's making an extraordinary claim (the likelihood of the Singularity), a claim I do not have the background to evaluate, but a claim that seems implausible.  What now?  Is this bunk?

A key thing to consider is the role of the "mainstream."  When a claim is out of the mainstream, are you justified in moving it closer to the bunk file?  There are three camps I have in mind, who are outside the academic mainstream, but not obviously (to me) dismissed as bunk: global warming skeptics, Austrian economists, and singularitarians.  As far as I can tell, the best representatives of these schools don't commit the kinds of fallacies and bad arguments of the typical pseudoscientist.  How much should we be troubled, though, by the fact that most scientists of their disciplines shun them?  Perhaps it's only reasonable to give some weight to that fact.  

Or is it? If all the scientists themselves are simply making their judgments based on how mainstream the outsiders are, then "mainstream" status doesn't confer any information.  The reason you listen to academic scientists is that you expect that at least some of them have investigated the claim themselves.  We need some fraction of respected scientists -- even a small fraction -- who are crazy enough to engage even with potentially crackpot theories, if only to debunk them.  But when they do that, don't they risk being considered crackpots themselves?  This is some version of "Tolerate tolerance."  If you refuse to trust anybody who even considers seriously a crackpot theory, then you lose the basis on which you reject that crackpot theory.  

So the question "What is bunk?" -- that is, the question "What is too unlikely to be worth investigating?" -- apparently destroys itself.  You can only tell if a claim is unlikely by doing a little investigation.  It's probably an iterative process: when you do a little investigation, if it's starting to look more and more like the claim is false, you can quit; but if it's the opposite, then the claim is probably worth even more investigation.

The thing is, we all have different thresholds for what captures our attention and motivates us to investigate further.  Some people are willing to do a quick Google search when somebody makes an extraordinary claim; some won't bother; some will go even further and do extensive research.  When we check the consensus to see if a claim is considered bunk, we're acting on the hope that somebody has a lower threshold for investigation than we do.  We hope that some poor dogged sap has spent hours diligently refuting 9-11 truthers so that we don't have to.  From an economic perspective, this is an enormous free-rider problem, though -- who wants to be that poor dogged sap?  The hope is that somebody, somewhere, in the human population is always inquiring enough to do at least a little preliminary investigation.  We should thank the poor dogged saps of the world.  We should create more incentives to be a poor dogged sap.  Because if we don't have enough of them, we're going to be very mistaken when we think "Well, this wasn't important enough for anyone to investigate, so it must be bunk."

(N.B.  I am aware that many climate scientists are being "poor dogged saps" by communicating with and attempting to refute global warming skeptics.  I don't know whether there are economists who bother trying to refute Austrian economics, or electrical engineers and computer scientists who spend time being Singularity skeptics.)

 

Comments:

SarahC:

"A key thing to consider is the role of the 'mainstream.' When a claim is out of the mainstream, are you justified in moving it closer to the bunk file?"

An important point here is that the intellectual standards of the academic mainstream differ greatly between various fields. Thus, depending on the area we're talking about, the fact that a view is out of the mainstream may imply that it's bunk with near-certainty, but it may also tell us nothing if the mainstream standards in the area are especially bad.

From my own observations of research literature in various fields and the way academia operates, I have concluded that healthy areas where the mainstream employs very high intellectual standards of rigor, honesty, and judicious open-mindedness are normally characterized by two conditions:

(1) There is lots of low-hanging fruit available, in the sense of research goals that are both interesting and doable, so that there are clear paths to quality work, which makes it unnecessary to invent bullshit instead.

(2) There are no incentives to invent bullshit for political or ideological reasons.

As soon as either of these conditions doesn't hold in an academic area, th…

SarahC:

"There are three camps I have in mind, who are outside the academic mainstream, but not obviously (to me) dismissed as bunk: global warming skeptics, Austrian economists, and singularitarians."

So, to apply my above criteria to these cases:

  • Climate science is politicized to an extreme degree and plagued by vast methodological difficulties. (Just think about the difficulty of measuring global annual average temperature with 0.1C accuracy even in the present, let alone reconstructing it far into the past.) Thus, I'd expect a very high level of bullshit infestation in its mainstream, so critics scorned by the mainstream should definitely not be dismissed out of hand.

  • Ditto for mainstream vs. Austrian macroeconomics; in fact, even more so. If you look at the blogs of prominent macroeconomists, you'll see lots of ideologically motivated mutual scorn and abuse even within the respectable mainstream. Austrians basically call bullshit on the entire mainstream, saying that the whole idea of trying to study economic aggregates by aping physics is a fundamentally unsound cargo-cult approach, so they're hated by everyone. While Austrians have their own dubious (and sometimes obvious…
gwern:
If it's not presumptuous of me, I'd like the Bogdanov affair removed as an example. I was one of the Wikipedia administrators deeply involved in the BA edit wars on Wikipedia, and while I originally came to it with an open mind (which is why I was asked to intervene), there quickly came to be not a single doubt in my mind that the brothers were complete con artists who possess only a talent for self-promotion and media manipulation. This is unlike string theory, where there are good arguments on both sides and one could genuinely be uncertain.
Vladimir_M:
However, would you agree that the Bogdanoff brothers' work has been, at least at some points, approved and positively reviewed by credentialed physicists with official and reputable academic affiliations? After all, they successfully published several papers and defended their theses. Now, it may be that after their work came under intense public scrutiny, it was shown to be unsound so convincingly that it led some of these reviewers to publicly reverse their previous judgments. However, considering that the overwhelming majority of research work never comes under any additional scrutiny beyond the basic peer review and thesis defense procedures, this still seems to me like powerful evidence that the quality of many lower-profile publications in the field could easily be as bad.
gwern:
As I recall, they didn't defend their theses, and only eventually got their degrees by a number of questionable devices, like replacing a thesis with publications somewhere and forcing a shift to an entirely different field like mathematics. EDIT: The oddities of their theses are covered in http://en.wikipedia.org/wiki/Bogdanoff_affair#Origin_of_the_affair
multifoliaterose:
Very articulate comment; it helped clarify my thinking on this topic. Thanks.

For me the primary evidence of a bunk claim is when the claimant fails to reasonably deal with the mainstream.  Let's take the creation/evolution debate.  If someone comes along claiming a creationist position, but is completely unable even to describe what the evolutionary position is, or what might be good about it, then their idea is bunk.  If someone is very good at explaining evolution as it really happens, but then goes on to claim something different can happen as well -- then it becomes interesting.

Anyone proposing an alternative idea needs to know precisely what it is an alternative to - otherwise they haven't done their homework, and it isn't worth my time.

Yes! This is a key point in the Alternative-Science Respectability Checklist, for example:

Someone comes along and says “I’ve discovered that there’s no need for dark matter.” A brief glance at the abstract reveals that the model violates our understanding of perturbation theory. Well, perhaps there is something subtle going on here, and our conventional understanding of perturbation theory doesn’t apply in this case. So here’s what any working theoretical cosmologist would do (even if they aren’t consciously aware that they’re doing it): they would glance at the introduction to the paper, looking for a paragraph that says “Look, we know this isn’t what you would expect from elementary perturbation theory, but here’s why that doesn’t apply in this case.” Upon not finding that paragraph, they would toss the paper away.

Eugine_Nier:
Replace "creationist" and "evolutionary" in that sentence with "atheist" and "religious" respectively and you have the most common theist criticism of Dawkins. Therefore, since theism is more-or-less the mainstream position, wouldn't following your rule force you to conclude that Dawkins' atheism is bunk?
DuncanS:
Sorry to take a while to look at this. It would. I'm aware of what Dawkins has said about this: that one doesn't need to be an expert on fairies in order to conclude that they don't exist, and that this ought to apply to gods as well. This is fair enough. It's a rule of argument. If someone doesn't want to learn about fairies, that's their own concern. But if they want to persuade some other people who do believe in the fairies, they ought to take the time to learn enough about what those people say about fairies to plug into their world.

Theories are like languages, I think. If someone has a mental vocabulary which involves fairies, you will more easily persuade them if you can use the language too. What too often happens is that a critic doesn't learn the other person's language. They then end up misrepresenting what the other party believes, and, to follow that up, they tell them that their first step to knowledge is to throw away a language that they find useful in favour of a different one that they've never used. They then go on to make arguments to which they have no idea how I'm going to respond. As a persuasion strategy, this is a non-starter.

I'm not at all saying all theories/languages are equal; some are far better than others. But if you want to persuade an outsider, learning their language is only courteous, and gives you a huge advantage. You learn where the real problems of the other belief system are. You discover what it does successfully explain. You discover how to partially express your beliefs in their system, which makes it easier for them to accept and test what you're saying.

My original point is that, as an optimisation, you can immediately reject any arguer who hasn't realised that they need to talk the language of their hearers. It does explain why Dawkins's book has resulted in more heat than light. Reading it, Dawkins's book can be summarised as saying "Your theism seems completely ridiculous, for all these reasons. I don't know h…
Eugine_Nier:
Why is this a good optimization? Do you have any particular evidence that an arguer who is willing to learn and use your language is more likely to have accurate beliefs?
DuncanS:
It's the other way about - I can't think of an example where someone who didn't know the language of any field of learning has successfully convinced that field of anything (other than that they are a fool). I'm not saying that person is particularly ignorant - they may be quite smart in some ways - but they're not doing what's necessary to convince. My optimisation is to ignore them until they put in the effort - it's much easier for them to learn the language than to do the novel thinking, after all. If that makes them frustrated, so be it.
Eugine_Nier:
The point is not that it keeps them frustrated, the point is that it keeps you ignorant.
DuncanS:
Quite the reverse - it guides me to pay attention to those people who do take the trouble. It's not as if I'm in any danger of running out of information these days.
Eugine_Nier:
The question still remains why you think your heuristic is particularly good.

Note that when you consider a claim, you shouldn't set out to prove it false, or to prove it true. You should set out to find a correct conclusion about the claim -- the truth about it. Not being skeptical is a particular failure mode that makes experts whom you suspect of having this flaw an inappropriate source of knowledge about the claim. "Skepticism" is a similarly flawed mode of investigation.

So, the question shouldn't be, "Who is qualified to refute the Friendly AI idea?", but "Who is qualified to reveal the truth about the Friendly AI idea?".

It should be an established standard to link to the previous posts on the same topic. This is necessary to actually build upon existing work, and not just create blogging buzz. In this case, the obvious reference is The Correct Contrarian Cluster, and also probably That Magical Click and Reason as memetic immune disorder.

Paul Crowley:
A related post is my Survey of anti-cryonics writing.
arbimote:
The post also mentioned Tolerate Tolerance.
[anonymous]:
Thank you!

By the way, I have spent quite a long time trying to "debunk" the set of ideas around Friendly AI and the Singularity, and my conclusion is that there's simply no reasonable mainstream disagreement with that somewhat radical hypothesis.  Why is FAI/Singularity not mainstream?  Because the mainstream of science doesn't have to publicly endorse every idea it cannot refute.  There is no "court of crackpot appeal" where a correct contrarian can go to show, once and for all, that their problem/idea is legit.  Academia can basically say "fuck off, we don't like you or your idea, you won't get a job at a university unless you work on something we like".

Now, such an ability to arbitrarily tell people to get lost is useful, because there are so many crackpots around, and they are really annoying.  But it is a very simple and crude filter, akin to cutting your internet connection to prevent spam email.  Just losing Eliezer's and Nick Bostrom's insights about Friendly AI may cost academia more than all the crackpots put together could ever have cost.

Robin Hanson's way around this was to spend a significant fraction of his life getting tenure, and now they can't sack him; but that doesn't mean that mainstream consensus will update to his correct contrarian position on the Singularity; they can just press the "ignore" button.

[anonymous]:
That's precisely the point I'm trying to make. We do lose a lot by ignoring correct contrarians. I think academia may be losing a lot of knowledge by filtering crudely. If indeed there is no mainstream academic position, pro or con, on Friendly AI, I think academia is missing something potentially important.

On the other hand, institutions need some kind of a filter to avoid being swamped by crackpots. A rational university or journal or other institution, trying to avoid bias, should probably assign more points to "promiscuous investigators": people with respected mainstream work who currently spend time analyzing contrarian claims, whether to confirm or debunk. (I think Robin Hanson is a "promiscuous investigator.")
Roko:
I hereby nominate this for understatement of the millennium:
Thomas:
If true, it will eventually be accepted by academia. Ironically enough, by then there will be no academia in the present sense anymore.
Roko:
Does a uFAI killing all of our scientists count as them "accepting" the idea? Rhetorical question.

My social intuitions tell me it is generally a bad idea to say words like 'kill' (as opposed to, say, 'overwrite', 'fatally reorganize', or 'dismantle for spare part(icle)s') in describing scenarios like that, as they resemble some people's misguided intuitions about anthropomorphic Skynet dystopias. On Less Wrong it matters less, but if one were trying to convince, e.g., a non-singularitarian transhumanist that singularitarian ideas were important, then subtle language cues like that could have big effects on your apparent theoretical leaning and the outcome of the conversation. (This is more of a general heuristic than a critique of your comment, Roko.)

steven0461:
Good point, but one of the possibilities is that the UFAI takes long enough to become completely secure in its power that it actually does try to eliminate people as a threat or a slowing factor. Since in this scenario, unlike in the "take apart for raw materials" scenario, people dying is the UFAI's intended outcome and not just a side effect, "kill" seems an accurate word.
Roko:
Yes, it is true. I would avoid 'overwrite' or 'fatally reorganize' because people might not get the idea. Better to go with "rip you apart and re-use your constituent atoms for something else".

I like to use the word "eat"; it's short, evocative, and basically accurate. We are edible.

Mass_Driver:
I want a uFAI lolcat that says "I can has ur constituent atomz?" and maybe a "nom nom nom" next to an Earth-sized paper clip.
Nick_Tarleton:
I'd never thought about that, but it sounds very likely, and deserves to be pointed out in more than just this comment.
Thomas:
I don't expect the post-Singularity world to be pretty much an extended today, with scientists in post-labs and post-universities and waitresses in post-pubs. That is a childish assumption.
Kevin:
Come on, where else could I possibly get my post-beer?
LordTC:
http://michaelnielsen.org/blog/three-myths-about-scientific-peer-review/ is a post that I find relevant. Peer review is about low-hanging fruit: the stuff supported by enough evidence already that writing about it can be done easily by sourcing extensive support from prior work.

As for the damage of ignoring correct contrarians, there was a Nobel Prize in economics awarded for a paper on markets with asymmetric information which a reviewer rejected with a comment like "If this is correct then all of economics is wrong". There is also the story of someone who failed to get a PhD for their work despite presenting it on multiple separate occasions, at the last of which Einstein was in the room and said it was correct (and it was).
Blueberry:
You might be thinking of de Broglie. Einstein was called in to review his PhD thesis. Though he did end up getting his PhD (and the Nobel).
RobinZ:
Another near-miss case also preceding peer review was Arrhenius's PhD thesis.
Roko:
I should clarify: my position on the factual questions surrounding the Singularity/FAI is mostly the same as the consensus of the original SIAI guys: Eliezer, Mike Vassar, Carl Shulman. Perhaps I have a slightly larger probability assigned to the "Something outside of our model will happen" category, and I place a slightly longer time lag on any of this stuff happening. And this is after disagreeing significantly with them and admitting that they were right.
timtyler:
Does "Friendly AI and the Singularity" qualify as being "a hypothesis" in the first place? "Friendly AI" seems more like an action plan - and "the Singularity" seems to be a muddled mixture of ideas - some of which are more accurate than others.

I think it's worth emphasizing that ideas aren't "worth investigating" or "not worth investigating" in themselves; different people will have different opportunities to investigate things at different costs, and will have different info and care about the answers to different degrees.

[anonymous]:
True. We have people like Mythbusters and Michael Shermer to debunk certain pseudoscientific claims, for instance. The effort to do that research is worth it, for them. For most of us, it's only worth the effort to watch Mythbusters and read Michael Shermer.

My father is a scientist who works in an area with many crackpots (and many misguided but intelligent non-crackpots). One of his professional duties is to investigate and usually debunk extraordinary claims in his area. It's worth the effort for him -- sometimes there's nobody else to do the job. But most scientists free-ride on his efforts.

We depend on the efforts of these people -- those who are willing to investigate extraordinary or minority claims. We assume they're out there. We assume there's some investigator who has independent credibility. The big problem is -- what if there isn't? If a claim is simply ignored by everyone with independent credibility, and if it's too much trouble for most of us to investigate ourselves, then even rational actors can make very serious mistakes. The policy prescription is to think up ways to ensure that someone, somewhere, is bothering to investigate the kinds of claims that would be important if they were true.
steven0461:
I don't disagree, but I see it as more of a continuum. All else equal, the more people investigating a claim, the better. And more importantly, one careful investigator is worth more than ten superficial investigators (e.g., Shermer on cryonics).

This is the bunk-detection strategy on TakeOnIt:

  1. Collect top experts on either side of an issue, and examine their opinions.
  2. If '1' does not make the answer clear, break the issue down into several sub-issues, and do '1' for each sub-issue.

Examples that you alluded to in your post (I threw in cryonics because that's a contrarian issue often brought up on LW):

Global Warming
Cryonics
Climate Engineering
9-11 Conspiracy Theory
Singularity

In addition, TakeOnIt will actually predict what you should believe using collaborative filtering. The way it works is th…
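The collaborative-filtering idea can be sketched in a few lines of Python. This is only an illustration of the general similarity-weighted-vote technique, not TakeOnIt's actual algorithm or data:

```python
import numpy as np

# Rows are people (index 0 is you), columns are issues.
# Entries: +1 agree, -1 disagree, 0 no recorded opinion.
R = np.array([
    [ 1, -1,  1,  0],   # you; your stance on issue 3 is unknown
    [ 1, -1,  1,  1],   # expert A
    [-1,  1, -1, -1],   # expert B
    [ 1, -1, -1,  1],   # expert C
])

def predict(R, person, issue):
    """Similarity-weighted vote of everyone with a stance on `issue`."""
    pred = weight = 0.0
    for other in range(len(R)):
        if other == person or R[other, issue] == 0:
            continue
        common = (R[person] != 0) & (R[other] != 0)   # issues both have rated
        if not common.any():
            continue
        # Fraction of agreement on common issues, rescaled to [-1, 1].
        sim = 2 * np.mean(R[person, common] == R[other, common]) - 1
        pred += sim * R[other, issue]
        weight += abs(sim)
    return pred / weight if weight else 0.0

print(predict(R, person=0, issue=3))  # positive: predicted to agree on issue 3
```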

JoshuaZ:
I'm unimpressed by this method. First, the procedure as given does more to reinforce pre-existing beliefs and point one to people who will reinforce those beliefs than anything else. Second, the sourcing used as experts is bad or outright misleading.

For example, consider global warming. Wikipedia is listed as an expert source. But Wikipedia has no expertise and is itself an attempt at a neutral summary of experts. Even worse, Conservapedia is used both on the global warming and 9-11 pages. Considering that Conservapedia is Young Earth Creationist and thinks that the idea that Leif Erickson came to the New World is a liberal conspiracy, I don't think any rational individual will consider it a reliable source (and the vast majority of American right-wingers I've ever talked to about this cringe when Conservapedia gets mentioned, so this isn't even my own politics coming into play).

On cryonics we have Benjamin Franklin listed as pro. Now, that's roughly accurate. But it is also clear that he was centuries too early to have anything resembling relevant expertise. Looking at many of the fringe subjects, a large number of the so-called experts who are living today have no intrinsic justification for their expertise (actors are not experts on scientific issues, for example). TakeOnIt seems devoted, if anything, to blurring the nature of expert knowledge to the point where it becomes almost meaningless. The Bayesian Conspiracy would not approve.
BenAlbahari:
TakeOnIt records the opinions of BOTH experts and influencers - not just experts. Perhaps I confused you by not being clear about this in my original comment. In any case, TakeOnIt groups opinions by the expertise of those who hold the opinions. This accentuates - not blurs - the distinction between those who have relevant expertise and those who don't (but who are nonetheless influential). It also puts those who have expertise relevant to the question topic at the top of the page. You seem to be saying readers will easily mistake an expert for an influencer. I'm open to suggestions if you think it could be made clearer than it is.
JoshuaZ:
I don't think they are doing as good a job as you think at separating experts from non-experts. For example, they describe Conservapedia as an "encyclopedia" with no other modifier. Similarly, they describe Deepak Chopra as an "expert on alternative medicine." If they want to make a clear distinction I'd suggest having different color schemes (at a minimum). Overall, to even include some of these people together is simply to give weight to views which should have effectively close to zero weight.
JGWeissman:
If Deepak Chopra is blatantly flagged as a "fake expert", it will alienate people who are initially impressed with his arguments, and they will not participate, and they will not see all the opposing opinions. Color schemes indicating how much the site administrators believe someone to be a real expert would be mind-killing.
JoshuaZ:
Upvoting for making a very valid point. I'm not completely sure, though, that's necessarily the perfect solution. Wikipedia, for example, specifically has a set of very careful rules to handle minority viewpoints and what constitutes a reliable source or relevant expert. But it may be that that sort of thing works better in an encyclopedia format (also, even Wikipedia will quote Deepak on alt-med things, even if we spend a lot of time making clear what the science says).
BenAlbahari:
No no no! It's vital that the opinions of influential people - even if they're completely wrong - are included on TakeOnIt. John Stuart Mill makes my point perfectly: P.S. I updated the tag line for Conservapedia from "Encyclopedia" to "Christian Encyclopedia". Thanks for pointing that out.
[anonymous]:
I've been playing with the site, and from my perspective there are two problems. One is that there's a lot of chaff. The other is that there doesn't seem to be enough activity yet.

If there were a lot of activity, I wouldn't necessarily mind that there are "experts" I don't respect; it would still be extremely useful as a microcosm of the world's beliefs. I do want to know which people the public considers to be "experts." That's a useful service in itself.

Censorship? Not in a political sense, of course. But there are privately owned institutions which have an interest in permitting a diversity of views. Universities, for instance. This is a site whose usefulness depends on it having no governing ideology. Blocking "unreliable" sources isn't really censorship, but it makes the site less good at what it purports to do.
BenAlbahari:
Thanks for the feedback. Do you mean chaff as in "stuff that I personally don't care about" or chaff as in "stuff that anyone would agree is bad"? Yes, the site is still in the bootstrapping phase. Having said that, the site needs to have a better way of displaying recent activity.
[anonymous]:
Stuff that I think is bad, and that I would say "reasonable" people agree is bad -- celebrities as experts, Deepak Chopra, mentalists, and so on. But I don't necessarily think that's a problem for the site. If people really get their information from those sources, then I want to know that.
JoshuaZ:
I'm almost inclined to say that calling Conservapedia a Christian encyclopedia is more of an insult to Christianity than it deserves (theism is very likely incorrect, but Conservapedia's attitude towards the universe is much more separated from reality than that of most Christians).

Also, I don't think that what John Stuart Mill is talking about is the same thing. First, note that I'm not saying one should censor Chopra, merely that he's not worth including for this sort of thing. That's not "silencing" by any reasonable definition. And there are other experts there whom I disagree with but wouldn't put in that category. Thus, for example, in both the cryonics and Singularity questions there are people included whom I disagree with and whom I don't think are at all helpful. Or again consider Benjamin Franklin, whose opinion on cryonics I'm sympathetic with but who just didn't have any knowledge that would justify considering his opinion worthy of weight.
JGWeissman:
It should be noted that TakeOnIt is set up to allow the general public to suggest expert quotes, and with a short track record as a non-spammer, people get promoted to moderator status and can directly add a quote. So some members of TakeOnIt are impressed with Chopra, and it would be counterproductive censorship to say that they are not allowed to add his quotes. What we get in exchange for allowing this is that the general public is helping to build the database of expert opinions, and may even include real experts that we would not have known to look at. Franklin's quote is more about cryonics being good if it were feasible than about whether it is feasible. Ben, do you think it should be moved to this question?
BenAlbahari:
Good call.
JoshuaZ:
I see the argument for it being counterproductive, which I'm tentatively convinced by. But it isn't censorship by most definitions of the term. Saying "you can't say X" is censorship; saying "you can't say X on my website" is not. (Again, I am convinced by the counterproductivity argument, so at this point we seem to be in more or less agreement, if one is going to try to run TakeOnIt in a manner close to the intended general purpose.)

Moving Franklin might make sense. Unfortunately, many of the people discussing cryonics are also talking about its general desirability. The questions seem to be frequently discussed together. Incidentally, note that there's a high correlation between having a moral or philosophical objection to cryonics and being likely to think it won't work. This potentially suggests that there's some belief overkill going on on one or both sides of this argument.
JGWeissman:
There is value in recording the opinions of anyone perceived as an expert by a segment of the general population, as it builds a track record for each supposed expert, so that the statistical analysis can reveal that the opinions of some so-called experts are just noise, and give a result influenced mainly by the real experts. See The Correct Contrarian Cluster.
JoshuaZ:
That might work if we had major track records for people. Unfortunately for a lot of issues that could potentially matter (say the Singularity and Cryonics) we won't have a good idea who was correct for some time. It seems like a better idea to become an expert on a few issues and then see how much a given expert agrees with you in the area of your expertise. If they agree with you, you should be more likely to give credence to them in their claimed areas of expertise.
JGWeissman:
Well, I would like to see more short term predictions on TakeOnIt, where after the event in question, comments are closed, and what really happened is recorded. From this data, we would extrapolate who to believe about the long term predictions.
JoshuaZ:
That might work in some limited fields (economics and technological development being obvious ones). Unfortunately, many experts don't make short-term predictions. In order for this to work one would need to get experts to agree to try to make those predictions. And they have a direct incentive not to do so, since it can be used against them later (well, up to a point: psychics like Sylvia Browne make repeated wrong predictions and their followers don't seem to mind). I give Ray Kurzweil a lot of credit for having the courage to make many relatively short-term predictions (many of which have so far turned out to be wrong, but that's a separate issue).
JGWeissman:
Yes, in some cases, there is no (after the fact) non-controversial set of issues to use to determine how effective an expert is. Which means that I can't convince the general public of how much they should trust the expert, but I can still figure out how much I should trust em by looking at their positions that I can evaluate. There is also the possibility of saying something about such an expert based on correlations with experts whose predictions can be non-controversially evaluated.
simplicio:
From a comment to Bryan Caplan's contra opinion in the cryonics article:

Liked the post. One of the two big questions it's poking at is "how does one judge a hypothesis without researching it?" To do that, one has to come up with heuristics for judging some hypothesis H that correlate well enough with correctness to work as a substitute for actual research. The post already suggests a few:

  • Is evidence presented for H?
  • Do those supporting H share data for repeatability?
  • Is H internally inconsistent?
  • Does H depend on logical fallacies?
  • (Debatable) Is H mainstream?

I'll add a few more:

  • If H is a physical or mathematical hypoth…
Mitchell_Porter:
This isn't the actual epistemic situation. The usual measure of the magnitude of CO2-induced warming is "climate sensitivity" -- increase in temperature per doubling of CO2 -- and its consensus value is 3 degrees. But the physically calculable warming induced directly by CO2 is, in terms of this measure, only 1 degree. Another degree comes from the "water vapor feedback", and the final degree from all the other feedbacks. But the feedback due to clouds, in particular, still has a lot of uncertainty; enough that, at the lower extreme, it would be a negative feedback that could cancel all the other positive feedbacks and leave the net sensitivity at 1 degree.

The best evidence that the net sensitivity is 3 degrees is the ice-age record. The relationship between planetary temperature and CO2 levels there is consistent with that value (and that's after you take into account the natural outgassing of CO2 from a warming ocean). People have tried to extract this value from the modern temperature record too, but it's rendered difficult by uncertainties regarding the magnitude of cooling due to aerosols and the rate at which the ocean warms (this factor dominates how rapidly atmospheric temperature approaches the adjusted equilibrium implied by a changed CO2 level).

The important point to understand is that the full 3-degree sensitivity cannot presently be derived from physical first principles. It is implied by the ice-age paleo record, and is consistent with the contemporary record, with older and sparser paleo data, and with the independently derived range of possible values for the feedbacks. But the uncertainty regarding cloud feedback is still too great to say that we can retrodict this value just from a knowledge of atmospheric physics.
cupholder:
Agreed. Nonetheless, as best I can calculate, Really Existing Global Warming (the warming that has occurred from the 19th century up to now, rather than that predicted in the medium-term future) is of similar order to what one would get from the raw, feedback-less effect of modern human CO2 emissions.

The additional radiative forcing due to increasing the atmospheric CO2 concentration from C0 to C1 is about 5.4 * log(C1/C0) W/m^2. The preindustrial baseline atmospheric CO2 concentration was about 280 ppm, and now it's more like 388 ppm -- plugging in C0 = 280 and C1 = 388 gives a radiative forcing gain of around 1.8 W/m^2 due to more CO2. Without feedback, climate sensitivity is λ = 0.3 K/(W/m^2) -- this is the expected temperature increase for an additional W/m^2 of radiative forcing. Multiplying the 1.8 W/m^2 by λ makes an expected temperature increase of 0.54 K.

Eyeballing the HADCRUT3 global temperature time series, I estimate a rise in the temperature anomaly from about -0.4 K to +0.4 K, a gain of 0.8 K since 1850. The temperature boost of 0.54 K from current CO2 levels takes us most of the way towards that 0.8 K increase. The remaining gap would narrow if we included methane and other greenhouse gases also. Admittedly, we won't have the entire 0.54 K temperature boost just yet, because of course it takes time for temperatures to approach equilibrium, but I wouldn't expect that to take very long because the feedbackless boost is relatively small.
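The arithmetic above is easy to check directly; a minimal sketch using the values quoted in the comment:

```python
import math

C0, C1 = 280.0, 388.0             # ppm: preindustrial vs. modern CO2
dF = 5.4 * math.log(C1 / C0)      # extra radiative forcing: ~1.76 W/m^2
lam = 0.3                         # no-feedback sensitivity, K per (W/m^2)
print(round(dF, 2), round(lam * dF, 2))  # ~1.76 W/m^2 and ~0.53 K of warming
```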
Mitchell_Porter:
This might actually be a nice exercise in choosing between hypotheses. Suppose you had no paleo data or detailed atmospheric-physics knowledge, but you just had to choose between 1 degree and 3 degrees as the value of climate sensitivity, i.e. between the hypothesis that all the feedbacks cancel and the hypothesis that they triple the warming, solely on the basis of (i) that observed 0.8 K increase and (ii) the elementary model of thermal inertia here. You would have to bear in mind that most anthropogenic emissions occurred in recent decades, so we should still be in the "transient response" phase for the additional perturbation they impose...
cupholder:
Now you've handed me a quantitative model, I'm going to indulge my curiosity :-)

I think we can account for this by tweaking equation 4.14 on your linked page. Whoever wrote that page solves it for a constant additional forcing, but there's nothing stopping us rewriting it for a variable forcing:

C dT/dt = Q(t) - T(t)/λ

where T(t) is now the change in temperature from the starting temperature, Q(t) the additional forcing, and I've written the equation in terms of my λ (climate sensitivity) and not theirs (feedback parameter). Solving for T(t),

T(t) = (1/C) * integral from 0 to t of Q(s) * exp(-(t - s)/(Cλ)) ds, plus a free constant times exp(-t/(Cλ)).

If we disregard pre-1850 CO2 forcing and take the year 1850 as t = 0, we can drop the free constant.

Next we need to invent a Q(t) to represent CO2 forcing, based on CO2 concentration records. I spliced together two Antarctic records to get estimates of annual CO2 concentration from 1850 to 2007. A quartic is a good approximation for the concentration. The zero year is 1850. Dividing the quartic by 280 gives the ratio of CO2 at time t to preindustrial CO2. Take the log of that and multiply by 5.35 to get the forcing due to CO2, giving Q(t). Plug that into the T(t) formula and we can plot T(t) as a function of years after 1850.

In the resulting plot, the upper green line is a replication of the calculation I did in my last post -- it's the temperature rise needed to reach equilibrium for the CO2 level at time t, which doesn't account for the time lag needed to reach equilibrium. For t = 160 (the year 2010), the green line suggests a temperature increase of 0.54 K as before. The lower red line is T(t): the temperature rise due to the Q(t) forcing, according to the thermal-inertia model. At t = 160, the red line has increased by only 0.46 K; in this no-feedback model, holding CO2 emissions constant at today's level would leave 0.08 K of warming in the pipeline.

So in this model the time lag causes T(t) to be only 0.46 K, instead of the 0.54 K expected at equilibrium. Still, that's 85% of the full equilibrium warming, and the better part of the 0.8 K increase; this s…
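The convolution above is easy to evaluate numerically. A minimal sketch follows; since the comment's quartic CO2 fit did not survive this copy, the `co2` curve and the heat capacity `C` below are illustrative stand-ins, not cupholder's actual numbers:

```python
import numpy as np

# Zero-feedback thermal-inertia model: C * dT/dt = Q(t) - T(t)/lam, T(0) = 0,
# solved as T(t) = (1/C) * integral_0^t Q(s) * exp(-(t - s)/(C*lam)) ds.
lam = 0.3            # K per (W/m^2): climate sensitivity without feedbacks
C = 10.0             # W*yr/(m^2*K): ASSUMED effective ocean heat capacity
tau = C * lam        # relaxation time, ~3 years

years = np.arange(161, dtype=float)          # t = 0 is the year 1850
co2 = 280.0 + 0.0042 * years**2              # illustrative fit: 280 -> ~388 ppm
Q = 5.35 * np.log(co2 / 280.0)               # CO2 radiative forcing, W/m^2

def trapezoid(y, dx=1.0):
    """Trapezoid-rule integral of annual samples."""
    return dx * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2) if len(y) > 1 else 0.0

T = np.zeros_like(Q)
for i in range(len(years)):
    s = years[: i + 1]
    T[i] = trapezoid(Q[: i + 1] * np.exp(-(years[i] - s) / tau)) / C

print(f"equilibrium warming in 2010: {lam * Q[-1]:.2f} K")  # ~0.52 K
print(f"lagged warming T(160):       {T[-1]:.2f} K")        # ~0.5 K: a bit less
```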

"How much should we be troubled, though, by the fact that most scientists of their disciplines shun them?"

This is not what's actually going on. To quote Eliezer:

"With regard to academia 'showing little interest' in my work - you have a rather idealized view of academia if you think that they descend on every new idea in existence to approve or disapprove it. It takes a tremendous amount of work to get academia to notice something at all - you have to publish article after article, write commentaries on other people's work from within your re... (read more)

There isn't any universal distinguishing rule, but in general you want to ask: would a world where this claim were false look just like our own world? A couple of useful specific guidelines:

  1. Is this something people would be disposed to believe even if it were false?

  2. Is this something that would be impossible to disprove even if it were false?

Flying saucers, psychic powers, and the Singularity are good examples here: suppose we lived in a world where they were not real; what would it look like? Answer: people would still believe in them, because we are disposed…

JoshuaZ:
I'm not sure that your comparison of the Singularity to these others works. Consider for example practical fusion reactors or space elevators. Both fit well with your rules: people would like to believe they are possible, and the world would look very similar to what it looks like today even if they aren't. There seems to be a major distinction between ideas like the Singularity or space elevators and claims like alien saucers or psychic powers: the first category has plausible mechanisms that aren't intrinsically disruptive to major metapatterns about how the world functions. In contrast, psychic powers go against much of our understanding of how the world functions (they do bad things to evolution and basic laws of physics, and amount to a claim of irreducible mental constructs, to name just three of the serious problems). As a non-Singularitarian, I have to say that I find this sort of comparison deeply unpersuasive.
rwallace:
Oh, the two guidelines I suggested certainly aren't a complete algorithm -- that's why I called them guidelines, not rules :-) Maybe I should list a third (or first) guideline:

3. Is this claim extraordinary; does it contradict what we think we know about how the world works?

The Singularity definitely falls into this category; the idea that you can handwave that sort of capability into existence is contrary to everything we know about science and engineering -- nothing useful happens for free, and every optimization needs real-world feedback -- and when you look at the details of the Singularitarian arguments, there are an awful lot of gaps of the "and then a miracle occurs" variety.

Fusion reactors are fundamentally plausible because they match both our knowledge of nuclear physics and our experience building better engines. Interestingly, I've seen it credibly suggested that fusion reactors of the kind we are currently trying to build won't work out after all, because we are trying to make them too small, so the heat radiates away too quickly, so it will cost more to run the reactor than the value of the energy generated, and we need to either change our plans or make the reactors a lot bigger. But even if true, that's not something that could possibly have been predicted without in-depth study of the subject matter.
JoshuaZ:
We may need to break down which form of the Singularity we are discussing; see Eliezer's list. I agree that a pure optimization process with no connection to the real world seems unlikely. But if, for example, general AI came along at about the same time as access to marginally efficient nanotech, that allows a plausible method of optimization. Or, to use another example, suppose we construct a reasonably smart general AI and it turns out that it actually requires very little processing power compared to what we have available at the time. Either of these allows for very efficient optimization processes. Nothing in the Singularity notion goes against the fundamental picture of the world we've developed in the way that, say, psychic powers would. If I had to make a continuum, I'd put them in order of plausibility something like: [psychic powers, alien UFOs, Kurzweil-type Singularity, Yudkowskian Singularity, practical fusion power, space elevators], and there's a major gap between alien UFOs and the K-type Singularity. I'm not sure what would plausibly go in between them to narrow the gap. Maybe something like a Penrose version of consciousness?
rwallace:
Right, in truth none of the three versions really hangs together when you look at the arguments, though they are listed in decreasing order of plausibility.

"Our intuitions about change are linear" -- no they aren't; we attach equal significance to equal percentage changes, so our intuition expects steady exponential change.

"Therefore we can predict with fair precision when new technologies will arrive, and when they will cross key thresholds, like the creation of Artificial Intelligence." -- artificial intelligence, along with flying cars, moon bases and a cure for cancer, refutes this idea by its continued nonexistence.

"To know what a superhuman intelligence would do, you would have to be at least that smart yourself." -- my brother's cat can predict that when it meows, he will put out food for it. He cannot predict whether the cat will eat the food.

"Thus the future after the creation of smarter-than-human intelligence is absolutely unpredictable." -- the future has always been unpredictable, so by that definition we have always been in the Singularity.

"each intelligence improvement triggering an average of >1.000 further improvements of similar magnitude" -- knowing whether a change is actually an improvement takes more than just thinking about it.

"Technological progress drops into the characteristic timescale of transistors (or super-transistors) rather than human neurons." -- technological progress is much slower than the characteristic timescale of neurons.

That doesn't mean the Singularity can't exist by some other definition -- "For example, the old Extropian FAQ used to define the 'Singularity' as the Inflection Point, 'the time when technological development will be at its fastest' and just before it starts slowing down." -- but as Eliezer also points out, this definition does not imply any particular conclusions.

The Penrose version of consciousness is an interesting case. It is clearly something Penrose would be disposed to believe even if it we…
rwallace:
Thinking about it a bit more, I wonder if my greater confidence in dismissing the Singularity than Penrose's theory of consciousness as bunk, is influenced by the fact that the former is in my area of expertise and the latter is not. Obviously the more we know about something, the easier it is to be confident, but the original topic was possible methods of making summary judgment without detailed knowledge (given the impossibility of knowing all the details of everything). Are there any physicists or neuroscientists in the audience who would be more confident in dismissing Penrose's theory of consciousness?
Mitchell_Porter:
I spent a year as a guest of Penrose's biologist collaborator, Stuart Hameroff, at the University of Arizona, and my one peer-reviewed publication dates from that time, so I can tell you more than you want to know about this subject. :-)

First you should understand the order of events. Penrose published his book arguing that there should be a trans-Turing quantum-gravity process happening in the brain. Then Hameroff wrote to him and said: I bet it's happening in the microtubules. Thus was born the version of the idea that most people hear about.

Penrose's original argument combines an old interpretation of Gödel's theorem with his own speculations about quantum gravity. The first part goes like this: For any mechanized form of mathematical reasoning, there are, necessarily, mathematical truths which it cannot prove. But we can know these propositions to be true. Therefore, human cognition must have capabilities which are not Turing-computable.

In the second part, Penrose observes that the whole of nongravitational physics is Turing-computable, but that gravitational physics is at least potentially not, because it may involve quantum sums over arbitrary 4-manifolds, and topological equivalence of 4-manifolds is not Turing-decidable. He also introduces one of his own physical ideas: Hawking evaporation of black holes appears to involve destruction of quantum information, so he proposes that conservation of probability flow is maintained by nondeterministic wavefunction collapse, which creates quantum information. He also has a technical argument against the possibility of superpositions of different geometries.

So, if there are mesoscopic quantum superpositions in the brain whose components evolve towards mass distributions (and hence local space-time geometries) sufficiently different from each other that the superposition must break down, then there is an opportunity for trans-Turing physical dynamics to play a role in human cognition. The physical argument is…
rwallace:
Excellent explanation, thanks! So if I'm understanding correctly, while there are severe problems with Penrose's theory, it's not in the category of things to be casually dismissed as bunk; experts have found it an interesting line of thought to investigate, at least.
JoshuaZ:
You may be putting too much emphasis on what people would be predisposed to believe. While when evaluating our own probability estimates we should correct for our emotional predispositions, that in no way says anything substantive about whether a given claim is correct or not. Tendencies to distort my map in no way impact what the territory actually looks like.
rwallace:
Sure, at the end of the day there is no reliable way to tell truth from falsehood except by thorough scientific investigation. But the topic at hand is whether, in the absence of the time or other resources to investigate everything, there are guidelines that will do better than random chance in telling us what's promising enough to be worth how much investigation. While the heuristic about predisposition to believe falls far short of certainty, I put it to you that it is significantly better than random chance -- that in the absence of any other way to distinguish true claims from false ones, you would do quite a bit better by using that heuristic than by flipping a coin.

We need some fraction of respected scientists -- even a small fraction -- who are crazy enough to engage even with potentially crackpot theories, if only to debunk them. But when they do that, don't they risk being considered crackpots themselves? This is some version of "Tolerate tolerance." If you refuse to trust anybody who even considers seriously a crackpot theory, then you lose the basis on which you reject that crackpot theory.

(Original post.)

More generally, one can't optimize a process of getting some kind of answers by also usi…

First idea: check if the proposer uses the techniques of rationality and science. Does he support claims with evidence? Does he share data and invite others to reproduce his experiments? Are there internal inconsistencies and logical fallacies in his claim? Does he appeal to dogma or authority? If there are features in the hypothesis itself that mark it as pseudoscience, then it's safely dismissed; no need to look further.

More:

Does he use math or formal logic when a claim demands it? Does he accuse others of suppressing his views?

The Crackpot Index is helpful, though it is physics-centric.

djcb:
I always like the Crackpot Index, but I guess it should be balanced with a list of scientists who would probably be considered crackpots because they are a bit 'weird' -- say, Newton or Tesla. Of course there are many more crackpots than there are Newtons or Teslas, but I suppose it's good not to dismiss things too quickly when they are radical and proposed by somewhat special individuals.
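For concreteness, here is how a points-based filter like the Crackpot Index works mechanically. The items and weights below are illustrative paraphrases, not Baez's actual list:

```python
# Toy points-based crackpot filter: start from a small negative credit and add
# points per warning sign. Items and weights are illustrative, not the real index.
WARNING_SIGNS = {
    "proposes a revolutionary theory without any mathematics": 10,
    "claims the mainstream is suppressing the work": 20,
    "compares self to Einstein, Newton, or Galileo": 10,
}

def crackpot_score(observed):
    return -5 + sum(WARNING_SIGNS[sign] for sign in observed)

print(crackpot_score(["claims the mainstream is suppressing the work"]))  # 15
```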

So a claim is bunk if and only if:

  1. Those with the right kind of difficult-to-access information or who trust the relevant "expert" class will assign it an extremely low probability.

  2. Those without that information who either don't know or don't trust the relevant expert class may assign it a more reasonable probability or even believe it.

  3. The claim is false.

  4. (?) The claim is non-trivial: if true, it would have wide-reaching implications.

So claims to have a perpetual motion machine are bunk because to understand how unlikely they are you eit…

Vladimir_M:
Jack: You ignore the possibility of crackpots who are not contrarians, but instead well established or even dominant in the mainstream. You have a very rosy view of academia if you believe that this phenomenon is entirely nonexistent nowadays!

That said, I'd say the main defining criterion of crackpots -- as opposed to ordinary mistaken folks -- is that their emotions have got the better of them, rendering them incapable of further rational argument. A true crackpot views the prospect of changing his mind as treachery to his cause, similar to a soldier scorning the possibility of surrender after suffering years of pain, hardship, and danger in a war.

Trouble is, protracted intellectual battles in which contrarians are exposed to hostility and ridicule often push them beyond the edge of crackpottery at some point. It's a pity, because smart contrarians, even when mistaken about their main point, can often reveal serious weaknesses in the mainstream view. But then this is often why they are met with such hostility in the first place, especially in fields with political/ideological implications.
Jack:
Er. I think there are plenty of people in academia who have very wrong beliefs with poor justifications. But I took our working definition of crackpot and bunk to exclude such people. We're asking about a particular kind of being wrong: being wrong and unpopular. The question is whether there is something beyond that to being a crackpot. Must you also, say, engage in pseudoscience, be non-falsifiable, or use unsavory tactics? Obviously we don't want to debate definitions, but I think the claim you picked out is true given the way we've been using the words in this thread. Your point about emotions is a good one.
Vladimir_M:
Fair enough, if we define "crackpot" as necessarily unpopular. However, what primarily comes to my mind when I hear this word is the warlike emotional state that renders one incapable of changing one's mind, which I described in the above comment. If people like that manage to grab positions of power in academia and don the cloak of respectability, I still think they share more relevant similarity with various scorned crackpot contrarians than with people whose mainstream respectability is well earned.

I think a good test for a crackpot vs. an ordinary mistaken contrarian would be how the individual would behave if the power relations were suddenly reversed and the mainstream and contrarian views changed places. A crackpot would not hesitate to use his power to extirpate the views he dislikes by all means available, whereas a non-crackpot contrarian would show at least some respect for his (now contrarian) opponents.
[anonymous]:
"It seems to me that even if Eliezer Yudkowsky is really wrong about a lot that he believes (and this seems possible to me) he is nonetheless not a crackpot. But is there more to this than 'crackpots are incorrect contrarians who I don't like or have never agreed with'? Is there an objective distinction? Perhaps because he is ignored rather than rejected?" Also a question I don't know the answer to. I wrote this post partly in response to my worries about Eliezer (and certain other autodidacts) whom I perceive not to be crackpots. Does that perception weigh in their favor, or only confirm me to be a fellow crackpot? I'm still trying to figure out what a crackpot is.
orthonormal:
If you find yourself worrying whether a certain label applies to you, rather than wondering whether a specific set of claims is more or less likely to be true, be careful; social fears can easily derail the rational evaluation of evidence. The question "What is bunk?" seems nigh unanswerable, a search for a dictionary definition to fill in a hanging node. Thinking in terms of "what class of claims can I dismiss as too unlikely on the face of it, and what claims have a high enough chance of truth that they're worth investigating?" is more realistic, IMO.
[anonymous]:
The Crackpot Index is a good place to start.

Bryan Caplan spends time refuting Austrians; he thinks Austrian economics is a mistake that wastes the time of a lot of good free-market economists.

CronoDAS:
Paul Krugman has also made a couple of short blog posts on the subject.

There isn't as much of a free rider problem as you make it out to be. Different people can devote their time to investigating different subjects, so we all benefit from the collective effort.

Investigating unlikely claims is also healthy in general because it hones our reasoning capabilities, so people investigating them may get some direct benefit.

I'm not sure I like the category of "bunk"; it seems overly broad and not clearly defined. Your definition "there are claims so cracked that they aren't worth investiga…

[anonymous]:
You're right, it is mostly a question of minority views, but I'll defend my use of "bunk" a little bit. Not every bunk view is a minority view; the majority of Americans believe in ghosts, for example. What makes me initially estimate it unlikely that ghosts exist is not that it's a minority opinion (it's not) but that it contradicts the entire framework I have for understanding the physical world. I start off, therefore, with a really low prior for ghosts -- so low, in fact, that it's potentially not worth the effort of further investigation.

In the case of ghosts it doesn't take very much effort to investigate enough to toss out the claim; ghosts are an easy case. Other topics, though, take a lot of effort to investigate, and my initial low prior isn't based on much evidence. Misclassifying them as bunk can be costly. But classifying nothing as bunk would break the bank, in attention and effort terms. Bunk is anything which, for whatever reason (being a minority view, requiring a large realignment of our worldview, etc.), is too unlikely to be worth checking.

And the problem of bunk is this: if it isn't even worth it to do a preliminary check, how do you know how unlikely it is? What I worry about is that, given that investigation takes effort, and given that we decide whether or not to investigate based on a prior estimate of how likely a claim is, there are potentially claims that we're disbelieving for no good reason. Perhaps individuals with limited time and energy are doomed to disbelieve some claims for no good reason.
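One way to make the "break the bank" tradeoff explicit is a crude expected-value gate. This is a sketch only; the numbers and the function name are my own, not anything standard:

    def worth_investigating(prior, value_if_true, cost_of_investigation):
        """Crude expected-value gate for deciding whether to check a claim.

        prior: your current probability the claim is true.
        value_if_true: what learning it's true would be worth to you.
        cost_of_investigation: the time/effort of checking, in the same units.
        """
        return prior * value_if_true > cost_of_investigation

    # Ghosts: tiny prior. With these made-up numbers the gate says no.
    print(worth_investigating(prior=1e-6, value_if_true=1e3,
                              cost_of_investigation=1))    # False
    # A low-but-not-tiny prior with huge stakes passes the gate.
    print(worth_investigating(prior=0.01, value_if_true=1e6,
                              cost_of_investigation=100))  # True

Note that the gate itself embodies the worry above: a claim filtered out by a bad prior never gets the investigation that would have corrected that prior.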

or if there are electrical engineers and computer scientists who spend time being Singularity skeptics.

Electrical engineering is not the appropriate discipline, and neither is most of computer science. AI/cognitive science and philosophy are the closest.

Appropriate experts to "debunk" the singularity would be analytic philosophers such as David Chalmers, or AI/cognitive science people like Josh Tenenbaum, Stuart Russell, Peter Norvig, etc.

alyssavance:
David Chalmers, by the way, has come out pretty strongly in support of us. See The Singularity: A Philosophical Analysis (http://consc.net/papers/singularity.pdf).
timtyler:
Peter Norvig's 2p was in: "Peter Norvig - Singularity Institute Interview Series" http://video.google.com/videoplay?docid=-6754621605046052935#
[anonymous]:
Also, thanks, I didn't know about that. My mistake.

In other words, is it a scientific or a pseudoscientific hypothesis?

Surprisingly, I don't think we've ever gotten deep into demarcation issues here. Anyone want to attempt demarcation criteria? Is that even a worthwhile task?

One word: attachment.

Claims like, "The singularity will occur within this century," do not have attached implications, i.e. there aren't any particular facts we would would expect to be able to currently observe if they were true. Things we dismiss as bunk we either have evidence that directly contradicts them, (e.g. "The Earth is 6000 years old" is directly contradicted by evidence) or we lack evidence that would expect to observe with extremely high probability were they true (e.g alien abductions - it's rather bizarre that aliens wou... (read more)
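Put in Bayesian terms, the "attachment" point is about expected evidence: if a hypothesis strongly predicts observations that we then fail to make, its posterior collapses, while a claim with no attached implications licenses no update at all. A minimal sketch, with made-up numbers:

    def posterior_after_missing_evidence(prior, p_evidence_if_true,
                                         p_evidence_if_false):
        """Update a prior, via Bayes' rule, on the news that predicted
        evidence was NOT observed."""
        p_missing_if_true = 1 - p_evidence_if_true
        p_missing_if_false = 1 - p_evidence_if_false
        numerator = prior * p_missing_if_true
        return numerator / (numerator + (1 - prior) * p_missing_if_false)

    # If widespread alien abductions would almost certainly leave hard
    # evidence (0.99, a made-up number) and we see none, even a 10% prior
    # collapses to about 0.001:
    print(posterior_after_missing_evidence(0.10, 0.99, 0.05))  # ~0.0012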

(e.g alien abductions - it's rather bizarre that aliens would do such specific things and somehow invariably avoid large demographics of society. ...

When I abduct humans, I abduct specifically those who are known to be liars, insane, or seeking attention.

Works wonders for the problem of witnesses.

Before anyone asks: rectal probing has extensive applications in paperclip manufacturing.

Distinguishing an innovator from a crackpot is vital in fields where there are both innovators and crackpots.

You just can't do that, at least not without some a posteriori empirical data about the innovation in question. The more something is an innovation, the less you can know about it in advance. And the less something is a novum, the better you can judge it.

JoshuaZ:
You can certainly do it to some extent. Thus, for example, just because there are ongoing innovations in physics doesn't mean I can't safely dismiss perpetual motion claims. And while there's constant research in my own field (number theory), I can dismiss a lot of claims by crackpots of proofs of major theorems even though there's ongoing research.

Moreover, people in some fields are able to evaluate claims as having very low probability even though they are technically possible given what we have today. For example, I have a friend who is a physicist who considers it extremely unlikely that we will ever have room-temperature superconductors. If some random individual came up to him claiming to have a way of constructing them, he'd be completely justified in assigning this a low confidence. I don't know if you'd label that evidence as a posteriori or not, given that he has zero data about the individual claim, just the type of claim in general.
Thomas:
Now, since you mentioned your field: I have a crackpot idea to evolve a divisor of a big number. How many points on the crackpot scale from 0 to 99 have I earned with this? Zero means no quacking at all, while 80 is something like "I have a UFO in the basement, and a private zoo with the captured aliens." I can't imagine 99.
JoshuaZ:
I'm not sure. I'd say it would depend on whether you've got an actual procedure to do it. If yes, pretty close to 0. If not, maybe around 40 or so.

Although the term "evolve" isn't used, there are some procedures that try to do similar things. Consider, for example, primitive roots. A primitive root modulo a prime p is an integer g such that the powers g^k run through every possible non-zero remainder mod p. Thus, for example, 2 is a primitive root modulo 5, since 2^1 = 2 (mod 5), 2^2 = 4 (mod 5), 2^3 = 8 = 3 (mod 5), and 2^4 = 16 = 1 (mod 5), so 1, 2, 3, and 4 are all accounted for. 2 is not a primitive root mod 7, since its powers yield only the remainders 1, 2, and 4. (Most people here probably already know about primitive roots, but it seemed like a good idea to go over the basics for readers who might not. Also, my assumption that most people will know may be some form of projection, assuming a much higher degree of knowledge about my field than can reasonably be expected.)

Now, it turns out that number theorists care a lot about primitive roots. Aside from intrinsic mathematical interest, they turn out to be useful in a number of practical algorithms, such as the Diffie-Hellman algorithm, a simple-to-implement key exchange procedure used in cryptography. It turns out that every prime has a primitive root (a non-obvious fact first proved by Gauss), but for a given prime, finding a primitive root is tough in general. However, some of the procedures used to find primitive roots work by picking a set of random numbers, checking whether any is a primitive root, and if not, combining them in a certain way to get a number whose powers run through more remainders. One can iterate this process to eventually get a primitive root. In some sense, this is evolving an answer to the problem, although that terminology would never be used. And there are procedures to find factors which rely on not-so-far-off procedures (although calling them evolution would be more of a stretch). So the rough idea isn't i…
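For concreteness, here is a brute-force version of the primitive-root check just described; a sketch, using Python's built-in three-argument pow for modular exponentiation:

    def is_primitive_root(g, p):
        """Check whether g is a primitive root modulo the prime p:
        the powers g^1, ..., g^(p-1) must hit every nonzero residue mod p."""
        residues = {pow(g, k, p) for k in range(1, p)}
        return residues == set(range(1, p))

    print(is_primitive_root(2, 5))  # True: powers of 2 mod 5 are 2, 4, 3, 1
    print(is_primitive_root(2, 7))  # False: powers of 2 mod 7 give only 2, 4, 1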
Baughn:
Umh... twenty? You'd be applying a weak optimization process to the problem instead of using your built-in, much stronger one, and hoping that its different set of biases will let it hit on a useful algorithm that you yourself wouldn't. Intuitively, math-space is too big and twisted for evolution to work, and it'd suffer horribly from getting stuck on local maxima. I don't know this for certain, however, and even if you fail you'll still have learned something.
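For what it's worth, the idea is easy enough to sketch that you can watch it get stuck. Below is a toy hill-climbing version of my own construction, not a known algorithm: candidates are integers in [2, n - 1], and fitness rewards a small remainder n % d. For odd n, d = 2 already achieves remainder 1, so the population tends to collapse around tiny non-divisor candidates; that plateau of remainder-1 values is the local-maxima problem in miniature:

    import random

    def evolve_divisor(n, generations=10_000, pop_size=50):
        """Toy 'evolve a divisor' search: select candidates with small
        remainder n % d, then mutate the survivors. Illustrative only."""
        population = [random.randrange(2, n) for _ in range(pop_size)]
        for _ in range(generations):
            # Selection: keep the half with the smallest remainder.
            population.sort(key=lambda d: n % d)
            if n % population[0] == 0:
                return population[0]  # found an exact divisor
            survivors = population[: pop_size // 2]
            # Mutation: perturb survivors to refill the population.
            population = survivors + [
                max(2, min(n - 1, d + random.randint(-3, 3))) for d in survivors
            ]
        return None  # gave up

    print(evolve_divisor(91))    # often finds 7 or 13 on a small semiprime
    print(evolve_divisor(8633))  # 89 * 97; usually collapses onto remainder-1
                                 # candidates and prints None

The failure mode is the point: selection pressure pulls the population toward the many non-divisors with remainder 1, exactly the deceptive landscape described above.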
Thomas:
At least not always. At least.
Thomas:
Yes, but a perpetual motion machine would be an innovation par excellence, wouldn't it? Especially for you and me and everybody else, who are almost certain it's not possible. Yes, again. But whatever is quite familiar to you, whatever you can easily grasp, is not a big innovation for you. Maybe important, but not that innovative; you have thought similar thoughts already. I tend to agree with him. Anyway, superconductivity would be a very important but not a very innovative thing, unless based on some completely unexpected principles. Then it would be innovative too.
JoshuaZ:
Could you expand on what you mean by innovative then? How do you define something as innovative?
Thomas:
Done in a new way. Unprecedented and mainly unexpected. That doesn't mean it is very important, only that it's a surprise for almost everyone. http://wordnetweb.princeton.edu/perl/webwn?s=innovativeness Check!
JoshuaZ:
I'm still not clear on this definition as it applies to what the top-level post discussed. The examples in the top-level post are all ideas that aren't unprecedented; many of them have been around for a very long time. So talking only about ideas which are unprecedented and mainly unexpected seems unhelpful. Also, I'm not sure what constitutes "unprecedented" in this context.