In praise of gullibility?

23 ahbwramc 18 June 2015 04:52AM

I was recently re-reading a piece by Yvain/Scott Alexander called Epistemic Learned Helplessness. It's a very insightful post, as is typical for Scott, and I recommend giving it a read if you haven't already. In it he writes:

When I was young I used to read pseudohistory books; Immanuel Velikovsky's Ages in Chaos is a good example of the best this genre has to offer. I read it and it seemed so obviously correct, so perfect, that I could barely bring myself to bother to search out rebuttals.

And then I read the rebuttals, and they were so obviously correct, so devastating, that I couldn't believe I had ever been so dumb as to believe Velikovsky.

And then I read the rebuttals to the rebuttals, and they were so obviously correct that I felt silly for ever doubting.

And so on for several more iterations, until the labyrinth of doubt seemed inescapable.

He goes on to conclude that the skill of taking ideas seriously - often considered one of the most important traits a rationalist can have - is a dangerous one. After all, it's very easy for arguments to sound convincing even when they're not, and if you're too easily swayed by argument you can end up with some very absurd beliefs (like that Venus is a comet, say).

This post really resonated with me. I've had several experiences similar to what Scott describes, of being trapped between two debaters who both had a convincingness that exceeded my ability to discern truth. And my reaction in those situations was similar to his: eventually, after going through the endless chain of rebuttals and counter-rebuttals, changing my mind at each turn, I was forced to throw up my hands and admit that I probably wasn't going to be able to determine the truth of the matter - at least, not without spending a lot more time investigating the different claims than I was willing to. And so in many cases I ended up adopting a sort of semi-principled stance of agnosticism: unless it was a really really important question (in which case I was sort of obligated to do the hard work of investigating the matter to actually figure out the truth), I would just say I don't know when asked for my opinion.

[Non-exhaustive list of areas in which I am currently epistemically helpless: geopolitics (in particular the Israel/Palestine situation), anthropics, nutrition science, population ethics]

All of which is to say: I think Scott is basically right here, in many cases we shouldn't have too strong of an opinion on complicated matters. But when I re-read the piece recently I was struck by the fact that his whole argument could be summed up much more succinctly (albeit much less precisely) as:

"Don't be gullible."

Huh. Sounds a lot more obvious that way.

Now, don't get me wrong: this is still good advice. I think people should endeavour to not be gullible if at all possible. But it makes you wonder: why did Scott feel the need to write a post denouncing gullibility? After all, most people kind of already think being gullible is bad - who exactly is he arguing against here?

Well, recall that he wrote the post in response to the notion that people should believe arguments and take ideas seriously. These sound like good, LW-approved ideas, but note that unless you're already exceptionally smart or exceptionally well-informed, believing arguments and taking ideas seriously is tantamount to...well, to being gullible. In fact, you could probably think of gullibility as a kind of extreme and pathological form of lightness; a willingness to be swept away by the winds of evidence, no matter how strong (or weak) they may be.

There seems to be some tension here. On the one hand we have an intuitive belief that gullibility is bad; that the proper response to any new claim should be skepticism. But on the other hand we also have some epistemic norms here at LW that are - well, maybe they don't endorse being gullible, but they don't exactly not endorse it either. I'd say the LW memeplex is at least mildly friendly towards the notion that one should believe conclusions that come from convincing-sounding arguments, even if they seem absurd. A core tenet of LW is that we change our mind too little, not too much, and we're certainly all in favour of lightness as a virtue.

Anyway, I thought about this tension for a while and came to the conclusion that I had probably just lost sight of my purpose. The goal of (epistemic) rationality isn't to not be gullible or not be skeptical - the goal is to form correct beliefs, full stop. Terms like gullibility and skepticism are useful to the extent that people tend to be systematically overly accepting or dismissive of new arguments - individual beliefs themselves are simply either right or wrong. So, for example, if we do studies and find out that people tend to accept new ideas too easily on average, then we can write posts explaining why we should all be less gullible, and give tips on how to accomplish this. And if on the other hand it turns out that people actually accept far too few new ideas on average, then we can start talking about how we're all much too skeptical and how we can combat that. But in the end, in terms of becoming less wrong, there's no sense in which gullibility would be intrinsically better or worse than skepticism - they're both just words we use to describe deviations from the ideal, which is accepting only true ideas and rejecting only false ones.

This answer basically wrapped the matter up to my satisfaction, and resolved the sense of tension I was feeling. But afterwards I was left with an additional interesting thought: might gullibility be, if not a desirable end point, then an easier starting point on the path to rationality?

That is: no one should aspire to be gullible, obviously. That would be aspiring towards imperfection. But if you were setting out on a journey to become more rational, and you were forced to choose between starting off too gullible or too skeptical, could gullibility be an easier initial condition?

I think it might be. It strikes me that if you start off too gullible you begin with an important skill: you already know how to change your mind. In fact, changing your mind is in some ways your default setting if you're gullible. And considering that like half the freakin sequences were devoted to learning how to actually change your mind, starting off with some practice in that department could be a very good thing.

I consider myself to be...well, maybe not more gullible than average in absolute terms - I don't get sucked into pyramid schemes or send money to Nigerian princes or anything like that. But I'm probably more gullible than average for my intelligence level. There's an old discussion post I wrote a few years back that serves as a perfect demonstration of this (I won't link to it out of embarrassment, but I'm sure you could find it if you looked). And again, this isn't a good thing - to the extent that I'm overly gullible, I aspire to become less gullible (Tsuyoku Naritai!). I'm not trying to excuse any of my past behaviour. But when I look back on my still-ongoing journey towards rationality, I can see that my ability to abandon old ideas at the (relative) drop of a hat has been tremendously useful so far, and I do attribute that ability in part to years of practice at...well, at believing things that people told me, and sometimes gullibly believing things that people told me. Call it epistemic deferentiality, or something - the tacit belief that other people know better than you (especially if they're speaking confidently) and that you should listen to them. It's certainly not a character trait you're going to want to keep as a rationalist, and I'm still trying to do what I can to get rid of it - but as a starting point? You could do worse, I think.

Now, I don't pretend that the above is anything more than a plausibility argument, and maybe not a strong one at that. For one, I'm not sure how well this idea carves reality at its joints - after all, gullibility isn't quite the same thing as lightness, even if they're closely related. For another, if the above were true, you would probably expect LWers to be more gullible than average. But that doesn't seem quite right - while LW is admirably willing to engage with new ideas, no matter how absurd they might seem, the default attitude towards a new idea on this site is still one of intense skepticism. Post something half-baked on LW and you will be torn to shreds. Which is great, of course, and I wouldn't have it any other way - but it doesn't really sound like the behaviour of a website full of gullible people.

(Of course, on the other hand it could be that LWers really are more gullible than average, but they're just smart enough to compensate for it)

Anyway, I'm not sure what to make of this idea, but it seemed interesting and worth a discussion post at least. I'm curious to hear what people think: does any of the above ring true to you? How helpful do you think gullibility is, if it is at all? Can you be "light" without being gullible? And for the sake of collecting information: do you consider yourself to be more or less gullible than average for someone of your intelligence level?

Philosophical differences

18 ahbwramc 13 June 2015 01:16AM

[Many people have been complaining about the lack of new content on LessWrong lately, so I thought I'd cross-post my latest blog post here in discussion. Feel free to critique the content as much as you like, but please do keep in mind that I wrote this for my personal blog and not with LW in mind specifically, so some parts might not be up to LW standards, whereas others might be obvious to everyone here. In other words...well, be gentle]

---------------------------

You know what’s scarier than having enemy soldiers at your border?

Having sleeper agents within your borders.

Enemy soldiers are malevolent, but they are at least visibly malevolent. You can see what they’re doing; you can fight back against them or set up defenses to stop them. Sleeper agents on the other hand are malevolent and invisible. They are a threat and you don’t know that they’re a threat. So when a sleeper agent decides that it’s time to wake up and smell the gunpowder, not only will you be unable to stop them, but they’ll be in a position to do far more damage than a lone soldier ever could. A single well-placed sleeper agent can take down an entire power grid, or bring a key supply route to a grinding halt, or – in the worst case – kill thousands with an act of terrorism, all without the slightest warning.

Okay, so imagine that your country is in wartime, and that a small group of vigilant citizens has uncovered an enemy sleeper cell in your city. They’ve shown you convincing evidence for the existence of the cell, and demonstrated that the cell is actively planning to commit some large-scale act of violence – perhaps not imminently, but certainly in the near-to-mid-future. Worse, the cell seems to have even more nefarious plots in the offing, possibly involving nuclear or biological weapons.

Now imagine that when you go to investigate further, you find to your surprise and frustration that no one seems to be particularly concerned about any of this. Oh sure, they acknowledge that in theory a sleeper cell could do some damage, and that the whole matter is probably worthy of further study. But by and large they just hear you out and then shrug and go about their day. And when you, alarmed, point out that this is not just a theory – that you have proof that a real sleeper cell is actually operating and making plans right now – they still remain remarkably blase. You show them the evidence, but they either don’t find it convincing, or simply misunderstand it at a very basic level (“A wiretap? But sleeper agents use cellphones, and cellphones are wireless!”). Some people listen but dismiss the idea out of hand, claiming that sleeper cell attacks are “something that only happens in the movies”. Strangest of all, at least to your mind, are the people who acknowledge that the evidence is convincing, but say they still aren’t concerned because the cell isn’t planning to commit any acts of violence imminently, and therefore won’t be a threat for a while. In the end, all of your attempts to raise the alarm are to no avail, and you’re left feeling kind of doubly scared – scared first because you know the sleeper cell is out there, plotting some heinous act, and scared second because you know you won’t be able to convince anyone of that fact before it’s too late to do anything about it.

This is roughly how I feel about AI risk.

You see, I think artificial intelligence is probably the most significant existential threat facing humanity right now. This, to put it mildly, is something of a fringe position in most intellectual circles (although that’s becoming less and less true as time goes on), and I’ll grant that it sounds kind of absurd. But regardless of whether or not you think I’m right to be scared of AI, you can imagine how the fact that AI risk is really hard to explain would make me even more scared about it. Threats like nuclear war or an asteroid impact, while terrifying, at least have the virtue of being simple to understand – it’s not exactly hard to sell people on the notion that a 2km hunk of rock colliding with the planet might be a bad thing. As a result people are aware of these threats and take them (sort of) seriously, and various organizations are (sort of) taking steps to stop them.

AI is different, though. AI is more like the sleeper agents I described above – frighteningly invisible. The idea that AI could be a significant risk is not really on many people’s radar at the moment, and worse, it’s an idea that resists attempts to put it on more people’s radar, because it’s so bloody confusing a topic even at the best of times. Our civilization is effectively blind to this threat, and meanwhile AI research is making progress all the time. We’re on the Titanic steaming through the North Atlantic, unaware that there’s an iceberg out there with our name on it – and the captain is ordering full-speed ahead.

(That’s right, not one but two ominous metaphors. Can you see that I’m serious?)

But I’m getting ahead of myself. I should probably back up a bit and explain where I’m coming from.

Artificial intelligence has been in the news lately. In particular, various big names like Elon Musk, Bill Gates, and Stephen Hawking have all been sounding the alarm in regards to AI, describing it as the greatest threat that our species faces in the 21st century. They (and others) think it could spell the end of humanity – Musk said, “If I had to guess what our biggest existential threat is, it’s probably [AI]”, and Gates said, “I…don’t understand why some people are not concerned [about AI]”.

Of course, others are not so convinced – machine learning expert Andrew Ng said that “I don’t work on not turning AI evil today for the same reason I don’t worry about the problem of overpopulation on the planet Mars”.

In this case I happen to agree with the Musks and Gates of the world – I think AI is a tremendous threat, and that we need to focus much of our attention on it in the future. In fact I’ve thought this for several years, and I’m kind of glad that the big-name intellectuals are finally catching up.

Why do I think this? Well, that’s a complicated subject. It’s a topic I could probably spend a dozen blog posts on and still not get to the bottom of. And maybe I should spend those dozen-or-so blog posts on it at some point – it could be worth it. But for now I’m kind of left with this big inferential gap that I can’t easily cross. It would take a great deal of background to explain my position in detail. So instead of talking about AI risk per se in this post, I thought I’d go off in a more meta direction – as I so often do – and talk about philosophical differences in general. I figured if I couldn’t make the case for AI being a threat, I could at least make the case for making the case for AI being a threat.

(If you’re still confused, and still wondering what the whole deal is with this AI risk thing, you can read a not-too-terrible popular introduction to the subject here, or check out Nick Bostrom’s TED Talk on the topic. Bostrom also has a bestselling book out called Superintelligence. The one sentence summary of the problem would be: how do we get a superintelligent entity to want what we want it to want?)

(Trust me, this is much much harder than it sounds)

So: why then am I so meta-concerned about AI risk? After all, based on the previous couple paragraphs it seems like the topic actually has pretty decent awareness: there are popular internet articles and TED talks and celebrity intellectual endorsements and even bestselling books! And it’s true, there’s no doubt that a ton of progress has been made lately. But we still have a very long way to go. If you had seen the same number of online discussions about AI that I’ve seen, you might share my despair. Such discussions are filled with replies that betray a fundamental misunderstanding of the problem at a very basic level. I constantly see people saying things like “Won’t the AI just figure out what we want?”, or “If the AI gets dangerous why can’t we just unplug it?”, or “The AI can’t have free will like humans, it just follows its programming”, or “lol so you’re scared of Skynet?”, or “Why not just program it to maximize happiness?”.

Having read a lot about AI, these misunderstandings are frustrating to me. This is not that unusual, of course – pretty much any complex topic is going to have people misunderstanding it, and misunderstandings often frustrate me. But there is something unique about the confusions that surround AI, and that’s the extent to which the confusions are philosophical in nature.

Why philosophical? Well, artificial intelligence and philosophy might seem very distinct at first glance, but look closer and you’ll see that they’re connected to one another at a very deep level. Take almost any topic of interest to philosophers – free will, consciousness, epistemology, decision theory, metaethics – and you’ll find an AI researcher looking into the same questions. In fact I would go further and say that those AI researchers are usually doing a better job of approaching the questions. Daniel Dennett said that “AI makes philosophy honest”, and I think there’s a lot of truth to that idea. You can’t write fuzzy, ill-defined concepts into computer code. Thinking in terms of having to program something that actually works takes your head out of the philosophical clouds, and puts you in a mindset of actually answering questions.

All of which is well and good. But the problem with looking at philosophy through the lens of AI is that it’s a two-way street – it means that when you try to introduce someone to the concepts of AI and AI risk, they’re going to be hauling all of their philosophical baggage along with them.

And make no mistake, there’s a lot of baggage. Philosophy is a discipline that’s notorious for many things, but probably first among them is a lack of consensus (I wouldn’t be surprised if there’s not even a consensus among philosophers about how much consensus there is among philosophers). And the result of this lack of consensus has been a kind of grab-bag approach to philosophy among the general public – people see that even the experts are divided, and think that that means they can just choose whatever philosophical position they want.

Want. That’s the key word here. People treat philosophical beliefs not as things that are either true or false, but as choices – things to be selected based on their personal preferences, like picking out a new set of curtains. They say “I prefer to believe in a soul”, or “I don’t like the idea that we’re all just atoms moving around”. And why shouldn’t they say things like that? There’s no one to contradict them, no philosopher out there who can say “actually, we settled this question a while ago and here’s the answer”, because philosophy doesn’t settle things. It’s just not set up to do that. Of course, to be fair, people seem to treat a lot of their non-philosophical beliefs as choices as well (which frustrates me to no end), but the problem is particularly pronounced in philosophy. And the result is that people wind up running around with a lot of bad philosophy in their heads.

(Oh, and if that last sentence bothered you, if you’d rather I said something less judgmental like “philosophy I disagree with” or “philosophy I don’t personally happen to hold”, well – the notion that there’s no such thing as bad philosophy is exactly the kind of bad philosophy I’m talking about)

(he said, only 80% seriously)

Anyway, I find this whole situation pretty concerning. Because if you had said to me that in order to convince people of the significance of the AI threat, all we had to do was explain to them some science, I would say: no problem. We can do that. Our society has gotten pretty good at explaining science; so far the Great Didactic Project has been far more successful than it had any right to be. We may not have gotten explaining science down to a science, but we’re at least making progress. I myself have been known to explain scientific concepts to people every now and again, and fancy myself not half-bad at it.

Philosophy, though? Different story. Explaining philosophy is really, really hard. It’s hard enough that when I encounter someone who has philosophical views I consider to be utterly wrong or deeply confused, I usually don’t even bother trying to explain myself – even if it’s someone I otherwise have a great deal of respect for! Instead I just disengage from the conversation. The times I’ve done otherwise, with a few notable exceptions, have only ended in frustration – there’s just too much of a gap to cross in one conversation. And up until now that hasn’t really bothered me. After all, if we’re being honest, most philosophical views that people hold aren’t that important in the grand scheme of things. People don’t really use their philosophical views to inform their actions – in fact, probably the main thing that people use philosophy for is to sound impressive at parties.

AI risk, though, has impressed upon me an urgency in regards to philosophy that I’ve never felt before. All of a sudden it’s important that everyone have sensible notions of free will or consciousness; all of a sudden I can’t let people get away with being utterly confused about metaethics.

All of a sudden, in other words, philosophy matters.

I’m not sure what to do about this. I mean, I guess I could just quit complaining, buckle down, and do the hard work of getting better at explaining philosophy. It’s difficult, sure, but it’s not infinitely difficult. I could write blog posts and talk to people at parties, and see what works and what doesn’t, and maybe gradually start changing a few people’s minds. But this would be a long and difficult process, and in the end I’d probably only be able to affect – what, a few dozen people? A hundred?

And it would be frustrating. Arguments about philosophy are so hard precisely because the questions being debated are foundational. Philosophical beliefs form the bedrock upon which all other beliefs are built; they are the premises from which all arguments start. As such it’s hard enough to even notice that they’re there, let alone begin to question them. And when you do notice them, they often seem too self-evident to be worth stating.

Take math, for example – do you think the number 5 exists, as a number?

Yes? Okay, how about 700? 3 billion? Do you think it’s obvious that numbers just keep existing, even when they get really big?

Well, guess what – some philosophers debate this!

It’s actually surprisingly hard to find an uncontroversial position in philosophy. Pretty much everything is debated. And of course this usually doesn’t matter – you don’t need philosophy to fill out a tax return or drive the kids to school, after all. But when you hold some foundational beliefs that seem self-evident, and you’re in a discussion with someone else who holds different foundational beliefs, which they also think are self-evident, problems start to arise. Philosophical debates usually consist of little more than two people talking past one another, with each wondering how the other could be so stupid as to not understand the sheer obviousness of what they’re saying. And the annoying thing is, both participants are correct – in their own frameworks, their positions probably are obvious. The problem is, we don’t all share the same framework, and in a setting like that frustration is the default, not the exception.

This is not to say that all efforts to discuss philosophy are doomed, of course. People do sometimes have productive philosophical discussions, and the odd person even manages to change their mind, occasionally. But to do this takes a lot of effort. And when I say a lot of effort, I mean a lot of effort. To make progress philosophically you have to be willing to adopt a kind of extreme epistemic humility, where your intuitions count for very little. In fact, far from treating your intuitions as unquestionable givens, as most people do, you need to be treating them as things to be carefully examined and scrutinized with acute skepticism and even wariness. Your reaction to someone having a differing intuition from you should not be “I’m right and they’re wrong”, but rather “Huh, where does my intuition come from? Is it just a featureless feeling or can I break it down further and explain it to other people? Does it accord with my other intuitions? Why does person X have a different intuition, anyway?” And most importantly, you should be asking “Do I endorse or reject this intuition?”. In fact, you could probably say that the whole history of philosophy has been little more than an attempt by people to attain reflective equilibrium among their different intuitions – which of course can’t happen without the willingness to discard certain intuitions along the way when they conflict with others.

I guess what I’m trying to say is: when you’re discussing philosophy with someone and you have a disagreement, your foremost goal should be to try to find out exactly where your intuitions differ. And once you identify that, from there the immediate next step should be to zoom in on your intuitions – to figure out the source and content of the intuition as much as possible. Intuitions aren’t blank structureless feelings, as much as it might seem like they are. With enough introspection intuitions can be explicated and elucidated upon, and described in some detail. They can even be passed on to other people, assuming at least some kind of basic common epistemological framework, which I do think all humans share (yes, even objective-reality-denying postmodernists).

Anyway, this whole concept of zooming in on intuitions seems like an important one to me, and one that hasn’t been emphasized enough in the intellectual circles I travel in. When someone doesn’t agree with some basic foundational belief that you have, you can’t just throw up your hands in despair – you have to persevere and figure out why they don’t agree. And this takes effort, which most people aren’t willing to expend when they already see their debate opponent as someone who’s being willfully stupid anyway. But – needless to say – no one thinks of their positions as being a result of willful stupidity. Pretty much everyone holds beliefs that seem obvious within the framework of their own worldview. So if you want to change someone’s mind with respect to some philosophical question or another, you’re going to have to dig deep and engage with their worldview. And this is a difficult thing to do.

Hence, the philosophical quagmire that we find our society to be in.

It strikes me that improving our ability to explain and discuss philosophy amongst one another should be of paramount importance to most intellectually serious people. This applies to AI risk, of course, but also to many everyday topics that we all discuss: feminism, geopolitics, environmentalism, what have you – pretty much everything we talk about grounds out to philosophy eventually, if you go deep enough or meta enough. And to the extent that we can’t discuss philosophy productively right now, we can’t make progress on many of these important issues.

I think philosophers should – to some extent – be ashamed of the state of their field right now. When you compare philosophy to science it’s clear that science has made great strides in explaining the contents of its findings to the general public, whereas philosophy has not. Philosophers seem to treat their field as being almost inconsequential, as if whatever they conclude at some level won’t matter. But this clearly isn’t true – we need vastly improved discussion norms when it comes to philosophy, and we need far greater effort on the part of philosophers when it comes to explaining philosophy, and we need these things right now. Regardless of what you think about AI, the 21st century will clearly be fraught with difficult philosophical problems – from genetic engineering to the ethical treatment of animals to the question of what to do about global poverty, it’s obvious that we will soon need philosophical answers, not just philosophical questions. Improvements in technology mean improvements in capability, and that means that things which were once merely thought experiments will be lifted into the realm of real experiments.

I think the problem that humanity faces in the 21st century is an unprecedented one. We’re faced with the task of actually solving philosophy, not just doing philosophy. And if I’m right about AI, then we have exactly one try to get it right. If we don’t, well...

Well, then the fate of humanity may literally hang in the balance.

Cold fusion: real after all?

-3 ahbwramc 17 April 2013 07:27PM

TL;DR: cold fusion is real, apparently. Yes, really - cold fusion. I know. I wouldn't have thought so either.

-  -  -

The point of this post is basically to promote to your attention the hypothesis that cold fusion is a real physical phenomenon. For those of you not in the know, this very much flies in the face of current scientific consensus (something I'm not usually in the habit of opposing). In this case though the evidence seems to be quite straightforwardly in favour of the cold fusion advocates.

[Note: most researchers working in this area don't like the term cold fusion; partially because of the negative scientific connotations it dredges up, and partially because fusion might not be an accurate description of what's going on physically. The two preferred terms seem to be low-energy nuclear reactions (LENR) and lattice-assisted nuclear reactions (LANR). I use cold fusion in this piece mainly for convenience and name-brand recognition]

Quick background - in 1989 Stanley Pons and Martin Fleischmann, leading electrochemists of their day, announced a truly startling discovery: a tabletop apparatus of theirs had produced anomalous heat that was (according to them) orders of magnitude beyond what could be produced by chemical effects alone. The only process that can produce heat like that is a nuclear reaction, but such reactions were thought to be impossible at such low temperatures. Thinking they had discovered a new source of energy, Pons and Fleischmann were justifiably excited and hurried to publish their results. In the subsequent months a huge number of researchers tried to replicate their findings, with most being unable to do so. Of the few scientists who did get positive results, some later retracted their work, and others were criticized for sloppy experimental design. To make matters worse, errors and exaggerations were found in Pons and Fleischmann's original paper. Very quickly the scientific community as a whole had cold fusion pegged as "pathological science", and most researchers forgot about the whole affair and went back to their normal, non-energy-crisis-solving work. Pons and Fleischmann, disgraced, ended up quietly leaving the country to continue their work elsewhere, and that was the end of the cold fusion story, as far as most people were concerned. [1]

Here's where it gets interesting. Naturally, the prospect of solving the world's energy problems proved very alluring, so a small number of researchers continued their work on cold fusion. During the 90's some of this work was published in peer-reviewed journals, although this became less and less common as the decade wore on. As far as I know, no mainstream peer-reviewed scientific journal currently accepts cold fusion papers for consideration. Undeterred, cold fusion researchers pressed on, presenting their work at conferences devoted to cold fusion, self-organized by people in the field. This work was generally not peer-reviewed, and much of it (I think most cold fusion researchers would be willing to admit) was not of the highest scientific quality. Much - but not all, mind you. There were some researchers at respected universities (including MIT) who conducted very rigorous, high-quality studies. Anyway, together this motley band of hobbyists, engineers and scientists, over the last twenty years or so, has found...well, something. Sometimes. If you squint right.

Basically there are a huge number of scattered reports of cold fusion occurring, but reproducibility is a big problem. Some people find low levels of excess heat. Some people find nothing. Some people, when conditions are "just right", report extremely high levels of excess heat. There are even a few cases where explosions occurred and labs were "blown up" [2]. The sheer volume of claims might be suggestive that something is going on, all things being equal. But of course, all things aren't really equal in this case; given the initial inability of expert scientists to replicate the original findings in 1989, and the non-peer-reviewed nature of most cold fusion work nowadays, we have every reason to be extra skeptical of reports of cold fusion. Extraordinary claims and all that.

This is why I've taken what I consider to be the two strongest pieces of evidence for cold fusion and provided them below. As I mentioned before, there are some scientists doing rigorous, very well controlled experiments at research universities, and they consistently find that cold fusion is occurring. So, without further ado, here's my proof:

1. Mitchell Swartz's experiments

If you have the time, I would strongly suggest you watch this video: http://www.youtube.com/watch?v=e38Y7HxD_5Y. It's part of a lecture from a multi-week cold fusion course put on by Swartz and others at MIT in January. It's 40 minutes long (and only the first part of five videos actually) but well worth your time. In it Swartz basically makes his case for cold fusion.

I suppose I should stop here to briefly describe what a typical cold fusion experiment looks like. The standard design uses a simple piece of metal, usually Palladium or Nickel. Deuterium (usually supplied as heavy water, D2O) is forced into the lattice of atoms that make up the metal by applying an electric field. Once a high enough loading of Deuterium is achieved in the metal lattice, what (purportedly) happens is that two Deuterons combine in a nuclear reaction to produce a single helium-4 nucleus, plus heat. The idea is that the lattice of metal atoms mediates the nuclear reaction in some way, making it occur at far lower temperatures than would normally be possible. Typically these experiments are done with the apparatus fully immersed in heavy water, and you check for excess heat by setting up a calorimeter around the experiment. You can easily measure how much electrical energy you're putting into the system; if the calorimeter is reading more energy coming out than that, you know cold fusion is taking place (well, that's not entirely accurate - you know some process is producing extra energy, but you don't know what it is. The reason we can confidently say it's nuclear in origin is that the energy densities involved are well beyond what any chemical reaction could produce).
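As a toy illustration of that energy bookkeeping (all numbers here are invented for illustration, not taken from any actual experiment):

```python
# Toy energy-balance check for a calorimetry run.
# Real experiments integrate power over time and account for many
# more loss channels; this just shows the basic arithmetic.

def excess_heat_ratio(electrical_in_joules, heat_out_joules):
    """Ratio of measured output heat to electrical input energy.

    ~1.0 means the calorimeter just sees the input energy back
    (no anomaly); well above 1.0 means some process is adding
    energy beyond what was put in.
    """
    return heat_out_joules / electrical_in_joules

# Hypothetical run: 10 kJ of electrical input, 25 kJ measured out.
ratio = excess_heat_ratio(10_000, 25_000)
print(f"output/input = {ratio:.0%}")  # 250%
```

The interesting question, of course, is whether the gap between output and input exceeds what measurement error and chemistry could plausibly supply.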

Anyway, if you can't watch the video, here's what Swartz has found:

-Consistently measures output energy in the range of 200-400% of input energy (!)

-Excess heat is well above noise level for calorimeter

-Calorimeter is very well calibrated - when heat is fed into system via simple ohmic resistor, measured output heat exactly matches input energy

-Chemical control experiments fail (ie using non-cold-fusion-active metals and loading materials gives no excess heat)

-Two calorimeters (each of which have several redundant ways of measuring heat anyway) were built, just to be sure; same results

-Excess heat generation occurs for days or even weeks continuously

-He(4) production is observed, with amounts commensurate with heat production

Mind you, this is not just a one-off experiment - he's been getting results like this for ten years or more. If you watch the video, I think you'll agree that it's a very well-controlled and well-calibrated experiment. It certainly looks that way, anyway, to my semi-informed eyes as a physics grad student (although if there are any actual experimentalists reading this who are more informed than I am, I would love to hear from you - please, tear it to bits). In my eyes the only two reasonable explanations for Swartz's results are (i) cold fusion being real, and (ii) active fraud. Fraud is of course possible, but I think unlikely given what other groups have found.

Oh, and if you can't watch the video, here's a 2009 paper you can read by Swartz: http://world.std.com/~mica/Swartz-SurveyJSE2009.pdf. It's less focused on his own research and more of a survey of cold fusion research in general, but he does talk about his own results in Section 4. Certainly worth a look.

2. Yasuhiro Iwamura's transmutation work at Mitsubishi

In one of those strange quirks of fate, for some reason or another scientists in Japan ended up being particularly open to cold fusion claims [3]. There are currently several researchers in Japan, some at universities and some at different companies, who are looking in to cold fusion. I link you here to a particularly interesting paper by Iwamura, who works for a research division of Mitsubishi: http://newenergytimes.com/v2/conferences/2012/ANS2012W/2012Iwamura-ANS-LENR-Paper.pdf

Iwamura uses a slightly different setup from Swartz's, but the basic idea is the same: Deuterium is permeated through a Palladium lattice, magic happens, heat comes out, etc. The main difference in this experiment is that Iwamura is not actually looking for excess heat production. He's instead looking for transmutation of elements, which also has been reported to happen in certain cold fusion experiments. To do this a layer of some other material, in this case Cesium, is added on top of the Palladium, and - in a process that no one fully understands yet - that material is transmuted into an entirely different element. So just in case unlimited clean energy wasn't enough for you, we now also have just straight-up alchemy happening (I for one can't fathom why scientists are skeptical of cold fusion).

But, prior probabilities be damned, Iwamura has actually gone and done this! In his experiments he does time-resolved X-ray photoelectron spectroscopy (XPS), and observes Praseodymium being created in the apparatus while the total amount of Cesium goes down with time - elemental transmutation (!).

This work is particularly strong evidence for two reasons, I think:

One, because the claim involves detecting elements, it's inherently more plausible than any claim to do with excess heat. Calorimetry can be difficult, and it's easy for a skeptic to claim that the experimenter simply made a mistake in measuring the excess heat (mind you, in the case above I think the calorimetry is well done and that there wasn't a mistake, but that isn't always the case). In contrast to calorimetry, detecting elements is very straightforward. There are many independent ways to do it, and it's all rather black and white; either you find an element, or you don't. If you do find a new element, then you have something of a smoking gun - it's very difficult to explain how a new element could just appear in your experiment without invoking nuclear processes. The standard skeptical reply to experiments like this is to say "contamination" and wave one's hands. That is, skeptics posit that the transmuted element in question was already present in the Palladium lattice at the start of the experiment (perhaps concentrated somewhere so it wasn't detected initially). I find this a less than compelling argument, to say the least - really, the experiment just happens to be contaminated with Praseodymium, of all things? And the contamination is such that the Praseodymium gradually appears to the detector over time, at the same rate that Cesium disappears? And when experiments without Cesium are run, the Praseodymium is mysteriously absent? What a strange coincidence.

Sarcasm aside, though, the experimenters are well aware of this argument, and have a very good explanation for why it couldn't be contamination - namely, isotope ratios. Essentially, the distribution of isotope frequencies for the transmuted elements they find is different from the natural isotope frequencies for the same element. Hence, the experiment couldn't have simply been contaminated with the natural version of that element.
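A minimal sketch of that isotope-ratio check (the element and abundances below are hypothetical, not Iwamura's actual numbers):

```python
# Contamination check via isotope ratios: if a detected element were
# ordinary contamination, its isotope fractions should match the
# natural abundances for that element.

def consistent_with_contamination(measured, natural, tol=0.02):
    """True if every measured isotope fraction is within `tol`
    of the natural abundance for that isotope."""
    isotopes = set(measured) | set(natural)
    return all(abs(measured.get(i, 0.0) - natural.get(i, 0.0)) <= tol
               for i in isotopes)

natural_mix  = {"X-100": 0.60, "X-102": 0.40}  # hypothetical natural element
measured_mix = {"X-100": 0.15, "X-102": 0.85}  # hypothetical lab result

# A mismatch like this is the argument against contamination:
print(consistent_with_contamination(measured_mix, natural_mix))  # False
```

If the ratios don't match, contamination by the ordinary element can't explain the signal, whatever its concentration.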

The second reason this research counts as strong evidence is that...well, it's actually been replicated. This was particularly bizarre for me to discover upon reading about cold fusion - I was under the impression that there were no clear-cut replications of any cold fusion experiments, anywhere. That's apparently not true, though - researchers at Toyota have redone Iwamura's experiment and also find Praseodymium being created. Unfortunately it was presented at a conference, and there doesn't seem to be an associated paper. Here's a link to an article that describes the replication, though, containing some slides with the Toyota researchers' results: http://news.newenergytimes.net/2012/12/06/mitsubishi-reports-toyota-replication/. The article also mentions researchers at two universities (Osaka and Iwate) reporting similar findings.

So to sum up: simple elemental detection experiment. Transmuted elements found. Control experiments fail. Multiple confirmations. Combined with the high-quality excess heat measurements of Swartz above, I feel very confident in concluding that cold fusion is a real physical phenomenon. For an additional bit of low-weight evidence, though, I submit to you also the fact that NASA, of all organizations, has an active cold fusion program: see http://futureinnovation.larc.nasa.gov/view/articles/futurism/bushnell/low-energy-nuclear-reactions.html. To be honest I think that article overhypes the current situation; yes, cold fusion appears to be real, but I find the assertion that multiple groups have already achieved kilowatt-level heat production to be very suspect, based on what I've read. Regardless, the fact that NASA is treating this seriously and actively doing cold fusion research might serve as further evidence for skeptical readers.

This concludes my case.

Now, despite the (I think) fairly convincing picture I've painted here, we are still left with the nagging question of why so many early cold fusion experiments failed, and why so many continue to fail today. It seems clear that, real effect or no, cold fusion experiments have unusually low reproducibility. Shouldn't this count against it somehow? In the words of one skeptic, nuclear physicist Richard Garwin,

"It's absurd to claim that experiments that seem to support cold fusion are valid, while those that don't are flawed."

I think Garwin misses the point here, though. What cold fusion advocates are looking for is an existence proof. They just have to show that there exists some set of experimental conditions for which cold fusion occurs. Or, to flip the quantifiers (as PhilGoetz might put it ;), they are trying to disprove the hypothesis that for all sets of experimental conditions, cold fusion never occurs. Looking at it that way, of course a few experiments would be sufficient to make the case - it's just standard Popperian falsification. When you're dealing with "for all" statements, it's one strike and you're out.
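Spelled out in my notation (not the post's), with F(c) standing for "cold fusion occurs under experimental conditions c", the two framings are logically equivalent:

```latex
% One verified positive instance establishes the existential claim,
% and thereby refutes the universal negative:
\exists c \, F(c) \;\Longleftrightarrow\; \neg \, \forall c \, \neg F(c)
```

Negative results chip away at particular values of c but can never establish the universal negative; a single solid positive settles the existential claim.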

Or, to put it in Bayesian terms: the probability of getting negative experimental results, conditional on cold fusion being true, is not that low. If cold fusion is true, then somewhere in the experimental parameter space there must be a region where it occurs. But that says nothing about the size of the region; it's fairly easy to imagine experimenters setting out to demonstrate cold fusion and missing some unknown key aspect of the design, giving a negative result. One doesn't even have to posit any experimental error - they're simply looking in the wrong place. On the other hand, the probability of getting positive results in a well-designed, well-controlled experiment, conditional on cold fusion being false, is extremely low. It's basically equal to the probability that the experimenter screwed up the measurement, which can be made vanishingly low with proper controls and replications.
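To make the asymmetry concrete, here's a toy odds update; the probabilities are invented purely for illustration and are not estimates of the real numbers:

```python
# Toy Bayesian odds update for mixed experimental results.
# p(positive | effect real) is only moderate: the effect may hide in
# a small corner of parameter space, so honest misses are common.
# p(positive | effect not real) is tiny for a well-controlled
# experiment: it requires an outright measurement error.

P_POS_GIVEN_TRUE = 0.3
P_POS_GIVEN_FALSE = 0.01

def posterior_odds(prior_odds, n_pos, n_neg):
    """Odds that the effect is real after n_pos positive and
    n_neg negative experiments (assumed independent)."""
    like_true = P_POS_GIVEN_TRUE ** n_pos * (1 - P_POS_GIVEN_TRUE) ** n_neg
    like_false = P_POS_GIVEN_FALSE ** n_pos * (1 - P_POS_GIVEN_FALSE) ** n_neg
    return prior_odds * like_true / like_false

# Start sceptical (1:1000 against), then observe 3 positives
# and 20 negatives:
odds = posterior_odds(1 / 1000, n_pos=3, n_neg=20)
```

With these made-up numbers the posterior odds come out roughly 26 times the prior, even though the negatives heavily outnumber the positives - each well-controlled positive simply carries far more evidential weight than each negative.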

With all that said, of course, it would still be nice to know where exactly previous cold fusion researchers were going wrong. Mitchell Swartz, incidentally, thinks he has this figured out. He's identified a number of necessary conditions for cold fusion that are frequently absent from failed experiments and present in successful ones. The two main culprits seem to be insufficient loading of Deuterium in the metal lattice and a non-optimal (too high or too low) level of electrical driving of the system. I have no idea if he's right about the particulars, of course. But it certainly doesn't seem implausible that this will all be sorted out in the near future, and what seemed like irreproducibility will simply turn out to be the result of an underlying, thus far opaque, pattern.

Huh, this turned out much longer than I expected. I guess I'll close by noting that this topic seems like an almost perfect candidate for confirmation bias; who wouldn't want to believe in a cheap, unlimited, carbon- and radiation-free energy source? That's part of the reason I made this post; what I'd really like is for people to a) pick apart this post, looking for flaws in my logic/arguments, and b) look into this whole cold fusion thing independently, and see if they reach the same conclusions. I'm very interested in getting this right, for obvious reasons, and I think at the very least I've made a sufficiently interesting case that doing some research online would be worth it. I don't think I really need to mention the almost mind-boggling impact cold fusion would have, if it turned out to be real and exploitable.

I'm cautiously optimistic about the future right now, LW.

References:

[1] This is standard history, see http://en.wikipedia.org/wiki/Cold_fusion

[2] http://news.newenergytimes.net/2013/02/22/lenr-nasa-widom-larsen-nuclear-reactor-in-your-basement/ 

Relevant quote: "The explosions are difficult to keep secret. Most people who have been around the field know of them: Fleischmann and Pons in Utah, unidentified researchers at Lawrence Livermore National Laboratory, a group at SRI International, Tadahiko Mizuno in Japan, Jean-Paul Biberian in France, and another situation in a Russian lab a few years ago.

The only lab that may have blown up was the one in Russia. In the other situations, the experiment, not the lab, blew up. SRI International researcher Andy Riley was killed, and Michael McKubre was wounded. Mizuno lost his hearing for a week and came very close to sustaining severe injuries."

[3] http://coldfusioninformation.com/countries/cold-fusion-japan/