Appendices to You're Entitled to Arguments, But Not (That Particular) Proof

(The main article was getting long, so I decided to move the appendices to a separate article which wouldn't be promoted, thus minimizing the size of the article landing in a promoted-article-only-reader's feed.)

A.  The absence of unobtainable proof is not even weak evidence of absence.

The wise will already know that absence of evidence actually is evidence of absence; and they may ask, "Since a time-lapse video record of apes evolving into humans would, in fact, be strong evidence in favor of the theory of evolution, is it not mandated by the laws of probability theory that the absence of this videotape constitute some degree of evidence against the theory of evolution?"

(Before you reject that proposition out of hand for containing the substring "evidence against the theory of evolution", bear in mind that grownups understand that evidence accumulates.  You don't get to pick out just one piece of evidence and ignore all the rest; true hypotheses can easily generate a minority of weak pieces of evidence against themselves; conceding one point of evidence does not mean conceding the debate; and people who try to act as if it does are nitwits.  Also there are probably no creationists reading this blog.)

The laws of probability theory do mandate that if P(H|E) > P(H), then P(H|~E) < P(H).  So - even if absence of proof is by no means proof of absence, and even if we reject the philosophy that absence of a particular proof means you get to discard all the other arguments about evidence and priors - must we not at least concede that absence of proof is necessarily evidence of absence, even though it may be very weak evidence?
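To see why the laws of probability mandate this, note that P(H) is just a weighted average of the two conditional probabilities; the following derivation adds nothing beyond the identity quoted above:

```latex
% Law of total probability: P(H) is a convex combination of the
% two conditionals, weighted by P(E) and P(~E).
\begin{align*}
P(H) &= P(H \mid E)\,P(E) + P(H \mid \lnot E)\,P(\lnot E)
\end{align*}
% With 0 < P(E) < 1, a weighted average can only come out to P(H)
% when one conditional above P(H) is balanced by the other below it:
\begin{align*}
P(H \mid E) > P(H) \quad\Longrightarrow\quad P(H \mid \lnot E) < P(H)
\end{align*}
```

So the "weak evidence of absence" claim is forced by the averaging identity alone, whenever E is neither certain nor impossible.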

Actually, in cases like creationism, not even that much follows.  Suppose we had a time camera - a device that lets us look into the distant past and see historical events with our own eyes.  Let the proposition "we have a time camera" be labeled Camera.  Then we would either have a video record of apes evolving into humans, or not; let the presence of this video record be labeled Video, and its absence ~Video.  Let Evolution stand for the hypothesis that evolution is true.  And let True and False stand for the epistemic states "Pretty much likely to be true" and "Pretty much likely to be false", respectively.

Then, given that evolution is true and that we have a time camera, we should expect to see Video:

P(Video|Evolution,Camera) = True   and   P(Video|~Evolution,Camera) = False

So if we had a time camera, if we could look into the past, then "no one has seen apes evolving into humans" would be strong evidence against the theory of natural selection:

P(Evolution|~Video,Camera) = False

But if we don't have a time camera, then regardless of whether evolution is true or false, we can't expect to have "seen apes evolving into humans":

P(Video|Evolution,~Camera) = False   and   P(Video|~Evolution,~Camera) = False

From which it follows that once you know ~Camera, observing ~Video tells you nothing further about Evolution:

P(Evolution|~Video,~Camera) = P(Evolution|~Camera)

If you didn't know whether or not we had time cameras, and I told you only that no one had ever seen apes evolving into humans, then you would have to evaluate P(Evolution|~Video) which includes some contributions from both P(Evolution|~Video,Camera) and P(Evolution|~Video,~Camera).  You don't know whether the video is missing because evolution is false, or the video is missing because we don't have a time camera.  And so, as the laws of probability require, P(Evolution|~Video) < P(Evolution) just as P(Evolution|Video) > P(Evolution).  But once you know you don't have a time camera, you can start by evaluating P(Evolution|~Camera) - and it's hard to see why lack of time cameras should, in and of itself, be evidence against evolution.  The process of natural selection hardly requires a time camera.  So P(Evolution|~Camera) = P(Evolution), and then ~Video isn't evidence one way or another.  The observation ~Camera screens off any evidence from observing ~Video.
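For readers who like to check such arguments mechanically, here is a small numerical sketch of the screening-off claim. The joint distribution below uses made-up illustrative numbers (0.9, 0.0001, 0.99), not anything asserted in the text; only the structural assumption that ~Camera forces ~Video matters.

```python
from itertools import product

# Toy priors (illustrative assumptions only).
p_evolution = 0.9
p_camera = 0.0001  # tiny but nonzero, so all the conditionals are defined

def p_video(evolution, camera):
    # No camera means no video, regardless of whether evolution is true;
    # given a camera, the video tracks evolution almost perfectly.
    if not camera:
        return 0.0
    return 0.99 if evolution else 0.01

# Build the full joint distribution over (Evolution, Camera, Video).
joint = {}
for evo, cam, vid in product([True, False], repeat=3):
    p = p_evolution if evo else 1 - p_evolution
    p *= p_camera if cam else 1 - p_camera
    pv = p_video(evo, cam)
    p *= pv if vid else 1 - pv
    joint[(evo, cam, vid)] = p

def prob(pred):
    return sum(p for k, p in joint.items() if pred(*k))

def cond(pred_a, pred_b):
    # P(A | B) = P(A, B) / P(B)
    return prob(lambda e, c, v: pred_a(e, c, v) and pred_b(e, c, v)) / prob(pred_b)

# Once ~Camera is known, observing ~Video adds nothing:
p_evo_given_novideo_nocam = cond(lambda e, c, v: e,
                                 lambda e, c, v: not v and not c)
p_evo_given_nocam = cond(lambda e, c, v: e, lambda e, c, v: not c)

# Without conditioning on Camera, the missing video IS (very weak) evidence:
p_evo_given_novideo = cond(lambda e, c, v: e, lambda e, c, v: not v)
```

Running this, P(Evolution | ~Video, ~Camera) comes out equal to P(Evolution | ~Camera), while P(Evolution | ~Video), which mixes in the tiny chance that a camera existed and produced no video, lands just below the 0.9 prior - matching the qualitative argument above.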

And this is only what should be expected: once you know you don't have a time camera, and once you've updated your views on evolution in light of the fact that time cameras don't exist to begin with (which doesn't seem to have much of an impact on the matter of evolution), it makes no further difference when you learn that no one has ever witnessed apes evolving into humans.


 

B.  Subtext on cryonics:  Demanding that cryonicists produce a successful revival before you'll credit the possibility of cryonics, is logically rude; specifically, it is a demand for particular proof.

A successful cryonics revival performed with modern-day technology is not a piece of evidence you could possibly expect modern cryonicists to provide, even given that the proposition of interest is true.  The whole point of cryonics is as an ambulance ride to the future; to take advantage of the asymmetry between the technology needed to successfully preserve a patient (cryoprotectants, liquid nitrogen storage) and the technology needed to revive a patient (probably molecular nanotechnology).

In particular, the screening-off condition (playing the role of ~Camera in the example above) is the observation that we presently lack molecular nanotechnology.  Given that you don't currently have molecular nanotechnology, you can't reasonably expect to revive a cryonics patient today even given that they could in fact be revived using future molecular nanotechnology.

You are entitled to arguments, though not that particular proof, and cryonicists have done their best to provide you with whatever evidence can be obtained.  For example:

A study on rat hippocampal slices showed that it is possible for vitrified slices cooled to a solid state at -130°C to have viability upon re-warming comparable to that of control slices that had not been vitrified or cryopreserved. Ultrastructure of the CA1 region (the region of the brain most vulnerable to ischemic damage) of the re-warmed slices is seen to be quite well preserved compared to the ultrastructure of control CA1 tissue (24). Cryonics organizations perfuse brains with vitrification solution until saturation is achieved...
A rabbit kidney has been vitrified, cooled to -135°C, re-warmed and transplanted into a rabbit. The formerly vitrified transplant functioned well enough as the sole kidney to keep the rabbit alive indefinitely (25)... The vitrification mixture used in preserving the rabbit kidney is known as M22. M22 is used by the cryonics organization Alcor for vitrifying cryonics subjects. Perfusion of rabbits with M22 has been shown to preserve brain ultrastructure without ice formation (26).

This is the sort of evidence we can reasonably expect to obtain today, and it took work to provide you with that evidence.  Ignoring it in favor of demanding proof that you couldn't expect to see even if cryonicists were right, is (a) invalid as probability theory, (b) a sign of trying to defend an allowed belief rather than being honestly curious about which possible world we live in, and (c) logically rude.

Formally:

  1. Even given that the proposition put forth by cryonicists is true - that people suspended with modern-day technology will be revivable by future technology - you cannot expect them to revive a cryonics patient using modern-day technology.
  2. Cryonicists have put forth considerable effort, requiring years of work by many people, to provide you with such evidence as can be obtained today.  The lack of that particular proof is not owing to any defect of diligence on the part of cryonicists, or disinterest on their part in doing the research.
  3. The prediction that a properly cryoprotected patient does not suffer information-theoretical death is not a privileged hypothesis pulled out of nowhere; it is the default extrapolation from modern neuroscience.  If we learn that a patient cryoprotected using current technologies has undergone erasure of critical brain information, we have learned something that is not in current neuroscience textbooks - and actually rather surprising, all things considered.  The straight-line extrapolation from the science we do know is that if you can see the neurons nicely preserved under a microscope, the information sure ought to be there.  (The idea that critical brain information is stored dynamically in spiking patterns has already been contraindicated by the evidence; dogs taken to very low (above-freezing) temperatures, sufficient to suppress brain activity, do not seem to suffer any memory loss or personality change.)
  4. Given that the proposition of interest is true, there is something drastically urgent we ought to be doing RIGHT NOW, namely cryopreserving as many as possible of the 150,000 humans per day who undergo mind-state annihilation.  (Economies of scale would very likely drive down costs by an order of magnitude or more; this is an entirely feasible goal economically and technologically, the only question is the political will.)

Given these points: to discard the straight-line extrapolation from modern science and all the hard work that cryonicists have done to provide further distinguishing evidence, in favor of a demand for particular proof that you know cannot possibly be obtained and which you couldn't expect to see even given that the underlying proposition is true, when there are things we ought to be doing NOW given the truth of the proposition and much value will be lost by waiting; all this is indefensible as decision theory in a formal sense, and is, in an informal sense, madness.

Which all goes to say only what Xiaoguang "Mike" Li observed to me some time ago:  That saying you'll only sign up for cryonics when someone demonstrates a successful revival of a cryonics patient is sort of like saying that you won't get on the airplane until after it arrives at the destination.  Only a very small amount of common sense is necessary to see this, and the objection really does demonstrate the degree to which, when most people feel an innate flinch away from an idea, they hardly feel obligated to come up with objections that make the slightest bit of sense.

This beautiful public service announcement, with only a slight change of metaphor, could serve as a PSA for cryonics.  Stop making a big deal out of the decision.  It's not that complicated.


 

C.  Demanding the demonstration of a working nanomachine before you'll credit the possibility of molecular nanotechnology is logically rude, specifically, a demand for particular proof.

Given humanity's current level of technology, you can't reasonably expect a demonstration of molecular nanotechnology right now, even given that the proposition of interest is true: that molecular nanotechnology is physically possible to operate, physically possible to manufacture, and likely to be developed within some number of decades.  Even if we live in that world, you can't necessarily expect to see nanotechnology now.  And yet nonetheless the advocates of nanotechnology have gone to whatever extent possible to provide the arguments by which you could, today, figure out whether or not you live in a world where molecular nanotechnology is possible.  Eric Drexler put forth around six years of hard work to produce Nanosystems, doing as much of the basic physics as one man working more or less alone could be expected to do; and since then Robert Freitas, in particular, has been designing and simulating molecular devices, and even trying to work out simple synthesis paths, which is about as much as one person could do, and the funding hasn't really been provided for more than that.  To ignore all this hard work that has been put into providing you with such observations and arguments as can be reasonably obtained, and throw them out the window because of a demand for particular proof that you think they can't obtain and that they wouldn't be able to obtain even if the proposition at hand is true - this is not just invalid as probability theory, not just defensiveness rather than curiosity, it is logically rude.

Although actually, of course, you can see tiny molecularly-precise machines on parade, including freely rotating gears and general assemblers - just look inside a biological cell, particularly at ATP synthase and the ribosome.  But by that power of cognitive chaos which generates the demand for unobtainable proof in the first place, there can be little doubt that, as soon as this overlooked demonstration is pointed out, the one will immediately find some clause by which to exclude it.  To actually provide the unobtainable proof would hardly be fair, after all.



D.  It is invalid as probability theory, suboptimal as decision theory, and generally insane, to insist that you want to see someone develop a full-blown Artificial General Intelligence before you'll credit that it's worth anyone's time to work on problems of Friendly AI.

Not precisely analogous to the above cases, but it is a demand for particular proof.  Delineating the specifics is left as an exercise to the reader.

70 comments

There's a somewhat analogous case I often encounter (in my secret identity), along the lines of "You've shown evidence that programmers writing unit tests is beneficial to software projects in some particular cases, but until you show me published academic empirical studies saying that unit tests always save time and reduce bug counts, I'm going to keep writing code as before (no tests and trusting to hope as a method of proof)."

The absence of particular proof serves to dismiss even the readily available opportunities for self-experimentation which would allow the respondent to generate the very evidence they require.

Good call. Reminds me of a common attitude around here towards self-help/anti-akrasia techniques....

orthonormal
Not really, IMO. I should have posted my results before, but I've tried several of the methods suggested here, with some success, total failure, sustained success† and additional success††, respectively. I'd suspect many others have been trying various techniques as well. Antipathy toward a particular self-proclaimed expert on self-help and akrasia around here shouldn't be seen as a rejection of all techniques. † I modified taw's system to help me work more and waste less time on the Internet during the day. Briefly, from the time I wake up to the time I decide I'm done working for the day, I count time researching (or other productive active work) at 1 point per minute, listening to lectures at 1/2 point per minute, and wasting time on the Internet (including this site, sorry everyone) at -1 point per minute. I set my "good day" threshold at +180, and I've added a (somewhat complicated) reward system calculated easily on a spreadsheet. Then I can check LW and other sites guilt-free after the day is done. It's worked pretty well! †† This wasn't the point of that post, but I've used it to my advantage with the add-on LeechBlock, which makes it just that much more inconvenient to stop working and start trawling webcomics/Twitter/LW. ETA: Also, some success and marginal success.
MichaelVassar
Though I think that SIAI itself does a better job than that, don't you?
[anonymous]
I think at least most at SIAI do better. (I also think that attitude intuitively seems more common to me than it really is, simply because it annoys me so much.)

an ambulance ride to the future

That feels extremely poignant to me, for some reason. Cryonics doesn't cut it from a Darwinist perspective. But you don't let people die even though saving them will cost more than making a new human, or do you?

Click.

mattnewport
Are you speaking normatively or descriptively? We do routinely let people die even though saving them would not cost very much. People with the wealth to pay for treatment, or health insurance coverage, or who are born in a relatively wealthy country with government provided healthcare, are often saved at quite high cost. The majority of the world's population has much less access to expensive health care however and in many cases we let those people die even though it would be relatively cheap to save them. Economically it makes sense to spend more saving an existing person than creating a new one, either because they themselves (or their family or friends) place a high value on their particular life or more generally because a person with already developed skills and experience potentially offers a higher return on investment than a new person who will require years of expensive education to be economically productive. That could potentially be framed as an argument for cryonics but it seems less likely that a preserved human would offer economically valuable skills to a future society with revival technology.

There is some extremely small probability that the theory of evolution is false, and the evidence of this has been withheld from us by some kind of plot. This hypothesis is supported by the absence of time cameras (since time cameras would resolve the matter), and so the absence of time cameras must increase the probability that evolution is false... even if only by 1/3^^^^^3, or something like that.

Peter_de_Blanc
Silly Unknowns. 0, 1, and 1/3^^^^^3 are not probabilities.
Nick_Tarleton
Why not 1/3^^^^^3?
Eliezer Yudkowsky
You can't imagine anything that improbable. Unless we adopt Robin's anthropic penalty, in which case "I am in a unique position to affect 3^^^^^3 other people" is that improbable.

Why not 1/3^^^^^3?

You can't imagine anything that improbable.

Actually, the beauty of mathematics is that it enables us to imagine such things -- just as surely as it tells us that there ain't nothin' we're talkin' about that's anywhere near that.

I can't imagine quarks either.

1/3^^^^^3 is a probability. A stupid probability, but a probability nonetheless. And if you declare 1/3^^^^^3 to be not a probability because of its unimaginable uselessness then by the same standard I expect you to consider 3^^^^^3 'Not a Number'. I know you routinely use arbitrarily large numbers like 3^^^3 for decision theoretic purposes (on Halloween costumes!) and that is a number that is more or less chosen because it is already unimaginable.

Liron
log_2(3^^^^^3) heads in a row?
Eliezer Yudkowsky
Coin's fixed.
Liron
Ah, so you meant: No physically possible series of Bayesian updates can promote a hypothesis to prominence if its prior probability is that low. And Peter meant: It is decision-theoretically useless to include a subroutine for tracking probability increments of 1/3^^^^^3 in your algorithm. But the non-Bayesian source of your Bayesian prior might output 1/3^^^^^3 as the prior probability of an event -- as surely for the coin flip example as for Robin Hanson's anthropic one.
Eliezer Yudkowsky
To be precise, it's impossible to describe any sense event with a prior probability that low. You can describe hypotheses conditional on which a macro-event has a probability that low. For example, conditional on the hypothesis that a coin is fixed to have a 1/3^^^3 probability of coming up heads, the probability of seeing heads is 1/3^^^3. But barring the specific and single case of Hanson's hypothesized anthropic penalty being rational, I know of no way to describe, in words, any hypothesis which could justly be assigned so low a prior probability as 1/3^^^3. Including the hypothesis that purple is falling upstairs, that my socks are white and not white, or that 2 + 2 = 5 is a consistent theorem of Peano arithmetic.
Nick_Tarleton
The log_2(3^^^^^3) consecutive binary digits of pi starting from number 3^^^^^3 are 0?
[anonymous]
The simulators are messing with you.
Unknowns
Then our minds are "fixed" too, just like the coin.
wedrifid
How many dustspecks in the eye are you willing to bet on that?
beriukay
If this were the case, then what is to stop me from thinking of N>3^^^^^3 impossible methods of gaining evidence (aliens from Mars, or Planet X, or the past, or from Cygnus XJ45, or another dimension...), and then claiming that since these probabilities are mutually independent, summing up the positive probabilities, and claiming evolution (or any theory) to be unlikely to be true? I mean, aside from the thing about probability theory being invalid, which I haven't seen before. Also, thank you Eliezer, for explaining why the argument about cryonics is logically rude. I've been banging my head on this exact topic with a friend for the past week and have been unable to get past that road block with her.
Unknowns
First, you can't think of 3^^^^^3 ways of gaining evidence, possible or impossible, because there are not that many possible distinct states of a human brain (or of the physical universe, for that matter.) Second, the more complex your hypothesis, the less probable it will be, so some hypotheses might only change the probability by 1/3^^^^^^^^^^^^^^^^^^^^^^^^^^^3, or even less, and so it is perfectly possible to sum them all and still only move the probability by a very small amount.
beriukay
You and Eliezer make good points, thank you. I just started reading about negative probabilities. I don't believe I've heard of them before. Just to be clear, I never claimed that the sum of possibilities would diverge, though I don't think I gave proper attention to the prior probability distribution summing to 1. I did not mean to imply that I would individually think up every single impossible possibility. I figured it would be enough to hook into some countably infinite set and show that it is just a subset of all the possible impossibilities we could generate. One could simply tap into the Infinite Earths of DC Comics to construct an argument that resembles the original lack of time camera argument: The absence of any Superman from Earths 1 through 3^^^^^^^^^^^^3 confirming evolution must elevate the probability that evolution is false. In my obviously absurd example, I am not sure why Superman from Earth 5000 would be any more complex than Superman from Earth 500. Though I suppose the numbering system would indicate an elevated degree of difficulty crossing over, perhaps. It is true, I didn't account for all the parallel Earths where Superman is evil, or disinterested in our Earth, or unable to get here. My mind boggles at the possibilities. I think my original complaint remains, though. Why would the absence of evidence from something that is admittedly impossible increase the probability of something being false? I suppose I am complaining too much for such a tiny marginal increase in probability, since a random person on the street shouting "evolution is false!" is probably going to sway your opinion to a far larger degree than some 1/3^^^^^^^^^^^^^^^^^^^^^^^^^^^3 event. However, it strikes me as strange that someone should feel obligated to disbelieve (even a tiny bit) evolution on the grounds that Superman 3^^3 didn't tell him/her that it was true. (edit: I got the Superman 500 mixed up with 5000)
Unknowns
The point was that you can easily have a sum of an infinite series that adds to some small finite amount. Assuming "Earth 5000" is defined differently from "Earth 500" (which it must be in order to have a new hypothesis), your different hypotheses will have different complexities depending on the complexity of the number. Overall (but not in every single instance) the higher the number, the more complex the hypothesis, so the less the probability will be changed. There is no reason for this infinite sum not to converge to an extremely small quantity. In any case (and this may be Peter de Blanc's point), these probabilities are smaller than the sensitivity of the human judgement: so in fact, subjectively you don't need to feel obliged to change your opinion at all based on them.
beriukay
That makes sense. I guess as long as the sum of the infinitely many absurdly contrived possibilities remains less than rounding error and/or sensitivity of human judgment, I have no qualms with your original point.
Eliezer Yudkowsky
There are counterbalancing negative possibilities, you can't sum over just the positive ones. And since a prior probability distribution sums to 1, the contribution of even solely the positive possibilities must converge to a finite sum rather than diverging.
[anonymous]
It happens that I've also been banging my head on this exact topic with a friend about a week ago and failed to get past that road block with him. Upvotes are in order.

A thought on cryonics: How many people suffer information-theoretic death because of Alzheimer's Disease, strokes, or other such causes long before they stop breathing? (My two living grandparents both seem to be among them.)

Paul Crowley
We don't know whether Alzheimer's is information-theoretically reversible or not, AFAIK. EDIT: I'm wrong, for some reason I thought we knew less than we do.
CronoDAS
How about multi-infarct dementia?
Paul Crowley
Ouch, OK, that does look information-theoretically hard.
Eliezer Yudkowsky
Doesn't look very information-theoretically hard to me. Partial preservation of function probably implies near-total preservation of information.
CronoDAS
Yeah, and it's what my mom says that my grandmother has. :(
Paul Crowley
It's hardly consolation, but from what I understand of your family it's hardly as if she would be cryopreserved upon legal death anyway, so it hardly matters either way. I take it Pratchett isn't signed up? Why the very rich don't sign up mystifies me so.
CronoDAS
Now that you mention it, if not for the Alzheimer's, I'd pay to cryopreserve him. The great scientists and mathematicians of the past wouldn't be of much use in the present, but how much would people today pay to resurrect Shakespeare or Mozart?

The money is hardly the object: it's persuading him that it's worthwhile that's the difficulty.

From what he's been saying recently about assisted suicide, he may not be planning on living long enough for the worst of the damage to take place. This makes him a particularly good candidate for cryopreservation, except that celebrity + assisted suicide + cryonics = absolutely massive shitstorm.

Roko
Have you considered emailing him or otherwise trying to get through? I don't see that you'd do any damage, and it wouldn't take long.
Paul Crowley
Judging by the number of people I've met who fall into this category, Terry Pratchett has at least 10,000 close personal friends; I'd probably be better off persuading one of them to do it. However, I will bend Charlie Stross's ear on this subject if I get the opportunity. EDIT: to be clear, the possible damage is that if my email doesn't succeed, it raises the bar the second such email has to reach to be persuasive.
Roko
I doubt Stross would listen. He's too self-righteous. But cool that you know him.
Nick_Tarleton
What of relevance do we know? Links? (Or is this in response to CronoDAS's link? The article says multi-infarct dementia isn't Alzheimer's.)
Paul Crowley
I don't know any more than is at the end of that link; someone who knew the subject could doubtless say much more. There's a remark about cryonics and Alzheimer's in this Ralph Merkle article:

won't get on the airplane until after it arrives at the demonstration.

"destination"?

Eliezer Yudkowsky
fixed

I'm not sure that demanding particular proof is such a bad thing. Often when I disagree with someone I find it helpful to ask them for a list of things that would convinced them. If it is something that we don't expect to see (such as the time-camera) then one can explain why that's a bad standard. More often, in cases like evolution, what people demand is something directly contradicted by the hypothesis (a dog giving birth to a cat seems to be a common one). So even if specific demands for particular pieces of evidence are bad, they are useful to ask fo... (read more)

[anonymous]
I don't think the post is saying anything against asking for someone's evidence.

Before you reject that proposition out of hand for containing the substring "evidence against the theory of evolution"

I love this clause. It's worthy of its own post.

RobinZ
It strikes me as related to Policy Debates Should Not Appear One-Sided.

A thought on nanotechnology: considering that biological cells already have most of the capabilities of molecular nanotechnology, and that said cells have been undergoing natural selection for over a billion years, if something better were possible, it probably would have evolved by now. For example, I'd be very surprised if somebody one day makes a machine that's significantly better at protein synthesis than a ribosome is. I suspect that future nanotechnology will look a lot like today's biological systems.

Um... that's a rather odd argument to make, considering steel, wheels, nuclear power, transistors, radio, lasers, books, LEDs...

Proteins are held together by van der Waals forces, which are much weaker than covalent bonds. Preliminary calculations show gargantuan opportunities for improvement (see Drexler's Nanosystems).

CronoDAS
::urge to play devil's advocate rising:: Well, our power sources still have some disadvantages when compared to cellular respiration - we can't yet build insect-size robots because we don't have a practical way to power them. And wheels are bad when there are no roads. Ever ridden a bicycle on rough terrain? It's awful. Also, how does the information storage density of DNA compare to books? As for LEDs, fireflies are still more efficient than anything humans designed. Steel? Spider silk has a higher tensile strength. Given the constraints that biological systems operate under, they tend to be very, very good at what they do. Transistors, though, I'll give you. ;)
Christian_Szegedy
:) Still, all your arguments could have been said half a billion years ago: There was DNA, super developed arthropods (maybe fireflies and spiders?) and plants that photosynthesized more efficiently than today's solar cells. Still, evolution did not stop there, the Cambrian explosion and the rise of vertebrates was imminent... Now we are having a new explosion which is based on a completely different paradigm, is a million times faster and accelerates.
whpearson
Nit-pick: 500 million years ago the Cambrian explosion had happened already. It was 530 million years ago.
AllanCrossman
I'm not sure how this affects the argument, but the very flexibility of proteins is one of the things that makes them work. A whole bunch of biological reactions involve enzymes changing shape in response to some substance.
AllanCrossman
I don't think this argument works. Adaptive evolution has mostly been driven by DNA mutations and natural selection. DNA is transcribed to RNA and then translated into proteins. I'm not sure evolution (of Earth's cell-based life) could produce something radically different, because this central mechanism is so fundamental and so entrenched.
CronoDAS
You could be right; the cellular machinery hasn't changed very much for ages, so it certainly could have gotten caught in a local optimum. We don't know very much about what life looked like before modern cells, so we don't know what our current cellular machinery competed against.
Jack
I don't necessarily disagree but couldn't you say the same thing about brains and intelligence?
CronoDAS
/me shrugs Brains have been around for far less time than cells.
Jack
I guess. I'm not sure we can justify drawing that arbitrary line just because we want novelty in synthetic intelligence but little novelty in protein production. And I don't really think 2 billion more years of evolution is going to produce the kind of intelligences most people around here are expecting to see in the next couple hundred years. Part of the reason people like the prospects for better intelligence is that we can identify really obvious ways in which ours could be improved. I wonder if there are systematic errors made in cellular mechanics.
Technologos
And in particular, there's good reason to believe that brains are still evolving at a decent pace, where it looks like cell mechanisms largely settled a long while back.
Dre
I don't know that much about the topic, but aren't viruses more efficient at many things than normal cells? Could there be opportunities for improvement in current biological systems through better understanding of viruses?
Roko

Whilst this is good epistemology, I have low expectations for the number of people that this good argument will move. Nevertheless, if you can move another one percent of one percent of the people who read LessWrong, you have made a positive impact.

Vladimir_Nesov
There is more to a methodological article than changing someone's mind: you are making a method more explicit, and making stronger those who would otherwise try to construct an ad-hoc version of it on the fly.
Roko
That is a good point. A humble first step towards persuasion is laying out the case as clearly as possible.
tut
I have negative expectations for that. You can't say "I have worked so hard to reach my own goals that if you don't pay me you are rude". The would-be customers decide what reassurances they require, and if you can't meet their requirements, too bad for your business. Is this actually more than one person?
[anonymous]
I can see how the second part of that could be read into the post, but I have no idea where you're getting the first part. In a sense, this is true, but some demands for reassurance are nevertheless objectively unreasonable.
Roko
According to the sitemeter, we have had 10^6 hits. Depends what you mean by "read", I guess. Many hits will be one-time only.
tut
But there are only about 4k siteviews per day, and some of those are probably by the same person. Unless you meant 'have read something once...'
JamesAndrix
If a person with money can't accept that they were mistaken when you explain it to them, you're probably better off trying to get money from someone else. I think there are people with money who can understand that demanding particular proof is unreasonable, even if they did it.