
Comment author: V_V 12 February 2016 11:37:00PM 6 points [-]
  • "Bayes vs Science": Can you consistently beat the experts in (allegedly) evidence-based fields by applying "rationality"? AI risk and cryonics are specific instances of this issue.

  • Can rationality be learned, or is it an essentially innate trait? If it can be learned, can it be taught? If it can be taught, do the "Sequences" and/or CFAR teach it effectively?

Comment author: gwern 12 February 2016 06:57:51PM 1 point [-]

But the point is, who is in the wrong between the adopters and the non-adopters?

If the new evidence in favor of cryonics' benefits causes no increase in adoption, then either there is also new countervailing evidence, or costs have changed, or the non-adopters are the more irrational side. Since I can't think of any body of new research or evidence which should neutralize the many pro-cryonics lines of research over the past several decades, and the costs have remained relatively constant in real terms, that tends to leave the third option.

(Alternatively, I could be wrong about whether non-adopters have updated towards cryonics; I wasn't around for the '60s or '70s, so maybe all the neuroscience and cryopreservation work really has made a dent and people in general are much more favorable towards cryonics than they used to be.)

Comment author: V_V 12 February 2016 10:33:14PM *  1 point [-]

If the new evidence in favor of cryonics' benefits causes no increase in adoption, then either there is also new countervailing evidence, or costs have changed, or the non-adopters are the more irrational side.

No. If the evidence is against cryonics, and has always been, then the number of rational adopters should be approximately zero; thus approximately all the adopters are the irrational ones.

As you say, the historical adoption rate seems to be independent of cryonics-related evidence, which supports the hypothesis that the adopters don't sign up because of an evidence-based rational decision process.

Comment author: gwern 12 February 2016 08:30:29PM 1 point [-]

That's not true. I can think of at least 3 ways in which a society which has demonstrated successful revival could also still need to freeze people:

  1. You could die of something that will be curable in a few years, and you know with high confidence what you will wake up as, because society and revival methods won't change much.
  2. The emulation route could wind up being best long before magic nanobots cure all bodily ills, so you must die (so your brain is fixed well enough for slicing & scanning), but you know what you will wake up as almost immediately.
  3. There could be treatments or cures, but of poor enough efficacy that you rationally prefer the risks of immediate death-then-preservation to trying them (you have a fatal disease which can be cured only by a prefrontal lobotomy; alternatively, you can go into cryopreservation; which do you prefer?).
Comment author: V_V 12 February 2016 10:26:35PM *  0 points [-]

4. You have a neurodegenerative disease; you can survive for years, but if you wait there will be little left to preserve by the time your heart stops.

Comment author: qmotus 12 February 2016 04:03:59PM 0 points [-]

A major difference here is that if I sign up for those medical procedures, then I pretty much know what to expect: there is a slight chance that I get cured, and that's it. This is not the case with cryonics. I find it quite likely that cryonics would work, but there's hardly any certainty regarding what happens then: I might wake up in just about any form (in a biological body, as an upload) in just about any kind of future society. I would have hardly any control over the outcome whatsoever.

Sure, maybe there would be many more who would sign up, but nevertheless I think it takes a very special kind of person to be ready to take such a leap into the unknown.

Comment author: V_V 12 February 2016 05:54:20PM 0 points [-]

If revival had already been demonstrated, then you would pretty much already know what form you were going to wake up in.

Comment author: gwern 10 February 2016 09:25:05PM *  22 points [-]

Probably not. If you look at the comments on posts about the Prize, you can see how clearly people have already set up their fallback arguments once the soldier of 'possible bad vitrification when scaled up to human brain size' has been knocked down. For example, on HN: https://news.ycombinator.com/item?id=11070528

  • 'you may have preserved all the ultrastructure but despite the mechanism of crosslinking, I'm going to argue that all the real important information has been lost'
  • 'we already knew that glutaraldehyde does a good job of fixating, this isn't news, it's just a con job looking for some free money'
  • 'it irreversibly kills cells by fixing them in place so this is irrelevant'
  • 'regardless of how good the scans look, this is just a con job'
  • 'what's the big deal, we already know frogs can do this, but what does it have to do with humans; anyway, it's a quack science which we know will never work'

Even if a human brain is stored, successfully scanned, and emulated, the continued existence - nay, majority - of body-identity theorists ensures that there will always be many people who have a bulletproof argument against: 'yeah, maybe there's a perfect copy, but it'll never really be you, it's only a copy waking up'.

More broadly, we can see that there is probably never going to be any 'Sputnik moment' for cryonics, because the adoption curve of paid-up members or cryopreservations is almost eerily linear over the past 50 years and entirely independent of the evidence. Refutation of 'exploding lysosomes' didn't produce any uptick. Long-term viability of ALCOR has not produced any uptick. Discoveries always pointing towards memory being a durable feature of neuronal connections rather than, as so often postulated, an evanescent dynamic property of electrical patterns, have never produced an uptick. Continued pushbacks of 'death' have not produced upticks. No improvement in scanning technology has produced an uptick. Moore's law proceeding for decades has produced no uptick. Revival of rabbit kidney, demonstration of long-term memory continuity in revived C. elegans, improvements in plastination and vitrification - all have not or are not producing any uptick. Adoption is not about evidence.

Even more broadly, if you could convince anyone, how many do you expect to take action? To make such long-term plans on abstract bases for the sake of the future? We live in a world where most people cannot save for retirement and cannot stop becoming obese and diabetic despite knowing full well the highly negative consequences, and where people who have survived near-fatal heart attacks are generally unable to take their medicines and exercise consistently as their doctors keep begging them. And for what? Life sucks, but at least then you get to die. Even after a revival, I would predict that maybe 5% of the USA population (~16m people) would be meaningfully interested in cryonics, and of that only a fraction would go through with it, so 'millions' is an upper bound.

Comment author: V_V 12 February 2016 10:29:12AM 3 points [-]

Adoption is not about evidence.

Right. But the point is, who is in the wrong between the adopters and the non-adopters?

It can be argued that there was never good evidence to sign up for cryonics, therefore the adopters did it for irrational reasons.

Comment author: Brillyant 10 February 2016 07:36:11PM *  1 point [-]

I'm not sure this distinction, while significant, would ensure "millions" of people wouldn't sign up.

Presumably, preserving a human brain "successfully", according to some reasonable definition of the term, would be a big deal and cause a lot of interest in cryonics. It would certainly seem like significant progress towards the sort of life-extension that LW's been clamoring about.

Exactly how many new contracts they would get seems hard to predict, but I don't consider a number larger than 1,000,000 to be unreasonable.

Comment author: V_V 12 February 2016 10:08:51AM *  2 points [-]

I'm not sure this distinction, while significant, would ensure "millions" of people wouldn't sign up.

Millions of people do sign up for various expensive and invasive medical procedures that offer them a chance to extend their lives a few years or even a few months. If cryonics demonstrated a successful revival, then it would be considered a life-saving medical procedure and I'm pretty confident that millions of people would be willing to sign up for it.

People haven't signed up for cryonics in droves because right now it looks less like a medical procedure and more like a weird burial ritual with a vague promise of future resurrection, a sort of reinterpretation of ancient Egyptian mummification with an added sci-fi vibe.

Comment author: RichardKennaway 05 February 2016 04:27:52PM 0 points [-]

I'm aware of ELIZA, and of Yvain's post. ELIZA's very shallow, and the interactive setting gives it an easier job than coming up with 1000 words on "why to have goals" or "5 ways to be more productive". I do wonder whether some of the clickbait photo galleries are mechanically generated.

Comment author: V_V 07 February 2016 10:03:46AM 0 points [-]
Comment author: Houshalter 31 January 2016 02:29:38PM 1 point [-]

The difference with Markov models is that they tend to overfit at that level. At 20 characters deep, you are just copying and pasting large sections of existing code and language, not generating entirely unseen samples. You can do a similar thing with RNNs, by training them only on one document. They will be able to reproduce that document exactly, but nothing else.

To properly compare with a Markov model, you'd need to first tune it so it doesn't overfit. That is, when it's looking at an entirely unseen document, its guess of what the next character should be is most likely to be correct. The best setting for that is probably only 3-5 characters, not 20. And when you generate from that, the output will be much less legible. (And even that's kind of cheating, since Markov models can't give any prediction for sequences they've never seen before.)
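A minimal sketch of that tuning procedure, in Python (the corpus file names are hypothetical, and plain next-character accuracy is used instead of perplexity purely for illustration):

```python
from collections import Counter, defaultdict

def train_char_markov(text, order):
    """Look-up table of next-character frequency counts,
    conditioned on the previous `order` characters."""
    counts = defaultdict(Counter)
    for i in range(len(text) - order):
        counts[text[i:i + order]][text[i + order]] += 1
    return counts

def next_char_accuracy(counts, text, order):
    """Fraction of positions in an unseen text where the model's most
    likely next character is the true one; unseen contexts count as misses."""
    correct = total = 0
    for i in range(len(text) - order):
        context = text[i:i + order]
        if context in counts:
            correct += counts[context].most_common(1)[0][0] == text[i + order]
        total += 1
    return correct / total

train = open("kernel_train.txt").read()       # hypothetical training split
held_out = open("kernel_heldout.txt").read()  # hypothetical unseen document
for order in (3, 5, 10, 20):
    model = train_char_markov(train, order)
    print(order, next_char_accuracy(model, held_out, order))
```

On a modest corpus, most 20-character contexts occur only once in training, which is the overfitting being described: long contexts either reproduce the training text verbatim or give no prediction at all.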

Generating samples is just a way to see what patterns the RNN has learned. And while it's far from perfect, it's still pretty impressive. It's learned a lot about syntax, a lot about variable names, a lot about common programming idioms, and it's even learned some English from just code comments.

Comment author: V_V 31 January 2016 08:23:08PM *  0 points [-]

The best setting for that is probably only 3-5 characters, not 20.

In NLP applications where Markov language models are used, such as speech recognition and machine translation, the typical setting is 3 to 5 words. 20 characters correspond to about 4 English words, which is in this range.

Anyway, I agree that in this case the order-20 Markov model seems to overfit (Googling some lines from the snippets in the post often locates them in an original source file, which doesn't happen as often with the RNN snippets). This may be due to the lack of regularization ("smoothing") in the probability estimation and the relatively small size of the training corpus: 474 MB versus the >10 GB corpora which are typically used in NLP applications. Neural networks need lots of data, but still less than plain look-up tables.
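For concreteness, a minimal sketch of what add-one ("Laplace") smoothing of the character probabilities could look like; this is just one simple smoothing scheme, and the counts structure (context string mapped to a Counter of next characters) is my assumption:

```python
from collections import Counter

def smoothed_prob(counts, context, char, alphabet_size, alpha=1.0):
    """P(char | context) with add-alpha smoothing, so contexts and characters
    never seen in training still get a small nonzero probability."""
    c = counts.get(context, Counter())
    return (c[char] + alpha) / (sum(c.values()) + alpha * alphabet_size)
```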

Comment author: Houshalter 29 January 2016 07:03:37PM 0 points [-]

The interesting thing about that RNN you linked that writes code is that it shouldn't work at all. It was just given text files of code and told to predict the next character. It wasn't taught how to program, it never got to see an interpreter, it doesn't know any English yet has to work with English variable names, and it only has a few hundred neurons to represent its entire knowledge state.
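A rough sketch of that next-character training setup (the framework, file name, model size and hyperparameters here are my own illustrative choices, not necessarily those of the linked work):

```python
import torch
import torch.nn as nn

# Character-level language model: given the preceding characters, predict the next one.
text = open("linux_kernel.txt").read()   # hypothetical training file
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
data = torch.tensor([stoi[c] for c in text], dtype=torch.long)

class CharRNN(nn.Module):
    def __init__(self, vocab, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, x, state=None):
        h, state = self.rnn(self.embed(x), state)
        return self.out(h), state

model = CharRNN(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=3e-3)
loss_fn = nn.CrossEntropyLoss()
seq_len = 100

for step in range(10000):
    i = torch.randint(0, len(data) - seq_len - 1, (1,)).item()
    x = data[i:i + seq_len].unsqueeze(0)          # a window of characters
    y = data[i + 1:i + seq_len + 1].unsqueeze(0)  # the same window shifted by one
    logits, _ = model(x)
    loss = loss_fn(logits.view(-1, len(chars)), y.view(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Everything the model ends up "knowing" about syntax, variable names and comments has to be squeezed out of this single next-character objective.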

The fact that it is even able to produce legible code is amazing, and suggests that we might not be that far off from NNs that can write actually usable code. Still some ways away, but not multiple decades.

Comment author: V_V 30 January 2016 04:21:34PM *  3 points [-]

The fact that it is even able to produce legible code is amazing

Somewhat. Look at what happens when you generate code from a simple character-level Markov language model (that's just a look up table that gives the probability of the next character conditioned on the last n characters, estimated by frequency counts on the training corpus).
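A minimal sketch of such a generator, in Python (the corpus file name and the seeding choice are placeholders for illustration):

```python
import random
from collections import Counter, defaultdict

def train_char_markov(text, order):
    """Look-up table of next-character frequency counts,
    conditioned on the previous `order` characters."""
    counts = defaultdict(Counter)
    for i in range(len(text) - order):
        counts[text[i:i + order]][text[i + order]] += 1
    return counts

def generate(counts, seed, length, order):
    """Sample one character at a time from the conditional frequencies."""
    out = seed
    for _ in range(length):
        options = counts.get(out[-order:])
        if not options:   # context never seen in training: dead end
            break
        chars, weights = zip(*options.items())
        out += random.choices(chars, weights=weights)[0]
    return out

corpus = open("linux_kernel.txt").read()   # hypothetical corpus file
model = train_char_markov(corpus, order=20)
print(generate(model, corpus[:20], length=500, order=20))
```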

An order-20 language model generates fairly legible code, with sensible use of keywords, identifier names and even comments. The main difference with the RNN language model is that the RNN learns to do proper indentation and bracket matching, while the Markov model can't do it except at short range.

While, as remarked by Yoav Goldberg, it is impressive that the RNN could learn to do this, learning to match brackets and indent blocks seems very far from learning to write correct and purposeful code.

Anyway, this code generation example is pretty much a stunt, not a very interesting task. If you gave the Linux kernel source code to a human who has never programmed and doesn't speak English and asked them to write something that looks like it, I doubt that they would be able to do much better.

Better examples of code generation using NNs (actually, log-bilinear models) or Bayesian models exist (ref, ref). In these works syntactic correctness is already guaranteed and the ML model only focuses on semantics.

Comment author: Houshalter 30 January 2016 03:57:41AM 0 points [-]

I never said that every engineer at every point in time was pessimistic. Just that many of them were at one time. And I said it was a second hand anecdote, so take that for what it's worth.

Comment author: V_V 30 January 2016 03:25:52PM 2 points [-]

You have to be more specific with the timeline. Transistors were invented in 1925 but received little interest due to many technical problems. It took three decades of research before Texas Instruments produced the first commercial silicon transistors in 1954.

Gordon Moore formulated his eponymous law in 1965, while he was director of R&D at Fairchild Semiconductor, a company whose entire business consisted of manufacturing transistors and integrated circuits. By that time, tens of thousands of transistor-based computers were in active commercial use.
