Video games with procedural generation of the game universe have existed since forever, what's new here?
"Bayes vs Science": Can you consistently beat the experts in (allegedly) evidence-based fields by applying "rationality"? AI risk and cryonics are specific instances of this issue.
Can rationality be learned, or is it an essentially innate trait? If it can be learned, can it be taught? If it can be taught, do the "Sequences" and/or CFAR teach it effectively?
But the point is, who is in the wrong between the adopters and the non-adopters?
If the new evidence in favor of cryonics' benefits causes no increase in adoption, then either there is also new countervailing evidence, or costs have changed, or the non-adopters are the more irrational side. Since I can't think of any body of new research or evidence that should neutralize the many pro-cryonics lines of research over the past several decades, and costs have remained roughly constant in real terms, that tends to leave the third option.
(Alternatively, I could be wrong about whether non-adopters have updated towards cryonics; I wasn't around for the '60s or '70s, so maybe all the neuroscience and cryopreservation work really has made a dent and people in general are much more favorable towards cryonics than they used to be.)
If the new evidence in favor of cryonics' benefits causes no increase in adoption, then either there is also new countervailing evidence, or costs have changed, or the non-adopters are the more irrational side.
No. If evidence is against cryonics, and it has always been this way, then the number of rational adopters should be approximately zero, thus approximately all the adopters should be the irrational ones.
As you say, the historical adoption rate seems to be independent of cryonics-related evidence, which supports the hypothesis that the adopters don't sign up as the result of an evidence-based rational decision process.
That's not true. I can think of at least 3 ways in which a society which has demonstrated successful revival could also still need to freeze people:
- You could die of something that will be curable in a few years, and you would know with high confidence what you will wake up as, because society and revival methods won't change much.
- The emulation route could wind up being best long before magic nanobots cure all bodily ills, so you must die (so your brain is fixed well enough for slicing & scanning) but you know what you will wake up as almost immediately.
- There could be treatments or cures, but of poor enough efficacy that you rationally prefer the risk of immediate death-then-preservation to trying them (you have a fatal disease which can be cured only by a prefrontal lobotomy; alternatively, you can go into cryopreservation; which do you prefer?).
4. You have a neurodegenerative disease: you can survive for years, but if you wait, there will be little left to preserve by the time your heart stops.
A major difference here is that if I sign up for those medical procedures, then I pretty much know what to expect: there is a slight chance that I get cured, and that's it. This is not the case with cryonics. I find it quite likely that cryonics would work, but there's hardly any certainty regarding what happens then: I might wake up in just about any form (in a biological body, as an upload) in just about any kind of future society. I would have hardly any control over the outcome whatsoever.
Sure, maybe there would be many more who would sign up, but nevertheless I think it takes a very special kind of person to be ready to take such a leap into the unknown.
If revival had already been demonstrated, then you would pretty much already know what form you were going to wake up in
Probably not. If you look at the comments on posts about the Prize, you can see how clearly people have already set up their fallback arguments once the soldier of 'possible bad vitrification when scaled up to human brain size' has been knocked down. For example, on HN: https://news.ycombinator.com/item?id=11070528
- 'you may have preserved all the ultrastructure but despite the mechanism of crosslinking, I'm going to argue that all the real important information has been lost'
- 'we already knew that glutaraldehyde does a good job of fixating, this isn't news, it's just a con job looking for some free money'
- 'it irreversibly kills cells by fixing them in place so this is irrelevant'
- 'regardless of how good the scans look, this is just a con job'
- 'what's the big deal, we already know frogs can do this, but what does it have to do with humans; anyway, it's a quack science which we know will never work'
Even if a human brain is stored, successfully scanned, and emulated, the continued existence - nay, majority - of body-identity theorists ensures that there will always be many people who have a bulletproof argument against: 'yeah, maybe there's a perfect copy, but it'll never really be you, it's only a copy waking up'.
More broadly, we can see that there is probably never going to be any 'Sputnik moment' for cryonics, because the adoption curve of paid-up members or cryopreservations is almost eerily linear over the past 50 years and entirely independent of the evidence. Refutation of 'exploding lysosomes' didn't produce any uptick. Long-term viability of ALCOR has not produced any uptick. Discoveries consistently pointing towards memory being a durable feature of neuronal connections rather than, as so often postulated, an evanescent dynamic property of electrical patterns, have never produced an uptick. Continued pushbacks of 'death' have not produced upticks. No improvement in scanning technology has produced an uptick. Moore's law proceeding for decades has produced no uptick. Revival of a rabbit kidney, demonstration of long-term memory continuity in revived C. elegans, improvements in plastination and vitrification: none of these have produced any uptick. Adoption is not about evidence.
Even more broadly, even if you could convince anyone, how many would you expect to take action? To make such long-term plans on abstract bases for the sake of the future? We live in a world where most people cannot save for retirement and cannot stop becoming obese and diabetic despite knowing full well the highly negative consequences, and where people who have survived near-fatal heart attacks are generally unable to take their medicines and exercise consistently no matter how much their doctors keep begging them to. And for what? Life sucks, but at least then you get to die. Even after a revival, I would predict that maybe 5% of the USA population (~16m people) would be meaningfully interested in cryonics, and of that only a fraction would go through with it, so 'millions' is an upper bound.
Adoption is not about evidence.
Right. But the point is, who is in the wrong between the adopters and the non-adopters?
It can be argued that there was never good evidence to sign up for cryonics, therefore the adopters did it for irrational reasons.
I'm not sure this distinction, while significant, would ensure "millions" of people wouldn't sign up.
Presumably, preserving a human brain "successfully", according to some reasonable definition of the term, would be a big deal and cause a lot of interest in cryonics. It would certainly seem like significant progress towards the sort of life-extension that LW's been clamoring about.
Exactly how many new contracts they would get seems hard to predict, but I don't consider a number larger than 1,000,000 unreasonable.
I'm not sure this distinction, while significant, would ensure "millions" of people wouldn't sign up.
Millions of people do sign up for various expensive and invasive medical procedures that offer them a chance to extend their lives a few years or even a few months. If cryonics demonstrated a successful revival, then it would be considered a life-saving medical procedure and I'm pretty confident that millions of people would be willing to sign up for it.
People haven't signed up for cryonics in droves because right now it looks less like a medical procedure and more like a weird burial ritual with a vague promise of future resurrection, a sort of reinterpretation of ancient Egyptian mummification with an added sci-fi vibe.
I'm aware of ELIZA, and of Yvain's post. ELIZA's very shallow, and the interactive setting gives it an easier job than coming up with 1000 words on "why to have goals" or "5 ways to be more productive". I do wonder whether some of the clickbait photo galleries are mechanically generated.
The difference with Markov models is that they tend to overfit at that level. At 20 characters deep, you are just copying and pasting large sections of existing code and language, not generating entirely unseen samples. You can do a similar thing with RNNs by training them on only one document: they will be able to reproduce that document exactly, but nothing else.
To properly compare with a Markov model, you'd need to first tune it so it doesn't overfit. That is, when it's looking at an entirely unseen document, its guess of what the next character should be is most likely to be correct. The best setting for that is probably only 3-5 characters, not 20. And when you generate from that, the output will be much less legible. (And even that's kind of cheating, since Markov models can't give any prediction for sequences they've never seen before.)
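To make this concrete, here's a minimal sketch of a character-level Markov generator (the function names `train_markov` and `generate` are mine, purely for illustration). The `order` parameter is the context length under discussion: at order 20 on a modest corpus, almost every context has a unique continuation, so sampling just replays the training text; at order 3-5 there are many plausible continuations, and the samples are genuinely novel but much less legible.

```python
import random
from collections import defaultdict, Counter

def train_markov(text, order):
    """Count next-character frequencies for every context of length `order`."""
    model = defaultdict(Counter)
    for i in range(len(text) - order):
        context = text[i:i + order]
        model[context][text[i + order]] += 1
    return model

def generate(model, order, length, seed=None):
    """Sample characters one at a time. `seed` should be `order` characters
    long; with a high order on a small corpus, most contexts have a single
    continuation, so this degenerates into replaying the training text."""
    context = seed if seed is not None else random.choice(list(model))
    out = context
    for _ in range(length):
        counts = model.get(out[-order:])
        if not counts:  # unseen context: a plain look-up-table model has no prediction
            break
        chars, weights = zip(*counts.items())
        out += random.choices(chars, weights=weights)[0]
    return out
```

Note the `break` on an unseen context: that is exactly the "can't give any prediction for sequences it's never seen" problem, which smoothing is meant to paper over.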
Generating samples is just a way to see what patterns the RNN has learned. And while it's far from perfect, it's still pretty impressive. It's learned a lot about syntax, a lot about variable names, a lot about common programming idioms, and it's even learned some English from just code comments.
The best setting for that is probably only 3-5 characters, not 20.
In NLP applications where Markov language models are used, such as speech recognition and machine translation, the typical setting is 3 to 5 words. 20 characters correspond to about 4 English words, which is in this range.
Anyway, I agree that in this case the order-20 Markov model seems to overfit (Googling some lines from the snippets in the post often locates them in an original source file, which doesn't happen as often with the RNN snippets). This may be due to the lack of regularization ("smoothing") in the probability estimation and the relatively small size of the training corpus: 474 MB versus the >10 GB corpora which are typically used in NLP applications. Neural networks need lots of data, but still less than plain look-up tables.
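The "smoothing" mentioned here can be as simple as add-k ("Laplace") estimation; here is a hypothetical sketch (the function name and interface are mine, not from any particular NLP toolkit):

```python
from collections import Counter

def smoothed_prob(model, context, char, vocab_size, k=1.0):
    """Add-k smoothed estimate of P(char | context).
    Unlike the raw counts, this assigns non-zero probability to
    continuations (and contexts) never seen in training."""
    counts = model.get(context, Counter())
    total = sum(counts.values())
    return (counts[char] + k) / (total + k * vocab_size)
```

With k=1 and a 4-character vocabulary, a context seen 4 times with 3 occurrences of 'c' gives P('c'|context) = (3+1)/(4+4) = 0.5, while a completely unseen context falls back to the uniform 1/4, so the model never assigns zero probability to the next character.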
At least there was an interesting part reminiscent of Eliezer's Universal Fire:
I think most procedurally generated games aren't that deeply interconnected with regard to their laws of physics.
This is a press release, though; lots of games were advertised with similar claims that didn't live up to expectations when you actually played them.
The reason is that designing a universe with simple and elegant physical laws sounds cool on paper, but it is very hard to do if you want to set an actually playable game in it, since most combinations of laws, parameters, and initial conditions yield uninteresting "pathological" states. In fact, this also applies to the laws of physics of our universe, which is why some people use the "fine tuning" argument to argue for creationism or for multiple universes.
I'm not an expert game programmer, but if I understand correctly, in practice these things use lots of heuristics and hacks to make them work.