Update on Kim Suozzi (cancer patient in want of cryonics)

45 Post author: ahartell 22 January 2013 09:15AM

Kim Suozzi was a neuroscience student with brain cancer who wanted to be cryonically preserved but lacked the funds. She appealed to reddit and a foundation was set up, called the Society for Venturism.  Enough money was raised, and when she died on January 17th, she was preserved by Alcor.

I wasn't sure if I should post about this, but I was glad to see that enough money was raised and it was discussed on LessWrong here, here, and here.

 

Source

 

Edit:  It looks like Alcor actually worked with her to lower the costs, and waived some of the fees.

Edit 2:  The Society for Venturism has been around for a while, and wasn't set up just for her.

Comments (61)

Comment author: Error 22 January 2013 02:08:45PM 13 points [-]

Wow.

I don't have the confidence in the advisability of cryonics that most here seem to; but I still want to applaud this. Well done, Internet, and I dearly hope she wakes one day.

Comment author: advancedatheist 22 January 2013 02:52:30PM 8 points [-]

I'd like to thank LessWrongers who donated for Kim's suspension. I hope you didn't donate your money in vain.

May science speed you, Miss Suozzi.

Comment author: Viliam_Bur 22 January 2013 08:51:48PM 6 points [-]

These days, publicity can make you literally immortal!

Comment author: gwern 22 January 2013 09:00:23PM 2 points [-]

Emphasis on 'can', I suppose, given the absence of revivals...

Comment author: wedrifid 22 January 2013 02:42:49PM 6 points [-]

Enough money was raised, and when she died on January 17th, she was preserved by Alcor.

Alcor? That's curious. Given the critical lack of funds I would have expected Cryonics Institute to be used. It seems like enough money and then some was raised!

Comment author: saturn 22 January 2013 10:19:06PM 10 points [-]

Given what I've heard about CI's quality control, I don't blame her for trying to raise enough money for Alcor.

Comment author: ModusPonies 04 February 2013 04:08:18PM 0 points [-]

What have you heard about CI's quality control, and do you happen to have the sources conveniently available? (I'm making the decision between CI and Alcor.)

Comment author: saturn 06 February 2013 11:25:40PM *  3 points [-]

I don't have any special insight on this subject, only what I've picked up from reading LW and occasionally talking about it on IRC. Many sources are linked from the comments in this thread (the comments are much more informative than the original post). To sum up, it seems that both CI and Alcor are lamentably bad, but CI is considerably worse.

Comment author: Morendil 23 January 2013 08:06:34AM 1 point [-]

Read the comments here, they're interesting.

Comment author: advancedatheist 22 January 2013 03:40:41PM 11 points [-]

In 1992 I attended a dinner held by Alcor's people to commemorate the 25th anniversary of the cryosuspension of James Bedford, who has managed to stay frozen after all these years and currently resides at Alcor.

Mike Darwin gave one of his characteristically passionate and learned speeches at this event, where he invoked Joseph Campbell's ideas popular at the time about the Hero's Journey. As I recall it, Mike said that James Bedford, an ordinary man, went on a fantastic journey across time to an unknown future, in effect becoming a new kind of mythic hero. Some day, Mike said, Bedford the myth might contribute to reconstituting Bedford the man.

Bedford hasn't exactly become a household name, but then his suspension happened before most of today's Americans were born. Kim Suozzi's struggle and cryosuspension, by contrast, has happened in our awareness and in a different media environment. She may have the potential to become a kind of mythic heroine for the millennial generation. And I would certainly like to see Suozzi the myth become Suozzi the healthy, whole young woman again.

We just need some poets to tell this myth in compelling ways. Stephenie Meyer has demonstrated that a market exists for stories about ordinary mortal women of Kim's generation who become "reverse Arwens" by rejecting aging and other human limitations.

Steven B. Harris, MD, also wrote about the repurposing of mythological tropes for cryonics purposes years ago in his essay, "Cryonics And The Resurrection Of The Mythic Hero," which you can read by scrolling down on this page:

http://www.alcor.org/cryonics/cryonics8809.txt

Comment author: Konkvistador 26 January 2013 09:32:53PM 4 points [-]

I am so incredibly glad that she made it.

Comment author: advancedatheist 23 January 2013 04:19:32AM 3 points [-]

I've thought of a way of reconciling Kim's possible revival with Abrahamic afterlife beliefs. Eternity doesn't mean endless time like we experience it. Many theologians argue that in eternity, our assumptions and experiences about time don't apply. Kim's soul, whatever that means, could very well exist in eternity in whatever place god assigns it (preferably a tolerable one if god considers her an "anonymous christian," despite her agnosticism), yet this soul could also inhabit the realm of time in Kim's revived and restored body (and she'll literally need a body because she got a neurosuspension) in Future World.

In other words, each outcome doesn't necessarily have to exclude the other. I work with a woman who converted to Orthodox Christianity and has a side business selling icons, and apparently in that tradition a "mystery" doesn't have the meaning of "puzzle" which the human mind can potentially solve and understand, as in our use of the phrase "murder mysteries." Orthodox Christians believe that not only does the human mind not understand god's mysteries; the human mind simply cannot understand them. Instead the Christian has to accept the mystery as a revelation of god's transcendent sovereignty over creation. Kim's revival might find some elbow room in this understanding of "mystery" for certain kinds of religionists who might otherwise consider her demon-possessed or a zombie.

Reference: http://en.wikipedia.org/wiki/Anonymous_Christian

Comment author: CarlShulman 23 January 2013 04:32:01AM 5 points [-]

Mark, I'm curious. I gather you are a supporter of cryonics who is very critical of most proposed routes to reviving or reconstructing cryopreserved people. How would you hope to be revived if you are cryopreserved? And what probabilities would you input into Jeff Kaufman's probability spreadsheet (adding your answers there would be very interesting, if you'd like)?

Comment author: sdr 22 January 2013 02:00:20PM 4 points [-]

Farewell, and see you on the other side!

Comment author: Nic_Smith 23 January 2013 03:05:36AM *  2 points [-]

A small correction: The Society for Venturism has been around for quite a while, although I have a vague impression they've been more active in the last year than in the past. I had a look at their site to see when they were founded (1986), and noticed they're currently raising funds for someone else, Aaron Winborn.

Comment author: ahartell 23 January 2013 06:32:06AM 0 points [-]

Thanks, updated.

Comment author: James_Miller 22 January 2013 03:44:56PM 4 points [-]

I interviewed Kim for a potential article for humanity+ magazine about her quest to get charitable funds to pay for cryonics. The article was never published because very shortly after my interview Alcor decided to fully fund Kim. Here is part of the article:

Kim Suozzi’s Cold and Lonely Journey To Outrun Brain Cancer

Like any sensible girl diagnosed with a fatal disease, 23-year-old Kim Suozzi is making arrangements to be cryogenically preserved. Kim has brain cancer, and although she's participating in a clinical trial for an experimental treatment, she told me that without cryonics her chance of survival would be basically zero.

As hard as it should be to believe, there are actually some people in Kim's position who forgo cryonics to accept certain death even though these people don't want to die. There are cancer patients who would spend every penny they have plus a bunch of taxpayer dollars, and then (if it were necessary) crawl across broken glass for a traditional treatment that would give them only a few percentage points chance of survival, but who have no interest in cryonics. Although I don't think all of these people should be forced into cryonics, at the very least they should be compelled to take a sanity test to determine if they're capable of making rational medical decisions. After all, the norm in Western society is to treat a preference for suicide as a sign of mental illness.

Of course, in reality it's people who sign up for cryonics, such as this author, who are considered mentally suspect. Fewer than 3,000 people have ever registered for cryonics despite the fact that over a hundred million have surely heard of it. We consider death, especially when it strikes someone who should be only in the first third of her life, a horrible, heartbreaking tragedy. Yet cryonics, which offers a means of escaping or at least postponing death, is something almost no one opts for, making Kim Suozzi a socially brave pioneer rather than an ordinary cancer patient.

If you think that, as futurist Ray Kurzweil writes, the Singularity is near (Kurzweil estimates 2045) then it won't take too long before cryonics could be used to revive you. Kurzweil has signed up for cryonics, and Kim told me that the plausibility of Kurzweil's analysis is a big part of why she is interested in cryonics.

Comment author: CarlShulman 22 January 2013 07:56:35PM 10 points [-]

If you think that, as futurist Ray Kurzweil writes, the Singularity is near (Kurzweil estimates 2045)

James, you've seen this study of past AI predictions, this independent grading of Kurzweil's predictions, and the stagnation of computer serial speeds and neuroimaging resolution, right? Hans Moravec has already made several predictions of AI progress based on hardware progress that have been falsified too.

Comment author: Kawoomba 22 January 2013 09:38:30PM 2 points [-]

Do you have a Singularity ETA, and if so, may I ask what it is?

Comment author: CarlShulman 22 January 2013 10:24:17PM 12 points [-]

My median timeline estimate for loosely human-level AI (i.e. it is technically feasible to build AI that can do most anything a human can do, although AI performance would be superhuman in many areas, as it already is), conditional on no catastrophes stopping forward progress would be near the end of the century. This is not a very stable or solid estimate, and I would update a lot on seeing the views of folks who had studied the issues and accumulated strong track records in prediction exercises like DAGGRE focused on technological forecasting and other relevant areas, among many other things.

Comment author: Eliezer_Yudkowsky 23 January 2013 12:28:23AM 15 points [-]

Median doom time toward the end of the century? That seems enormously optimistic. If I believed this I'd breathe a huge sigh of relief, upgrade my cryonics coverage, spend almost all current time and funding trying to launch CFAR, and write a whole lot more about the importance of avoiding biocatastrophes and moderating global warming and so on. I might still work on FAI due to comparative advantage, but I'd be writing mostly with an eye to my successors. But it just doesn't seem like ninety more years out is a reasonable median estimate. I'd expect bloody uploads before 2100.

Carl, ???

Comment author: CarlShulman 23 January 2013 02:06:37AM 11 points [-]

AI has had 60 years or more, depending on when you start counting, with (the price-performance cognate of) Moore's law running through that time: the progress we've seen reflects both hardware and software innovation. Hardware progress probably slows dramatically this century, although neuroscience knowledge should get better.

Looking at a lot of software improvement curves for specific domains (games, speech, vision, navigation) big innovations don't seem to be coming much faster than they used to, and trend projection suggests decades to reach human performance on many tasks which seem far from AGI-complete. Technologies like solar panels or electric vehicles can take many decades to become useful enough to compete with rivals.

Intermediate AI progress has fallen short of Kurzweilian predictions, although it's still decent. Among AI people AGI before the middle of the century is a view seen mainly in groups selected for AGI enthusiasm, like the folk at the AGI conference, but less so among the broader AI community. And there's Robin's progress metric (although it still hasn't been done for other fields, especially the ones making the most progress).

Are we halfway there, assuming we can manage to keep up this much progress (when progress in many other technological fields is slowing)?

Intelligence enhancement for researchers, uploads, and other boosts could help a lot, but IA will probably be a long time coming (biology is slow: FDA for drugs, maturation for genetic engineering) and uploads are very demanding of hardware technology and require much better brain models (correlated with AI difficulty).

I didn't say 87 years, but closer to 87 than 32 (or 16, for Kurzweil's prediction of a Turing-Test passing AI).

Comment author: Kaj_Sotala 23 January 2013 08:12:12AM *  11 points [-]

The main thing that makes me suspect we might have AGI before 2100 are neuroprostheses: in addition to bionic eyes for humans, we've got working implants that replicate parts of hippocampal and cerebellar function for rats. At least one computational neuroscientist that I know of has told me that we could replicate the human cerebellum as well pretty soon, but the hard problem lies in finding suitable connections that could be used to interface the brain with computers well enough. He was also willing to go on record on neocortex prostheses not being that far away.

If we did have neural prostheses - the installation of which might end up becoming a routine medical procedure - they could no doubt be set to also record any surrounding brain activity, thus helping reverse engineer the parts we don't have figured out yet. Privacy issues might limit the extent to which that was done with humans, but less so for animals. X years to neuroprosthesis-driven cat uploads and then Y years to figuring out their neural algorithms and then creating better versions of those to get more-or-less neuromorphic AGIs.

The main crucial variables for estimating X would be the ability to manufacture sufficiently small chips to replace brain function with, and the ability to reliably interface them with the brain without risk of rejection or infection. I don't know how the latter is currently projected to develop.

Comment author: Dreaded_Anomaly 25 January 2013 03:53:19AM 2 points [-]

The main thing that makes me suspect we might have AGI before 2100 are neuroprostheses: in addition to bionic eyes for humans, we've got working implants that replicate parts of hippocampal and cerebellar function for rats.

The hippocampal implant has been extended to monkeys.

Comment author: wedrifid 25 January 2013 08:20:19AM 1 point [-]

The hippocampal implant has been extended to monkeys.

I want one!

Comment author: Kaj_Sotala 25 January 2013 07:09:16AM 0 points [-]

Thanks, I'd missed that.

Comment author: Eliezer_Yudkowsky 23 January 2013 03:07:45AM 11 points [-]

This is a rather important point. How do we get more info on it? You're the first halfway-sane person I've ever heard put the median at 2100.

From my perspective if you told me that in actual fact AGI had been developed in 2120 (a bit of a ways after your median) despite the lack of any great catastrophes, I would update in the direction of believing all of the following:

  • Rogue biotech hadn't actually been a danger. You didn't make any strong predictions about this because it was outside your conditional; I don't know much about it either. Basically I'm just noting it down. Also, no total global worse-than-Greece collapse, no nuclear-proliferated war brought on by global warming, etc.
  • Moore's Law had come to a nearly complete permanent halt or slowdown no more than 10-20 years after 2013.
  • AI academia was Great Stagnating (this is relatively easy to believe)
  • Machine learning techniques that actually had non-stagnat-y people pushing on them for stock-market trading also plateaued, or weren't published, or never AGI-generalized.
  • All the Foresight people were really really optimistic about nanotech, nobody cracked protein folding, or that field Great Stagnated somehow... the nanotech-related news I see, especially about protein folding, doesn't seem to square with this, but perhaps the press releases are exaggerated.
  • Large updates in the direction of global economic slowdown, patent wars kill innovation everywhere, corruption of universities even worse than we think, even fewer smart people try to go into real tech innovation, etcetera.
  • Biotech stays regulation-locked forever - not too hard to believe.
  • Anders Sandberg is wrong about basically everything to do with uploading.

It seems like I'd have to execute a lot of updates. How do we resolve this?

Comment author: CarlShulman 23 January 2013 04:06:08AM *  11 points [-]

Moore's Law had come to a nearly complete permanent halt or slowdown no more than 10-20 years after 2013

Well atom-size features are scheduled to come along on that time-scale, believed to mark the end of scaling feature size downwards. That has been an essential part of Moore's law all along the way. Without it, one has to instead do things like use more efficient materials at the same size, new architectural designs, new cooling, etc. That's a big change in the underlying mechanisms of electronics improvement, and a pretty reasonable place for the trend to go awry, although it also wouldn't be surprising if it kept going for some time longer.

AI academia was Great Stagnating (this is relatively easy to believe)

The so-called "Great Stagnation" isn't actually a stagnation, it's mainly just compounding growth at a slower rate. How much of the remaining distance to AGI do you think was covered 2002-2012? 1992-2002?

All the Foresight people were really really optimistic about nanotech

Haven't they been so far?

In any case, nanotechnology can't shrink feature sizes below atomic scale, and that's already coming up via conventional technology. Also, if the world is one where computation is energy-limited, denser computers that use more energy in a smaller space aren't obviously that helpful.

perhaps the press releases are exaggerated

Could you give some examples of what you had in mind?

Large updates in the direction of global economic slowdown, patent wars kill innovation everywhere, corruption of universities even worse than we think, even fewer smart people try to go into real tech innovation, etc.

Well, there is demographic decline: rich country populations are shrinking. China is shrinking even faster, although bringing in its youth into the innovation sectors may help a lot.

Biotech stays regulation-locked forever - not too hard to believe.

Say biotech genetic engineering methods are developed in the next 10-20 years, heavily implemented 10 years later, and the kids hit their productive prime 20 years after that. Then they go faster, but how much faster? That's a fast biotech trajectory to enhanced intelligence, but the fruit mostly fall in the last quarter of the century.

Anders Sandberg is wrong about basically everything to do with uploading.

See 15:30 of this talk, Anders' Monte Carlo simulation (assumptions debatable, obviously) is a wide curve with a center around 2075. Separately Anders expresses nontrivial uncertainty about the brain model/cognitive neuroscience step, setting aside the views of the non-Anders population.

You're the first halfway-sane person I've ever heard put the median at 2100.

vs

I didn't say 87 years, but closer to 87 than 32 (or 16, for Kurzweil's prediction of a Turing-Test passing AI).

I said "near the end of the century" contrasted to a prediction of intelligence explosion in 2045.

Comment author: Baughn 23 January 2013 09:11:23PM -1 points [-]

press releases

Here's one: http://phys.org/news/2012-08-d-wave-quantum-method-protein-problem.html

That doesn't apply to large proteins yet, but it doesn't make me optimistic about the nanotech timeline. (Which is to say, it makes me update in favor of faster R&D.)

Comment author: Eliezer_Yudkowsky 24 January 2013 08:57:21PM 3 points [-]

Nobody believes in D-Wave.

Comment author: CarlShulman 23 January 2013 09:38:53PM *  3 points [-]

http://blogs.nature.com/news/2012/08/d-wave-quantum-computer-solves-protein-folding-problem.html

It’s also worth pointing that conventional computers could already solve these particular protein folding problems.

You have a computer doing something we could already do, but less efficiently than existing methods, which have not been impressively useful themselves?

ETA: https://plus.google.com/103530621949492999968/posts/U11X8sec1pU

Comment author: JoshuaFox 24 January 2013 08:59:52AM 7 points [-]

This is puzzling.

I had thought that the question of AI timelines was so central that the core SI research community would have long since Aumannated and come to a consensus probability distribution.

Anyway, good you're doing it now.

Comment author: Eliezer_Yudkowsky 24 January 2013 05:17:56PM 4 points [-]

Maybe I was absent from the office that day? I hadn't heard Carl's 2083 estimate (I recently asked him in person what the actual median was, and he averaged his last several predictions together to get 2083) until now, and it was indeed outside what I thought was our Aumann-range, hence my surprise.

Comment author: ciphergoth 26 January 2013 03:45:50PM 4 points [-]

It seems like the sort of thing people would plan to do on a day you were going to be in the office.

Comment author: CarlShulman 05 February 2013 08:55:29AM 0 points [-]

We had discussed timelines to this effect last year.

Comment author: shminux 23 January 2013 09:07:02PM 4 points [-]

I'm wondering why this is stated as a conjunction. Would a single failure here really result in an early AGI development?

Comment author: Eliezer_Yudkowsky 24 January 2013 09:05:42PM 0 points [-]

BTW regarding Robin's AI progress metric, my reaction is more like Doug's (the first / most upvoted comment).

Comment author: CarlShulman 24 January 2013 09:24:17PM 2 points [-]

I agree with that comment that machine learning has been on a roll, but Robin's reply is important too. We can ask how machine learning shows up in the performance statistics for particular tasks to think about its relative contribution.

Comment author: loup-vaillant 22 January 2013 05:36:12PM *  2 points [-]

when she died

She's clinically dead for sure, but probably not information-theoretically dead. I'd rather use the latter definition.

Anyway, she did successfully raise her odds, so that looks like good news.

Comment author: paper-machine 22 January 2013 07:10:03PM 8 points [-]

At the end of the day, one only corresponds with the clinically living.

Comment author: loup-vaillant 23 January 2013 01:07:07AM *  1 point [-]

Here is how I feel: the odds are not good, I can do close to nothing about it, and I have to wait a lifetime to boot. It sucks, but there's still that small glimmer of hope.

A coffin doesn't feel that way. When I see one, I just want revenge.

(Edit: /retribution/revenge)

Comment author: jkaufman 26 January 2013 08:03:39PM 1 point [-]

By "probably not" do you mean that her odds of being information-theoretically dead are less than 50%? Where would you put them?

Comment author: loup-vaillant 27 January 2013 10:53:59AM *  2 points [-]

I do mean less than 50%. Something below 10%, even. I'm just quite confident that someone who is cryopreserved, especially recently, still contains enough information to be reconstructed. On the other hand, I don't know enough about the physical structure of the human mind to be completely sure. I'd say most of my probability for her being actually information-theoretically dead lies in my ignorance of the subject.

Anyway, that's about a 90% chance of her still being alive. My probability that she will be revived eventually is much lower, of course. I have to account for existential and catastrophic risks, the economic collapse of Alcor, the failure to further our technology… Heck, some religious fanatics may bomb the place for all I know (that one is below 1%).
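The reasoning in this comment — a chain of independent conditional steps, each multiplying down the overall odds — can be sketched in a few lines of Python. All the numbers except the ~90% information-survival figure stated above are illustrative placeholders, not the commenter's actual estimates:

```python
def chance_of_revival(steps):
    """Multiply independent conditional probabilities for each step in the chain."""
    p = 1.0
    for name, prob in steps:
        p *= prob
    return p

# (step, P(step succeeds | previous steps succeeded)) — placeholders except the first
steps = [
    ("information survives cryopreservation", 0.90),  # stated in the comment
    ("no existential or catastrophic risk",   0.70),  # placeholder
    ("Alcor remains solvent indefinitely",    0.80),  # placeholder
    ("revival technology is developed",       0.50),  # placeholder
]

print(f"Overall chance of revival: {chance_of_revival(steps):.3f}")  # 0.252
```

The point of the exercise (as with Jeff Kaufman's spreadsheet mentioned earlier in the thread) is that even generous per-step probabilities compound into a much smaller overall figure.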

Comment author: nigerweiss 22 January 2013 10:25:27PM 1 point [-]

That's got to be close to a best case suspension. I wish her nothing but the best.

Comment author: hankx7787 22 January 2013 02:25:14PM 1 point [-]

We did it!

Comment author: lsparrish 24 January 2013 05:45:16AM 0 points [-]

Some mass media coverage here. Also this video features her (religious) mother explaining the reasoning behind the head-only thing and how she came to terms with it.

Comment author: hankx7787 27 January 2013 04:10:20PM *  0 points [-]

"If god has given us the brains to figure this stuff out, then who's to say what plans he has once we have figured it out?" - Jane Suozzi on cryonics

Comment author: curiousepic 22 January 2013 03:21:49PM 0 points [-]

It will be interesting to read the case report if/when it's posted.