Comment author: Luke_A_Somers 03 October 2016 12:01:11AM 3 points [-]

It might help, though - if you suddenly stop applying the magnetic fields, then it might freeze more abruptly than if you simply lower the temperature. That could reduce the extent of crystallization and thus damage.

Comment author: WhySpace 05 October 2016 06:47:28PM 1 point [-]

Precisely. Normally, vitreous H2O (the glass phase of ice) is produced by one of two methods:

  1. Pouring liquid H2O onto a highly conductive heat sink cooled to liquid nitrogen temperatures (i.e., a block or sheet of copper in contact with LN2)

  2. Taking a block of ice and compressing it at low temperatures.

The first method only works for thin sheets of ice, or creates a thin vitreous layer on the outside of a larger water-filled object. The second method allows one of the normal phases of ice to form, and then converts it to vitreous ice.

However, if we could supercool large volumes of water far enough without spontaneous crystallization, it might be possible to choose which phase of ice forms by deliberately nucleating that phase. If turning off the magnetic field doesn’t cause freezing fast enough to vitrify, maybe a sufficiently sharp ultrasonic pulse could disrupt the metastable liquid state fast enough? Similarly, I’d be curious whether a thermoacoustic heat pump could remove heat fast enough to vitrify the water without completely shredding everything nearby.

On a related note, I wonder if it would be possible to suppress the less dense phases of ice (which expand more, and therefore cause more damage) just by increasing the ambient pressure during freezing? Method #2 is a crystalline-solid-to-vitreous-solid phase change, but there's no reason the same approach wouldn't work for a liquid-to-vitreous-solid phase change. It looks like it's done at 1,600-5,000 atmospheres of pressure, but that might just be to speed up the rate of transition.

The deep diving record is equivalent to 701 meters of depth, which works out to roughly 68 atmospheres of pressure. However, most of the effects have to do with respiration, such as the lungs' ability to remove CO2 as it builds up in the blood. Nitrogen narcosis affects judgment a bit like alcohol, but this might not matter for cryonics. If it does, we could always use a liquid or a gas like helium, which has effectively zero lipid solubility.
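For rough intuition, the depth-to-pressure conversion above is easy to check with a back-of-envelope calculation. A sketch assuming fresh-water density (seawater would give a slightly higher figure):

```python
# Absolute pressure at depth: 1 atm at the surface plus rho * g * h hydrostatic.
RHO = 1000.0       # kg/m^3, fresh-water density (assumption; seawater is ~1025)
G = 9.81           # m/s^2
ATM_PA = 101325.0  # pascals per atmosphere

def pressure_atm(depth_m: float) -> float:
    """Absolute ambient pressure at a given depth, in atmospheres."""
    return 1.0 + RHO * G * depth_m / ATM_PA

print(round(pressure_atm(701)))  # roughly matches the ~68 atm figure above
```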

Is the reason this isn’t done cost, or something else? From a material science perspective, pressure seems like the obvious solution to fight expansion during crystallization. Working with nature is much easier than messing with thermodynamically unfavorable solutions.

Comment author: Fluttershy 04 October 2016 01:43:49AM *  3 points [-]

I'm sorry! Um, it probably doesn't help that much of the relevant info hasn't been published yet; this patent is the best description that will be publicly available until the inventors get more funding. From the patent:

By replacing the volume of the vasculature (from 5 to 10 percent of the volume of tissues, organs, or whole organisms) with a gas, the vasculature itself becomes a “crush space” that allows stresses to be relieved by plastic deformation at a very small scale. This reduces the domain size of fracturing...

So, pumping the organ full of cool gas (not necessarily oxygen) is done for reasons of cooling the entire tissue at the same time, as well as to prevent fracturing, rather than for biological reasons.

ETA: To answer your last question, persufflation would be done on both cooling and rewarming.

Comment author: WhySpace 04 October 2016 08:51:45PM 2 points [-]

Thanks!

Comment author: Fluttershy 02 October 2016 01:09:38AM 4 points [-]

OTOH it's plausible they don't have much compelling evidence mainly because they were resource-constrained. I'm still not expecting this to go anywhere, though.

Whole kidneys can already be stored and brought back up from liquid nitrogen temps via persufflation well enough to properly filter waste and produce urine, and possibly well enough to be transplanted (research pending), though this may or may not go anywhere, depending on the funding environment.

Comment author: WhySpace 03 October 2016 03:42:46PM *  4 points [-]

persufflation

That was a mild pain to google, so I'm leaving what I dug up here so others don't have to duplicate the effort.

Persufflation is perfusion with gaseous oxygen. Perfusion is when fluid going to an organ passes through the lymphatic system or blood vessels to get there.

If I'm reading this correctly, there's no thermodynamic reason to pump the organ full of oxygen gas, but only a biological one. Cells need less oxygen when they're on ice for an organ transplant, but they still consume O2. If this isn't being delivered via blood flow, another source is needed.

I take it that the persufflation is to help with recovering kidneys from liquid nitrogen temperatures, and not in getting there without damage?


Comment author: WhySpace 02 October 2016 05:32:18PM *  2 points [-]

As a side note, this might also be interesting, purely from a utilitarian standpoint. If insect suffering matters, that would completely dwarf all human moral weight, since there are 10^18 of them but only 10^9 of us.

However, perhaps we don't care morally about animals which can't pass the mirror test, on the assumption that this means they have no self-image, and therefore no consciousness. They could feel pain and other stimuli, but there would be no internal observer to notice their own suffering.

If that's the case, animal welfare might still dominate over human welfare, but by a smaller margin. Doing what I described in the previous comment would let us estimate the value of future life in general, if we can determine to within an order of magnitude or so how much we value animals with various traits. This is critical for questions like whether terraforming Mars is net positive or net negative.
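The population arithmetic can be made explicit. A minimal sketch, where the per-insect moral weight is a free parameter for illustration rather than anything we actually know:

```python
# Compare aggregate moral weight of insects vs. humans for a few assumed
# per-insect weights (per-human weight normalized to 1).
HUMANS = 1e9
INSECTS = 1e18

def insect_to_human_ratio(insect_weight: float) -> float:
    """Ratio of total insect moral weight to total human moral weight."""
    return (INSECTS * insect_weight) / (HUMANS * 1.0)

for w in (1.0, 1e-6, 1e-12):
    print(f"per-insect weight {w:g}: ratio = {insect_to_human_ratio(w):g}")
# Insects dominate unless the per-insect weight falls below about 1e-9.
```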

Comment author: turchin 02 October 2016 12:04:33AM 3 points [-]

Trent's article even mentions possible species of dinosaurs that might have been capable of an intelligence explosion: http://www.strangehorizons.com/2009/20090713/trent-a.shtml

Does that mean we could find really interesting (and dangerous) things during excavations in Antarctica?

Comment author: WhySpace 02 October 2016 05:18:34PM 1 point [-]

The bit about EQ was particularly interesting. (Encephalization Quotient is the ratio of an animal's actual brain size to the brain size predicted for an animal of its body size. It serves as a stand-in for IQ in extinct species. Humans have an EQ between 5 and 8.)

It should be possible to examine current organisms, and classify them based on EQ and whether they have opposable thumbs. For each category, we could look at what fraction display abilities like tool use, communication, vocabulary size, and passing the mirror self-recognition test.

For example, perhaps the average EQ=1 animal without opposable thumbs has a vocabulary of 2 signals (alarm cries and mating calls) and doesn't pass the mirror test. On the other hand, maybe half of EQ=4 animals with opposable thumbs display rudimentary tool use.

The actual range of abilities would give us our probability distributions for speculating about extinct animals. After some math to account for gaps in the fossil record 65+ million years ago, we should be able to estimate the probability that certain dinosaurs could use tools or pass the mirror test.
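The classification step could be prototyped straightforwardly. A toy sketch with entirely made-up species data, just to show the bookkeeping:

```python
# Bin present-day species by (rounded EQ, opposable thumbs), then tally what
# fraction of each bin shows a given ability. All data below are placeholders,
# not real measurements.
from collections import defaultdict

species = [
    # (name, EQ, has_thumbs, passes_mirror_test)
    ("species_a", 1.0, False, False),
    ("species_b", 1.2, False, False),
    ("species_c", 4.1, True,  True),
    ("species_d", 3.9, True,  False),
]

counts = defaultdict(lambda: [0, 0])  # (EQ bin, thumbs) -> [passes, total]
for _, eq, thumbs, passes in species:
    key = (round(eq), thumbs)
    counts[key][1] += 1
    counts[key][0] += int(passes)

for key, (p, n) in sorted(counts.items()):
    print(key, p / n)  # fraction passing the mirror test per bin
```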

The hard part is determining the probability of developing civilization, given that a species displays certain marks of intelligence. We only have 1 data point, and the anthropic principle makes it almost useless.

In response to Linkposts now live!
Comment author: WhySpace 28 September 2016 05:06:25PM 9 points [-]

Awesome! This strikes me as a very good thing, especially with your suggested social norms. I have 3 additional suggestions, though:

  1. Add a social norm where commenters make short summaries, or quote a couple of sentences of new info, without the fluff. The title of the link already serves much the same purpose, and gives readers enough info to decide whether or not to click through. This is standard practice on the more intellectual subreddits, since their readers already have the background context and knowledge that 90% of the article is spent explaining.

  2. Add a social norm where the best comments get linked to. I enjoy Yvain's SSC posts, and the comments section often contains some gems, but digging through all of them to find the gems is tedious. I intend to quote or rephrase gems when I find them, and link to them in comments here.

  3. Maybe we should have subreddits on LW. I'm not sure about this one. Tags serve some of the same purposes, so perhaps what would be ideal would be to subscribe and unsubscribe from tags you're interested in. However, just copying the Reddit code for subreddits would be simpler. It would divide up the community though, so probably not desirable while we're still small.

Comment author: WhySpace 27 September 2016 02:06:25AM *  10 points [-]

Happy Petrov Day!

Today is September 26th, Petrov Day, celebrated to honor the deed of Stanislav Yevgrafovich Petrov on September 26th, 1983. Wherever you are, whatever you're doing, take a minute to not destroy the world.

  • 2007 - We started celebrating with the declaration above, followed by a brief description of the incident. In short, one man decided to ignore procedure and report an early warning system trigger as a false alarm rather than a nuclear attack.

  • 2011 - Discussion

  • 2012 - Eneasz put together an image

  • 2013 - Discussion

  • 2014 - jimrandomh shared a program guide describing how their rationalist group celebrates the occasion. "The purpose of the ritual is to make catastrophic and existential risk emotionally salient, by putting it into historical context and providing positive and negative examples of how it has been handled."

  • 2015 - Discussion

Comment author: WhySpace 22 September 2016 12:10:47AM 6 points [-]

Truth is not what you want it to be;

it is what it is,

and you must bend to its power or live a lie.

- Miyamoto Musashi

Comment author: DataPacRat 19 September 2016 11:55:42PM 1 point [-]

The intended meaning, which it seems I will need to rephrase to clarify: "If you are experimenting with uploading, and can meet these minimal common-sense standards, then I'm willing to volunteer ahead of time be your guinea pig. If you can't meet them, then I'd rather stay frozen a little longer. Just FYI."

Comment author: WhySpace 21 September 2016 04:32:05AM *  1 point [-]

This is potentially quite important.

MIRI, OpenAI, FHI, etc. are focusing largely on artificial paths to superintelligence, since that route leads to the value loading problem. While this is likely the biggest concern in terms of expected utility, neuron-level simulations of minds may provide another route. This might actually be where the bulk of the probability of superintelligence resides, even if the bulk of the expected utility lies in preventing things like paperclip maximizers.

Robin Hanson has some persuasive arguments that uploading may actually occur years before artificial intelligence becomes possible. (See The Age of Em.) If this is the case, then it may be highly valuable to have the first uploads be very familiar with the risks of the alignment problem. This could prevent 2 paths to misaligned AI:

  1. Uploads running at faster subjective speeds greatly accelerating the advent of true AI, by developing it themselves. Imagine a thousand copies of the smartest AI researcher running at 1000x human speed, collaborating with him or herself on the first AI.

  2. The uploads themselves are likely to be significantly modifiable. Since it would always be possible to be reset to backup, it becomes much easier to experiment with someone's mind. Even if we start out only knowing how neurons are connected, but not much about how they function, we may quickly develop the ability to massively modify our own minds. If we mess with our utility functions, whether intentionally or unintentionally, this starts to raise concerns like AI alignment and value drift.
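The speedup in path 1 compounds multiplicatively. A trivial sketch of the arithmetic, using the example figures above:

```python
# Subjective researcher-years produced per calendar year by N copies of a
# researcher, each running at speedup S.
copies = 1000
speedup = 1000  # subjective time per unit of wall-clock time, per copy

subjective_years_per_calendar_year = copies * speedup
print(subjective_years_per_calendar_year)  # 1000000
```

A million subjective researcher-years per calendar year is why even a modest head start for alignment-aware uploads could matter.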

The obvious solution is to hand Bostrom's Superintelligence out like candy to cryonicists. Maybe even get Alcor to try and revive FAI researchers first. However, given a first-in-last-out policy, this may not be as important for us as for future generations. We obviously have a lot of time to sort this out, so this is likely a low priority this decade/century.

Comment author: WhySpace 20 September 2016 08:48:44PM 3 points [-]

Why this works is actually more interesting, for me, than that it works. There seem to be 2 distinct use cases: signaling to others, and self-signaling (ie, identity forming).

When signaling to others, we might not want to fully identify with some tribe if we don't fully agree with all the associations of the label. "American", for instance, is a dangling node, like "blegg" or "rube". If you want to associate yourself with Bud Light, football, Silicon Valley, and bald eagles, that's a great word to use to describe yourself. If not, then an adjective like "weird" can specify that not all of these necessarily apply.

However, we could take this a step further. If I call myself a "nerdy American", then your brain jumps to all the associations the two words have in common, and downweights the ones that are unique to "American". Perhaps the concepts of "American ingenuity" or "Silicon Valley" come to mind, or maybe just D&D. It's not quite a boolean operation on the two words, but more of a Bayesian update strengthening some associations and weakening others.
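One way to make the "not quite boolean" intuition concrete is to treat each label as a bag of association strengths and combine them multiplicatively, so shared associations are reinforced and unshared ones are discounted. All traits and numbers here are invented purely for illustration:

```python
# Each label maps traits to association strengths in [0, 1] (made-up values).
american = {"bald eagles": 0.9, "Silicon Valley": 0.5, "ingenuity": 0.6}
nerdy = {"D&D": 0.9, "Silicon Valley": 0.8, "ingenuity": 0.7}

def combine(a, b):
    """Multiply strengths; traits only one label has get a weak default."""
    default = 0.1  # weak prior for associations a label lacks (assumption)
    return {k: a.get(k, default) * b.get(k, default) for k in set(a) | set(b)}

for trait, strength in sorted(combine(american, nerdy).items(),
                              key=lambda kv: -kv[1]):
    print(trait, round(strength, 2))
```

Shared traits like "Silicon Valley" and "ingenuity" end up on top, while "bald eagles" and "D&D" are discounted, which is roughly the behavior described above.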

But the really interesting thing is when we use this to shape identity. To my knowledge, all people form some sort of identity, largely by associating themselves with certain traits or groups of traits, just as we associate various clusters of traits with others. We also seem to distance our self-image from other things. (A skeptic, for example, may strongly distance themselves from anti-vaxxers.) So, I don't really see ChristianKl's worry that "disassociation" might be damaging here. Dissociating from everything might be harmful, but these are really micro-dissociations, if they are dissociations at all. They can be used to remove some associations, add others, or do a mix of the two. Perhaps it would be useful to make a list of adjectives for these various purposes.

I'm not even sure it's psychologically possible to associate one's self with everything, and not have any of these "micro-dissociations". Perhaps that would lead to huge amounts of empathy for everyone, but at the very least we probably don't want to identify with serial killers too strongly. Perhaps I have my biology wrong here, but getting tiny bursts of neurotransmitters every time we think of or hear a label is probably how our System 1s actually work, and deliberately strengthening or weakening certain associations is probably a big part of how we train our System 1s. I'm being highly speculative here though, so if anyone knows better than I do, I would appreciate a correction.
