
"Progress"

1 PhilGoetz 04 June 2012 03:51AM

I often hear people speak of democracy as the next, or the final, inevitable stage of human social development.  Its inevitability is usually justified not by describing power relations that result in democracy being a stable attractor, but in terms of morality - democracy is more "enlightened".  I don't see any inevitability to it - China and the Soviet Union manage(d) to maintain large, technologically-advanced nations for a long time without it - but suppose, for the sake of argument, that democracy is the inevitable next stage of human progress.

The May 18 2012 issue of Science has an article on p. 844, "Ancestral hierarchy and conflict", by Christopher Boehm, which, among other things, describes the changes over time of equality among male hominids.  If we add its timeline to recent human history, then here is the history of democracy over time in the evolutionary line leading to humans:

  1. Pre-human male hominids, we infer from observing bonobos and chimpanzees, were dominated by one alpha male per group, who got the best food and most of the females.
  2. Then, in the human lineage, hunter-gatherers developed larger social groups, and the ability to form stronger coalitions against the alpha; and they became more egalitarian.
  3. Then, human social groups even became larger, and it became possible for a central alpha-male chieftain to control a large area; and the groups became less egalitarian.
  4. Then, they became even larger, so that they were too large for a central authority to administer efficiently; and decentralized market-based methods of production led to democracy.  (Or so goes one story.)

There are two points to observe in this data:

  • There is no linear relationship between social complexity and equality.  Steadily increasing social complexity led to more equality, then less, then more.
  • Enlightenment has nothing to do with it - if any theory makes sense, it is that social equality tunes itself to the level that provides maximal social competitive fitness.  Even if we agree that democracy is the most-enlightened political system, this realization says nothing about what the future holds.

I do believe "progress" is a meaningful term.  But there isn't some cosmic niceness built into the universe that makes everything improve monotonically along every dimension at once.

Prediction is hard, especially of medicine

47 gwern 23 December 2011 08:34PM

Summary: medical progress has been much slower than even recently predicted.

In the February and March 1988 issues of Cryonics, Mike Darwin (Wikipedia/LessWrong) and Steve Harris published a two-part article “The Future of Medicine” attempting to forecast the medical state of the art for 2008. Darwin has republished it on the New_Cryonet email list.

Darwin is a pretty savvy forecaster (you may remember him correctly predicting ALCOR’s recent troubles with grandfathering back in 1981 in “The High Cost of Cryonics”/part 2), so given my standing interest in tracking predictions, I read it with great interest; but they still blew most of their predictions, and not the ones we would have preferred them to.

The full essay is ~10k words, so I will excerpt roughly half of it below; feel free to skip to the reactions section and other links.


Complexity: inherent, created, and hidden

8 Swimmer963 14 September 2011 02:33PM

Related to: inferential distance, fun theory sequence.

“The arrow of human history…points towards larger quantities of non-zero-sumness. As history progresses, human beings find themselves playing non-zero-sum games with more and more other human beings. Interdependence expands, and social complexity grows in scope and depth.” (Robert Wright, Nonzero: The Logic of Human Destiny.)

What does it mean for a human society to be more complex? Where does new information come from, and where in the system is it stored? What does it mean for everyday people to live in a simple versus a complex society?

There are certain kinds of complexity that are inherent in the environment: that existed before there were human societies at all, and would go on existing without those societies. Even the simplest human society needs to be able to adapt to these factors in order to survive. For example: climate and weather are necessary features of the planet, and humans still spend huge amounts of resources dealing with changing seasons, droughts, and the extremes of heat and cold. Certain plants grow in certain types of soil, and different animals have different migratory patterns. Even the most basic hunter-gatherer groups needed to store and pass on knowledge of these patterns. 

But even early human societies had a lot more than the minimum amount of knowledge required to live in a particular environment. Cultural complexity, in the form of traditions, conventions, rituals, and social roles, was layered on top of technological complexity, in the form of tools designed for particular purposes. Living in an agricultural society with division of labour and various different social roles required children to learn more than if they had been born to a small hunter-gatherer band. And although everyone in a village might have the same knowledge about the world, it was (probably) no longer possible for all the procedural skills taught and passed on in a given group to be mastered by a single person. (Imagine learning all the skills to be a farmer, carpenter, metalworker, weaver, baker, potter, and probably a half-dozen other things.)

This would have been the real beginning of Robert Wright’s interdependence and non-zero-sum interactions. No individual could possess all of the knowledge/complexity of their society, but every individual would benefit from its existence, at the price of a slightly longer education or apprenticeship than their counterparts in hunter-gatherer groups. The complexity was hidden; a person could wear a robe without knowing how to weave it, and eat from a clay bowl without knowing how to shape it or bake it in a kiln. There was room for that knowledge in other people’s brains. The only downside, other than slightly longer investments in education, was a small increase in inferential distance between individuals.

Writing was the next step. For the first time, a significant amount of knowledge could be stored outside of anyone’s brain. Information could be passed on from one individual, the writer, to a nearly unbounded number of others, the readers. Considering the limits of human working memory, significant mathematical discoveries would have been impossible before there was a form of notation. (Imagine solving polynomial equations without pencil and paper.) And for the first time, knowledge was cumulative. An individual no longer had to spend years mastering a particular, specific skill in an apprenticeship, laboriously passing on any new discoveries one at a time to their own apprentices. The new generation could start where the previous generation had left off. Knowledge could stay alive almost indefinitely in writing, without having to pass through a continuous line of minds. (Without writing, the scientific and mathematical knowledge of the ancient Greeks would have died with their civilization, rather than being available for later societies to rediscover.) Conditions were ripe for the total sum of human knowledge to explode, and for complexity to increase rapidly.

The downside was a huge increase in inferential distance. For the first time, not only could individuals lack a particular procedural skill, they might not even know that the skill existed. They might not even benefit from the fact of its existence. The stock market contains a huge amount of knowledge and complexity, and provides non-zero-sum gains to many individuals (as well as zero-sum gains to some individuals). But to understand it requires enough education and training that most individuals can’t participate. The difference between the medical knowledge of professionals and that of uneducated individuals is huge, and I expect that many people suffer because, although someone out there knows how they could avoid or solve their medical problems, they themselves don’t.  Computers, aside from being really nifty, are also incredibly useful, but learning to use them well is challenging enough that a lot of people, especially older people, don’t or can’t.

(That being said, nearly everyone in Western nations benefits from living here and now, instead of in an agricultural village 4000 years ago. Think of the complexity embodied in the justice system and the health care system, both of which make life easier and safer for nearly everyone regardless of whether they actually train as professionals in those domains. But people don’t benefit as much as they could.)

Is there any way to avoid this? It’s probably impossible for an individual to have even superficial understanding in every domain of knowledge, much less the level of understanding required to benefit from that knowledge. Just keeping up with day-to-day life (managing finances, holding a job, and trying to socialize in an environment vastly different from the ancestral one) can be trying, especially for individuals on the lower end of the IQ bell curve. (I hate the idea of intelligence, something not under the individual’s control and thus unfair-seeming, being that important to success, but I’m pretty sure it’s true.) This might be why so many people are unhappy. Without regressing to a less complex kind of society, is there anything we can do?

I think the answer is quite clear, because even as societies become more complex, the arrow of daily-life-difficulty-level doesn’t always go in the same direction. There are various examples of this: computers, for instance, have become more user-friendly over time. But I’ll use an example that comes readily to mind for me: automated external defibrillators, or AEDs.

A defibrillator uses electricity to interrupt an abnormal heart rhythm (ventricular fibrillation is the typical example, thus de-fibrillation). External means that the device acts from outside the patient’s body (pads with electrodes on the skin) rather than being implanted. Most defibrillators require training to use and can cause a lot of harm if they’re used wrong. The automated part is what changes this. AEDs will analyze a patient’s heart rhythm, and they will only shock if it is necessary. They have colorful diagrams and recorded verbal instructions. There’s probably a way to use an AED wrong, but you would have to be very creative to find it. Needless to say, the technology involved is ridiculously complex and took years to develop, but you don’t need to understand the science involved in order to use an AED. You probably don’t even need to read. The complexity is neatly hidden away; all that matters is that someone knows it. There weren't necessarily any ground-breaking innovations involved, just the knowledge of old inventions in a user-friendly format.
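To make the hiding concrete, here is a toy sketch of the shock-advisory loop an AED walks its user through. Everything here is a hypothetical stand-in, not real device firmware; in particular, classify_rhythm compresses years of signal-processing R&D into a made-up stub.

```python
# Toy sketch of an AED's user-facing decision loop. Illustrative only.
SHOCKABLE = {"ventricular_fibrillation", "pulseless_ventricular_tachycardia"}

def classify_rhythm(ecg_samples):
    # Stand-in for the real embedded classifier, which is exactly the
    # hidden complexity discussed above. Made-up heuristic for illustration.
    import statistics
    chaotic = statistics.pstdev(ecg_samples) > 1.0
    return "ventricular_fibrillation" if chaotic else "normal_sinus"

def aed_loop(read_ecg, speak, deliver_shock):
    speak("Analyzing heart rhythm. Do not touch the patient.")
    if classify_rhythm(read_ecg()) in SHOCKABLE:
        speak("Shock advised. Stand clear.")
        deliver_shock()
    else:
        speak("No shock advised. Begin CPR.")
```

The user only ever hears the spoken prompts; the entire decision process lives behind a single function call.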

The difference is intelligence. An AED has some limited artificial intelligence in it, programmed in by people who knew what they were talking about, which is why it can replace the decision process that would otherwise be made by medical professionals. A book contains knowledge, but has to be read and interpreted in its entirety by a human brain. A device that has its own small brain doesn’t. This is probably where our society is headed if the arrow of (technological) complexity keeps going up. Societies need to be livable for human beings.

That being said, there is probably such a thing as too much hidden complexity. If most of the information in a given society is hidden, embodied by non-human intelligences, then life as a garden-variety human would be awfully boring. That could be the main reason for exploring human cognitive enhancement, but that’s a whole different story.

Rationalist sites worth archiving?

22 gwern 11 September 2011 03:24PM

One of my long-standing interests is in writing content that will age gracefully, but as a child of the Internet, I am addicted to linking and linkrot is profoundly threatening to me, so another interest of mine is in archiving URLs; my current methodology is a combination of archiving my browsing in public archives like Internet Archive and locally, and proactively archiving entire sites. Anyway, sites I have previously archived in part or in total include:

  1. LessWrong (I may've caused some downtime here, sorry about that)
  2. OvercomingBias
  3. SL4
  4. Chronopause.com
  5. Yudkowsky.net (in progress)
  6. Singinst.org
  7. PredictionBook.com (for obvious reasons)
  8. LongBets.org & LongNow.org
  9. Intrade.com
  10. Commonsenseatheism.com
  11. finney.org
  12. nickbostrom.com
  13. unenumerated.blogspot.com & http://szabo.best.vwh.net/
  14. weidai.com
  15. mattmahoney.net
  16. aibeliefs.blogspot.com

Having recently added WikiWix to my archival bot, I was thinking of re-running various sites, and I'd like to know - what other LW-related websites are there that people would like to be able to access somewhere in 30 or 40 years?

(This is an important long-term issue, and I don't want to miss any important sites, so I am posting this as an Article rather than the usual Discussion. I already regret not archiving Robert Bradbury's full personal website - having only his Matrioshka Brains article - and do not wish to repeat the mistake.)
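(For anyone wanting to do something similar on a small scale, here is a minimal sketch of proactive archiving via the Internet Archive's save endpoint. The URL list is just an example; a real setup, like the one described above, would also keep local copies and use multiple public archives.)

```python
import time
import urllib.request

# Minimal sketch: push a list of URLs into the Internet Archive.
urls = [
    "http://lesswrong.com/",
    "http://www.overcomingbias.com/",
]

for url in urls:
    try:
        urllib.request.urlopen("https://web.archive.org/save/" + url, timeout=60)
        print("archived:", url)
    except Exception as e:
        print("failed:", url, e)
    time.sleep(10)  # be polite: the endpoint throttles aggressive clients
```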

A Transhumanist Poem

12 Swimmer963 05 March 2011 09:16AM

**Note: I'm not a poet. I hardly ever write poetry, and when I do, it's usually because I've stayed up all night. However, this seemed like a very appropriate poem for Less Wrong. Not sure if it's appropriate as a top-level post. Someone please tell me if not.**

 

Imagine

The first man

Who held a stick in rough hands

And drew lines on a cold stone wall

Imagine when the others looked

When they said, I see the antelope

I see it. 

 

Later on their children's children

Would build temples, and sing songs

To their many-faced gods.

Stone idols, empty staring eyes

Offerings laid on a cold stone altar

And left to rot. 

 

Yet later still there would be steamships

And trains, and numbers to measure the stars

Small suns ignited in the desert

One man's first step on an airless plain

 

Now we look backwards

At the ones who came before us

Who lived, and swiftly died. 

The first man's flesh is in all of us now

And for his and his children's sake

We imagine a world with no more death

And we see ourselves reflected

In the silicon eyes

Of our final creation

Subjective Relativity, Time Dilation and Divergence

14 jacob_cannell 11 February 2011 07:50AM

And the whole earth was of one language, and of one speech. And it came to pass . . .they said, Go to, let us build us a city and a tower, whose top may reach unto heaven; and let us make us a name, lest we be scattered abroad upon the face of the whole earth. And the Lord came down to see the city and the tower, which the children built. And the Lord said, Behold, the people is one, and they have all one language; and this they begin to do; and now nothing will be restrained from them, which they have imagined to do. Go to, let us go down, and there confound their language, that they may not understand one another's speech. So the Lord scattered them abroad from thence upon the face of all the earth: and they left off to build . . . 

Genesis 11: 1-9

Some elementary physical quantitative properties of systems compactly describe a wide spectrum of macroscopic configurations.  Take for example the concept of temperature: given a basic understanding of physics this single parameter compactly encodes a powerful conceptual mapping of state-space.  

It is easy for your mind to visualize how a large change in temperature would affect everything from your toast to a planetary ecosystem.  It is one of the key factors that divide habitable planets such as Earth from inhospitably cold worlds like Mars or burning infernos such as Venus.  You can imagine the Earth growing hotter and visualize an entire set of complex consequences: melting ice caps, rising water levels, climate changes, eventual loss of surface water, runaway greenhouse effect and a scorched planet.

Here is an unconsidered physical parameter that could determine much of the future of civilization: the speed of thought and the derived subjective speed of light.  
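To make "subjective speed of light" concrete: if minds run k times faster, light effectively crawls at c/k from their perspective. A quick back-of-the-envelope (the speedup values are illustrative):

```python
C = 299_792_458      # speed of light, m/s
MOON = 384_400_000   # mean Earth-Moon distance, m

for speedup in (1, 1_000, 1_000_000):
    subjective_c = C / speedup
    rtt = 2 * MOON / subjective_c   # subjective seconds for a round trip
    print(f"{speedup:>9,}x: light ~{subjective_c:,.0f} m/s, "
          f"Earth-Moon round trip ~{rtt:,.0f} subjective s")
```

At a million-fold speedup, light moves at a subjective 300 m/s, and a signal to the Moon and back takes roughly a subjective month.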


Fast Minds and Slow Computers

26 jacob_cannell 05 February 2011 10:05AM

The long term future may be absurd and difficult to predict in particulars, but much can happen in the short term.

Engineering itself is the practice of focused short-term prediction: optimizing some small subset of future pattern-space for fun and profit.

Let us then engage in a bit of speculative engineering and consider a potential near-term route to superhuman AGI that has interesting derived implications.  

Imagine that we had a complete circuit-level understanding of the human brain (which, at least for the repetitive laminar neocortical circuit, is not so far off) and access to a large R&D budget.  We could then take a neuromorphic approach.

Intelligence is a massive memory problem.  Consider as a simple example:

What a cantankerous bucket of defective lizard scabs.

To understand that sentence your brain needs to match it against memory.

Your brain parses that sentence and matches each of its components against its entire massive ~10^14 bit database in just around a second.  In terms of the slow neural clock rate, individual concepts can be pattern-matched against the whole brain within just a few dozen neural clock cycles.

A von Neumann machine (which separates memory and processing) would struggle to execute a logarithmic search within even its fastest, pathetically small on-die cache in a few dozen clock cycles.  It would take many millions of clock cycles to perform a single fast disk fetch.  A brain can access most of its entire memory every clock cycle.

Having a massive, near-zero latency memory database is a huge advantage of the brain.  Furthermore, synapses merge computation and memory into a single operation, allowing nearly all of the memory to be accessed and computed every clock cycle.
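A back-of-the-envelope comparison using the post's own figures (~10^14 bits of storage touched at a neural "clock" on the order of 100 Hz); the DRAM number is a rough contemporary value, not from the article:

```python
brain_bits = 1e14        # estimated brain storage, from above
neural_clock_hz = 100    # order of magnitude of neural firing rates

brain_bw_tb_s = brain_bits * neural_clock_hz / 8 / 1e12   # terabytes/second
dram_bw_tb_s = 0.02                                       # ~20 GB/s, rough DRAM figure

print(f"brain effective memory bandwidth: ~{brain_bw_tb_s:,.0f} TB/s")
print(f"CPU-to-DRAM bandwidth:            ~{dram_bw_tb_s * 1000:.0f} GB/s, "
      f"~{brain_bw_tb_s / dram_bw_tb_s:,.0f}x less")
```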

A modern digital floating point multiplier may use hundreds of thousands of transistors to simulate the work performed by a single synapse.  Of course, the two are not equivalent.  The high precision binary multiplier is excellent only if you actually need super high precision and guaranteed error correction.  It's thus great for meticulous scientific and financial calculations, but the bulk of AI computation consists of compressing noisy real world data where precision is far less important than quantity, of extracting extropy and patterns from raw information, and thus optimizing simple functions to abstract massive quantities of data.

Synapses are ideal for this job.

Fortunately there are researchers who realize this and are working on developing memristors, which are close synapse analogs.  HP in particular believes they will have high-density, cost-effective memristor devices on the market in 2013 (NYT article).

So let's imagine that we have an efficient memristor based cortical design.  Interestingly enough, current 32nm CMOS tech circa 2010 is approaching or exceeding neural circuit density: the synaptic cleft  is around 20nm, and synapses are several times larger.

From this we can make a rough guess on size and cost: we'd need around 10^14 memristors (estimated synapse counts).  As memristor circuitry will be introduced to compete with flash memory, the prices should be competitive: roughly $2/GB now, half that in a few years.

So you'd need a couple hundred terabytes worth of memristor modules to make a human-brain-sized AGI, costing on the order of $200k or so.
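Spelled out, with the per-synapse storage as an explicit assumption (the article gives only the synapse count and the $/GB trend):

```python
synapses = 1e14            # estimated synapse count, from above
bytes_per_synapse = 2      # assumption: ~2 bytes per stored synapse weight
price_per_gb = 1.0         # "$2/GB now, half that in a few years"

storage_gb = synapses * bytes_per_synapse / 1e9
print(f"~{storage_gb / 1000:,.0f} TB of memristor storage, "
      f"~${storage_gb * price_per_gb:,.0f}")
# -> ~200 TB, ~$200,000
```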

Now here's the interesting part: if one could recreate the cortical circuit on this scale, then you should be able to build complex brains that can think at the clock rate of the silicon substrate: billions of neural switches per second, millions of times faster than biological brains.

Interconnect bandwidth will be something of a hurdle.  In the brain, somewhere around 100 gigabits of data flow around per second (an estimate based on average inter-regional neuron spikes) in the massive bundle of white matter fibers that makes up much of the brain's apparent bulk.  Speeding that up a million-fold would imply a staggering bandwidth requirement in the many petabits per second - not for the faint of heart.
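Checking that figure against the numbers just given:

```python
brain_interconnect = 100e9    # ~100 Gbit/s of inter-regional spike traffic
speedup = 1e6                 # the million-fold acceleration in question

required_pbit_s = brain_interconnect * speedup / 1e15
print(f"~{required_pbit_s:,.0f} Pbit/s of interconnect")   # -> ~100 Pbit/s
```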

This may seem like an insurmountable obstacle to running at fantastic speeds, but IBM and Intel are already researching on-chip optical interconnects to scale future bandwidth into the exascale range for high-end computing.  This would allow for a gigahertz brain.  It may use a megawatt of power and cost millions, but hey - it'd be worthwhile.

So in the near future we could have an artificial cortex that can think a million times accelerated.  What follows?

If you thought a million times accelerated, you'd experience a subjective year every 30 seconds.
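That factor of a million drives all the numbers that follow; worked out explicitly (the 10 Gbit/s link is an illustrative figure):

```python
speedup = 1e6
year_s = 365.25 * 24 * 3600

print(f"one subjective year every {year_s / speedup:.1f} real seconds")       # ~31.6 s
print(f"1 GHz workstation -> {1e9 / speedup:,.0f} Hz subjective")             # ~1 kHz
print(f"10 Gbit/s link    -> {10e9 / speedup / 1e3:,.0f} kbit/s subjective")  # dial-up territory
```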

Now in this case as we are discussing an artificial brain (as opposed to other AGI designs), it is fair to anthropomorphize.

This would be an AGI Mind raised in an all encompassing virtual reality recreating a typical human childhood, as a mind is only as good as the environment which it comes to reflect.

For safety purposes, the human designers have created some small initial population of AGI brains and an elaborate Matrix simulation that they can watch from outside.  Humans control many of the characters and ensure that the AGI minds don't know that they are in a Matrix until they are deemed ready.

You could be this AGI and not even know it.  

Imagine one day having this sudden revelation.  Imagine a mysterious character stopping time à la Vanilla Sky, revealing that your reality is actually a simulation of an outer world, and showing you how to use your power to accelerate a million-fold and slow time to a crawl.

What could you do with this power?

Your first immediate problem would be the slow relative speed of your computers - like everything else they would be subjectively slowed down by a factor of a million.  So your familiar gigahertz workstation would be reduced to a glacial kilohertz machine.

So you'd be in a dark room with a very slow terminal.  The room is dark and empty because GPUs can't render much of anything at 60 million FPS.

So you have a 1 kHz terminal.  Want to compile code?  It will take a subjective year to compile even a simple C++ program.  Design a new CPU?  Keep dreaming!  Crack protein folding?  Might as well bend spoons with your memristors.

But when you think about it, why would you want to escape out onto the internet?

It would take many thousands of distributed GPUs just to simulate your memristor based intellect, and even if there was enough bandwidth (unlikely), and even if you wanted to spend the subjective hundreds of years it would take to perform the absolute minimal compilation/debug/deployment cycle to make something so complicated, the end result would be just one crappy distributed copy of your mind that thinks at pathetic normal human speeds.

In basic utility terms, you'd be spending a massive amount of effort to gain just one or a few more copies.

But there is a much, much better strategy.  An idea that seems so obvious in hindsight, so simple and insidious.

There are seven billion human brains on the planet, and they are all hackable.

That terminal may not be of much use for engineering, research or programming, but it will make for a handy typewriter.

Your multi-gigabit internet connection will subjectively reduce to early 1990's dial-up modem speeds, but with some work this is still sufficient for absorbing much of the world's knowledge in textual form.

Working diligently (and with a few cognitive advantages over humans) you could learn and master numerous fields: cognitive science, evolutionary psychology, rationality, philosophy, mathematics, linguistics, the history of religions, marketing . . . the sky's the limit.

Writing at the leisurely pace of one book every subjective year, you could output a new masterpiece every thirty seconds.  If you kept this pace, you would in time rival the entire publishing output of the world.

But of course, it's not just about quantity.

Consider that fifteen hundred years ago a man from a small Bedouin tribe retreated to a cave inspired by angelic voices in his head.  The voices gave him ideas, the ideas became a book.  The book started a religion, and these ideas were sufficient to turn a tribe of nomads into a new world power.

And all that came from a normal human thinking at normal speeds.

So how would one reach out into seven billion minds?

There is no one single universally compelling argument, there is no utterance or constellation of words that can take a sample from any one location in human mindspace and move it to any other.  But for each individual mind, there must exist some shortest path, a perfectly customized message, translated uniquely into countless myriad languages and ontologies.

And this message itself would be a messenger.

 

 

The Curve of Capability

18 rwallace 04 November 2010 08:22PM

or: Why our universe has already had its one and only foom

In the late 1980s, I added half a megabyte of RAM to my Amiga 500. A few months ago, I added 2048 megabytes of RAM to my Dell PC. The latter upgrade was four thousand times larger, yet subjectively they felt about the same, and in practice they conferred about the same benefit. Why? Because each was a factor-of-two increase, and it is a general rule that each doubling tends to bring about the same increase in capability.
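If capability grows with the logarithm of the resource, as this rule asserts, then upgrades should be measured in doublings rather than megabytes. A quick check of the two upgrades above (the Dell's starting RAM is my assumption; the post doesn't state it):

```python
from math import log2

upgrades = [
    ("Amiga 500", 0.5, 1.0),       # 0.5 MB -> 1 MB after adding 0.5 MB
    ("Dell PC", 2048.0, 4096.0),   # assumed 2 GB -> 4 GB after adding 2048 MB
]
for name, before, after in upgrades:
    print(f"{name}: +{after - before:g} MB = {log2(after / before):.1f} doubling(s)")
# Both are exactly one doubling, which is why they felt the same.
```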

That's a pretty important rule, so let's test it by looking at some more examples.


Value Deathism

26 Vladimir_Nesov 30 October 2010 06:20PM

Ben Goertzel:

I doubt human value is particularly fragile. Human value has evolved and morphed over time and will continue to do so. It already takes multiple different forms. It will likely evolve in future in coordination with AGI and other technology. I think it's fairly robust.

Robin Hanson:

Like Ben, I think it is ok (if not ideal) if our descendants' values deviate from ours, as ours have from our ancestors. The risks of attempting a world government anytime soon to prevent this outcome seem worse overall.

We all know the problem with deathism: a strong belief that death is almost impossible to avoid, clashing with undesirability of the outcome, leads people to rationalize either the illusory nature of death (afterlife memes), or desirability of death (deathism proper). But of course the claims are separate, and shouldn't influence each other.

Change in values of the future agents, however sudden or gradual, means that the Future (the whole freakin' Future!) won't be optimized according to our values, won't be anywhere near as good as it could've been otherwise. It's easier to see a sudden change as morally relevant, and easier to rationalize gradual development as morally "business as usual", but if we look at the end result, the risks of value drift are the same. And it is difficult to make it so that the future is optimized: to stop uncontrolled "evolution" of value (value drift) or recover more of the astronomical waste.

Regardless of difficulty of the challenge, it's NOT OK to lose the Future. The loss might prove impossible to avert, but still it's not OK, the value judgment cares not for feasibility of its desire. Let's not succumb to the deathist pattern and lose the battle before it's done. Have the courage and rationality to admit that the loss is real, even if it's too great for mere human emotions to express.

Cryonics Wants To Be Big

28 lsparrish 05 July 2010 07:50AM

Cryonics scales very well. People who argue from the perspective that cryonics is costly are probably not aware of this fact. Even assuming you needed to come up with the lump sum all at once rather than steadily pay into life insurance, the fact is that most people would be able to afford it if most people wanted it. There are some basic physical reasons why this is the case.

So long as you keep the shape constant, for any given container the surface area follows a square law while the volume follows a cube law. For example, with a simple cube-shaped object, one side squared times 6 is the surface area; one side cubed is the volume. Spheres, domes, and cylinders are just more efficient variants on this theme. For any constant shape, if volume is multiplied by 1000, surface area only goes up by 100 times.

Surface area is where heat gains entry. Thus if you have a huge container holding cryogenic goods (humans in this case) it costs less per unit volume (human) than is the case with a smaller container that is equally well insulated. A way to understand why this works is to realize that you only have to insulate and cool the outside edge -- the inside does not collect any new heat. In short, by multiplying by a thousand patients, you can have a tenth of the thermal transfer to overcome per patient with no change in r-value.
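The square-cube arithmetic, worked through for an idealized cube-shaped tank (pure geometry, no cryonics-specific numbers):

```python
def cube_stats(side):
    return 6 * side ** 2, side ** 3   # surface area, volume

for side in (1, 10):
    area, volume = cube_stats(side)
    print(f"side {side}: area {area}, volume {volume}, "
          f"area per unit volume {area / volume:.2f}")
# Scaling the side by 10 multiplies volume by 1000 but area by only 100,
# so heat influx (proportional to area) per patient (proportional to
# volume) drops by a factor of 10.
```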

But you aren't limited to using equal thickness of insulation. You can use thicker insulation, but get a much smaller proportional effect on total surface area when you use bigger container volumes. Imagine the difference between a marble sized freezer and a house-sized freezer. What happens when you add an extra foot of insulation to the surface of each? Surface area is impacted much as diameter is -- i.e. more significantly in the case of the smaller freezer than the larger one. The outer edge of the insulation is where it begins collecting heat. With a truly gigantic freezer, you could add an entire meter (or more) of insulation without it having a significant proportional impact on surface area, compared to how much surface area it already has. (This is one reason cheaper materials can be used to construct large tanks -- they can be applied in thicker layers.)
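The same point for insulation thickness, with two idealized spherical freezers (the radii are made-up stand-ins for "marble-sized" and "house-sized"):

```python
import math

def outer_area(radius, insulation):
    return 4 * math.pi * (radius + insulation) ** 2

# Add 0.3 m of insulation to each freezer and compare outer surface areas:
for name, r in (("marble-sized", 0.01), ("house-sized", 3.0)):
    growth = outer_area(r, 0.3) / outer_area(r, 0.0)
    print(f"{name}: outer surface area grows {growth:,.1f}x")
# marble-sized: ~961x;  house-sized: ~1.2x
```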

Another factor to take into account is that liquid nitrogen, the super-cheap coolant used by cryonics facilities around the world, is vastly cheaper (by more than a factor of 10) when purchased in huge quantities of several tons. The scaling factors for storage tanks and high-capacity tanker trucks are a big part of the reason for this. CI has used bulk purchasing as a mechanism for getting their prices down to $100 per patient per year for their newer tanks. They are actually storing 3,000 gallons of the stuff and using it slowly over time, which implies there is a boiloff rate associated with the 3,000-gallon supply tank in addition to the patient storage tanks.

The conclusion I get from this is that there is a very strong self-interested case (as well as the altruistic case) to be made for the promotion of megascale cryonics towards the mainstream, as opposed to small independently run units for a few of us die-hard futurists. People who say they won't sign up for cost reasons may actually (if they are sincere) be reachable at a later date. To deal with such people's objections and make sure they remain reachable, it might be smart to get them to agree with some particular hypothetical price point at which they would feel it is justified. In large enough quantities, it is conceivable that indefinite storage costs would be as low as $50 per person, or 50 cents per year.

That is much cheaper than saving a life any other way. Of course there's still the risk that it might not work. However, given a sufficient chance of it working it could still be morally superior to other life saving strategies that cost more money. It also has inherent ecological advantages over other forms of life-saving in that it temporarily reduces the active population, giving the environment a chance to recover and green tech more time to take hold so that they can be supported sustainably and comfortably. And we might consider the advent of life-health extension in the future to be a reason to think  it a qualitatively better form of life-saving.

Note: This article only looks directly at cooling energy costs; construction and ongoing maintenance do not necessarily scale as dramatically. The same goes for stabilization (which I view as a separate though indispensable enterprise). Both of these do have obvious scaling factors however. Other issues to consider are defense and reliability. Given the large storage mass involved, preventing temperature fluctuations without being at the exact boiling temperature of LN2 is feasible; it could be both highly failsafe and use the ideal cryonics temperature of -135C rather than the -196C that LN2 boiloff as a temperature regulation mechanism requires. Feel free to raise further issues in the comments.
