Comment author: maxikov 13 December 2015 01:29:20AM *  0 points [-]

If nothing breaks, we'll be live here: https://www.youtube.com/watch?v=SpUuPr5gYxk

Comment author: smoofra 21 June 2015 09:54:49PM 4 points [-]

I think you've pretty much stated the exact opposite of my own moral-epistemological worldview.

I don't like the analogy with physics. Physical theories get tested against external reality in a way that makes them fundamentally different from ethical theories.

If you want to analogize between ethics and science, I want to compare it to the foundations of mathematics. So utilitarianism isn't relativity, it's ZFC. Even though ZFC proves PA is a consistent and true theory of the natural numbers, it's a huge mistake for a human to base their trust in PA on that!

There is almost no argument or evidence that can convince me to put more trust in ZFC than I do in PA. I don't think I'm wrong.

I trust low-energy moral conclusions more than I will ever trust abstract metaethical foundational theories. I think it is a mistake to look for low-complexity foundations and reason from them. I think the best we can do is seek reflective equilibrium.

Now, that being said, I don't think it's wrong to study abstract metaethical theories, to ask what their consequences are, and even to believe them a little bit. The analogy with math still holds here. We study the heck out of ZFC. We even believe it more than a little at this point. But we don't believe it more than we believe the intermediate value theorem.

PS: I also don't think "shut up and calculate" is something you can actually do under utilitarianism, because there are good utilitarian arguments for obeying deontological rules and being virtuous, and pretty much every ethical debate that anyone has ever had can be rephrased as a debate about what terms should go in the utility function and what the most effective way to maximize it is.

Comment author: maxikov 21 June 2015 11:20:41PM 0 points [-]

PA has a big advantage over object-level ethics: it never suggested things like "every tenth or so number should be considered impure and treated as zero in calculations", while object-level ethics did. The closest thing I can think of in mathematics, where everyone believed X and then it turned out to be not-X at all, was the idea that it's impossible to algorithmically integrate every elementary function or prove its integral non-elementary. But even that was a within-system statement, not a meta-statement, and it has an objective truth value. Systems as a whole, however, don't necessarily have one. Thus, in ethics either individual humans or society as a whole need a mechanism for discarding ethical systems for good, which isn't that big of an issue for math. And the solution to this problem seems to be meta-ethics.

Comment author: shminux 21 June 2015 10:02:44PM 4 points [-]

TL;DR: Once in a while a wild extrapolation of an earlier limited model turns out to match a later, more comprehensive one. This happens in ethics, as well as in physics. Occurrences like that are amplified by the selection bias and should be treated with caution.

(Also, a bunch of applause lights for utilitarianism.)

Comment author: maxikov 21 June 2015 10:30:52PM 3 points [-]

I agree with the first paragraph of the summary, but as for the second - my point is against turning on applause lights for utilitarianism on the grounds of such occurrences, or on any grounds whatsoever. And I also observe that ethics hasn't moved as far from Bentham as physics has from Newton, which I regard as meta-evidence that the existing models are probably insufficient at best.

High energy ethics and general moral relativity

8 maxikov 21 June 2015 08:34PM

Utilitarianism sometimes supports weird things: killing lone backpackers for their organs, sacrificing all the world's happiness to one utility monster, creating zillions of humans living at near-subsistence level to maximize total utility, or killing all but a handful of them to maximize average utility. Also, it supports gay rights, and has been supporting them since 1785, when saying that there's nothing wrong with having gay sex was pretty much in the same category as saying that there's nothing wrong with killing backpackers. This makes one wonder: if, despite all the disgust towards them a few centuries ago, gay rights have been inside humanity's coherent extrapolated volition all along, then perhaps our descendants will eventually come to the conclusion that killing the backpacker was the right choice all along, and only the bullet-biting extremists of our time were getting it right. As a matter of fact, as a friend of mine pointed out, you don't even need to fast-forward a few centuries - there are or were ethical systems actually in use in some cultures (e.g. bushido in pre-Meiji-restoration Japan), obsessed with honor and survivor's guilt, that would approve of killing the backpacker or letting them kill themselves - that being an honorable death, and living while letting five other people die being dishonorable - on non-utilitarian grounds, and whose adherents actually alieve that this is the right choice. Perhaps they were right all along, and the Western civilization that bulldozed through and effectively destroyed such cultures did so not because of superior (non-utilitarian) ethics but for whatever other reasons things happen in history. In this case there's no need to try to fix utilitarianism lest it suggest killing backpackers, because it's not broken - we are - and our descendants will figure that out.
In physics we've seen this happen: an elegant low-Kolmogorov-complexity model predicted that weird things happen on the subatomic level, and we built huge particle accelerators just to confirm - yep, that's exactly what happens, in spite of all your intuitions. Perhaps smashing utilitarianism with high-energy problems only breaks our intuitions, while utilitarianism itself is just fine.

But let's talk about relativity. In 1916 Karl Schwarzschild solved the newly discovered Einstein field equations and thus predicted black holes. This was thought of as a mere curiosity, and perhaps GIGO, at the time - until the 1960s, when people realized that yes, contra all intuitions, this is in fact a thing. But here's the thing: black holes were actually first predicted by John Michell in 1783. You can easily check it: if you substitute the speed of light into the classical formula for escape velocity, you get the Schwarzschild radius. Michell knew the radius and mass of the Sun, as well as the gravitational constant, precisely enough to get the order of magnitude and the first digit right when providing an example of such an object. If we had somehow never discovered general relativity, but managed to build telescopes good enough to observe the stars orbiting the emptiness that we now call Sagittarius A*, it would be very tempting to say: "See? We predicted this centuries ago, and however crazy it seemed, we now know it's true. That's what happens when you stick to the robust theories, shut up, and calculate - you stay centuries ahead of the curve."
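The coincidence is easy to verify numerically - a quick sketch, using standard published values for the constants:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg
R_sun = 6.957e8    # solar radius, m

# Schwarzschild radius from GR: r_s = 2GM/c^2
r_s = 2 * G * M_sun / c**2
print(f"Schwarzschild radius of the Sun: {r_s / 1000:.2f} km")  # ~2.95 km

# Newtonian escape velocity at the solar surface: v = sqrt(2GM/r).
v_esc = math.sqrt(2 * G * M_sun / R_sun)

# At fixed density, mass grows as r^3, so v_esc scales linearly with radius.
# Michell's dark star is therefore (c / v_esc_sun) solar radii across:
print(f"Dark-star radius: {c / v_esc:.0f} solar radii")  # ~485; Michell said ~500
```

Setting v = c in sqrt(2GM/r) and solving for r gives exactly r = 2GM/c^2 - the same expression Schwarzschild derived, for entirely different reasons.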

We now know that Newtonian mechanics aren't true, although they're close to the truth when you plug in non-astronomical numbers (and even some astronomical ones). A star 500 times the size and the same density as the Sun, however, is very much astronomical. It is sheer coincidence that in this exact formula the relativistic terms work out to give the same solution for the escape velocity as classical mechanics does. It would have been enough for Michell to imagine that his dark star rotates - a thing that Newtonian mechanics say doesn't matter, although it does - to change the category of this prediction from "miraculously correct" to "expectedly incorrect". This doesn't mean that Newtonian mechanics weren't a breakthrough, better than any single theory existing at the time. But it does mean that it would have been premature for people in the pre-relativity era to invest in building a starship designed to go ten times the speed of light even if they could - although that's where "shut up and calculate" could lead them.

And that's where I think we are with utilitarianism. It's very good. It's more or less reliably better than anything else. And it has managed to make ethical predictions so far-fetched (funnily enough, about as far-fetched as the prediction of dark stars) that it's tempting to conclude that the only reason it keeps making crazy predictions is that we haven't yet realized they're not crazy. But we live in the analog of the world where Sagittarius A* has been discovered, and general relativity hasn't. The actual 42-ish ethical system will probably converge to utilitarianism when you plug in non-extreme numbers (small numbers of people, non-permanent risks and gains, non-taboo topics). But just because it converged to utilitarianism on one taboo (at the time) topic, keeping utilitarianism centuries ahead of the moral curve, doesn't mean it will do the same for others.

Comment author: [deleted] 18 June 2015 08:32:12AM *  10 points [-]

Post something half-baked on LW and you will be torn to shreds. Which is great, of course, and I wouldn't have it any other way

I would have it, and I don't find it great. Why should baking be an individual effort? Teamwork is better. It should be seen as "here, if you like it, help me bake it". That is why it's Discussion, not Main. I think a good way to use this site's setup would be to throw half-baked things into Discussion, cooperate on baking them if they sound interesting, and then, when done, promote them to Main. Really, why don't we do this?

All the great articles of the past, LW 2007-2010, look a lot like individual efforts. Why should that be so?

Is this a bit of Silicon Valley culture? Because those guys do the same - they have a software idea and work on it individually or with 1-2 co-founders. Why? Why not start an open source project and invite contributors from step 1? Why not throw half-made ideas out into the wild and encourage others to work on them and finish them? Assuming you are not after the money but after a solution you yourself would use, of course - "scratch your own itch" is a good idea in open source.

This kind of individual-effort culture sounds a lot like a culture where insights are abundant but the work of developing them is scarce, so people don't much value insights from others unless they are properly worked out. Well, I should say I'm used to pretty much the opposite: most folks I know just work routine jobs with hardly any reflection at all...

In response to comment by [deleted] on In praise of gullibility?
Comment author: maxikov 21 June 2015 05:39:44AM 1 point [-]

Is this a bit of Silicon Valley culture? Because those guys do the same - they have a software idea and work on it individually or with 1-2 co-founders. Why? Why not start an open source project and invite contributors from step 1? Why not throw half-made ideas out into the wild and encourage others to work on them and finish them?

For one thing, because the open source community isn't terribly likely to embark on a random poster's new project, and you'll end up developing it mostly by yourself anyway. Furthermore, there's this aspect of hacker culture, and especially open source culture, of being actively anti-evangelistic: disliking the development of user-friendly things like Ubuntu, and preferring Slackware or Gentoo.

Comment author: mikedarwin 09 April 2015 09:32:23PM 22 points [-]

My pleasure!

I have a few (hopefully helpful) comments to add. I am a huge advocate of trying things yourself on a doable scale. For instance, many years ago I had pretty much the same idea you did, and I decided to test it out directly. I lived across the street from a mechanical engineer from Eli Lilly, Inc., named Bud Riever. I asked Bud to figure out how much pressure would develop if I simply cooled a closed steel container completely filled with water to well below the freezing point. The answer was about 2,000 atmospheres, or about 29,000 psi. As it turns out, a piece of steel pipe of the right thickness, threaded on both ends and capped with screw-on galvanized steel pipe caps, will hold that pressure. And since it is hydrostatic pressure with no gas present, if the pipe fails (splits), it will not fail explosively. My test subject was to be baker's yeast, reconstituted in a dilute sugar solution and placed inside a twist-tied sandwich bag (no air bubbles), which was in turn placed inside the section of pipe, which was then capped on the open end.
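That pressure figure can be sanity-checked from known physics: water frozen at constant volume rides down the ice-I melting curve, which bottoms out at the liquid/ice-I/ice-III triple point, a published constant of about -22 C and 209.9 MPa. A minimal unit check:

```python
# Standard conversion factors
ATM_PA = 101325.0   # pascals per atmosphere
PSI_PA = 6894.76    # pascals per psi

# Liquid / ice-I / ice-III triple point: the maximum pressure that freezing
# water confined at constant volume can generate before ice III takes over.
p_triple_Pa = 209.9e6

p_atm = p_triple_Pa / ATM_PA
p_psi = p_triple_Pa / PSI_PA
print(f"{p_atm:.0f} atm, {p_psi:.0f} psi")  # ~2072 atm, ~30,443 psi
```

So "about 2,000 atmospheres" is right on the money for a pipe that holds together all the way down.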

It took me forever to figure out that the only way to close the pipe with the yeast inside, while also excluding all air bubbles, was to do so in a galvanized metal wash tub filled with water. The cap on the pipe was screwed shut under water in the tub. I could then cool my self-pressurizing chamber with a slush of dry ice and acetone. I broke several pipes before I found a thickness of steel that would take the pressure. Alas, my experiment showed only slightly better survival of yeast under pressure than was achievable under the same conditions with a vented pipe; i.e., almost none.

Maybe two years ago, I got the idea that inhaled hydrogen gas might be profoundly radioprotective. H+ should be available to neutralize the OH- radicals produced by the interaction of gamma rays and water, thereby acting as an "instantaneous" neutralizer of the bulk of radiation injury (the bulk of the non-hydroxyl-radical injury occurs when high-energy particles directly impact and disrupt DNA). I did a literature search and found nothing. I also asked a medical physicist friend and several other scientists whom I respected. I was told that this approach would not work, in large measure because the addition of dissolved hydrogen would not deal with the problem of the hydrogen radical that would remain after the hydroxyl radical was neutralized. My hypothesis was that the hydrogen radical would react with oxygen to form another hydroxyl radical, and then subsequently be neutralized by the abundant molecular hydrogen.

After some months, I couldn't stand not knowing anymore, so I found an industrial X-ray service with powerful enough X- and gamma-ray sources to deliver ~16 gray of radiation to half a dozen mice in a reasonable period of time, and I cobbled up a test apparatus. The next step was to expose mice to supralethal doses of X- and gamma rays. Hydrogen gas at 80% of the breathing mix (balance oxygen) was indeed profoundly protective. When I passed this information along to my medical physicist friend, he quickly found cites of other (pretty obscure) work showing the same effect:

http://cdn.intechopen.com/pdfs/35987/InTech-Hydrogen_from_a_biologically_inert_gas_to_a_unique_antioxidant.pdf

Qian LR, Cao F, Cui JG, Huang YC, Zhou XJ, Liu SL, Cai JM: Radioprotective effect of hydrogen in cultured cells and mice. Free Radic Res 2010, 44:275-282.

Qian LR, Li BL, Cao F, Huang YC, Liu SL, Cai JM, Gao F: Hydrogen-rich PBS protects cultured human cells from ionizing radiation-induced cellular damage. Nuclear Technology & Radiation Protection 2010, 25:23-29.

Alas, my dreams of a commercializable product that would render radiological exams effectively safe for children and young and middle-aged adults vanished - well, as in a puff of hydrogen and oxygen igniting. But here (to me) is the really strange thing: despite the stunning degree of radioprotection inhaled hydrogen gas provides, as well as evidence that it is pluripotently protective against ischemia-reperfusion injury, cancer, and a variety of other free-radical-mediated pathologies (http://www.molecularhydrogeninstitute.com/studies/), no one I know has shown the slightest interest in it. So, even if you identify something that is workable and easy to implement, don't expect the world to beat a path to your door!

Nevertheless, DOING THINGS and actually carrying out experiments changes how you think, how you approach problem solving, and how your brain is wired. These changes are, for the most part, empowering, and make you a better problem solver.

Comment author: maxikov 09 April 2015 10:25:37PM 3 points [-]

That's actually surprising: I thought yeast survived freezing reasonably well, and http://www.ncbi.nlm.nih.gov/pmc/articles/PMC182733/?page=2 seems to confirm that. What was different in your setup, such that even the control group had a very low survival rate?

Comment author: mikedarwin 09 April 2015 06:54:23AM *  49 points [-]

I was asked by several people to comment on this post/proposal. Clearly, Maxikov put a lot of time and effort into this post, and, at least in part, therein lies the pity. When you find you have an idea which seems at once compelling and obvious (in terms of the science) in an already well-explored field, the odds are very good that you weren't the first to reach that conjecture. And that almost always means that there is something wrong with your premises. Very smart and capable people have been trying to achieve cryopreservation of cells, tissues, organs, and organisms for over 50 years now, and the physical chemistry of water under very high pressures and very low temperatures has been understood for far longer. This should be a hint that some careful searching of the literature is in order before going public with a proposal to "fix cryonics," and especially before spending a lot of time and energy on a proposal like this.

Attempts to use extreme hydrostatic pressure to mitigate or eliminate freezing injury go back at least 60 years, and probably longer. As your phase diagram above shows, when the pressure is sufficiently high during cooling, the expansion of water is prevented, but ice formation is not. What happens is that other allotropes of ice form which do not require expansion. However, this turns out to be a bad thing, since, as opposed to Ice I forming first in the interstitial spaces, freezing in the presence of the other ice allotropes occurs both intracellularly and extracellularly at the same time. Crystal formation inside cells results in devastating ultrastructural disruption - far worse than would occur if ice formed outside the cells first, grew slowly, dehydrated the cells, and finally left a vitrified cellular interior (provided that cryoprotectant is present).

However, the problems with this approach don't stop there. Extreme hyperbaria is itself directly damaging, by at least two mechanisms: denaturation of cellular proteins (including critical enzymes and membrane proteins) and damage to cell-membrane lipid leaflets, resulting in permeabilization of the membrane to ions (Onuchic LF, Lacaz-Vieira F. Glycerol-induced baroprotection in erythrocyte membranes. Cryobiology. 1985 Oct;22(5):438-45). Irreversible membrane damage occurs in mammalian red cells exposed to a pressure of 8,000 atm (~117,600 psi) applied for ~10 minutes. Exposure of more complex mammalian cells to far lower pressures, ~20,000 psi, results in loss of viability due to protein denaturation, and perhaps due to alterations in the molecular structure of membrane lipids as well. Interestingly, the same compounds that provide cellular (molecular) protection against freezing damage also confer substantial protection against baroinjury. Fahy, et al., have extensively explored the use of hyperbaria to augment vitrification in the rabbit kidney (http://www.freepatentsonline.com/4559298.pdf) and have further extended work from the 1980s demonstrating that cryoprotectants are also substantially baroprotective.

The first work that I'm aware of attempting to achieve organ cryopreservation using hyperbaria was that of the late Armand Karow, in the late 1960s to early 1970s (Karow AM Jr, Liu WP, Humphries AL Jr. Survival of dog kidneys subjected to high pressures: necrosis of kidneys after freezing. Cryobiology. 1970 Sep-Oct;7(2):122-8. PMID: 5498348). Karow was able to demonstrate the brief tolerance of dog kidneys to pressures of about ~18,000 psi; however, kidneys subjected to isothermal hyperbaric freezing, even in the presence of moderate cryoprotection, did not survive.

When I started research and experimentation in cryobiology nearly 40 years ago, there was no Internet, no (affordable) photocopiers, and the only way to do a "literature search" was with something called the Index Medicus (http://en.wikipedia.org/wiki/Index_Medicus), which was a veritable wall of bound volumes. I used 3" x 5" index cards to write down possible cites to look up - which then required trips to the "stacks" to look for the journals. Today, I have the Internet, PubMed, the international patent database, and an online library of 30 million books available. I currently have a digital library of 12,000 mostly scientific and technical books which, at its current rate of growth, should double in size within a few months. My computer is almost constantly reading a book to me with software that cost me just under $5.00. One of the books I "read" recently was The Shallows: What the Internet Is Doing to Our Brains by Nicholas Carr. Carr argues that the Internet is fundamentally altering the way most people today process information - and not for the better. I don't use the Internet the way most people seem to today. I rely heavily on books, especially textbooks, to educate me about areas with which I have little or no familiarity, and my approach is pretty much what it has been since I started my intellectual life: namely, to study intensively and deeply until I achieve basic mastery of an area, and only then use skimming and browsing over large amounts of material to advance my knowledge. The tools of the digital information age have thus been a nearly unblemished advantage to me. If you want to read Carr's book, click on this link:

http://www.mediafire.com/download/5s4wdr554ia4axn/Nicholas_Carr-The_Shallows__What_the_Internet_Is_Doing_to_Our_Brains_(2010).epub and then click on the green Download button.

I'm also posting links to a number of full-text books on cryobiology, which you can download as per above:

ADVANCES IN BIOPRESERVATION: https://www.mediafire.com/?raccqhv0rrqfhmh

ADVANCES IN LOW TEMPERATURE BIOLOGY: https://www.mediafire.com/?4i6v9qublf3l8q2

FUNDAMENTALS OF CRYOBIOLOGY: https://www.mediafire.com/?pxq6mxbxvfib41j

CURRENT TRENDS IN CRYOBIOLOGY: https://www.mediafire.com/?pxq6mxbxvfib41j

CRYOPRESERVATION... https://www.mediafire.com/?pxq6mxbxvfib41j

LIFE IN THE FROZEN STATE: https://www.mediafire.com/?ydx3a89m2f47r7y

THE FROZEN CELL: https://www.mediafire.com/?ydx3a89m2f47r7y

Cheers, Mike Darwin

Comment author: maxikov 09 April 2015 09:36:39AM 6 points [-]

Thanks so much for the detailed review and lots of useful reading!

Comment author: passive_fist 09 April 2015 03:23:20AM 1 point [-]

To my understanding it's because of the higher tensile strength of carbon fiber, although I could be wrong.

I wonder, how much can be achieved by merely increasing the thickness of the walls (even to such extremes as a small hole in a cubic meter of steel)?

In a round vessel containing pressure, a pressure gradient is set up from the inside wall to the outside. You can think of such a vessel as a series of concentric shells of increasing radius, each of which only has to support the pressure differential acting upon it. At some pressure level, the differential near the inner wall becomes so high that it tears the material apart, regardless of how thick the walls are or how small the interior radius is. The physics of this isn't terribly complicated, but I don't have any links at the moment, sorry.
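The classical result behind this is the Lame solution for a thick-walled cylinder: the hoop stress at the bore is p(b^2 + a^2)/(b^2 - a^2), which decreases as the outer radius b grows but never drops below the internal pressure p itself. A minimal sketch:

```python
def inner_wall_hoop_stress(p, a, b):
    """Lame solution: peak hoop (circumferential) stress at the bore of a
    thick-walled cylinder with inner radius a, outer radius b, and
    internal pressure p."""
    return p * (b**2 + a**2) / (b**2 - a**2)

p = 100.0  # internal pressure, arbitrary units
for ratio in (1.5, 2, 10, 1000):  # outer/inner radius ratios b/a
    print(f"b/a = {ratio:6}: stress = {inner_wall_hoop_stress(p, 1.0, ratio):.4f}")
# The stress falls toward p (here 100.0) as the wall thickens, but never
# below it: past a certain pressure, no amount of extra steel helps, because
# the material at the bore must always carry at least p in tension.
```

This is why a pressure on the order of the material's strength is a hard ceiling for a monolithic vessel, no matter the geometry.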

Comment author: maxikov 09 April 2015 03:39:20AM 0 points [-]

Sure, I can easily imagine that by mentally substituting the steel with jello - at some point you'll tear it apart no matter how thick the walls are. However, that substitution also gives me the impression that most shapes we would normally consider for a vessel don't reach the maximum strength possible for the material.

Comment author: passive_fist 09 April 2015 01:23:38AM 4 points [-]

One common technique is composite construction with carbon fibers wound concentrically around an alloy core.

Comment author: maxikov 09 April 2015 02:51:46AM 1 point [-]

Is that done to convert shear force to tension?

I wonder, how much can be achieved by merely increasing the thickness of the walls (even to such extremes as a small hole in a cubic meter of steel)?

Comment author: Romashka 08 April 2015 08:56:00PM *  2 points [-]

I meant that in mammals of comparable sizes, you have brains of comparable sizes - and, ultimately, if you salvage a brain, all is not lost. Also, they have definable behaviour, which (as you approach harsher experiments, like testing the ability to recognize kin after being thawed) might tell you something useful. How would you interpret a shrimp's ability to move after thawing? And all that blood chemistry - the closer it gets to human, the better. Starting with shrimp is useful at the very beginning, to see if it can be done at all, maybe.

As for mammals, perhaps mice are better to begin with, because they are smaller than we are. I just thought - without checking - that sea mammals are tougher when it comes to oxygen depletion combined with evenly distributed heightened pressure. I could be wrong.

BTW, what do you think of Tardigrada, water bears?:)

Comment author: maxikov 08 April 2015 09:47:01PM 1 point [-]

Ah, that's true. I guess going back to normal vitals and motion is good enough for preliminary experiments, but of course once that step is over, it's crucial to start examining the effects of preservation on cognitive features of mammals.

Tardigrada and some insects are in fact known to survive ridiculously harsh conditions, freezing (combined with nearly complete dehydration) included. Thus, it makes sense to take a simple organism that isn't known to survive freezing, and make it survive. I suspect, though, that if you prevent tardigrades from dehydrating before freezing, the control group won't survive, which means that some experiments could be done on them too.
