Like me. Vociferously.
Is your position written out somewhere where I can read it?
Eliezer,
In my experience, smart people have many original theories. They likely hold these theories because they know they are smarter than most people, and so don't see any reason to trust common knowledge. Also, holding original and complex theories makes them seem more intelligent. Most original theories are of course incorrect, even when they come from smart people. Intelligent, charismatic people are very good at convincing themselves and others they are correct.
IMO, this is one of the main reasons those smart, competent people in charge screw up so often. They don't do it because they aren't smart or competent; they do it because they have a bias in favor of their own ideas and theories, just like everyone else.
The smarter you are, the more likely you are to think you're the exception, and neglect the outside view.
I read this months ago, but only yesterday finally got the reference.
I didn't realize until you said it.
Even if religion is divinely inspired, a person's understanding of one aspect of religion can be wrong without invalidating all of that person's other religious beliefs.
"Technically, it proves his belief about science is false."
True, though in the same way, Eliezer's success in producing an AI, even according to the dodgy specifications of his dinner companion, would only prove his belief about God wrong, not his belief IN God wrong.
The AI data point would contradict Mr Dinner's model of God's nature only at a single point, His allegedly unique intelligence-producing quality.
Sure. But religion is supposedly divinely inspired and thus completely correct on every point. If one piece of the bundle is disproven, the whole bundle takes a hit.
I was asked by several people to comment on this post/proposal. Clearly, Maxikov put a lot of time and effort into this post and, at least in part, therein lies the pity. When you find you have an idea which seems at once compelling and obvious (in terms of the science) in an already well explored field, the odds are very good that you weren't the first to reach that conjecture. And that almost always means that there is something wrong with your premises. Very smart and capable people have been trying to achieve cryopreservation of cells, tissues, organs and organisms for over 50 years now, and the physical chemistry of water under very high pressures and very low temperatures has been understood for far longer. This should be a hint that some careful searching of the literature is in order before going public with a proposal to "fix cryonics," and especially before spending a lot of time/energy on a proposal like this.
Attempts to use extreme hydrostatic pressure to mitigate or eliminate freezing injury go back at least 60 years, and probably longer. As your phase diagram above shows, when the pressure is sufficiently high during cooling, the expansion of water is prevented, but ice formation is not. What happens is that other allotropes of ice form which do not require expansion. However, this turns out to be a bad thing: unlike Ice I, which forms first in the interstitial spaces, the other ice allotropes cause freezing to occur intracellularly and extracellularly at the same time. Crystal formation inside cells results in devastating ultrastructural disruption - far worse than would occur if ice formed outside cells first, grew slowly and dehydrated the cells, and finally resulted in a vitrified cellular interior (providing that cryoprotectant is present).
However, the problems with this approach don't stop there. Extreme hyperbaria is itself directly damaging by at least two mechanisms: denaturation of cellular proteins (including critical enzymes and membrane proteins) and damage to cell membrane lipid leaflets, resulting in permeabilization of the membrane to ions (Onuchic LF, Lacaz-Vieira F. Glycerol-induced baroprotection in erythrocyte membranes. Cryobiology. 1985 Oct;22(5):438-45.). Irreversible membrane damage occurs in mammalian red cells exposed to a pressure of 8,000 atm (~117,600 psi) applied for ~10 minutes. Exposure of more complex mammalian cells to far lower pressures, ~20,000 psi, results in loss of viability due to protein denaturation, and perhaps also due to alterations in the molecular structure of membrane lipids. Interestingly, the same compounds that provide cellular (molecular) protection against freezing damage also confer substantial protection against baroinjury. Fahy, et al., have extensively explored the use of hyperbaria to augment vitrification in the rabbit kidney (http://www.freepatentsonline.com/4559298.pdf) and have further extended work from the 1980s demonstrating that cryoprotectants are also substantially baroprotective.
The first work that I'm aware of to attempt organ cryopreservation using hyperbaria was that of the late Armand Karow, in the late 1960s to early 1970s (Karow AM Jr, Liu WP, Humphries AL Jr. Survival of dog kidneys subjected to high pressures: necrosis of kidneys after freezing. Cryobiology. 1970 Sep-Oct;7(2):122-8. PMID: 5498348). Karow was able to demonstrate the brief tolerance of dog kidneys to pressures of ~18,000 psi; however, kidneys subjected to isothermal hyperbaric freezing, even in the presence of moderate cryoprotection, did not survive.
When I started research and experimentation in cryobiology nearly 40 years ago, there was no Internet, no (affordable) photocopiers, and the only way to do a "literature search" was with something called the Index Medicus (http://en.wikipedia.org/wiki/Index_Medicus), which was a veritable wall of bound volumes. I used 3" x 5" index cards to write down possible cites to look up - which then required trips to the "stacks" to look for the journals. Today, I have the Internet, PubMed, the international patent database and an online library of 30 million books available. I currently have a digital library of 12,000 mostly scientific and technical books which, at its current rate of growth, should double in size within a few months. My computer is almost constantly reading a book to me with software that cost me just under $5.00. One of the books I "read" recently was The Shallows: What the Internet Is Doing to Our Brains by Nicholas Carr. Carr argues that the Internet is fundamentally altering the way most people today process information - and not for the better. I don't use the Internet the way most people seem to, today. I rely heavily on books, especially textbooks, to educate me about areas with which I have little or no familiarity, and my approach is pretty much what it has been since I started my intellectual life; namely, to study intensively and deeply until I achieve basic mastery of an area, and only then use skimming and browsing over large amounts of material to advance my knowledge. The tools of the information-digital age have thus been a nearly unblemished advantage to me. If you want to read Carr's book, click on this link:
http://www.mediafire.com/download/5s4wdr554ia4axn/Nicholas_Carr-The_Shallows__What_the_Internet_Is_Doing_to_Our_Brains_(2010).epub and then click on the green Download button.
I'm also posting links to a number of full text books on cryobiology which you can download, as per above:
ADVANCES IN BIOPRESERVATION: https://www.mediafire.com/?raccqhv0rrqfhmh
ADVANCES IN LOW TEMPERATURE BIOLOGY: https://www.mediafire.com/?4i6v9qublf3l8q2
FUNDAMENTALS OF CRYOBIOLOGY: https://www.mediafire.com/?pxq6mxbxvfib41j
CURRENT TRENDS IN CRYOBIOLOGY: https://www.mediafire.com/?pxq6mxbxvfib41j
CRYOPRESERVATION... https://www.mediafire.com/?pxq6mxbxvfib41j
LIFE IN THE FROZEN STATE: https://www.mediafire.com/?ydx3a89m2f47r7y
THE FROZEN CELL: https://www.mediafire.com/?ydx3a89m2f47r7y
Cheers, Mike Darwin
With which of those books should I start?
I'm leading a rationality training group. We're working through the most recent CFAR curriculum, but I also want to work from parts of the sequences.
Which posts in the sequences were particularly impactful for you? Not just ones that you found interesting, but ideas that you actually implemented in your thinking about object-level stuff.
I'm particularly interested in posts that we could spin out into techniques to practice, like noticing confusion or leaving a line of retreat.
Remember, in the real world, all of this happens in a continuous configuration space with a differentiable amplitude distribution.
So, in reality, since gravitational interactions and whatnot cause the photon to always have a tiny effect, even with no sensor, it will very rarely show up at Detector 1. And as the level of interaction with the rest of reality increases, P(D1) approaches 50%. Right?
And this is decoherence? This is why the macro-world is seemingly classical? There are so many elements in the system that you never get anything that doesn't interact with something else, and all the configurations are independent?
Clarification: an amplitude is the value of a configuration?
so { a photon going from A to B = (-1 + 0i) } is a configuration and { (-1 + 0i) } is an amplitude?
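The claim that P(D1) rises toward 50% as interaction with the rest of reality increases can be sketched numerically. This is a toy model of my own construction (the amplitudes ±0.5 and the `overlap` parameter are illustrative assumptions, not from the original post): two paths recombine at Detector 1 with equal and opposite amplitudes, and the interference term between them is scaled by the overlap of the environment states correlated with each path.

```python
def p_detector1(overlap):
    """Toy model: probability the photon arrives at Detector 1.

    Two paths recombine at Detector 1 with equal and opposite
    amplitudes, so for a perfectly isolated photon the amplitudes
    cancel exactly and P(D1) = 0.

    `overlap` stands for <E_a|E_b>, the overlap of the environment
    states tied to each path: 1 means the environment carries no
    which-path information; 0 means the paths are fully recorded.
    """
    a1, a2 = 0.5, -0.5
    # Born rule: squared amplitudes plus an interference term that
    # is suppressed as the environment records which path was taken.
    return abs(a1) ** 2 + abs(a2) ** 2 + 2 * a1 * a2 * overlap

for c in (1.0, 0.75, 0.5, 0.25, 0.0):
    print(f"overlap={c:.2f}  P(D1)={p_detector1(c):.3f}")
```

With no interaction (overlap = 1) the amplitudes cancel and P(D1) = 0; with full decoherence (overlap = 0) the interference term vanishes and the two squared amplitudes simply add, giving 50%, which matches the intuition in the question above.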
Despite all the talk about rationality, we are still humans with all the typical human flaws. Also, it is not obvious which way it needs to go. Even if we had unlimited and infinitely fast processing power, and could solve mathematically all kinds of problems related to Löb's theorem, I still would have no idea how we could start transferring human values to the AI, considering that even humans don't understand themselves, and ideas like "AI should find a way to make humans smile" can lead to horrible outcomes. So maybe the first step would be to upload some humans and give them more processing power, but humans can also be horrible (and the horrible ones are actually more likely to seize such power), and the changes caused by uploading could make even nice people go insane.
So, what is the obvious next step, other than donating some money to the research, which will most likely conclude that further research is needed? I don't want to discourage anyone who donates or does the research, just saying that the situation with the research is frustrating by its lack of feedback. On the scale where 0 is the first electronic computer and 100 is the Friendly AI, are we at least at point 1? If we happen to be there, how would we know that?
I would like this plan, but there are reasons to think that the path to WBE passes through neuromorphic AI, which is exceptionally likely to be unfriendly, since the principle is basically to just copy parts of the human brain without understanding how the human brain works.