Causality for statistical or functional properties mostly reduces to generalizations about the behavior of exact microstates. ("Microstate" means physical state completely specified in its microscopic detail. A purely thermodynamic or macroscopic description is a "macrostate".) The entropy goes up because most microstate trajectories go from the small phase-space volume into the large phase-space volume. Macroscopic objects have persistent traits because most microstate trajectories for those objects stay in the same approximate region of state space.
So the second question is about the ontology of macrostate causation. I say it is fundamentally statistical. Cause and effect in elemental form operates only locally in the microstate, between and within fundamental entities, whatever they are. Macrostate tendencies are like thermodynamic laws or Zipf's law: they are really statements about the statistics of very large and complex chains of exact microscopic causal relations.
The usual materialist idea of consciousness is that it is also just a macrostate phenomenon and process. But as I explained, the macrostate definition is a little fuzzy, and this runs against the hypothesis that consciousness exists objectively. I will add that because these "monads" or "tensor factors" containing consciousness are necessarily very complex, there should be a sort of internal statistical dynamics. The laws of folk psychology might just be statistical mechanics of exact conscious states. But it is conceptually incoherent to say that consciousness is purely a high-level description if you think it exists objectively; it is the same fallacy as when some Buddhists say "everything only exists in the mind", which then implies that the mind only exists in the mind. A "high-level description" is necessarily something which is partly conceptual in nature, and not wholly objectively independent in its existence, and this means it is partly mind-dependent.
The first question is a question about how a theory like this would develop in detail. I can't say ahead of time. The physical premise is, the world is a web of tensor factors of various sizes, mostly small but a few of them big; and consciousness inhabits one of these big factors which exists during the lifetime of a brain. If a theory fulfilling the premise develops and makes sense, then I think you would expect any big tensor factor in a living organism, and also in any other physical system, to also correspond to some sort of consciousness. In principle, such a physical theory should itself tell you whether these big factors arise dynamically in a particular physical entity, given a specification of the entity.
Does this answer the final remark about contradiction? Each tensor factor exists completely objectively. The individual tensor factor which is complex enough to have consciousness also exists objectively and has its properties objectively, and such properties include all aspects of its subjectivity. The rest of the brain consists of the small tensor factors (which we would normally call uncorrelated or weakly correlated quantum particles), whose dynamics provide unconscious computation to supplement conscious dynamics of the big tensor factor. I think it is a self-consistent ontology in which consciousness exists objectively, fundamentally, and exactly, and I think we need such an ontology because of the paradox of saying otherwise, "the mind only exists in the mind".
What will make the demarcation line between small and big tensor factors less fuzzy than the macrostate definition? If we feed the internal states of a classical brain simulation into a quantum box (outputs discarded) containing 10^2 or 10^20 entangled particles/quasi-particles, will that make the simulation conscious? How in principle can we de...
Followup to: Cryonics wants to be big
We've all wondered about the wisdom of paying money to be cryopreserved, when the current social attitude to cryopreservation is relatively hostile (though improving, it seems). In particular, the probability that either or both of Alcor and CI go bankrupt in the next 100 years is nontrivial (perhaps 50% for "either"?). If this happened, cryopreserved patients may be left to die at room temperature. There is also the possibility that the organizations are closed down by hostile legal action.[A]
The ideal solution to this problem is a way of keeping bodies cold (colder than -170C, probably) in a grave. Our society already has strong inhibitions against disturbing the dead, which means that a cryonic grave that required no human intervention would be much less vulnerable. Furthermore, such graves could be put in unmarked locations in northern Canada, Scandinavia, Siberia and even Antarctica, where it is highly unlikely people will go, thereby providing further protection.
In the comments to "Cryonics wants to be big", it was suggested that a large enough volume of liquid nitrogen would simply take > 100 years to boil off. Therefore, a cryogenic grave of sufficient size would just be a big tank of LN2 (or some other cryogen) with massive amounts of insulation.
So, I'll present what I think is the best possible engineering case, and invite LW commenters to correct my mistakes and add suggestions and improvements of their own.
If you have a spherical tank of radius r with insulation of thermal conductivity k and thickness r (so a total radius for insulation and tank of 2r) and a temperature difference of ΔT, the power getting from the outside to the inside is approximately
25 × k × r × ΔT
If the insulation is made much thicker, we get into sharply diminishing returns (asymptotically, we can achieve only another factor of 2). The volume of cryogen that can be stored is approximately 4.2 × r³, and the total amount of heat required to evaporate and heat all of that cryogen is
4.2 × r³ × (volumetric heat of vaporization + gas enthalpy)
The quantity in brackets, for nitrogen and a ΔT of 220 °C, is approximately 346,000,000 J/m³. Dividing energy by power gives a boiloff time of
1/12,000 × r² × k⁻¹ centuries
Setting this equal to 1 century, we get:
r²/k = 12,000.
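The derivation above can be checked numerically. This is a minimal sketch; the nitrogen properties used (density ~808 kg/m³, latent heat ~199 kJ/kg, gas specific heat ~1040 J/(kg·K)) are my own assumed values, chosen because they reproduce the post's ~346 MJ/m³ figure:

```python
import math

def boiloff_centuries(r, k, dT=220.0):
    """Centuries for a sphere of LN2, radius r (m), to boil off through
    insulation of conductivity k (W/(m*K)) and thickness r."""
    power = 8 * math.pi * k * r * dT           # spherical-shell conduction, ~25 * k * r * dT
    volume = (4 / 3) * math.pi * r**3          # ~4.2 * r^3
    rho, latent, cp = 808.0, 199_000.0, 1040.0 # assumed LN2/N2 properties
    energy_per_m3 = rho * (latent + cp * dT)   # ~3.46e8 J/m^3, matching the post
    seconds = volume * energy_per_m3 / power
    return seconds / (100 * 365.25 * 24 * 3600)

print(boiloff_centuries(12.0, 0.012))   # cryogel case, close to 1 century
print(boiloff_centuries(2.9, 0.0007))   # powder-in-vacuum case, close to 1 century
```

Both cases come out at roughly one century, consistent with the r²/k = 12,000 rule of thumb.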
Now the question is, can we satisfy this constraint without an exorbitant price tag? Can we do better and get 2 or 3 centuries?
"Cryogel" insulation with a k-value of 0.012 W/(m·K) is commercially available, meaning that r would have to be at least 12 meters. A full 12-meter-radius tank would weigh about 6,000 tons (!), meaning that some fairly serious mechanical engineering would be needed to support it. I'd like to hear what people think this would cost, and how the cost scales with r.
The best feasible k seems to be fine granules or powder in a vacuum. When the mean free path of a gas increases significantly beyond the characteristic dimension of the space that encloses it, the thermal conductivity drops linearly with pressure. This company quotes 0.0007 W/(m·K), though this is at high vacuum. Fine granules of aerogel would probably outperform this in terms of the vacuum required to get down to < 0.001 W/(m·K).
Suppose it is feasible to maintain a good enough vacuum to get to 0.0007 W/(m·K), perhaps with aerogel or some other material. Then r is a mere 2.9 meters, and we're looking at a structure the size of a large room rather than the size of a tower block, and a cryogen weight of a mere 80 tons. Or you could double the radius and have a system that would survive for 400 years, with a size and weight that was still not in the "silly" range.
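Inverting the rule of thumb gives a quick sizing calculator for any insulation. A sketch, assuming the LN2 density of ~808 kg/m³ used earlier (my assumption, not stated in the post):

```python
import math

def tank_size(k, centuries=1.0):
    """Return (radius in m, LN2 mass in tonnes) for insulation
    conductivity k in W/(m*K) and a target boiloff time."""
    r = math.sqrt(12_000 * centuries * k)            # from r^2 / k = 12,000 per century
    mass_t = (4 / 3) * math.pi * r**3 * 808 / 1000   # assumed LN2 density ~808 kg/m^3
    return r, mass_t

r, mass = tank_size(0.012)      # cryogel: r = 12 m, roughly 5,850 t
print(f"cryogel: r = {r:.1f} m, {mass:,.0f} t")
r, mass = tank_size(0.0007)     # powder-in-vacuum: r ~ 2.9 m, roughly 80 t
print(f"vacuum powder: r = {r:.1f} m, {mass:,.0f} t")
```

The cryogel case lands near the post's ~6,000-ton figure, and the vacuum-powder case near 80 tons, so the round numbers in the text check out.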
The option that works without the need for a vacuum is inviting because there's one less thing to go wrong, but I am no expert on how hard it would be to make a system hold a rough vacuum for 100 years, so it is not clear how useful that is.
As a final comment, I disagree that storing all patients in one system is a good idea. Too many eggs in one basket is never good when you're trying to maximize the probability that each patient will survive. That's why I'm keen on finding a system small enough that it would be economical to build one for a few dozen patients, say (cost < $30 million).
So, I invite Less Wrong to comment: is this feasible, and if so how much would it cost, and can you improve on my ideas?
In particular, any commenters with experience in cryogenic engineering would delight me with either refinement or critique of my cryogenic ideas, and delight me even more with cost estimates of these systems. It's also fairly critical to know whether you can hold a 99% vacuum for a century or two.
[A]: In addition to this, many scenarios where cryonics is useful to the average LW reader are scenarios where technological progress is slow but "eventually" gets to the required level of technology to reanimate you, because if progress is fast you simply won't have time to get old and die before we hit longevity escape velocity. Slow progress in turn correlates with the world experiencing a significant "dip" in the next 50 or so years, such as a very severe recession or a disaster of some kind. These are precisely the scenarios where a combination of economic hardship and hostile public opinion might kill cryonics organizations.