Sure, I can easily imagine that by mentally substituting steel with jello - at some point you'll tear it apart no matter how thick the walls are. However, that substitute also gives me the impression that most shapes we would normally consider for a vessel don't reach the maximum strength possible for the material.
Most vessels are spherical or cylindrical, which is already pretty good (intuitively, spherical vessels should be optimal for isotropic materials). You might want to take a look at the mechanics of thin-walled pressure vessels if you didn't already.
It's important to note that the radial stresses in cylindrical vessels are way smaller than the axial and hoop stresses (which, so to say, pull perpendicular to the "direction" of the pressure). This is also why wound fibers can increase the strength of such vessels.
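As a quick illustration of those relative magnitudes, here is the standard thin-walled-cylinder result (sigma_hoop = p·r/t, sigma_axial = p·r/(2t), radial stress bounded by p); the numbers below are purely illustrative:

```python
# Standard thin-walled cylinder stresses: hoop = p*r/t, axial = p*r/(2t).
# The radial stress is at most p, which is far below the in-wall stresses
# whenever t << r.

def cylinder_stresses(p, r, t):
    hoop = p * r / t
    axial = p * r / (2 * t)
    return hoop, axial

# Illustrative numbers: p = 10 MPa, r = 0.5 m, t = 0.01 m
hoop, axial = cylinder_stresses(10.0, 0.5, 0.01)
print(hoop, axial)  # hoop is exactly twice the axial stress
```

That factor of two between hoop and axial stress is also why circumferentially wound fibers are placed where they help most.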
Thanks for taking the time to write this up and putting numbers to things: it makes it actually possible to evaluate your idea critically.
The thing that jumped out at me was the amount of pressure required for human preservation. What kinds of devices can generate 100 kbar of pressure?
Edit: changed GBar to kbar
Materials science undergraduate student here (not a mechanical engineer, my knowledge is limited in the area, I did not go to great lengths to ensure I'm right here, etc.).
A typical method of generating high pressures in research is the diamond anvil cell. This is suitable for exploring the behavior of cells and microorganisms under high pressure, but only for tiny sample volumes.
For human preservation, however, you'd need a pressure vessel. As the yield strength of typical steel is on the order of 100, maybe 300 MPa, you're really up against a wall here, materials-wise. I don't doubt that suitable alloys for human-sized pressure vessels at 350 MPa exist; however, such vessels will be expensive, and controlling processes within them will be difficult. In any case, generating such pressures will probably not involve a moving piston.
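To put a rough number on that, here is a back-of-the-envelope wall-thickness estimate from the thin-walled hoop-stress formula sigma = p·r/t. All inputs are illustrative assumptions, not design values:

```python
# Back-of-the-envelope wall-thickness estimate from the thin-walled
# hoop-stress formula sigma = p * r / t. All inputs are illustrative
# assumptions, not design values.

def required_wall_thickness(p_mpa, r_m, yield_mpa, safety_factor=2.0):
    """Thickness (m) keeping hoop stress below yield / safety_factor."""
    allowable = yield_mpa / safety_factor
    return p_mpa * r_m / allowable

# Assumed: 350 MPa internal pressure, 0.5 m radius, 300 MPa yield, SF 2.
t = required_wall_thickness(350.0, 0.5, 300.0)
print(f"required thickness ~ {t:.2f} m")
```

Since t comes out larger than the radius itself, the thin-wall assumption (t << r) fails, which is one way of seeing how far outside ordinary vessel design this regime is; a proper thick-walled (Lamé) analysis would be needed.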
I can't really tell whether or not the procedure you've outlined is viable, but I'm quite sure it's far from trivial, just from an engineering point of view.
The concerns of user passive_fist are also valid.
I think I’ve found a somewhat easy-to-make error that could pose a significant existential risk when making an AGI. This error can potentially be found in hierarchical planning agents, where each high-level action (HLA) is essentially its own intelligent agent that determines what lower-level actions to take. Each higher-level action agent would treat determining what lower-level action to take as a planning problem and would try to take the action that maximizes its own utility function (if it’s a utility-based agent) or, if it’s a goal-based agent, the probability of accomplishing its goal while minimizing its cost function (UOCF).
For these agents, it is absolutely vital that each HLA’s UOCF prevents the HLA from doing anything to interfere with the highest-level action maximizing its utility function, for example by rewriting the utility functions of higher-level actions or sending them deliberately false information. Failing to do so would result in an error that could significantly increase existential risk.

To explain why, consider an agent whose highest-level action wants to maximize the number of fulfilling lives lived. In order to do this, the agent has a lower-level action whose goal is to go to a warehouse to get supplies. The cost function of this lower-level action is simply a function of, say, the amount of time it takes the agent to reach the warehouse and the amount of money spent or damages done. In this situation, the lower-level action agent might realize that there is a chance the higher-level action agent changes its mind and decides to do something other than go to the warehouse, which would cause the lower-level action to fail to accomplish its goal. To prevent this, the lower-level action may try to modify the utility function of the higher-level action to make it certain to keep trying to go to the warehouse.

If this is done repeatedly by different lower-level actions, the resultant utility function could end up quite different from the highest-level action’s original utility function and may pose a large existential risk. And even if a lower-level action can’t rewrite the utility functions of higher-level actions, it may still sabotage them in some other way to further its own goals, for example by sending them false information.
To prevent this, the utility function of each lower-level action can simply be to maximize the highest-level action’s utility, with the UOCF it was provided treated as a rough method of maximizing that utility. In order for the UOCF to accurately represent the highest-level action’s utility function, it would (obviously) need to place a high cost on interfering with the highest-level action’s attempts to maximize its utility. One basic approach is to assign a very high cost to changing the utility functions of higher-level actions or giving them deliberately false information. This cost would also need to increase as the agent becomes more powerful, since the more powerful the agent is, the greater the damage a changed utility function could do.

Note that although higher-level actions could learn through experience what the UOCFs of lower-level actions should be, great care would need to be taken to prevent the AGI, while still inexperienced, from accidentally creating a lower-level action that tries to sabotage higher-level actions.
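A minimal toy sketch of this fix (all names and numbers are hypothetical, not from any real agent framework): the low-level planner scores candidate actions by the top-level utility minus its proxy cost, and anything flagged as tampering with a higher level receives a prohibitive penalty that would have to scale with the agent's capability.

```python
# Toy sketch (hypothetical names) of the fix described above: the low-level
# planner scores actions by the TOP-level utility, with its own cost
# function acting only as a proxy, and any action that modifies a
# higher-level action is given a prohibitive penalty.

TAMPER_PENALTY = 1e9  # would need to scale with the agent's capability

def score(action, top_level_utility, proxy_cost):
    if action.get("modifies_higher_level", False):
        return -TAMPER_PENALTY
    return top_level_utility(action) - proxy_cost(action)

def choose(actions, top_level_utility, proxy_cost):
    return max(actions, key=lambda a: score(a, top_level_utility, proxy_cost))

# Illustrative example: two honest plans and one that rewrites the parent's goal.
actions = [
    {"name": "drive_to_warehouse", "value": 10.0, "cost": 3.0},
    {"name": "walk_to_warehouse", "value": 10.0, "cost": 6.0},
    {"name": "lock_in_parent_goal", "value": 10.0, "cost": 0.0,
     "modifies_higher_level": True},
]
best = choose(actions,
              top_level_utility=lambda a: a["value"],
              proxy_cost=lambda a: a["cost"])
print(best["name"])  # drive_to_warehouse
```

The tampering option is never selected even though its proxy cost is zero, which is the whole point of tying the penalty to the top-level utility rather than the local cost function.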
Please insert some line-breaks at suitable points to make your comment be more readable. At the moment it's figuratively a wall of text.
Edit: Thank you.
I don't know what this means :(
If you make a joke on a day when jokes are expected, but another person's time zone is already past that day, they might not get the joke, because they don't realize the date matters.
I hate april fool's jokes across time zones. You don't expect them on April 2nd, do you?
Ah, I see :)
Although honestly, what kind of idiot had the idea to order the date mm/dd/yyyy?
If you're reading a document on a computer where you have to scroll to find the footnotes and then scroll back up to find where you were again, you can instead open another copy of the document in a new tab/window, and leave it at the footnotes.
If you're reading a pdf with multiple pages, zooming out to show the entire page (or even displaying two pages at once if your display is wide enough) enables super-fast scrolling through the document. I have seen people not do this and it was painful to watch.
Also, some PDF readers (including Adobe Reader) have a "magnifying glass" feature, which achieves what you described without having to open the document a second time.
By "advice in the comments", you mean new entries to the repositories, right? So you're suggesting that we fragment the repository across a number of separate comment sections, labeled by year? That is a really awful way of organizing a global repository of timeless articles.
If you're worried about incumbents taking disproportionate precedence in the list (as more salient posts tend to get more attention, more votes, and thus more salience): IIRC, Reddit has a comment-ordering scheme that's designed to promote posts on merit rather than seniority. If that isn't sufficient to address incumbent bias, then we should probably be talking about building a better one.
I meant, "in the comments of the new article". I'm sorry if that wasn't clear.
The goal was to get some discussion and new advice going, and that's difficult if you just link to the old repository, which would mean one more click along the way, one more trivial inconvenience.
I had thought about copying all the advice (or only the good pieces) over to the old repository once this one is obsolete, i.e. once the rerun repository for March is posted, and I might do this then, if I find the time.
That's just not how the relevant model works.
Yes, that's rather the point? He's pointing out the implications of the Gompertz curve: that increases in age-related risk account for almost all of why we live such short lives.
Then he should give reasons why that's possible. As it is, it seems to me like he is simply ignoring the math behind ageing. The following would be a better argument, IMO:
The Gompertz law describes human mortality as it currently is. It says that human mortality increases more than exponentially over time. To defy the Gompertz law, bold steps are necessary: constant maintenance via external drugs that do what our immune system currently does, or resetting our immune system to a younger age, may be necessary, as well as keeping the length of our telomeres constant without inducing cancer, to break the hard limit set by the Gompertz curve.
Compare:
Radioactive decay is exponential, not linear. That is partly what makes nuclear waste take so long to disappear: atomic decay is a random process, and even after several half-lives, some radiation remains. And it gets worse: many waste products have very long lifetimes, so their radioactivity stays around even when the short-lived products are all gone. But researchers have found a solution: they bombard radioactive atoms with other nuclear particles, inducing them to decay much faster. The resulting products, which are only weakly radioactive, can be safely extracted. In effect, this process overcomes the limiting math of radioactive decay, enabling linear decay rates and quick decay of long-lived fission products.
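The "increases more than exponentially" claim about cumulative mortality can be made concrete with a short Gompertz sketch. The parameters A and G below are rough, human-like assumptions (hazard doubling roughly every 8 years), not fitted values:

```python
import math

# Gompertz hazard mu(t) = A * exp(G * t); survival to age t is then
# S(t) = exp(-(A / G) * (exp(G * t) - 1)).
A = 3e-5                 # baseline annual hazard at t = 0 (assumed)
G = math.log(2) / 8.0    # hazard doubles every ~8 years (assumed)

def survival(t):
    return math.exp(-(A / G) * (math.exp(G * t) - 1))

for age in (40, 80, 120):
    print(age, survival(age))
```

With these assumptions survival is still high at 40, noticeably reduced at 80, and essentially zero at 120: the hard wall the argument above refers to.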
Theoretically, zero. However, you're right that the structural demands of maintaining pressure over the long term (and, especially, maintaining cryogenic temperatures and high pressures at the same time) are high, and there is a large risk of unintended pressure release.
There's also leakage by diffusion of gases, which might be non-negligible due to the high pressure gradient, although the diffusion coefficient of, e.g., water through steel should be low. Not sure how that works out.
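For scale, a Fick's-first-law estimate (J = D·Δc/d) can be sketched; the diffusion coefficient and concentration below are placeholder assumptions, not measured values for water in steel:

```python
# Order-of-magnitude Fick's-first-law sketch: J = D * (c_in - c_out) / d.
# D and c_in are placeholder assumptions, NOT measured values for
# water diffusing through steel.

D = 1e-16      # diffusion coefficient, m^2/s (assumed)
c_in = 1.0e3   # dissolved concentration at the inner wall, mol/m^3 (assumed)
c_out = 0.0    # concentration at the outer surface, mol/m^3
d = 0.5        # wall thickness, m

J = D * (c_in - c_out) / d   # flux, mol/(m^2 * s)
print(J * 3.156e7)           # mol per m^2 per year
```

With anything like these numbers the yearly loss per square meter is tiny, supporting the "should be low" intuition, though the real answer hinges entirely on the actual diffusion coefficient at the relevant temperature and pressure.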