Comment author: drethelin 26 April 2013 04:57:08PM 0 points [-]

We spend millions of dollars digging up dinosaurs.

People get really excited when we find things like Troy.

Look at all the antique stores that are around.

Why WOULDN'T people get revived?

Comment author: TitaniumDragon 27 April 2013 01:06:57AM 2 points [-]

Humans aren't dinosaurs, nor can you put them on your mantelpiece as a conversation piece. They are not property, but living, independent persons.

Comment author: Desrtopa 26 April 2013 01:26:40PM 1 point [-]

At this point, you have to ask yourself whether you can even do any sort of reasonable meta analysis on the population. You're seeing clear differences between the populations and you can't just "compensate for them". If you take a sub-population which has numerous factors which increase their risk of some disease, and then "compensate" for those factors and still see an elevated level of the disease, it isn't actually suggestive of anything at all, because you have no way of knowing whether your "compensation" actually compensated for it or not. Statistics is not magic; it cannot magically remove bias from data.

Well, if you already know how much each of the associated factors contributes alone via other tests where you were able to isolate those variables, you can make an educated guess that their combined effect is no greater than the sum of their individual effects.

The presence of other studies that didn't show the same significant results weighs against it, but on the other hand such cases are certainly not unheard of with respect to associations that turn out to be real. The Cochrane Collaboration's logo comes from a forest plot of results for whether an injection of corticosteroids reduces the chance of early death in premature birth. Five out of seven studies failed to achieve statistical significance, but when their evidence was taken together, it achieved very high significance, and further research since suggests a reduction in mortality of between 30% and 50%.
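The pooling behind the Cochrane example can be sketched in a few lines. This is a minimal inverse-variance fixed-effect meta-analysis with made-up numbers chosen to illustrate the pattern, not the actual trial data: each study alone misses significance, but the pooled estimate does not.

```python
import math

# Hypothetical log-odds-ratio estimates and standard errors for seven
# small trials -- illustrative numbers only, not the real Cochrane data.
studies = [(-0.5, 0.40), (-0.4, 0.35), (-0.6, 0.45), (-0.3, 0.30),
           (-0.5, 0.50), (-0.7, 0.40), (-0.2, 0.35)]

# Inverse-variance fixed-effect pooling: weight each study by 1/SE^2.
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
z = pooled / pooled_se

for est, se in studies:
    print(f"study z = {est/se:+.2f}  significant: {abs(est/se) > 1.96}")
print(f"pooled z = {z:+.2f}  significant: {abs(z) > 1.96}")
```

Every individual z-statistic here falls short of the 1.96 threshold, while the pooled z is around -3, which is the same qualitative story the forest plot tells.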

While a study of the sort linked above certainly doesn't establish the truth of its findings with the confidence of its statistical significance, "never believe studies like this" doesn't leave you safe from a treatment-of-evidence standpoint, because even in the case of a real association, the data are frequently going to be messy enough that you'd be hard pressed to locate it statistically. You don't want to set your bar for evidence so high that, in the event that the association were real, you couldn't be persuaded to believe in it.

Comment author: TitaniumDragon 27 April 2013 12:40:26AM 0 points [-]

You can't make an educated guess that a combination of multiple factors is no greater than the sum of their individual effects, and indeed, when you're talking about disease states, this is the OPPOSITE of what you should assume. The harm done to your body taxes its ability to deal with harm; the more harm you apply to it, whatever the source, the worse things get. Your body only has so much ability to fight off bad things happening to it, so if you add two bad things on top of each other, you're actually likely to see harm which is worse than the sum of their effects because part of each of the effects is naturally masked by your body's own repair mechanisms.

On the other hand, you could have something where the negative effects of each of the things counteracts each other.

Moreover (and worse), you're assuming you have any independent data to begin with. Given that there is a correlation between smoking and red meat consumption, your smoking numbers are already suspect, because we've established that the two are not independent variables.

In any event, guessing is not science, it is nonsense. I could guess that the impact of the factors was greater than the sum of the parts, and get a different result, and as you can see, it is perfectly reasonable to make that guess as well. That's why it is called a guess.

When we're doing analysis, guessing is bad. You guess BEFORE you do the analysis, not afterwards. All you're doing when you "guess" how large the impact is, is manipulating the data.

That's why control groups are so important.

Regarding glucocorticoid use in pregnancy, there actually is quite a bit of debate over whether or not their use is a good thing, due to the fact that corticosteroids are teratogens.

And yes, actually, it is generally better not to believe in true correlations than it is to believe in false ones. Look at all the people who are raising malnourished children on vegan and vegetarian diets.

Comment author: ArisKatsaris 26 April 2013 08:57:04AM *  2 points [-]

The problem is with formalizing solutions, and making them consistent with other aspects that one would want an AI system to have (e.g. ability to update on the evidence). Your suggested three solutions don't work in this respect because:

1) If we e.g. make an AI literally assign a probability 0 on scenarios that are too unlikely, then it wouldn't be able to update on additional evidence based on the simple Bayesian formula. So an actual Matrix Lord wouldn't be able to convince the AI he/she was a Matrix Lord even if he/she reversed gravity, or made it snow indoors, etc.

2) The assumption that a person's words provides literally zero evidence one way or another seems again something you axiomatically assume rather than something that arises naturally. Is it really zero? Not just effectively zero where human discernment is concerned, but literally zero? Not even 0.000000000000000000000001% evidence towards either direction? That would seem highly coincidental. How do you ensure an AI would treat such words as zero evidence?

3) We would hopefully want the AI to care about things it can't currently directly observe, or it wouldn't care at all about the future (which it likewise can't currently directly observe).
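The updating problem in (1) can be seen in a couple of lines. This is a minimal sketch of Bayes' rule showing why a prior of exactly 0 is absorbing: no likelihood ratio, however extreme, ever moves it, whereas a merely tiny prior can still grow.

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' rule: P(H|E) = P(E|H)P(H) / (P(E|H)P(H) + P(E|~H)(1 - P(H)))."""
    num = p_e_given_h * prior
    denom = num + p_e_given_not_h * (1 - prior)
    return num / denom if denom else 0.0

# Evidence (snow indoors, reversed gravity) far likelier under "Matrix Lord":
print(posterior(1e-20, 0.99, 1e-6))  # tiny prior -> posterior rises by orders of magnitude
print(posterior(0.0, 0.99, 1e-6))    # prior exactly 0 -> stays 0 no matter what
```

Iterating the first call with further evidence keeps pushing the probability up; iterating the second never leaves zero, which is exactly why hard-coding probability 0 breaks updating.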

The issue isn't helping human beings not fall prey to Pascal's Mugging -- they usually don't. The issue is to figure out a way to program a solution, or (even better) to see that a solution arises naturally from other aspects of our system.

Comment author: TitaniumDragon 27 April 2013 12:16:34AM *  0 points [-]

[quote]1) If we e.g. make an AI literally assign a probability 0 on scenarios that are too unlikely, then it wouldn't be able to update on additional evidence based on the simple Bayesian formula. So an actual Matrix Lord wouldn't be able to convince the AI he/she was a Matrix Lord even if he/she reversed gravity, or made it snow indoors, etc.[/quote]

Neither of those feats is even particularly impressive, though. Humans can make it snow indoors, and likewise an apparent reversal of gravity can be achieved via numerous routes, ranging from inverting the room to affecting one's sense of balance to magnets.

Moreover, there are numerous more likely explanations for such feats. An AI, for instance, would have to worry about someone "hacking its eyes", which would be a far simpler means of accomplishing that feat. Indeed, without other personnel around to give independent confirmation and careful testing, one should always assume that you are hallucinating, or that it is trickery. It is the rational thing to do.

You're dealing with issues of false precision here. If something is so very unlikely, then it shouldn't be counted in your calculations at all, because the likelihood is so low that it is negligible, and most likely any "likelihood" you have guessed for it is exactly that - a guess. Unless you have strong empirical evidence, treating its probability as 0 is correct.

[quote]2) The assumption that a person's words provides literally zero evidence one way or another seems again something you axiomatically assume rather than something that arises naturally. Is it really zero? Not just effectively zero where human discernment is concerned, but literally zero? Not even 0.000000000000000000000001% evidence towards either direction? That would seem highly coincidental. How do you ensure an AI would treat such words as zero evidence?[/quote]

Same way it thinks about everything else. If someone walks up to you on the street and claims souls exist, does that change the probability that souls exist? No, it doesn't. If your AI can deal with that, then it can deal with this situation. If your AI can't deal with someone saying that the Bible is true, then it has larger problems than Pascal's mugging.

[quote]3) We would hopefully want the AI to care about things it can't currently directly observe, or it wouldn't care at all about the future (which it likewise can't currently directly observe).[/quote]

You seem to be confused here. What I am speaking of here is the greater sense of observability, what someone might call the Hubble Bubble. In other words, causality. Them torturing things that have no causal relationship with me - things outside of the realm that I can possibly ever affect, as well as outside the realm that can possibly ever affect me - is irrelevant, and it may as well not happen, because there is not only no way of knowing if it is happening, there is no possible way that it can matter to me. It cannot affect me, and I cannot affect them. It's just the way things work. It's physics here.

Them threatening things outside the bounds of what can affect me doesn't matter at all - I have no way of determining their truthfulness one way or the other, nor has it any way to impact me, so it doesn't matter if they're telling the truth or not.

[quote]The issue isn't helping human beings not fall prey to Pascal's Mugging -- they usually don't. The issue is to figure out a way to program a solution, or (even better) to see that a solution arises naturally from other aspects of our system.[/quote]

The above three things are all reasonable ways of dealing with the problem. Assigning it a probability of 0 is what humans do, after all, when it all comes down to it, and if you spend time thinking about it, 2 is obviously something you have to build into the system anyway - someone walking up to you and saying something doesn't really change the likelihood of very unlikely things happening. And having it just not care about things outside of what is causally linked to it, ever, is another reasonable approach, though it still would leave it vulnerable to other things if it was very dumb. But I think any system which is reasonably intelligent would deal with it as some combination of 1 and 2 - not believing them, and not trusting them, which are really quite similar and related.

Comment author: gwern 20 April 2013 11:20:02PM 0 points [-]

what is the likelihood that we care about dead frozen people in the future, especially if we have to spend millions of dollars resurrecting them (as is more than likely the case), especially given how difficult it would be to do so?

This is a standard criticism people come up with after 5 seconds of thought, and a perfect example of http://lesswrong.com/lw/h8n/litany_of_a_bright_dilettante/

Do you really think that no one in cryonics has ever thought - 'wait a second! why would anyone in the future even bother putting in the work?' - and you have successfully exposed a fatal ~70-year-old blindspot in a comment written in a few seconds?

Comment author: TitaniumDragon 26 April 2013 07:59:48AM 4 points [-]

Dunning-Kruger and experience with similar religious movements suggests otherwise.

It takes someone who really thinks about things very little time to come up with very obvious objections to most religious doctrine, and given the overall resemblance of cryonics to religion (belief in future resurrection, belief that donating to the church/cryonics institution will bring tangible rewards to yourself and others in the future, belief in eternal life), it's not really invalid to suggest something like that.

Which is more likely: that people are deluding themselves over the possibility of eternal life and don't actually have any real answers to the obvious questions, but conveniently ignore them because they see the upside as being so great; or that this has totally been answered, despite the fact that you didn't articulate an actual answer in your response, or even link to one?

I'm pretty sure that, historically speaking, the former is far more likely than the latter.

If someone comes up to you and starts talking about how you have an immortal soul, if you've spent any time studying medicine or neurobiology at all, or have experience with anyone who has suffered brain damage, it really doesn't take you very long to come up with a good counterargument to people having souls. And people have argued about the nature of being for -thousands- of years, and dubiousness about souls has been around for considerably longer than cryonics has. And yet people still believe in souls, despite the fact that a very simple counterargument, requiring only five minutes of thought, exists and has never been countered.

The fact that you did not have a counter for my argument and instead linked to a page which was meant to be a "take that" directed at me is evidence against you having an actual answer to my query, which is always a bad sign. This is not to say that it doesn't have an answer, but a quick, simple answer (or link) would be no more difficult to find than the litany article.

Indeed, after looking at the Alcor site, and reading around, all I really find are arguments against it. The best argument for it that I've seen is that resurrecting 20th century people might be profitable from an entertainment/educational standpoint, but I find even that to be a weak argument - not only is resuscitating someone for the purpose of entertainment deeply morally repugnant (and likely to be so into the future), but Wikipedia and various other sources from the 20th and 21st centuries are likely to be far more valuable to historians, while writers will benefit more from creating their own characters, who are considerably more interesting than real people - and it is considerably cheaper and less morally and legally questionable to do so.

So what is the argument for it? If it is so simple to resolve, then what is the resolution?

As ciphergoth pointed out, there isn't really a good answer here. And that is troubling, given that the whole thing is pointless if no one is ever going to bring you back anyway. I was reading one article on Alcor which suggested that, even for a cryonics optimist, the odds of it actually paying off were 15% if he used only his most optimistic numbers - and I think his numbers about the technology are optimistic indeed. That's bad news, especially given that the guy is someone who actually thinks doing cryonics is worthwhile.

Comment author: TitaniumDragon 26 April 2013 07:37:09AM 0 points [-]

You're thinking about this too hard.

There are, in fact, three solutions, and two of them are fairly obvious ones.

1) We have observed 0 such things in existence. Ergo, when someone comes up to me and says that they are someone who will torture people I have no way of ever knowing exist unless I give them $5, I can simply assign them the probability of 0 that they are telling the truth. Seeing as the vast, vast majority of things I have observed 0 of do not exist, and we can construct an infinite number of things, assigning a probability of 0 to any particular thing I have never observed and have no evidence of is the only rational thing to do.

2) Even assuming they do have the power to do so, there is no guarantee that the person is being rational or telling the truth. They may torture those people regardless. They might torture them BECAUSE I gave them $5. They might do so at random. They might go up to the next person and say the same thing. It doesn't matter. As such, their demand does not change the probability that those people will be tortured at all, because I have no reason to trust them, and their words have not changed the probabilities one way or the other. Ergo, again, you don't give them money.

3) Given that I have no way of knowing whether those people exist, it just doesn't matter. Anything which is unobservable does not matter at all, because, by its very nature, if it cannot be observed, then it cannot be changing the world around me. Because that is ultimately what matters, it doesn't matter if they have the power or not, because I have no way of knowing and no way of determining the truth of the statement. Similar to the IPU, the fact that I cannot disprove it is not a rational reason to believe in it, and indeed the fact that it is non-falsifiable indicates that it doesn't matter if it exists at all or not - the universe is identical either way.

It is inherently irrational to believe in things which are inherently non-falsifiable, because they have no means of influencing anything. In fact, that's pretty core to what rationality is about.

Comment author: Desrtopa 25 April 2013 11:47:44PM *  1 point [-]

There might be some factors which the study is failing to control for, but from the link in the grandparent

Included in the analysis were 448,568 men and women without prevalent cancer, stroke, or myocardial infarction, and with complete information on diet, smoking, physical activity and body mass index

The study seems to control for the more obvious associated factors.

Also, the full text states that the consumption of red meat is associated with an increase in mortality when controlling for the confounders assessed in their study, with processed meat being associated with a greater increase, but poultry not being associated with an increase in mortality.

Comment author: TitaniumDragon 26 April 2013 07:16:12AM 2 points [-]

The problem is that the choice to eat differently itself is potentially a confounding factor (people who pick particular diets may not be like people who do not do so in very important ways), and any time you have to deal with, say, 10 factors, and try to smooth them out, you have to question whether any signal you find is even meaningful at all, especially when it is relatively small.

The study in particular notes:

[quote]Men and women in the top categories of red or processed meat intake in general consumed fewer fruits and vegetables than those with low intake. They were more likely to be current smokers and less likely to have a university degree [/quote]

At this point, you have to ask yourself whether you can even do any sort of reasonable meta analysis on the population. You're seeing clear differences between the populations and you can't just "compensate for them". If you take a sub-population which has numerous factors which increase their risk of some disease, and then "compensate" for those factors and still see an elevated level of the disease, it isn't actually suggestive of anything at all, because you have no way of knowing whether your "compensation" actually compensated for it or not. Statistics is not magic; it cannot magically remove bias from data.

This is the problem with virtually all analysis like this, and is why you should never, ever believe studies like this. Worse still, there's a good chance you're looking at the blue M&M problem - if you do enough meta-analysis of a large population, you will find significant trends which are not really there - and different studies (noted in the paper) report different results: one showed no increase in mortality or morbidity from red meat consumption, an American study showed an increase, and several vegetarian studies showed no difference at all. Because of publication bias (positive results are more likely to be reported than negative results), potential researcher bias (belief that a vegetarian diet is good for you is likelier than normal in a population studying diet, because vegetarians are more interested in diets than the population as a whole), and the fact that we're looking at conflicting results from studies, I'd say that that is pretty good evidence that there is no real effect and it is all nonsense. If I see five studies on diet, and three of them say one thing and two say another, I'm going to stick with the null hypothesis, because it is far more likely that the three studies that say it does something are the result of publication bias toward positive results.

Comment author: RichardKennaway 25 April 2013 04:58:59PM 16 points [-]

Casinos. The habitual gambler would prefer to have money than not, and would prefer to take the negative-value bet than not.

Comment author: TitaniumDragon 25 April 2013 10:51:53PM 7 points [-]

People like my mother (who occasionally goes to the casino with $40 in her pocket, betting it all in 5-cent slot machines a nickel at a time, then taking back whatever she gets) go to the casino in order to have fun and relax, and playing casino games is an enjoyable pastime for them. Thus, while they lose money, they acknowledge that this is more likely than not to happen, and are not distressed when they leave with less money than they entered with, because their goal was to enjoy themselves, not to end up with more money. Getting more money is just a side benefit, something that happens sometimes (about one time in four that she goes, she winds up with more money than she entered with) but which is not really the primary purpose.

Ergo calling it a money pump in such cases is a bit silly.

On the other hand, people who genuinely believe they can win money at the lottery/gambling (against the house; it is not irrational to play poker or blackjack with the idea that you can win, IF you know what you're doing) are in fact engaging in money pumping activities.

But it really depends on the nature of the person involved as to whether or not it is a true money pump.

In response to Fermi Estimates
Comment author: TitaniumDragon 25 April 2013 10:23:12PM 6 points [-]

I will note that I went through the mental exercise of cars in a much simpler (and I would say better) way: I took the number of cars in the US (300 million was my guess for this, which is actually fairly close to the actual figure of 254 million claimed by the same article that you referenced) and guessed about how long cars typically ended up lasting before they went away (my estimate range was 10-30 years on average). To have 300 million cars, that would suggest that we would have to purchase new cars at a sufficiently high rate to maintain that number of vehicles given that lifespan. So that gave me a range of 10-30 million cars purchased per year.

The number of 5 million cars per year absolutely floored me, because that actually would fail my sanity check - to get 300 million cars, that would mean that cars would have to last an average of 60 years before being replaced (and with the actual figure, 254M/5M ≈ 50 years).
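The stock-and-flow arithmetic above can be written out directly; this is just the Fermi estimate itself, using the guessed figures from the comment:

```python
# Fermi estimate: annual car sales = stock of cars / average lifespan.
stock = 300e6  # guessed number of cars in the US (actual: ~254 million)

for lifespan in (10, 20, 30):  # guessed average years a car stays on the road
    print(f"{lifespan}-year lifespan -> {stock / lifespan / 1e6:.0f}M sales/yr")

# Working backwards from the 5M/yr figure that failed the sanity check:
implied_lifespan = stock / 5e6
print(f"5M sales/yr implies cars last {implied_lifespan:.0f} years")
```

The guessed inputs bracket the answer at 10-30 million sales per year, and running the 5 million figure backwards yields the implausible 60-year average lifespan that triggered the sanity check.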

The actual cause of this is that car sales have PLUMMETED in recent times. In 1990, the median age of a vehicle was 6.5 years; in 2007, it was 9.4 years, and in 2011, it was 10.8 years - meaning that in between 2007 and 2011, the median car had increased in age by 1.4 years in a mere 4 years.

I will note that this sort of calculation was taught to me all the way back in elementary school as a sort of "mathemagic" - using math to get good results with very little knowledge.

But it strikes me that you are perhaps trying too hard in some of your calculations. Oftentimes it pays to be lazy in such things, because you can easily overcompensate.

Comment author: RomeoStevens 24 April 2013 07:22:53PM 1 point [-]

Unprocessed means untreated with preservatives. Smoked, salted, dried, potassium benzoate, etc. The evidence I'm referencing is a meta-review of epidemiological studies. The lack of a causal pathway refers to the failure to find anything when doing intervention studies on particular substances. So it could very well be that the epidemiological studies are all failing to properly control for confounding factors. Nutritional self reporting is notoriously terrible. Epidemiological studies often rely on spaced surveys, sometimes asking questions about food habits over an entire year. That people are unable to provide accurate info is unsurprising. Still, it is not zero evidence.

My own hypothesis is that the animal's diet has a lot more to do with the potential harm to you than currently realized. Animals with crappy diets are sickly. We likely have a natural aversion to eating sickly animals for a reason.

Comment author: TitaniumDragon 25 April 2013 09:54:43PM 3 points [-]

Uh, yeah. The reason for that is that sickly animals carry parasites. It is logical that we wouldn't want to eat parasite-ridden or diseased animals, because then WE get the parasites. If the animal is not parasite-ridden, there's no good reason to believe it would be unhealthy to eat.

My personal suspicion for the cause is underlying SES factors (wealthy people tend to eat better, fresher food than the poor) as well as the simple issue of dietary selection - people who watch what they eat are also more likely to exercise and generally have healthier habits than those who are willing to eat anything.

Comment author: ciphergoth 21 April 2013 12:51:36PM 5 points [-]

And it's not necessarily that the replies to this problem are good, but that they are what you need to reply to. There's nothing to be said for making a serve we've already returned; to advance the discussion, you need to actually hit the ball back into our court, by reading and replying to the standard replies to this point.

Comment author: TitaniumDragon 25 April 2013 08:52:16PM 4 points [-]

I have never actually seen any sort of cogent response to this issue. Ever. I see it being brushed aside constantly, along with the magical brain restoration technology necessary for this, but I've never actually seen someone go into why, exactly, anyone would bother to thaw them out and revive them, even if it WAS possible to do. They are, for all intents and purposes, dead, from a legal, moral, and ethical standpoint. Not only that, but defrosting them has little actual practical benefit - while there is obvious value to the possible cryopreservation of organs, that is only true if there aren't better ways of preserving organs for shipment and preservation. As things are today, however, that seems unlikely - we already have means of shipping organs and keeping them alive, and given the current trend towards growing organs, it seems far more likely to me that the actual method will be to grow organs and keep them alive rather than keep them in cryopreservation, and without that technology being worked on, there is pretty much no value at all to developing unfreezing technology.

That means that, realistically speaking, the only purpose of such technology would be, say, shipping humans to another planet, which, while probably not really rational from an economic perspective, is at least somewhat reasonably likely. But even still, that is a different kettle of fish - the technology in question may not resemble present-day cryonics at all, and as such may be utterly useless for unfreezing people from present-day cryogenic treatments. Once you can prove that people CAN be revived in that way, then there is much more incentive towards cryonics... but that is not present-day cryonics, and there is no evidence to suggest future cryogenic treatments will be very similar to present ones.

Okay, so even all that technology aside, let's assume, at some point, we do develop this technology for whatever reason. At this point, not only do you have to bear the expense of unfreezing these people, but you also have to bear the expense of fixing whatever is wrong with them (which, I will note, actually killed them in the past), as well as fixing whatever damage was done to them prior to being cryogenically frozen (and lest we forget, 10 minutes without oxygen is very likely to cause irreparable brain damage in humans who survive - let alone humans who are beyond what we in the present day can deal with). This is likely to be very, very expensive indeed, and there is little real incentive for someone in the future to spend their money in this way instead of on something else. You are basically hoping for some rich idiot to not only be capable of doing this, but also be willing to do it and have the legal ability to do so (as, lest we forget, there are laws about playing around with human corpses, and I suspect that it is unlikely they will change positively for frozen people in the future - as if they do change, what are the odds that your frozen body won't be used in some other sort of experiment?).

I have never seen arguments which really address these issues. People wave their hands and talk about nanotechnology and brain uploading, but as someone who has actually dealt with nanotechnology I can tell you that it is not, in fact, magical, nor is it capable of many of the feats people believe it will be capable of, nor will it EVER be capable of many of the feats that people imagine it will be capable of. Nanomachines have to be provided with energy the same as anything else, among other, major issues, and I have some severe doubts about the unfreezing process in the first place due to various issues of thermodynamics and the fact that the bodies are not frozen in a setup which is likely to facilitate unfreezing them.

A lot of cryonics arguments basically boil down to "future technology is magic", and that's a pretty big problem for any sort of rational argumentation. "You can't prove that they won't be able to revive me" can be used for all sorts of terrible arguments, as the burden of proof is on the person making the argument that it IS possible, not on the person holding to the present day "we can't, and see no way to do so."

I mean, you look at things like:

http://www.alcor.org/Library/html/resuscitation.htm

The technology in here is, quite literally, magic. It doesn't exist, and it won't exist. Ever. Things on that level are very dumb; they cannot be intelligent, and they cannot act intelligently, because they are too small, too simple. The bits where they stick stuff into your cells are where things get really ridiculous, but even before then, those little nanomachines are going to have real issues doing what you are hoping for, and would have to be custom-built for the task at hand. We're talking enormous expense, if it is even possible to do at all, and given the extremely small cryogenic population, the odds of perfecting the technology prior to running out of dead people are not very good. Remember, if the result is brain dead or severely brain damaged, it is still a failure. But even these sorts of nanomachines are very questionable; transistors are only going to get 256 times smaller at most, which makes me question whether said nanomachines can function in the way that is hoped for at all. Of course, this is not necessarily a barrier to, say, a different sort of nanomachine (though they'd be more micromachines, really, on the scale of a cell rather than on the scale of large molecules) which was controlled by some sort of external process, with the little machines being extensions/remotes of it, but this is still questionable.

Extreme expense, questionable technology (which would have to be custom-developed for the purpose), the question of whether cryonics is even a viable technological route for something else for cryogenic revival to piggyback on, likely custom technology for reviving people who have died of things that people no longer die of because of earlier preventative measures (why build something to fix someone with late-stage cancer when no one gets late-stage cancer anymore?), legal problems, the necessity for experimental subjects... all of these things add up to the question of why these hypothetical future people are even going to bother. That's assuming it is even ethical to revive someone who is, say, not genetically engineered and therefore would be at the bottom of the societal heap if they were revived.
