Comment author: TitaniumDragon 27 May 2014 10:46:25PM *  1 point [-]

Everything else is way further down the totem pole.

People talk about the grey goo scenario, but I actually think that is quite silly, because there is already grey goo all over the planet in the form of life. There are absolutely enormous amounts of bacteria and viruses and fungi and everything else all around us, and given the enormous evolutionary advantage that being a grey goo would confer, we would expect the entire planet to have already been covered in the stuff - probably repeatedly. The fact that we see so much diversity - the fact that nothing CAN do this, despite enormous evolutionary incentive TO do this - suggests that grey goo scenarios are either impossible or incredibly unlikely. And that's ignoring the thermodynamic issues which would almost certainly prevent such a scenario from occurring anyway, given the necessity of reshaping arbitrary matter into self-replicating material, which would surely take more energy than is present in that matter to begin with.

Physics experiments gone wrong have similar problems - we've seen supernovas. The energy released by a supernova is vastly beyond anything a planetary civilization is likely capable of producing, and seeing as supernovas don't destroy everything, it is vastly unlikely that anything WE do will. There are enormously energetic events in the universe, and the universe itself is reasonably stable - it seems unlikely that our feeble, merely planetary energy levels are going to do any better in the "destroy everything" department. And even before that, there was the Big Bang, and the universe came to exist out of that whole mess. We have the Sun, and meteoritic impact events, both of which are very powerful indeed. And yet, we don't see exotic, earth-shattering physics coming into play there in unexpected ways. Extremely high energy densities are not likely to propagate - they're likely to dissipate. We see this throughout the universe, and in the laws of thermodynamics.

It is very easy to IMAGINE a superweapon that annihilates everything. But actually building one? Having one have realistic physics? That's another matter entirely. Indeed, we have very strong evidence against it: surely, intelligent life has arisen elsewhere in the universe, and we would see galaxies being annihilated by high-end weaponry. We don't see this happening. Thus we can assume with a pretty high level of confidence that such weapons do not exist or cannot be created without an implausible amount of work.

The difficult physics of interstellar travel is not to be denied, either - the best we can do with present physics is nuclear pulse propulsion, which might reach perhaps 10% of c and has enormous logistical issues. Anything FTL requires exotic physics which we have no idea how to create, and which may well describe situations that are not physically realizable - that is to say, the numbers may work, but there may be no way to get there. Compare how nothing in the math forbids speeds above c, yet you can never even REACH c, so the "safe space" the equations allow on the other side is meaningless. Without FTL, interstellar travel is far too slow for such disasters to propagate across the galaxy - any plague would die out on the planet it was created on, and even WITH FTL, it is still rather unlikely that you could easily spread something like that. Only if cheap FTL travel existed would spreading the plague be all that viable... but with cheap FTL, everyone else can flee it that much more easily.
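
The "far too slow" point above is easy to make concrete with back-of-the-envelope arithmetic (a sketch; the distances are standard rounded figures - Proxima Centauri is about 4.24 light-years away, and the galactic disc is roughly 100,000 light-years across):

```python
def travel_time_years(distance_ly, fraction_of_c):
    """Naive travel time in years, ignoring acceleration phases
    and relativistic effects (fine for a rough lower bound)."""
    return distance_ly / fraction_of_c

print(travel_time_years(4.24, 0.1))     # nearest star at 10% of c: ~42 years
print(travel_time_years(100_000, 0.1))  # across the galaxy: ~a million years
```

At nuclear-pulse speeds, any self-spreading disaster takes geological timescales to cross the galaxy, which is the crux of the argument.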

My conclusion from all of this is that these sorts of estimates are less "estimates" and more "wild guesses which we pretend have some meaning, dressed up with a lot of fancy math to convince ourselves and others that we have some idea what we're talking about". Estimates like one in three million, or one in ten, are wild overestimates - and indeed, aren't based on logic any more sound than that of the guy on The Daily Show who said that it would either happen, or it wouldn't: a 50% chance.

We have extremely strong evidence against galactic and universal annihilation, and there are extremely good reasons to believe that even planetary level annihilation scenarios are unlikely due to the sheer amount of energy involved. You're looking at biocides and large rocks being diverted from their orbits to hit planets, neither of which are really trivial things to do.

It is basically a case of http://tvtropes.org/pmwiki/pmwiki.php/Main/ScifiWritersHaveNoSenseOfScale, except applied in a much more pessimistic manner.

The only really GOOD argument we have for lifetime-limited civilizations is the Fermi Paradox (https://en.wikipedia.org/wiki/Fermi_paradox) - that is to say, where are all the bloody aliens? Unfortunately, the Fermi Paradox is a somewhat weak argument, primarily because we have absolutely no idea which side of the Great Filter we are on. That being said, if practical FTL travel exists, I would expect any civilization which invented it to pretty much never die, because of how easy it would be to spread out, making destroying them all vastly more difficult. The galaxy would probably end up colonized and recolonized regardless of how much people fought against it.

Without FTL travel, galactic colonization is possible, but it may be impractical from an economic standpoint; there is little benefit to the home planet in having additional planets colonized. Information is the only thing you could expect to really trade over interstellar distances, and even that is questionable, given that locals will likely try to develop technology locally and beat you to market - so unless habitable systems are very close together, duplication of effort seems extremely likely. Entertainment would thus be the largest benefit: games, novels, movies and suchlike. This MIGHT mean that colonization is unlikely, which would be another explanation... but even that assumes they wouldn't want to explore for the sake of doing so.

Of course, it is also possible we're already on the other side of the Great Filter, and the reason we don't see any other intelligent civilizations colonizing our galaxy is because there aren't any, or the ones which have existed destroyed themselves earlier in their history or were incapable of progressing to the level we reached due to lack of intelligence, lack of resources, eternal, unending warfare which prevented progress, or something else.

This is why pushing for a multiplanetary civilization is, I think, a good thing; if we hit the point where we had 4-5 extrasolar colonies, I think it would be pretty solid evidence in favor of being beyond the Great Filter. Given the dearth of evidence for interstellar disasters created by intelligent civilizations, I think our main window for destroying ourselves likely lasts only until the point where we expand.

But I digress.

It isn't impossible that we will destroy ourselves (after all, the Fermi Paradox does offer some weak evidence for it), but I will say that I find any claims of numbers for the likelihood of doing so incredibly suspect, as they are very likely to be made up. And given that we have no evidence of civilizations being capable of generating galaxy-wide disasters, it seems likely that whatever disasters exist are planetary scale at best - and our lack of any plausible scenarios even at that scale weakens that argument further. The only real evidence we have against our civilization existing indefinitely is the Fermi Paradox, and it has its own flaws. We may destroy ourselves. But until we find other civilizations, you are fooling yourself if you think you aren't just making up numbers. Anything which destroys us outside of an impact event is likely something we cannot predict.

Comment author: TitaniumDragon 27 May 2014 11:23:31PM *  0 points [-]

After reading through all of the comments, I think I may have failed to address your central point here.

Your central point seems to be "a rational agent should take a risk that might result in universal destruction in exchange for increased utility".

The problem here is I'm not sure this is even a meaningful argument to begin with. Obviously universal destruction is extremely bad, but the utility being weighed probably already includes the value of all life NOT being extinguished. In other words, the calculation isn't necessarily meaningful if we assume the alternative makes universal annihilation more likely.

Say the Nazis gain an excessive amount of power. What happens then? Well, there's the risk that they make some sort of plague to cleanse humanity, screw it up, and wipe everyone out. That scenario seems MORE likely in a Nazi-run world than one which isn't. And - let's face it - chances are the Nazis will try and develop nuclear weapons, too, so at best you only bought a few years. And if the wrong people develop them first, you're in a lot of trouble. So the fact of the matter is that the risk is going to be taken regardless, which further diminishes the loss of utility you could expect from universal annihilation - sooner or later, someone is going to do it, and if it isn't you, then it will be someone else who gains whatever benefits there are from it.

The higher-utility situation likely decreases the future odds of universal annihilation. In other words, it is entirely rational to take that risk, simply because the odds of destroying the world NOW are less than the odds of the world being destroyed further down the line by someone else if you don't make this decision - especially if you can be reasonably certain someone else is going to try it anyway. And given that the odds are incredibly low, it is a far less meaningful choice to begin with.
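
The comparison in the paragraph above can be sketched as a tiny expected-risk calculation. All the numbers here are hypothetical, purely to illustrate the shape of the argument - the whole point of the parent comments is that nobody actually knows these probabilities:

```python
# Hypothetical parameters (illustration only):
# p_now    - chance your own attempt destroys everything
# p_other  - chance per rival attempt, if you refuse and leave it to others
# n_rivals - number of rivals who will try anyway
p_now = 1e-6
p_other = 1e-5
n_rivals = 10

# If you refuse, annihilation is avoided only if EVERY rival attempt fails.
p_if_you_refuse = 1 - (1 - p_other) ** n_rivals

print(p_now < p_if_you_refuse)  # True: under these numbers, refusing doesn't lower total risk
```

The conclusion is entirely driven by the assumed inputs, which is exactly the "made-up numbers" problem the surrounding comments complain about.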

Comment author: TitaniumDragon 27 May 2014 10:47:57PM -1 points [-]

Incidentally, regarding some other things in here:

[quote]They thought that just before World War I. But that's not my final rejection. Evolutionary arguments are a more powerful reason to believe that people will continue to have conflicts. Those that avoid conflict will be out-competed by those that do not.[/quote]

There's actually a pretty good counter-argument to this: capital is vastly easier to destroy than it is to create, and thus an area which avoids conflict has an enormous advantage over one that doesn't, because it retains more of its capital. As capital becomes increasingly important, conflict - at least violent, capital-destroying conflict - becomes massively less beneficial to its perpetrator, doubly so when the perpetrator also benefits, via trade, from the capital contained in other nations.

And that's ignoring the fact that we've already sort of engineered a global scenario where "The West" (the US, Canada, Japan, South Korea, Taiwan, Australia, New Zealand, and Western Europe, creeping now as far east as Poland) never attacks each other, and slowly makes everyone else in the world more like them. It is group selection of a sort, and it seems to be working pretty well. These countries defend their capital, and each others' capital, benefit from each others' capital, and engage solely in non-violent conflict with each other. If you threaten them, they crush you and make you more like them; even if you don't, they work to corrupt you to make you more like them. Indeed, even places like China are slowly being corrupted to be more like the West.

The more that sort of thing happens, the less likely violent conflict becomes because it is simply less beneficial, and indeed, there is even some evidence to suggest we are being selected for docility - in "the West" we've seen crime rates and homicide rates decline for 20+ years now.

As a final, random aside:

My favorite thing about the Trinity test was the scientist who was taking side bets on the annihilation of the entire state of New Mexico, right in front of the governor of said state, who I'm sure was absolutely horrified.

Comment author: TitaniumDragon 27 May 2014 10:46:04PM *  0 points [-]

I was directed here from FIMFiction.

Because of https://en.wikipedia.org/wiki/Survivorship_bias we really can't know what the odds are of doing something that ends up wiping out all life on the planet; nothing we have tried thus far has even come close, or even really had the capability of doing so. Even global thermonuclear war, terrible as it would be, wouldn't end all life on Earth, and indeed probably wouldn't even manage to end human civilization (though it would be decidedly unpleasant and hundreds of millions of people would die).
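
The survivorship-bias point can be illustrated with a toy Monte Carlo simulation (all parameters hypothetical): observers only exist in histories where the catastrophe never happened, so the catastrophe rate they observe is always zero, no matter what the true risk was.

```python
import random

# Toy model: each "history" runs n_trials risky experiments, each with true
# per-trial extinction probability p. Observers exist only in histories
# where no experiment ever wiped them out.
random.seed(0)
p, n_trials, n_histories = 0.01, 100, 10_000

surviving = 0
for _ in range(n_histories):
    if all(random.random() >= p for _ in range(n_trials)):
        surviving += 1

print(surviving / n_histories)  # fraction of histories with observers (~0.37)
# Every surviving observer saw 0 catastrophes in 100 trials,
# even though the true per-trial risk was 1%.
```

Which is exactly why "nothing we tried has destroyed us yet" tells us less than it seems to about the odds.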

Some people thought that the nuclear bomb would ignite the atmosphere... but a lot of people didn't, either, and that three-in-a-million chance... I don't even know how they arrived at it, but it sounds like a typical wild guess to me. How would you even arrive at that figure? Indeed, there is good reason to believe that the atmosphere may well have experienced such events before, in the form of impact events; this is why we knew, for instance, that the LHC was safe - nature had already run considerably more energetic experiments. Some people claimed it might destroy the universe, but the odds were actually 0 - it simply lacked the ability to do so, because if it were going to cause a vacuum collapse, the universe would already have been destroyed by such an event elsewhere. Meanwhile, the physics of small black holes means they're not a threat - they would decay almost instantly, and would lack the gravity necessary to cause any real problems. And thus far, if we actually look at what we've got, everything we have tried has had p=0 of destroying civilization in reality (that is, in the universe we -actually- live in), meaning that p = 3 x 10^-6 was actually hopelessly pessimistic. Just because someone can assign arbitrary odds to something doesn't mean they're right. In fact, it usually means they're bullshitting.

Remember NASA making up odds of one in 10^8 for an individual bolt failing? That's the sort of made-up number we're looking at here.
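
Whatever the real per-bolt figure is, tiny per-part probabilities compound across a vehicle, which is one reason such numbers matter. A sketch, with a hypothetical bolt count:

```python
# P(at least one of n independent parts fails) = 1 - (1 - p)^n.
# p is the quoted 1e-8 per-bolt figure; n is a hypothetical part count.
p = 1e-8
n = 100_000

p_any_failure = 1 - (1 - p) ** n
print(p_any_failure)  # ~1e-3: tiny per-part odds still add up across a vehicle
```

So a made-up exponent on the per-part number silently moves the whole-vehicle risk by the same factor.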

And that's the sort of made up number I always see in these situations; people simply come up with stuff, then pretend to justify it with math when in reality it is just a guess. Statistics used as a lamppost; for support, not illumination.

And this is the biggest problem with all existential threats - the greatest existential threat to humanity is, in all probability, being smacked by a large meteorite, which is something we KNOW, for certain, happens every once in a while. And if we detected that early enough, we could actually prevent such an event from happening.

Everything else is pretty much entirely made up guesswork, based on faulty assumptions, or very possibly both.

Of the "humans kill us all" scenarios, the most likely is some horrible, highly transmissible genetically engineered disease deliberately spread by madmen intent on global destruction. Even here, there are tons of barriers. The first, and perhaps largest, is that crazy people have trouble doing this sort of thing; it requires a level of organization which tends to be beyond them. Secondly, it requires knowledge we lack - knowledge which, once we obtain it, may or may not make containing the outbreak of such a disease relatively trivial. You speak of offense being easier than defense, but in the end, a lot of technological systems are easier to break than they are to make, and understanding how to make something like this may well require us to understand how to break it in the process (and indeed, may well be derived from us figuring out how to break it). Thirdly, we already have measures which require no technology at all - quarantines - which could stop such a thing from wiping out too many people. Even if you released it in a bunch of places simultaneously, you'd still probably fail to wipe out humanity, just because there are too many people, too spread out, for it to succeed. And fourth, you'd probably need to test it, and that would put you at enormous risk of discovery. I have my doubts about this scenario, but it is by far the likeliest sort of technological disaster.
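
The quarantine point can be framed with the standard epidemic-threshold idea: an outbreak dies out once each case infects fewer than one other person on average. The R0 value below is a hypothetical stand-in for an engineered pathogen:

```python
# Epidemic threshold: an outbreak grows only while the effective
# reproduction number R = R0 * (1 - contact_reduction) stays above 1.
def effective_r(r0, contact_reduction):
    return r0 * (1 - contact_reduction)

r0 = 4.0  # hypothetical: each case infects 4 others with no intervention
for reduction in (0.0, 0.5, 0.8):
    r = effective_r(r0, reduction)
    status = "spreads" if r > 1 else "dies out"
    print(f"{reduction:.0%} contact reduction -> R = {r:.1f} ({status})")
# Any quarantine cutting contacts by more than 1 - 1/R0 (here 75%) halts spread.
```

This is why no-technology measures like quarantine are a genuine barrier: they attack the threshold directly rather than the pathogen.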

Of course, if we have sentient non-human intelligences, they'd likely be immune to such nonsense. And given our improvements in automation, controlling plague-swept areas is probably only going to get easier over time; why use soldiers who can potentially get infected when we can patrol with drones?

Comment author: TitaniumDragon 27 May 2014 10:46:49PM *  0 points [-]

Apparently I don't know how to use this system properly.

Comment author: TitaniumDragon 27 May 2014 10:46:04PM *  0 points [-]

I was directed here from FIMFiction.

Because of https://en.wikipedia.org/wiki/Survivorship_bias we really can't know what the odds are of doing something that ends up wiping out all life on the planet; nothing we have tried thus far has even come close, or even really had the capability of doing so. Even global thermonuclear war, terrible as it would be, wouldn't end all life on Earth, and indeed probably wouldn't even manage to end human civilization (though it would be decidedly unpleasant and hundreds of millions of people would die).

Some people thought that the nuclear bomb would ignite the atmosphere... but a lot of people didn't, either, and that three in a million chance... I don't even know how they got at it, but it sounds like a typical wild guess to me. How would you even arrive at that figure? Indeed, there is good reason to believe that the atmosphere may well have experienced such events before, in the form of impact events; this is why we knew, for instance, that the LHC was safe - we had experienced considerably more energetic events previously. Some people claimed it might destroy the universe, but the odds were actually 0 - it simply lacked the ability to do so, because if it was going to cause a vacuum collapse the universe would have already been destroyed by such an event elsewhere. Meanwhile, the physics of small black holes means that they're not a threat - they would decay almost instantly, and would lack the gravity necessary to cause any real problems. And thus far, if we actually look at what we've got, the reality is that everything we have tried has had p=0 of destroying civilization in reality (that is the universe we -actually- live in), meaning that that p = 3 x 10^-6 was actually hopelessly pessimistic. Just because someone can assign arbitrary odds to something doesn't mean that they're right. In fact, it usually means that they're bullshitting.

Remember NASA making up its odds of an individual bolt failing at being one in a 10^8? That's the sort of made up number we're looking at here.

And that's the sort of made up number I always see in these situations; people simply come up with stuff, then pretend to justify it with math when in reality it is just a guess. Statistics used as a lamppost; for support, not illumination.

And this is the biggest problem with all existential threats - the greatest existential threat to humanity is, in all probability, being smacked by a large meteorite, which is something we KNOW, for certain, happens every once in a while. And if we detected that early enough, we could actually prevent such an event from happening.

Everything else is pretty much entirely made up guesswork, based on faulty assumptions, or very possibly both.

Of the "humans kill us all" scenarios, the most likely is some horrible highly transmissible genetically engineered disease which was deliberately spread by madmen intent on global destruction. Here, there are tons of barriers; the first, and perhaps largest barrier is the fact that crazy people have trouble doing this sort of thing; it requires a level of organization which tends to be beyond them. Secondly, it requires knowledge we lack, and which indeed, once we obtain it, may or may not make containing the outbreak of such a disease relatively trivial - you speak of offense being easier than defense, but in the end, a lot of technological systems are easier to break than they are to make, and understanding how to make something like this may well require us to understand how to break it in the process (and indeed, may well be derived from us figuring out how to break it). Thirdly, we actually already have measures which require no technology at all - quarantines - which could stop such a thing from wiping out too many people. Even if you did it in a bunch of places simultaneously, you'd still probably fail to wipe out humanity with it just because there are too many people, too spread out, to actually succeed. And fourth, you'd probably need to test it, and that would put you at enormous risk of discovery. I have my doubts about this scenario, but it is by far the likelist sort of technological disaster.

Of course, if we have sentient non-human intelligences, they'd likely be immune to such nonsense. And given our improvements in automation controlling plague-swept areas is probably going to only get easier over time; why use soldiers who can potentially get infected when we can patrol with drones?

Comment author: TitaniumDragon 27 May 2014 10:46:25PM *  1 point [-]

Everything else is way further down the totem pole.

People talk about the grey goo scenario, but I actually think that is quite silly because there is already grey goo all over the planet in the form of life. There are absolutely enormous amounts of bacteria and viruses and fungi and everything else all around us, and given the enormous advantage which would be conferred by being a grey goo from an evolutionary standpoint, we would expect the entire planet to have already been covered in the stuff - probably repeatedly. The fact that we see so much diversity - the fact that nothing CAN do this, despite enormous evolutionary incentive TO do this - suggests that grey goo scenarios are either impossible or incredibly unlikely. And that's ignoring the thermodynamic issues which would almost certainly prevent such a scenario from occurring as well, given the necessity of reshaping whatever material into the self-replicating material, which would surely take more energy than is present in the material to begin with.

Physics experiments gone wrong have similar problems - we've seen supernovas. The energy released by a supernova is just vastly beyond what any sort of planetary civilization is likely capable of doing. And seeing as supernovas don't destroy everything, it is vastly unlikely that whatever WE do will do the same. There are enormously energetic events in the universe, and the universe itself is reasonably stable - it seems unlikley that our feeble, mere planetary energy levels are going to do any better in the "destroy everything" department. And even before that, there was the Big Bang, and the universe came to exist out of that whole mess. We have the Sun, and meteoritic impact events, both of which are very powerful indeed. And yet, we don't see exotic, earth-shattering physics coming into play there in unexpected ways. Extremely high energy densities are not likely to propagate - they're likely to dissipate. And we see this in the universe, and in the laws of thermodynamics.

It is very easy to IMAGINE a superweapon that annihilates everything. But actually building one? Having one have realistic physics? That's another matter entirely. Indeed, we have very strong evidence against it: surely, intelligent life has arisen elsewhere in the universe, and we would see galaxies being annihilated by high-end weaponry. We don't see this happening. Thus we can assume with a pretty high level of confidence that such weapons do not exist or cannot be created without an implausible amount of work.

The difficult physics of interstellar travel is not to be denied, either - the best we can do with present physics is nuclear pulse propulsion, which reaches perhaps 10% of c and has enormous logistical issues. Anything FTL requires exotic physics which we don't have any idea how to create, and which may well describe situations that are not physically realizable - that is to say, the numbers may work, but there may be no way to get there, just as the math doesn't forbid speeds faster than c, yet you can't ever even REACH c, so the fact that there is a "safe space" according to the math on the other side is meaningless. Without FTL, interstellar travel is far too slow for such disasters to really propagate themselves across the galaxy - any sort of plague would die out on the planet it was created on, and even WITH FTL, it is still rather unlikely that you could easily spread something like that. Only if cheap FTL travel existed would spreading the plague be all that viable... but with cheap FTL travel, everyone else can flee it that much more easily.
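To put the no-FTL case in perspective, here's a back-of-the-envelope calculation. The star distances are real figures; the 10% of c is the nuclear pulse ceiling mentioned above:

```python
# Rough one-way travel times at 10% of c, the approximate ceiling for
# nuclear pulse propulsion. Distances in light-years; at 0.1c relativistic
# time dilation is negligible, so travel time ~= distance / speed.
SPEED_FRACTION_OF_C = 0.10

destinations_ly = {
    "Proxima Centauri": 4.2,        # nearest star
    "Tau Ceti": 11.9,               # nearby sun-like star
    "across the Milky Way": 100_000,
}

for name, ly in destinations_ly.items():
    years = ly / SPEED_FRACTION_OF_C
    print(f"{name}: ~{years:,.0f} years one way")
```

Even the nearest star is a four-decade trip one way; crossing the galaxy takes a million years, which is why a disaster confined to sub-light travel has such a hard time propagating.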

My conclusion from all of this is that these sorts of estimates are less "estimates" and more "wild guesses which we pretend have some meaning, dressed up in a lot of fancy math to convince ourselves and others that we have some idea what we're talking about". Estimates like one in three million, or one in ten, are wild overestimates - and indeed, aren't based on any logic more sound than that of the guy on the Daily Show who said that it would either happen or it wouldn't: a 50% chance.

We have extremely strong evidence against galactic and universal annihilation, and there are extremely good reasons to believe that even planetary level annihilation scenarios are unlikely due to the sheer amount of energy involved. You're looking at biocides and large rocks being diverted from their orbits to hit planets, neither of which are really trivial things to do.

It is basically a case of Sci-Fi Writers Have No Sense of Scale (http://tvtropes.org/pmwiki/pmwiki.php/Main/ScifiWritersHaveNoSenseOfScale), except applied in a much more pessimistic manner.

The only really GOOD argument we have for lifetime-limited civilizations is the Fermi Paradox (https://en.wikipedia.org/wiki/Fermi_paradox) - that is to say, where are all the bloody aliens? Unfortunately, the Fermi Paradox is a somewhat weak argument, primarily because we have absolutely no idea whatsoever which side of the Great Filter we are on. That being said, if practical FTL travel exists, I would expect that to pretty much ensure that any civilization which invented it would likely simply never die, because of how easy it would be to spread out, making destroying them all vastly more difficult. The galaxy would probably end up colonized and recolonized regardless of how much people fought against it.

Without FTL travel, galactic colonization is possible, but it may be impractical from an economic standpoint; there is little benefit to the home planet of having additional planets colonized - information is the only thing you could expect to really trade over interstellar distances, and even that is questionable given that locals will likely try to develop technology locally and beat you to market, so unless habitable systems are very close together, duplication of effort seems extremely likely. Entertainment would thus be the largest benefit - games, novels, movies and suchlike. This MIGHT mean that colonization is unlikely, which would be another explanation... but even there, that assumes that they wouldn't want to explore for the sake of doing so.

Of course, it is also possible we're already on the other side of the Great Filter, and the reason we don't see any other intelligent civilizations colonizing our galaxy is because there aren't any, or the ones which have existed destroyed themselves earlier in their history or were incapable of progressing to the level we reached due to lack of intelligence, lack of resources, eternal, unending warfare which prevented progress, or something else.

This is why pushing for a multiplanetary civilization is, I think, a good thing; if we hit the point where we had 4-5 extrasolar colonies, I think it would be pretty solid evidence that we are beyond the Great Filter. Given the dearth of evidence for interstellar disasters created by intelligent civilizations, I think it is likely that our main window for destroying ourselves lasts only until the point where we expand.

But I digress.

It isn't impossible that we will destroy ourselves (after all, the Fermi Paradox does offer some weak evidence for it), but I will say that I find any sort of claims of numbers for the likelihood of doing so incredibly suspect, as they are very likely to be made up. And given that we have no evidence of civilizations being capable of generating galaxy-wide disasters, it seems likely that whatever disasters exist are planetary scale at best. And our lack of any sort of plausible scenarios even for that hurts even that argument. The only real evidence we have against our civilization existing indefinitely is the Fermi Paradox, but it has its own flaws. We may destroy ourselves. But until we find other civilizations, you are fooling yourself if you think you aren't just making up numbers. Anything which destroys us outside of an impact event is likely something we cannot predict.

Comment author: TitaniumDragon 27 May 2014 10:46:04PM *  0 points [-]

I was directed here from FIMFiction.

Because of https://en.wikipedia.org/wiki/Survivorship_bias we really can't know what the odds are of doing something that ends up wiping out all life on the planet; nothing we have tried thus far has even come close, or even really had the capability of doing so. Even global thermonuclear war, terrible as it would be, wouldn't end all life on Earth, and indeed probably wouldn't even manage to end human civilization (though it would be decidedly unpleasant and hundreds of millions of people would die).

Some people thought that the nuclear bomb would ignite the atmosphere... but a lot of people didn't, either, and that three in a million chance... I don't even know how they got at it, but it sounds like a typical wild guess to me. How would you even arrive at that figure? Indeed, there is good reason to believe that the atmosphere may well have experienced such events before, in the form of impact events; this is why we knew, for instance, that the LHC was safe - we had experienced considerably more energetic events previously. Some people claimed it might destroy the universe, but the odds were actually 0 - it simply lacked the ability to do so, because if it was going to cause a vacuum collapse the universe would have already been destroyed by such an event elsewhere. Meanwhile, the physics of small black holes means that they're not a threat - they would decay almost instantly, and would lack the gravity necessary to cause any real problems. And thus far, if we actually look at what we've got, the reality is that everything we have tried has had p=0 of destroying civilization in reality (that is the universe we -actually- live in), meaning that that p = 3 x 10^-6 was actually hopelessly pessimistic. Just because someone can assign arbitrary odds to something doesn't mean that they're right. In fact, it usually means that they're bullshitting.
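The cosmic-ray point above can be made slightly more quantitative. A standard statistical tool, the "rule of three", says that if N comparable trials have occurred with zero catastrophes, a roughly 95%-confidence upper bound on the per-trial risk is 3/N. A sketch (the trial count below is an illustrative order-of-magnitude figure I'm assuming, not a measured one):

```python
# Rule of three: after n trials with zero observed events, an approximate
# 95% confidence upper bound on the per-trial event probability is 3/n.
def risk_upper_bound_95(n_trials: int) -> float:
    if n_trials <= 0:
        raise ValueError("need at least one trial")
    return 3.0 / n_trials

# Illustrative only: suppose nature has run on the order of 1e22 cosmic-ray
# collisions at LHC-scale energies against Earth, with no planetary disaster.
bound = risk_upper_bound_95(10**22)
print(f"per-collision risk bounded at ~{bound:.0e}")
```

This is the shape of the reasoning behind "the LHC was safe because nature already ran the experiment" - the bound shrinks as the count of uneventful natural trials grows.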

Remember NASA making up odds of one in 10^8 for an individual bolt failing? That's the sort of made-up number we're looking at here.

And that's the sort of made up number I always see in these situations; people simply come up with stuff, then pretend to justify it with math when in reality it is just a guess. Statistics used as a lamppost; for support, not illumination.

And this is the biggest problem with all existential threats - the greatest existential threat to humanity is, in all probability, being smacked by a large meteorite, which is something we KNOW, for certain, happens every once in a while. And if we detected that early enough, we could actually prevent such an event from happening.

Everything else is pretty much entirely made up guesswork, based on faulty assumptions, or very possibly both.

Of the "humans kill us all" scenarios, the most likely is some horrible highly transmissible genetically engineered disease which was deliberately spread by madmen intent on global destruction. Here, there are tons of barriers; the first, and perhaps largest, is the fact that crazy people have trouble doing this sort of thing; it requires a level of organization which tends to be beyond them. Secondly, it requires knowledge we lack, and which indeed, once we obtain it, may or may not make containing the outbreak of such a disease relatively trivial - you speak of offense being easier than defense, but in the end, a lot of technological systems are easier to break than they are to make, and understanding how to make something like this may well require us to understand how to break it in the process (and indeed, may well be derived from us figuring out how to break it). Thirdly, we actually already have measures which require no technology at all - quarantines - which could stop such a thing from wiping out too many people. Even if you did it in a bunch of places simultaneously, you'd still probably fail to wipe out humanity with it just because there are too many people, too spread out, to actually succeed. And fourth, you'd probably need to test it, and that would put you at enormous risk of discovery. I have my doubts about this scenario, but it is by far the likeliest sort of technological disaster.

Of course, if we have sentient non-human intelligences, they'd likely be immune to such nonsense. And given our improvements in automation controlling plague-swept areas is probably going to only get easier over time; why use soldiers who can potentially get infected when we can patrol with drones?

Comment author: TitaniumDragon 10 September 2013 12:52:10AM -2 points [-]

While we are, in the end, meat machines, we are adaptive meat machines, and one of the major advantages of intelligence is the ability to adapt to your environment - which is to say, doing more than executing preexisting adaptations but being able to generate new ones on the fly.

So while adaptation-execution is important, the very fact that we are capable of resisting adaptation-execution means that we are more than adaptation-executors. Indeed, most higher animals are capable of learning, and many are capable of at least basic problem solving.

There is pretty significant selective pressure towards being a fitness maximizer and not a mere adaptation-executor, because something which actively maximizes its fitness will by definition have higher fitness than one which does not.

Comment author: Robin_Hanson2 18 May 2007 10:20:24PM 14 points [-]

Also, whoever saves a person to live another fifty years, it is as if they had saved fifty people to live one more year. Whoever saves someone who very much enjoys life, it is as if they saved many people who are not sure they really want to live. And whoever creates a life that would not have otherwise existed, it is as if they saved someone who had an entire lifetime yet to live.

Comment author: TitaniumDragon 14 July 2013 05:10:17AM 0 points [-]

I will note that this is one of the fundamental failings of utilitarianism, the "mere addition" paradox. Basically, take a billion people who are miserable, and one million people who are very happy. If you "add up" the happiness of the billion people, they are "happier" on the whole than the million people; therefore, the billion are the better use of natural resources.

The problem is that it always assumes some incorrect things:

1) It assumes all people are equal
2) It assumes that happiness is transitive
3) It assumes that you can actually quantify happiness in a meaningful way in this manner
4) It assumes the additive property for happiness - that you can add up some number of miserable people to get one happy person.

None of these assumptions are necessarily true.

Of course, all moral philosophies are going to fail at some level.

Note that, for instance, in this case there is an obvious difference: adding 50 years to one life is actually significantly better than extending 50 lives by 1 year each, as the investment to improve one person for 50 years is considerably less, and one person with 50 years can do considerably larger, longer, and grander projects.

Comment author: TitaniumDragon 28 May 2013 12:35:25AM 0 points [-]

I think you're wrong about an important point here, actually, which is that not all things are as exciting as other things. Not all things are equally exciting.

Riding a dragon is actually way cooler than hang gliding for any number of reasons. Riding animals is cool in and of itself, but riding a dragon is actually flying, rather than hang gliding, which is "falling with style". You get the benefits of hang-gliding - you can see the landscape, for instance - but you have something which natively can fly beneath you. You need to worry less about crashing on a dragon than you do on a hang glider. You can ascend and descend at will. You can take off from a lot more locations - hang gliding usually requires you to go somewhere inconvenient to get to, and if you want to do it again, then you have to get your glider all the way back up to where you took off from. And of course if dragons are sentient, sapient beings, that adds a whole additional level of coolness.

Magic not readily replicable by science - the ability to personally fly, shapeshift, clairvoyance (though we have replicated that to some extent with cameras and drones, they are much less convenient), teleportation, and the like are very cool. The ability to throw fireballs or lightning bolts is much less cool, because we CAN replicate those abilities with science (or at least reasonable approximations thereof).

Really though, is any magic cooler than, say, computers?

Understanding protein folding is cooler than special relativity, because there is a lot more you can do with protein folding than special relativity. Special relativity really only comes into play when you're dealing with outer space, which is very expensive and outside of the realm of day-to-day life; GPS is pretty much the only thing which really cares about it as far as normal life goes. Conversely, protein folding allows for all sorts of biological shenanigans, is vital to engineering lifeforms, and allows for all sorts of novel medications, not to mention potential for creating new materials en masse.

It is true that magic is often used as an escapist fantasy. And it is true that it is a logical flaw in such stories (to some extent; it depends on how magic works, after all. And it might also give a lazy person motivation).

Comment author: gwern 26 April 2013 10:34:01PM *  5 points [-]

Dunning-Kruger

You obviously have not actually read the Dunning-Kruger paper and understood what it showed.

and experience with similar religious movements suggests otherwise.

Name three. Like V_V, I suspect that for all that you glibly allude to 'cults' you have no personal experience and you have not acquainted yourself with even a surface summary of the literature, much like you have not bothered to so much as read a cryonics FAQ or book before thinking you have refuted it.

It takes someone who really thinks about most things very little time to come up with very obvious objections to most religious doctrine

And it takes even less time to notice that there are long thorough answers to the obvious objections. Your point here is true, but says far more about you than religion or cryonics; after all, many true things like heliocentrism or evolution have superficial easily thought-of objections which have been addressed in depth. Sometimes they work, sometimes they don't; the argument from evil is probably the single most obvious argument against Western religions, there are countless replies from theists of various levels of sophistication, and while I don't think any of them actually work, I also don't think someone going 'My mother died! God doesn't exist!' is contributing anything whatsoever. What, you think the theists somehow failed to notice that bad things happen? Of course they did notice, so if you want to argue against the existence of God, read up on their response.

Which is more likely - that people are deluding themselves over the possibility of eternal life and don't actually have any real answers to the obvious questions, but conveniently ignore them because they see the upside as being so great, or that this has totally been answered, despite the fact that you didn't even articulate an actual answer to it in your response, or even link to it?

If you had spent less time being arrogant, it might have occurred to you that I see this sort of flip reaction all the time in which people learn of cryonics and in five seconds think they've come up with the perfect objection and refuse to spend any time at all to disconfirm their objection. You are acting exactly like the person who said, "but it's not profitable to revive crypatients! QED you're all idiots and suckers", when literally the first paragraph of the Wikipedia article on ALCOR implies how they attempt to resolve this issue; here's a link to the discussion: http://lesswrong.com/lw/gh5/cryo_and_social_obligations/8d43

Notice how you are acting exactly like cheapviagra. You've come up with an obvious issue for cryonics, and rather than do any - gasp! - research; you commented on it. OK, fine. I then told you that there were replies to the issue, and that you should've known this because the issue is so obvious, and rather than learn a lesson about useful contributions, you are instead self-righteously criticizing me for not willing to drop everything and dig up every answer to every idle passing thought you have!

By the way, a benchmark I've found useful in discussing factual matters or matters with a long pre-existing literature is number of citations and hyperlinks per comment. You're still batting a zero.

I'm pretty sure that, historically speaking, the former is far more likely than the latter.

I'm impressed you've failed to notice that LW is maybe a little different from other sites and we have higher standards, and what happens 'historically' isn't terribly relevant.

The fact that you did not have a counter for my argument and instead linked to a page which was meant to be a "take that" directed at me is evidence against you having an actual answer to my query, which is always a bad sign. This is not to say that it doesn't have an answer, but a quick, simple answer (or link) would be no more difficult to find than the litany article.

Apparently you missed the point. The point was: stop being arrogant. Think for a freaking second about how obvious an argument might be and at least what the reply might be, if you cannot be arsed to look up actual sources. Do us the courtesy of not just thinking your way to your bottom line of 'cryonics sucks' but maybe a step beyond that too.

Indeed, after looking at the Alcor site, and reading around, all I really find are arguments against it. The best argument for it that I've seen is that resurrecting 20th century people might be profitable from an entertainment/educational standpoint

Really? That's the best argument? What did you read, exactly? Where do they say 'there's no reason to revive people except for entertainment'? Or are you just picking out the weakest possible argument because that's what you want to talk about?

As ciphergoth pointed out, there isn't really a good answer here.

What? No! That's not what ciphergoth meant at all! Here is what he said:

And it's not necessarily that the replies to this problem are good, but that they are what you need to reply to.

He did not say the answers were not good. He said, that even if the answers were not good, they are still what you need to deal with. You need to work on your reading comprehension if that's what you got out of his comment. (Shades of Dunning-Kruger, since you brought it up...) Or is this another aspect of you refusing to do any research, bring up the weakest lamest arguments you can find as strawmen, and grossly misinterpret what people have said?

I was reading one article on Alcor which suggested that, even for a cryonics optimist, the odds of it actually paying off were 15% if he only used his most optimistic numbers - and I think his numbers about the technology are optimistic indeed. That's bad news,

No, it's not bad news. It's just news. Expected value is about payoff, cost, and probability. 15% means nothing more and nothing less than 15%; without additional details, it does not mean that something is a good idea and it does not mean something is a bad idea either.
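The expected-value point is just the standard formula: compare probability × payoff against cost. A toy sketch with entirely made-up dollar figures (none of these numbers appear in the thread), showing that the same 15% can cut either way depending on the other inputs:

```python
# Expected net value of a gamble: p * payoff - cost.
# A bare probability like 15% says nothing about whether the bet is good;
# that depends on how the payoff is valued relative to the cost.
def expected_net_value(p_success: float, payoff_value: float, cost: float) -> float:
    return p_success * payoff_value - cost

# Same 15% chance, two hypothetical valuations of the payoff:
print(expected_net_value(0.15, 2_000_000, 100_000))  # payoff valued highly -> positive EV
print(expected_net_value(0.15, 500_000, 100_000))    # payoff valued less   -> negative EV
```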

Comment author: TitaniumDragon 27 April 2013 02:23:14AM 4 points [-]

By the way, a benchmark I've found useful in discussing factual matters or matters with a long pre-existing literature is number of citations and hyperlinks per comment. You're still batting a zero.

So that means your comment is worthless, and thus can be safely ignored, given your only "citations" do not support yourself in any way and is merely meant to insult me?

In any case, citations are mostly unimportant. I use Google and find various articles to support my stances; you can do the same to support yours, but I don't go REF Fahy et al., "Physical and biological aspects of renal vitrification", Organogenesis. 2009 Jul-Sep; 5(3): 167–175.

Most of the time, you aren't going to bother checking my sources anyway, and moreover, you're asking for negative evidence, which is always a problem. You're asking for evidence that God does not exist, and rejecting everything but "Hey look, God is sitting here, but he's not".

You're acting like someone who was just told that they don't have a soul and therefore won't go to heaven when they die, because heaven doesn't exist.

You can take ten seconds to see a long list of objections by googling "Cryonics is a scam". You can go to Alcor and read a paper where a true believer suggests that the odds of revival are, at best, 15%, and that's assuming magical nanomachines have a 99% chance of existing. You can read the opinions of various experts who point out the problems with ice crystal formation, the toxicity of vitrification chemicals (which would have to be purged prior to revival), the issues of whether microdamage to structures would cause you to die anyway, the issues of whether you can actually revive them, and who point out that, once you do warm them up, you've got a dead body, and all you have to do from there is resurrect the dead. We do know that even short times without oxygen cause irreparable brain damage, and even at cold temperatures, that process does not stop completely - once they're in LN2, sure, maybe, assuming the process doesn't destroy them. Or, you know, that the process of putting in the chemicals doesn't cause damage.

The truth is that none of the objections will sway you because you're a believer.

IF it is possible to do this sort of thing, there is a very, very good chance that it will require a very specific freezing process. A process which does not yet exist.

I'm impressed you've failed to notice that LW is maybe a little different from other sites and we have higher standards, and what happens 'historically' isn't terribly relevant.

The problem is that it isn't, and a cursory search of the internet will tell you that. :\

I was a bit excited to find a site devoted to rationality, and was rather disappointed to learn that no, it wasn't.

I wrote a little hymn about it a while ago. It starts with "Our AI, who art in the future", and you can imagine that it goes downhill from there.

In fact, a cursory search of the net showed at least one topic that you guys preemptively banned from discussion because some future AI might find it and then torture you for it. If that isn't a religious taboo, I don't know what is.

The singularity is not going to happen. Nanomachines the way they are popularly imagined will never exist. Cryonics, today, is selling hope and smoke, and is a bad investment. You've got people discussing "friendly AI" and similar nonsense, without really understanding that they're really talking about magic, and that all this philosophizing about it is pretty silly.

I'm good with doing silly things, but people here take them seriously.

Just because you call yourself a rationalist doesn't make you a rationalist. Being rational is hard for most people to do. But perhaps the most important aspect of being a rationalist is understanding that just because you want something to be true, doesn't make it true. Understanding that deep in your bones.

Most people will be deeply insulted if you imply that they are irrational. And yet people on the whole do not behave rationally, given the goals they claim to possess.

I understand you are deeply emotionally invested in this. I understand that arguing with you is pointless. But I actually enjoy arguing, so it's okay. But how is it for you? If you've invested in cryonics, is your brain more or less likely to believe that it is true?

Historical trends are always important, especially when you see obvious similarities. There are obvious and worrisome similarities between basic tenets (resurrection of the dead, some sort of greatly advanced being who will watch over us (the post-singularity AI or AIs)) and the tenets of religions. You can't claim "we're different" without good evidence, and as they say, extraordinary claims require extraordinary evidence.

And all the evidence today points towards cryonics being a very expensive form of burial. There is no extraordinary evidence that it will allow for resurrection in the future. Thus, it is a waste of money and resources you could spend to make today more awesome.

Apparently you missed the point. The point was: stop being arrogant. Think for a freaking second about how obvious an argument might be and at least what the reply might be, if you cannot be arsed to look up actual sources. Do us the courtesy of not just thinking your way to your bottom line of 'cryonics sucks' but maybe a step beyond that too.

Maybe you should take your own advice?

I am quite aware that this is upsetting to you. I just told you you're going to die and be dead forever. It is an unsurprising reaction; a lot of people react with fear to that idea.

But really, there is no evidence cryonics is useful in any way. The argument is "Well, if you've rotted away, then you've got no chance at all!" Sure. But what if you could spend your money on present-day immortality research? The odds of that paying off are probably much higher than the odds of cryonics paying off. There is a path forward there. We don't know what causes aging, but we know that many organisms live longer than human beings do, and we may be able to take advantage of that. Technology such as artificial or grown organs may allow us to survive until brain failure. Drugs to treat brain disease may allow us to put off degradation of our brains indefinitely. The list goes on.

That is far more promising than "freeze me and hope for the best". Heck, if you really wanted to live forever you'd do things to work towards that. If cryonics is truly so important, why aren't you doing relevant research? Or working towards other things that can help with life extension?

Isn't that far more rational?

Cryonics is a sucker's bet. Even if there was a possibility it worked, the odds of it working are far less than other routes to immortality.

Instead, cryonics is just a way to sell people hope. Just as Christians make peace with death by telling themselves they will be going to a better place, yet avoid dying as much as anyone else does, so too with cryonics. The rational thing to do, if it is important to avoid dying, is to work towards avoiding it or mitigating it as much as possible. Are you? If the answer is no, is it really so important to you? Or is paying that money for cryonics just a personal way to make peace with death?

Comment author: gwern 26 April 2013 10:34:01PM *  5 points [-]

Dunning-Kruger

You obviously have not actually read the Dunning-Krueger paper and understood what it showed.

and experience with similar religious movements suggests otherwise.

Name three. Like V_V, I suspect that for all that you glibly allude to 'cults' you have no personal experience and you have not acquainted yourself with even a surface summary of the literature, much like you have not bothered to so much as read a cryonics FAQ or book before thinking you have refuted it.

It takes someone who really thinks about most things very little time to come up with very obvious objections to most religious doctrine

And it takes even less time to notice that there are long thorough answers to the obvious objections. Your point here is true, but says far more about you than religion or cryonics; after all, many true things like heliocentrism or evolution have superficial easily thought-of objections which have been addressed in depth. Sometimes they work, sometimes they don't; the argument from evil is probably the single most obvious argument against Western religions, there are countless replies from theists of various levels of sophistication, and while I don't think any of them actually work, I also don't think someone going 'My mother died! God doesn't exist!' is contributing anything whatsoever. What, you think the theists somehow failed to notice that bad things happen? Of course they did notice, so if you want to argue against the existence of God, read up on their response.

Which is more likely - that people are deluding themselves over the possibility of eternal life and don't actually have any real answers to the obvious questions, but conveniently ignore them because they see the upside as being so great, or that this has totally been answered, despite the fact that you didn't even articulate an actual answer to it in your response, or even link to it?

If you had spent less time being arrogant, it might have occurred to you that I see this sort of flip reaction all the time in which people learn of cryonics and in five seconds think they've come up with the perfect objection and refuse to spend any time at all to disconfirm their objection. You are acting exactly like the person who said, "but it's not profitable to revive crypatients! QED you're all idiots and suckers", when literally the first paragraph of the Wikipedia article on ALCOR implies how they attempt to resolve this issue; here's a link to the discussion: http://lesswrong.com/lw/gh5/cryo_and_social_obligations/8d43

Notice how you are acting exactly like cheapviagra. You've come up with an obvious issue for cryonics, and rather than do any - gasp! - research; you commented on it. OK, fine. I then told you that there were replies to the issue, and that you should've known this because the issue is so obvious, and rather than learn a lesson about useful contributions, you are instead self-righteously criticizing me for not willing to drop everything and dig up every answer to every idle passing thought you have!

By the way, a benchmark I've found useful in discussing factual matters or matters with a long pre-existing literature is number of citations and hyperlinks per comment. You're still batting a zero.

I'm pretty sure that, historically speaking, the former is far more likely than the latter.

I'm impressed you've failed to notice that LW is maybe a little different from other sites and we have higher standards, and what happens 'historically' isn't terribly relevant.

The fact that you did not have a counter for my argument and instead linked to a page which was meant to be a "take that" directed at me is evidence against you having an actual answer to my query, which is always a bad sign. This is not to say that it doesn't have an answer, but a quick, simple answer (or link) would be no more difficult to find than the litany article.

Apparently you missed the point. The point was: stop being arrogant. Think for a freaking second about how obvious an argument might be and at least what the reply might be, if you cannot be arsed to look up actual sources. Do us the courtesy of not just thinking your way to your bottom line of 'cryonics sucks' but maybe a step beyond that too.

Indeed, after looking at the Alcor site, and reading around, all I really find are arguments against it. The best argument for it that I've seen is that resurrecting 20th century people might be profitable from an entertainment/educational standpoint.

Really? That's the best argument? What did you read, exactly? Where do they say 'there's no reason to revive people except for entertainment'? Or are you just picking out the weakest possible argument because that's what you want to talk about?

As ciphergoth pointed out, there isn't really a good answer here.

What? No! That's not what ciphergoth meant at all! Here is what he said:

And it's not necessarily that the replies to this problem are good, but that they are what you need to reply to.

He did not say the answers were not good. He said, that even if the answers were not good, they are still what you need to deal with. You need to work on your reading comprehension if that's what you got out of his comment. (Shades of Dunning-Kruger, since you brought it up...) Or is this another aspect of you refusing to do any research, bring up the weakest lamest arguments you can find as strawmen, and grossly misinterpret what people have said?

I was reading one article on Alcor which suggested that, even for a cryonics optimist, the odds of it actually paying off were 15% using only his most optimistic numbers - and I think his numbers about the technology are optimistic indeed. That's bad news,

No, it's not bad news. It's just news. Expected value is about payoff, cost, and probability. 15% means nothing more and nothing less than 15%; without additional details, it does not mean that something is a good idea and it does not mean something is a bad idea either.
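The point about 15% can be made concrete with a couple of lines of arithmetic. The payoff and cost figures below are made up purely for illustration (they are not actual cryonics estimates); the point is that the same probability yields opposite conclusions depending on what you plug in:

```python
def expected_value(p_success, payoff, cost):
    """Probability-weighted payoff minus the up-front cost."""
    return p_success * payoff - cost

p = 0.15  # the optimistic estimate cited above

# Same 15% probability, very different conclusions:
modest = expected_value(p, payoff=100_000, cost=80_000)   # negative: a bad bet
huge = expected_value(p, payoff=10_000_000, cost=80_000)  # positive: a good bet

print(modest < 0, huge > 0)  # prints: True True
```

Whether 15% makes something a good idea depends entirely on the payoff and cost terms, which is exactly the comment's point: the probability alone tells you nothing.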

Comment author: TitaniumDragon 27 April 2013 02:20:02AM 3 points [-]

I understood Dunning-Kruger quite well. Dunning-Kruger suggests that, barring outside influence, people will believe themselves to be of above-average ability. Incompetent people greatly overestimate their capability and understanding, and the ability to judge talent in others is proportional to ability in the skill itself - in other words, incompetent people are not only incompetent, but also incapable of judging competence in other people.

Competent people, conversely, overestimate the competence of the incompetent; however, they do have the ability to judge incompetence, so when they are allowed to compare the work of others against their own, their estimate of their own ability moves closer to their true ranking. Exposing incompetent people to the work of others had no such effect, though training in the skill improved their ability to judge themselves, to judge others, and at the skill itself.

People, therefore, are unfit to judge their own competence; the only reliable way to get feedback is actual practice (i.e. some independent metric for your ability, such as the success or failure of actual work) or the judgment of other competent people. As you might imagine, this creates the problem where you have to ask yourself, "Who is actually competent in cryonics?" And the answer is "cryobiologists and people in related disciplines". And what is THEIR opinion of cryonics?

Quite poor, on the whole. While there are "cryonics specialists" there are no signs of actual competence there as there is no one who can actually revive frozen people, let alone revive frozen people and fix whatever problems they had prior to being frozen. Ergo, they can't really be viewed as useful judges on the whole because they have shown no signs of actual competence - there is no proof that anyone is competent at cryonics at all.

Dunning-Kruger definitely applies here, and applies in a major way. The closest things to experts are the people working in real scientific disciplines, such as cryobiology and related fields. These people have real expertise, and they are not exactly best friends with Alcor and similar organizations. In fact, most of them say that it is, at best, well-intended stupidity, and at worst a scam.

Name three. Like V_V, I suspect that for all that you glibly allude to 'cults' you have no personal experience and you have not acquainted yourself with even a surface summary of the literature, much like you have not bothered to so much as read a cryonics FAQ or book before thinking you have refuted it.

Similar religious movements? How many movements don't have some concept of life after death? It is very analogous.

I have indeed read papers on cryobiology and on cryonics, though I could not name them off-hand - indeed, I couldn't tell you the name of the paper I read on the subject just yesterday, or of the others I read earlier this week. I am, on the whole, not very impressed. There are definitely things we can freeze and thaw just fine - embryos and sperm, for instance. We can freeze lots of "lower organisms". We've played around with freezing fish and frogs and various creatures which have adapted to such things.

But freezing mammals? Even reducing mammalian body temperatures to the point where freezing begins is fatal, albeit not immediately; we have cooled rats, monkeys, and hamsters down to very low temperatures (below 0°C) and revived them, but they don't tend to do very well afterwards, dying on the scale of hours to days. Some organs, such as the heart and kidney, have been frozen and revived - which is cool, to be fair. Well, "frozen" is the wrong term, really - more "preserved at low temperatures". There was the rabbit kidney which they did vitrify, while the hearts I've seen have mostly been reduced to low temperatures without freezing - though you can apparently freeze and thaw hearts and they'll work, at least for a while (we figured that out more than half a century ago).

However, a lot of cryobiology is not about things applicable to cryonics - we're talking about taking tissue down to, like, -2°C, not immersing it in LN2. The vitrified rabbit kidney is interesting for that reason, but unfortunately the rabbit in question only lasted nine days - so while the kidney functioned for a while, it did eventually fail. And all the other rabbits they experimented on perished as well.

And it takes even less time to notice that there are long thorough answers to the obvious objections. Your point here is true, but says far more about you than religion or cryonics; after all, many true things like heliocentrism or evolution have superficial easily thought-of objections which have been addressed in depth. Sometimes they work, sometimes they don't; the argument from evil is probably the single most obvious argument against Western religions, there are countless replies from theists of various levels of sophistication, and while I don't think any of them actually work, I also don't think someone going 'My mother died! God doesn't exist!' is contributing anything whatsoever. What, you think the theists somehow failed to notice that bad things happen? Of course they did notice, so if you want to argue against the existence of God, read up on their response.

The length of an answer has very little to do with its value. Look at Alcor's many answers - there are plenty of long answers there. Long on hope, that is, and short on reality. In fact, being able to answer something succinctly is often a sign of actual thought. It is very easy to pontificate and pretend you have a point; it is much more difficult to write a one-paragraph answer that is complete. And in this case, the answer SHOULD be simple.

If it was so easy, again, why are you writing a long response?

If you had spent less time being arrogant, it might have occurred to you that I see this sort of flip reaction all the time, in which people learn of cryonics and in five seconds think they've come up with the perfect objection and refuse to spend any time at all trying to disconfirm their objection. You are acting exactly like the person who said, "but it's not profitable to revive cryopatients! QED you're all idiots and suckers", when literally the first paragraph of the Wikipedia article on ALCOR implies how they attempt to resolve this issue; here's a link to the discussion: http://lesswrong.com/lw/gh5/cryo_and_social_obligations/8d43

I enjoy how you are calling me arrogant, and yet you still are not answering my question.

At least other people have tried. "Dead people are valuable artifacts! People are excited about Jurassic Park; that would totally be a viable business venture, so why not dead people?" Now that is a quick, succinct argument. It makes a reasonable appeal - dead people could be valuable as an attraction in the future.

The problem with that is the idea that it would make you any money at all. The Thirteenth Amendment prohibits owning people, and that kind of puts a major crimp in the idea of a tourist attraction, and given the sheer expense involved, again, you need some sort of parallel technology to get rid of those costs in any case. Humans are also a lot less exciting than dinosaurs are. I'm not going to go to the zoo to see someone from the 17th century, and indeed the idea is morally repugnant. Sure, I might go to ren faires, but let's face it - ren faires aren't run by people from the 10th century, they're run by people from the 21st century for people from the 21st century.
