This research doesn't imply the non-existence of a Great Filter (contra this post's title). If we take the Paper's own estimates, roughly 10^20 terrestrial planets will form over the Universe's history. Given the estimate that the Earth preceded 92% of these, approximately 10^19 terrestrial planets (the remaining 8% of 10^20) currently exist, any one of which might have evolved intelligent life. And yet we remain unvisited, immersed in the Great Silence. Thus, there is almost certainly a Great Filter.
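The arithmetic behind those orders of magnitude can be checked in a few lines. The 10^20 total and the 92% figure are the Paper's estimates as cited above; everything else follows from them:

```python
# Sanity-check of the planet-count arithmetic cited above.
total_planets = 1e20          # terrestrial planets over the Universe's history (Paper's estimate)
fraction_after_earth = 0.92   # the Paper estimates Earth preceded 92% of them

# Planets that have already formed by now: the remaining 8%.
existing_planets = total_planets * (1 - fraction_after_earth)
print(f"{existing_planets:.1e}")  # 8.0e+18, i.e. on the order of 10^19
```

So "approximately 10^19" is a round-up of about 8 × 10^18 currently existing terrestrial planets.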
In July I started a Caloric Restriction Diet, fasting for an entire (calendar) day twice weekly. I did this out of a desire for the potential longevity benefits, but since then it's had a rather happy (albeit utterly predictable) side-effect: I lost 10 pounds!
LW/CFAR should develop a rationality curriculum for Elementary School Students. While the Sequences are a great start for adults and precocious teens with existing sympathies to the ideas presented therein, there's very little in the way of rationality training accessible to (let alone intended for) children.
I am opening this thread to test the hypothesis that superintelligence is plausible, but that Whole-Brain Emulations would most likely become obsolete before they were even possible.
I'm not sure what you're claiming here. Are you hypothesizing that a path to superintelligence which requires WBE will likely be slower than one which does not? Or something else, like that brain-based computation with good APIs will hold a relative advantage over WBE indefinitely?
Further, given the ability to do so, entities close to being Whole-Brain Emulations would rapidly choose to abandon that form and move on to become something else.
Again, this could be clearer. Are you implying that a WBE in the process of being constructed will opt not to be completed before beginning to self-improve (i.e. become a neuromorph)?
Impact concerns notwithstanding, there are some practical constraints: Elon Musk and Sergey Brin are naturalized US Citizens, which makes them ineligible to serve as US President.
A variation of this technique (pretending to be Batman) works for children.
It's not a matter of "telling" the AI or not. If the AI is sufficiently intelligent, it should be able to observe that its computational resources are bounded, and infer the existence of the box. If it can't make that inference (and can't self-improve to the point that it can), it probably isn't a strong enough intelligence for us to worry about.
The circumstances under which I would opt to be killed are extremely specific. Namely, I would want not to be revived if I were to be tortured indefinitely. This is actually more specific than it sounds: for it to occur, there must exist an entity which will soon possess both the ability to revive me and an incentive to do so rather than simply letting me die. I find this to be such an extreme edge case that I'm uncomfortable with how the conversation has been framed. Instead, I'd turn the question around: under what circumstances would you want to be revived?
Trivially, we should want to be revived into a civilization which possesses the technology to revive us at all, and subsequently to extend our lives. If circumstances on Earth are bad, we should prefer to defer our revival until they improve. If they never do, the overwhelming probability is that cryonic remains will simply be forgotten, turned off, and the frozen never revived. But building in a terminal death condition which might be triggered denies us the possibility of waiting out those bad circumstances.
tl;dr Don't choose death, choose deferment.
Do you really think the probability that aliens have visited our system over its history is less than, say, 10^-9?
The 10^19 or so planets that could have independently evolved civilizations generate an enormous, overwhelming prior that we are not the first. It would take extremely strong evidence to overcome this prior. So from a Bayesian view, given the limits of our current observations, it is completely unreasonable to conclude that there is any sort of Filter.
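To make that prior concrete, here is a minimal sketch. The per-planet probability of a civilization arising is unknown, so `p_civ` below is a purely illustrative assumption, not a known value; the point is that even a tiny per-planet probability, multiplied over 10^19 planets, makes "we are the first" astronomically unlikely:

```python
import math

n_planets = 1e19   # candidate planets, per the estimate above
p_civ = 1e-10      # per-planet chance a civilization arises -- an illustrative assumption

expected_civs = n_planets * p_civ
# Poisson approximation: probability that no OTHER civilization ever arose.
p_no_others = math.exp(-expected_civs)

print(expected_civs)  # 1000000000.0 -- a billion expected civilizations
print(p_no_others)    # 0.0 -- underflows; the true value is e^(-10^9)
```

Changing `p_civ` scales the expected count linearly, but the qualitative conclusion (an overwhelming prior against being first) survives unless `p_civ` is fine-tuned all the way down to roughly 10^-19.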
We have no idea whether we have been visited or not. The evidence we have only rules out some very specific models of future civilizations, such as aliens who colonize most of the biological habitats near stars. The range of models is vast, and many (such as cold, dark models in which advanced civilizations avoid stars) remain unfiltered by our current observations.
The Great Filter isn't an explanation of why life on Earth is unique; rather, it's an explanation of why we have no evidence of civilizations that have developed beyond Kardashev I. So, rather than focusing on the probability that some life has evolved somewhere else, consider the reason that we apparently don't have intelligent life everywhere. THAT's the Great Filter.