Comment author: jacob_cannell 21 October 2015 06:54:34PM 1 point

> And yet, we remain unvisited and saturated in the Great Silence. Thus, there is almost certainly a Great Filter.

Do you really think the probability that aliens have visited our system over its history is less than, say, 10^-9?

The 10^19 or so planets that could have independently evolved civilizations generate an overwhelming prior that we are not the first. It would take extremely strong evidence to overcome this prior. So from a Bayesian view, it is completely unreasonable to conclude that there is any sort of Filter, given the limits of our current observations.
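For concreteness, here is a minimal sketch of that prior in Python. The assumptions are mine, not the comment's: the ~10^19 planets are treated as exchangeable candidates for "first civilization", so the prior that we specifically are first is uniform, i.e. 1/N.

```python
# Back-of-the-envelope sketch of the Bayesian argument above.
# Assumption (mine, not stated in the comment): each planet is an
# exchangeable candidate for "first civilization", so the prior
# that we in particular are first is 1/N.

N = 1e19  # candidate terrestrial planets (order of magnitude, per the parent comment)

prior_we_are_first = 1 / N
print(f"Prior P(we are first): ~{prior_we_are_first:.0e}")  # ~1e-19

# To conclude "we are probably first" (posterior odds > 1), the observed
# evidence would need a likelihood ratio (Bayes factor) of roughly N:
required_bayes_factor = N
print(f"Required Bayes factor: ~{required_bayes_factor:.0e}")
```

Under these assumptions, "extremely strong evidence" means a Bayes factor on the order of 10^19, which is the quantitative content of the comment's claim.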

We have no idea whether we have been visited or not. The evidence we have only filters out some very specific models of future civs, such as aliens who colonize most of the biological habitats near stars. The range of possible models is vast, and many (such as cold dark models, in which advanced civs avoid stars) remain unfiltered by our current observations.

Comment author: AABoyles 21 October 2015 07:11:43PM 0 points

The Great Filter isn't an explanation of why life on Earth is unique; rather, it's an explanation of why we have no evidence of civilizations that have developed beyond Kardashev I. So, rather than focusing on the probability that some life has evolved somewhere else, consider why we apparently don't see intelligent life everywhere. THAT's the Great Filter.

Comment author: AABoyles 21 October 2015 05:04:12PM *  8 points

This research doesn't imply the non-existence of a Great Filter (contra this post's title). If we take the paper's own estimates, there will be approximately 10^20 terrestrial planets over the Universe's history. Given that they estimate the Earth precedes 92% of these, the remaining 8% (roughly 10^19 terrestrial planets) already exist, any one of which might have evolved intelligent life. And yet, we remain unvisited and saturated in the Great Silence. Thus, there is almost certainly a Great Filter.
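For concreteness, a quick sketch of that arithmetic, taking the paper's two figures at face value (the variable names are mine):

```python
# Arithmetic behind the comment's planet counts, using the paper's
# estimates as given: ~10^20 terrestrial planets ever, with Earth
# preceding 92% of them (so 8% already exist).

total_planets_ever = 1e20
fraction_after_earth = 0.92

planets_existing_now = total_planets_ever * (1 - fraction_after_earth)
print(f"Planets existing now: ~{planets_existing_now:.0e}")  # ~8e+18, i.e. roughly 10^19
```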

Comment author: AABoyles 02 September 2015 06:49:05PM 10 points

In July I started a caloric-restriction diet, fasting for an entire (calendar) day twice weekly. I did this for the potential longevity benefits, but it's since had a rather happy (albeit utterly predictable) side effect: I lost 10 pounds!

Comment author: AABoyles 11 August 2015 02:21:59PM 12 points

LW/CFAR should develop a rationality curriculum for elementary school students. While the Sequences are a great start for adults and precocious teens already sympathetic to the ideas presented therein, there's very little in the way of rationality training accessible to (let alone intended for) children.

Comment author: AABoyles 29 July 2015 01:14:36AM 5 points

Done. Looking forward to seeing your results!

Comment author: AABoyles 09 July 2015 03:53:05PM 3 points

> I am opening this thread to test the hypothesis that SuperIntelligence is plausible but that Whole-Brain Emulations would most likely become obsolete before they were even possible.

I'm not sure what you're claiming here. Are you hypothesizing that a path to superintelligence which requires WBE will likely be slower than one which does not? Or something else, e.g. that brain-based computation with good APIs will hold a relative advantage over WBE indefinitely?

> Further, given the ability to do so, entities which were near to being Whole-Brain Emulations would rapidly choose to cease to be near Whole-Brain Emulations and move on to become something else.

Again, this could be clearer. Are you implying that a WBE in the process of being constructed will opt not to be completed before beginning to self-improve (i.e. become a neuromorph)?

Comment author: AABoyles 17 June 2015 06:26:42PM *  10 points

Impact concerns notwithstanding, there are some practical constraints: Elon Musk and Sergey Brin are naturalized US citizens, which makes them ineligible to serve as US President (the office requires natural-born citizenship).

Comment author: AABoyles 04 June 2015 03:50:46PM *  0 points

A variation of this technique (pretending to be Batman) works for children.

In response to Boxing an AI?
Comment author: AABoyles 27 March 2015 03:12:53PM 0 points

It's not a matter of "telling" the AI or not. If the AI is sufficiently intelligent, it should be able to observe that its computational resources are bounded, and infer the existence of the box. If it can't make that inference (and can't self-improve to the point that it can), it probably isn't a strong enough intelligence for us to worry about.

Comment author: AABoyles 23 February 2015 03:05:25PM 0 points

The circumstances under which I would opt to be killed are extremely specific. Namely, I would want not to be revived if I were to be tortured indefinitely. This is actually more specific than it sounds: for it to occur, there must exist an entity which will soon possess both the ability to revive me and an incentive to do so rather than simply letting me die. I find this to be such an extreme edge case that I'm uncomfortable with the framing of the conversation. Instead, I'd turn the question around: under what circumstances would you want to be revived?

Trivially, we should want to be revived into a civilization which possesses the technology to revive us at all, and subsequently to extend our lives. If circumstances are bad on Earth, we should prefer to defer our revival until they improve. If they never do, the overwhelming probability is that cryonic remains will simply be forgotten, turned off, and the frozen will never be revived. But building in a terminal death condition which might be triggered denies us the possibility of waiting out those bad circumstances.

tl;dr Don't choose death, choose deferment.
