Comment author: Eniac 08 December 2014 12:02:01AM 3 points [-]

My own favorite hypothesis goes like this: Our universe is most likely to be the simplest one that contains me (us, observers, conscious beings, whatever your favorite rendition of the anthropic principle). It is not likely to be much larger than necessary for creating me. The reason it is as large as it is, then, is that that's what it takes. The answer, then, is that something like me exists only once. More would be a waste of universal size and/or complexity, and Occam forbids it.

Is this as crazy as it sounds?

Comment author: FrameBenignly 07 December 2014 08:56:46PM 1 point [-]

Also, seeing stuff like this really bugs me:

top of page 2: "Recent analyses of the Kepler statistics showed that about 20% of all Sun-like stars have Earth-sized planets orbiting within the habitable zone [Petigura, Howard and Marcy 2014]."

2nd paragraph of page 3: "Analyses of the Kepler results shows that 7-15% of the Sun-like stars have an Earth-sized planet within their habitable zone [Petigura et al., 2014]"

That's a pretty glaring error to be making. This isn't a top journal, but it isn't an obscure one either. http://eigenfactor.com/rankings.php?bsearch=International+Journal+of+Astrobiology&searchby=journal&orderby=eigenfactor

Comment author: Eniac 07 December 2014 11:47:51PM *  0 points [-]

I agree. However, considering that Kepler is not actually sensitive enough to detect Earth-sized planets in the habitable zones of Sun-like stars, both numbers are extrapolations, and the 7-15% and 20% figures are presumably well within each other's error bounds.

Comment author: FrameBenignly 07 December 2014 08:30:34PM 0 points [-]

We may not have good measures for estimating Fb or Lb let alone Lc, but the Kepler mission gives us a pretty good estimate of Rb. You should update your estimate of the closeness of a biotic planet depending on whether your Rb prior was higher or lower than the result.
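To see how such an update plays out, here is a toy calculation (all numbers are invented or rough placeholders: the local stellar density, a Kepler-style habitable-planet fraction standing in for Rb, and Fb left as the unknown) of how the expected distance to the nearest biotic planet scales with Fb:

```python
# Toy estimate of the distance to the nearest biotic planet.
# Assumed numbers: a rough local stellar density and a Kepler-style
# habitable-planet fraction; Fb (fraction that develop life) is unknown.
import math

STAR_DENSITY = 0.004   # Sun-like stars per cubic light-year (rough local value)
R_B = 0.15             # fraction with an Earth-sized planet in the habitable zone

def nearest_biotic_distance_ly(f_b):
    """Expected distance (light-years) to the nearest biotic planet,
    using the mean nearest-neighbor distance for density n:
    d ~ (3 / (4 * pi * n)) ** (1/3)."""
    density = STAR_DENSITY * R_B * f_b   # biotic planets per cubic light-year
    return (3.0 / (4.0 * math.pi * density)) ** (1.0 / 3.0)

for f_b in (1.0, 1e-3, 1e-6):
    print(f"Fb = {f_b:g}: nearest biotic planet ~ {nearest_biotic_distance_ly(f_b):.0f} ly")
```

The point of the sketch is that the answer is dominated by the one factor we cannot measure: each factor-of-1000 drop in Fb pushes the nearest biotic planet only ten times farther away, but over the enormous plausible range of Fb that still spans everything from "next door" to "nowhere in the observable universe."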

Comment author: Eniac 07 December 2014 11:43:03PM 0 points [-]

That is true. However, if "any value could be assigned to Fb", then any value can be made to come out of the Drake equation, except for an upper bound. Updating on Rb can shift around that upper bound, but it tells you nothing about the really small values that decide whether we are alone in the universe or not.
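A minimal sketch of this point, with entirely illustrative numbers: in a Drake-style product, the factors we can actually measure pin down only an upper bound, because the unmeasured fraction Fb can be anywhere in (0, 1].

```python
# Drake-style product with illustrative (made-up) values:
#   N = R_star * f_p * F_b * L_c
# R_star and f_p are the measurable part; F_b <= 1 gives an upper bound
# on N, but F_b itself can push N arbitrarily close to zero.
R_STAR = 10.0   # star formation rate per year (illustrative)
F_P    = 0.15   # fraction of stars with a habitable planet (Kepler-style)
L_C    = 1e4    # civilization lifetime in years (pure guess)

def n_civilizations(f_b):
    return R_STAR * F_P * f_b * L_C

upper_bound = n_civilizations(1.0)   # best case: every habitable planet is biotic
for f_b in (1.0, 1e-4, 1e-12):
    print(f"F_b = {f_b:g}: N = {n_civilizations(f_b):g} (upper bound {upper_bound:g})")
```

Updating on Rb (here, F_P) rescales the upper bound, but says nothing about whether Fb is 1 or 10^-12, which is exactly the range that decides whether we are alone.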

Comment author: Algernoq 22 September 2014 02:34:37AM 2 points [-]

The key obstacle here is startup money. If self-replication is economically sound, then a convincing business case could be made, leading to startup funding and a trillion-dollar business.

Again, the problem is a realistic economic assessment of development cost and predicted profit. In the current funding market, there is a lot of capital chasing very few good ideas. If there is money to be made here (a big if, given the low cost of manual labor), then this could be a good opportunity.

"Mining the moon" is probably too expensive to start up. Better to start with a self-replicating robot factory on Earth.

I might look at this financial analysis as a side project. Contact me if you want to get involved and have sufficient industry experience to know what you don't know.

Comment author: Eniac 07 December 2014 04:54:24AM 0 points [-]

One of the prototypical payoffs that could be had with self-replication that I have seen mentioned is solar farms in the desert that live off sand or rocks and produce arbitrarily large acreage of photovoltaics that can then be used as a replacement for oil. This requires full self-replication, including chemical raw material processing, which is not easy to demonstrate.

I am not sure a good business case could be made for the more limited form of self-replication where the "raw material" is machine parts that only need to be assembled. That would be much easier to demonstrate, so I think a business case for it would be extremely valuable.

Comment author: [deleted] 21 September 2014 08:20:22AM *  2 points [-]

Building a self-replicating lunar mining & factory complex is one thing. Building a self-replicating machine that can operate effectively in any situation it encounters while expanding into the cosmos is another story entirely. Without knowing its environment in advance, it will have to adapt to whatever circumstances it finds itself in to achieve its replication goal. That's the definition of an AGI.

Comment author: Eniac 07 December 2014 04:39:27AM 1 point [-]

Bacteria perform quite well at expanding into an environment, and they are not intelligent.

Comment author: lacker 21 September 2014 10:59:14PM 1 point [-]

Hmm, it is interesting that that exists, but it seems like it cannot have been very serious, because it dates from over 30 years ago and no follow-up activity happened.

Comment author: Eniac 07 December 2014 04:34:56AM 0 points [-]

I think this is because Freitas and Drexler and others who might have pursued clanking replicators became concerned with nanotechnology instead. It seems to me that clanking replicators are much easier, because we already have all the tools and components to build them (screwdrivers, electric motors, microchips, etc.). Nanotechnology, while incorporating the same ideas, is far less feasible and may be seen as a red herring that has cost us 30 years of progress in self-replicating machines. Clanking replicators are also much less dangerous, because it is much easier to pull the plug or throw in a wrench when something goes wrong.

Comment author: John_Maxwell_IV 21 September 2014 06:56:41AM 15 points [-]

Therefore, certainty we can do that would eliminate much existential risk.

It seems to me that you are making a map-territory confusion here. Existential risks are in the territory. Our estimates of existential risks are our map. If we were to build a self-replicating spacecraft, our estimates of existential risks would go down some. But the risks themselves would be unaffected.

Additionally, if we were to build a self-replicating spacecraft and become less worried about existential risks, that decreased worry might mean the risks would become greater because people would become less cautious. If the filter is early, that means we have no anthropic evidence regarding future existential risks... given an early filter, the sample size of civilizations that reach our ability level is small, and you can't make strong inferences from a small sample. So it's possible that people would become less cautious incorrectly.
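The small-sample point can be made concrete with a toy Bayesian calculation (uniform prior assumed for illustration): after observing a single civilization that reached our level, the posterior interval on any per-civilization probability remains enormous.

```python
# One "success" in one trial, uniform Beta(1,1) prior -> Beta(2,1) posterior.
# The Beta(2,1) CDF is x**2, so quantiles are just square roots.
import math

lo = math.sqrt(0.025)   # 2.5% posterior quantile
hi = math.sqrt(0.975)   # 97.5% posterior quantile
print(f"95% credible interval after a sample of one: ({lo:.2f}, {hi:.2f})")
```

The interval spans most of (0, 1), i.e. a sample of one constrains almost nothing.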

Comment author: Eniac 07 December 2014 04:23:16AM *  -1 points [-]

It seems to me that you are making a map-territory confusion here. Existential risks are in the territory.

If I understand the reasoning correctly, it is that we only know the map. We do not know the territory. The territory could be of many different kinds, as long as it is consistent with the map. Adding SRS to the map rules out some of the less safe territories, i.e. reduces our estimate of existential risk. It is a Bayesian-type argument.
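A toy numeric version of this argument, with entirely invented probabilities: demonstrating a self-replicating spacecraft (SRS) down-weights the territories in which SRS is infeasible, and with them some of the scarier risk estimates, even though the actual territory never changes.

```python
# Each "territory" gets a prior, a likelihood of an SRS demo succeeding
# in it, and an extinction probability. All numbers are invented.
territories = {
    # name: (prior, P(SRS demo succeeds | territory), P(extinction | territory))
    "late filter, SRS hard":  (0.4, 0.1, 0.9),
    "late filter, SRS easy":  (0.1, 0.9, 0.9),
    "early filter, SRS easy": (0.5, 0.9, 0.1),
}

def p_extinction(dist):
    """Risk estimate: extinction probability averaged over (unnormalized) weights."""
    return sum(w * p_ext for w, _, p_ext in dist.values()) / \
           sum(w for w, _, _ in dist.values())

prior_risk = p_extinction(territories)

# Bayes update on the observation "SRS demonstrated": weight *= likelihood.
posterior = {name: (w * p_srs, p_srs, p_ext)
             for name, (w, p_srs, p_ext) in territories.items()}
posterior_risk = p_extinction(posterior)

print(f"risk estimate before SRS demo: {prior_risk:.2f}")
print(f"risk estimate after  SRS demo: {posterior_risk:.2f}")
```

With these made-up numbers the risk estimate on the map drops from 0.50 to about 0.28; the risks in the territory are, of course, whatever they are.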

Comment author: advancedatheist 06 December 2014 02:53:41AM *  7 points [-]

So what happens if we find all these biologically feasible exoplanets that just don't have any life on them?

BTW, you might want to give Matthew Stewart's book Nature's God a read. He points to the unexpected fact that many of the Americans in revolutionary times who wrote down their thoughts on the matter believed in "space aliens," as Stewart calls them, on exoplanets throughout the universe, and that these colonial Americans considered this arbitrary belief "rational" because of the peculiar way early modern philosophy originated from the revival of Epicureanism around the beginning of the 17th Century.

Reference:

http://books.google.com/books?id=L69bAwAAQBAJ&pg=PT45&lpg=PT45&dq=matthew+stewart+space+aliens&source=bl&ots=ruXJKJ-oGO&sig=LiQm__PtCVmXuVVGmEAueb2sLtY&hl=en&sa=X&ei=KHCCVJnhBsvhoATsjICwCA&ved=0CCoQ6AEwAg#v=onepage&q=matthew%20stewart%20space%20aliens&f=false

Comment author: Eniac 06 December 2014 04:19:41AM 5 points [-]

This is indeed unexpected. It appears the belief in aliens has been waning instead of waxing as we find out more and more about the universe.

"So what happens if we find all these biologically feasible exoplanets that just don't have any life on them?"

We go forth and put some there, of course!

Comment author: Eniac 06 December 2014 04:11:39AM 2 points [-]

Estimates? Here are some quotes from the paper on those "estimates":

"Also Lc, the average longevity of a communicative civilization, cannot be inducted from its short history on Earth and could be anywhere between a few hundred years and billions of years."

"Bayesian analysis demonstrates that as long as Earth remains the only known planet with biotic life, any value could be assigned to Fb"

You tell me how valuable these estimates are, in view of their precision...
