Here is another reason why we need to work on superhuman AI: 

Thermodynamics of advanced civilizations

Link: http://www.aleph.se/andart/archives/2010/10/visions_of_the_future_in_milano.html

  1. Civilizations are physical objects, and nearly any ultimate goal implies a need for computation, storing bits and resources (the basic physical eschatology assumption).
  2. The universe has a bunch of issues:
    • The stelliferous era will only last a trillion years or so.
    • Matter and black holes are likely unstable, so after a certain time there will not be any structure around to build stuff from. Dark matter doesn't seem to be structurable either.
    • Accelerating expansion prevents us from reaching beyond a certain horizon about 15 gigalightyears away.
    • It will also split the superclusters into independent "island universes" that will become unreachable from each other within around 120 billion years.
    • It also causes horizon radiation ~10^−29 K hot, which makes infinite computation impossible.
  3. Civilizations have certain limits of resources, expansion, processing and waste heat:
    • We can still lay our hands on 5.96·10^51 kg of matter (with dark matter 2.98·10^52 kg) within the horizon, and ~2·10^45 kg (with DM ~10^46 kg) if we settle for a supercluster.
    • The lightspeed limitation is not enormously cumbersome, if we use self-replicating probes.
    • The finite energy cost of erasing bits is the toughest bound. It forces us to pay for observing the world, formatting new memory and correcting errors.
  4. Putting it all together we end up with the following scenario for maximal information processing:
    • The age of expansion: interstellar and intergalactic expansion with self-replicating probes. It looks like one can enforce "squatters' rights", so there is no strong reason to start exploiting upon arrival.
    • The age of preservation: await sufficiently low temperatures. A halving of temperature doubles the amount of computation you can do. You only need a logarithmically increasing number of backups for indefinite survival. Since fusion will release ~1% of the mass-energy of matter but black hole conversion ~50%, it might not be relevant to turn off the stars unless you feel particularly negentropic.
    • The age of harvest: Exploit available energy to produce the maximal amount of computation. The slower the exploitation, the more processing can be done. This is largely limited by structure decay: you need to be finished before your protons decay. Exactly how much computation you can do depends on how large a fraction of the universe you got, how much reversible computation you can do and the exact background temperature.
  5. This leads to some policy-relevant conclusions:
    • Cosmic waste is a serious issue: the value of the future is enormous in terms of human lives, so postponing colonization or increasing existential risk carries enormous disutilities. However, in order to plan like this you need to have very low discount rates.
    • There are plenty of coordination problems: burning cosmic commons, berserker probes, entropy pollution etc. The current era is the only chance of setting up game rules before dispersion and loss of causal contact.
    • This model suggests a Fermi paradox answer: the aliens are out there, waiting. They already own most of the universe and we better be nice to them. Alternatively, if there is a phase transition situation where we are among the first, we really need to think about stable coordination and bargaining strategies.
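The Landauer bound from point 3 can be made concrete with a short back-of-the-envelope sketch in Python. The 300 K and ~10^−29 K temperatures are today's room temperature and the horizon temperature quoted above; the function name is my own:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_cost(temperature_k):
    """Minimum energy (J) needed to erase one bit at a given
    temperature: the Landauer bound, k_B * T * ln 2."""
    return K_B * temperature_k * math.log(2)

# Cost per erasure at room temperature vs. the ~1e-29 K horizon
# radiation temperature mentioned above.
print(landauer_cost(300.0))   # ~2.9e-21 J/bit today
print(landauer_cost(1e-29))   # ~9.6e-53 J/bit in the far future

# The bound is linear in T, so halving the temperature halves the cost
# per bit -- which is why "a halving of temperature doubles the amount
# of computation you can do" from a fixed energy budget.
```

This is also why the age of preservation pays off: waiting for the universe to cool directly cheapens every irreversible operation.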
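Combining the mass budget from point 3 with the Landauer cost gives a rough upper bound on total irreversible computation during the age of harvest. This sketch uses only the figures quoted above (~1% fusion yield, ~50% black hole conversion, ~10^−29 K background); the constant and function names are my own, and the result is an order-of-magnitude estimate, not a derivation:

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
C = 2.998e8          # speed of light, m/s

# Figures from the summary above (assumptions, not settled physics):
MASS_REACHABLE_KG = 5.96e51     # ordinary matter within the horizon
FUSION_EFFICIENCY = 0.01        # ~1% of mass-energy released by fusion
BLACK_HOLE_EFFICIENCY = 0.50    # ~50% via black hole conversion
HORIZON_TEMPERATURE_K = 1e-29   # late-time horizon radiation temperature

def erasures_per_budget(mass_kg, efficiency, temperature_k):
    """Upper bound on irreversible bit erasures from a mass-energy
    budget, paying the Landauer cost k_B * T * ln 2 per bit."""
    energy = efficiency * mass_kg * C**2  # usable energy, E = eff * m * c^2
    return energy / (K_B * temperature_k * math.log(2))

fusion_bits = erasures_per_budget(
    MASS_REACHABLE_KG, FUSION_EFFICIENCY, HORIZON_TEMPERATURE_K)
bh_bits = erasures_per_budget(
    MASS_REACHABLE_KG, BLACK_HOLE_EFFICIENCY, HORIZON_TEMPERATURE_K)

# Black hole conversion buys a factor of ~50 more computation than
# fusion, which is why turning off the stars may not be worth the bother.
print(fusion_bits, bh_bits)
```

Reversible computation could push well past these numbers, since only erasures pay the Landauer toll.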

Here is more: Burning the Cosmic Commons: Evolutionary Strategies for Interstellar Colonization

Our only hope of alleviating those problems is a beneficial superhuman intelligence to coordinate our future for us. That is, if we want to make it to the stars and see our dream of a galactic civilization come true rather than end in an unimaginable war over resources.

2 comments

Our only hope of alleviating those problems is a beneficial superhuman intelligence to coordinate our future for us. That is, if we want to make it to the stars and see our dream of a galactic civilization come true rather than end in an unimaginable war over resources.

I think that most LessWrong users are aware of these issues, and don't need them repeated in such an evangelical and evidence-free manner. Sorry for being so snarky; I think I've been equally dismissive of a few other posts recently. Come on people! More rationality material, fewer rehashes of singularity and cryonics arguments. I mean, the recent post "Baby born from cryo-preserved embryo" doesn't even have anything to do with life extension by cryonics, never mind rationality. Rant over.

I thought Anders Sandberg's blog post was cool though.

[anonymous]

It was more or less a test; I've written the opposite before here ;-)