Comment author: CellBioGuy 17 March 2016 01:32:25AM *  1 point

I suspect you misunderstand my objection, and that I may have used only half of the appropriate analogy.

A universe in which your proposed ubiquitous low-matter low-energy interstellar computers exist is one in which space-based self-replication and manufacturing is a thing that happens. This implies the existence of a whole slew of 'ecological niches'. Indeed, the sort that is generally thought of in these circles (more-or-less industrially turning large amounts of matter near stars into stuff that intercepts light and uses the resultant energy for something or other) is rather simpler, is more similar to the demonstrated cases of terrestrial biology / human industry, and has more matter and energy available than what you propose. The low temperature low energy devices would be more akin to crazy deep extremophile lithotrophic bacteria or deep sea fish on Earth, living slow metabolisms and at low densities and matter/energy fluxes, while things in star systems would be akin to photosynthetic plants and algae at the surface, living at high densities at high flux.

In any situation other than perfect coordination, that which replicates itself more rapidly becomes more common. You will have adaptation and evolution. It doesn't matter if more computation can be done in one place than another - in terms of sheer matter and energy, that which uses high energy fluxes and large amounts of matter will replicate to large numbers and be dominant in terms of amount of stuff and effect on the physical universe. Other stuff could still exist, but most stuff would be of this faster, heavier type. Niches will be filled. And a stellar system niche is not akin to the deep ocean if an interstellar niche is compared to the surface of the Earth; if anything, it's the opposite. The deep sea niche may be where you see all kinds of fascinating bioluminescence and long-distance signaling epiphenomena that these organisms care about, of a sort you don't see at the surface, but in terms of biomass the surface niche dominates. Furthermore, competition amongst different things means they often do things inefficiently so as to gain advantages over each other - those that do become more common faster.

Comment author: jacob_cannell 17 March 2016 05:26:39AM *  0 points

The low temperature low energy devices would be more akin to crazy deep extremophile lithotrophic bacteria or deep sea fish on Earth, living slow metabolisms and at low densities and matter/energy fluxes,

Hmm, I think you misunderstood my model. At the limits of computation, you approach the maximal computational density - the maximum computational capacity per unit mass - only at zero temperature. The stuff you are talking about - anything that operates at any non-zero temperature - has infinitely less compute capability than the zero-temp stuff.

So your model and analogy are off - the low-temp devices are like gods, incomprehensibly faster and more powerful, while bio life and warm tech are like plants, bacteria, or perhaps rocks - not even comparable, not even in the same basic category of 'thing'.

In any situation other than perfect coordination, that which replicates itself more rapidly becomes more common.

Of course. But it depends on what the best way to replicate is. If new universe creation is feasible (and it appears to be, from what we know of physics), then civs advance rather quickly to post-singularity godhood and start creating new universes. Among other things, this allows exponential growth/replication, which is vastly superior to the puny polynomial growth you can get from physical interstellar colonization. (It also probably allows for true immortality, and perhaps actual magic - altering physics.) And even if that tech is hard/expensive, colonization does not entail anything big, hot, or dumb. Realistic colonization would simply result in many small, compact, cold civ objects. Also see the other thread.

Comment author: gwern 17 March 2016 12:14:16AM *  0 points

Given some lump of matter, there is of course a maximum information storage capacity and a max compute rate - in a reversible computer the compute rate is bounded by the maximum energy density the system can structurally support, which is in turn bounded by its mass. In terms of ultimate limits, it really depends on whether exotic options like creating new universes are practical or not. If creating new universes is feasible, there probably are no hard limits; all limits become soft.

This still doesn't answer my question. I understand your points about why colder is better; my question is: why don't they expand constantly with ever more cold brains, which are collectively capable of ever more computation? My smartphone processor is more energy-efficient than my laptop, but that doesn't mean datacenters don't exist or are useless or aren't popping up like mushrooms.

At the limits they need zero.

Correct me if I'm wrong, but zero energy consumption assumes both coldness and slowness, doesn't it? Slowness is a problem for a superintelligence. What good is super-efficiency if it takes millennia to calculate answers which some more energy would have solved quicker? Time is not free.

It may help to consider applying your statement to our current planet civ. What if we could pipe in 10,000x more energy than we currently receive from the sun? Wouldn't that be great? No. It would cook the Earth.

That would be great. If we had 10,000x more energy (and advanced technology etc), we could disassemble the Earth, move the parts around, and come up with useful structures to compute with it which would dissipate that energy productively. Turn it into a Matrioshka brain or something from one of Anders's papers on optimal large-scale computing artifacts.

Dyson spheres are extremely unlikely to be economically viable/useful, given the low value of energy past a certain tech level (vastly lower energy need per unit mass). Cold brains need some mass; the question then is how the colonization value of mass varies across space. Mass that is too close to a star would need to be moved away from the star, which is very expensive.

Yes, it is expensive. Good thing we have a star right there to move all that mass with. Maybe its energy could be harnessed with some sort of enclosure....

If colonization continues long enough, it will spread to lower and lower valued real estate. So eventually smaller rocky bodies in the outer system get stripped away, slowly progressing inward.

Which ends in everything being used up, which, even if all that planet engineering and moving doesn't require Dyson spheres, is still inconsistent with our many observations of exoplanets and leaves the Fermi paradox unresolved.

Comment author: jacob_cannell 17 March 2016 05:02:20AM *  0 points

I understand your points about why colder is better; my question is: why don't they expand constantly with ever more cold brains, which are collectively capable of ever more computation?

At any point in development, investing resources in physical expansion has a payoff/cost/risk profile, as does investing resources in tech advancement. Spatial expansion offers polynomial growth (at most roughly cubic in time, since expansion is bounded by the light cone), which is pretty puny compared to the exponential growth from tech advancement. Furthermore, the distances between stars are pretty vast.

If you plot our current trajectory forward, we get to a computational singularity long, long before any serious colonization effort. Space colonization is kind of comical in its economic payoff compared to chasing Moore's Law. So everything depends on what the endpoint of the tech singularity is. Does it actually end with some hard limit to tech? If it does, and slow polynomial growth is the only option after that, then you get galactic colonization as the likely outcome. If the tech singularity leads to stronger outcomes, a la new universe manipulations, then you never need to colonize; it's best to just invest everything locally. And of course there is the spectrum in between, where you get some colonization, but the timescale is slowed.
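
As a rough illustration of that gap (toy numbers - the fixed compute doubling time and light-speed-limited expansion are assumptions for the sketch, not figures from this thread):

```python
# Toy comparison (all parameters are illustrative assumptions): colonization
# can claim resources no faster than the light-cone volume grows (~t^3),
# while local tech progress is modeled as compute doubling every 2 years.
# Any exponential curve eventually dwarfs any polynomial one.

def colonization_resources(years: float, rate: float = 1.0) -> float:
    """Polynomial growth: resources claimed ~ (expansion radius)^3."""
    return rate * years ** 3

def local_compute(years: float, doubling_time: float = 2.0) -> float:
    """Exponential growth: relative compute with a fixed doubling time."""
    return 2.0 ** (years / doubling_time)

for t in (10, 50, 100, 200):
    print(f"t={t:4d} yr  colonization ~{colonization_resources(t):.2e}"
          f"  local compute ~{local_compute(t):.2e}")
```

Under these assumptions the exponential curve is already about 9 orders of magnitude ahead at year 100 and about 23 orders ahead at year 200.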

Correct me if I'm wrong, but zero energy consumption assumes both coldness and slowness, doesn't it?

No, not for reversible computing. The energy required to represent/compute a 1-bit state transition depends on reliability, temperature, and speed, but that energy is not consumed unless there is an erasure. (And since energy is always conserved, erasure really just means you lost track of a bit.)
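
A minimal numerical sketch of that erasure cost, using the standard Landauer bound of kT ln 2 joules per erased bit (the temperatures below are just illustrative):

```python
# Minimal sketch of the Landauer bound: erasing one bit dissipates at least
# k_B * T * ln(2) joules, so the floor scales linearly with temperature and
# vanishes only at T = 0. Reversible operations that erase nothing have no
# such floor.
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_joules_per_bit(temp_kelvin: float) -> float:
    """Minimum dissipation for one irreversible bit erasure at temperature T."""
    return K_B * temp_kelvin * math.log(2)

for t in (300.0, 77.0, 4.0, 0.01):
    print(f"T = {t:6.2f} K -> {landauer_joules_per_bit(t):.2e} J per erased bit")
```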

In fact, reversible superconducting designs are among the fastest designs feasible in the near term.

That would be great. If we had 10,000x more energy (and advanced technology etc), we could disassemble the Earth, move the parts around, and come up with useful structures to compute with it which would dissipate that energy productively.

Biological computing (cells) doesn't work at those temperatures, and all the exotic tech far past bio computers requires even lower temperatures. The temperatures implied by 10,000x the energy flux on Earth preclude all life or any interesting computation.

Yes, it is expensive. Good thing we have a star right there to move all that mass with. Maybe its energy could be harnessed with some sort of enclosure....

I'm not all that confident that moving mass out of the system is actually better than just leaving it in place and doing best-effort cooling in situ. The point is that energy is not the constraint for advancing computing tech; it's more mass-limited than anything, or perhaps knowledge is the most important limit. You'd never want to waste all that mass on a Dyson sphere. All of the big designs are dumb - you want it to be as small, compact, and cold as possible. More like a black hole.

Which ends in everything being used up, which, even if all that planet engineering and moving doesn't require Dyson spheres, is still inconsistent with our many observations of exoplanets and

It's extremely unlikely that all the matter gets used up in any realistic development model, even with colonization. Life did not 'use up' more than a tiny fraction of the matter of Earth, and so on.

leaves the Fermi paradox unresolved.

From the evidence for mediocrity, the lower Kolmogorov complexity of mediocrity, and the huge number of planets in the galaxy, I start with a prior strongly favoring a reasonably high number of civs per galaxy, and low odds on us being first.

We have high uncertainty about the end/late outcome of a post-singularity tech civ (or at least I do; I get the impression that people here inexplicably have extremely high confidence in the stellavore expansionist model, perhaps because of a lack of familiarity with the alternatives? not sure).

If post-singularity tech allows new universe creation and other exotic options, you never have much colonization - at least not in this galaxy, from our perspective. If it does not, and there is an eventual end of tech progression, then colonization is expected.

But as I argued above, even colonization could be hard to detect - as advanced civs will be small/cold/dark.

Transcension is strongly favored a priori for anthropic reasons - transcendent universes create far more observers like us. Then, updating on what we can see of the galaxy, colonization loses steam: our temporal rank is normal, whereas most colonization models predict we should be early.

For transcension, naturally it's hard to predict what that means... but one possibility is a local 'exit', at least from the perspective of outside observers. Creation of lots of new universes, followed by physical civ-death in this universe, but effective immortality in new universes (a la game-theoretic horse trading across the multiverse). New universe creation could also potentially alter physics in ways that permit further tech progression. Either way, all of the mass is locally invested/used up for 'magic' that is incomprehensibly more valuable than colonization.

Comment author: CellBioGuy 15 March 2016 07:18:55PM *  0 points

This idea implies a degree of coordination that does not happen in actual ecologies we have seen. Thus we get trees extravagantly sucking up mineral nutrients and building massive scaffolds to hold their photosynthetic structures over their competition, and weeds that voraciously multiply and compete with each other to take up every bit of sunlight and soil that the bigger things can't establish themselves in, rather than a thin scum of microbial mats that efficiently intercepts energy. You are implying a climax community without any other seres, and large amounts of material that are not merely used inefficiently but not used at all.

Things that reproduce themselves effectively become more common regardless of efficiency, and even multicellular organisms built of exquisite coordination get cancer.

Comment author: jacob_cannell 16 March 2016 06:06:28PM *  0 points

Given that physics is the same across space, the math/physics/tech of different civs will end up being the same, more or less. I wouldn't call that coordination.

To extend your analogy, plants don't grow in the center of the Earth - and this has nothing to do with coordination. Likewise, no human tribes colonized the ocean depths, and this has nothing to do with coordination.

Comment author: gwern 15 March 2016 08:19:47PM *  2 points

Assuming large-scale quantum computing is possible, the ultimate computer is thus a reversible, massively entangled quantum device operating at absolute zero. Unfortunately, such a device would be delicate to a degree that is hard to imagine - even a single misplaced high-energy particle could cause enormous damage. In this model, an advanced computational civilization would take the form of a compact body (anywhere from asteroid to planet size) that employs layers of sophisticated shielding to deflect as much of the incoming particle flux as possible. The ideal environment for such a device is as far away from hot stars as one can possibly go, and the farther the better. The extreme energy efficiency of advanced low-temperature reversible/quantum computing implies that energy is not a constraint. These advanced civilizations could probably power themselves using fusion reactors for millions, if not billions, of years.

I don't understand why this predicts no Dyson spheres, no visible mega-engineering, etc, and convergent self-limiting to a handful of solar systems and cold brains per civilization.

Computing near the Sun costs more because it's hotter, sure. Fortunately, I understand that the Sun produces hundreds, even thousands of times more energy than a little fusion reactor does, so some inefficiencies are not a problem. You say that the reversible brains don't need that much energy. OK, but more computing power is always better, the cold brains want as much as possible, so what limits them? If it's energy, then they will want to pipe in as much energy as possible from their local star. If it's putting matter into the right configuration for cold brains and shielding, then they will... want to pipe in as much matter lifted by energy as possible from their local star so they can build even more cold brains. Space is vast, so it's not like they're going to run out of cold places to put cold brains, and even if they do, well, a Dyson sphere around a star will fix that, so they'll keep expanding with the matter & energy. Interconnects and IO use up a lot of energy? Well, we already know how to solve that. Whatever the binding limit to their computational power is, it seems to be solved by either more matter, more energy, or both, and the largest available source of both is stars, far from being 'trash heaps'.

And since they are already expanding, their massive redundancy and deep-space stealth/mobility mean relativistic strikes are irrelevant, and so the usual first-mover expansionary convergent argument applies. So you should get a universe of Dyson spheres feeding out mass-energy to the surrounding cold brains who are constantly colonizing fresh systems for more mass-energy to compute in the voids with. This doesn't sound remotely like a Fermi paradox resolution.

Comment author: jacob_cannell 16 March 2016 05:54:37PM *  1 point

Computing near the Sun costs more because it's hotter, sure. Fortunately, I understand that the Sun produces hundreds, even thousands of times more energy than a little fusion reactor does, so some inefficiencies are not a problem.

Every practical computational tech substrate has some error-bounded compute/temperature curve, where computational capability quickly falls to zero past some upper-bound temperature. Even for our current tech, computational capacity essentially falls off a cliff somewhere well below 1,000 K.
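
A toy version of such a curve (assuming thermally activated bit flips over a fixed ~1 eV barrier - an illustrative stand-in, not a model of any real device):

```python
# Toy error-vs-temperature curve. Assumption: errors are thermally activated
# bit flips over a fixed energy barrier E_b ~ 1 eV (roughly CMOS-like), so
# error probability per operation ~ exp(-E_b / (k_B * T)). The rate explodes
# as temperature rises, giving reliable computation a temperature ceiling.
import math

K_B_EV = 8.617333e-5  # Boltzmann constant in eV/K
E_BARRIER_EV = 1.0    # assumed switching barrier (illustrative)

def thermal_error_rate(temp_kelvin: float) -> float:
    return math.exp(-E_BARRIER_EV / (K_B_EV * temp_kelvin))

for t in (300.0, 600.0, 1000.0):
    print(f"T = {t:6.1f} K -> error rate per op ~ {thermal_error_rate(t):.1e}")
```

Between 300 K and 1,000 K the assumed error rate climbs by more than eleven orders of magnitude - that is the 'cliff'.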

My general point is that really advanced computing tech shifts all those curves over - towards lower temperatures. This is a hard limit of physics; it cannot be overcome. So for a really advanced reversible quantum computer that employs superconductivity and long-coherence quantum entanglement, 1 K is just as impossible as 1,000 K. It's not entirely a matter of efficiency.

Another way of looking at it: advanced tech just requires lower temperatures, as temperature is just a measure of entropy (undesired/unmodeled state transitions). Temperature is literally an inverse measure of computational potential. The ultimate computer necessarily must have a temperature of zero.

You say that the reversible brains don't need that much energy.

At the limits they need zero. Anywhere close to those limits, they have no need of stars. Not only that, but they couldn't survive any energy influx much larger than some threshold, and that threshold necessarily goes to zero as their computational capacity approaches the theoretical limits.

If it's energy, then they will want to pipe in as much energy as possible from their local star.

No. There is an exactly correct amount of energy to pipe in, determined by the viable operating temperature of their current tech. And this amount goes to zero as you advance up the tech ladder.

It may help to consider applying your statement to our current planet civ. What if we could pipe in 10,000x more energy than we currently receive from the sun? Wouldn't that be great? No. It would cook the Earth.
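
A back-of-the-envelope check on that (my arithmetic, using simple radiative balance; the 255 K figure is Earth's approximate current effective radiating temperature):

```python
# Rough equilibrium-temperature scaling from the Stefan-Boltzmann balance:
# absorbed power = radiated power ~ T^4, so the equilibrium temperature
# scales as the fourth root of the power multiplier.
T_EFFECTIVE_NOW_K = 255.0   # Earth's current effective temperature, approx.

def equilibrium_temp(power_multiplier: float) -> float:
    return T_EFFECTIVE_NOW_K * power_multiplier ** 0.25

for mult in (1, 100, 10_000):
    print(f"{mult:>6}x solar input -> ~{equilibrium_temp(mult):.0f} K")
```

10,000x the input works out to roughly 10x the equilibrium temperature, around 2,500 K - hot enough to melt rock, never mind biology.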

The same principle applies, but as you advance up the ultra-tech ladder, the temperature ranges get lower and lower (because remember, temperature is literally an inverse measure of maximum computational capability).

OK, but more computing power is always better, the cold brains want as much as possible, so what limits them?

Given some lump of matter, there is of course a maximum information storage capacity and a max compute rate - in a reversible computer the compute rate is bounded by the maximum energy density the system can structurally support, which is in turn bounded by its mass. In terms of ultimate limits, it really depends on whether exotic options like creating new universes are practical or not. If creating new universes is feasible, there probably are no hard limits; all limits become soft.
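
For a sense of what "bounded by its mass" can mean quantitatively, here is the Margolus-Levitin bound (at most 2E/πħ operations per second for energy E) applied to a lump of matter - an idealized upper limit, not a claim about practical devices:

```python
# Margolus-Levitin bound: a system with energy E can perform at most
# 2E / (pi * hbar) orthogonal state transitions ("ops") per second.
# Taking E = m*c^2 gives the oft-quoted ~5.4e50 ops/s for 1 kg
# (Lloyd's "ultimate laptop" figure).
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
C = 2.99792458e8        # speed of light, m/s

def max_ops_per_second(mass_kg: float) -> float:
    """Upper bound with all mass-energy devoted to computation."""
    energy = mass_kg * C ** 2
    return 2 * energy / (math.pi * HBAR)

print(f"1 kg lump:  ~{max_ops_per_second(1.0):.1e} ops/s")
print(f"Earth mass: ~{max_ops_per_second(5.97e24):.1e} ops/s")
```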

So you should get a universe of Dyson spheres feeding out mass-energy to the surrounding cold brains who are constantly colonizing fresh systems for more mass-energy to compute in the voids with

Dyson spheres are extremely unlikely to be economically viable/useful, given the low value of energy past a certain tech level (vastly lower energy need per unit mass).

Cold brains need some mass; the question then is how the colonization value of mass varies across space. Mass that is too close to a star would need to be moved away from the star, which is very expensive.

So the most valuable mass - the mass that gets colonized first - would be the rogue planets/nomads, which apparently are more common than planets bound to stars.

If colonization continues long enough, it will spread to lower and lower valued real estate. So eventually smaller rocky bodies in the outer system get stripped away, slowly progressing inward.

The big unknown variable is again what the end of tech in the universe looks like, which gets back to that new universe creation question. If that kind of ultimate/magic tech is possible, civs will invest everything into that, and you have less colonization, depending on the difficulty/engineering tradeoffs.

Comment author: DanArmak 13 March 2016 11:58:35AM 0 points

Wouldn't mediocrity imply intelligence evolving many times in the Earth's past, which would imply a Great Filter in our near future? See also my other comment.

Comment author: jacob_cannell 13 March 2016 06:10:16PM *  2 points

Depends on what you mean by 'intelligence'.

If you mean tech/culture/language-capable, well, it isn't surprising that has only happened once, because it is so recent, and the first tech species tends to take over the planet and preclude others.

If you mean something more like "near-human problem-solving capability", then that has evolved robustly in multiple separate vertebrate lineages: corvids, primates, cetaceans, proboscideans. It also evolved in an invertebrate lineage (octopi) with a very different brain plan. I think that qualifies as extremely robust, and it suggests that the evolution of cultural intelligence is probably inevitable, given enough time/energy/etc.

Comment author: DanArmak 13 March 2016 11:56:37AM 0 points

If it was the maximum possible speed, then it must have involved very unlikely events that took billions of years to happen maybe just once, and that's evidence of a Great Filter in our past.

If it wasn't the maximum possible speed, then there should be many planets where intelligence evolved much earlier in the Universe's lifetime, and the fact we don't see aliens is evidence of a Great Filter in the future.

Comment author: jacob_cannell 13 March 2016 06:04:48PM *  1 point

evidence of a Great Filter in our past.

Most of the space of possible great filters in the past has been ruled out. Rare planets is out. Tectonics is out. Rare bio origins is out. The mediocrity of Earth's temporal rank rules out past disaster scenarios, a la Bostrom/Tegmark's article.

and the fact we don't see aliens is evidence of a Great Filter in the future.

Mediocrity of temporal rank rules out any great filter in the future that has anything to do with other civs, because in scenarios where that is the filter, surviving observers necessarily find themselves on early planets.

Furthermore, natural disasters are already ruled out as a past filter, and thus as a future filter as well.

So all that remains is this narrow space of possibilities that relate to the timescale of evolution, where Earth is rare in that evolution runs unusually fast here. Given that there are many billions of planets in habitable zones in the galaxy, Earth would have to be something like 1-in-10^10 rare, which seems pretty unlikely at this point.

Also, 'seeing aliens' depends on our model of what aliens should look like - which really is just our model for the future of post-biological civs. Our observations currently can only rule out the stellavore expansionist model. The transcension model predicts small, cold, compact civs that would be very difficult to detect directly.

That being said, if aliens exist, the evidence may already be here, we just haven't interpreted it correctly.

Comment author: DanArmak 12 March 2016 09:22:32PM 1 point

Life on Earth has existed for billions of years without experiencing a terminal snowball or greenhouse scenario. It also recovered from several snowball periods once they ended.

So the fact that intelligence took this long to evolve - 4-5 billion years after biogenesis, and 600-700 million years after the first multicellular animals - must be important.

If it were the case that the Great Filter was the short average lifespan of habitable planets before they became iceballs or greenhouses, then we should expect to appear much much earlier in our planet's history.

Comment author: jacob_cannell 12 March 2016 10:40:20PM 1 point

So the fact that intelligence took this long to evolve - 4-5 billion years after biogenesis, and 600-700 million years after the first multicellular animals - must be important.

~5 billion years out of an expected ~10 billion year lifespan for a star like the sun - mediocrity all the way down!

Comment author: DanArmak 12 March 2016 09:37:38PM 0 points

The high value matter/energy or real estate is probably a tiny portion of the total, and is probably far from stars, as stellar environments are too noisy/hot for advanced computation.

Can you expand on this?

All computation requires matter/energy. If a civ wants to increase its amount of computation, then eventually it will need to use up the huge majority of matter, which resides in stars. I think it was the Significant Digits HPMOR fanfic where Harry remarked that the stars were huge piles of valuable materials that had inconveniently caught fire and needed to be put out. Of course, it's still necessary to have a practical way of star lifting.

One alternative is that advanced civs find a way to use dark matter instead, or some other physics we don't really understand yet.

Comment author: jacob_cannell 12 March 2016 10:34:03PM *  3 points

The high value matter/energy or real estate is probably a tiny portion of the total, and is probably far from stars, as stellar environments are too noisy/hot for advanced computation.

Can you expand on this?

See this post.

Extrapolating from current physics to ultimate computational intelligences, the most important constraint is temperature/noise, not energy. A hypothetical optimal SI would consume almost no energy, and its computational capability would be inversely proportional to its temperature. So at the limits you have something very small, dense, cold, and dark, approaching a black hole.

Passive shielding appears to be feasible, but said feasibility decreases non-linearly with proximity to stars.

So think of the computational potential of space-time as a function of position in the galaxy. The computational potential varies inversely with temperature. The potential near a star is abysmal. The most valuable real estate is far out in the interstellar medium, potentially on rogue planets or even smaller cold bodies, where passive shielding can help reduce temperatures down to very low levels.

So to an advanced civ, the matter in our solar system is perhaps worthless - the energy cost of pulling the matter far enough away from the star and cooling it is greater than its computational value.

All computation requires matter/energy.

Computation requires matter to store/represent information, but doesn't require consumption of that matter. Likewise computation also requires energy, but does not require consumption of that energy.

At the limits you have a hypothetical perfect reversible quantum computer, which never erases any bits. Instead, unwanted bits are recycled internally and used for RNG. This requires a perfect balance of erasure with random-bit consumption, but that seems possible in theory for the general approximate inference algorithms an SI is likely to be based on.
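
As a toy illustration of logic that never erases (not anything specific to the designs being discussed here): the Toffoli gate computes AND while staying reversible, so the inputs can always be recovered from the outputs and nothing ever has to be discarded.

```python
# Toffoli (controlled-controlled-NOT) gate: flips the target bit iff both
# control bits are 1. With the target initialized to 0 it computes AND of
# the controls, yet no information is lost - the gate is its own inverse,
# so in principle no bit ever needs to be erased.

def toffoli(a: int, b: int, c: int) -> tuple[int, int, int]:
    """Reversible gate: (a, b, c) -> (a, b, c XOR (a AND b))."""
    return a, b, c ^ (a & b)

for a in (0, 1):
    for b in (0, 1):
        out = toffoli(a, b, 0)              # third output bit equals a AND b
        assert toffoli(*out) == (a, b, 0)   # self-inverse: inputs recovered
        print(f"AND({a},{b}) = {out[2]}")
```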

that the stars were huge piles of valuable materials that had inconveniently caught fire and needed to be put out.

This is probably incorrect. From the perspective of advanced civs, the stars are huge piles of worthless trash. They are the history of life rather than its future, the oceans from which advanced post-bio civs emerge.

Comment author: Gurkenglas 12 March 2016 04:23:44PM *  0 points

AIXI is simple, and if our universe happened to allow Turing machines to calculate endlessly behind Cartesian barriers, it could be interesting in the sense of actually working.

Comment author: jacob_cannell 12 March 2016 06:34:02PM 0 points

We have wildly different definitions of interesting, at least in the context of my original statement. :)

Comment author: hairyfigment 11 March 2016 08:18:44AM 5 points

But more generally, AI should use whatever works. If that happens to be "scruffy" methods, then so be it.

This seems like a bizarre statement if we care about knowable AI safety. Near as I can tell, you just called for the rapid creation of AGI that we can't prove non-genocidal.

Comment author: jacob_cannell 12 March 2016 09:02:01AM -4 points

If you can prove anything interesting about a system, that system is too simple to be interesting. Logic can't handle uncertainty, and doesn't scale at all to describing/modelling systems as complex as societies, brains, AIs, etc.
