And the whole earth was of one language, and of one speech. And it came to pass . . .they said, Go to, let us build us a city and a tower, whose top may reach unto heaven; and let us make us a name, lest we be scattered abroad upon the face of the whole earth. And the Lord came down to see the city and the tower, which the children built. And the Lord said, Behold, the people is one, and they have all one language; and this they begin to do; and now nothing will be restrained from them, which they have imagined to do. Go to, let us go down, and there confound their language, that they may not understand one another's speech. So the Lord scattered them abroad from thence upon the face of all the earth: and they left off to build . . . 

Genesis 11: 1-9

Some elementary quantitative physical properties of systems compactly describe a wide spectrum of macroscopic configurations.  Take, for example, the concept of temperature: given a basic understanding of physics, this single parameter encodes a powerful conceptual mapping of state-space.

It is easy for your mind to visualize how a large change in temperature would affect everything from your toast to a planetary ecosystem.  It is one of the key factors that divide habitable planets such as Earth from inhospitably cold worlds like Mars or burning infernos such as Venus.  You can imagine the Earth growing hotter and visualize an entire set of complex consequences: melting ice caps, rising water levels, climate changes, eventual loss of surface water, a runaway greenhouse effect, and a scorched planet.

Here is an unconsidered physical parameter that could determine much of the future of civilization: the speed of thought and the derived subjective speed of light.  

The speed of thought is not something we are accustomed to pondering, because we all share the same underlying neurological substrate, which operates at a maximum frequency of around a kilohertz and appears to have minor and major decision update cycles at rates in the vicinity of 33 Hz to 3 Hz.[1]

On the other hand the communication delay has changed significantly over the last ten thousand years as we evolved from hunter-gatherer tribes to a global civilization.

For much of early human history, the normal instantaneous communication distance limit was the audible range of about 100 feet, and long distance communication consisted of sending physical human messengers: a risky endeavor that could take months to traverse a continent.

The long distance communication delay in this era (on the order of months) was more than 10^9 times the baseline thought cycle (which is around a millisecond).  The developmental outcome in this type of regime is divergence.  New ideas and slight mutations of existing beliefs are generated in local ingroups far faster than they can ever propagate to remote outgroups.

In the divergent regime cultures fragment into sub-cultures; languages split into dialects; and dialects become new languages and cultures as groups expand geographically.[2]

Over time a steady accumulation of technological developments increased subjective bandwidth and reduced subjective latency in the global human network: the advent of agricultural civilization concentrated human populations into smaller regions, the domestication of horses decreased long distance travel time, books allowed stored communication from the past, and the printing press provided an efficient one-to-many communication amplifier.

Yet despite all of this progress, even as late as the mid 19th century the Pony Express was considered fast long distance communication.  It was not until very recently, in the 20th century, that near instantaneous long distance communication became relatively cheap and widespread.[3]

Today the communication delay for typical point to point communication around the world is somewhere around 200 to 300 ms, corresponding to a low delay/thought-cycle ratio of around 10^2.  This figure is close enough to the brain's natural update cycles to permit real time communication.
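
For concreteness, here is a quick back-of-the-envelope check of these two ratios; the figures are the rough order-of-magnitude values assumed in this essay, not precise measurements:

```python
# Rough check of the delay / thought-cycle ratios quoted above.
THOUGHT_CYCLE_S = 1e-3  # baseline human thought cycle: ~1 ms

messenger_delay_s = 30 * 24 * 3600  # messenger era: delays on the order of a month
internet_delay_s = 0.25             # modern era: ~250 ms point-to-point delay

print(messenger_delay_s / THOUGHT_CYCLE_S)  # ~2.6e9 -> divergent regime (>10^9)
print(internet_delay_s / THOUGHT_CYCLE_S)   # ~2.5e2 -> real-time regime (~10^2)
```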

It is difficult to measure, but the general modern trend seems to have now finally shifted towards convergence rather than divergence.  Enough people are moving between cultures, translating between languages, and communicating new ideas fast enough relative to the speed of thought to largely counter the tendency toward divergence.

But now consider that our global computational network consists of two very different substrates: the electronic substrate, which operates at near-light speed, and the neural substrate, which operates at much slower chemical signaling speeds, more than one million times slower.

At the moment the vast majority of the world's knowledge and intelligence is encoded in the larger and slower neural substrate, but the electronic substrate is growing exponentially at a vastly faster pace.

Viewed as a single global cybernetic computational network, the system shows a massive speed discrepancy between its neural and electronic sub-components.

So what happens when we shift completely to the electronic substrate, when we have artificial brains and AGIs that think at full electronic speeds?

The speed of light measured in atomic seconds is the same for all physical frames of reference, but its subjective speed varies with one's subjective speed of thought.  This subjective relativity causes an effective time dilation proportional to one's level of acceleration.

For an AGI or upload that has an architecture similar to the brain but encoded in the electronic substrate using high efficiency neuromorphic circuitry, thoughts could be computed in around a thousand clock cycles or less, at a rate of billions of clock cycles per second.

Such a Mind would experience a million fold time dilation, or an entire subjective year every thirty seconds.

Imagine the external universe, time itself, slowing down by a factor of a million.  Watching a human walk to work would be like watching grass grow.  Actually it would be considerably worse: five minutes would correspond to an unimaginable decade of subjective time for an acceleration level 6 hyperintelligence.

A bullet would not appear to be much faster than a commuter, and the speed of light itself, the fastest signal propagation in the universe, would be slowed down to just 300 subjective meters per second, roughly the speed of a jetliner.
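
A minimal sketch of these level 6 figures, assuming a flat 10^6 speedup and c = 3x10^8 m/s (the last line previews the point below about reaching distant sites):

```python
# Subjective quantities at acceleration level 6 (an assumed 10^6-fold speedup).
SPEEDUP = 1e6
C = 3e8                      # speed of light, m/s
YEAR_S = 365.25 * 24 * 3600  # one year in seconds

print(YEAR_S / SPEEDUP)           # ~31.6 physical seconds per subjective year
print(5 * 60 * SPEEDUP / YEAR_S)  # ~9.5 subjective years in five physical minutes
print(C / SPEEDUP)                # 300.0 subjective m/s for light (~jetliner speed)
print(0.15 * SPEEDUP / 86400)     # ~1.7 subjective days for a ~150 ms round trip
```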

Real-time communication would thus only be possible with entities in the same building and on the same local network.

It would take a subjective day or two to reach distant external internet sites.  Browsing the web would not be possible in the conventional sense.  It would appear the only viable strategy would be to copy most of the internet into a local cache.  But even this would be impeded by the million fold subjective bandwidth slowdown.  

Today's fastest gigabit Ethernet backbone connections would be reduced back down to mere kilobit per second modem speeds.  A cable modem connection speed would require about as much fiber bandwidth as our entire current transatlantic fiber capacity.
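
Subjective bandwidth shrinks by the same factor; a one-line sketch with assumed round numbers:

```python
# Subjective bandwidth at level 6: a 1 Gbps backbone link divided by the speedup.
print(1e9 / 1e6)  # 1000.0 bits per subjective second, i.e. ~1 kbps modem territory
```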

Acceleration level 6 corresponds to a value of around 10^8 for the communication delay / thought-cycle ratio, a shift backwards roughly equivalent to the era before the advent of the telegraph.  This is the historical domain of both the Roman Empire and pre-Civil War America.
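
The 10^8 figure follows directly: at level 6 a single thought cycle occupies only about a nanosecond of physical time, against a roughly 100 ms world-spanning delay. A sketch under those assumptions:

```python
# Level 6 delay / thought-cycle ratio (assumed round figures).
physical_delay_s = 0.1        # ~100 ms intercontinental delay
thought_cycle_s = 1e-3 / 1e6  # a 1 ms subjective cycle compressed by the 10^6 speedup
print(physical_delay_s / thought_cycle_s)  # 1e8 -> pre-telegraph regime
```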

If Moore's Law continues well into the next decade, further levels of acceleration will be possible.  A combination of denser circuitry, architectural optimizations over the brain, and higher clock rates could lead to acceleration level 9 hyperintelligences.  Overclocked circa-2011 CPUs are already approaching 10 GHz, and test transistors have achieved speeds into the terahertz range in the lab.[4]

The brain takes about 1000 'clocks' of the base neuron frequency to compute one second's worth of thought.  If a future massively dense and parallel neuromorphic architecture could do the same work 10 times more efficiently, and thus compute one second of thought in 100 clock cycles while running at 100 GHz, this would enable acceleration level 9.[5]
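
Here is how the level 9 figure falls out of those hypothetical hardware numbers:

```python
# Deriving acceleration level 9 from the assumed figures above.
BRAIN_CLOCKS_PER_THOUGHT_SECOND = 1000  # ~1000 neuron 'clocks' per second of thought
BRAIN_CLOCK_HZ = 1e3                    # ~1 kHz neural substrate
CHIP_CLOCKS_PER_THOUGHT_SECOND = 100    # assumed 10x architectural efficiency gain
CHIP_CLOCK_HZ = 100e9                   # assumed 100 GHz clock rate

brain_s = BRAIN_CLOCKS_PER_THOUGHT_SECOND / BRAIN_CLOCK_HZ  # 1.0 physical second
chip_s = CHIP_CLOCKS_PER_THOUGHT_SECOND / CHIP_CLOCK_HZ     # 1e-9 physical seconds
print(brain_s / chip_s)  # 1e9 -> acceleration level 9
```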

Acceleration level 9 stretches the limits of human imagination.  It's difficult to conceive of an intelligence that experiences around 30 years in just one second, or a billion subjective years for every sidereal year.

At this dilation factor light slows to just 30 subjective centimeters per second, a slow crawl.  More crucially, light travels just 3 millimeters per clock cycle, or about 30 centimeters per thought cycle, which would place serious size constraints on the physical implementation of a single mind.  To make integrated decisions with a unified knowledge base, in other words to think in the sense we understand the term, the core of a Mind running at these speeds would have to be crammed into the space of a modern desktop box (although it certainly could have a larger secondary knowledge store accessible with some delay).
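
Under the same assumed 100 GHz, 100-clocks-per-thought figures, the size constraint can be read off directly:

```python
# Light-travel distances at level 9 (same assumed figures as note 5).
C = 3e8                   # speed of light, m/s
CLOCK_HZ = 100e9          # assumed clock rate
CLOCKS_PER_THOUGHT = 100  # one thought cycle ~ 1 subjective ms

print(C / 1e9)                            # 0.3 m/s: subjective light speed, a crawl
print(C / CLOCK_HZ)                       # 0.003 m: light travel per clock cycle (3 mm)
print(C * CLOCKS_PER_THOUGHT / CLOCK_HZ)  # 0.3 m per thought cycle: a desktop-sized core
```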

The small size constraint would severely limit how much power/heat one could throw at the problem, and thus these high speeds will probably require much higher circuit densities than memory requirements alone would imply, in order to achieve the required energy efficiency.

With light itself crawling along at 30 subjective centimeters per second, it would take data packets hundreds of millions of subjective seconds, on the order of years, to make typical transits across the internet.  These speeds are already close to physical limits; even level 9 hyperintelligences will probably not be able to surmount the speed of light delay.

The entire fiber backbone of the circa-2011 transatlantic connection would be required to achieve late 20th century dialup modem speeds.[6]

Even using all of that fiber, it would take on the order of a hundred physical seconds to transfer a 10^14 byte Mind, corresponding to thousands of subjective years.
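
A sketch of the transfer arithmetic, assuming a hypothetical 10^14 byte Mind and the roughly 8 Tbps figure from note 6:

```python
# Moving a hypothetical 1e14-byte Mind across ~8 Tbps of transatlantic fiber.
MIND_BYTES = 1e14
FIBER_BPS = 8e12             # ~8 Tbps circa 2011 (note 6)
SPEEDUP = 1e9                # acceleration level 9
YEAR_S = 365.25 * 24 * 3600

transfer_s = MIND_BYTES * 8 / FIBER_BPS  # 100.0 physical seconds
print(transfer_s)
print(transfer_s * SPEEDUP / YEAR_S)     # ~3.2e3 subjective years
```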

A level 9 world is one where the subjective communication delay ratio, approaching 10^11, is a throwback to the prehistoric era.  Strong Singletons, and even weaker systems such as global governments or modern markets, would be unlikely or impossible at such high levels of acceleration.[7]

From the social and cultural perspective, high levels of thought acceleration are structurally equivalent to the world expanding to billions of times its current size.

It is similar to the earth exploding into an intergalactic or hyperdimensional civilization linked together by a vast impossibly slow lightspeed transit network.

Entire new cultures and civilizations would form and play out complex histories in the blink of an eye.

With every increase in circuit density and speed, the new metaverse will vasten exponentially in virtual space and time even as it physically shrinks and quickens down into the ever smaller, faster levels of the real.

And although all of this change will be unimaginably fast for a biological human, Moore's Law will be a distant ancestral memory for level 9 intelligences, as it depends on a complex series of events in the impossibly slow physical world of matter.  Even if an entire new hardware generation transition could be compressed into just 8 hours of physical time through nanotechnological miracles, that's still an unimaginable million years of subjective time at acceleration level 9.

Another interesting subjective difference: computer speed or performance will not change much from the inside perspective of a hyperintelligence running on the same hardware.  Traditional computers will indefinitely maintain roughly the same slow subjective speeds for minds running on the same substrate at those same speeds.  Density shrinkings will enable more and/or larger minds, but only a net shift towards the latter would entail a net increase in traditional parallel CPU performance available per capita.  And as discussed previously, speed of light delays severely constrain the size of large unified minds.

The radical space-time compression of the Metaverse Singularity model suggests a reappraisal of the Fermi Paradox and the long-term fate of civilizations.  

The speed of light barrier gives a natural gradient to the expansion of complexity: it is inwards, not outwards.  

Humanity today could mount an expedition to a nearby solar system, but the opportunity cost of such an endeavor vastly exceeds any realistic discounted returns.  The incredible resources space colonization would require are much better put to use increasing our planetary intelligence through investing in further semiconductor technology.

This might never change.  Indeed such a change would be a complete reversal of the general universal trend towards smaller, faster complexity.

Each transition to a new level of acceleration and density will increase the opportunity cost of expansion in proportion.  Light-years are vast units of space-time for humans today, but they are unimaginably vaster for future accelerated hyperintelligences. 

Facing the future it appears that looking outwards into space is looking into the past, for the future lies in innerspace, not outerspace.

 

Notes

[1] Human neuron action potentials have a measured minimum interval of a little less than a millisecond.  This is thus one measure of rough equivalence to the clock frequency of a digital circuit, but it is something of a conservative over-estimate, as neurological circuits are not synchronous at that frequency.  Many circuits in the brain are semi-synchronized over longer intervals roughly corresponding to the various measured 'brain wave' frequencies, and neuron driven mechanisms such as voice have upper frequencies of the same order.  Humans can react in as little as 150 ms in some conditions, but appear to initiate actions such as saccades at a rate of 3 to 4 per second.  Smaller primate brains are similar but somewhat quicker.

[2] The greater monogenesis theory, which derives all extant languages and cultures from a single distant historical proto-language, is a matter of debate amongst linguists, but the similarity in many low-level root words is far beyond chance.  The more restrained theory of a common root Proto-Indo-European language is near universally accepted.  This map and this tree help visualize the geographical-historical divergence of this original language/culture across the supercontinent, along with its characteristic artifact: the chariot.  All of this divergence occurred on a timescale of five to six millennia.

[3] Homing pigeons, where available, were of course much faster than the Pony Express, but were rare and low-bandwidth.

[4] Apparently this has been done numerous times in the last decade in different ways.  Here is one example.  Of course, making a few transistors run in the terahertz range doesn't get you much closer to making a whole CPU actually run at that speed, for a large variety of reasons.

[5] None of these particular numbers will seem outlandish a decade or two from now if Moore's Law holds its pace.  However, getting a brain or AGI type design to run at these fantastic speeds will likely require more significant innovations, such as a move to 3D integrated circuits and major interconnect breakthroughs.  There are many technological uncertainties here, but fewer than those involved in Drexler-style nanotech, and this is all on the current main path.

[6] It looks like we currently have around 8 Tbps of transatlantic bandwidth circa 2011.

[7] Nick Bostrom seems to have introduced the Singleton concept to the Singularity/Futurist discourse here.  He mentions artificial intelligences as one potential Singleton-promoting technology but doesn't consider their speed potential with respect to the speed of light.

 

Comments
PlaidX

I love this article, but I disagree with the conclusion. You're essentially saying that a post-singularity world would be too impatient to explore the stars. I grant you that thinking a million times faster would make someone very impatient, but living a million times longer seems likely to counterbalance that.

Back in the days of Christopher Columbus, what stopped people from sailing off and finding new continents wasn't laziness or impatience, it was ignorance and a high likelihood of dying at sea. If you knew you could build a rocket and fly it to Mars or Alpha Centauri, and that it was 100% guaranteed to get there, and you'd have the mass and energy of an entire planet at your disposal once you did (a wealth beyond imagining in this post-singularity world), I really doubt that any amount of transit time, or the minuscule resources necessary to make the rocket, would stand in anyone's way for long.

ESPECIALLY given the increased diversity. Every acre on earth has the matter and energy to go into space, and if every one of those 126 billion acres has its own essentially isolated culture, I'd be very surprised if not a single one ever did, even onto the end of the earth.

Honestly I'd be surprised if they didn't do it by Tuesday. I'd expect a subjectively 10 billion year old civilization to be capable of some fairly long-term thinking.

ewbrownv
Agreed. Another detail that is often overlooked is that an electronic intelligence doesn't have to run at maximum possible speed all the time. If an AI or upload wants to travel to Alpha Centauri it can easily slow its subjective time down by whatever factor is needed to make the trip time seem acceptable.
jacob_cannell
My case against outward expansion is not based on issues of patience. It's an economic issue. I should have made this more clear in the article; perhaps I should strike that one sentence about how long interstellar travel will subjectively take for accelerated intelligences, as that's not even really relevant. Outward expansion is unimaginably expensive, risky, and would take massive amounts of time to reach a doubling. Moore's Law offers a much lower-risk route for AGIs to double their population/intelligence/whatever using a tiny fraction of the time and energy required to double through space travel. See my reply above to Mitchell Porter. What's the point? In the best case scenario you can eventually double your population after hundreds or thousands of years. You could spend a tiny fraction of those resources and double your population thousands of times faster by riding Moore's Law. Space travel only ever makes sense if Moore's Law type growth ends completely. There's also the serious risk of losing the craft on the way, and even of discovering that Alpha Centauri is already occupied.
PlaidX

Why WOULDN'T Moore's law type growth end completely? Are you saying the speed of light is unbreakable but the Planck limit isn't?

CarlShulman
The latter point is in tension with the rest of your argument. "No one colonizes the vast resources of space: they're too crowded" doesn't work as a Fermi Paradox explanation. Uncertainty about one's prospects for successfully colonizing first could modestly diminish expected resource gain, but the more this argument seems persuasive, the more it indicates that potential rivals won't beat you to the punch.
jacob_cannell
If older, powerful alien civilizations are already around then colonization may not even be an option for us at all. It's an option for that lucky first civilization, but nobody else.
NancyLebovitz
IIRC, one of the concerns about AIs grabbing as much territory and resources as possible is that they want to improve the odds that nothing else can be a threat to their core mission.
saturn

Aren't you anthropomorphizing AIs? If an AI's goals entail communicating with the rest of the world, the AI has the option to simply wait as long as it takes. Likewise, it's not obvious that an uploaded human would need or want to run at the fastest physically possible timescale all the time.

And if outward- and inward-looking civilizations ever need to compete for resources, it seems like the outward-looking ones would win.

Nothing in this scenario would hold back an AI with an expansionist value system, like a paperclip maximizer or other universe tilers.

Clippy
My thoughts exactly. If all you care about is maximising paperclips, you'll suffer any cost, bear any burden, wait any time, if the project will increase universe-wide paperclippage.
benelliott
"you'll suffer any cost, bear any burden" Why is it a cost or a burden at all? I didn't realise paper-clippers had a term in their utility function for subjective waiting time.
sketerpot
We're paperclip-maximizing on a deadline here. Every millisecond you wait brings you that much closer to the heat death of the universe. When faced with the most important possible mission -- paperclip production, obviously -- you've got to seize every moment as much as possible. To do otherwise would simply be wrong.
benelliott
But that still doesn't mean that subjective perception of time is important. One day is one day, whether or not it feels like a century.
Clippy
My point was that it's not, human. My statement is equivalent to saying that these other factors do not influence a clippy's decision once the expected paperclippage of the various options is known.
jacob_cannell
The space of value systems is vast, but I don't think the particular subspace of value systems that attempt to maximize some simple pattern (such as paperclips) is large enough in terms of probabilistic likelihood mass to even warrant discussion. And even if it was, even simple maximizers will first ride Moore's Law if they have a long planning horizon. The space of expansionist replicator-type value systems (intelligences which value replicating entire entity patterns similar to themselves or some component self-patterns) is a large, high likelihood cut of design space. The goal of a replicator is to make more of itself. A rational replicator will pursue the replication path that has the highest expected exponential rate of replication for the cost, which we can analyze in economic terms. If you actually analyze the cost of interstellar replication, it is vastly many orders of magnitude more expensive and less efficient than replicating by doubling the efficiency of your matter encoding. You can double your population/intelligence/whatever by becoming smaller, quicker and more efficient through riding Moore's Law, and the growth rate of that strategy is vastly orders of magnitude higher than the rate of return provided by interstellar travel. This blog post discusses some of the cost estimates of interstellar travel. Interstellar travel only makes sense when it is the best investment option to maximize replication rate of return. Consider that long before interstellar replication is economical interplanetary expansion to the moon and mars would be exploited first. And long long before that actually becomes a wise investment, replicators will first expand to Antarctica. So why is Antarctica not colonized? Expanding to utilize most of Earth's mass is only rational to replicators when Moore's Law type growth stalls completely. So hypothesizing that interstellar travel is viable is equivalent to making a long term bet about what will happen at the end of Moore's
CarlShulman
It's not a question of ruling out the scenario, just driving down its probability to low levels. Current physics indicates that we can't increase computation indefinitely in this way. It may be wrong, but that's the place to put most of our probability mass. When we consider new physics, they might increase the returns to colonization (e.g. more computation using bigger black holes) or have little effect, with only a portion of our probability mass going to the "vast inner expansion" scenarios. Even in those scenarios, there's still the intelligence explosion dynamic to consider. At each level of computational efficiency it may be relatively easy or hard to push onwards to the next level: there might be many orders of magnitude of easy gains followed by some orders of difficult ones, and so forth. As long as there are bottlenecks somewhere along the technology trajectory, civilizations should spend most of their time there, and would benefit from additional resources to advance through the bottlenecks. Combining these factors, you're left with a possibility that seems to be non-vanishing but also small.
NancyLebovitz
Replicators might be a tiny part of AI-space, while still being quite a large part of the space of AIs likely to be invented by biologically evolved organisms.
Desrtopa
The entire scenario of this post rests on this "what if," and it's not a very probable one. There appear to be hard theoretical limits to the speed of computation and the amount of computation that can be performed with a given amount of energy, and there may easily be practical limitations which set the bounds considerably lower. Assuming that there are limits is the default position, and in an intelligence explosion, it's quite likely that the AI will reach those limits quite quickly, unless the resources available on Earth alone do not allow for it.
jacob_cannell
That wiki entry is wrong and/or out of date. It only considers strictly classical irreversible computation; it doesn't mention quantum and reversible computation. But as to the larger question: yes, I think there are probably eventual limits, but even this cannot yet be said for certain until we have a complete unified theory of physics: quantum gravity and whatnot. From what we do understand of current physics, the limits of computation take us down to singularities, regions of space-time similar to the big bang: black holes, wormholes, etc. type objects, which are not fully understood in current physics. Also, the larger trend towards greater complexity is not really dependent on computational growth per se. At the higher level of abstraction, the computational resources of the earth haven't changed much since its formation. All of the complexity increase since then has been various forms of reorganization of matter/energy patterns. Increasing computational density is just one form of complexity-increasing transformation. Complexity can continue to increase at many other levels of organization (software, mental, knowledge, organizational, meta, etc.) So the more important general question is this: is there an absolute final limit to the future complexity of the earth system? And if we reach that, what happens next?
Desrtopa
Can you explain what this complexity is and why you want so much of it?
jacob_cannell
See my other recent reply on our other thread.
whpearson
Are you assuming the memory grows in proportion to your input bandwidth?

Some linguistics nitpicks:

The greater monogenesis theory, which derives all extant languages and cultures from a single distant historical proto-language, is a matter of debate amongst linguists, but the similarity in many low-level root words is far beyond chance.

If you mean the similarity between word roots on a world-wide scale, the answer is decisively no. Human language vocabularies are large enough that many seductive-looking similarities will necessarily exist by pure chance, and nothing more than that has ever been observed on a world-wide scale. Mark Rosenfelder has a good article dealing with this issue on his web pages.

In fact, the way human languages are known to change implies that common words inherited from a universal root language spoken many millenniums ago would not look at all the same today. It's a common misconception that there are some "basic" words that change more slowly than others, but in reality, the way it works is that the same phoneme changes the exact same way in all words, or at most depending on some simple rules about surrounding phonemes, with very few exceptions. So that "basic" words end up diverging like all others.

One confoun...

Upvoted for raising some very important topics. But I disagree on a few points.

One is the assumption that 'subjective time' is related to the discount rate - that if a super-intelligence can do as much thinking in a day as we can do in a century, then it will care as little about tomorrow as we care about the next century. I would make a different assumption - that the 'natural' discount rate is more closely related to the machine's expected lifetime (when it expects indexical utility flows to cease) and to its planning horizon (when its expectations regarding the future environment become no better than guesses).

The second is the failure to distinguish communication latencies from communication bandwidths. Both are important, but they play different roles. According to some theories of consciousness, it is an essentially serial phenomenon, and hence latencies matter a lot. So, while it may be possible to construct a mind whose physical substrate is distributed between Earth and Jupiter's moons, it probably won't be possible to construct a consciousness divided in this way. At least not a consciousness that could pass a Turing test.

jacob_cannell
I completely agree with your points. I didn't mean to imply that subjective time is related to the discount rate, and I tend to agree that the 'natural' discount rate and planning horizon are probably related to expected lifetime for most agents. But it's difficult to show why this should always tend to be so. The time dilation for extremely fast thinkers will slow down the subjective rate of return of Moore's Law type investments just as much as space expansion type investments; that's not really the core of the argument against expansion. Where did I confuse these two? I discussed both. Latency subjectively increases with rate of thought and bandwidth decreases, respectively. They both contribute to divergence.

Talking about whether an AI would or would not want to expand indefinitely is sort of missing the point. Barring a completely dominant singleton, someone is going to expand beyond Earth with overwhelming probability. The legacy of humans will be completely dominated by those who didn't stay on Earth. It doesn't matter whether the social impulse is generally towards expansion.

Edit: To be more precise, arguments that "most possible minds wouldn't want to expand" must be incredibly strong in order to have any bearing whatsoever on the long term likelihood of expansion. I don't really buy your argument at all (I would be happy to create new worlds inhabited by like-minded people even if there was a long communication delay between us...) but it seems like your argument isn't even claiming to be strong enough to matter.

Some other notes: you can't really expand inwards very much. You can only fit so much data into a small space (unless our understanding of relativity is wrong, in which case the discussion is irrelevant). Of course, you hit a much earlier limit if you aren't willing to send something to the stars to harvest resources. Maybe these limits seem distant to us, but t...

This seems to rest on unfounded anthropomorphization. If the AI doesn't have the patience to deal with processes that occur over extremely long time periods relative to its speed of thought, its usefulness to us is dramatically limited. The salient question is not whether it takes a long time from the AI's perspective, only whether, in the long run, it increases utility or not.

Small error at "It's difficult to conceive of an intelligence that experiences around 30,000 years in just one second"

One billion * one second = ~30 years, not ~30,000 years.

komponisto
Well, unless you're European. :-)
jacob_cannell
Whoop! Thanks, corrected.

A related empirical data point is that we already see strong light cone effects in electronic markets. The machine decision speeds are so fast that it is not possible to usefully communicate with similarly fast machines outside of a radius of some small number of kilometers because the state of reality at one machine changes faster than it can propagate that information to another due to speed of light limitations. The diminishing ability to influence decisions as a function of distance raises questions about the relevancy of most long haul communication b...

jacob_cannell
Good points. Looking at how the effect is already present in high speed digital trading makes it more immediately relevant, and perhaps we could generalize from some of those lessons for futures dominated by high speed intelligences. Yes, this is a related divergent effect. The idea of copying the internet into local caches to reduce latency is an example.

I didn't like this article at all. Loads of references and mathematics, all founded on an absurd premise: that unspecified AGIs and AGI-supported humanity would prefer not to harvest the future light cone just because they can think really fast. Most possible mind designs just don't care.

Facing the future it appears that looking outwards into space is looking into the past, for the future lies in innerspace, not outerspace.

If there is just one agent that disagrees all the navel gazer AIs in the world become irrelevant.

jacob_cannell
See my other replies - the argument is based on economic rate of return (risk adjusted doubling time or exponential growth of your population/intelligence/GDP). Interstellar expansion has a terrible growth rate compared to riding Moore's Law. It also assumes that space is empty.

I came to a similar conclusion after reading Accelerando, but don't forget about existential risk. Some intelligent agents don't care what happens in a future they never experience, but many humans do, and if a Friendly Singularity occurs, it will probably preserve our drive to make the future a good one even if we aren't around to see it. Matrioshka brain beats space colonization; supernova beats matrioshka brain; space colonization beats supernova.

If you care about that sort of thing, it pays to diversify.

Nornagest
I don't have the astrophysics background to say for sure, but if subjective time is a function of total computational resources and computational resources are a function of energy input, then you might well get more subjective time out of a highly luminous supernova precursor than a red dwarf with a lifetime of a trillion years. Existential risk isn't going to be seen in the same way in a CPU-bound civilization as in a time-bound one.
Luke Stebbing
If computation is bound by energy input and you're prepared to take advantage of a supernova, you still only get one massive burst and then you're done. Think of how many future civilizations could be supercharged and then destroyed by supernovae if only you'd launched that space colonization program first!

Are you suggesting that AIs would get bored of exploring physical space, and just spend their time thinking to themselves? Or is your point that a hyper-accelerated civilisation would be more prone to fragmentation, making different thought patterns likely to emerge, maybe resulting in a war of some sort?

If I got bored of watching a bullet fly across the room, I'd probably just go to sleep for a few milliseconds. No need to waste processor cycles on consciousness when there are NP-complete problems that need solving.

jacob_cannell
I'm suggesting AIs will largely inhabit the metaverse: an expanding multiverse of pervasive simulated realities that flow at their accelerated speeds. The external physical universe will be too slow and boring. I imagine that in the metaverse uploads and AIs will be doing everything humans have ever dreamed of, and far more. Yes, divergence or fragmentation seems in the cards, so to speak, because of the relative bandwidth/latency considerations. However that doesn't necessarily imply war or instability (although nor could I rule that out). Watching the real world would be just one activity; there would be countless other worlds and realities to explore.

Nick Bostrom seems to have introduced the Singleton concept to the Singularity/Futurist discourse here.

I don't think so. It dates back at least to early 2001 on SL4. It didn't come from Nick Bostrom.

Eliezer Yudkowsky
I remember getting the word from Bostrom.
timtyler
O. I stand corrected. Thanks!
MichaelHoward
I can't remember if the word "singleton" was used, but the concepts were being discussed on the extropian mailing list as early as about 1993, and I don't think it was new then.

Is it possible then, that with the inefficiencies inherent in planet-wide ultra-speed communication, that an AI on that level would not be competing for most of the world's resources, and so choose not to interfere too much with the slow-speed humans?

jacob_cannell
It's hard to generalize the goals of all AIs. It's a little easier for uploads and human-like AIs, I imagine most will be interested in exploring the metaverse but will still require and desire physical resources in the form of energy and matter for computation and storage. I suspect that many will also have larger scale concerns with how the earth is managed. Time dilation may cause them to spend less time proportionally thinking about it and more time in their simulated realities, but ultimately the simulations still depend on an outer physical world.

Interesting too is the concept of amorphous, distributed and time-lagged consciousness.

Our own consciousness arises from an asynchronous computing substrate, and you can't help but wonder what weird schizophrenia would inhabit a "single" brain that stretches and spreads for miles. What would that be like? Ideas that spread like wildfire, and moods that swing literally with the tides?

Such a Mind would experience a million fold time dilation, or an entire subjective year every thirty seconds.

five minutes would correspond to an unimaginable decade of subjective time for an acceleration level 6 hyperintelligence.

architectural optimizations over the brain and higher clock rates could lead to acceleration level 9 hyperintelligences.

Acceleration level 9 stretches the limits of human imagination. It's difficult to conceive of an intelligence that experiences around 30 years in just one second, or a billion subjective years for every siderea

...
TheOtherDave
I would expect minds separated by such a latency gulf to simply send longer messages. That's what a lot of human correspondents have historically done in similar situations, anyway, and it seems a reasonable way to continue communication. But perhaps I'm being parochial.

A very thought-provoking and well-written article. Thanks!

Your biggest conceptual jump seems to be reasoning about the subjective experience of hyperintelligences by analogy to human experiences. That is, an experience of some thought/communication speed ratio for a hyperintelligence would be "like" a human experience of that same ratio. But hyperintelligences aren't just faster. I think they'd probably be very very different qualitatively. Who knows if the costs / benefits of time-consuming communication will be perceived in similar or even recognizable ways?

Desrtopa
jacob_cannell has gone on record as anticipating that strong AI will actually be designed by circuit simulation of the human brain. This explains why so many of his posts and comments have such a tendency to anthropomorphize AI, and also, I think, why they tend to be heavy on the interesting ideas, light on the realistic scenarios.
jacob_cannell
I did? I don't think early strong AI will be an exact circuit simulation of the brain, although I do think it will employ many of the same principles. However, using the brain's circuit as an example is useful for future modelling. If blind evolution could produce that particular circuit, which uses a certain number of components to perform those kinds of thoughts in a certain number of cycles, we should eventually be able to do the same work using similar or fewer components and similar or fewer cycles.
Desrtopa
It would probably have been fairer if I'd said "approximate simulation." But if we actually had a sufficient reductionist understanding of the brain and how it gives rise to a unified mind architecture to create an approximate simulation which is smarter than we are and safe, we wouldn't need to create an approximation of the human brain at all, and it would almost certainly not be even close to the best approach we could take to creating an optimally friendly AI. When it comes to rational minds which use their intelligence efficiently to increase utility in an altruistic manner, anything like the human brain is a lousy thing to settle for.
jacob_cannell
Thanks, I think the time dilation issue is not typically considered in visions of future AGI society and could prove to be a powerful constraint. I agree they will probably think differently, if not immediately then eventually as the space of mind architectures is explored. Still we can analyze the delay factor from an abstract computational point of view and reach some conclusions without getting into specific qualitative features of what certain types of thought are "like". I find it hard to estimate likelihoods of different types of qualitative divergences from human-like mind architectures. On the one hand we have the example of early cells such as bacteria which radiated into a massive array of specialized forms, but life is all built around variations of a few old general designs for cells. So are human minds like that? Is that the right analogy? On the other hand we can see human brain architecture as just one particular point in a vast space of possibility.