Yes, I agree. As you point out, that's a general problem with decision-making in an environment where there is some low probability that something spectacularly good might happen if I throw resources at X. (At one point I actually wrote a feature-length screenplay about this, with an AI attempting to throw cosmic resources at religion in a low-probability attempt to unlock infinity. It got reasonably good scores in competition, but I was told at one point that "a computer misunderstanding its programming" was old hat. Oh well.)

My pronouncement of "exactly zero" is just what would follow from taking the stated scientific assumptions at face value, and applying them to the specific argument I was addressing. But I definitely agree that a real-world AI might come up with other arguments for expansion.

I have described certain limits to communication in an expanding cosmological civilization here: https://arxiv.org/abs/2208.07871

Assuming a civilization that expands at close to the speed of light, your only chance to influence the behavior of colonies in most of the distant galaxies must be encoded in what you send toward those galaxies to begin with (i.e. what is in the colonizing probes themselves, plus any updates to instructions you send early on, while they're still en route). This is because the home galaxy (the Milky Way) will never hear so much as a "hello, we've arrived" back from approximately 7/8 of the galaxies that it eventually colonizes (due to a causal horizon).

You'll have some degree of two-way communication with the closest 1/8 of colonized galaxies, though the conversation will be greatly delayed and increasingly truncated with distance.

To see just how truncated, suppose a colony is established in a galaxy, and they send the following message back towards the Milky Way: "Hello, we've arrived and made the most wonderful discovery about our colony's social organization. Yes, it involves ritualistically eating human children, but we think the results are wonderful and speak for themselves. Aren't you proud of us?"

As I mentioned, only for the closest 1/8 of colonized galaxies would that message even make it back to the Milky Way. Only for the closest 1/27 could the Milky Way send a reply saying "What you are doing is WRONG, don't you see? Stop it at once!", and in most cases you can expect that reply to arrive at the colony only after a hundred billion years. And only from the closest 1/64 of colonies could the Milky Way also expect to hear back "Sorry about that. We stopped."
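A back-of-the-envelope sketch of where these fractions come from (my shorthand here, not a substitute for the derivation in the paper linked above; it assumes lightspeed expansion and the finite conformal time remaining in a Λ-dominated future, in units where c = 1, with Δη, r, k, and f_k being notation introduced just for this sketch): the colonization wave reaches comoving distance at most Δη, the remaining conformal time, and each later signal leg must re-cross the comoving distance r to the colony before that time runs out. So a colony at comoving distance r can take part in k signal legs after its founding only if

\[
(k+1)\, r \;\le\; \Delta\eta ,
\]

and, for a uniform comoving density of galaxies, the fraction of colonized volume that qualifies is

\[
f_k \;=\; \left( \frac{\Delta\eta/(k+1)}{\Delta\eta} \right)^{\!3} \;=\; \frac{1}{(k+1)^{3}} ,
\]

which gives f_1 = 1/8 (their message reaches us), f_2 = 1/27 (our reply reaches them), and f_3 = 1/64 (their acknowledgement reaches us).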

That is, unless the currently favored cosmology is completely wrong, which is always in the cards.

So, yeah -- if you want to initiate an expanding cosmological civilization, you'll have to live with the prospect that almost all of it is going to evolve independently of your wishes, in any respect that isn't locked down for all time on day 1.

There is social pressure to hide X. It turns out that X is much more common and much less extreme than one naively imagined. The net effect of X is already out there, so maybe just chill.

But the above story exists in equilibrium with the reverse reaction:

There is social pressure to hide X. It turns out that X is much more common than one naively imagined, and although the average instance of X is not so extreme, the system is actually about to collapse under the cumulative weight of X, and almost nobody is aware until it happens.

A story like this is revealed at the end of every business cycle, where X is some form of corruption we previously thought was held in check by the social pressure against X (which turned out to be insufficiently harsh). For example: approving loans to people we know are unlikely to pay them back.

Toby Ord and I wrote a paper that describes how "expanding cosmological civilizations" (ECCs -- my less-snappy but more descriptive term for "grabby civilizations") update our estimates of any late filters you might want to consider, assuming the Self-Indication Assumption (SIA) as one's school of anthropic reasoning: https://arxiv.org/abs/2106.13348

Basically, suppose you have some prior pdf P(q) on the probability q that we pass any late filter. Considering expanding civilizations then tells you to update it to P(q) → P(q)/q (up to an overall normalization). And this isn't good news, since it upweights low values of q (i.e. lower survival probability).
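As a toy illustration of what that 1/q factor does (numbers invented for illustration, not taken from the paper): write the update as

\[
P_{\text{post}}(q) \;\propto\; \frac{P(q)}{q} .
\]

A prior that puts mass 1/2 on q = 0.1 and 1/2 on q = 0.9 then becomes

\[
P_{\text{post}}(0.1) \;=\; \frac{0.5/0.1}{0.5/0.1 \,+\, 0.5/0.9} \;\approx\; 0.90 ,
\qquad
P_{\text{post}}(0.9) \;\approx\; 0.10 ,
\]

so even prior odds between the optimistic and pessimistic survival estimates turn into roughly 9-to-1 odds favoring the pessimistic one.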

A general argument analogous to this was actually advanced by Katja Grace, long before we started studying expanding cosmological civilizations -- just in the context of regular galactic SETI. But the geometry of ECCs gives it a surprisingly simple, quantifiable form.