DanielLC comments on Open thread, July 29-August 4, 2013 - Less Wrong Discussion

Post author: David_Gerard | 29 July 2013 10:26PM

Comment author: linkhyrule5 | 02 August 2013 08:04:07AM | 5 points

I waffled between putting this here and putting it in the Stupid Questions thread:

Why is the default assumption that a superintelligence of any type will populate its light cone?

I can see why any sort of tiling AI would do this - paperclip maximizers and the like. And for obvious reasons there's an inherent problem with predicting the actions of an alien FAI (friendly relative to alien values).

But it certainly seems to me that a human CEV-equivalent wouldn't necessarily support lightspeed expansion. Humanity has certainly expanded whenever it has had the opportunity - but not at its maximum speed, nor have entire population centers moved. The top few percent of adventurous or less-affluent people leave, and that is all.

On top of this, I ... well, I can't say "can't imagine," but I find it unlikely that a CEV would support mass cloning or generation of humans (though if it supports mass uploading, accelerated living might produce a population boom sufficient to support luminal expansion). In which case, an FAI that did occupy as much space as possible, as rapidly as possible, would find itself spending resources on planets that wouldn't be used for millennia, when it could instead focus on improving local life.

There is, of course, the intelligence-explosion argument, but I'd think even intelligence would hit diminishing marginal returns eventually.

So to sum up, it seems not unreasonable that certain plausible categories of superintelligences would willingly not expand at near-luminal velocities - in which case there's quite a bit more leeway in the Fermi Paradox.

Comment author: DanielLC | 03 August 2013 03:26:45AM | 3 points

Due to the way the universe expands, even if you travel at the speed of light forever, you can only reach a finite portion of it, and the longer you wait, the smaller that portion becomes. Because of this, an AI that doesn't send out probes as fast as possible and, to a lesser extent, as soon as possible, will only ever control a smaller portion of the universe. If you have any preferences about what happens in the rest of the universe, you'd want to leave early.
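
(A minimal numerical sketch of this point, not from the original comments: in a flat universe with a cosmological constant, the comoving distance reachable by a probe launched at scale factor a and travelling at lightspeed forever is the integral of c / (a^2 H(a)) da, which is finite and shrinks the later you launch. The cosmological parameters below are rough, and the function names are my own illustrative choices.)

```python
# Sketch, assuming a flat matter + Lambda cosmology with approximate parameters.
# Reachable comoving distance for a lightspeed probe departing at scale factor
# a_start and travelling forever: integral over a of c / (a^2 * H(a)).
import numpy as np

C_KM_S = 299_792.458             # speed of light, km/s
H0 = 67.7                        # Hubble constant, km/s/Mpc (approximate)
OMEGA_M, OMEGA_L = 0.31, 0.69    # matter and dark-energy fractions (approximate)

def hubble(a):
    """Hubble rate H(a) in km/s/Mpc for a flat matter + Lambda universe."""
    return H0 * np.sqrt(OMEGA_M / a**3 + OMEGA_L)

def reachable_comoving_mpc(a_start, a_max=1e6, n=400_000):
    """Comoving distance (Mpc) a light signal can cover if it departs when the
    scale factor is a_start; the integral converges because of dark energy."""
    a = np.geomspace(a_start, a_max, n)
    integrand = C_KM_S / (a**2 * hubble(a))
    # trapezoid rule; the tail beyond a_max contributes negligibly
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(a)))

for a_start in (1.0, 2.0, 4.0, 8.0):   # a = 1 is today; larger a means launching later
    print(f"depart at a = {a_start}: reach ~ {reachable_comoving_mpc(a_start):,.0f} Mpc")
```

Running this shows the reachable comoving distance dropping steeply the later the probe departs, which is the quantitative force behind "leave early."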

Also, as Oscar said, you don't want the resources you can easily reach to go to waste while you're putting off using them.