shminux comments on Discussion: Which futures are good enough? - Less Wrong Discussion

5 Post author: WrongBot 24 February 2013 12:06AM

Comment author: shminux 24 February 2013 12:44:24AM 2 points [-]

Would you want to live in such a utopia?

Comment author: Baughn 24 February 2013 01:40:48AM 2 points [-]

Not particularly, no. I care about communicating with my actual friends and family, not shadows of them.

I believe I'd still prefer this scenario over our current world, assuming those two - or destroying the world - are the only options. That's not very likely, though.

I would very much prefer CelestAI's utopia over this one, aliens and all.

Comment author: ikrase 27 February 2013 10:29:51PM 0 points [-]

I'd take the status quo over this, and would only accept this given extremely low odds of intelligent life existing elsewhere or elsewhen in the universe, with destruction as the alternative.

Comment author: Baughn 28 February 2013 10:48:25AM 0 points [-]

Mm, well.

How about the alternative being probably destruction? I'm not optimistic about our future. I do think we're likely to be alone within this Hubble volume, though.

Comment author: ikrase 01 March 2013 04:56:24AM 0 points [-]

Hmmm. What specific X-risks are you worried about? UFAI beating MUFAI (what I consider this to be) to the punch?

Not sure about 'probably destruction' and no life ever arising in the universe (Hubble volume? Does it matter?). But I think the choice is unrealistic, given the possibility of making another, less terrible AI within a few years.

-A lot of this probably depends on my views on the Singularity and the like: I have never had a particularly high estimation of either the promise or the peril of FOOMing AI.

  • If the AI allowed me to create people inside my subjective universe, and they were allowed to be actual people rather than imitation P-zombies, my acceptance of this would go a lot higher, but I would still shut the project down.

-Hubble volume? Really? I mean, we are possibly the only technological civilization of our level within the galaxy, but the Hubble volume is really, really big (~10^10 galaxies?). And it extends temporally, as well.

Comment author: Baughn 01 March 2013 10:27:50AM 1 point [-]

Hubble volume

It matters. There's a good chance our universe is infinite, but there's also a good chance it's physically impossible to escape the (effectively shrinking) Hubble volume, superintelligence or not.

I'm inclined to think that if there were intelligence in there, we'd probably see it, though. UFAI is a highly probable way for our civilization to end, but that won't stop its offspring from spreading. Yes, the volume is really big, but I expect UFAI to spread at ~lightspeed.

X-risks

UFAI's the big one, but there are a couple others. Biotech-powered script kiddies, nanotech-driven war, etc. Suffice to say I'm not optimistic, and I consider death to be very, very bad. It's not at all clear to me that this scenario is worse than status quo, let alone death.

Comment author: ikrase 01 March 2013 09:15:41PM 1 point [-]

Do we care whether another intelligence is inside or outside of the Hubble volume?

My estimation of the risk from UFAI is lower than (what seems to be) the LW average. I also don't see why limiting the unfriendliness of an AI to this MUFAI should be easier than a) an AI which obeys the commands of an individual human on a short time scale, without massive optimization or abstraction, or b) an AI which only defends us against X-risks.

Comment author: Baughn 02 March 2013 03:39:07AM *  1 point [-]

If there are no other intelligences inside the Hubble volume, then a MUFAI would be unable to interfere with them.

a) is perhaps possible, but long-range optimization is so much more useful that it won't last. You might use an AI like that while creating a better one, if the stars are right; if you don't, you can expect that someone else will.

I like to call this variation (among others) LAI. Limited, that is. It's on a continuum from what we've got now; Google might count.

b) might be possible, at the risk of getting stuck like that. Ideally you'd want the option of upgrading to a better one sooner or later, and ideally without letting just anyone who claims theirs is an FAI override it. But if you knew how to make an AI recognize an FAI, you'd be most of the way to having an FAI. This one's a hard problem, due mostly to human factors.