
timtyler comments on Q&A with Shane Legg on risks from AI

Post author: XiXiDu 17 June 2011 08:58AM

Comment author: timtyler 18 June 2011 07:49:48AM 0 points

So I've seen you raise this point before, but I'm missing something. Like, does your definition of human extinction not cover cases where an AI or aliens keep all the information about humans and can simulate arbitrarily large numbers of them at will but mostly decide not to?

Right - so I am really talking about "information-theoretic" extinction. I figure it is most likely that there will be instantiated humans around, though. Indeed, this era may become one of the most-reconstructed times in history - because of its implications for our descendants' knowledge about the form of alien races. If they subsequently go on to encounter those races, they will want to know what they are potentially up against - and humans are a major part of the clues they have about that.

Do you think that the majority of computation an AI will do will involve simulating humans?

No. Instrumentally, humans would dwindle to a tiny fraction of one percent of the ecosystem. That seems inevitable anyway. Only a totally crazy civilization would keep swarms of organic humans knocking around.

Comment author: Will_Newsome 18 June 2011 03:17:57PM 1 point

Okay, yeah, I definitely agree that information-theoretic extinction is unlikely. I think that basically no one immediately realizes that's what you're talking about, though, 'cuz that's not how basically anyone else uses the word "extinction". They mostly imagine the naive all-humans-die-in-a-fiery-blast scenario, and when you say you don't think that will happen, they're like, of course that will happen - but what you really mean is a non-obvious thing about information value and simulations and stuff. So I guess you're implicitly saying "if you're too uncharitable to guess what credible thing I'm trying to say, that's your problem"? I'm mostly asking 'cuz I do the same thing, but find that it generally doesn't work; there's no real audience, alas.

we get wiped out by aliens.

Any aliens that wipe us out would have to be incredibly advanced, in which case they probably won't throw away their game-theoretic calculations - especially if they're advanced enough to be legitimately concerned about acausal game theory. And they'd have to do it within the next century or so, or else they'll only find posthumans, in which case they're definitely going to learn a thing or two about humanity. (Unless superintelligence goal systems are convergent somehow.)

Comment author: timtyler 18 June 2011 11:01:16PM 0 points

I definitely agree that information-theoretic extinction is unlikely. I think that basically no one immediately realizes that's what you're talking about, though, 'cuz that's not how basically anyone else uses the word "extinction" [...]

So: I immediately went on to say:

I figure it is most likely that there will be instantiated humans around, though.

That is the same sense of "extinction" that everybody else uses. This isn't just a silly word game about what the term "extinction" means.

Comment author: Will_Newsome 23 June 2011 05:46:25PM 0 points

I still don't think people feel like it's the same for some reason. Maybe I'm wrong. I just thought I'd perceived unjustified dismissal of some of your comments a while back and wanted to diagnose the problem.

Comment author: timtyler 23 June 2011 07:52:01PM 0 points

It would be nice if more people would think about the fate of humans in a world which does not care for them.

That is a pretty bad scenario, and many people seem to think that human beings would just have their atoms recycled in that case. As far as I can tell, that seems to be mostly because that is the party line around here.

Universal Instrumental Values which favour preserving the past may well lead to the preservation of humans. More interesting still is the hypothesis that our descendants would be especially interested in 20th-century humans - due to their utility in understanding aliens - and would repeatedly simulate or reenact the run-up to superintelligence, to see what the range of possible outcomes is likely to be. That might explain some otherwise-puzzling things.

Comment author: Will_Newsome 25 June 2011 06:56:33AM 0 points

It's the party line at LW maybe, but not SingInst. 21st-century Earth is a huge attractor for simulations of all kinds. I'm rather interested in coarse simulations of us run by agents very far away in the wave function or in algorithmspace. (Timelessness does weird things, e.g. controlling non-conscious models of yourself that were computed in the "past".) Also, "controlling" analogous algorithms is pretty confusing.

Comment author: timtyler 25 June 2011 08:12:41AM 0 points

It's the party line at LW maybe, but not SingInst.

If so, they keep pretty quiet about it! I expect that, for them, it would be "more convenient" if those superintelligences whose ultimate values did not mention humans would just destroy the world. If many of them would be inclined to keep some humans knocking around, that dilutes the "save the world" funding pitch.

Comment author: Will_Newsome 25 June 2011 08:40:50AM 0 points

I think it's epistemically dangerous to guess at the motivations of "them" when there are so few people and all of them have diverse views. There are only a handful of Research Fellows and it's not like they have blogs where they talk about these things. SingInst is still really small and really diverse.

Comment author: timtyler 25 June 2011 11:42:27AM 0 points

There are only a handful of Research Fellows and it's not like they have blogs where they talk about these things.

Right - so, to be specific, we have things like this:

Any Future not shaped by a goal system with detailed reliable inheritance from human morals and metamorals, will contain almost nothing of worth.

I think I have to agree with the Europan Zugs in disagreeing with that.