royf comments on Fake Causality - Less Wrong

Post author: Eliezer_Yudkowsky, 23 August 2007 06:12PM


Comment author: royf, 11 June 2012 02:43:48AM, 0 points

> A GAI with the utility of burning itself? I don't think that's viable, no.

What do you mean by "viable"?

Intelligence is expensive. More intelligence costs more to obtain and maintain. But the sentiment around here (and this time I agree) seems to be that intelligence "scales", i.e. that it doesn't suffer from diminishing returns in the "middle world" like most other things; hence the singularity.

For that to be true, more intelligence also has to be more rewarding, and not just in the sense of asymptotically approaching optimality. As intelligence increases, it has to keep finding new "revenue streams" for its utility. It must not saturate its utility function; in fact, its utility must be insatiable in the "middle world". A good example is curiosity, which is probably why many biological agents are curious even when curiosity serves no other purpose.
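The distinction between a saturating utility and an insatiable one can be made concrete with a toy numerical sketch (my illustration, not from the original comment; the function names and the particular choice of a bounded exponential versus an unbounded logarithm are assumptions for the example):

```python
import math

def saturating_utility(x):
    """Bounded utility: approaches 1, so extra capability eventually buys almost nothing."""
    return 1 - math.exp(-x)

def insatiable_utility(x):
    """Unbounded utility: marginal gains shrink, but never vanish."""
    return math.log(1 + x)

def marginal_gain(u, x, dx=1.0):
    """How much one more unit of capability is worth at capability level x."""
    return u(x + dx) - u(x)

for x in (1, 10, 100):
    print(f"x={x:>3}: saturating gain = {marginal_gain(saturating_utility, x):.2e}, "
          f"insatiable gain = {marginal_gain(insatiable_utility, x):.2e}")
```

Under the saturating utility, the marginal value of added capability collapses toward zero in the "middle world", so there is no pressure to keep scaling intelligence; under the logarithmic one, each increment remains worth something, which is the shape of incentive the comment argues an intelligence-scaling agent needs.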

Suicide is not such a utility function. We can increase the degree of intelligence an agent needs to have to successfully kill itself (for example, by keeping the gun away). But in the end, it's "all or nothing".

> But anyway, it can't be that Gödelian reasons prevent intelligences from wanting to burn themselves, because people have burned themselves.

Gödel's theorem doesn't prevent any specific thing. In this case I was referring to information-theoretic reasons. And indeed, suicide is not a typical human behavior, even without considering that some contributing factors are irrelevant for our discussion.

> Do you count the more restrictive technology by which humans operate as a constraint which artificial agents may be free of?

> Why not? Though of course it may turn out that AI is best programmed on something unlike our current computer technology.

In that sense, I completely agree with you. I usually don't like making the technology distinction, because I believe there's more important stuff going on in higher levels of abstraction. But if that's where you're coming from then I guess we have resolved our differences :)