wedrifid comments on Extraterrestrial paperclip maximizers - Less Wrong

Post author: multifoliaterose 08 August 2010 08:35PM


Comment author: multifoliaterose 09 August 2010 12:09:19AM *  3 points

(1) Quoting myself,

Such an entity would have special interest in Earth, not because of special interest in acquiring its resources, but because Earth has intelligent lifeforms which may eventually thwart its ends.

Receiving a signal from us would seem to make the direction the signal comes from a preferred direction of exploration/colonization. If space exploration/colonization is sufficiently costly in itself, then an AGI may be forced to engage in triage over which directions it explores.

(2) Creating an AGI is not sufficient to prevent being destroyed by an alien AGI. Depending on which AGI starts engaging in recursive self-improvement first, an alien AGI may be far more powerful than a human-produced AGI.

(3) An AGI may be cautious about exploring so as to avoid encountering more powerful AGIs with differing goals and hence may avoid initiating an indiscriminate exploration/colonization wave in all directions, preferring to hear from other civilizations before exploring too much.

The point about subtle deception made in a comment by dclayh suggests that communication between extraterrestrials may degenerate into a Keynesian beauty contest of second-guessing what the motivations of other extraterrestrials are, how much they know, whether they're faking helplessness or faking power, etc. This points in the direction of it being impossible for extraterrestrials to credibly communicate anything to one another, which suggests that human attempts to communicate with extraterrestrials have zero expected value rather than the negative expected value I suggest in my main post.

Even so, there may be genuine opportunities for information transmission. At present I think the possibility that communicating with extraterrestrials has large negative expected value deserves further consideration, even if it seems that the probable effect of such consideration is to rule out the possibility.

Comment author: wedrifid 09 August 2010 06:55:31PM 1 point

(2) Creating an AGI is not sufficient to prevent being destroyed by an alien AGI. Depending on which AGI starts engaging in recursive self-improvement first, an alien AGI may be far more powerful than a human-produced AGI.

This is true. The extent to which it is significant seems to depend on how quickly AGIs in general can reach ridiculously-diminishing-returns levels of technology. From there, for the most part, a "war" between AGIs would (unless they cooperate with each other to some degree) consist of burning their way to more of the cosmic commons than the other guy.

Comment author: XiXiDu 09 August 2010 07:10:23PM 2 points

This is what I have often thought about. I perceive the usual attitude here to be that once we manage to create FAI, i.e. a positive singularity, we'll be able to enjoy and live our lives ever after. But who says there'll ever be a period without existential risks? Sure, the FAI will take care of all further issues. That's an argument. But generally, as long as you don't want to stay human yourself, is there a real option besides enjoying the present and not caring much about the future, or forever focusing on mere survival?

I mean, what's the point? The argument here is that working now is worth it because in return we'll earn utopia. But that argument counts equally well for fighting alien u/FAI and entropy itself.

Comment author: wedrifid 09 August 2010 07:16:17PM 5 points

The argument here is that working now is worth it because in return we'll earn utopia. But that argument counts equally well for fighting alien u/FAI and entropy itself.

Not equally well. The tiny period of time that is the coming century is what determines the availability of huge amounts of resources and the time in which to use them. When existential risks are far lower (by a whole bunch of orders of magnitude), the ideal way to use resources will be quite different.

Comment author: XiXiDu 09 August 2010 07:37:13PM 1 point

Absolutely, I was just looking for excuses, I guess. Thanks.