Mass_Driver comments on What if AI doesn't quite go FOOM? - Less Wrong

11 Post author: Mass_Driver 20 June 2010 12:03AM




Comment author: PhilGoetz 20 June 2010 04:41:42AM 5 points [-]

An AI that "valued" keeping the world looking roughly the way it does now, that was specifically instructed never to seize control of more than X number of each of several thousand different kinds of resources, and whose principal intended activity was to search for, hunt down, and destroy AIs that seemed to be growing too powerful too quickly might be an acceptable risk.

This would not be acceptable to me, since I hope to be one of those AIs.

The morals of FAI theory don't mesh well at all with the morals of transhumanism. This is surprising, since the people talking about FAI are well aware of transhumanist ideas. It's as if people compartmentalize them and think about only one or the other at a time.

Comment author: Mass_Driver 20 June 2010 04:47:52AM 5 points [-]

This would not be acceptable to me, since I hope to be one of those AIs.

Er, hypothetically would you be willing to wait a decade or so for ordinary humans to erect some safeguards if we could promise you all the medicine you needed to stay healthy? I mean, nothing against your transhumanist aspirations and all, but how am I supposed to distinguish between people who want to become AIs 'cause it's intrinsically awesome and people who want to become AIs in order to take over the world and bend it to their own sinister, narrow ends?

Comment author: PhilGoetz 23 June 2010 07:04:08PM 2 points [-]

The people who want to become AIs in order to take over the world and bend it to their own sinister, narrow ends will try to convince people that everyone else's AIs are dangerous and must be destroyed.

Comment author: Mass_Driver 23 June 2010 09:28:26PM 4 points [-]

I'm not sure whether you're kidding.

As a joke, it's funny.

As a serious rebuttal, I don't think it works. A shield AI's code could be made public in advance of its launch, and could verifiably NOT contain anything like the memories, personality, or secret agenda of the programmers. There's nothing "narrow" about wanting the world to cooperate in enforcing a temporary ban on superintelligent AIs.

Such a desire is, as some other commenters have complained, a bit conservative -- but in light of the unprecedented risks (both in terms of geographic region affected and in terms of hard-to-remove uncertainty), I'll be happy to be a conservative on this issue.

Comment author: UnholySmoke 29 June 2010 03:42:20PM 0 points [-]

Voted up for sheer balls. You have my backing sir.