mindviews comments on What if AI doesn't quite go FOOM? - Less Wrong

11 Post author: Mass_Driver 20 June 2010 12:03AM


Comments (186)


Comment author: PhilGoetz 20 June 2010 04:41:42AM 5 points

An AI that "valued" keeping the world looking roughly the way it does now, that was specifically instructed never to seize control of more than X number of each of several thousand different kinds of resources, and whose principal intended activity was to search for, hunt down, and destroy AIs that seemed to be growing too powerful too quickly might be an acceptable risk.

This would not be acceptable to me, since I hope to be one of those AIs.

The morals of FAI theory don't mesh well at all with the morals of transhumanism. This is surprising, since the people talking about FAI are well aware of transhumanist ideas. It's as if people compartmentalize them and think about only one or the other at a time.

Comment author: mindviews 21 June 2010 06:58:03AM 1 point

The morals of FAI theory don't mesh well at all with the morals of transhumanism.

It's not clear to me that a "transhuman" AI would have the same properties as a "synthetic" AI. I'm assuming that a transhuman AI would be based on scanning a human brain and then running a simulation of it, while a synthetic AI would be more declaratively algorithmic. In that scenario, proving that a given self-modification would be an improvement would be much more difficult for a transhuman AI, so I would treat it differently. Because of that, I'd expect a transhuman AI to be orders of magnitude slower to adapt, and thus less dangerous, than a synthetic AI. For that reason, I think it is reasonable to treat the two classes differently.