TimFreeman comments on SIAI - An Examination

Post author: BrandonReinhart 02 May 2011 07:08AM


Comment author: TimFreeman 15 May 2011 06:22:01PM 2 points

An AI that is successfully "Friendly" poses an existential risk of a kind that other AIs don't pose. The main risk from an unfriendly AI is that it will kill all humans. That isn't much of a risk.

What do you mean by existential risk, then? I thought things that killed all humans were, by definition, existential risks.

humans are on the way out in any case.

What, if anything, do you value that you expect to exist in the long term?

There are arguments that [a UFAI] will inevitably take resources away from humans, but these are just that: arguments.

Pretty compelling arguments, IMO. The reasoning is simple: the vast majority of goals can be achieved more easily with more resources, and humans currently control those resources, so an entity capable of self-improvement will tend to seize control of whatever resources it can, taking them away from humans in the process.

Do you have a counterargument, or something relevant to the issue that isn't just an argument?