
wedrifid comments on Sarah Connor and Existential Risk - Less Wrong Discussion


Comment author: wedrifid 01 May 2011 09:26:58PM 0 points

> It's probably easier to build an uncaring AI than a friendly one. So, if we assume that someone, somewhere, is trying to build an AI without solving friendliness, that person will probably finish before someone who's trying to build a friendly AI.

I can only infer what you were saying here, but it seems likely that I roughly approve of it. It is the sort of thing people don't consider rationally; instead they go off the default reaction that fits a broad class of related ideas.