
Florian_Dietz comments on LessWrong's attitude towards AI research - Less Wrong Discussion

8 Post author: Florian_Dietz 20 September 2014 03:02PM



Comment author: Florian_Dietz 20 September 2014 04:46:04PM 2 points [-]

I would argue that these two goals are identical. Unless humanity dies out first, someone is eventually going to build an AGI. It is likely that this first AI, if it is friendly, will then prevent the emergence of other AGIs that are unfriendly.

Unless, of course, the plan is to delay the inevitable for as long as possible. But that seems very egoistic, since faster computers will make it easier to build an unfriendly AI in the future, while the difficulty of solving AGI friendliness will not be substantially reduced.

Comment author: ChristianKl 20 September 2014 10:07:56PM 2 points [-]

I don't think building a UFAI is something that you can simply achieve by throwing hardware at it.

I'm also optimistic about improving human reasoning ability over longer timeframes.

Comment author: Florian_Dietz 21 September 2014 04:27:32PM *  1 point [-]

No, it can't be done by brute force alone, but faster hardware means faster feedback, and that means more efficient research.

Also, once we have computers that are fast enough to simulate a human brain, it becomes comparatively easy to hack an AI together by simulating a brain and seeing what happens when you change things. Besides the ethical concerns, this would also be insanely dangerous.