DanielLC comments on What is the most anti-altruistic way to spend a million dollars? - Less Wrong

-4 Post author: Punoxysm 24 March 2014 09:50PM




Comment author: DanielLC 29 March 2014 02:13:26AM 0 points [-]

If they have to succeed to get the million, why would they care about the prize? If they make a friendly AI, they won't need the million, and if they make an unfriendly one, they also won't need it, but for different reasons. Even if it's just a human-level AI, it would be worth orders of magnitude more than that.

Comment author: jobe_smith 01 April 2014 02:38:45PM 0 points [-]

I think that LWers assign a much higher probability to a FOOM scenario than most people do. Most people probably wouldn't assign much value to an AI that just seeks to maximize the number of paperclips in the universe and continuously improves its ability to pursue that goal. Someone could build something like that expecting its abilities to level off pretty quickly, and be badly wrong.