DanielLC comments on What is the most anti-altruistic way to spend a million dollars? - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
If they have to succeed to get the million, why would they care about the prize? If they make a friendly AI they won't need the million, and if they make an unfriendly one, they also won't need it, but for different reasons. Even if it's just a human-level AI, it would be worth orders of magnitude more than that.
I think that LWers assign a much higher probability to a FOOM scenario than most people do. Most people probably wouldn't assign much value to an AI that just seeks to maximize the number of paperclips in the universe and continuously attempts to improve its ability to achieve that goal. Someone could build something like that expecting its abilities to level off pretty quickly, and be badly wrong.