DanielLC comments on What is the most anti-altruistic way to spend a million dollars? - Less Wrong
I have to say I don't get why so many of the comments on this are negative. Surely, if there were a completely legal way to inflict great harm on humanity for only $1 million, then there are plenty of people and groups with both the desire and the resources to do it. The idea that anyone intent on implementing such things would learn about them first on LessWrong seems ludicrous to me.
Anyway, here is an idea:
If they have to succeed to get the million, why would they care about the prize? If they make a friendly AI they won't need the million, and if they make an unfriendly one, they also won't need it, but for different reasons. Even if it's just a human-level AI, it would be worth orders of magnitude more than that.
I think that LWers assign a much higher probability to a FOOM scenario than most people do. Most people probably wouldn't assign much danger to an AI that just seeks to maximize the number of paperclips in the universe while continuously improving its ability to pursue that goal. Someone could build something like that expecting its abilities to level off pretty quickly, and be badly wrong.