DanielLC comments on What is the most anti-altruistic way to spend a million dollars? - Less Wrong

-4 Post author: Punoxysm 24 March 2014 09:50PM




Comment author: jobe_smith 25 March 2014 01:31:44PM 8 points [-]

I have to say I don't get why so many of the comments on this are negative. Surely, if there were a completely legal way to inflict great harm on humanity for only $1 million, there would already be plenty of people and groups with both the desire and the resources to do it. The idea that anyone with the desire to implement these schemes would learn about them first on LessWrong seems ludicrous to me.

Anyway, here is an idea:

  1. Offer a $1 million prize for a working self-improving paperclip-maximizing AI. I think this is very unlikely to produce anything, but since it is a prize, you don't actually have to pay it out until someone builds a UFAI that destroys the universe. If no one seems to be working on it, you can always rescind the prize and move on to another evil scheme. I suppose the downside is that somebody might accidentally make a friendly AI while trying to win the prize.

Comment author: DanielLC 29 March 2014 02:13:26AM 0 points [-]

If they have to succeed to get the million, why would they care about the prize? If they make a friendly AI, they won't need the million, and if they make an unfriendly one, they also won't need it, though for different reasons. Even a merely human-level AI would be worth orders of magnitude more than that.

Comment author: jobe_smith 01 April 2014 02:38:45PM 0 points [-]

I think that LWers assign a much higher probability to a FOOM scenario than most people do. Most people probably wouldn't see much danger in an AI that just seeks to maximize the number of paperclips in the universe and continuously tries to improve its ability to achieve that goal. Someone could build something like that expecting its abilities to level off pretty quickly, and be badly wrong.