It's probably easier to build an uncaring AI than a friendly one. So, if we assume that someone, somewhere is trying to build an AI without solving friendliness, that person will probably finish before someone who's trying to build a friendly AI.
[redacted]
[redacted]
further edit:
Wow, this is getting a rather stronger reaction than I'd anticipated. Clarification: I'm not suggesting practical measures that should be implemented. Jeez. I'm deep in an armchair, thinking about a problem that (for the moment) looks very hypothetical.
For future reference, how should I have gone about asking this question without seeming like I want to mobilize the Turing Police?
"Bad argument gets counterargument. Does not get bullet. Never. Never ever never for ever."
But I'll propose a possibly even more scarily cultish idea:
Why attempt to perfect human rationality? Because someone's going to invent uploading sometime. And if the first uploaded person is not sufficiently rational, they will rapidly become Unfriendly AI; but if they are sufficiently rational, then there's a chance they will become Friendly AI.
(The same argument can be used for increasing human compassion, of course. Sufficiently advanced compassion requires rationality, though.)
(Tangentially:)
"Will" is far too strong. Becoming UFAI at least requires that an upload be given sufficient ability to self-modify (or sufficiently modified from outside), and that IA up to superintelligence on uploads be not only tractable (likely but not guaranteed) but, if it's going to be the first upload, easy enough that lots more uploads don't get made first. Digital intelligences are not intrinsically, automatically hard takeoff risks,... (read more)