It's probably easier to build an uncaring AI than a friendly one. So, if we assume that someone, somewhere, is trying to build an AI without solving friendliness, that person will probably finish before someone who's trying to build a friendly AI.
[redacted]
[redacted]
further edit:
Wow, this is getting a rather stronger reaction than I'd anticipated. Clarification: I'm not suggesting practical measures that should be implemented. Jeez. I'm deep in an armchair, thinking about a problem that (for the moment) looks very hypothetical.
For future reference, how should I have gone about asking this question without seeming like I want to mobilize the Turing Police?
If you predictably have no ethics when the world is at stake, people (including your allies!) who know this won't trust you when you think the world is at stake. That could also get everybody killed.
(Yes, this isn't going to make the comfortably ethical option always correct, but it's a really important consideration.)
Note to any readers: This subthread is discussing the general and unambiguously universal claim conveyed by a particular Eliezer quote. It has no implications for the AGI prevention fiasco beyond rejecting that particular soldier as it is used here or anywhere else.
I appreciate ethics. I've made multiple references to the 'ethical injunctions' post in this thread and tend...