That's reasonable. But I personally consider it bad faith if someone makes a comment, I reply to it, and I later find it has been edited so that I appear to be replying to something substantially different. Minor edits for spelling or punctuation are reasonable, but introducing entirely new strands of argument, or deleting arguments that were there originally, gives an incorrect impression of what's actually been said. I'm not going to go back every five minutes to check that the context of my comments hasn't been utterly changed, so I'm only going to reply in more-or-less stable contexts.
> or deleting arguments that were there originally
As I previously mentioned, I have not deleted anything from comments I have written in this thread.
It's probably easier to build an uncaring AI than a friendly one. So, if we assume that someone, somewhere is trying to build an AI without solving friendliness first, that person will probably finish before someone who is trying to build a friendly AI.
[redacted]
[redacted]
Further edit:
Wow, this is getting a rather stronger reaction than I'd anticipated. Clarification: I'm not suggesting practical measures to be implemented. Jeez. I'm deep in an armchair, thinking about a problem that (for the moment) looks very hypothetical.
For future reference, how should I have gone about asking this question without seeming like I want to mobilize the Turing Police?