For discussion of the general response to hypothetical ticking time-bomb cases, in which one knows with unrealistic certainty that a violation of an ethical injunction will pay off, when in reality such an apparent assessment is more likely to result from bias and a shortsighted, incomplete picture of the situation (e.g. the impact of being the kind of person who would do such a thing), see the linked post.
With respect to the idea of neo-Luddite wrongdoing, I'll quote a previous comment:
...The Unabomber attacked innocent people in a way that did not slow down technology advancement and brought ill repute to his cause. The Luddites accomplished nothing. Some criminal nutcase hurting people in the name of preventing AI risks would just stigmatize his ideas, and bring about impenetrable security for AI development in the future without actually improving the odds of a good outcome (when X can make AGI, others will be able to do so then, or soon after).
"Ticking time bomb cases" are offered to justify legalizing torture, but they essentially never happen: there is always vastly more uncertainty and lower expected benefits. It's dangerous to use such hypotheticals as a way to j
As direct moderator censorship seems to provoke a lot of bad feeling, I would encourage everyone to downvote this into oblivion, or the original poster to voluntarily delete it, for the reasons given in highly upvoted comments below. Or search on "UTTERLY FUCKING STUPID", without quotes.
Given that (redacted), it is a very, very, VERY bad idea to start talking about (redacted), and I would suggest you should probably delete this post to avoid encouraging such behaviour.
EDIT: Original post has now been edited, and so I've done likewise here. I ask anyone coming along now to accept that neither the original post nor the original version of this comment contained anything helpful to anyone, and that I was not suggesting censorship of ideas, but caution about talking about hypotheticals that others might not see as such.
"Bad argument gets counterargument. Does not get bullet. Never. Never ever never for ever."
But I'll propose a possibly even more scarily cultish idea:
Why attempt to perfect human rationality? Because someone's going to invent uploading sometime. And if the first uploaded person is not sufficiently rational, they will rapidly become Unfriendly AI; but if they are sufficiently rational, then there's a chance they will become Friendly AI.
(The same argument can be used for increasing human compassion, of course. Sufficiently advanced compassion requires rationality, though.)
(Tangentially:)
And if the first uploaded person is not sufficiently rational, they will rapidly become Unfriendly AI
"Will" is far too strong. Becoming UFAI at least requires that an upload be given sufficient ability to self-modify (or sufficiently modified from outside), and that IA up to superintelligence on uploads be not only tractable (likely but not guaranteed) but, if it's going to be the first upload, easy enough that lots more uploads don't get made first. Digital intelligences are not intrinsically, automatically hard takeoff risks, which it sounds like you're modeling them as. (Not to mention, up to a point insufficient rationality would make an upload less likely to ever successfully increase its intelligence.)
(That said, there are lots of risks and horrible scenarios involving uploads that don't require strong superintelligence, just subjective speedup or copiability.)
Note to any readers: This subthread is discussing the general and unambiguously universal claim conveyed by a particular Eliezer quote. There are no connotations for the AGI prevention fiasco beyond the rejection of that particular soldier as it is used here or anywhere else.
If you predictably have no ethics when the world is at stake, people who know this won't trust you when you think the world is at stake. That could also get everybody killed.
I appreciate ethics. I've made multiple references to the 'ethical injunctions' post in this thread and tend to do so often elsewhere - I rate it as the second most valuable post on the site, after 'subjectively objective'.
Where people often seem to get confused is in conflating 'having ethics' with being nice. There are situations where not shooting at people is an ethical violation. (Think neglecting duties when there is risk involved.) Pacifism is not intrinsically ethically privileged.
The problem with the rule:
"Bad argument gets counterargument. Does not get bullet. Never. Never ever never for ever."
... is not that it is advocating doing the Right Thing even in extreme scenarios. The problem is that it is advocating doing the wrong thing in precisely the scenarios where following the rule could get everybody killed.
It says "bad *argument" not "Bad person shooting at you". Self-defence (or defence of one's family, country, world, whatever) is perfectly acceptable - initiation of violence never is. It's never right to throw the first punch, but can be right to throw the last.
I approve of that sentiment so long as people don't actually take it literally when the world is at stake. Because that could get everybody killed.
Mind you, in this case there are even more exceptions. Initiation of violence, throwing the first punch, is appropriate in all sorts of situations. In fact, in the majority of cases where it is appropriate to throw the second punch, throwing the first punch is better, because the first punch could kill or injure you. The only reason not to preempt the punch (given that you will need to respond with a punch anyway) is for the purpose of signalling to people like yourself.
In these kinds of cases it can be wise to pay lip service to a 'never throw the first punch' moral but actually follow a rational approach when a near-mode situation arises.
Let me remind you: the world is at stake. You, everybody you care about, and your entire species will die, and the future light cone will be left barren or tiled with dystopic junk. That is not a time to be worrying about upholding your culture's moral ideals. Save the @#%! world!
This is a site devoted to rationality, supposedly. How rational is it to...
Comments of this form are almost always objectionable.
It's hyperbole, and, worse, hyperbole that might be both incitement to violence and possibly self-incriminating if one of those people does get shot. If the world in which $randomAIresearcher (who wasn't anywhere near achieving hir goal anyway) gets shot, the SIAI is shut down as a terrorist organisation, and you get arrested for incitement to violence seems optimal to you, then by all means keep making statements like the one above...
Are you trying to be ironic here? You criticize hyperbole while writing that?
No, I am being perfectly serious. There are several people in this thread, yourself included, who are coming very close to advocating - or have already advocated - the murder of scientific researchers. Should any of them get murdered (and, as I pointed out in my original comment, which I later redacted in the hope that, as the OP had redacted his post, this would all blow over, Ben Goertzel has reported getting at least two separate death threats from people who have read the SIAI's arguments, so this is not as low a probability as we might hope), then the finger will point rather heavily at the people in this thread. Murdering people is wrong, but advocating murder on the public internet is not just wrong but UTTERLY FUCKING STUPID.
It's probably easier to build an uncaring AI than a friendly one. So, if we assume that someone, somewhere is trying to build an AI without solving friendliness, that person will probably finish before someone who's trying to build a friendly AI.
I can only infer what you were saying here, but it seems likely that, roughly speaking, I approve of it. It is the sort of thing that people don't consider rationally, instead going off the default reaction that fits a broad class of related ideas.
That sounds like it'd be a rather small conspiracy, rather little assimilation, and rather much hunting.
So you're saying Horizon Wars should be started? Preemptive strikes against any non-FAI programmer or organization out there?
Sweet!
It's probably easier to build an uncaring AI than a friendly one. So, if we assume that someone, somewhere is trying to build an AI without solving friendliness, that person will probably finish before someone who's trying to build a friendly AI.
[redacted]
[redacted]
further edit:
Wow, this is getting a rather stronger reaction than I'd anticipated. Clarification: I'm not suggesting practical measures that should be implemented. Jeez. I'm deep in an armchair, thinking about a problem that (for the moment) looks very hypothetical.
For future reference, how should I have gone about asking this question without seeming like I want to mobilize the Turing Police?