I recently read Scott Aaronson's "Why Philosophers Should Care About Computational Complexity" (http://arxiv.org/abs/1108.1791), which has a wealth of interesting thought-food. Having chewed on it for a while, I've been thinking through some of the implications and commitments of a computationalist worldview, which I don't think is terribly controversial around here (there's a brief discussion in the paper about the Waterfall Argument, and it's worth reading if you're unfamiliar with either it or the Chinese room thought experiment).
That said, suppose we subscribe to a computationalist worldview. Further suppose that we have a simulation of a human running on some machine. Even further suppose that this simulation is torturing the human through some grisly...
I’ve seen pretty uniform praise from rationalist audiences, so I thought it worth mentioning that the prevailing response I’ve seen from within a leading lab working on AGI is that Eliezer came off as an unhinged lunatic.
For lack of a better way of saying it, folks not enmeshed in the rat tradition (i.e., normies) do not typically respond well to calls to drop bombs on things, even if such a call is a perfectly rational deduction from the underlying premises of the argument. Either Eliezer knew that the entire response to the essay would be dominated by people decrying his call for violence, and this was tactical for 15-dimensional-chess reasons, or he severely overestimated people’s ability to identify that the actual point of disagreement is around p(doom), and not around how governments should respond to an incredibly high p(doom).
This strikes me as a pretty clear failure to communicate.