Scott Aaronson has a new 85-page essay up, titled "The Ghost in the Quantum Turing Machine". (Abstract here.) In Section 2.11 (Singulatarianism) he explicitly mentions Eliezer as an influence. But that's just a starting point, and he then moves in a direction that's very far from any kind of LW consensus. Among other things, he suggests that a crucial qualitative difference between a person and a digital upload is that the laws of physics prohibit making perfect copies of a person. Personally, I find the arguments completely unconvincing, but Aaronson is always thought-provoking and fun to read, and this is a good excuse to read about things like (I quote the abstract) "the No-Cloning Theorem, the measurement problem, decoherence, chaos, the arrow of time, the holographic principle, Newcomb's paradox, Boltzmann brains, algorithmic information theory, and the Common Prior Assumption". This is not just a shopping list of buzzwords; these are all important components of the author's main argument. It unfortunately still seems weak to me, but the time spent reading it is not wasted at all.
I feel that his rebuttal of the Libet-like experiments (Section 2.12) is strikingly weak, exactly where it should have been one of his strongest points. Scott says:
What? Just because predicting human behaviour one minute in advance with 99% accuracy is more impressive, that doesn't mean it involves a different kind of process than predicting human behaviour 5 seconds in advance with 60% accuracy. Admittedly, it might imply a different kind of process, maybe even an unachievable or uncomputable one, but it may also just be a matter of better probes and more computational power. Lack of impressiveness is not a refutation at all. Also:
This is plainly wrong, as any Bayesian-minded person will recognize: it all depends on the prior information you are using. Predicting with 99.99% accuracy that any person, faced with the choice between tasting a pleasant cake and receiving a kick in the teeth (or, to stay within the Portal metaphor, being burned alive), will choose the cake is clearly not relevant to the free will debate.
At the same time, predicting a subject's next button press with 60% accuracy, exclusively from neurological data (and very coarsely aggregated data at that, as is the case with fMRI), stands in direct contradiction to the Knightian unpredictability thesis.
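To make this concrete, here is a minimal sketch of the point in information-theoretic terms. The priors and error model below are illustrative assumptions of my own, not figures from Aaronson's essay or from the fMRI studies; the point is only that what matters is not raw accuracy but how much a prediction reduces uncertainty beyond the prior.

```python
from math import log2

def entropy(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

# Case 1: cake vs. a kick in the teeth. Suppose (illustratively) that 99.99%
# of people choose the cake. A "predictor" that always says "cake" is then
# 99.99% accurate, but the prediction is constant, so
# H(outcome | prediction) = H(outcome) and no information is gained:
prior_cake = 0.9999
gain_cake = entropy(prior_cake) - entropy(prior_cake)
print(f"cake predictor: {gain_cake:.3f} bits beyond the prior")    # 0.000

# Case 2: a 50/50 button press predicted from fMRI data alone with 60%
# accuracy and symmetric errors. Here H(outcome) = 1 bit and
# H(outcome | prediction) = H(0.6), so the brain scan genuinely reduces
# uncertainty about the upcoming choice:
gain_button = entropy(0.5) - entropy(0.6)
print(f"fMRI predictor: {gain_button:.3f} bits beyond the prior")  # ~0.029
```

On these toy numbers, the 99.99%-accurate cake prediction carries zero bits beyond the prior, while the unimpressive-sounding 60% fMRI prediction extracts about 0.03 bits from brain data alone, which is exactly the kind of information that Knightian unpredictability says should not be available.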
So would you have been willing to draw the same conclusion from an experiment that predicted the button push 1 second in advance with 99.99999% accuracy by scanning the neurons in the arm?