My biggest criticism of SI is that I cannot decide between:
A. promoting AI and FAI issues awareness will decrease the chance of UFAI catastrophe; or
B. promoting AI and FAI issues awareness will increase the chance of UFAI catastrophe
This criticism seems distinct from the ones that Holden makes. But it is my primary concern. (Perhaps the closest example is Holden's analogy that SI is trying to develop Facebook before the Internet.)
A seems intuitive. Basically everyone associated with SI assumes that A is true, as far as I can tell. But A is not obvious...
He didn't count on the stupidity of mankind.
"Two things are infinite: the universe and human stupidity; and I'm not sure about the universe."
But why/how is it better spent that way?
"I consistently refuse to be drawn into running the Singularity Institute. I have an overwhelming sense of doom about what happens if I start going down that road."
This strikes me as pretty strange. I would like to hear more about it.
Certainly, one can obtain status by other means, such as by posting at OB and LW, and presenting at conferences, etc. Are there other reasons why you don't want to "run" the Singularity Institute?
I think these are great predictions.
I agree that that's one reasonable interpretation.
I just want to emphasize that that standard is very different from the weaker "if I had to guess, I would say that the person actually committed the crime." The first standard is higher. Also, the law might forbid you from considering certain facts or evidence, even if you know in the back of your mind that the evidence is there and suggestive. There are probably other differences between the standards that I'm not thinking of.
By guilty, do we mean "committed or significantly contributed to the murder"?
Or do we mean "committed or significantly contributed to the murder AND there is enough evidence showing that to satisfy the beyond-a-reasonable-doubt (or Italian equivalent) standard of proof for murder"?
The comments don't seem to make that distinction, but I think it could make a big difference.
I would be surprised if Eliezer would cite Joshua Greene's moral anti-realist view with approval.
GEB has always struck me as more clever than intelligent.
Yvain:
Some points.
The typical mind fallacy sounds just like the "Mind Projection Fallacy," or the empathy gap. It's a fascinating issue.
You sound like you have Asperger tendencies: introverted, geeky, cerebral, sensitive to loud noise. Interestingly, people with Asperger's are famously bad at empathizing; i.e., more likely to commit the Mind Projection Fallacy. This may be one reason why we find the fallacy so fascinating: we've been burned by it before (as you relate in your post), and seem uniquely vulnerable to it.
Although I'm a lawyer, I've developed my own pet meta-approach to philosophy. I call it the "Cognitive Biases Plus Semantic Ambiguity" approach (CB+SA). Both prongs (CB and SA) help explain the amazing lack of progress in philosophy.
First, cognitive biases - or (roughly speaking) cognitive illusions - are persistent by nature. The fact that cognitive illusions (like visual illusions) are persistent, and the fact that philosophy problems are persistent, is not a coincidence. Philosophy problems cluster around those that involve cognitive illusions.