John_Maxwell_IV comments on Leaving LessWrong for a more rational life - Less Wrong Discussion
It seems I should have picked a different phrase to convey my intended target of ire. The problem isn't concept formation by means of comparing similar reference classes, but rather using thought experiments as evidence and updating on them.
To be sure, thought experiments are useful for noticing when you are confused. They can also be a semi-dark art when used as intuition pumps. Einstein introduced special relativity through a series of thought experiments: he got the reader to notice their confusion over classical electromagnetism in moving reference frames, then provided an intuition pump for how his own relativity worked in contrast. That makes his paper one of the most beautiful works in all of physics. However, it was the experimental evidence which proved Einstein right, not the gedankenexperimenten.
If a thought experiment shows that something does not feel right, that should raise your uncertainty about whether your model of what is going on is correct (notice your confusion), to wit the correct response should be "how can I test my beliefs here?" Do NOT update on thought experiments, as thought experiments are not evidence. The thought experiment triggers an actual experiment—even if that experiment is simply looking up data that has already been collected—and the actual experimental results are what update beliefs.
MIRI has not to my knowledge released any review of existing AGI architectures. If that is their belief, the onus is on them to support it.
He invented the AI box game. If it is an experiment, I don't know what it is testing. It is a setup totally divorced from any sane reality of how AGI might actually develop and what sort of controls might be in place, with built-in rules that favor the AI.
Yet time and time again people such as yourself point me to the AI box games as if they demonstrated anything of note, anything which should cause me to update my beliefs.
It is, I think, the examples set by the sequences and the character of many of the philosophical discussions that happen here that drive people to feel justified in making such ungrounded inferences. And it is that tendency which possibly makes the sequences and/or Less Wrong a memetic hazard.
BTW, I realized there's something else I agree with you on that's probably worth mentioning:
Eliezer in particular, I think, is indeed overconfident in his ability to reason things out from first principles. For example, I think he was overconfident about AI foom (see especially the bit at the end of that essay). And even if he has calibrated his own ability correctly, it's entirely possible that others who lack his intelligence/rationality could pick up the "confident reasoning from first principles" meme, and it would be detrimental to them.
That said, he's definitely a smart guy and I'd want to do more thinking and research before making a confident judgement. What I said is just my current estimate.
Insofar as I object to your post, I'm objecting to the idea that empiricism is the be-all and end-all of rationality tools. I'm inclined to think that philosophy (as described in Paul Graham's essay) is useful and worth learning about and developing.