And obviously there's a ton of money going into cancer research in general, though I wouldn't be surprised if most of it were dedicated to solving specific cancers rather than all cancer at once.
I think the current consensus in the field is that cancer isn't a single disease, so "solve all cancer at once" unfortunately doesn't make a good goal.
It seems I should have picked a different phrase to convey my intended target of ire. The problem isn't concept formation by means of comparing similar reference classes, but rather using thought experiments as evidence and updating on them.
To be sure, thought experiments are useful for noticing when you are confused. They can also be a semi-dark art, providing intuition pumps. Einstein introduced special relativity through a series of thought experiments: he got the reader to notice their confusion over classical electromagnetism in moving reference frames, then provided an intuition pump for how his relativity worked in contrast. It makes his paper one of the most beautiful works in all of physics. However, it was the experimental evidence which proved Einstein right, not the gedankenexperimenten.
If a thought experiment shows that something doesn't feel right, that should raise your uncertainty about whether your model of what is going on is correct (notice your confusion), to wit, the correct response should be “how can I test my beliefs here?” Do NOT update on thought experiments, as thought experiments are not evidence. The thought experiment triggers an actual experiment (even if that experiment is simply looking up data that has already been collected), and the actual experimental results are what update beliefs.
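One way to make this position precise (my framing, not anything stated above): in Bayesian terms, an observation E is evidence for a hypothesis H exactly when the likelihood ratio differs from one. Bayes' rule in odds form:

```latex
% Posterior odds are the prior odds scaled by the likelihood ratio of E.
\frac{P(H \mid E)}{P(\neg H \mid E)}
  = \frac{P(E \mid H)}{P(E \mid \neg H)} \cdot \frac{P(H)}{P(\neg H)}
```

On this reading, "thought experiments are not evidence" is the claim that P(E | H) ≈ P(E | ¬H) when E is merely "the thought experiment feels compelling", so the likelihood ratio is approximately 1 and the posterior should barely move.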
MIRI has not to my knowledge released any review of existing AGI architectures. If that is their belief, the onus is on them to support it.
He invented the AI box game. If it's an experiment, I don't know what it is testing. It is a setup totally divorced from any sane reality for how AGI might actually develop and what sort of controls might be in place, with built-in rules that favor the AI.
Nevertheless, time and time again people such as yourself point me to the AI box games as if they demonstrated anything of note, anything which should cause me to update my beliefs.
It is, I think, the example set by the Sequences and the character of many of the philosophical discussions which happen here that drive people to feel justified in making such ungrounded inferences. And it is that tendency which possibly makes the Sequences and/or Less Wrong a memetic hazard.
I agree with you very strongly here.
…but I disagree with you here.
Thought experiments, reasoning by analogy, and the like are ways to explore hypothesis space. Elevating hypotheses for consideration is updating. Someone with excellent Bayesian calibration would update much, much less on thought experiments than on empirical tests, but you run into really serious problems of reasoning if you pretend that the type of updating is fundamentally different in the two cases.
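A minimal numerical sketch of that point (the likelihood ratios here are assumed purely for illustration, not taken from anything above): both kinds of update go through the same Bayes rule; they differ only in the assumed strength of the evidence.

```python
def update(prior_odds: float, likelihood_ratio: float) -> float:
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

prior_odds = 1.0  # even odds for the hypothesis

# Assumed numbers, for illustration only:
thought_experiment_lr = 1.2  # weakly favors H: a small but real update
empirical_test_lr = 20.0     # strongly favors H: a large update

after_thought = update(prior_odds, thought_experiment_lr)
after_test = update(after_thought, empirical_test_lr)

print(f"odds after thought experiment: {after_thought:.2f}")  # 1.20
print(f"odds after empirical test:     {after_test:.2f}")     # 24.00
```

On these assumed numbers the thought experiment nudges the odds while the empirical test dominates, which is exactly the calibration point: the two updates differ enormously in magnitude, not in kind.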
I want to emphasize that I think you're highlighting a strength this community would do well to honor and internalize. I strongly agree with a core point I see you making.
But I think you might be condemning screwdrivers because you've noticed that hammers are really super-important.