According to Robin Hanson's arguments in this blog post, we want to promote research into cell modeling technology (ideally at the expense of research into faster computer hardware). That would mean funding this kickstarter, which is ending in 11 hours (it may still succeed; there are a few tricks for pushing borderline kickstarters through). I already pledged $250; I'm not sure if I should pledge significantly more on the strength of one Hanson blog post. Thoughts from anyone? (I also encourage other folks to pledge! Maybe we can name neurons after characters in HPMOR or something. EDIT: Or maybe funding OpenWorm is a bad idea; see this link.)
People doing philosophical work to try to reduce existential risk are largely wasting their time. Tyler doesn’t think it’s a serious effort, though it may be good publicity for something that will pay off later. A serious effort looks more like the parts of the US government that trained people to infiltrate the post-collapse Soviet Union and then locate and neutralize nuclear weapons. There was also a serious effort by the people who set up hotlines between leaders to be used to quickly communicate about nuclear attacks (e.g., to help quickly convince a leader in country A that a fishy object on their radar isn’t an incoming nuclear attack).
There's a false equivalence here, similar to what would happen if I predicted "the lottery will not roll 12345134" and someone else predicted "the lottery will roll 12345134". Predicting a sudden change in a growth curve, along with the cause of that change, is a guess into a large space of possibilities; if such a guess is no better supported than its negation, it is extremely unlikely, and the negation is much more likely.
That strikes me as a rather silly way to look at it. The future generations of biological humans are not predictable or controllable either.
The point is that you need a bottom-up understanding of, for example, suffering, to even begin working on an "ethics module" that recognizes suffering as bad. (We get away without a conscious understanding of it only because we can feel it ourselves and thus implicitly embody a definition of it.) On the road to that, you obviously have cell simulation and other neurobiology.
The broader picture is this: with zero clue as to the technical process of actually building the "ethics module", the fact that, say, OpenWorm doesn't seem to help build one is not representative of whether it actually would or wouldn't help. It only reflects OpenWorm being a concrete, specific advance while the "ethics module" remains too far off and nebulous.
This sounds to me like an argument over priors; I'll tap out at this point.