According to Robin Hanson's arguments in this blog post, we want to promote research into cell modeling technology (ideally at the expense of research into faster computer hardware). That would mean funding this Kickstarter, which is ending in 11 hours (it may still succeed; there are a few tricks for pushing borderline Kickstarters through). I already pledged $250; I'm not sure if I should pledge significantly more on the strength of one Hanson blog post. Thoughts from anyone? (I also encourage other folks to pledge! Maybe we can name neurons after characters in HPMOR or something. EDIT: Or maybe funding OpenWorm is a bad idea; see this link.)
People doing philosophical work to try to reduce existential risk are largely wasting their time. Tyler doesn't think it's a serious effort, though it may be good publicity for something that pays off later. A serious effort looks more like the parts of the US government that trained people to infiltrate the post-collapse Soviet Union and then locate and neutralize nuclear weapons. There was also a serious effort by the people who set up hotlines between leaders for rapid communication about nuclear attacks (e.g., to quickly convince a leader in country A that a suspicious blip on their radar isn't an incoming nuclear strike).
The point is, cell simulation won't yield this stupid AI movie-plot threat that you guys are concerned about, because it doesn't result in sudden superintelligence but in a very gradual transition.
And insofar as there's some residual possibility that it could, that possibility is lessened by working on it earlier.
I'm puzzled why you focus on the AI movie-plot threat when discussing any AI-related technology, but my suspicion is that it's precisely because it is a movie-plot threat.
edit: as for the "robust provably safe AI": as a subset of "safe", an AI must be able to look at, say, an electronic circuit diagram (or an even lower-level representation) and tell whether that circuit implements a tortured sentient being. You'd need neurobiology merely to define what counts as bad. The problem is that "robust provably safe" is nebulous enough that you can't link it to anything concrete.
You seem awfully confident. I agree that you're likely right, but I think it's hard to know for sure, and most people who speak on this issue are too confident (including you, and both EY and RH in their AI-foom debate).