According to Robin Hanson's arguments in this blog post, we want to promote research into cell modeling technology (ideally at the expense of research into faster computer hardware). That would mean funding this kickstarter, which is ending in 11 hours (it may still succeed; there are a few tricks for pushing borderline kickstarters through). I already pledged $250; I'm not sure whether I should pledge significantly more on the strength of one Hanson blog post. Thoughts from anyone? (I also encourage other folks to pledge! Maybe we can name neurons after characters in HPMOR or something. EDIT: Or maybe funding OpenWorm is a bad idea; see this link.)
People doing philosophical work to try to reduce existential risk are largely wasting their time. Tyler doesn’t think it’s a serious effort, though it may be good publicity for something that will pay off later. A serious effort looks more like the parts of the US government that trained people to infiltrate the post-collapse Soviet Union and then locate and neutralize nuclear weapons. There was also a serious effort by the people who set up hotlines between leaders so they could communicate quickly about nuclear attacks (e.g., to help convince a leader in country A that a fishy object on their radar isn’t an incoming nuclear attack).
You seem awfully confident. I agree that you're likely right, but I think it's hard to know for sure, and most people who speak on this issue are too confident (including you, and both EY and RH in their AI foom debate).
Just to clarify: so you mostly agree with the "Bad Emulation Advance" blog post?
It's not clear to me that a gradual transition completely defeats the argument against neuromorphic AI. If neuromorphic AI is less predictable (to put it poetically, "harder to wield") than AI constructed so that it provably satisfies certain properties, then you can imagine humanity wielding a bigger and bigger weapon that it finds harder and harder to control. How long do you think the world would last if everyone had a pistol that fired tactical nuclear weapons? What if the pistol had a one-in-six chance of firing in a random direction?
Want to point to a specific case where I did that?
That's an interesting point. I think it probably makes sense to think of "robust provably safe" as a continuous parameter. You've got your module that determines what's ethical and what isn't, your module that makes predictions, and your module that generates plans. The probability of your AI being "safe" is (assuming the modules fail independently) the product of the probabilities of each module being "safe". If a neuromorphic AI self-modifies in a less predictable way, that seems like a loss even if you keep the ethics module constant.
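To make that product intuition concrete, here's a minimal sketch, assuming the modules fail independently; the module names and probabilities are made up purely for illustration:

```python
# Toy sketch of the "overall safety is a product over modules" intuition.
# The names and numbers below are hypothetical, and the multiplication
# assumes the modules fail independently of one another.
module_safety = {
    "ethics": 0.99,      # hypothetical P(ethics module behaves as intended)
    "prediction": 0.95,  # hypothetical P(world-model is accurate enough)
    "planning": 0.95,    # hypothetical P(planner respects the ethics module)
}

p_safe = 1.0
for name, p in module_safety.items():
    p_safe *= p

print(f"Overall P(safe) under independence: {p_safe:.3f}")  # ~0.893
```

The point is just that even modest per-module uncertainty compounds, and a module that self-modifies unpredictably is the factor in that product you can't pin down.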
There's a false equivalence, similar to what would happen if I predicted "the lottery will not roll 12345134" and someone else predicted "the lottery will roll 12345134". Predicting some sudden change in a growth curve, along with the cause of such a change, is a guess into a large space of possibilities; if such a guess i...