According to Robin Hanson's arguments in this blog post, we want to promote research into cell modeling technology (ideally at the expense of research into faster computer hardware). That would mean funding this kickstarter, which is ending in 11 hours (it may still succeed; there are a few tricks for pushing borderline kickstarters through). I already pledged $250; I'm not sure whether I should pledge significantly more on the strength of one Hanson blog post. Thoughts from anyone? (I also encourage other folks to pledge! Maybe we can name neurons after characters in HPMOR or something. EDIT: Or maybe funding OpenWorm is a bad idea; see this link.)
People doing philosophical work to try to reduce existential risk are largely wasting their time. Tyler doesn't think it's a serious effort, though it may be good publicity for something that will pay off later. A serious effort looks more like the parts of the US government that trained people to infiltrate the post-collapse Soviet Union and then locate and neutralize nuclear weapons. Another serious effort was made by the people who set up hotlines between national leaders so they could quickly communicate about nuclear attacks (e.g., to help quickly convince a leader in country A that a fishy object on their radar isn't an incoming nuclear strike).
Sure. So to give you some context, WBE is an acronym for Whole Brain Emulation. You can read this report by the Future of Humanity Institute at Oxford, where they detail the technological developments that would be required to achieve it. If it were achieved, the consequences for society could be enormous. An emulated brain could do the same work as a software developer, researcher, etc., but it could be run at a much faster subjective rate (say, doing a week's worth of subjective thinking in the space of 5 minutes) and for less money (given continued decreases in the cost of computer hardware). Robin Hanson, the GMU economics professor who wrote the "Bad Emulation Advance" blog post, is very interested in ems... here is a presentation where he fleshes out the possibilities of an emulation-filled future in some detail. It's not a terrible future, but it's not a terrifically bright one either.
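To put a number on that speedup example: one subjective week compressed into 5 minutes of wall-clock time implies roughly a 2,000x speed multiplier. Here's a quick back-of-envelope sketch, using only the illustrative figures from the paragraph above (not a prediction):

```python
# Back-of-envelope: speedup implied by "a week's worth of subjective
# thinking in the space of 5 minutes" (illustrative numbers only).

subjective_seconds = 7 * 24 * 60 * 60  # one subjective week
wall_clock_seconds = 5 * 60            # five minutes of real time

speedup = subjective_seconds / wall_clock_seconds
print(f"Implied speedup: {speedup:.0f}x")  # -> 2016x
```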
One future that probably would be pretty terrible is if we had an extremely intelligent artificial intelligence that was built on some technologies inspired by the human brain but was not a high-resolution exact copy of any living human ("neuromorphic AI"), and was not carefully constructed to work towards achieving human values. You can read this for a short summary of why this would likely be terrible, or this for a longer, more fleshed-out argument (with entertaining background info).
So this is why we want to tread carefully. It's suspected that neuromorphic AI is harder to construct in a robust, provably safe way, and given the dangers of haphazardly constructed superintelligences, it seems like we'd rather see more mathematically pure AI methods advance, if any at all. That's what this link refers to.
I know that was a high density of crazy ideas in a short space, and there are some leaps of reasoning that I left out... let me know if you've got any thoughts or questions. To a futurist like me, things like better diagnostics and therapies for diseases, while interesting and exciting, are small potatoes in the grand scheme of things. When there's the possibility of society being completely transformed by a technology, that's when I start to pay attention. (Hey, it's happened plenty of times before. Imagine explaining the modern world to a Cro-Magnon.) So the main way I see your project is in terms of its incremental advances along various technological dimensions that could lead to societal transformation. Hopefully that makes some sense.
A single human is very massively sub-mankind-level intelligent, and a rough approximation of a human, running at sub-realtime speeds with a daily sustenance cost several orders of magnitude higher, is even more so.
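To make "several orders of magnitude" concrete, here's a toy calculation; every number in it is a placeholder assumption picked purely for illustration, not an estimate from any source:

```python
import math

# Toy comparison of the daily cost of a human worker vs. an early,
# sub-realtime brain emulation. All numbers are made-up placeholders
# chosen only to illustrate "several orders of magnitude".

human_cost_per_day = 50.0             # assumed human daily sustenance, USD
em_hardware_cost_per_day = 500_000.0  # assumed early-em compute bill, USD
em_speed_factor = 0.1                 # assumed sub-realtime: 10x slower than a human

# Cost per *subjective* day of thinking:
human_cost = human_cost_per_day
em_cost = em_hardware_cost_per_day / em_speed_factor

orders_of_magnitude = math.log10(em_cost / human_cost)
print(f"Early em is ~10^{orders_of_magnitude:.0f} times costlier per subjective day")
```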
Granted, it's a great plot device once you give it superpowers, and so there have been many high-profile movies about such scenarios, and you see worlds destroyed by AI on the big screen. And your internal probability evaluator - evolved before CGI - uses the frequency of scenarios you've seen with your own eyes.