Vladimir_Nesov comments on Friendly AI Research and Taskification - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
There likely was. The SIAI also seems to have a research program outlined.
Yup. There's a Blue Gene supercomputer that is being used to (among other things) simulate increasingly large portions of the brain at a neuronal level. That's $100m right there, and then we can throw in the funding for pretty much all neuroanatomy research as well. I'd guesstimate the global annual budget for FAI research at $1-2m. I may be defining upload precursors more loosely than you are, so I understand your skepticism.
The majority of your post focuses on the difficulty of taskifying FAI, which makes it sound as though you're arguing for a predetermined conclusion.
Great! :)
Considering that the SIAI is currently highly specialized for FAI research, retooling the organization to do something else entirely seems like a waste of money. Read from that perspective, your post seemed hostile, though I realize that wasn't intended.
Bad argument. If in fact FAI research shouldn't be pursued, then they shouldn't pursue it, no matter the sunk cost.
Agreed. I should have made that premise explicit in my reasoning: if FAI research shouldn't be pursued, then the SIAI should probably be dissolved and its resources directed to more useful approaches. This is why I read multi as hostile: if FAI research is the wrong approach, as he argues, then the SIAI should shut down. Which (in my head) compresses to "multi wants to shut down the SIAI."
Probably not a good assumption; they've changed approaches before (in their earliest days, the idea of FAI hadn't been invented yet, and they were about getting to the Singularity, any Singularity, as quickly as possible). If, hypothetically, there arose some very convincing evidence that FAI is a suboptimal approach to existential risk reduction, then they could change again while retaining their network of donors and smart people and so forth. That probably won't need to happen, but still, shutting down the SIAI wouldn't be the only option (let alone the best option) if it turned out that FAI was a bad idea.
Bad only if it is taken for granted that the SIAI must continue to exist.
Yes, agreed.