lukeprog comments on Video Q&A with Singularity Institute Executive Director - Less Wrong Discussion
No doubt, a one-paragraph list of sub-problems written in English is "unsatisfactory." That's why we would "really like to write up explanations of these problems in all their technical detail."
But it's not true that the problems are too vague to make progress on them. For example, with regard to the sub-problem of designing an agent architecture capable of having preferences over the external world, recent papers by (SI research associate) Daniel Dewey, Orseau & Ring, and Hibbard each constitute progress.
I doubt this is a problem. We are quite familiar with technical research, and we know how hard it is (in my usual example of what needs to be done to solve many of the FAI sub-problems) for "Claude Shannon to just invent information theory almost out of nothing."
In fact, here is a paragraph I wrote months ago for a (not yet released) document called Open Problems in Friendly Artificial Intelligence:
Also, I regularly say that "Friendly AI might be an incoherent idea, and impossible." But as Nesov said, "Believing problem intractable isn't a step towards solving the problem." Many now-solved problems once looked impossible. In any case, this is one reason to pursue research both on Friendly AI and on "maxipok" solutions that maximize the chance of an "ok" outcome, like Oracle AI.