William_S comments on Superintelligence 29: Crunch time - Less Wrong

Post author: KatjaGrace 31 March 2015 04:24AM



Comment author: William_S 05 April 2015 02:31:17AM

I think that Andrew Ng's position is somewhat reasonable, especially as applied to technical work - it does seem that human-level AI would require some things we don't understand, which makes the technical work harder before those things are known (though I don't agree that there's no value in technical work today). However, the tone of the analogy to "overpopulation on Mars" leaves open the question of at what point the problem transitions from "something we can't make much progress on today" to "something we can make progress on today". Martian overpopulation would show pretty clear signs before it became a problem, whereas it's quite plausible that the point where technical AI work becomes tractable will not be obvious, and may occur after the point where it's too late to do anything.

I wonder if it would be worth developing and promoting a position that is consistent with technical work seeming intractable and non-urgent today, but with a more clearly defined point at which it becomes something worth working on (e.g. AI passes some test of human-like performance, or some well-defined measure of expert opinion says human-level AI is X years off). In principle, this seems like it would be low-cost for an AI researcher to adopt (though in practice, it might be rejected if AI researchers really believe that dangerous AI is too weird and will never happen).