lukeprog comments on SIAI’s Short-Term Research Program - Less Wrong

31 points · Post author: XiXiDu 24 June 2011 11:43AM




Comment author: [deleted] 24 June 2011 03:32:01PM *  9 points [-]

"What is missing for the SIAI to actually start working on friendly AI?"

I think that question is answered by Yudkowsky in his interview with Baez:

"I probably need to take at least a year to study up on math, and then—though it may be an idealistic dream—I intend to plunge into the decision theory of self-modifying decision systems and never look back. (And finish the decision theory and implement it and run the AI, at which point, if all goes well, we Win.)"

Yudkowsky's position, widely known, is that it is unsafe to do otherwise. I imagine that is why they are not funding researchers to work on extending MOSES (or any other AGI work for that matter), but that's just speculation on my part.

To learn more about the work people are doing to build AGI, check out the conferences series on AGI at agi-conf.org, organized by Ben Goertzel, advisor to SIAI (formerly Director of Research). Videos of most of the talks and tutorials are available for free, along with PDFs of the conference papers.

Comment author: lukeprog 24 June 2011 09:08:00PM *  8 points [-]

"What is missing for the SIAI to actually start working on friendly AI?"

The biggest problem in designing FAI is that nobody knows how to build AI. If you don't know how to build an AI, it's hard to figure out how to make it friendly. It's like thinking about how to make a computer play chess well before anybody knows how to make a computer.

In the meantime, there's lots of pre-FAI work to be done. There are many unsolved problems in metaethics, decision theory, anthropics, cosmology, and other subjects that seem to be highly relevant to later FAI development. I'm currently working (with others) toward defining those problems so that they can be engaged by the wider academic community.

Comment author: aletheilia 24 June 2011 11:01:18PM 8 points [-]

Even if we presume to know how to build an AI, figuring out the Friendly part still seems to be a long way off. Some AI-building plans and/or architectures (e.g. evolutionary methods) are also totally useless F-wise, even though they may lead to a general AI.

What we actually need is knowledge about how to build a very specific type of AI, and unfortunately, it appears that the A(G)I (sub)field with its "anything that works" attitude isn't going to provide one.

Comment author: lukeprog 24 June 2011 11:55:54PM 2 points [-]

Correct!

Comment author: Vladimir_Nesov 24 June 2011 11:55:55PM 5 points [-]

If you don't know how to build an AI, it's hard to figure out how to make it friendly.

(You don't make an AI friendly. You make a Friendly AI. Making an AI friendly is like making a text file good reading.)

Comment author: lukeprog 25 June 2011 12:11:34AM *  4 points [-]

Yes, I know. 'Making an AI friendly' is just a manner of speaking, like talking about humans having utility functions.

Comment author: Vladimir_Nesov 25 June 2011 12:18:19AM *  4 points [-]

I assumed you knew, which is why it was a parenthetical, mainly clarifying for the benefit of others. It was a disagreement with the method of presentation.

Comment author: lukeprog 25 June 2011 12:30:09AM 1 point [-]

Okay.