Vladimir_Nesov comments on SIAI’s Short-Term Research Program - Less Wrong

31 Post author: XiXiDu 24 June 2011 11:43AM


Comment author: Vladimir_Nesov 25 June 2011 12:59:46PM  1 point

In this analogy, the relevant concern maps for me to the notion of "safety" of airplanes. And we know what "safety" for airplanes is: it means people don't die. It's hard to make a proper analogy, since for all ordinary technology the moral questions are easy, and you are left with only technical questions. But with FAI, we also need to do something about moral questions, on an entirely new level.

Comment author: Kaj_Sotala 25 June 2011 04:40:19PM 3 points

I agree that solving FAI also involves solving non-technical, moral questions, and that considerable headway can probably be made on these without knowledge about AGI. I was only saying that there's a limit on how far you can get that way.

How far or near that limit is, I don't know. But I would think that there'd be something useful to be found in pure AGI research earlier than one might naively expect. E.g. the Sequences draw on plenty of math/compsci-related material, and I expect that likewise some applications/techniques from AGI will also be necessary for FAI.