cousin_it comments on What big goals do we have? - Less Wrong

10 Post author: cousin_it 19 January 2010 04:35PM


Comment author: cousin_it 20 January 2010 08:15:29AM 0 points

How is FAI a math problem? I never got that either.

Comment author: Vladimir_Nesov 20 January 2010 04:30:13PM * 3 points

How is FAI a math problem?

In the sense that AIXI is a mathematical formulation of a solution to the AGI problem, we don't yet have a comparable formulation of what FAI is supposed to be. As a working problem statement, I'm thinking about how to define "preference" for a given program (a formal term), where the program represents an agent that imperfectly implements that preference; a human upload could be such a program. This "preference" needs to define criteria for decision-making in the unknown-physics real world from within a (temporary) computer environment with known semantics, in the same sense that a human could learn what could and should be done in the real world while remaining inside a computer simulation, with no prior knowledge of the physical laws but with an I/O channel for interacting with the outside.
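For readers unfamiliar with the comparison: AIXI, Hutter's formulation referenced above, defines the optimal agent as an expectimax expression over a Solomonoff-style mixture of computable environments. A standard rendering of it (this is background, not something derived in this thread) is:

```latex
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
  \left( r_k + \cdots + r_m \right)
  \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

Here the $a_i$ are actions, $o_i$ observations, $r_i$ rewards, $m$ the horizon, $U$ a universal monotone Turing machine, and $\ell(q)$ the length of program $q$; environments are weighted by $2^{-\ell(q)}$. The point of the analogy is that FAI currently lacks any equation of this kind, even an uncomputable one.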

I'm gradually writing up the idea of this direction of research on my blog. It's vague, but there is some hope that it can put people into a more constructive state of mind about how to approach FAI.

Comment author: Wei_Dai 21 January 2010 12:02:43PM * 1 point

Thanks (and upvoted) for the link to your blog posts about preference. They are some of the best pieces of writing I've seen on the topic. Why not post them (or the rest of the sequence) on Less Wrong? I'm pretty sure you'd get a bigger audience and more feedback that way.

Comment author: Vladimir_Nesov 21 January 2010 07:55:04PM * 1 point

Thanks. I'll probably post a link when I finish the current sequence; by the current plan, there are 5-7 posts to go. As is, I think this material is off-topic for Less Wrong and shouldn't be posted here directly or in detail. If we had a transhumanist/singularitarian subreddit, it would be more appropriate.