Kevin comments on Complexity of Value ≠ Complexity of Outcome - Less Wrong

32 points · Post author: Wei_Dai · 30 January 2010 02:50AM


Comments (198)


Comment author: Kevin 30 January 2010 08:32:50AM * · 1 point

Does any existing decision theory make an attempt to decide based on existing human values? How would one begin to put human values into rigorous mathematical form?

I've convinced a few friends that the most likely path to Strong AI (i.e. intelligence explosion) is a bunch of people sitting in a room doing math for 10 years. But that's a lot of math before anyone even begins to start plugging in the values.

I suppose it does make sense for us to talk in English about what all of these things mean, so that in 10+ years they can be more easily translated into machine language with sufficient rigor. So can anyone here conceive of what the equations for the values of an FAI would begin to look like? I can't right now, and it seems like I'm missing something important when we're just talking about all of this in English.
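[Editor's note: the question above is about how values might take rigorous mathematical form. The standard decision-theoretic starting point, not any actual FAI design, is to encode values as a utility function over outcomes and have the agent pick the action with the highest expected utility. Everything in this sketch (the outcome names, probabilities, and utilities) is an invented toy illustration:]

```python
# Toy sketch: "values" as a utility function U over outcomes,
# plus a probability model P(outcome | action). The agent chooses
# the action maximizing sum over outcomes of P(o|a) * U(o).
# All names and numbers are hypothetical, for illustration only.

def expected_utility(action, outcomes, prob, utility):
    """Expected utility of an action: sum of P(o | a) * U(o)."""
    return sum(prob(o, action) * utility(o) for o in outcomes)

def choose(actions, outcomes, prob, utility):
    """Pick the action with the highest expected utility."""
    return max(actions,
               key=lambda a: expected_utility(a, outcomes, prob, utility))

# Hypothetical example: two actions, two outcomes.
outcomes = ["good", "bad"]
P = {("good", "safe"): 0.9, ("bad", "safe"): 0.1,
     ("good", "risky"): 0.5, ("bad", "risky"): 0.5}
U = {"good": 10, "bad": -100}

best = choose(["safe", "risky"], outcomes,
              lambda o, a: P[(o, a)], lambda o: U[o])
# "safe" wins: 0.9*10 + 0.1*(-100) = -1, versus 0.5*10 + 0.5*(-100) = -45
```

The hard part the commenters are pointing at is, of course, not this machinery but where `U` comes from: writing down a utility function that actually captures human values, rather than a two-line dictionary.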

Comment author: ciphergoth 30 January 2010 08:53:06AM · 3 points

Comment author: Eliezer_Yudkowsky 30 January 2010 09:27:57AM · 2 points

That's not non-English.

Comment author: ciphergoth 30 January 2010 11:37:36AM · 3 points

Sure, but it helps to be familiar with it if you're having this discussion, all the same.