Tim_Tyler comments on Which Parts Are "Me"? - Less Wrong

Post author: Eliezer_Yudkowsky 22 October 2008 06:15PM



Comment author: Tim_Tyler 23 October 2008 05:43:00PM 0 points

Tim, the problem with expected utility maps directly onto the problem with goals. Each is coherent only to the extent that the future context can be effectively specified (functionally modeled, such that you could interact with it and ask it questions, not to be confused with simply pointing to it.) Applied to a complexly evolving future of increasingly uncertain context, due to combinatorial explosion but also due to critical underspecification of priors, we find that ultimately (in the bigger picture) rational decision-making is not so much about "expected utility" or "goals" as it is about promoting a present model of evolving values into one's future, via increasingly effective interaction with one's (necessarily local) environment of interaction.

I don't think most of that makes much sense. If you think there's some sort of problem with utilitarian approaches to AI, feel free to spell it out - but IMHO, the sort of criticism offered here is too wishy-washy to be worth anything.

Problems with priors often wash out as you get more data. Combinatorial explosions are fun - but it's nice to know what is being combined. In biology, the future context organisms face is usually assumed to be similar to past contexts. Not a perfect assumption, but often good enough. Organisms have developmental plasticity (including brains) and developmental canalisation to help them deal with any changes.
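The claim that problems with priors wash out as data accumulates can be made concrete with a toy Bayesian example (my own illustration, not from the thread): two agents with sharply different Beta priors over an unknown Bernoulli rate end up with nearly identical posteriors once enough observations come in.

```python
# Illustration (not from the original discussion): posterior means
# under very different Beta priors converge as data accumulates.

def posterior_mean(a, b, successes, trials):
    """Posterior mean of a Beta(a, b) prior after observing
    `successes` out of `trials` Bernoulli outcomes."""
    return (a + successes) / (a + b + trials)

# Two strongly opposed priors about the same unknown rate.
optimist = (9.0, 1.0)   # prior mean 0.9
pessimist = (1.0, 9.0)  # prior mean 0.1

# True rate 0.5: observe n//2 successes in n trials.
for n in (0, 10, 100, 10000):
    m1 = posterior_mean(*optimist, n // 2, n)
    m2 = posterior_mean(*pessimist, n // 2, n)
    print(f"n={n:>5}  optimist={m1:.3f}  pessimist={m2:.3f}  gap={abs(m1 - m2):.3f}")
```

With no data the two posterior means differ by 0.8; after 10,000 observations the gap is under 0.001 — the prior's influence shrinks as 1/n. This is the standard consistency argument, though it cuts no ice when the disagreement is about which hypotheses get nonzero prior probability at all.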

IMO, you're barking up an empty tree here. The economic framework surrounding expected utility maximisation is incredibly broad and general - machine intelligence can't help but be captured by it.