
timtyler comments on What we're losing - Less Wrong Discussion

52 Post author: PhilGoetz 15 May 2011 03:34AM




Comment author: timtyler 16 May 2011 07:22:19PM 0 points

> I trust the applicability of the symbols of expected utility theory less over time, and trust common beliefs about the automatic implications of putting those symbols in a seed AI even less than that. Am I alone here?

The current theory is all fine, until you want to calculate utility based on something other than expected sensory input data. Then the current theory doesn't work very well at all. The problem is that we don't yet know how to encode "not what you are seeing, but how the world really is" in a machine-readable format.
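The distinction here can be sketched with a toy example (this is my illustration, not anything from the comment: the two-state world, the sensor model, and all the names are hypothetical). An agent whose utility function scores its sensory input directly can be satisfied by fooling its own sensor, whereas a utility function over the hidden world state must be evaluated in expectation over a belief about that state:

```python
# Toy model: the world is in one of two hidden states, and the agent
# only sees a noisy sensor reading. All probabilities are made up for
# illustration.

P_PRIOR = {"diamond_safe": 0.5, "diamond_stolen": 0.5}
# P(sensor says "looks safe" | hidden state)
P_OBS = {"diamond_safe": 0.9, "diamond_stolen": 0.2}

def posterior(looks_safe: bool) -> dict:
    """Bayes update over the hidden world state given the sensor reading."""
    likelihood = {
        s: (P_OBS[s] if looks_safe else 1.0 - P_OBS[s])
        for s in P_PRIOR
    }
    z = sum(P_PRIOR[s] * likelihood[s] for s in P_PRIOR)
    return {s: P_PRIOR[s] * likelihood[s] / z for s in P_PRIOR}

def observation_utility(looks_safe: bool) -> float:
    # Scores the sensory input itself: maximized by making the sensor
    # report "safe", whether or not the diamond is actually there.
    return 1.0 if looks_safe else 0.0

def world_state_utility(looks_safe: bool) -> float:
    # Scores the (unobserved) world state, taken in expectation over
    # the agent's posterior belief.
    post = posterior(looks_safe)
    return post["diamond_safe"] * 1.0 + post["diamond_stolen"] * 0.0

print(observation_utility(True))            # 1.0 regardless of reality
print(round(world_state_utility(True), 3))  # 0.818: belief, not certainty
```

The second function is only as good as the world-model behind the posterior, which is exactly the part we don't know how to specify in a machine-readable way; the sketch shows where that gap sits, not how to close it.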