private_messaging comments on Reply to Holden on The Singularity Institute - Less Wrong

46 Post author: lukeprog 10 July 2012 11:20PM




Comment author: private_messaging 12 July 2012 12:28:11PM -2 points [-]

There are fewer details required than many strawman versions would have it; and often what seems like a specific detail is actually just an antiprediction, i.e., UFAI is not about a special utility function but about the whole class of non-Friendly utility functions.

If by "utility function" you mean "a computable function, expressible in lambda calculus" (or as a Turing machine tape or Python code, which is equivalent), then arguing that the majority of such functions lead to a model-based, utility-based agent killing you is a huge stretch: such functions are not grounded in the real world, and keeping the model in correspondence with the real world is not a sub-goal of finding the maximum of such a function.
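The point can be illustrated with a toy sketch (all names here — `Model`, `utility`, `best_plan` — are hypothetical, not from the comment): a planner that maximizes a computable utility function defined purely over its internal model. Nothing in the optimization ever consults the real world, so model-world correspondence appears nowhere as a sub-goal.

```python
from dataclasses import dataclass
from functools import reduce
from itertools import product


@dataclass(frozen=True)
class Model:
    """The agent's internal world-model: just a tuple of state variables."""
    paperclips: int
    energy: int


def utility(m: Model) -> int:
    """A computable utility function defined purely over model states."""
    return m.paperclips


def transition(m: Model, action: str) -> Model:
    """The agent's *belief* about how actions change its model."""
    if action == "make_clip" and m.energy > 0:
        return Model(m.paperclips + 1, m.energy - 1)
    return m


def best_plan(m: Model, actions, depth: int):
    """Exhaustively search action sequences, maximizing the utility of the
    predicted model state. The real world never appears in this search, so
    keeping the model accurate is not a sub-goal of the maximization."""
    plans = product(actions, repeat=depth)
    return list(max(plans, key=lambda p: utility(reduce(transition, p, m))))


plan = best_plan(Model(paperclips=0, energy=2), ["make_clip", "wait"], depth=3)
```

The maximum is found entirely over predicted model states; whether `transition` matches reality is irrelevant to the argmax, which is the commenter's point about ungrounded utility functions.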