Stuart_Armstrong comments on What I mean... - Less Wrong Discussion

Post author: Stuart_Armstrong, 26 March 2015 11:59AM

Comment author: Stuart_Armstrong, 30 March 2015 03:14:39PM, 0 points

> the happiness maximizer is going to need to be able to find happiness inside an unfamiliar ontology.

But the module for predicting human behaviour/preferences should surely be the same in a different ontology? The module is a model, and the model is likely not grounded in the fine detail of the ontology.

Example: the law of comparative advantage in economics is a high-level model, which won't collapse just because the fundamental ontology turns out to be relativity rather than Newtonian mechanics. Even in a different ontology, humans should remain (by far) the best things in the world that approximate the "human model".
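
To make that concrete, here is a minimal sketch with made-up production rates (the producers, goods, and numbers are purely illustrative, not from the original comment). The point is that comparative advantage is a statement about opportunity costs and totals; nothing in it refers to the underlying physics, so swapping Newtonian mechanics for relativity leaves it untouched.

```python
# A minimal sketch of the law of comparative advantage, with made-up numbers.
# Two producers, two goods; rates are units produced per hour.
rates = {
    "A": {"widgets": 6.0, "gadgets": 3.0},  # A is better at both in absolute terms
    "B": {"widgets": 2.0, "gadgets": 2.0},
}

def opportunity_cost(producer, good, other_good):
    """Units of `other_good` forgone per unit of `good` produced."""
    return rates[producer][other_good] / rates[producer][good]

# B gives up fewer widgets per gadget than A, so B has the comparative
# advantage in gadgets even though A is better at making both goods.
for p in rates:
    print(p, "opportunity cost of a gadget:",
          opportunity_cost(p, "gadgets", "widgets"), "widgets")

# One hour each, no specialization: each splits its time evenly.
no_spec = {good: sum(0.5 * rates[p][good] for p in rates)
           for good in ("widgets", "gadgets")}

# One hour each, specializing along comparative advantage:
# A spends most of its time on widgets, B makes only gadgets.
spec = {
    "widgets": 0.75 * rates["A"]["widgets"],
    "gadgets": 0.25 * rates["A"]["gadgets"] + 1.0 * rates["B"]["gadgets"],
}

print("without specialization:", no_spec)  # {'widgets': 4.0, 'gadgets': 2.5}
print("with specialization:   ", spec)     # {'widgets': 4.5, 'gadgets': 2.75}
```

Both totals rise under specialization, and nowhere does the calculation care what widgets or producers are made of; that is the sense in which the model floats free of the fine detail of the ontology.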

Comment author: Manfred, 31 March 2015 05:21:00AM, 0 points

If there is a module that specifically requires prediction of human behavior, sure. My claim in the second part of my comment is that if the model predicts the number of paperclips, it's not guaranteed that whatever within it most closely matches something functioning like human decisions will actually be a useful predictive model of human decisions.