Giles comments on An attempt to dissolve subjective expectation and personal identity - LessWrong

35 Post author: Kaj_Sotala 22 February 2013 08:44PM

Comment author: Giles 23 February 2013 12:53:46AM 14 points

I can imagine that if you design an agent by starting off with a reinforcement learner and then bolting some model-based planning machinery on the side, the model will necessarily need to tag one of its objects as "self". Otherwise the reinforcement part would have trouble telling the model-based part what it is supposed to be optimizing for.
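The architecture Giles describes can be sketched in a few lines. This is purely illustrative (the names `WorldModel`, `Obj`, `plan`, and the one-step lookahead are my own toy assumptions, not anything from the post): the planner can only be told what to optimize because exactly one object in the model carries a "self" tag, which lets the reward signal be attributed to it.

```python
# Toy sketch: a world model where one object is tagged as "self" so the
# reinforcement side can tell the model-based planner whose outcomes matter.
from dataclasses import dataclass, field

@dataclass
class Obj:
    name: str
    position: int
    is_self: bool = False  # the "self" tag Giles describes

@dataclass
class WorldModel:
    objects: list = field(default_factory=list)

    def self_object(self):
        # The planner needs to know which object's outcomes to optimize.
        return next(o for o in self.objects if o.is_self)

def simulate(model, action):
    # Predicted next state: only the "self" object moves under our action.
    moved = [Obj(o.name, o.position + (action if o.is_self else 0), o.is_self)
             for o in model.objects]
    return WorldModel(moved)

def reward(model, goal):
    # Hypothetical reward: negative distance of the "self" object from a goal.
    return -abs(model.self_object().position - goal)

def plan(model, goal, actions=(-1, 0, 1)):
    # One-step lookahead: pick the action maximizing predicted reward.
    return max(actions, key=lambda a: reward(simulate(model, a), goal))

world = WorldModel([Obj("rock", 5), Obj("agent", 2, is_self=True)])
print(plan(world, goal=4))  # the tagged agent steps toward the goal: 1
```

Without the `is_self` flag, `reward` would have no way to say which object's position the planner should care about, which is exactly the coupling problem the comment points at.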

Comment author: Kaj_Sotala 23 February 2013 07:00:47AM *  1 point

Thanks, that's what I was trying to say.

Comment author: Gust 22 April 2013 12:05:59PM 0 points

All the content in the post just fell into place after I read Giles' summary. Still a great post, though.

Comment author: abramdemski 03 March 2013 10:49:44AM 0 points

It seems to me like this would be needed even if there were only the model-based part: if the system has actuators, these need to be associated with some actuators in the 3rd-person model; if the system has sensors, these need to be associated with some sensors in the 3rd-person model. Even once you know every physical fact about the universe, you still need to know "which bit is you" on top of that, if you are an agent.
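The point can be made concrete with a small sketch (the names `physics`, `bind_self`, and the sensor-matching heuristic are illustrative assumptions, not anything from the comment): a complete 3rd-person model may contain several candidate bodies, and the agent still needs an extra indexical fact binding its own sensor/actuator channels to one object in the model.

```python
# Sketch: a fully specified 3rd-person model with two candidate bodies.
# Knowing every "physical fact" below still doesn't say which body is you;
# that requires binding the first-person sensor stream to one object.
physics = {
    "robot_a": {"sensor_reading": 3, "arm": "idle"},
    "robot_b": {"sensor_reading": 7, "arm": "idle"},
}

def bind_self(physics, my_sensor_value):
    # Match the agent's own sensor stream against bodies in the model.
    matches = [name for name, body in physics.items()
               if body["sensor_reading"] == my_sensor_value]
    assert len(matches) == 1, "indexical fact underdetermined"
    return matches[0]

me = bind_self(physics, my_sensor_value=7)
print(me)  # "which bit is you": robot_b
```

The `physics` dictionary plays the role of the complete 3rd-person description; `bind_self` supplies the additional self-locating fact that no amount of 3rd-person information provides by itself.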

Comment author: lukstafi 03 March 2013 04:00:56PM 0 points

Self enters into the equation via the epistemic dynamics: which regularities are intrinsic to the model, and which are "intrinsic" to the frame of reference in which the input is provided.