ShardPhoenix comments on Where do selfish values come from? - Less Wrong

Post author: Wei_Dai 18 November 2011 11:52PM




Comment author: ShardPhoenix 20 November 2011 12:19:23AM

I'm not sure that AIXI has a "nature" or personality as such, though. I suppose this might be encoded in the initial utility function somehow, but I'm not sure it's feasible to include all these kinds of scenarios in advance.

Comment author: Vladimir_Nesov 20 November 2011 12:39:23AM

That an agent "recomputes decisions" is in any case not a valid argument for it being unable to precommit. Precommitment through inability to take certain actions is a workaround, not a necessity: a better decision theory won't take those actions of its own accord.
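This point can be illustrated with a toy sketch (my own construction, not from the thread), using a Parfit's-hitchhiker-style game with an assumed perfect predictor: a predictor rescues the agent (worth +100) only if it predicts the agent will pay 50 afterwards. An agent that recomputes at the moment of payment, treating the rescue as a sunk fact, refuses to pay; an agent that also recomputes at that moment, but compares whole policies rather than local actions, reproduces the "precommitted" action without any enforced inability:

```python
def policy_utility(pays_when_asked: bool) -> int:
    """Utility of following a policy from the start of the game."""
    rescued = pays_when_asked          # perfect-predictor assumption
    utility = 100 if rescued else 0
    if rescued and pays_when_asked:
        utility -= 50                  # cost of paying after rescue
    return utility

def local_recompute_agent() -> bool:
    """Re-evaluates at payment time, taking the rescue as a sunk fact.
    Paying now costs 50 with no further gain, so it refuses."""
    return (-50) > 0                   # pay iff paying looks better *now*

def policy_level_agent() -> bool:
    """Also recomputes at payment time, but compares whole policies,
    so the recomputation itself yields the 'precommitted' action."""
    return policy_utility(True) > policy_utility(False)

print(local_recompute_agent())   # False: breaks the commitment
print(policy_level_agent())      # True: pays, and so would be rescued
```

The point of the sketch is that the policy-level agent never needs to disable its own future deliberation; recomputing the comparison at every time-step returns the same answer each time.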

Comment author: timtyler 20 November 2011 12:30:37AM

So: me neither. I was only saying that arguing from "recomputing its actions at every time-step" to "lacking precommitment" was an invalid chain of reasoning.