ShardPhoenix comments on Where do selfish values come from? - Less Wrong

Post author: Wei_Dai 18 November 2011 11:52PM

Comment author: ShardPhoenix 19 November 2011 05:45:10AM

I get the impression that AIXI recomputes its actions at every time-step, so it can't pre-commit to paying the counterfactual mugger (CM). I'm not sure whether this is an accurate interpretation, though.
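
For concreteness, here is a minimal Python sketch of the scenario. The $100/$10,000 payoffs and all names are illustrative assumptions, not anything from the post: a per-step agent that re-derives its action after seeing the coin refuses to pay, and thereby forgoes the higher ex-ante expected value that a committed payer gets.

```python
# Counterfactual mugging, with illustrative payoffs: Omega flips a fair
# coin. On tails, Omega asks you for $100. On heads, Omega pays you
# $10,000 iff it predicts you would have paid on tails.

def payoff(pays_on_tails: bool, coin: str) -> int:
    if coin == "tails":
        return -100 if pays_on_tails else 0
    return 10_000 if pays_on_tails else 0  # heads: reward predicted payers

def greedy_action(observation: str) -> str:
    """An agent that recomputes its action at each time-step, maximizing
    payoff conditional on what it has already observed. Having seen tails,
    paying costs $100 and buys nothing from this step's viewpoint, so the
    recomputed action is always to refuse."""
    return "refuse" if observation == "tails" else "no_action"

# Expected value over the fair coin:
ev_payer = 0.5 * payoff(True, "tails") + 0.5 * payoff(True, "heads")     # 4950.0
ev_greedy = 0.5 * payoff(False, "tails") + 0.5 * payoff(False, "heads")  # 0.0
print(greedy_action("tails"), ev_payer, ev_greedy)
```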

Comment author: timtyler 19 November 2011 01:09:54PM

Something equivalent to precommitment is: it just being in your nature to trust counterfactual muggers. Then recomputing your actions at every time-step is fine, and it doesn't necessarily indicate that you lack a nature that allows you to pay counterfactual muggers.
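
A hedged sketch of what such a "nature" could amount to (the scoring rule below is my assumption, not a claim about AIXI): if the quantity the agent maximizes at each step is the ex-ante expected payoff, then recomputing the action at every time-step just rederives "pay", even after tails has been observed.

```python
# Sketch of a payer's "nature" (illustrative; not AIXI itself): the
# per-step computation scores actions by ex-ante expected payoff, so
# recomputing at every time-step keeps yielding "pay" after tails.

def ex_ante_value(pays_on_tails: bool) -> float:
    # Expected payoff over the fair coin, evaluated from before the flip.
    return 0.5 * (-100 if pays_on_tails else 0) \
         + 0.5 * (10_000 if pays_on_tails else 0)

def natural_payer_action(observation: str) -> str:
    """Recomputed fresh at every time-step, yet always 'pay' on tails,
    because what it maximizes is the ex-ante score, not the
    post-observation one."""
    if observation == "tails":
        return "pay" if ex_ante_value(True) > ex_ante_value(False) else "refuse"
    return "no_action"

assert natural_payer_action("tails") == "pay"  # recomputation changes nothing
```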

Comment author: ShardPhoenix 20 November 2011 12:19:23AM

I'm not sure that AIXI has a "nature" or personality as such, though. I suppose one might be encoded in the initial utility function somehow, but I'm not sure it's feasible to include all these kinds of scenarios in advance.

Comment author: Vladimir_Nesov 20 November 2011 12:39:23AM

That an agent "recomputes decisions" is in any case not a valid argument that it is unable to precommit. Precommitment through inability to perform certain actions is a workaround, not a necessity: a better decision theory simply won't perform those actions of its own accord.
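
One way to picture the distinction, under the assumption that the "better decision theory" ranks actions by ex-ante expected utility (the comment itself names no mechanism): precommitment-by-inability deletes an option from the action set, whereas the better agent keeps the option available and simply never selects it.

```python
# Two routes to the same behavior (same illustrative payoffs as above):

def ex_ante_value(pays: bool) -> float:
    return 0.5 * (-100 if pays else 0) + 0.5 * (10_000 if pays else 0)

def precommit_by_inability(actions):
    # The workaround: make "refuse" unavailable in advance.
    return [a for a in actions if a != "refuse"]

def better_agent(actions):
    # No workaround: "refuse" remains available, but a decision theory
    # that ranks actions by ex-ante expected utility never selects it.
    return max(actions, key=lambda a: ex_ante_value(a == "pay"))

assert better_agent(["pay", "refuse"]) == "pay"              # of its own accord
assert precommit_by_inability(["pay", "refuse"]) == ["pay"]  # by removed option
```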

Comment author: timtyler 20 November 2011 12:30:37AM

So: me neither. I was only saying that the argument from "recomputes its actions at every time-step" to "lacks precommitment" was an invalid chain of reasoning.