prase comments on Rationality and Winning - Less Wrong

19 Post author: lukeprog 04 May 2012 06:31PM


Comment author: prase 07 May 2012 06:34:35PM 1 point

Then you suddenly have Y in your system (not just 'been told Y'). If you don't do that you can't learn, if you do that you need a lot of hacks not to get screwed over.

I don't think I can't learn if I don't include every hypothesis I am told in my set of hypotheses with an assigned probability. A bounded agent may well do some rounding on probabilities and ignore every hypothesis with probability below some threshold.
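As a rough illustration of the rounding idea (my own sketch, not anything prase specified - the function name, the threshold, and the example numbers are all made up):

```python
# Hypothetical sketch of a bounded agent that rounds away improbable
# hypotheses: anything below a fixed probability threshold is simply
# dropped before utilities are weighed. Threshold value is illustrative.

def rounded_eu(hypotheses, p_threshold=1e-6):
    """Expected utility over (probability, utility) pairs,
    ignoring hypotheses whose probability is below p_threshold."""
    return sum(p * u for p, u in hypotheses if p >= p_threshold)

mugging = [(1e-12, 1e30)]              # tiny probability, astronomical payoff
ordinary = [(0.5, 10.0), (0.5, -1.0)]  # a mundane bet

print(rounded_eu(mugging))   # 0 - rounded away before the payoff can dominate
print(rounded_eu(ordinary))  # 4.5
```

The mugging hypothesis never reaches the expected-utility sum, so its astronomical payoff gets no chance to swamp the calculation.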

But even if I include Y with some probability, what does it imply?

Until an unbounded Bayesian agent tells me it got Pascal-mugged, that's not really known.

Has a bounded agent told you that it got Pascal-mugged? The problem is the combination of a complexity-based prior with an unbounded utility function, and that isn't specific to bounded agents.

Can you show how a Bayesian agent with a bounded utility function can be exploited?

Comment author: private_messaging 08 May 2012 06:17:01AM *  1 point

You're going down the road of actually introducing the necessary hacks. That's good. I don't think simply setting a threshold probability or capping the utility of a Bayesian agent results in the most effective agent given specific computing time, and it feels to me that you're wrongly putting the burden of both defining what your agent is and proving it on me.

You've got to define what the best threshold is, or what a reasonable cap is, first - those have to be determined somehow before you have a rational agent that works well. Clearly I can't show that the agent is exploitable for all values: assuming a hypothesis probability threshold of 1-epsilon and a utility cap of epsilon, the agent cannot be talked into doing anything at all. Edit: and trivially, by setting the threshold too low and the cap too high, the agent can be exploited.
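A quick sketch of those two extremes (again my own illustration; the function, the epsilon values, and the payoff numbers are all made-up assumptions, not anything from the thread):

```python
# Hypothetical sketch of the two extremes: a probability threshold plus
# a utility cap, with deliberately bad settings in each direction.

def bounded_eu(hypotheses, p_threshold, u_cap):
    """Expected utility with a probability threshold and a utility cap."""
    return sum(p * max(-u_cap, min(u_cap, u))
               for p, u in hypotheses
               if p >= p_threshold)

offer = [(1e-12, 1e30)]  # a Pascal's-mugging-style offer

# Threshold near 1 and a tiny cap: every action is valued at ~0,
# so the agent "cannot be talked into doing anything at all".
print(bounded_eu(offer, p_threshold=1 - 1e-9, u_cap=1e-9))  # 0

# Threshold too low and cap too high: the capped payoff still
# dominates (roughly 1e8 here), so the agent can be exploited again.
print(bounded_eu(offer, p_threshold=1e-15, u_cap=1e20))
```

Neither extreme works, which is the point: the threshold and the cap have to be chosen well, and that choice is exactly what's left unspecified.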

We were talking about LW rationality. If LW rationality didn't give you a procedure for determining the threshold and the cap, then I have already demonstrated the point I was making. I don't see a huge discussion here on the optimal cap for utility, on the optimal threshold, or on the best handling of hypotheses below the threshold, and it feels to me that rationalists have thresholds set too low and caps set too high. You can of course have an agent that decides with common sense and then sets the threshold and cap to match, but that's rationalization, not rationality.