In this problem the predictor predicts correctly. Can you explain why you think it predicts incorrectly?
The agent does not regard the prediction outcome as contingent on its own computation.
In trying to explain why the simulation ought to show that the prediction outcome is in fact contingent, I realized that I was confused, so I'm going to set aside my previous line of thought and start over.
The results are messy and full of wrong ideas; I suggest skimming to get the gist of it. That is: the following text is a noisy signal of what I'm trying to think, so don't read the details too closely.
--
I may have to reconsider whether I properly grokked t...
Some people on LW have expressed interest in what's happening on the decision-theory-workshop mailing list. Here's an example of the kind of work we're trying to do there.
In April 2010 Gary Drescher proposed the "Agent simulates predictor" problem, or ASP, which shows how agents with lots of computational power sometimes fare worse than agents with limited resources. I'm posting it here with his permission:
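To make the flavor of the problem concrete, here is a toy sketch of the ASP setup. Everything in it is illustrative, not Drescher's actual statement or my formalization: the payoffs are the standard Newcomb ones, and the "limited" predictor is modeled crudely as only recognizing a whitelist of simple agents, while the "powerful" agent can run the predictor outright and therefore treats the prediction as a fixed fact.

```python
# Toy sketch of "Agent simulates predictor" (ASP).
# All names and numbers are illustrative assumptions, not the original problem text.

def payoff(agent_one_boxes, predictor_predicts_one_box):
    """Newcomb payoffs: box B holds $1,000,000 iff one-boxing was
    predicted; box A always holds $1,000 (taken by two-boxers)."""
    big = 1_000_000 if predictor_predicts_one_box else 0
    small = 0 if agent_one_boxes else 1_000
    return big + small

def predictor(agent):
    # A resource-limited predictor: here it can only recognize a
    # whitelist of simple agents it knows to be one-boxers, and
    # predicts two-boxing for everything else.
    return agent.__name__ == "simple_one_boxer"

def simple_one_boxer():
    return True  # one-box unconditionally

def powerful_agent():
    # This agent has enough power to simulate the predictor exactly.
    # Seeing that the prediction doesn't vary with its choice, it
    # treats the prediction as fixed and two-boxes.
    prediction = predictor(powerful_agent)  # full simulation of the predictor
    return False  # two-box

for agent in (simple_one_boxer, powerful_agent):
    print(agent.__name__, payoff(agent(), predictor(agent)))
# simple_one_boxer gets 1,000,000; powerful_agent gets only 1,000.
```

The point of the sketch is just the inversion: the agent that cannot simulate the predictor walks away with the million, while the more powerful agent, by simulating the predictor and then reasoning from the "fixed" prediction, is correctly predicted to two-box and gets only $1,000.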
About a month ago I came up with a way to formalize the problem, along the lines of my other formalizations:
Also Wei Dai has a tentative new decision theory that solves the problem, but this margin (and my brain) is too small to contain it :-)
Can LW generate the kind of insights needed to make progress on problems like ASP? Or should we keep working as a small clique?