Under normal decision theory, you can imagine that an agent asks you, the reader, how it should decide, and then does it. You can't consistently imagine Omega's coin-flip agents doing that, since Omega has preprogrammed them to ignore whatever you say.
This is a much stronger constraint than ordinary agent determinism, since a deterministic agent can take different actions based on sensory input, such as a response to a question about why one action is better than another. With respect to this particular action, I would hesitate to call one of the Omega-created entities an agent at all.
They are certainly not rational agents, and not really suitable objects for examining whether any given decision theory works for rational agents.
I think they can be agents, at least if Omega gave them a decision theory that produces the output determined by the coin flip. I mean, then it's no different than when you normally program an agent with a decision theory. Whether they are rational agents then depends on whether you call e.g. Causal Decision Theory agents rational - I'd probably say no, but many would disagree, I'm guessing.
I fail to see why the Coin Flip Creation problems are at all interesting.
It is trivial to rig suboptimal outcomes for the submitted agent in favor of any target 'optimal' agent if the game can arbitrarily modify the submitted agent.
(Also, Coin Flip Creation Version 2, like the vanilla Newcomb's paradox, requires either that a) the agent is sub-Turing, i.e. not capable of general computation (in which case there is no paradox), or that b) Omega has a halting oracle or is otherwise super-Turing, which would require violating the Church-Turing thesis (in which case all bets are off).)
Well, the post did get agreement in the comment section, and had a quite clever-sounding (but wrong) argument about how agents are deterministic in general, etc., and it seemed important to point out the difference between CFC and Newcomb's problem.
Perhaps I should rephrase:
Why do others find Coin Flip Creation problems at all interesting? Is it a) because they have thought of said arguments and dismissed them (in which case, why? What am I missing?), b) because they haven't thought of said arguments (in which case, why not? I found them immediately apparent. Am I that much of an outlier?), or c) because of something else (if so, what?)
Ah, I get you now. I don't know, of course; a and b could both be in the mix. I have had a similar feeling about an earlier piece on decision theory, which to me seemed (and still seems) so clearly wrong, and which got quite a few upvotes. This isn't meant to be too negative about that piece - it just seems people have very different intuitions about decision theory, even after having thought (and read) about it quite a bit.
Back in 2017, Johannes_Treutlein published a post critiquing logical decision theories: Did EDT get it right all along? Introducing yet another medical Newcomb problem. In it, Treutlein presents the Coin Flip Creation problem (and a second version) and argues that logical decision theories, like Updateless Decision Theory (UDT) and Functional Decision Theory (FDT), handle it wrong. After reading the post, it seems to me that Treutlein's argumentation is flawed, and while I am probably not the first to notice this (or even write about it), I still think it's important to discuss, as I am afraid more people might make the same mistake.
Note that I will be talking mostly about how FDT handles the problems Treutlein presents, as this is a theory I have some expertise in.
The Coin Flip Creation Problem
From the original post:
Treutlein claims EDT one-boxes and "gets it right". But I think it's wrong even to discuss what a decision theory would do in this problem: my claim is that this is not a proper decision-theoretic problem. It's an interesting thought experiment, but it is of little value to decision theory. Why? Because the question
has two branches:
In both cases, the answer is already given in the problem statement. In case 1, Omega created you as a one-boxer, and in case 2, you were created as a two-boxer.
Treutlein claims logical decision theories (like UDT and FDT) get this problem wrong, but there literally is no right or wrong here. Without the Omega modification at the coin flip, FDT would two-box (and rightly so). With the Omega modification, there is, in case 1, no FDT anymore (as Omega modifies the agent into a one-boxer), so the question becomes incoherent. The question is only coherent for case 2, where FDT makes the right decision (two-boxing, making $1,000 > $0). And it's not FDT's fault there's no $1,000,000 to earn in case 2: this is purely the result of a coin flip before the agent even existed. It's not the result of any decision made by the agent. In fact, the whole outcome of this game is determined purely by the outcome of the coin flip! Hence my claim that this is not a proper decision-theoretic problem.
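To make this concrete, here is a minimal Python sketch of the payoff structure as I read it. The box labels and contents are my assumptions, following the usual Newcomb payoffs: a big box that Omega fills with $1,000,000 iff the coin lands heads, and a small box that always contains $1,000. The point is simply that the payoff is a function of the coin flip alone; no decision theory appears anywhere in the computation.

```python
def coin_flip_creation(coin: str) -> int:
    """Payoff in the Coin Flip Creation problem, given only the coin flip."""
    big_box = 1_000_000 if coin == "heads" else 0  # Omega fills the big box based on the coin
    small_box = 1_000                              # the small box always holds $1,000 (assumed)
    # Omega also fixes the agent's action based on the same coin:
    action = "one-box" if coin == "heads" else "two-box"
    return big_box if action == "one-box" else big_box + small_box

for coin in ("heads", "tails"):
    print(coin, coin_flip_creation(coin))
# heads 1000000
# tails 1000
```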
Treutlein does (sort of) address my counterargument:
Indeed. An AI always does what its source code says, so in a way, its decisions are determined by its creator. This is why my intuition with Newcomb's problem is not so much "What action should the agent take?" but "What source code (or decision procedure) should the agent run?" This phrasing makes it clearer that the decision does influence whether there's $1,000,000 to earn, as actions can't cause the past, but your source code/decision procedure could have been simulated by Omega. But actions being predetermined is not the objection to the Coin Flip Creation problem. In Newcomb's problem, your action is predetermined, but your decision still influences the outcome of the game. I want to run a one-boxing procedure, as that would give me $1,000,000 in Newcomb's problem. What procedure do I want to run in the Coin Flip Creation problem? This question doesn't make sense! In the Coin Flip Creation problem, my decision procedure is determined by the coin flip!
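For contrast, here is the same kind of sketch for ordinary Newcomb's problem, with a hypothetical perfect predictor that simply simulates the submitted procedure (again assuming the usual $1,000,000 and $1,000 payoffs). Here the payoff is a function of which decision procedure the agent runs, which is exactly the lever the Coin Flip Creation problem takes away.

```python
def newcomb(procedure) -> int:
    """Payoff in Newcomb's problem as a function of the agent's decision procedure."""
    prediction = procedure()   # Omega simulates the submitted procedure
    big_box = 1_000_000 if prediction == "one-box" else 0
    small_box = 1_000
    action = procedure()       # the agent then runs that same procedure
    return big_box if action == "one-box" else big_box + small_box

print(newcomb(lambda: "one-box"))   # 1000000
print(newcomb(lambda: "two-box"))   # 1000
```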
Coin Flip Creation, Version 2
From the original post:
Treutlein claims UDT one-boxes on this version while it two-boxes on the original version, and finds this curious. My objection remains that this, too, is not a proper decision-theoretic problem, as the decision procedure is determined by the coin flip in the problem statement.