Matt_Young
I can certainly empathize with that statement. And if my opponent is not only dominant in ability but also exploiting that advantage to the point where I'm losing just as much by submitting as I would by exacting punishment, then that's the tipping point where I start hitting back. Of course, I'd also attempt retaliatory behavior initially, while I was still unsure how dominated I was; but once I know that the opponent is just that much better than me, and as long as they're not abusing that advantage to the point where retaliation becomes cost-effective, I'd have to concede my opponent's superiority, grit my teeth, bend over, and take one for the team. Especially at a ratio of a million human lives per util. With lives at stake, I shut up and multiply.
If I judge the probability that I am a simulation or equivalent construct to be greater than 1/499500, yes.
(EDIT: Er, make that 1/999000, actually. What's the markup code for strikethrough 'round these parts?)
(EDIT 2: Okay, I'm posting too quickly. It should be just 10^-6, straight up. If I'm a figment then the $1000 isn't real disutility.)
(EDIT 3: ARGH. Sorry. 24 hours without sleep here. I might not be the sim, duh. Correct calculations:
u(pay|sim) = 10^6;  u(~pay|sim) = 0;  u(pay|~sim) = -1000;  u(~pay|~sim) = 0
u(~pay) = 0;  u(pay) = P(sim) * 10^6 - P(~sim) * 1000 = 1001000 * P(sim) - 1000
pay if u(pay) > u(~pay), i.e. if P(sim) > 1000/1001000 = 1/1001.
Double-checking... triple-checking... okay, I think that's got it. No... no... NOW that's got it.)
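For reference, here is the same arithmetic as a quick sketch (illustrative Python, nothing beyond the utilities given above; P(sim) is the only free parameter):

```python
# Expected-utility check for paying the $1000, using the utilities above:
# u(pay|sim) = 1e6, u(~pay|sim) = 0, u(pay|~sim) = -1000, u(~pay|~sim) = 0.

def u_pay(p_sim: float) -> float:
    """Expected utility of paying, as a function of P(sim)."""
    return p_sim * 1e6 - (1 - p_sim) * 1000  # = 1_001_000 * p_sim - 1000

threshold = 1000 / 1_001_000   # u_pay crosses zero here, i.e. P(sim) = 1/1001
print(threshold)               # ~0.000999
print(u_pay(threshold))        # ~0.0
print(u_pay(0.002) > 0)        # True: above the threshold, paying is +EV
```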
To be clear:
Are both I and my simulation told this is a one-time offer?
Is a simulation generated whether the real coin is heads or tails?
Are both my simulation and I told that one of us is a simulation?
Does the simulation persist after the choice is made?
I suppose the second and fourth points don't matter particularly... as long as the first and third are true, I consider it positive EV to pay the $1000.
Suppose Omega (the same superagent from Newcomb's Problem, who is known to be honest about how it poses these sorts of dilemmas) comes to you and says:
"I just flipped a fair coin. I decided, before I flipped the coin, that if it came up heads, I would ask you for $1000. And if it came up tails, I would give you $1,000,000 if and only if I predicted that you would give me $1000 if the coin had come up heads. The coin came up heads - can I have $1000?"
Obviously, the only reflectively consistent answer in this case is "Yes - here's the $1000", because if you're an...
Suppose you have ten ideal game-theoretic selfish agents and a pie to be divided by majority vote....
...Every majority coalition and division of the pie, is dominated by another majority coalition in which each agent of the new majority gets more pie. There does not appear to be any such thing as a dominant majority vote.
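To see that instability concretely, here is a minimal sketch (illustrative Python; it assumes an equal six-way split as the starting allocation, but the same construction beats any strict-majority division):

```python
# Ten selfish agents, pie of size 1, majority = 6 of 10.
# Start from a coalition that splits the pie equally among its 6 members.
# Construction: drop one old member, recruit one outsider with a tiny bribe,
# and share the savings among the 5 remaining members -- everyone in the
# new majority strictly gains, so the old division is dominated.

N, MAJORITY, PIE = 10, 6, 1.0
EPS = 0.01  # token bribe paid to the recruited outsider

old_coalition = set(range(6))                      # agents 0..5
old_share = {i: PIE / 6 if i in old_coalition else 0.0 for i in range(N)}

new_coalition = set(range(5)) | {6}                # drop agent 5, recruit agent 6
new_share = {i: 0.0 for i in range(N)}
for i in range(5):
    new_share[i] = (PIE - EPS) / 5                 # (1 - eps)/5 > 1/6 for small eps
new_share[6] = EPS                                 # outsider gets eps > 0

assert len(new_coalition) >= MAJORITY
assert all(new_share[i] > old_share[i] for i in new_coalition)
print("old split dominated by coalition", sorted(new_coalition))
```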
I suggest offering the following deal at the outset:
"I offer each of you the opportunity to lobby for an open spot in a coalition with me, to split the pie equally six ways, formed with a mutual promise that we will not defect, and if any coalition members do defect, we agree to exclude them from future... (read more)
Here's yet another problem whose proper formulation I'm still not sure of, and it runs as follows. First, consider the Prisoner's Dilemma. Informally, two timeless decision agents with common knowledge of the other's timeless decision agency, but no way to communicate or make binding commitments, will both Cooperate because they know that the other agent is in a similar epistemic state, running a similar decision algorithm, and will end up doing the same thing that they themselves do. In general, on the True Prisoner's Dilemma, facing an opponent who can accurately predict your own decisions, you want to cooperate only if the other agent will cooperate if and only...
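A minimal illustration of the "same algorithm, same output" point (a sketch with standard illustrative payoff numbers, not anything taken from the post itself):

```python
# Row player's payoffs in a Prisoner's Dilemma (illustrative, with T > R > P > S):
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

# Against a *fixed* opponent move, Defect dominates:
assert PAYOFF[("D", "C")] > PAYOFF[("C", "C")]
assert PAYOFF[("D", "D")] > PAYOFF[("C", "D")]

# But if both agents run the same deterministic decision algorithm on the same
# problem, their outputs match, so the only reachable outcomes are (C, C) and
# (D, D); choosing the algorithm's output is effectively choosing between those.
print(PAYOFF[("C", "C")], ">", PAYOFF[("D", "D")], "so both cooperate")
```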
Hi. Found the site about a week ago. I read the TDT paper and was intrigued enough to start poring through Eliezer's old posts. I've been working my way through the sequences and following backlinks. The material on rationality has helped me reconstruct my brain after a Halt, Melt and Catch Fire event. Good stuff.
I observe that comments on old posts are welcome, and I notice no one has yet come back to this post with the full formal solution for this dilemma since the publication of TDT. So here it is.
Whatever our opponent's decision algorithm may be, it will either depend to some degree on... (read 587 more words →)
You know, you're right.
I was thrown off by the word "precommit", which implies a reflectively inconsistent strategy, which is TDT-anathema. On the other hand, rational agents win, so having that strategy does make sense in that case, despite the fact that we might incur negative utility relative to playing submissively if we had to actually carry it out.
The solution, I think, is to be "the type of agent who would be ruthlessly vindictive against opponents who have enough predictive capability to see that I'm this type of agent, and enough strategic capability to accept that this means they gain nothing by defecting against me." That makes it a reflectively consistent... (read more)