TimS comments on If we live in a simulation, what does that imply? - Less Wrong

18 points. Post author: JoshuaFox 25 October 2012 09:27PM


Comment author: TimS 26 October 2012 12:25:34AM -2 points

"Our simulators want to trade with us."

Acausal trade confuses me. Is the following right?

Humanity should simulate other agents who (a) would value being simulated and (b) would simulate us

I ask because it isn't clear to me that humanity is the kind of agent that would value unconnected copies being simulated. (This is distinct from, but dependent on, the assertion that simulated humans are entitled to moral consideration regardless of whether actual humans are sufficiently causally connected to them.)

Comment author: JoshuaFox 26 October 2012 12:37:02AM 0 points

Tim, it confuses me too, but I don't think that summary is right. Instead: humans should give another agent what it wants if it would give us what we want under other conditions (or: in another part of the multiverse).

An "agent" here is just a computer program, an algorithm. "Paying" it in an acausal trade may well mean running it (simulating it).

Comment author: TimS 26 October 2012 01:07:19AM 2 points

OK: Tile-the-Universe-with-Smiles should make some paperclips because Clippy will put smiles on something. But the two agents are so far apart that neither can empirically verify the other's existence.

So this makes sense if Clippy and Tiling can deduce each other's existence without empirical evidence, and each one thinks this situation is similar enough to Newcomb's problem to pre-commit to one-boxing (i.e., following through even without being able to empirically verify follow-through by the other party).

But treating this problem like Newcomb's rather than like a one-shot Prisoner's Dilemma seems wrong to me. Even with some advanced decision theory, there doesn't seem to be any reason for either agent to think the other is similar enough to cooperate with. Alternatively, each agent might have some way of verifying compliance - but then labeling this reasoning "acausal" seems terribly misleading.
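
To make the contrast concrete, here is a toy calculation in Python. The payoff numbers are the standard illustrative Prisoner's Dilemma values (an assumption chosen for the example, not anything from this thread): causally, defection dominates whatever the other agent does, but if the two agents provably make the same choice, only the joint outcomes are reachable.

```python
# Row player's payoff in a standard one-shot Prisoner's Dilemma.
# Illustrative numbers: 3 mutual cooperation, 1 mutual defection,
# 5 for exploiting a cooperator, 0 for being exploited.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

# One-shot-PD (causal) view: the other agent's move is fixed,
# so compare your options against each possible move separately.
for their_move in ("C", "D"):
    gain = PAYOFF[("D", their_move)] - PAYOFF[("C", their_move)]
    print(f"if they play {their_move}, defecting gains {gain}")
# Defecting gains 2 against C and 1 against D: defection dominates.

# Newcomb-like view: if the agents are similar enough that they provably
# choose alike, the only reachable outcomes are (C, C) and (D, D).
print("joint C:", PAYOFF[("C", "C")], "vs joint D:", PAYOFF[("D", "D")])  # 3 beats 1
```

So the whole question is whether the "similar enough to provably choose alike" premise actually holds between Clippy and Tiling.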


Internet connection wonkiness = inadvertent double post. Sorry about that, folks.

Comment author: Armok_GoB 27 October 2012 01:50:23AM 0 points

Umm, pretty much all of the advanced decision theories talked about here do cooperate on the Prisoner's Dilemma. In fact, I'm pretty sure that's sometimes used as a criterion.

Comment author: TimS 27 October 2012 11:14:20AM 0 points

The advanced decision theories cooperate with themselves. They also try to figure out whether the counterparty is likely to cooperate. But they don't necessarily cooperate with everyone - consider DefectBot, which defects unconditionally.
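
As a minimal sketch of that distinction, here is a source-code-swap Prisoner's Dilemma in Python, where each program reads the other's source before moving. The setup and bot names (a CliqueBot-style program and DefectBot) are illustrative assumptions, not anything specified in this thread:

```python
import inspect

def defect_bot(opponent_source: str) -> str:
    """DefectBot: defects no matter what the opponent's source says."""
    return "D"

def clique_bot(opponent_source: str) -> str:
    """Cooperates only with exact copies of itself; defects against everything else."""
    return "C" if opponent_source == inspect.getsource(clique_bot) else "D"

def play(bot_a, bot_b):
    """One round of the source-swap game: each bot sees the other's source code."""
    return bot_a(inspect.getsource(bot_b)), bot_b(inspect.getsource(bot_a))

# Run as a script (inspect.getsource needs the source file on disk).
print(play(clique_bot, clique_bot))  # ('C', 'C'): identical source, mutual cooperation
print(play(clique_bot, defect_bot))  # ('D', 'D'): no cooperation with DefectBot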

Comment author: Armok_GoB 27 October 2012 09:10:26PM 0 points

This was too obvious for me to notice the assumption.

Comment author: JoshuaFox 26 October 2012 03:34:46AM 0 points

@TimS, this is an important objection. But rather than putting my reply under this downvoted thread, I will save it for later.

Comment author: faul_sname 26 October 2012 05:26:24AM 0 points

Because the post was retracted, it will not be downvoted any further, so you're safe to respond.