Johnicholas comments on Dead men tell tales: falling out of love with SIA - Less Wrong

Post author: Stuart_Armstrong 18 February 2011 02:10PM


Comment author: Johnicholas 18 February 2011 06:46:31PM 2 points

The presentation of this article could be improved. For one, "triply altruistic" is novel enough that it could do with some concrete expansion. Also, the article is currently presented as a delta - I would prefer a "from first principles" (delta-already-applied) format.

Here's my (admittedly idiosyncratic) take on a "from first principles" concrete introduction:

Suppose that some creatures evolve in a world where they are likely to be plucked out by an experimenter, possibly cloned, possibly have some of those clones killed, after which the survivors are offered a bet of some sort and then deposited back.

For example, in scenario 1 (or A in the previous post), the experimenter first clones the agent, then flips a coin; if the coin came up heads, one agent is killed. The experimenter then elicits a "probability" of how the coin flip landed from the surviving agents using a bet (or a scoring rule?), and finally lets the surviving agents go free.

The advantage of this concreteness is that if we can simulate it, then we can see which strategies are evolutionarily stable. Note that though you don't have to specify the utilities or altruism parameters in this scenario, you do have to specify how money relates to what the agents "want" - survival and reproduction. Possibly rewarding the agents directly in copies is simplest.
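A minimal sketch of that simulation, rewarding survivors directly as suggested. The logarithmic scoring rule, the 100,000-trial count, and the grid of candidate reports are my own choices, not anything specified above:

```python
import math
import random

def total_score(p, rng):
    """One run of scenario 1: clone, flip a coin, maybe kill, score survivors."""
    survivors = 2                     # the experimenter clones the agent
    heads = rng.random() < 0.5        # fair coin
    if heads:
        survivors -= 1                # on heads, one agent is killed
    # Each survivor reports probability p for heads and is paid by the
    # logarithmic scoring rule: log(p) if heads, log(1 - p) if tails.
    score = math.log(p) if heads else math.log(1 - p)
    return survivors * score          # total payout summed over survivors

def mean_total_score(p, trials=100_000, seed=0):
    rng = random.Random(seed)
    return sum(total_score(p, rng) for _ in range(trials)) / trials

# Which fixed report maximises the total payout to surviving copies?
best = max((k / 100 for k in range(1, 100)), key=mean_total_score)
print(best)  # lands near 1/3 with this reward structure
```

Note that the answer depends on exactly the modelling choice flagged above: summing the payout over surviving copies pushes the optimum toward 1/3, whereas a different mapping from money to what the agents "want" could push it elsewhere.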

I admit I have not done the simulation, but my intuition is that the two procedures "create extra copies and then kill them" and "never create them at all" produce identical evolutionary pressures, and so have identical stable strategies. So I'm dubious about your conclusion that there is a substantive difference between them.
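The population-level half of this intuition is easy to check directly: fed the same sequence of coin flips, "clone then kill on heads" and "only create the copy on tails" leave exactly the same survivors behind, run by run. A sketch (the procedure names and trial count are mine):

```python
import random

def clone_then_kill(rng):
    survivors = 2                 # the copy is created first
    if rng.random() < 0.5:        # heads
        survivors -= 1            # then one copy is killed
    return survivors

def never_create(rng):
    if rng.random() < 0.5:        # heads
        return 1                  # the extra copy is never made
    return 2                      # tails: the copy is made and survives

TRIALS = 100_000
rng = random.Random(1)
a = [clone_then_kill(rng) for _ in range(TRIALS)]
rng = random.Random(1)            # replay the same coin flips
b = [never_create(rng) for _ in range(TRIALS)]
print(a == b)  # True: identical survivor counts on every run
```

This only shows the two procedures yield the same distribution over surviving populations; whether that is enough to make the evolutionarily stable betting strategies coincide is the part that still needs the full simulation.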

Comment author: Stuart_Armstrong 18 February 2011 07:05:47PM 1 point

Don't know what a delta is, sorry :-)

Looking for an evolutionarily stable strategy might be an interesting idea.

But the point is not to wonder what would be ideal if your utility were evolutionarily stable, but what to do with your current utility, in these specific situations.

Comment author: Johnicholas 18 February 2011 08:49:08PM 0 points

Sorry, by "delta" I meant change, difference, or adjustment.

The reason to investigate evolutionarily stable strategies is to look at the space of workable, self-consistent, winningish strategies. I know my utility function is pretty irrational - even insane. For example, I (try to) change my explicit values when I hear sufficiently strong arguments against my current explicit values. Explaining that is possible for a utilitarian, but it takes some gymnastics, and the upshot of the gymnastics is that utility functions become horrendously complicated and therefore mostly useless.

My bet is that there isn't actually much room for choice in the space of workable, self-consistent, winningish strategies. That will force most of the consequentialists, whether they ultimately care about particular genes or memes, paperclips or brass copper kettles, to act identically with respect to these puzzles, in order to survive and reproduce to steer the world toward their various goals.

Comment author: Stuart_Armstrong 19 February 2011 09:29:34AM 0 points

I'm unsure. For a lone agent in the world, who can get copied and uncopied, I think that following my approach here is the correct one. For multiple competing agents, this becomes a trade/competition issue, and I don't have a good grasp of that.

Comment author: gwern 18 February 2011 08:55:30PM 0 points