
Comment author: Dagon 18 May 2017 09:34:13PM 1 point [-]

Can you give an operational definition (or concrete example) of the free rider 'problem'? There are a couple of different things that you might not like about the phenomenon, and I'm not sure exactly which is the problem you're concerned about.

Exclusion is the most common "solution" (auctions and "fair" divisions being specific allocation mechanisms within that). Don't let "free riders" actually ride, and there's no problem.

Comment author: strangepoop 19 May 2017 12:59:47PM *  0 points [-]

Exclusion isn't always socially appropriate. If I take a cab home every day (which I pay for), and a friend can literally take a free ride because her place is on the way, should I "exclude" her if she doesn't want to share the cost? She claims it doesn't cost me extra; I'd be paying for the cab anyway if she lived somewhere else.

But of course I can come up with un-excludable externalities:

I share a house that's in pretty bad shape, and I decide to get some fresh painting done. This is a net benefit to all the housemates, but we would each value it differently. I want this slightly more than all the others. So I have to pay the entire amount.

Comment author: cousin_it 16 May 2017 05:46:19PM *  0 points [-]

Yeah. My previous version of this idea was "the free market maximizes money-weighted utility instead of utility", but the one with recursion is nicer because it evokes a dynamic picture.

The word "blame" is a bit is-ought to begin with :-) Still, it seems like less disposable income leads to fewer jobs which leads to less disposable income etc, so at least part of unemployment should be blamed on the recursive effect and not on individuals.

Comment author: strangepoop 18 May 2017 11:40:32PM 1 point [-]

Incidentally, Gary Drescher makes the same (citation-free) statement in a footnote in Chapter 7, "Deriving Ought from Is":

Utilitarian bases for capitalism—arguments that market forces promote the greatest good—are another matter, best suited for other books. For here, suffice it to note that even in theory, an unconstrained market does not promote the greatest good overall, but rather the greatest good weighted by the participants’ relative wealth.

I remember asking for a reference about a year ago on LWIRC, but that didn't help much.

Comment author: MrMind 16 May 2017 02:53:20PM 0 points [-]

I have a problem with the definition: patternism doesn't fall automatically out of reductionism / naturalism, so it's not automatically accepted by those who accept cryonics.

Comment author: strangepoop 18 May 2017 11:11:33PM 0 points [-]

Can you help me with this?

It seems to me:

'reductionism/naturalism' + 'continuity of consciousness in time' + 'no tiny little tags on particles that make up a conscious mind' = 'patternism'

Are you saying that there's something wrong with the latter two summands? Or it doesn't quite add up?

Comment author: gathaung 16 May 2017 04:22:28PM *  0 points [-]

You should strive to maximize utility of your pattern, averaged over both subjective probability (uncertainty) and squared amplitude of wave-function.

If you include the latter, then it all adds up to normalcy.

If you select a state of the MWI-world according to the Born rule (i.e. using squared amplitude of the wave-function), then this world-state will, with overwhelming probability, be compatible with causality, entropy increasing over time, and a mostly classical history, involving natural selection yielding patterns that are good at maximizing their squared-amplitude-weighted spread, i.e. DNA and brains that care about squared amplitude (even if they don't know it).

Of course this is a non-answer to your question. Also, we have not yet finished the necessary math to prove that this non-answer is internally consistent (we=mankind), but I think this is (a) plausible, (b) the gist of what EY wrote on the topic, and (c) definitely not an original insight by EY / the sequences.

Comment author: strangepoop 18 May 2017 10:13:45PM 0 points [-]

See my reply to Oscar_Cunningham below; I'm not sure if Egan's law is followed exactly (it never is, otherwise you've only managed to make the same predictions as before, with a complexity penalty!)

Comment author: Oscar_Cunningham 16 May 2017 07:49:58AM 0 points [-]

If someone offered me a bet giving $0 or $100 based on a quantum coin flip I'd be willing to pay $50 for it. So it's clear that I'm acting for the sake of my average future self, not just the best or worst outcome. Therefore I also act to avoid outcomes where I die, even if there are still some possibilities where I live. The fact that I won't experience the "dead" outcomes is irrelevant - I can still act for the sake of things which I won't experience.

What about the question of whether I anticipate immortality? Well if I was planning what to do after an event where I might die, I would think to myself "I only need to think about the possibility where I live, since I won't be able to carry out any actions in the other case" which is perhaps not the same as "anticipating immortality" but it has the same effect.

Comment author: strangepoop 18 May 2017 10:05:22PM *  0 points [-]

I don't think that follows exactly. Specifically, that "you're acting for the sake of things which you won't experience".

You are correct in your pricing of quantum flips according to payoffs adjusted by the Born rule.

But the payoffs from your dead versions don't count, assuming you can only find yourself in non-dead continuations. I don't know if this is an established position (Bostrom or Carroll have almost surely written about it) or just outright stupidity, but it seems to me that this assumption (of only finding yourself alive) shrinks your ensemble of future states, leaving your decision-theoretic judgements to deal only with the alive ones.

If I'm offered a bet of being given $0 or $100 over two flips of a fair quantum coin, with payoffs:

|00> -> $0

|11> -> $100

|01> -> certain immediate death

|10> -> certain immediate death

I'd still price it at $50, rather than $25.

You could say, a little vaguely, that the others are physical possibilities, but they're not anthropic possibilities.
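The arithmetic behind the $50-vs-$25 claim is just conditional expectation: drop the dead branches from the ensemble and renormalize. A minimal sketch (the branch labels and payoffs are the ones from the bet above; the code itself is illustrative, not from the thread):

```python
# Two fair quantum coin flips; payoffs as in the bet above.
# Each branch has Born weight 1/4.
branches = {
    "00": ("alive", 0),
    "11": ("alive", 100),
    "01": ("dead", None),
    "10": ("dead", None),
}
weight = 0.25

# Unconditional expected payoff, counting dead branches as $0:
ev_all = sum(weight * (p if s == "alive" else 0)
             for s, p in branches.values())

# Expected payoff conditioned on finding yourself alive
# (dead branches excluded, weights renormalized):
alive = [(weight, p) for s, p in branches.values() if s == "alive"]
total = sum(w for w, _ in alive)
ev_alive = sum(w * p for w, p in alive) / total

print(ev_all)    # 25.0
print(ev_alive)  # 50.0
```

The "anthropic" pricing of $50 corresponds to `ev_alive`; a bettor who counted dead branches at $0 would price it at `ev_all`.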

As for "I can still act for the sake of things which I won't experience" in general, where you care about dead versions, apart from you being able to experience such, you might find Living in Many Worlds helpful, specifically this bit:

Are there horrible worlds out there, which are utterly beyond your ability to affect? Sure. And horrible things happened during the 12th century, which are also beyond your ability to affect. But the 12th century is not your responsibility, because it has, as the quaint phrase goes, "already happened". I would suggest that you consider every world which is not in your future, to be part of the "generalized past".

If you care about other people finding you dead and mourning you though, then the case would be different, and you'd have to adjust your payoffs accordingly.

Note again though, this should have nothing necessarily to do with QM (all of this would hold in a large enough classical universe).

As for me, personally, I don't think I buy immortality, but then I'd have to modus tollens out a lot of stuff (like stepping into a teleporter, or even perhaps the notion of continuity).

Comment author: strangepoop 18 May 2017 09:18:21PM 1 point [-]

Is there some nice game-theoretic solution that deals with the 'free rider problem', in the sense of making everyone pay in proportion to their honest valuation? Like how Vickrey auctions reveal honest prices, or Sperner's lemma can help with envy-free rent division?
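For reference, the truth-revealing property of a Vickrey (second-price sealed-bid) auction is easy to check directly: the winner pays the second-highest bid, so your bid affects whether you win but not what you pay, and bidding your true valuation is a dominant strategy. A minimal sketch (bidder names and numbers are made up for illustration):

```python
def vickrey_outcome(bids):
    """bids: dict bidder -> bid. Highest bidder wins, pays second-highest bid."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]
    return winner, price

def utility(my_value, my_bid, other_bids):
    """My payoff: value minus price if I win, else zero."""
    bids = dict(other_bids)
    bids["me"] = my_bid
    winner, price = vickrey_outcome(bids)
    return my_value - price if winner == "me" else 0

others = {"a": 60, "b": 80}
# Bidding truthfully (my value is 100) wins at the second price, 80:
print(utility(100, 100, others))  # 20
# Overbidding can't lower the price; underbidding below 80 just loses:
print(utility(100, 70, others))   # 0
```

The analogous mechanism for public goods (painting the shared house) is VCG/Clarke-tax style, which elicits honest valuations but is famously not budget-balanced, which is part of why the free-rider problem has no clean fix.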

Comment author: strangepoop 15 May 2017 11:36:23PM *  4 points [-]

Why does patternism [the position that you are only a pattern in physics and any continuations of it are you/you'd sign up for cryonics/you'd step into Parfit's teleporter/you've read the QM sequence]

not imply

subjective immortality? [you will see people dying, other people will see you die, but you will never experience it yourself]

(contingent on the universe being big enough for lots of continuations of you to exist physically)

I asked this on the official IRC, but only feep was kind enough to oblige (and had a unique argument that I don't think everyone is using).

If you have a completely thought out explanation for why it does imply that, you ought never to be worried about what you're doing leading to your death (maybe painful existence, but never death), because there would be a version of you that would miraculously escape it.

If you bite that bullet as well, then I would like you to formulate your argument cleanly, then answer this (rot13):

jul jrer lbh noyr gb haqretb narfgurfvn? (hayrff lbh pbagraq lbh jrer fgvyy pbafpvbhf rira gura)

ETA: This is slightly different from a Quantum Immortality question (although resolutions might be similar) - there is no need to involve QM or its interpretations here, even in a classical universe (as long as it's large enough), if you're a patternist, you can expect to "teleport" to another exact clone somewhere that manages to live.

Comment author: strangepoop 19 December 2016 04:24:43PM 1 point [-]

Can someone recommend a book on economics basics with the same level of force and completeness as a Jaynes/Drescher/Pearl/Nozick/Dawes?

I mean, with powerful freeing laws (I feel like this is exactly analogous to EY's requiredism in the free will sequence) that can let my imagination wander without fear of fooling myself too much.

I realize that this may be asking for too much given the nature of the field, but anything that is close will do.