Optimal rudeness

-7 PhilGoetz 13 April 2013 03:48AM

On LessWrong, we often get cross, and then rude, with each other. Sometimes, someone then observes that this rudeness is counterproductive.

Is it?

As a general rule, emotional responses are winning strategies (at least for your genes).  That's why you have those emotions.

Granted, insulting someone during your rebuttal of their argument makes it less likely that they will see your point. But it appears to be an effective tactic when carrying on an argument in public.

It's my impression that on LessWrong, a comment or a post written with a certain amount of disdain is more likely to get voted up than a completely objective comment. A good way to obtain upvotes, if that is your goal, is to make other readers wish to identify with you and dissociate themselves from whomever you're arguing against.  A great many upvoted comments, including some of my own, suggest, subtly or not, with or without evidence, that the person being responded to is ignorant or stupid.

The correct amount of derision appears to be slight, and to depend on status. Someone with more status should be more rude. Retaliations against rudeness may really be retaliations for an attempt to claim high status.

What's the optimal response if someone says something especially rude to you?  Is a polite or a rude response to a rude comment more likely to be upvoted/downvoted?  Not ideally, but in reality.  I think, in general, when dealing with humans, responding to skillful rudeness, and especially humorous rudeness, with politeness, is a losing strategy.

My expectation is that rudeness is a better strategy for poor and unpopular arguments than for good or popular ones, because rudeness adds noise.  The lower a comment's expected karma, the ruder it should be.

You jerk.

Willing gamblers, spherical cows, and AIs

15 ChrisHallquist 08 April 2013 09:30PM

Note: posting this in Main rather than Discussion in light of recent discussion that people don't post in Main enough and their reasons for not doing so aren't necessarily good ones. But I suspect I may be reinventing the wheel here, and someone else has in fact gotten farther on this problem than I have. If so, I'd be very happy if someone could point me to existing discussion of the issue in the comments.

tl;dr: Gambling-based arguments in the philosophy of probability can be seen as depending on a convenient simplification of assuming people are far more willing to gamble than they are in real life. Some justifications for this simplification can be given, but it's unclear to me how far they can go and where the justification starts to become problematic.

In "Intelligence Explosion: Evidence and Import," Luke and Anna mention the fact that, "Except for weather forecasters (Murphy and Winkler 1984), and successful professional gamblers, nearly all of us give inaccurate probability estimates..." When I read this, it struck me as an odd thing to say in a paper on artificial intelligence. I mean, those of us who are not professional accountants tend to make bookkeeping errors, and those of us who are not math, physics, engineering, or economics majors make mistakes on GRE quant questions that we were supposed to have learned how to do in our first two years of high school. Why focus on this particular human failing?

A related point can be made about Dutch Book Arguments in the philosophy of probability. Dutch Book Arguments claim, in a nutshell, that you should reason in accordance with the axioms of probability because if you don't, a clever bookie will be able to take all your money. But another way to prevent a clever bookie from taking all your money is to not gamble. Which many people don't, or at least do rarely.

Dutch Book Arguments seem to implicitly make what we might call the "willing gambler assumption": everyone always has a precise probability assignment for every proposition, and they're willing to take any bet which has a non-negative expected value given their probability assignments. (Or perhaps: everyone is always willing to take at least one side of any proposed bet.) Needless to say, even people who gamble a lot generally aren't that eager to gamble.
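To make the bookie's trick concrete, here is a minimal sketch (with made-up numbers) of the simplest possible Dutch book: an agent whose credences in a proposition and its negation sum to more than 1 will, under the willing gambler assumption, buy a ticket on each side at its expected value, and lose money no matter which way the proposition comes out.

```python
def dutch_book_loss(credence_a: float, credence_not_a: float, stake: float = 1.0) -> float:
    """Guaranteed loss for an agent whose credences in A and not-A
    don't sum to 1, assuming (per the willing gambler assumption) the
    agent buys any ticket priced at its expected value.

    The bookie sells a ticket paying `stake` if A, priced at
    credence_a * stake, and a ticket paying `stake` if not-A, priced at
    credence_not_a * stake.  Exactly one of the two tickets pays out.
    """
    total_paid = (credence_a + credence_not_a) * stake
    payout = stake  # exactly one of A, not-A occurs
    return total_paid - payout

# An agent with P(A) = 0.6 and P(not-A) = 0.6 pays $1.20 for a
# guaranteed $1.00 payout: a sure loss of $0.20 either way.
print(dutch_book_loss(0.6, 0.6))
```

With coherent credences (the two numbers summing to 1), the guaranteed loss is zero, which is the point of the argument: only incoherent credences are exploitable this way.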

So how does anyone get away with using Dutch Book arguments for anything? A plausible answer comes from a joke Luke recently told in his article on Fermi estimates:

Milk production at a dairy farm was low, so the farmer asked a local university for help. A multidisciplinary team of professors was assembled, headed by a theoretical physicist. After two weeks of observation and analysis, the physicist told the farmer, "I have the solution, but it only works in the case of spherical cows in a vacuum."

If you've studied physics, you know that physicists don't just use those kinds of approximations when doing Fermi estimates; often they can be counted on to yield results that are in fact very close to reality. So maybe the willing gambler assumption works as a sort of spherical cow that allows philosophers working on issues related to probability to generate important results in spite of the unrealistic nature of the assumption.

Some parts of how this would work are fairly clear. In real life, bets have transaction costs; they take time and effort to set up and collect. But it doesn't seem too bad to ignore that fact in thought experiments. Similarly, in real life money has declining marginal utility; the utility of doubling your money is less than the disutility of losing your last dollar. In principle, if you know someone's utility function over money, you can take a bet with zero expected value in dollar terms and replace it with a bet that has zero expected value in utility terms. But ignoring that and just using dollars for your thought experiments seems like an acceptable simplification for convenience's sake.
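As an illustration of that replacement, here is a sketch of converting a dollar-fair bet into a utility-fair one. It assumes, purely for concreteness, a log-utility agent and made-up numbers; the general point is just that the loss side of the bet has to shrink to compensate for declining marginal utility.

```python
def utility_fair_loss(wealth: float, win: float, p: float) -> float:
    """For a log-utility agent with the given wealth, return the loss
    amount that makes the bet 'win `win` with probability p, lose the
    returned amount with probability 1-p' have exactly zero expected
    utility.  Solves p*log(w + win) + (1-p)*log(w - loss) = log(w).
    """
    return wealth - (wealth / (wealth + win) ** p) ** (1.0 / (1.0 - p))

# A dollar-fair even-odds bet: win $10 vs lose $10.
# The utility-fair version for a log-utility agent with $100 shrinks
# the loss side to roughly $9.09.
loss = utility_fair_loss(100.0, 10.0, 0.5)
print(round(loss, 4))
```

The dollar-fair loss at even odds would be $10; the utility-fair loss is smaller because losing a dollar hurts this agent more than winning one helps, which is exactly the declining-marginal-utility wrinkle the simplification sweeps aside.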

Even making those assumptions so that it isn't definitely harmful to accept bets with zero expected (dollar) value, we might still wonder why our spherical cow gambler should accept them. Answer: because if necessary you could just add one penny to the side of the bet you want the gambler to take, but always having to mention the extra penny is annoying, so you may as well assume the gambler takes any bet with non-negative expected value rather than require positive expected value.

Another thing that keeps people from gambling more in real life is the principle that if you can't spot the sucker in the room, it's probably you. If you're unsure whether an offered bet is favorable to you, the mere fact that someone is offering it to you is pretty strong evidence that it's in their favor. One way to avoid this problem is to stipulate that in Dutch Book Arguments, we just assume the bookie doesn't know anything more about whatever the bets are about than the person being offered the bet, and the person being offered the bet knows this. The bookie has to construct her book primarily based on knowing the other person's propensities to bet. Nick Bostrom explicitly makes such an assumption in a paper on the Sleeping Beauty problem; maybe other people make this assumption explicitly as well, I don't know.

In this last case, though, it's not totally clear whether limiting the bookie's knowledge is all you need to bridge the gap between the willing gambler assumption and how people behave in real life. In real life, people don't often make very exact probability assignments, and may be aware of their confusion about how to make exact probability assignments. Given that, it seems reasonable to hesitate in making bets (even if you ignore transaction costs and declining marginal utility and know that the bookie doesn't know any more about the subject of the bet than you do), because you'd still know the bookie might be trying to exploit your confusion over how to make exact probability assignments.

At an even simpler level, you might adopt a rule, "before making multiple bets on related questions, check to make sure you aren't guaranteeing you'll lose money." After all, real bookies offer odds such that if anyone was stupid enough to bet on each side of a question with the same bookie, they'd be guaranteed to lose money. In a sense, bookies could be interpreted as "money pumping" the public as a whole. But somehow, it turns out that any single individual will rarely be stupid enough to take both sides of the same bet from the same bookie, in spite of the fact that they're apparently irrational enough to be gambling in the first place.
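To see the "money pumping the public" point numerically, here is a sketch using made-up but realistic decimal odds. The bookmaker's implied probabilities across the outcomes sum to more than 1 (the excess is the "overround" or margin), so a single bettor foolish enough to back every side in proportion is guaranteed to lose exactly that margin's worth.

```python
def overround(decimal_odds: list[float]) -> float:
    """Bookmaker's margin: the sum of implied probabilities minus 1.
    The implied probability of an outcome at decimal odds d is 1/d."""
    return sum(1.0 / d for d in decimal_odds) - 1.0

def loss_from_backing_every_side(decimal_odds: list[float], total_stake: float = 100.0) -> float:
    """Guaranteed loss if one bettor stakes on every outcome in
    proportion to the implied probabilities: whichever outcome wins,
    the payout is total_stake / (1 + overround), which is less than
    total_stake whenever the margin is positive."""
    payout = total_stake / (1.0 + overround(decimal_odds))
    return total_stake - payout

# A two-sided market priced at 1.90 each way, a typical real-world
# line for an even matchup (fair odds would be 2.00 each way):
odds = [1.90, 1.90]
print(round(overround(odds), 4))                     # margin of about 5.3%
print(round(loss_from_backing_every_side(odds), 2))  # sure loss on a $100 outlay
```

This is the sense in which the betting public as a whole gets money-pumped: the two sides of the book are collectively holding incoherent implied probabilities, even though no individual bettor took both sides.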

In the end, I'm confused about how useful the willing gambler assumption really is when doing philosophy of probability. It certainly seems like worthwhile work gets done based on it, but just how applicable are those results to real life? How do we tell when we should reject a result because the willing gambler assumption causes problems in that particular case? I don't know.

One possible justification for the willing gambler assumption is that even those of us who never literally gamble still must make decisions whose outcomes are uncertain, and we therefore need to do a decent job of assigning probabilities in those situations. But there are lots of people who are successful in their chosen fields (including fields that require decisions with uncertain outcomes) who aren't weather forecasters or professional gamblers, and who can therefore be expected to make inaccurate probability estimates. Conversely, it doesn't seem that the skills acquired by successful professional gamblers give them much of an edge in other fields. So the relationship between being able to make accurate probability estimates and success in fields that don't specifically require them seems weak.

Another justification for pursuing lines of inquiry based on the willing gambler assumption, a justification that will be particularly salient for people on LessWrong, is that if we want to build an AI based on an idealization of how rational agents think (Bayesianism or whatever), we need tools like the willing gambler assumption to figure out how to get the idealization right. That sounds like a plausible thought at first. But if we flawed humans have any hope of building a good AI, it seems like an AI that's as flawed as (but no more flawed than) humans should also have a hope of self-improving into something better. An AI might be programmed in a way that makes it a bad gambler, but be aware of this limitation, and be left to decide for itself whether, when it self-improves, it wants to focus on improving its gambling ability or improving other aspects of itself.

As someone who cares a lot about AI, the issue of just how useful various idealizations are for thinking about AI (and possibly for programming an AI one day) is especially important to me. Unfortunately, I'm not sure what to say about it, so at this point I'll turn the question over to the comments.

Infinitesimals: Another argument against actual infinite sets

-21 common_law 26 January 2013 03:04AM

[Crossposted]

Argument

My argument from the incoherence of actually existing infinitesimals has the following structure:

1. Infinitesimal quantities can’t exist;

2. If actual infinities can exist, actual infinitesimals must exist;

3. Therefore, actual infinities can’t exist.

Although Cantor, who invented the mathematics of transfinite numbers, rejected infinitesimals, mathematicians have continued to develop analyses based on them that are as mathematically legitimate as transfinite numbers. Few philosophers, however, try to justify actual infinitesimals, which have some of the characteristics of zero and some of the characteristics of real numbers. When you add an infinitesimal to a real number, it’s like adding zero. But when you multiply an infinitesimal by infinity, you sometimes get a finite quantity: the points on a line are of infinitesimal dimension, in that they occupy no space (as an instant occupies no duration), yet they compose lines finite in extent.

Few advocate actual infinitesimals because an actually existing infinitesimal would be indistinguishable from zero. For however small a quantity you choose, it’s obvious that you can make it yet smaller. The role of zero as a boundary accounts for why this is obvious: if I deny that you can reduce a given quantity, you reply that since it can be reduced continuously to zero, it can necessarily be reduced by some lesser amount first, precluding actual infinitesimals. When I raise the same argument about an infinite set, you can’t reply that you can always make the set bigger; if I say add an element, you reply that the sets are still the same size (cardinality). The boundary imposed by zero makes infinitesimals the counterpoint to the openness of infinity, but the ability to demonstrate the incoherence of actual infinitesimals suggests that infinity is similarly infirm.

Can more be said to establish that the conclusion about actual infinitesimal quantities also applies to actual infinite quantities? Consider again the points on a 3-inch line segment. If there are infinitely many, then each must be infinitesimal. Since there are no actual infinitesimals, there are no actual infinities of points.

But this conclusion depends on the actual infinity being embedded in a finite quantity, although, as will be seen, rejecting bounded infinities alone gets you considerable metaphysical mileage. For boundless infinities, consider the number of quarks in a supposed universe containing infinitely many. Form the ratio between the number of quarks in our galaxy and the infinite number of quarks in the universe. The ratio isn’t zero, because then even infinitely many galaxies would form a null proportion of the universal total; it isn’t any real number, because then sufficiently many galaxies would add up to more than the total universe. This ratio must be infinitesimal. Since infinitesimals don’t exist, neither do unbounded infinities (and hence infinite quantities in general, these being either bounded or unbounded).

 

Infinitesimals and Zeno’s paradox

Rejecting actually existing infinities is what really resolves Zeno’s paradox, and it resolves it by way of finding that infinitesimals don’t exist. Zeno’s paradox, perhaps the most intriguing logical puzzle in philosophy, purports to show that motion is impossible. In the version I’ll use, the paradox analyzes my walk from the middle of the room to the wall as decomposable into an infinite series of walks, each reducing the remaining distance by one-half. The paradox posits that completing an infinite series is self-contradictory: infinite means uncompletable. I can never reach the wall, but the same logic applies to any distance; hence, motion is proven impossible.

The standard view holds that the invention of the integral calculus completely resolved the paradox by refuting the premise that an infinite series can’t be completed. Mathematically, the infinite series of times actually does sum to a finite value, which equals the time required to walk the distance; Zeno’s deficiency is pronounced to be that the mathematics of infinite series had yet to be invented. But this answer only shows that (apparent) motion is mathematically tractable; it doesn’t show how motion can occur. Mathematical tractability is bought at the expense of logical rigor, because it is achieved by ignoring the distinction between exclusive and inclusive limits. When I stroll to the wall, the wall represents an inclusive limit: I actually reach the wall. When I integrate the series created by adding half the remaining distance, I only approach the limit equated with the wall. Calculus can be developed in terms of infinitesimals, and in those terms the series comes infinitesimally close to the limit; in this context, we treat the infinitesimal as if it were zero. As we’ve seen, actual infinity and infinitesimals are inseparable, certainly where, as here, the actual infinity is bounded. The calculus solves the paradox only if actual infinitesimals exist, but they don’t.
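A quick sketch of the series in question (illustrative numbers only) makes the exclusive/inclusive distinction concrete: every partial sum falls strictly short of the full distance, so the full distance is the limit the sums approach, never a value any finite stage attains.

```python
def zeno_partial_sums(distance: float, steps: int) -> list[float]:
    """Partial sums of Zeno's series: each sub-trip covers half the
    remaining distance, so sub-trip n has length distance / 2**n."""
    sums, total = [], 0.0
    for n in range(1, steps + 1):
        total += distance / 2 ** n
        sums.append(total)
    return sums

# Walking 16 feet to the wall: the partial sums approach, but never
# reach, 16.  The wall lives at the (exclusive) limit of the series.
print(zeno_partial_sums(16.0, 5))
```

The standard calculus reply identifies "the sum of the series" with that limit; the dispute in the text is over whether that identification explains the walk or merely models it.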

Zeno’s misdirection can now be reconceived: while correctly denying the existence of actual infinities, the paradox falsely affirms the existence of their counterpart, the infinitesimal. It assumes that while I’m uninterruptedly walking to the wall, I occupy a series of infinitesimally small points in space and time, such that I am at a point at a specific time in the same way as if I had stopped there.

Although the objection to analyzing motion in Zeno’s manner was apparently raised as early as Aristotle, the calculus seems to have obscured the metaphysical project more than illuminated it. The logician Graham Priest (Beyond the Limits of Thought, 2003) argues by way of the following thought experiment that Zeno’s paradox shows actual infinities can exist: imagine that rather than walking continuously to the wall, I stop for two seconds at each halfway point. Priest claims the series would then complete, but his argument shows that he doesn’t appreciate that the paradox depends on the stopping points being infinitesimal. Despite the early recognition that (what we now call) infinitesimals are at the root of the paradox, philosophers today don’t always grasp the correct metaphysical analysis.

Distinguishing actual and potential infinities

Recognizing that infinitesimals are mathematical fictions solidifies the distinction between actual and potential infinity. The reason mathematical infinities are not just consistent but useful is that potential infinities can exist. Zeno’s paradox conceives motion as an actual infinity of sub-trips, but in reality all that can be shown is that the sub-trips are potentially infinite. There’s no limit to how many times you can subdivide the path, but traversing it doesn’t automatically subdivide it infinitely; that result would require that there be infinitesimal quantities. This understanding reinforces the point about dubious physical theories that posit an infinity of worlds. It’s been argued that the many-worlds interpretation of quantum mechanics, which invokes an uncountable infinity of worlds, doesn’t require actual infinity any more than the existence of a line segment does, since a segment can be decomposed into uncountably many sub-segments; but this plurality of worlds does not avoid actual infinity. We exist in one of those worlds. Many worlds, unlike infinitesimals and the conceptual line segments employing them, must be conceived as actually existing.

 

You can't signal to rubes

7 Patrick 01 January 2013 06:40AM

The word 'signalling' is often used on Less Wrong, and often used wrongly. This post is intended to call out our community on its wrongful use, as well as to serve, by way of contrast, as an introduction to the correct concept of signalling.


Brain-in-a-vat Trolley Question

-5 nick012000 30 December 2012 03:22AM

Just saw this on another forum. I figured I'd repost it here, since it'd be interesting to see you guys' answers to it.

Consider the following case:

On Twin Earth, a brain in a vat is at the wheel of a runaway trolley. There are only two options that the brain can take: the right side of the fork in the track or the left side of the fork. There is no way in sight of derailing or stopping the trolley and the brain is aware of this, for the brain knows trolleys. The brain is causally hooked up to the trolley such that the brain can determine the course which the trolley will take.

On the right side of the track there is a single railroad worker, Jones, who will definitely be killed if the brain steers the trolley to the right. If the railman on the right lives, he will go on to kill five men for the sake of killing them, but in doing so will inadvertently save the lives of thirty orphans (one of the five men he will kill is planning to destroy a bridge that the orphans' bus will be crossing later that night). One of the orphans that will be killed would have grown up to become a tyrant who would make good utilitarian men do bad things. Another of the orphans would grow up to become G.E.M. Anscombe, while a third would invent the pop-top can.

If the brain in the vat chooses the left side of the track, the trolley will definitely hit and kill a railman on the left side of the track, "Leftie" and will hit and destroy ten beating hearts on the track that could (and would) have been transplanted into ten patients in the local hospital that will die without donor hearts. These are the only hearts available, and the brain is aware of this, for the brain knows hearts. If the railman on the left side of the track lives, he too will kill five men, in fact the same five that the railman on the right would kill. However, "Leftie" will kill the five as an unintended consequence of saving ten men: he will inadvertently kill the five men rushing the ten hearts to the local hospital for transplantation. A further result of "Leftie's" act would be that the busload of orphans will be spared. Among the five men killed by "Leftie" are both the man responsible for putting the brain at the controls of the trolley, and the author of this example. If the ten hearts and "Leftie" are killed by the trolley, the ten prospective heart-transplant patients will die and their kidneys will be used to save the lives of twenty kidney-transplant patients, one of whom will grow up to cure cancer, and one of whom will grow up to be Hitler. There are other kidneys and dialysis machines available, however the brain does not know kidneys, and this is not a factor.

Assume that the brain's choice, whatever it turns out to be, will serve as an example to other brains-in-vats and so the effects of his decision will be amplified. Also assume that if the brain chooses the right side of the fork, an unjust war free of war crimes will ensue, while if the brain chooses the left fork, a just war fraught with war crimes will result. Furthermore, there is an intermittently active Cartesian demon deceiving the brain in such a manner that the brain is never sure if it is being deceived.

QUESTION: What should the brain do?

[ALTERNATIVE EXAMPLE: Same as above, except the brain has had a commissurotomy, and the left half of the brain is a consequentialist and the right side is an absolutist.]

Stop Using LessWrong: A Practical Interpretation of the 2012 Survey Results

-37 aceofspades 30 December 2012 10:00PM

Link to those results: http://lesswrong.com/lw/fp5/2012_survey_results/

I've been basically lurking this site for more than a year now and it's incredible that I have actually taken anything at all on this site seriously, let alone that at least thousands of others have. I have never received evidence that I am less likely to be overconfident about things than people in general or that any other particular person on this site is.

Yet in spite of this apparently 3.7% of people answering the survey have actually signed up for cryonics which is surely greater than the percent of people in the entire world signed up for cryonics. The entire idea seems to be taken especially seriously on this site. Evidently 72.9% of people here are at least considering signing up. I think the chance of cryonics working is trivial, for all practical purposes indistinguishable from zero (the expected value of the benefit is certainly not worth several hundred thousand dollars in future value considerations). Other people here apparently disagree, but if the rest of the world is undervaluing cryonics at the moment then why do those here with privileged information not invest heavily in the formation of new for-profit cryonics organizations, or start them alone, or invest in technology which will soon develop to make the revival of cryonics patients possible? If the rest of the world is underconfident about these ideas, then these investments would surely have an enormous expected rate of return.

There is also a question asking about the relative likelihood of different existential risks, which seems to imply that any of these risks are especially worth considering. This is not really a fault of the survey itself, as I have read significant discussion on this site related to these ideas. In my judgment this reflects a grand level of overconfidence in the probabilities of any of these occurring. How many people responding to this survey have actually made significant personal preparations for survival, like a fallout shelter with food and so on which would actually be useful under most of the different scenarios listed? I generously estimate 5% have made any such preparations.

I also see mentioned in the survey and have read on this site material related to in my view meaningless counterfactuals. The questions on dust specks vs torture and Newcomb's Problem are so unlikely to ever be relevant in reality that I view discussion about them as worthless.

My judgment of this site as of now is that way too much time is spent discussing subjects of such low expected value (usually because of absurdly low expected probability of occurring) for using this site to be worthwhile. In fact I hypothesize that this discussion actually causes overconfidence related to such things happening, and at a minimum I have seen insufficient evidence for the value of using this site to continue doing so.

[Link] "An OKCupid Profile of a Rationalist"

-16 Athrelon 14 November 2012 01:48AM

The rationalist in question, of course, is our very own EY.

Quotes giving a reasonable sample of the spectrum of reactions:

Epic Fail on the e-harmony profile. He’s over-signalling intelligence. There’s a good paper about how much to optimally signal, like when you have a PhD to put it on your business card or not. This guy is going around giving out business cards that read Prof. Dr. John Doe, PhD, MA, BA. He won’t be getting laid any time soon.

His profile is probably very effective for aspergery girls who like reading the kinds of things that appear on LessWrong. Yudkowsky is basically a celebrity within a small niche of hyper-nerdy rationalists, so I doubt he has much trouble getting laid by girls in that community.

You make it sound like a cult leader or something....And reading the profile again with that lens, it actually makes a lot of sense.

I was about to agree [that the profile is oversharing], but then come to think of it, I realize I have an orgasm denial fetish, too. It’s an aroused preference that never escaped to my non-aroused self-consciousness.

Why is this important to consider? 

LessWrong as a community is dedicated to trying to "raise the sanity waterline," and its most respected members in particular put a lot of resources into outreach, via CFAR, HPMoR, and maintaining this site.  But a big factor in how people perceive our brand of rationality is image.  If we're serious about raising the sanity waterline, that means image management - or at least avoiding active image malpractice - is something we should enthusiastically embrace as a way to achieve our goals. [1]

This is also a valuable exercise in considering the outside view.  Marginal Revolution is already a fairly WEIRD site, focused on abstract economic issues.  If any major blog is likely to be sympathetic to our cultural quirks, this would be it.  Yet a plurality of commenters reacted negatively. 

To the extent that we didn't notice anything strange about LW's figurehead having this OKCupid profile, LW either failed at calibrating mainstream reaction, or failed at consequentialism and realizing the drag this would have on our other recruitment efforts.  In our last discussion, there were only a few commenters raising concerns, and the consensus of the thread was that it was harmless and had no PR consequences worth noting.

As one commenter cogently put it,

I’m not saying that he’s trying to make a statement with this, I’m saying that he is making a statement about this whether he’s trying to or not. Ideas have consequences for how we live our lives, and that Eliezer has a public, identifiable profile up where he talks about his sexual fetishes is not some sort of randomly occurring event with no relationship to his other ideas.

I'd argue the same reasoning applies to the community at large, not just EY specifically.

[1] From Anna's excellent article: 5. I consciously attempt to welcome bad news, or at least not push it away. (Recent example from Eliezer: At a brainstorming session for future Singularity Summits, one issue raised was that we hadn't really been asking for money at previous ones. My brain was offering resistance, so I applied the "bad news is good news" pattern to rephrase this as, "This point doesn't change the fixed amount of money we raised in past years, so it is good news because it implies that we can fix the strategy and do better next year.")

Any existential risk angles to the US presidential election?

-9 Stuart_Armstrong 20 September 2012 09:44AM

Don't let your minds be killed, but I was wondering if there were any existential risk angles to the coming American election (if there aren't, then I'll simply retreat to raw, enjoyable and empty tribalism).

I can see three (quite tenuous) angles:

  1. Obama seems more likely to attempt to get some sort of global warming agreement. While not directly related to Xrisks per se, this would lead to better global coordination and agreement, which improves the outlook for a lot of other Xrisks. However, pretty unlikely to succeed.
  2. I have a mental image that Republicans would be more likely to invest in space exploration. This is largely due to Newt Gingrich, I have to admit, and to the closeness between civilian and military space projects, the latter of which are more likely to get boosts under Republican governments.
  3. If we are holding out for increased population rationality as being a helping factor for some Xrisks, then the fact that the Republicans have gone so strongly anti-science is certainly a bad sign. But on the other hand, it's not clear whether their winning or losing the election is more likely to improve the general environment for science among their supporters.

But these all seem weak factors. So, Less Wrongers, let me know: are there things I should care about in the election, or can I just lie back and enjoy it as a piece of interesting theatre?

 

Meta: LW Policy: When to prohibit Alice from replying to Bob's arguments?

-3 SilasBarta 12 September 2012 03:29AM

In light of recent (and potential) events, I wanted to start a discussion here about a certain method of handling conflicts on this site's discussion threads, and hopefully form a consensus on when to use the measure described in the title.  Even if the discussion has no impact on site policy ("executive veto"), I hope administrators will at least clarify when such a measure will be used, and for what reason.

I also don't want to taint or "anchor" the discussion by offering hypothetical situations or arguments for one position or another.  Rather, I simply want to ask: Under what conditions should a specific poster, "Alice" be prohibited from replying directly to the arguments in a post/comment made by another poster, "Bob"?  (Note: this is referring specifically to replies to ideas and arguments Bob has advanced, not general comments about Bob the person, which should probably go under much closer scrutiny because of the risk of incivility.)

Please offer your ideas and thoughts here on when this measure should be used.

Circular Preferences Don't Lead To Getting Money Pumped

-3 Mestroyer 11 September 2012 03:42AM

Edit: for reasons given in the comments, I don't think the question of what circular preferences actually do is well defined, so this an answer to a wrong question.

 

If I like Y more than X, at an exchange rate of 0.9Y for 1X, and I like Z more than Y, at an exchange rate of 0.9Z for 1Y, and I like X more than Z, at an exchange rate of 0.9X for 1Z, you might think that given 1X and the ability to trade X for Y at an exchange rate of 0.95Y for 1X, and Y for Z at an exchange rate of 0.95Z for 1Y, and Z for X at an exchange rate of 0.95X for 1Z, I would trade in a circle until I had nothing left.

But actually, if I knew that I had circular preferences, and I knew that if I had 0.95Y I would trade it for (0.95^2)Z, which I would trade for (0.95^3)X, then I'd really be trading my 1X for (0.95^3)X, which I'm obviously not going to do.
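The arithmetic above can be sketched as a toy calculation, using the post's own rates: each trade pays 0.95 units of the next good per unit of the current one, so one full circle multiplies the holding by 0.95 cubed.

```python
def after_full_circle(x_amount: float, rate: float = 0.95, circles: int = 1) -> float:
    """Amount of X left after trading X -> Y -> Z -> X around the
    circle `circles` times, when each trade pays `rate` units of the
    next good per unit of the current good (three trades per circle)."""
    return x_amount * rate ** (3 * circles)

# One trip around the circle turns 1 X into 0.95**3 X = 0.857375 X,
# which is why a foresightful agent refuses the first trade.
print(round(after_full_circle(1.0), 6))

# Ten circles: what the classic money-pump story imagines happening
# to an agent who can't anticipate its own future trades.
print(round(after_full_circle(1.0, circles=10), 6))
```

This is the post's point in miniature: the pump only operates on an agent that evaluates each trade in isolation; an agent that computes the round-trip total, as this function does, sees it is strictly losing and declines.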

Similarly, if the exchange rates are all 1:1, but each trade costs 1 penny, and I care about 1 penny much much less than any of 1X, 1Y, or 1Z, and I trade my X for Y, I know I'm actually going to end up with X - 3 cents, so I won't make the trade.

Unless I can set a Schelling fence, in which case I will end up trading once.

So if instead of being given X, I have a 1/3 chance of each of X, Y, and Z, I would hope I wouldn't set a Schelling fence, because then my 1/3 chance of each thing becomes a 1/3 chance of each thing minus the trading penalty. So maybe I'd want to be bad at precommitments, or would I precommit not to precommit?
