Rationality Quotes August 2014
- Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself.
- Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here.
- No more than 5 quotes per person per monthly thread, please.
- Provide sufficient information (URL, title, date, page number, etc.) to enable a reader to find the place where you read the quote, or its original source if available. Do not quote with only a name.
Pareto improvement in gym norms: Spread the word!
This article is in a superposition of tongue-in-cheek and tongue straight in the mouth. (That's a Norwegian expression meaning "to concentrate on something difficult".) If you read it, please report your experimental observation of which it is, so that we can determine the amplitudes of the two states. However, I am actually making a serious point: Why do we have this non-optimal norm, and can we change it?
Gyms, at least the ones I've been in, seem to have a norm that each user should wipe his own sweat off the machine he just used. This is obviously inefficient. Consider that there are two kinds of users: sensible, rational people (SRPs), who don't give a damn about other people's sweat on the machine; and finicky fussbudget frumpy failures (4Fs) (names chosen at random out of a hat, and completely unrelated to my own opinion on the point), who are too precious to have anyone else's sweat in their immediate vicinity; it's not as though they're going to shower after their exercise, right? Anyway.
Under the existing norm, everyone has to clean once per machine use, but only the 4Fs are getting any utilons. Clearly, if we switch to a norm that everyone optionally cleans the machine they're about to use, then the SRPs are saved some work, while the 4Fs still get to use clean machines. This is an obvious Pareto improvement. Moreover, it's also a Nash equilibrium, since nobody can improve his situation either by cleaning a machine after using it or by declining to clean one beforehand. (Incidentally, the current norm is a puzzling failure of the usual rule of thumb that social arrangements are Nash equilibria - why have we chosen this particular activity as one where we put effort into pushing people away from the equilibrium?)
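To make the Pareto and Nash claims concrete, here is a toy utility model; the payoff numbers b and c are invented for illustration (any b > c > 0 gives the same conclusions):

```python
# Toy utility model for the two gym norms. All numbers are illustrative
# assumptions: a 4F gains b utilons from using a clean machine, cleaning
# costs c utilons (with b > c), and an SRP gains nothing either way.

def payoff(user_type, cleans, machine_is_clean, b=1.0, c=0.3):
    u = b if (user_type == "4F" and machine_is_clean) else 0.0
    if cleans:
        u -= c
    return u

# Old norm: everyone cleans after use, so machines are always clean.
old = {"SRP": payoff("SRP", True, True), "4F": payoff("4F", True, True)}

# New norm: clean (optionally) before use. SRPs skip it, 4Fs do it.
new = {"SRP": payoff("SRP", False, False), "4F": payoff("4F", True, True)}

# Pareto improvement: nobody worse off, SRPs strictly better off.
assert new["SRP"] > old["SRP"] and new["4F"] >= old["4F"]

# Nash check: under the new norm, no unilateral deviation helps.
assert payoff("SRP", True, True) < new["SRP"]   # SRP cleaning: wasted effort
assert payoff("4F", False, False) < new["4F"]   # 4F skipping: loses b > c
```

The asserts encode exactly the two claims in the paragraph above: the switch is a Pareto improvement, and no single user gains by deviating from the new norm.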
Please spread the word of this obvious improvement in gym-users' quality of life! Also, please push society towards the Nash equilibrium by defecting from the current norm: Either clean your machine before, not after, using it, or else don't clean it at all. If anyone challenges you, give them a quick lecture on economics - this has the added benefit of making you popular with the opposite sex.
Some possible objections:
1. My mother taught me to clean up after myself.
And imagine how much more pleasant your childhood would have been, if only you'd known about Nash equilibria and Pareto improvements! However, not all is lost: You can still try to convince your SO or roommate that the one who cares most about mess should be the one to clean it up.
2. My utility function has a term for not making others do work.
Also, apparently, for signalling your concern for others. The total amount of work done is rather less in my proposed new equilibrium. Suggest you update accordingly.
3. I prefer cleaning up my own sweat to cleaning that of others.
Have you considered the benefits of self-modifying to be more masochistic? Today's society offers all kinds of opportunities for turning yourself on, if only you could take advantage! This could actually be more efficient than taking a pill that makes you bisexual, since you can only sleep with so many people in one lifetime anyway. Repeat after me: Thank you for making me clean the machine, Master! Please may I clean another? There, do you feel the surge of hormones?
4. If I have to clean the machine, everyone else should too!
Until the rest of society has self-modified to be sufficiently masochistic to derive pleasure from your dominance, you should not attempt to impose it on them. This aside, have you considered the benefits of suggesting suitable punishments for anyone who doesn't clean their machine? Aren't they being rather naughty? Many exciting encounters may result from this handy ice-breaker!
5. My gym doesn't have that norm.
Excellent! Please spread the word. Today your gym, tomorrow mine!
Iterated Prisoner's Dilemma in software patents
This post contains some thoughts around software-patent strategies for large tech companies, in particular how the ability to block others' applications seems to set up an Iterated Prisoner's Dilemma and may change the strategic landscape for patents entirely.
Joel Spolsky writes of recent successes in blocking bad patent applications:
Micah showed me a document from the USPTO confirming that they had rejected the patent application, and the rejection relied very heavily on the document I found. This was, in fact, the first “confirmed kill” of Ask Patents, and it was really surprisingly easy.
and suggests that this may lead to a "Mexican Standoff" among major software companies:
My dream is that when big companies hear about how friggin’ easy it is to block a patent application, they’ll use Ask Patents to start messing with their competitors. How cool would it be if Apple, Samsung, Oracle and Google got into a Mexican Standoff on Ask Patents? If each of those companies had three or four engineers dedicating a few hours every day to picking off their competitors’ applications, the number of granted patents to those companies would grind to a halt. Wouldn’t that be something!
It seems to me that this would be something of a Prisoner's Dilemma situation for the companies: Presumably, each of them is best off if it is the only one that can get any software patents (it defects by blocking the others, they cooperate by not setting up a patent-blocking team), better off if everyone can get patents (everyone cooperates by not having a blocking team), and worst off if nobody can get patents (everyone has a blocking team which they have to pay for). It is Iterated because the decision to block or not block can be made anew every month, or quarter, or whatever. So the question is, will these companies filled with smart people be able to recognise an IPD, and will they cooperate?
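For concreteness, here is one toy payoff model (all numbers invented) under which the blocking game is a strict Prisoner's Dilemma. It adds one assumption not spelled out above: being the only company without patents (the "sucker" outcome) is worse than mutual blocking.

```python
# Toy payoff model for the patent-blocking game. PATENTS, TEAM, and EDGE
# are made-up numbers: the value of getting your applications granted,
# the cost of staffing a blocking team, and the competitive edge from
# being the only company holding patents.

PATENTS, TEAM, EDGE = 10.0, 2.0, 5.0

def payoff(i_block, they_block):
    v = 0.0
    if not they_block:
        v += PATENTS            # my applications get granted
        if i_block:
            v += EDGE           # and rivals' don't: I alone hold patents
    elif not i_block:
        v -= EDGE               # rival alone holds patents, used against me
    if i_block:
        v -= TEAM               # I pay for my blocking team
    return v

T = payoff(True, False)    # temptation: block alone
R = payoff(False, False)   # reward: nobody blocks
P = payoff(True, True)     # punishment: everybody blocks
S = payoff(False, True)    # sucker: blocked, don't retaliate

# Prisoner's Dilemma conditions: T > R > P > S and 2R > T + S.
assert T > R > P > S and 2 * R > T + S
```

With these numbers the ordering T > R > P > S holds, so mutual blocking is the one-shot equilibrium even though everyone prefers mutual restraint; whether the iterated version sustains cooperation is exactly the question posed above.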
Some factors to consider: Setting up a patent-blocking team requires some small amount of effort, so inertia is in favour of cooperation. On the other hand, many individual engineers at these places are likely out of sympathy with the patents that their managers insist on, and may be delighted to push the 'D' button under the guise of sabotaging their competitors. (And at least some of the major tech companies have 20% time or equivalents, so there wouldn't even be much inertia to overcome - just decide to do it!)
Another point is that this is a multiplayer game, but it only takes two companies to block everyone: For example, Google blocks everyone except Google, and then exactly one company needs to retaliate to make the block complete. This does of course raise the question of who is going to step forward and pay for the retaliation; but on the other hand, the cost appears small. The free-rider problem exists, but it does not seem to be large.
Another point: The ease of patent-blocking may change the strategic landscape entirely, by making it not worth the effort to file for patents in the first place. It appears to me that everyone involved knows that these patents are worthless. They file them for some mix of prestige, "everyone does it", and ability to retaliate if someone else sues using _their_ worthless overbroad patents. Presumably it is only worth expending engineer time on this because the patents are very likely to be granted; conversely, it's only worth having patent-blocking teams if a lot of worthless applications are filed. The equilibrium is not clear to me, but it seems that it will have to shift at least slightly in the direction of having engineers do more bug-fixing and less patent-filing.
Meetup : Cincinnati: Financial optimisation
Discussion article for the meetup : Cincinnati: Financial optimisation
This month we will meet to discuss how to apply rationality skills to personal finance: What are our goals, what ought to be our goals, and how can we accomplish them. We will try to identify levels, including especially the levels above our own; however, a certain amount of mockery for the levels below our own may also occur, on the grounds that it will have an anti-akrasic effect.
Attempting to rescue logical positivism
Very brief recap: The logical positivists said "All truths are experimentally testable". Their critics responded: "If that's true, how did you experimentally test it? And if it's not true, who cares?" Which is a fair criticism. Logical positivism pretty much collapsed as a philosophical position. But it seems to me that a very slight rephrasing might have saved it: "All _beliefs_ are experimentally testable". For if the critic makes the same adjustment, asking "Is that a belief, and if so -" you can interrupt him and say, "No, that's not a belief, that's a definition of what it means to say 'I believe X'."
A definition is not true or false, it is useful or not useful. Why is this definition useful? Because it allows us to distinguish between two classes of declarative statements; the ones that are actual beliefs, and the ones that have the grammatical form of beliefs but are empty of meaningful belief-content.
It seems to me, then, that both the positivists and their critics fell into the trap of confusing 'belief' and 'truth', and that carefully making this distinction might have saved positivism from considerable undeserved mockery.
Meetup : Cincinnati near-Schelling day
Discussion article for the meetup : Cincinnati near-Schelling day
It turns out that Schelling Day is not, in fact, a Schelling point locally; but we will run the ritual anyway, with better snacks.
Meetup : Cincinnati February: Predictions
Discussion article for the meetup : Cincinnati February: Predictions
We will meet at the Amol India on Ludlow street at 1400. This month's exercise is to try for calibration. Each of us should give a probability for each of the five events below occurring before April 1st; at the meetup we will discuss our reasoning and perhaps update. Then in April we'll see how we did, as a group and individually.
- Will Kim Jong-un cease to be dictator of Best Korea?
- Will the sentence of any of the seven scientists convicted of failing to adequately warn of the L'Aquila earthquake be modified?
- Will a large (more than 1000 soldiers) foreign force invade Iran?
- Will Pope Benedict's resignation be revealed to have been for other than medical reasons?
- Will Eliezer update HPMOR a) Exactly once b) Exactly twice c) Three or more times?
If you like, you can substitute other questions, or suggest new ones, in the comments. Posting to PredictionBook is optional but encouraged.
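For the scoring in April, one simple calibration metric is the Brier score: the mean squared error between stated probabilities and 0/1 outcomes. The probabilities and outcomes below are invented placeholders, not anyone's actual predictions:

```python
# Minimal Brier-score sketch. Each entry pairs a probability assigned in
# February with the outcome (0 or 1) observed in April; lower is better,
# and always predicting 0.5 scores 0.25.

def brier(forecasts):
    """Mean squared error between stated probabilities and outcomes."""
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

example = [(0.1, 0), (0.3, 0), (0.05, 0), (0.2, 1), (0.6, 1)]
score = brier(example)   # roughly 0.18 for these made-up numbers
```

Computing both a group score and individual scores would let us compare, as the post suggests, how we did collectively and separately.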
Meetup : Ohio LessWrong in Cincinnati
Discussion article for the meetup : Ohio LessWrong in Cincinnati
We will meet at the Taft Museum entrance. Dinner afterwards will include highly-intellectual discussion of the art. Be sure you have something interesting, intelligent, and rational to say! No pressure, of course.
How to update P(x this week), upon hearing P(x next month) = 99.5%?
Suppose you want to assign a probability that a government will fall (i.e., the Prime Minister resigns) before the end of the year. Lacking any particular information - I haven't even told you which government it is - you say "Obviously, it's 50% - either it happens or not" (or perhaps "Oh, say, 10%, governments can usually rely on lasting a year at least"), put that prediction into your registry, and go on with your life. Then, on December 1st, you hear that the Prime Minister in question has promised to resign and call an election in March of next year. How should this affect your probability that he will resign before the end of this year?
I see several arguments:
1. Having gotten this public commitment out of him, his opponents have no particular reason to push his government further. It should become more stable for the finite time it has left. My probability of a resignation in December should go down.
2. His opponents were able to extract such a promise; it follows that he cannot be quite confident in his ability to survive a vote of no confidence. Such a signal of weakness might easily lead to a "blood-in-the-water" effect whereby his opponents become more aggressive and go for the immediate kill. His government will surely fall before this attempted compromise date; my probability should go up.
3. The March date wasn't chosen at random. Presumably there is something the PM thinks he can get accomplished if he retains his position until March, but not if he resigns right away. So, presumably, his opponents will be all the more eager for him to resign before he gets it done, whatever it is; they will put more resources into toppling him. Again, my probability should go up.
The question is not hypothetical: I was faced with precisely this problem in December, and got it wrong. I'd like to see how others think about it.
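For what it's worth, the three arguments can be combined mechanically in odds form, if one is willing to put likelihood ratios on them. The prior and the ratios below are pure assumptions, standing in for arguments 1-3; the point is the mechanics, not the numbers.

```python
# Odds-form Bayes update. Multiplying the likelihood ratios together also
# assumes the three considerations are conditionally independent pieces
# of evidence, which is itself debatable.

def update(prior_p, likelihood_ratios):
    odds = prior_p / (1 - prior_p)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

prior = 0.10              # "governments can usually rely on lasting a year"
lrs = [0.5, 1.8, 1.5]     # argument 1 pushes down, arguments 2 and 3 push up
posterior = update(prior, lrs)   # about 0.13 with these made-up numbers
```

The exercise at least forces each argument to be weighed explicitly, rather than letting whichever was stated last dominate.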
How much to spend on a high-variance option?
So the jackpot in the Ohio lottery is around 25 million dollars, and the chance of winning it is roughly one in 14 million, with tickets at 1 dollar apiece. It appears to me that roughly a quarter million tickets are sold each drawing; so, supposing you win, the probability of someone else also winning is 1 - (1 - 1/14e6)^250000 ≈ 2%, which does not significantly reduce the expected value of a ticket. So, unless I'm making a silly mistake somewhere, buying lottery tickets has positive expected value. (I find this counterintuitive; where are all the economists who should be picking up this free money? But I digress.)
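Reproducing the back-of-envelope calculation, with the jackpot, odds, and ticket sales as stated (and ignoring taxes and lump-sum discounting, either of which could flip the conclusion):

```python
# Lottery EV sketch using the figures quoted above.
jackpot = 25e6
p_win = 1 / 14e6
tickets_sold = 250_000

# Chance that at least one other ticket also wins, given that yours does:
p_split = 1 - (1 - p_win) ** tickets_sold    # just under 2%

# Crude EV: assume at most one co-winner, splitting the pot in half,
# and subtract the $1 ticket price.
ev = p_win * jackpot * (1 - p_split / 2) - 1   # roughly +$0.77 per ticket
```

So per these assumptions a ticket is worth about 77 cents more than it costs, confirming the positive-expected-value claim as far as the stated numbers go.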
I pointed this out to my wife, and said that it might be worth putting a dollar into it; and she very cogently asked, "Then why not make it 100 dollars?" Why not, indeed! Is there any sensible way of deciding how much to put into an option that has a positive expected value, but very low chance of payoff?
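One standard (though not uncontested) answer is the Kelly criterion, which sizes a bet to maximize the expected logarithm of wealth. The bankroll figure below is an assumption; the jackpot and odds are the ones stated above.

```python
# Kelly-criterion sketch for the lottery question. For a bet at net odds
# b with win probability p, Kelly stakes f* = p - (1 - p)/b of bankroll.

BANKROLL = 50_000.0       # assumed household bankroll, purely illustrative
p = 1 / 14e6              # chance of winning the jackpot
b = 25e6 - 1              # net odds: win ~$25M on a $1 stake

f_star = p - (1 - p) / b  # optimal fraction of bankroll to stake
stake = max(0.0, f_star) * BANKROLL

# f_star comes out around 3e-8, so the stake is a fraction of a cent:
# Kelly treats a positive-EV but microscopic-probability bet as deserving
# an almost-zero fraction of wealth.
```

On this view the answer to "why not 100 dollars?" is that even one dollar already overshoots the log-wealth-optimal stake by several orders of magnitude; the positive expectation is real but too improbable to be worth meaningful variance.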