If a job requires in-person customer/client contact or has a conservative dress code, long hair is a negative for men. I can't think of a job where long hair might be a plus aside from music, arts, or modeling. It's probably neutral for Bay Area programmers assuming it's well maintained. If you're inclined towards long hair since it seems low effort, it's easy to buy clippers and keep it cut to a uniform short length yourself.
Beards are mostly neutral--even where long hair would be negative--again assuming they are well maintained. At a minimum, trim it every few weeks and shave your neck regularly.
From the Even Odds thread:
Assume there are n people. Let S_i be person i's score for the event that occurs according to your favorite proper scoring rule. Then let the total payment to person i be T_i = S_i - (1/(n-1)) * sum_{j ≠ i} S_j
(i.e. the person's score minus the average score of everyone else). If there are two people, this is just the difference in scores. The person makes a profit if T_i is positive and a payment if T_i is negative.
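A minimal sketch of this payment rule in Python, using the quadratic (Brier) rule as the scoring rule; the reports and outcome are made up for illustration:

```python
# A minimal sketch of the payment rule above, using the quadratic (Brier)
# scoring rule. The reported probabilities and outcome are made up.

def brier_score(p, outcome):
    """Quadratic score for reporting probability p on a binary event."""
    return 1 - (outcome - p) ** 2

reports = [0.9, 0.6, 0.3]   # hypothetical probability reports from 3 people
outcome = 1                  # the event occurred

scores = [brier_score(p, outcome) for p in reports]
n = len(scores)

# T_i = S_i - average score of everyone else
payments = [s - (sum(scores) - s) / (n - 1) for s in scores]

print(payments)              # positive = profit, negative = payment
print(sum(payments))         # payments sum to zero (budget balance)
```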
This scheme is always strategyproof and budget-balanced. If the Bregman divergence associated with the scoring rule is symmetric (like it is with the quad...
Not aware of any tourneys with this tweak, but I use a similar example when I teach.
If the payoff from exiting is zero and the mutual defection payoff is negative, then the game doesn't change much. Exit on the first round becomes the unique subgame-perfect equilibrium of any finite repetition, and with a random end date, trigger strategies to support cooperation work similarly to the original game.
Life is more interesting if the mutual defection payoff is sufficiently better than exit. Cooperation can happen in equilibrium even when the end date is known (except on the last round) since exit is a viable threat to punish defection.
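As a toy numeric check (the stage payoffs below are my own invented numbers: mutual cooperation = 2, mutual defection = 1, temptation = 3, exit = 0), defecting early and then facing exit never beats cooperating until the last round:

```python
# Hypothetical stage payoffs, chosen only for illustration:
# mutual cooperation = 2, mutual defection = 1, temptation (defect on a
# cooperator) = 3, exit = 0.  T rounds, end date known.

T = 10  # known number of rounds

def follow_strategy(t):
    """Payoff from round t onward if both cooperate until the last round,
    then defect on the last round (the candidate equilibrium path)."""
    return 2 * (T - t) + 1

def deviate_now(t):
    """Payoff from defecting at round t when the opponent responds by
    exiting: one temptation payoff, then the exit payoff (0) thereafter."""
    return 3 + 0 * (T - t)

for t in range(1, T):
    assert follow_strategy(t) >= deviate_now(t), f"deviation pays at round {t}"
print("Early defection never pays against the exit threat (with these payoffs).")
```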
From an economics perspective, the stapler dissertation is real. The majority of the time, the three papers haven't been published.
It's also possible to publish empirical work produced in a few months. The issue is where that article is likely to be published. There's a clear hierarchy of journals, and a low-ranked publication could hurt more than it helps. Dissertation committees have very different standards depending on the student's ambition to go into academia. If the committee has to write letters of rec to other professors, it takes a lot more work...
Results like the Second Welfare Theorem (every efficient allocation can be implemented via competitive equilibrium after some lump-sum transfers) suggest it must be equivalent in theory.
Eric Budish has done some interesting work changing the course allocation system at Wharton to use general equilibrium theory behind the scenes. In the previous system, courses were allocated via a fake money auction where students had to actually make bids. In the new system, students submit preferences and the allocation is computed as the equilibrium starting from "...
My intuition is that every good allocation system will use prices somewhere, whether the users see them or not. The main perk of the story's economy is getting things you need without having to explicitly decide to buy them (e.g. the down-on-his-luck guy unexpectedly gifted his favorite coffee), and that could be implemented through individual AI agents rather than a central AI.
Fleshing out how this might play out, if I'm feeling sick, my AI agent notices and broadcasts a bid for hot soup. The agents of people nearby respond with offers. The lowest offer might c...
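A toy sketch of that exchange; the agent names, prices, and the "cheapest acceptable offer wins" rule are all hypothetical:

```python
# A toy sketch of the agent-to-agent exchange described above.  All names,
# prices, and the "lowest acceptable offer wins" rule are hypothetical.

def broadcast_bid(item, max_price, nearby_agents):
    """My agent asks nearby agents for offers and accepts the cheapest one
    at or below what I'm willing to pay."""
    offers = [(agent.offer(item), agent) for agent in nearby_agents]
    offers = [(price, agent) for price, agent in offers if price is not None]
    if not offers:
        return None
    price, seller = min(offers, key=lambda o: o[0])
    return (seller.name, price) if price <= max_price else None

class NeighborAgent:
    def __init__(self, name, inventory):
        self.name, self.inventory = name, inventory
    def offer(self, item):
        return self.inventory.get(item)  # price if available, else None

neighbors = [NeighborAgent("cafe", {"hot soup": 4.0}),
             NeighborAgent("food cart", {"hot soup": 3.0}),
             NeighborAgent("office kitchen", {})]
print(broadcast_bid("hot soup", max_price=5.0, nearby_agents=neighbors))
```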
I'm on board with "absurdly powerful". It underlies the bulk of mechanism design, to the point my advisor complains we've confused it with the entirety of mechanism design.
The principle gives us the entire set of possible outcomes for some solution concept like dominant-strategy equilibrium or Bayes-Nash equilibrium. It works for any search over the set of outcomes, whether that leads to an impossibility result or a constructive result like identifying the revenue-optimal auction.
Given an arbitrary mechanism, it's easy (in principle) to find the ...
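A minimal sketch of that construction, with a placeholder mechanism and a placeholder equilibrium strategy (both made up): the direct mechanism just asks agents for their types and plays their equilibrium messages for them.

```python
# A minimal sketch of constructing a direct mechanism from an arbitrary one.
# The original mechanism maps messages to outcomes; `equilibrium_strategy`
# maps an agent's type to their equilibrium message.  Both are hypothetical
# placeholders, just to show the composition.

def original_mechanism(messages):
    # e.g. a first-price auction: highest message wins and pays its bid
    winner = max(range(len(messages)), key=lambda i: messages[i])
    return {"winner": winner, "payment": messages[winner]}

def equilibrium_strategy(agent_type):
    # hypothetical equilibrium bid: shade the value by half
    return agent_type / 2

def direct_mechanism(reported_types):
    """Ask for types, then play each agent's equilibrium message for them."""
    messages = [equilibrium_strategy(t) for t in reported_types]
    return original_mechanism(messages)

# Truth-telling in the direct mechanism reproduces the equilibrium outcome
# of the original mechanism.
print(direct_mechanism([10, 6, 8]))   # {'winner': 0, 'payment': 5.0}
```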
The paper cited is handwavy and conversational because it isn't making original claims. It's providing a survey for non-specialists. The table I mentioned is a summary of six other papers.
Some of the studies assume workers from poorer countries are permanently only 1/3rd or 1/5th as productive as native workers, so the estimate is based on something more like: a person transferred from a $5,000 GDP/capita economy to a $50,000 GDP/capita economy is able to produce $10-15K in value.
For context on the size of the potential benefit, an additional 1% migration rate would increase world GDP by about 1% (i.e. about one trillion dollars). The main question is the rate of migration if barriers are partially lowered, with estimates varying between 1% and 30%. Completely open migration could double world output. These figures are based on Table 2 of Clemens (2011).
an additional 1% migration rate would increase world GDP by about 1% (i.e. about one trillion dollars)
I am having strong doubts about this number. The paper cited is long on handwaving and seems to be entirely too fond of expressions like "should make economists’ jaws hit their desks" and "there appear to be trillion-dollar bills on the sidewalk". In particular, there is the pervasive assumption that people are fungible so transferring a person from a $5,000 GDP/capita economy to a $50,000 GDP/capita economy immediately nets you $45,000 in additional GDP. I don't think this is true.
The issue is when we should tilt outcomes in favor of higher credence theories. Starting from a credence-weighted mixture, I agree theories should have equal bargaining power. Starting from a more neutral disagreement point, like the status quo actions of a typical person, higher credence should entail more power / votes / delegates.
On a quick example, equal bargaining from a credence-weighted mixture tends to favor the lower credence theory compared to weighted bargaining from an equal status quo. If the total feasible set of utilities is {(x,y) | x^2 + ...
For the NBS with more than two agents, you just maximize the product of everyone's gain in utility over the disagreement point. For Kalai-Smorodinsky, you continue to equate the ratios of gains, i.e. you pick the point on the Pareto frontier that lies on the line between the disagreement point and the vector of ideal utilities.
Agents could be given more bargaining power by giving them different exponents in the Nash product.
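A rough numeric sketch of both solutions for two agents, with a made-up feasible set (the positive quadrant of the unit disk), disagreement point (0, 0), and hypothetical credences of 0.7 and 0.3 as exponents in the Nash product:

```python
import numpy as np

# Rough numeric sketch: feasible utilities {(x, y) : x^2 + y^2 <= 1, x, y >= 0},
# disagreement point d = (0, 0), credences 0.7 / 0.3 used as bargaining weights.
# All numbers are made up for illustration.

d = np.array([0.0, 0.0])
weights = np.array([0.7, 0.3])          # hypothetical credences as exponents

theta = np.linspace(1e-3, np.pi / 2 - 1e-3, 10_000)
frontier = np.column_stack([np.cos(theta), np.sin(theta)])   # Pareto frontier

# Weighted Nash bargaining: maximize the weighted product of gains over d.
gains = frontier - d
nash_idx = np.argmax(weights[0] * np.log(gains[:, 0]) + weights[1] * np.log(gains[:, 1]))
print("weighted Nash point:", frontier[nash_idx])

# Kalai-Smorodinsky: point on the frontier where gains are proportional to the
# ideal (maximum attainable) gains, i.e. on the line from d to the ideal point.
ideal = frontier.max(axis=0)
ratio = gains / (ideal - d)
ks_idx = np.argmin(np.abs(ratio[:, 0] - ratio[:, 1]))
print("Kalai-Smorodinsky point:", frontier[ks_idx])
```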
Alright, a credence-weighted randomization between ideals and then bargaining on equal footing from there makes sense. I was imagining the parliament starting from scratch.
Another alternative would be to use a hypothetical disagreement point corresponding to the worst utility for each theory and give higher credence theories more bargaining power. Or give them more bargaining power starting from a typical person's life as the disagreement point (the outcome can't be worse for any theory than a policy of being kind to your family, giving to socially-motivated causes, cheating on your taxes a little, telling white lies, and not murdering).
I agree that some cardinal information needs to enter in the model to generate compromise. The question is whether we can map all theories onto the same utility scale or whether each agent gets their own scale. If we put everything on the same scale, it looks like we're doing meta-utilitarianism. If each agent gets their own scale, compromise still makes sense without meta-value judgments.
Two outcomes is too degenerate if agents get their own scales, so suppose A, B, and C were options, theory 1 has ordinal preferences B > C > A, and theory 2 has pr...
My reading of the problem is that a satisfactory Parliamentary Model should:
Since bargaining in good faith appears to be the core feature, my mind immediately goes to models of bargaining under complete information rather than voting. What are the pros and cons of start...
Metafilter has a classic thread on "What book is the best introduction to your field?". There are multiple recommendations there for both law and biology.
Since Arrow and GS are equivalent, it's not surprising to see intermediate versions. Thanks for pointing that one out. I still stand by the statement for the common formulation of the theorem. We're hitting the fuzzy lines between what counts as an alternate formulation of the same theorem, a corollary, or a distinct theorem.
Arrow's theorem doesn't apply to rating systems like approval or range voting. However, Gibbard-Satterthwaite still holds. If anything, it holds more strongly, since agents have more ways to lie: now you have to worry about someone saying their favorite is ten times better than their second favorite rather than just three times better, in addition to lying about the order.
See pg. 391-392 of "The Role of Deliberate Practice in the Acquisition of Expert Performance", the paper that kicked off the field. A better summary is that 2-4 hours is the maximum sustainable amount of deliberate practice in a day.
Typo fixed now. Jill's payment should be p_Jill = 300 - p_Jack.
The second-best direct mechanisms do bite the bullet and assume agents would optimally manipulate themselves if the mechanism didn't do it for them. The "bid and split excess" mechanism I mention at the very end could be better if people are occasionally honest.
I'm now curious what's possible if agents have some known probability of ignoring incentives and being unconditionally helpful. It'd be fairly easy to calculate the potential welfare gain by adding a flag to the agent's type ...
That's an indexed Cartesian product, analogous to sigma notation for indexed summation; the product of the individual type sets is the set of all vectors of agent types.
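In code, that's just itertools.product over the per-agent type sets (the type sets below are toy examples):

```python
from itertools import product

# Toy type sets for three agents (made-up values).
type_sets = [{"low", "high"}, {"low", "high"}, {"left", "right"}]

# The indexed Cartesian product: every possible vector of agent types.
type_profiles = list(product(*type_sets))
print(len(type_profiles))   # 2 * 2 * 2 = 8
```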
Aside from academic economists and computer scientists? :D Auction design has been a big success story, enough so that microeconomic theorists like Hal Varian and Preston McAfee now work at Google full time. Microsoft and other tech companies also have research staff working specifically on mechanism design.
As far as people that should have some awareness (whether they do or not): anyone implementing an online reputation system, anyone allocating resources (like a university allocating courses to students or the US Army allocating ROTC graduates to its branches), or anyone designing government regulation.
Some exposure to game theory. Otherwise, tolerance of formulas and a little bit of calculus for optimization.
At least, I hope that's the case. I've been teaching this to economics grad students for the past few years, so I know common points of misunderstanding, but can easily take some jargon for granted. Please call me out on anything that is unclear.
Music randomizes emotion and mindstate.
Wait, where did "randomizes" come from? The study you link and the standard view both say that music can induce specific emotions. The point of the study is that emotions induced by music can carry over into other areas, which suggests we might optimize when we use specific types of music. The study you link about music and accidents also suggests specific music decreased risks.
All the papers I'm immediately seeing on Google Scholar suggest there is no association between background music and studying effect...
Hmm... Atlas Shrugged does have (ostensible) paragons. Rand's idea of Romanticism as portraying "the world as it should be" seems to match up: "What Romantic art offers is not moral rules, not an explicit didactic message, but the image of a moral person—i.e., the concretized abstraction of a moral ideal." (source) Rand's antagonists do tend to be all flaws and no virtues though.
I'm also somewhat confused by this. I love HPMoR and actively recommend it to friends, but to the extent Eliezer's April Fools' confession can be taken literally, characterizing it as "you-don't-have-a-word genre" and coming from "an entirely different literary tradition" seems a stretch.
Some hypotheses:
One more hypothesis after reading other comments:
HPMoR is a new genre where every major character either has no character flaws or is capable of rapid growth. In other words, the diametric opposite of Hamlet, Anna Karenina, or The Corrections. Rather than "rationalist fiction", a better term would be "paragon fiction". Characters have rich and conflicting motives so life isn't a walk in the park despite their strengths. Still, everyone acts completely unrealistically relative to life-as-we-know-it by never doing something dumb or agains...
Exactly. No need to dig tunnels when it makes substantially more sense to build platforms over existing roads. This also means cities can expand or rezone more flexibly since you can just build standard roads like now and then add bridges or full platforms when pedestrians enter the mix. Rain, snow, and deer don't require more than a simple aluminum structure.
What do you mean by applying Kelly to the LMSR?
Since relying on Kelly is equivalent to maximizing log utility of wealth, I'd initially guess there is some equivalence between a group of risk-neutral agents trading via the LMSR and a group of Kelly agents with equal wealth trading directly. I haven't seen anything around in the literature though.
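To illustrate just the first clause there (Kelly betting as maximizing expected log wealth), here's a small numeric check with made-up numbers; it doesn't address the LMSR question itself:

```python
import numpy as np

# Small numeric check that maximizing expected log wealth recovers the
# Kelly fraction for a simple binary bet.  Numbers are made up.

p, b = 0.6, 1.0          # win probability and net odds (even-money bet)

def expected_log_wealth(f):
    return p * np.log(1 + f * b) + (1 - p) * np.log(1 - f)

fractions = np.linspace(0.0, 0.99, 10_000)
best = fractions[np.argmax([expected_log_wealth(f) for f in fractions])]

kelly = p - (1 - p) / b   # closed-form Kelly fraction for this bet
print(best, kelly)        # both approximately 0.2
```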
"Learning Performance of Prediction Markets with Kelly Bettors" looks at the performance of double auction markets with Kelly agents, but doesn't make any reference to Hanson even though I know Pennock is...
Hidden Order by David Friedman is a popular book, but is semi-technical enough that it could serve as a textbook for an intro microeconomics course.
What are a few more structured approaches that could substantially improve matters? Some improvements can definitely be made, but I disagree that outcomes are much worse. Two studies suggest marriage markets are about 20% off the optimal match (Suen and Li (1999), "A direct test of the efficient marriage market hypothesis", based on Hong Kong data, and Cao et al (2010), "Optimizing the marriage market", based on Swiss data). While 20% is not trivial, it's not a major failure.
If there are major improvements to be had, I expect it to come...
Scarce signals do increase willingness to go on dates, based on a field experiment of online dating in South Korea.
Thanks for the SA paper!
The parameter space is only two dimensional here, so it's not hard to eyeball roughly where the minimum is if I sample enough. I can say very little about the noise. I'm more interested in being able to approximate the optimum quickly (since simulation time adds up) than in hitting it exactly. The approach taken in this paper based on a non-parametric tau test looks interesting.
The parameter space in this current problem is only two dimensional, so I can eyeball a plausible region, sample at a higher rate there, and iterate by hand. In another project, I had something with a very high-dimensional parameter space, so I figured it's time I learn more about these techniques.
Any resources you can recommend on this topic then? Is there a list of common shortcuts anywhere?
Not really. In this particular case, I'm minimizing how long it takes a simulation to reach one state, so the distribution ends up looking lognormal- or Poisson-ish.
Edit: Seeing your added question, I don't need an efficient estimator in the usual sense per se. This is more about how to search the parameter space in a reasonable way to find where the minimum is, despite the noise.
Does anyone have advice on how to optimize the expectation of a noisy function? The naive approach I've used is to sample the function for a given parameter a decent number of times, average those together, and hope the result is close enough to stand in for the true objective function. This seems really wasteful though.
Most of the algorithms I'm coming across (like modelling the objective function with Gaussian process regression) would be useful, but are more high-powered than I need. Any simple techniques better than the naive approach? Any recommendations among sophisticated approaches?
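For concreteness, here's the naive sample-and-average approach on a made-up noisy objective (the function, noise level, grid, and sample count are all stand-ins):

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_objective(x):
    """Stand-in for one simulation run: true minimum at x = 2, plus noise."""
    return (x - 2.0) ** 2 + rng.normal(scale=1.0)

# Naive approach: average repeated samples at each grid point and take the
# argmin of the averages.
grid = np.linspace(0, 4, 41)
n_samples = 50
estimates = [np.mean([noisy_objective(x) for _ in range(n_samples)]) for x in grid]
print("estimated minimizer:", grid[int(np.argmin(estimates))])
```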
Your description of incomplete information is off. What you give as the definition of incomplete information is one type of imperfect information, where nature is added as a player.
A game has incomplete information when one player has more information than another about payoffs. Since Harsanyi, incomplete information has been seen as a special case of imperfect information with nature randomly assigning types to each player according to a commonly known distribution and payoffs given types being commonly known.
The first option is standard. When the second interpretation comes up, those strategies are referred to as behavior strategies.
If every information set is visited at most once in the course of play, then the game satisfies no-absent-mindedness and every behavior strategy can be represented as a standard mixed strategy (but some mixed strategies don't have equivalent behavior strategies).
Kuhn's theorem says the game has perfect recall (roughly players never forget anything and there is a clear progression of time) if and only if mixed and behavior strategies are equivalent.
Haidt's claim is that liberals rely on purity/sacredness relatively less often, but it's still there. Some of the earlier work on the purity axis put heavy emphasis on sex or sin. Since then, Haidt has acknowledged that the difference between liberals and conservatives might even out if you add food or environmental concerns to purity.
Yeah, environmentalist attitudes towards e.g. GMOs and nuclear power look awfully purity-minded to me. I'm not sure whether I want to count environmentalism/Green thought as part of the mainline Left, though; it's certainly not central to it, and seems to be its own thing in a lot of ways.
(Cladistically speaking it's definitely not. But cladistics can get you in trouble when you're looking at political movements.)
Haidt acknowledges that liberals feel disgust at racism and that this falls under purity/sacredness (explicitly listing it in a somewhat older article on Table 1, pg 59). His claim is that liberals rely on the purity/sacredness scale relatively less often, not that they never engage it. Still, in your example, I'd expect the typical reaction to be anger at a fairness violation rather than disgust.
My guess is the person most likely to defend this criterion is a Popperian of some flavor, since precise explanations (as you define them) can be cleanly falsified.
While it's nice when something is cleanly falsified, it's not clear we should actively strive for precision in our explanations. An explanation that says all observations are equally likely is hard to disprove and hence hard to gather evidence for by conservation of evidence, but that doesn't mean we should give it an extra penalty.
If all explanations have equal prior probability, then Bayesian...
I've used 1-2mg of nicotine (via gum) a few times a month for a couple years. I previously used it a few times a week for a few months before getting a methylphenidate prescription for ADD. There hasn't been any noticeable dependency, but I haven't had that with other drugs either.
Using it, I feel more focused and more confident, in contrast to caffeine, which tends to just leave me jittery, and methylphenidate, which is better for focus but doesn't have the slight positive emotion boost. Delivered via gum, the half-life is short (an hour at most). That's...
If there is a net positive externality, then even large private benefits aren't enough. That's the whole point of the externality concept.