
Comment author: gjm 29 April 2014 09:59:05PM 1 point [-]

The equilibrium probability might not be well defined. (E.g., if for whatever reason you form a sufficiently firm intention to falsify whatever the oracle tells you.)

And yes, if the oracle tells you something about your own future actions -- which it has to, to give you an equilibrium probability -- it's unsurprising that you're going to feel a loss of freedom. Either that, or disbelieve the oracle.

Comment author: MichaelBishop 30 April 2014 01:16:23PM 0 points [-]

What does it mean for a probability not to be well defined in this context? I mean, I think I share the intuition, but I'm not really comfortable with it either. Doesn't it seem strange that a probability could be well defined until I start learning more about it and trying to change it? How little do I have to care about the probability before it becomes well defined again?

Comment author: gjm 29 April 2014 04:54:06PM 2 points [-]

Part of my confusion is that knowing the probability I will lose my job seems certain to affect the probability that I lose my job.

Yes, and this may make the question you ask the oracle ill-posed. But you can avoid this while still making the oracle about as useful: it will tell you the probability that you lose your job in the absence of a specific response to what the oracle tells you.

Alternatively, to reduce those feedback effects we could adjust the question to reduce your influence over the thing you're being given information about. So, suppose you know that your job performance is good, and appreciated by your employer, and have no reason to think that's likely to change, but your job is at risk for reasons that have nothing to do with your performance: you're at a startup that might fail to find a good enough market, or a hedge fund that's taking risks that might wipe it out, or you're a political representative for a party that may be swept out of power on account of decisions taken by people other than you. If you knew everything relevant about the world you'd see that the probability of such a failure is either 0 or 50%, but in fact you have no idea, and the oracle will tell you which.

In either case we get back to something nearer to a pure value-of-information problem.

So, taking the "exogenous failure" version of the second approach, my answer to your question is something like this: If I lose my job with no warning, I guess it might take three months to find another comparably good job; if I have plenty of warning, I can line something up faster. I might pay the equivalent of ~ 1 month's take-home pay for the information. But this is still an answer based on the possibility of making a bad prediction not come to pass after all. If all I get is some advance warning that I'm going to lose my job without warning (this is reminding me of the paradox of the unexpected hanging...) then it's less useful; let's say ~ 2 weeks' pay. Note that these figures would all increase, perhaps by a lot, if my estimate of my re-employability were lower.
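
For what it's worth, a tiny back-of-envelope sketch of those figures (my own framing, not part of the original comment; the three-month search time comes from the estimate above, while the one-month gap with warning and the 50% chance of bad news are assumptions made up for illustration):

    # Toy expected value of getting advance warning about an exogenous job loss.
    # Assumed numbers (not from the thread): warning shortens the unemployed gap
    # from ~3 months to ~1 month, and the warning only pays off in the (assumed
    # 50%) worlds where the job is actually lost.
    p_job_loss = 0.5
    months_without_warning = 3
    months_with_warning = 1
    expected_saving = p_job_loss * (months_without_warning - months_with_warning)
    print(expected_saving)  # 1.0 month of pay, the same ballpark as the ~1 month figure above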

it appears I'm getting an offer to reduce the actual risk

Yes, I don't think this is a VoI problem as posed. But again we can make it one by modifying it. You have an estimate: your earnings over the next 10 years will be normally distributed with mean M and standard deviation S. The oracle will, in exchange for your payment, give you a new value of M (about which you are currently quite uncertain) along with a new smaller value of S. Your present uncertainty about the new M corresponds to the reduction in S.
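
One precise way to cash out that last sentence (my own framing, not gjm's, with illustrative values: S is taken as three years' worth of income, as in the calculation below, and the oracle is assumed to halve it) is the law of total variance: the prior variance splits into the variance the oracle leaves you with plus the variance of the answer it might give.

    import math

    # Prior: 10-year earnings ~ Normal(M, S). The oracle replaces this with
    # Normal(M_new, S_new), where S_new < S. By the law of total variance,
    #   S**2 = S_new**2 + Var(M_new),
    # so your current uncertainty about the oracle's answer M_new has
    # standard deviation sqrt(S**2 - S_new**2).
    S = 3.0      # prior std dev, in years of income (illustrative value)
    S_new = 1.5  # assumed post-oracle std dev (the "new smaller value of S")

    spread_of_M_new = math.sqrt(S**2 - S_new**2)
    print(round(spread_of_M_new, 2))  # 2.6 years of income: most of the prior spread is uncertainty about M_new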

Unfortunately you still have the problem from the first thought experiment, which I propose remedying in the same way: either the oracle gives you a prediction conditional on your acting as you would have without her help (so now if the income figure is depressingly low, that suggests you aren't going to get the promotion you hoped for and you should consider looking for another job elsewhere (etc.) instead), or else you are for some reason unable to do anything to make bad predictions not come true.

Let me try to answer this question too, now it's been made more answerable. Here's a simplified version of the first of those options: before asking the oracle I predict income M-S or M+S with equal probability (std dev is S). The oracle gives me better probabilities so as to halve the standard deviation, which means 93.3% for one and 6.7% for the other. On the occasions when it gives me a "bad" prediction (it says M-S with probability 93.3%) I switch to plan B, which (optimistically) is about as good a priori as what I was previously intending to do, which means it restores the probabilities to 50%. So (my mean - M) has gone from zero to 1/2 × (0.933 S - 0.067 S) + 1/2 × 0 = 0.433 S. In practice my plan B is probably worse a priori than my plan A, and I suspect other simplifications I've made have also made the oracle's information more valuable, so the right figure is probably somewhat less than 0.433 S (note: S here is our "three years' worth of income"). My gut feeling is that it's quite a lot less, e.g. because when the oracle gives you bad news you don't know which aspects of your current plans are responsible for it. The right answer might be more like 0.1 S, or ~ 4 months' income.
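
A quick numerical check of that arithmetic (my own sketch, not part of the original comment; it just reproduces the figures above under the stated simplifications):

    import math

    # Prior: income M - S or M + S with equal probability, so the std dev is S (take S = 1).
    S = 1.0

    # The oracle halves the std dev of this two-point distribution:
    #   2 * sqrt(p * (1 - p)) * S = S / 2   =>   p * (1 - p) = 1/16
    p = (1 + math.sqrt(1 - 4 * (1 / 16))) / 2
    print(round(p, 3), round(1 - p, 3))  # 0.933 0.067

    # Good news (half the time): keep plan A; the mean shifts up by (p - (1 - p)) * S.
    # Bad news (half the time): switch to plan B, which restores 50/50, so the shift is 0.
    gain_over_M = 0.5 * (2 * p - 1) * S + 0.5 * 0.0
    print(round(gain_over_M, 3))  # 0.433, i.e. the 0.433 S above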

In the second version (where I'm somehow prohibited from doing anything to fix the problem, if the oracle gives a low estimate of my future earnings), again the value of the information is obviously lower. I suppose it would be useful information for pension planning. As with the first question, I'm handwavily going to estimate that the benefit is half as much in this case, so 2 months' income.

I should add that these figures for the second problem still feel rather high to me. If an oracle actually offered me that information, I am not at all sure I'd feel willing to pay even two months' income for it.

Comment author: MichaelBishop 29 April 2014 09:22:43PM 0 points [-]

+1 and many thanks for wading into this with me... I've been working all day and I'm still at work so can't necessarily respond in full...

I agree that these problems are a lot simpler if reducing my uncertainty about X cannot help me affect X. This is not a minor class of problems. I'd love to have better information for a lot of problems in this class. That said, many of the problems that it seems most worthwhile for me to spend my time and money reducing my uncertainty about are of the type where I have a non-trivial role in how they play out. Assuming I do have some causal power over X, I think I'd pay a lot more to know the "equilibrium" probability of X after I've digested the information the oracle gave me - anything else seems like stale information... but learning that equilibrium probability seems weird as well. If I'm surprised by what the oracle says, then I imagine I'd ask myself questions like: how am I likely to react in regard to this information... what was the probability before I knew this information such that the current probability is what it is... It feels like I'm losing freedom... to what extent is the experience of uncertainty tied to the experience of freedom?

Comment author: gwern 05 May 2012 04:32:22PM 2 points [-]

Comment author: MichaelBishop 05 May 2012 08:18:22PM 1 point [-]

hmmm, I guess I missed that. Should I remove this post?

Comment author: MichaelBishop 29 March 2012 03:04:44PM 0 points [-]

Economist Jeff Ely recently blogged an interesting example of a slippery slope. http://cheaptalk.org/2012/03/27/the-slippery-slope/

Comment author: gwern 28 March 2012 05:10:56PM *  6 points [-]

Surely more productive industrial researchers are generally paid more. Many firms even give explicit bonuses on a per patent basis.

Yes, but the bonuses I've heard of are in the hundreds to thousands of dollars range, at companies committed to patenting like IBM. This isn't going to make a big difference to lifetime incomes, where the range is 1-3 million dollars, although the data may be rich enough to spot these effects (and how many patents is even '4x'? 4 patents on average per person?); and I suspect these bonuses come at the expense of salaries & benefits. (I know that's how I'd regard it as a manager: shifting risk from the company to the employee.)

And I think you're forgetting that income did increase with each standard deviation, by an amount somewhat comparable to my suggested numbers for patents, so we're not explaining why IQ did not increase income at all, but why it increased it relatively little: why the patenters apparently captured relatively little of the value.

Comment author: MichaelBishop 28 March 2012 05:38:57PM 2 points [-]

Whoa, I did allow myself to misread/misremember your initial comment a bit, so I'll dial it back slightly. The fact that even at the highest levels IQ is still positively correlated with income is important, and it's what I would have expected, so the overall story does not undermine my support for the hypothesis that at the highest IQ levels, higher-IQ individuals produce more positive externalities. I apologize for getting a bit sloppy there.

I would guess that if you had data from people with the same job description at the same company, the correlations between IQ, patents, and income would be even higher.

Comment author: NancyLebovitz 28 March 2012 04:36:01PM 0 points [-]

I've noticed that when someone online strikes me as civilized and intelligent, the odds seem rather high that within months I'll see them writing about having an ongoing problem with depression.

This doesn't mean that everyone I like (online or off) is depressed, but it seems like a lot. The thing is, I don't know whether the proportion is high compared to the general population, or whether depression and intelligence are correlated. (Some people have suggested this as an explanation for what I think I've noticed.)

I wonder whether there's a correlation between depression and being conflict averse.

Comment author: MichaelBishop 28 March 2012 05:07:13PM 0 points [-]

I wonder whether there's a correlation between depression and being conflict averse.

I would guess that there is, and I'm sure there has been at least some academic study of it. This doesn't really address the issue, but it's related.

I also think that keeping a blog or writing in odd corners of the internet may be associated with, possibly even caused by, depression.

Comment author: gwern 28 March 2012 04:11:27PM 4 points [-]

If they were contributions to open-source projects, that would be one thing.

Open-source contribution is even more gameable than patents: at least with patents there's a human involved, checking to some degree that there is at least a little new stuff in the patent, while no one and nothing stops you from putting a worthless repo up on GitHub reinventing wheels poorly.

But people doing work that generates patents which don't lead to higher income - that raises some questions for me.

The usual arrangement with, say, industrial researchers is that their employers receive the unpredictable dividends from the patents in exchange for forking over regular salaries in fallow periods...

Is it possible that extremely high IQ is associated with a tendency to become "addicted" to a game like patenting?

I don't see why you would privilege this hypothesis.

Comment author: MichaelBishop 28 March 2012 05:00:28PM *  1 point [-]

Let me put it this way. Before considering the Terman data on patents you presented, I already thought IQ would be positively correlated with producing positive externalities, and that there was a mostly one-way causal link from the former to the latter. I expected the correlation between patents and IQ. What was new to me was the lack of correlation between IQ and income, and the lack of correlation between patents and income. Correction added: there was actually a fairly strong correlation between IQ and income, just not between income and patents (conditional on IQ, I think). Surely more productive industrial researchers are generally paid more. Many firms even give explicit bonuses on a per-patent basis. So for me, given my priors, the Terman data you presented shifts me slightly against (correction: does not shift me for or against) the hypothesis that at the highest IQ levels, higher IQ continues to be associated with producing more positive externalities. Still, I think increasing people's IQ, even the already gifted, probably has strong positive externalities unless the method for increasing it also has surprising (to me) side-effects.

I agree that measuring open-source contributions requires more than merely counting lines of code written. But I did want to highlight the fact that the patent system is explicitly designed to increase the private returns for a given innovation. I don't think there is a strong correlation between the companies/industries which are patenting the most and the companies/industries which are benefiting the world the most.

Comment author: MichaelBishop 28 March 2012 04:11:14PM *  3 points [-]

It seems someone should link up "Why and How to Debate Charitably." I can't find a copy of the original because the author has taken it down. Here is a discussion of it on LW. Here are my bulleted summary quotes. ADDED: Original essay. I've just learned, and am very saddened to hear, that the author, Chris, committed suicide some time ago.

Comment author: gwern 03 September 2011 06:46:29PM *  10 points [-]

This is related, but not the research talked about. The Terman Project apparently found that the very highest IQ cohort had many more patents than the lower cohorts, but this did not show up as massively increased lifetime income.

Compare the bottom right IQ graph with SMPY results which show the impact of ability (SAT-M measured before age 13) on publication and patent rates. Ability in the SMPY graph varies between 99th and 99.99th percentile in quintiles Q1-Q5. The variation in IQ between the bottom and top deciles of the Terman study covers a similar range. The Terman super-smarties (i.e., +4 SD) only earned slightly more (say, 15-20% over a lifetime) than the ordinary smarties (i.e., +2.5 SD), but the probability of earning a patent (SMPY) went up by about 4x over the corresponding ability range.

http://infoproc.blogspot.com/2011/04/earnings-effects-of-personality.html

Unless we want to assume those 4x extra patents were extremely worthless, or that the less smart groups were generating positive externalities through some other mechanism, this would seem to imply that the smartest were not capturing anywhere near the value they were creating, and hence were generating significant positive externalities.
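
A trivial sketch of the comparison being gestured at (my own illustration, not gwern's; it leans on the assumption, questioned elsewhere in this thread, that patent counts roughly track value created):

    # Compare how fast the output proxy (patents) grows versus earnings over the same
    # ability range, using the rough figures quoted above (15-20% more lifetime income,
    # ~4x the patent rate).
    income_ratio = 1.175  # midpoint of the "15-20% more" lifetime-income figure
    patent_ratio = 4.0    # ~4x the patents over the corresponding ability range
    print(round(patent_ratio / income_ratio, 1))  # ~3.4: output grows far faster than pay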

EDIT: Jones 2011 argues much the same thing: economic returns to IQ are so low because so much of the value is being lost to positive externalities.

Comment author: MichaelBishop 28 March 2012 03:44:11PM *  1 point [-]

On its own, I don't consider this strong evidence for the greater productivity of the IQ elite. If they were contributions to open-source projects, that would be one thing. But people doing work that generates patents which don't lead to higher income: that raises some questions for me. Is it possible that extremely high IQ is associated with a tendency to become "addicted" to a game like patenting? Added: I think Gwern and I agree more than many people might think from reading this comment.

Comment author: MarkusRamikin 21 December 2011 10:03:01AM 4 points [-]

Excuse me, what? The community, as in LW? We're a cryonics advocacy group now?

Comment author: MichaelBishop 28 March 2012 02:50:35PM 0 points [-]

I used cryonics as an example because komponisto used it before me. I intended my question to be more general: "If you're trying to market LW, or ideas commonly discussed here, then which celebrities and opinion-leaders should you focus on?"
