Lumifer comments on The Triumph of Humanity Chart - Less Wrong
Per Google/the World Bank, "Extreme poverty is defined as average daily consumption of $1.25 or less and means living on the edge of subsistence."
I would assume (but don't know) that the value is reasonably well calibrated, and it seems absolute enough.
At worst, it's still probably a decent proxy for the number of people living near absolute subsistence level. It's certainly more useful than the much more relative poverty measures generally used, which are often little more than restatements of the Gini coefficient - that is, measurements of inequality rather than actual material need.
Right. So that gets me curious about how they estimated the percentage of people living in "extreme poverty" in, say, 1850 China, and what the error bars on that estimate are.
Speaking qualitatively, if we take the "living on the edge of subsistence" meaning, the charts say that around 90% of the human population lived "on the edge of subsistence" in the mid-XIX century. Is that so? I am not sure it matches my intuition well. Even if we look at Asia, at the peasantry of Russia and China, say, these people weren't well-off, but I have doubts about the "edge of subsistence" applying to all of them. Of course, a great deal of their economy was non-trade and local, which makes estimating their consumption in something like 2009 US dollars... difficult.
From the LW slack: http://www.measuringworth.com/
That site isn't going to help me with XIX century China.
I understand interest rates, and inflation, and purchasing power parity, and all that. That all works fine for more or less developed economies where people buy with money the great majority of what they consume.
The charts posted claim to reflect the entire world and they go back to the early XIX century. Whole-world data at that point is nothing but a collection of guesstimates.
Yeah. My understanding is you basically get a bunch of economists in the room to break down the problem into relevant parts, then get a bunch of historians in the room, calibrate them, get them to give credible intervals for the relevant data, and plug it all in to the model.
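The "plug it all in to the model" step could be sketched like this: take an expert's credible interval, fit a distribution to it, and propagate the uncertainty by sampling. A minimal sketch with made-up numbers (the interval, the normal-fit choice, and the variable names are all my assumptions, not how any actual study was done):

```python
import random
import statistics

# Hypothetical 90% credible interval from a historian for, say, the
# extreme-poverty rate in 1850 China. These numbers are invented.
low, high = 0.55, 0.95

# Fit a normal distribution whose 5th/95th percentiles match the interval
# (1.645 is the z-score of the 95th percentile of a standard normal).
mu = (low + high) / 2
sigma = (high - low) / (2 * 1.645)

# Propagate the uncertainty by sampling, clamping to the [0, 1] range.
samples = [min(max(random.gauss(mu, sigma), 0.0), 1.0) for _ in range(100_000)]
print(round(statistics.mean(samples), 2))   # point estimate
print(round(statistics.stdev(samples), 2))  # rough "error bar"
```

With several such inputs you would sample each one and combine them inside the model, so the final chart carries the historians' stated uncertainty rather than a bare point estimate.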
Is this how you think it works or is this how you think it should work?
In particular, I am curious about the "calibrating historians" part. You're going to calibrate experts against what?
It's how I think it works.
Known historical data (which they don't know).
The problem is that you want to use the best experts you have. If you try to calibrate them in their own field, they know it (and might have written the textbook you're calibrating them against); and if you try to calibrate them in a field they haven't studied, I'm not sure that's relevant to the quality of their work in their own.
As to "how it works", I'm pretty sure no one is actually trying to calibrate historians. I suspect the process actually works by looking up published papers and grabbing the estimates from them without any further thought -- at best. At worst you have numbers invented out of thin air, straight extrapolation of available curves, etc. etc.
Resolution and calibration are separate. They may have lower resolution in other fields but they shouldn't have lower calibration.
Edit: I thought more about the previous comment, and it's not true. One thing they talk about in Superforecasting is that people tend to be overconfident in their own fields while better calibrated in others.
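The calibration/resolution distinction can be made concrete: calibration asks whether statements made with X% confidence come true about X% of the time, regardless of domain. A toy check with invented answer data (the numbers are purely illustrative):

```python
from collections import defaultdict

# Made-up (stated_confidence, was_correct) pairs for one expert.
answers = [
    (0.9, True), (0.9, True), (0.9, False), (0.9, True), (0.9, True),
    (0.6, True), (0.6, False), (0.6, True), (0.6, False), (0.6, False),
]

# Group by stated confidence and compare to the observed hit rate:
# a well-calibrated expert's 90% answers are right ~90% of the time.
by_conf = defaultdict(list)
for conf, correct in answers:
    by_conf[conf].append(correct)

for conf in sorted(by_conf):
    hits = by_conf[conf]
    print(f"stated {conf:.0%} -> observed {sum(hits) / len(hits):.0%}")
```

Resolution, by contrast, is about how often the expert can confidently pick out the right answer at all, which is where domain knowledge matters.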
You're thinking about this in terms of forecasting. This is not forecasting, this is historical studies.
Consider the hard sciences equivalent: you take, say, some geneticists and try to figure out whether their estimates of which genes cause what are any good by asking them questions about quantum physics to "check how they are calibrated".