
Comment author: jsteinhardt 13 July 2014 11:29:38PM *  8 points [-]

Rather, the problem is that at least one celebrated authority in the field hates that, and would prefer much, much more deference to authority.

I don't think this is true at all. His points about the limits of replication are valid and match my experience as a researcher. In particular:

Because experiments can be undermined by a vast number of practical mistakes, the likeliest explanation for any failed replication will always be that the replicator bungled something along the way.

This is a very real issue, and if we want to solve the current problems with science we need to be honest about it, rather than close our eyes and repeat the mantra that replication will solve everything. And it's not like he's arguing against accountability. Even in the passage you quoted, he says:

The field of social psychology can be improved, but not by the publication of negative findings. Experimenters should be encouraged to restrict their “degrees of freedom,” for example, by specifying designs in advance.

Now, I think he goes too far in saying that no negative findings should be published, but I do think they need to be held to a high standard, for the very reason he gives. Positive findings, for that matter, should also be held to a higher standard.

Note that there are people much wiser than I am (such as Andrew Gelman) who disagree with me; Gelman is dissatisfied with the current presumption that published research is correct. I certainly agree with that, but for the same reasons Mitchell gives, I don't think that merely publishing negative results can fix the issue.

Either way, I think you are being quite uncharitable to Mitchell.

Comment author: CarlShulman 14 July 2014 06:21:51PM 4 points [-]

Because experiments can be undermined by a vast number of practical mistakes, the likeliest explanation for any failed replication will always be that the replicator bungled something along the way

Do you agree with the empirical claim about the frequencies of false positives in initial studies versus false negatives in replications?

Comment author: James_Miller 08 May 2014 01:17:50AM *  1 point [-]

String theorist Luboš Motl strongly disagrees with the analysis, writing "Sean Carroll has no clue about physics and is helping to bury the good name of 2 graduate students".

Comment author: CarlShulman 08 May 2014 05:02:33PM 11 points [-]

Scott Aaronson on Motl's reliability, or lack thereof, with details of a specific case.

Comment author: RichardKennaway 14 March 2014 10:21:22PM 5 points [-]

Mere money doesn't solve their problem: they can offer tons of money towards random candidates, but not to the ones which are visibly/reliably talented (which are a small subset of the talented).

A way around that might be to make it known that big salaries are available, but not up front, only by proven merit after being given a job. Does this already happen?

Comment author: CarlShulman 15 March 2014 11:29:38PM 8 points [-]

This actually seems very common in office jobs where you find many workers with million-dollar salaries. Wall Street firms, strategy consultancies, and law firms all use models in which salaries expand massively with time, with high attrition along the way: the "up-or-out" model.

Even academia gives tenured positions (which have enormous value to workers) only after trial periods as postdocs and assistant professors.

Main Street corporate executives have to climb the ranks.

Comment author: CarlShulman 14 March 2014 07:23:04AM *  8 points [-]

Moral pluralism or uncertainty might give a reason to construct a charity portfolio which serves multiple values, as might emerge from something like the parliamentary model.

Comment author: Daniel_Burfoot 01 April 2013 01:12:15PM 41 points [-]

Robin used a Dirty Math Trick that works on us because we're not used to dealing with large numbers. He used a large time scale of 12000 years and assumed exponential growth in wealth at a reasonable rate over that period. But then, to discount the value of the wealth for the chance that the intended recipients might not actually receive it, he used a relatively small linear factor of 1/1000, which seems to have been pulled out of a hat.

It would make more sense to assume that there is some probability every year that the accumulated wealth will be wiped out by civil war, communist takeover, nuclear holocaust, and so on. Even if this yearly probability were small, applied over a long period of time it would still counteract the exponential blowup in the value of the wealth. The resulting conclusion is totally dependent on the assumed probability of calamity: with a 0.01% annual chance of total loss, you have about a 30% chance of coming out with the big sum mentioned in the article; with a 1% annual chance, the likelihood of making it to 12000 years with the money intact is about 4e-53.
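As a quick check of those two figures, here is a minimal sketch (the 0.01% and 1% annual loss rates are the hypothetical numbers from the paragraph above, not estimates):

```python
# Probability that the wealth survives 12000 years under a constant annual
# chance of total loss (civil war, expropriation, nuclear holocaust, etc.).
def survival_probability(annual_loss_rate, years=12000):
    return (1 - annual_loss_rate) ** years

print(survival_probability(0.0001))  # ~0.30: about a 30% chance of survival
print(survival_probability(0.01))    # ~3.9e-53: effectively zero
```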

Comment author: CarlShulman 02 March 2014 05:45:51AM 0 points [-]

As I said in response to Gwern's comment, there is uncertainty over rates of expropriation/loss, and the expected value disproportionately comes from the possibility of low loss rates. That is why Robin talks about 1/1000: he's raising the possibility that the legal order will be such as to sustain great growth, and that the laws of physics will allow unreasonably large populations or wealth.

Now, it is still a pretty questionable comparison, because there are plenty of other possibilities for mega-influence, like changing the probability that such compounding can take place (and isn't pre-empted by expropriation, nuclear war, etc).

Comment author: gwern 01 April 2013 05:51:30PM *  46 points [-]

But I didn't bite any of the counterarguments to the extent that it would be necessary to counter the 10^100.

I don't think this is very hard if you actually look at examples of long-term investment. Background: http://www.gwern.net/The%20Narrowing%20Circle#ancestors and especially http://www.gwern.net/The%20Narrowing%20Circle#islamic-waqfs

First things first:

Businesses and organizations suffer extremely high mortality rates; one estimate puts it at a 99% chance of mortality per century. (This ignores existential risks and lucky aversions like nuclear warfare, and so is an underestimate of the true risks.) So the chance of any perpetuity surviving the full 12000 years is 0.01^120 ≈ 1e-240. That's a good chunk of the reason not to bother with long-term trusts right there! We can confirm this empirically by observing that there were what must have been many scores of thousands of waqfs - perpetual charities - in the Islamic world, and very few survived or saw their endowments grow. (I have pointed Hanson at waqfs repeatedly, but he has yet to blog on the topic.) Similarly, despite the countless temples, hospitals, homes, and institutions with endowments in the Greco-Roman world just 1900 years ago or so - less than a sixth of the time period in question - we know of zero surviving institutions, all of them having fallen into decay/disuse/Christian-Muslim expropriation/vicissitudes of time. The many Buddhist institutions of India suffered a similar fate, between a resurgent Hinduism and Muslim encroachment.

Many estimates also ignore a meaningful failure mode: endowments or nonprofits going off-course and doing things the founder did not mean them to do. The American university case comes to mind, as does the British university case I cite in my essay, and there is a long vein (some of it summarized in Cowen's Good and Plenty) of conservative criticism of American nonprofits like the Ford Foundation, pointing out the 'liberal capture' of originally conservative institutions, which obviously defeats the original point.

(BTW, if you read the waqf link you'd see that excessive iron-clad rigidity in an organization's goal can be almost as bad, as the goals become outdated or irrelevant or harmful. So if the charter is loose, the organization is easily and quickly hijacked by changing ideologies or principal-agent problems like the iron law of oligarchy; but if the charter is rigid, the organization may remain on-target while becoming useless. It's hard to design a utility function for a potentially powerful optimization process. Hm.... why does that sentence sound so familiar... It's almost as if we needed a theory of Friendly Artificial General Organizations...)

Survivorship bias as a major factor in overestimating risk-free returns over time is well known, and a new result came out recently, actually. We can observe many reasons for survivorship bias in estimates of nonprofit and corporate survival in the 20th century (see previously) and also in financial returns: Czarist Russia, the Weimar and Nazi Germanies, Imperial Japan, all the countries in the Warsaw Pact or otherwise communist such as Cuba/North Korea/Vietnam, Zimbabwe... While I have seen very few invocations recently of the old chestnut that 'stock markets deliver 7% return on a long-term basis' (perhaps that conventional wisdom has been killed), the survivorship work suggests that for just the 20th century we might expect more like 2%.
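To get a feel for how much that correction matters even over a single century (the 7% and 2% figures are the ones quoted above; the horizon is just the century the survivorship data covers), a minimal sketch:

```python
# Compound growth over one century at the naive vs. survivorship-adjusted return.
naive = 1.07 ** 100       # ~868x growth at the '7% long-term' chestnut
adjusted = 1.02 ** 100    # ~7.2x growth at the ~2% survivorship-adjusted figure

print(naive, adjusted, naive / adjusted)  # the naive figure overstates growth ~120-fold
```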

The risk per year is related to the size of the endowment/investment; as has already been pointed out, there is fierce legal opposition to any sort of perpetuity, and at least two cases of perpetuities being wasted or stolen legally. Historically, fortunes which grow too big attract predators, become institutionally dysfunctional and corrupt, and fall prey to rare risks. Example: the non-profit known as the Catholic Church owned something like a quarter of all of England before it was expropriated, precisely because it had so effectively gained wealth and invested it (property rights in England otherwise having been remarkably secure over the past millennium). The Buddhist monasteries in China and Japan had issues with growing so large and powerful that they became major political and military players, leading to extirpation by other actors such as Oda Nobunaga. Any perpetuity which becomes equivalent to a large or small country will suffer the same mortality rates.

And then there's opportunity cost. We have good reason to expect the upcoming centuries to be unusually risky compared to the past: even if you completely ignore new technological issues like nanotech or AI or global warming or biowarfare, we still suffer under a novel existential threat of thermonuclear warfare. This threat did not exist at any point before 1945, and systematically makes the future riskier than the past. Investing in a perpetuity, itself investing in ordinary commercial transactions, does little to help except possibly some generic economic externalities of increased growth (and no doubt there are economists who, pointing to current ultra-low interest rates and sluggish growth and 'too much cash chasing safe investments', would deprecate even this).

Compounding-wise, there are other forms of investment: investment into scientific knowledge, into more effective charity (surely saving people's lives can have compounding effects into the distant future?), and so on.

So to recap:

  1. organizational mortality is extremely high
  2. financial mortality is likewise extremely high; and both organizational & financial mortality are relevant
  3. all estimates of risk are systematically biased downwards, with the survivorship work indicating that at least one of these biases is very large
  4. risks for organizations or finances increase with size
  5. opportunity cost is completely ignored

Any of these except perhaps #3 could be sufficient to defeat perpetuities, and combined, I think, they leave the case for perpetuities completely non-existent.

Comment author: CarlShulman 02 March 2014 05:38:30AM *  -1 points [-]

The chance of any perpetuity surviving the full 12000 years is 0.01^120 ≈ 1e-240.

The premises in this argument aren't strong enough to support conclusions like that. Expropriation risks have declined strikingly, particularly in advanced societies, and it's easy enough to describe scenarios in which the annual risk of expropriation falls to extremely low levels, e.g. a stable world government run by patient immortals, or with an automated legal system designed for ultra-stability.

ETA: Weitzman on uncertainty about discount/expropriation rates.
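The Weitzman point, in a minimal sketch (the candidate loss rates and their weights below are made up purely for illustration): when you are uncertain about the annual loss rate and average over it, the expected long-horizon survival probability ends up dominated by the most favorable, lowest-loss scenario.

```python
# Expected survival over a long horizon when the annual loss rate is uncertain.
# The candidate rates and subjective weights are illustrative assumptions only.
rates = [0.00001, 0.001, 0.01]   # possible annual loss rates
weights = [0.1, 0.3, 0.6]        # subjective probability of each rate

def expected_survival(years):
    return sum(w * (1 - r) ** years for w, r in zip(weights, rates))

for years in (100, 1000, 12000):
    print(years, expected_survival(years))
# ~0.59 at 100 years, ~0.21 at 1000 years, ~0.089 at 12000 years;
# by 12000 years essentially all of the expectation comes from the 0.00001 scenario,
# so the effective long-run loss rate falls toward the lowest rate considered.
```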

Comment author: private_messaging 31 January 2014 06:21:13AM *  0 points [-]

And re: Pinker: if you had a bit more experience with trends in necessarily very noisy data, you would realize that such trends are virtually irrelevant to the probability of encountering extremes (especially when those extremes are not even that extreme: immediately preceding the Cold War, you have Hitler). It's the exact same mistake committed by particularly low-brow Republicans when they go on about "ha ha, global warming" during a cold spell, because they think that a trend in noisy data has a huge impact on individual data points.

edit: furthermore, Pinker's data is on violence per capita: total violence increased; it's just that violence seems to scale sub-linearly with population. Population is growing, as is the number of states with nuclear weapons.

Comment author: CarlShulman 01 February 2014 06:29:16PM *  2 points [-]

Pinker's data is on violence per capita: total violence increased; it's just that violence seems to scale sub-linearly with population.

Did you not read the book? He shows big declines in rates of wars, not just per capita damage from war.

Comment author: private_messaging 30 January 2014 12:14:46AM *  -1 points [-]

A rogue superpower - may I use this oxymoron? - could attack 400 existing nuclear reactors and nuclear waste stores with its missiles creating fallout equal to doomsday machine.

Keep in mind that in a nuclear war, even if the nuclear reactors are not particularly well targeted, many (most?) reactors are going to melt down due to having been left unattended, and spent fuel pools may catch fire too.

@Carl:

I think you dramatically underestimate both the probability and the consequences of a nuclear war (by ignoring the non-small probability of a massive worsening of political relations, or a reversal of the tentative trend toward less warfare).

It's quite annoying to see the self-proclaimed "existential risk experts" (professional mediocrities) increasing the risks by undermining and underestimating threats that are not fashionable pet causes from modern popular culture. Leave it to the actual scientists to occasionally give their opinions on these things, please; they're simply smarter than you.

Comment author: CarlShulman 30 January 2014 06:47:33AM 2 points [-]

I agree that the risk of war is concentrated in changes in political conditions, and that the post-Cold War trough in conflict is too small a sample to draw inferences from. Re the tentative trend, Pinker's assembled evidence goes back a long time and covers many angles. It may fail to continue, and a nuclear war could change conditions thereafter, but there are many data points over time. If you want to give detail, feel free.

I would prefer to use representative expert opinion data from specialists in all the related fields (nuclear scientists, political scientists, diplomats, etc.), and the work of panels trying to assess the problem, and would defer to expert consensus in their various areas of expertise (as with climate science). But one can't update on views that have not been made known. Martin Hellman has called for an organized effort to estimate the risk, but without success as yet. I have been raising the task of better eliciting expert opinion and improving forecasting in this area, and worked to get it on the agenda at the FHI (as I did re the FHI survey of the most-cited AI academics) and at other organizations. Where I have found information about experts' views, I have shared it.

Comment author: NoSuchPlace 11 January 2014 11:43:46PM *  2 points [-]

I don't think that this is meant as a complete counter-argument to cryonics, but rather as a point that needs to be considered when calculating the expected benefit of cryonics. For a very hypothetical example (which doesn't reflect my beliefs) where this sort of consideration makes a big difference:

Say I'm young and healthy, so that I can be 90% confident of still being alive in 40 years' time, and I also believe that immortality and reanimation will become available at roughly the same time. Then the expected benefit of signing up for cryonics, all else being equal, would be about 10 times lower if I expected the relevant technologies to come online either very soon (within the next 40 years) or very late (longer than I would expect cryonics companies to last) than if I expected them to come online some time after I had very likely died but before cryonics companies disappeared.

Edit: Fixed silly typo.

Comment author: CarlShulman 12 January 2014 01:31:53AM *  9 points [-]

That would make sense if you were doing something like buying a lifetime cryonics subscription upfront that could not be refunded even in part. But it doesn't make sense with actual insurance, where you stop buying it if it is no longer useful, so costs are matched to benefits.

  • Life insurance, and cryonics membership fees, are paid on an annual basis
  • The price of life insurance is set largely based on your annual risk of death: if your risk of death is low (young, healthy, etc) then the cost of coverage will be low; if your risk of death is high the cost will be high
  • You can terminate both the life insurance and the cryonics membership whenever you choose, ending coverage
  • If you die in a year before 'immortality' becomes available, then immortality does not help you

So, in your scenario:

  • You have a 10% chance of dying before 40 years have passed
  • During the first 40 years you pay on the order of 10% of the cost of lifetime cryonics coverage (higher because of membership fees not being scaled to mortality risk)
  • After 40 years 'immortality' becomes available, so you cancel your cryonics membership and insurance after only paying for life insurance priced for a 10% risk of death
  • In this world the potential benefits are cut by a factor of 10, but so are the costs (roughly); so the cost-benefit ratio does not change by a factor of 10 (a rough numerical sketch follows below)
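To make that concrete, here is a minimal sketch with made-up numbers (the dollar figures are illustrative assumptions, not actual cryonics or insurance prices):

```python
# Cost-benefit sketch: 'immortality' arriving in 40 years shrinks both the
# expected benefit and the amount you end up paying.  All numbers are made up.
p_death_40yr = 0.10          # chance of dying within 40 years (from the scenario)
cost_first_40yr = 30_000     # assumed premiums + membership fees paid while young
cost_lifetime = 150_000      # assumed premiums + fees if you keep paying until death
benefit = 1_000_000          # assumed value of successful preservation and revival

# Scenario A: immortality arrives in 40 years, so you cancel then.
ratio_a = (p_death_40yr * benefit) / cost_first_40yr
# Scenario B: it never arrives in your lifetime; cryonics is your only chance.
ratio_b = (1.0 * benefit) / cost_lifetime

print(ratio_a, ratio_b)  # ~3.3 vs ~6.7: the ratio shifts, but not by a factor of 10
```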
Comment author: gjm 11 January 2014 08:15:01PM 6 points [-]

(My version of) the above is essentially my reason for thinking cryonics is unlikely to have much value.

There's a slightly subtle point in this area that I think often gets missed. The relevant question is not "how likely is it that cryonics will work?" but "how likely is it that cryonics will both work and be needed?". A substantial amount of the probability that cryonics does something useful, I think, comes from scenarios where there's huge technological progress within the next century or thereabouts (because if it takes longer then there's much less chance that the cryonics companies are still around and haven't lost their patients in accidents, wars, etc.) -- but conditional on that it's quite likely that the huge technological progress actually happens fast enough that someone reasonably young (like Chris) ends up getting magical life extension without needing to die and be revived first.

So the window within which there's value in signing up for cryonics is where huge progress happens soon but not too soon. You're betting on an upper as well as a lower bound to the rate of progress.

Comment author: CarlShulman 11 January 2014 09:20:58PM *  10 points [-]

There's a slightly subtle point in this area that I think often gets missed.

I have seen a number of people make (and withdraw) this point, but it doesn't make sense, since both the costs and benefits change (you stop buying life insurance when you no longer need it, so costs decline in the same ballpark as benefits).

Contrast with the following question:

"Why buy fire insurance for 2014, if in 2075 anti-fire technology will be so advanced that fire losses are negligible?"

You pay for fire insurance this year to guard against the chance of fire this year. If fire risk goes down, the price of fire insurance goes down too, and you can cancel your insurance at will.
