A paper "Philosophers’ Biased Judgments Persist Despite Training, Expertise and Reflection" (Eric Schwitzgebel and Fiery Cushman) is available here: http://www.faculty.ucr.edu/~eschwitz/SchwitzPapers/Stability-150423.pdf
Very interesting, thanks for finding it.
The methods and statistics look good (feel free to correct me). However, I wish the authors had controlled for gender. I don't think it would significantly change the results, but behavioral finance research indicates that men are more susceptible to certain behavioral biases than women:
https://faculty.haas.berkeley.edu/odean/papers/gender/BoysWillBeBoys.pdf
Admittedly, “Boys Will Be Boys” addresses overconfidence bias rather than framing and order biases.
My goal is to help people think more clearly and rationally, with an evidence-based mindset informed by LW materials. Some people may then choose to donate to AI causes, and others to EA - as you can see from the blog I cited, I specifically highlight the benefits of the EA movement. Regardless, as Brian Tomasik points out, helping people be more rational contributes to improving the world, and thus the ultimate goal of the EA movement.
Regardless, as Brian Tomasik points out, helping people be more rational contributes to improving the world, and thus the ultimate goal of the EA movement.
I agree that increasing rationality would improve the world, but would it improve the world more than other efforts? I believe you will face stiff competition from MIRI for effective altruists' charitable donations. From the Brian Tomasik essay you referenced…
…because AI is likely to control the future of Earth’s light cone absent a catastrophe before then, ultimately all other applications matter through their influence on AI.
Separately…
Is encouraging philosophical reflection in general plausibly competitive with more direct work to explore the philosophical consequences of AI? My guess is that direct work like MIRI’s is more important per dollar.
Why should I support Intentional Insights instead of MIRI? I'm sure I won't be the only potential donor to ask this question, so I recommend that you craft a solid response.
Good use of marginal dollars is one of the EA principles. Of course you can argue that those principles are wrong, but it makes no sense to expect the EA community to defend people who don't follow its principles.
And also that Harvard has demonstrated the ability to manage its endowment well.
They lost 8 billion of it in 2008.
And also that Harvard has demonstrated the ability to manage its endowment well.
They lost 8 billion of it in 2008.
Harvard's endowment has performed exceptionally well. Here’s the data: http://www.hmc.harvard.edu/docs/Final_Annual_Report_2014.pdf
Endowments are managed with long-term time horizons, so cherry-picking a single year of performance and generalizing about investment skill from it is misleading and inaccurate.
Also, percentage gains and losses are a more appropriate metric to use when comparing investment skill between managers. Otherwise, a large endowment will seem riskier than a small one, even if the two endowment allocations are identical.
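The point can be made concrete with a quick sketch (all figures below are invented for illustration):

```python
# Hypothetical illustration: two endowments with identical allocations
# post the same percentage return but very different dollar swings.
large_endowment = 36.9e9  # assumed size of a large endowment, in dollars
small_endowment = 1.0e9   # assumed size of a small endowment, in dollars
annual_return = -0.22     # assumed one-year return, identical for both

large_loss = large_endowment * annual_return
small_loss = small_endowment * annual_return

print(f"Large endowment: {annual_return:.0%} return, ${large_loss:,.0f}")
print(f"Small endowment: {annual_return:.0%} return, ${small_loss:,.0f}")
# Judged by dollars lost, the large endowment looks far riskier;
# judged by percentage return, the two managers performed identically.
```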
For prediction markets to work, they need to settle: a bet must be decided one way or another within a reasonable time so that the winners can collect the money from the losers.
How are you going to settle the bets on a 50%-population pandemic or a nuclear war?
a bet must be decided one way or another within a reasonable time
Each contract would have a maturity date - that is standard.
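As a sketch of that standard structure (the class and field names are my own invention, not any real exchange's API), a binary contract with a maturity date guarantees settlement within a bounded time:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class BinaryContract:
    """Hypothetical binary prediction-market contract.

    Pays 1 unit if the named event occurs on or before maturity,
    0 if maturity passes without the event - so every bet settles."""
    event: str
    maturity: date

    def settle(self, event_date: Optional[date], today: date) -> Optional[int]:
        if event_date is not None and event_date <= self.maturity:
            return 1  # event occurred in time: "yes" side collects
        if today > self.maturity:
            return 0  # maturity passed without the event: "no" side collects
        return None   # contract is still live
```

Either branch resolves by a known date, which is what lets winners collect from losers within a reasonable time.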
How are you going to settle the bets on a 50%-population pandemic or a nuclear war?
Your primary concern is that the market would not be functional after a 50%-population pandemic or a nuclear war? That is a possibility. The likelihood depends on the severity of the catastrophe, the popularity of the market, its technology and infrastructure, its geographic distribution, its disaster recovery plan, and so on.
With the proper funding and interest, I think a very robust market could be created. And if it works, the information it provides will be very valuable (in my opinion).
if the contracts are based on severe (but not existential) events
You can treat insurance and reinsurance markets as prediction markets for severe events (earthquakes, hurricanes, etc.). I don't think they (or your proposed prediction markets) would be helpful in estimating the probabilities of extinction events.
Seems like the definition of "severe" is an issue here. Maybe I should have used "incredibly severe"?
Yes, reinsurance markets deal in large insured risks, but they do not target the incredibly large humanitarian risks that are more informative to us. See reinsurance deals here for reference: http://www.artemis.bm/deal_directory/
I don't think they (or your proposed prediction markets) would be helpful in estimating the probabilities of extinction events.
Care to explain your reasoning? For example, if the market indicated that the chance of a pandemic killing 50% of the population is 1,000x greater than the likelihood of a nuclear war of any kind, wouldn't a forecaster find this at least a little useful?
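For a binary contract paying one unit, the market price approximates the crowd's probability estimate, so relative likelihoods fall out of a simple price ratio. A sketch with invented prices:

```python
# Invented prices for two binary contracts, each paying 1 unit if its
# event occurs before maturity; in a liquid market, price ~ probability.
pandemic_price = 0.010       # assumed price: pandemic kills 50% of population
nuclear_war_price = 0.00001  # assumed price: nuclear war of any kind

relative_likelihood = pandemic_price / nuclear_war_price
print(f"Market judges the pandemic {relative_likelihood:,.0f}x more likely")
```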
I have an idea related to Plan B – Survive the Catastrophe.
The unfortunate reality is that we do not have enough resources to effectively prepare for all potential catastrophes. Therefore, we need to determine which catastrophes are more likely and adjust our preparation priorities accordingly.
I propose that we create/encourage/support prediction markets in catastrophes, so that we can harness the “wisdom of the crowds” to determine which catastrophes are more likely. Large prediction markets are good at determining relative probabilities.
Of course, the prediction market contracts could not be based on an actual extinction event because no one would be alive to collect the payoff! However, if the contracts are based on severe (but not existential) events, they would still help us infer more accurate estimates for extinction event probabilities.
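One crude way such an inference might work - assuming, contentiously, that event probability falls off as a power law in severity, so settled severe-event contracts can be extrapolated toward the existential tail. All prices and the functional form are my own assumptions:

```python
import math

# Invented market-implied annual probabilities of pandemics at
# increasing (severe but non-existential) fractions of population killed.
implied = {0.05: 0.02, 0.10: 0.008, 0.25: 0.002}

# Fit log-probability linearly in log-severity (a power law) by least squares.
xs = [math.log(s) for s in implied]
ys = [math.log(p) for p in implied.values()]
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

def extrapolated_probability(severity: float) -> float:
    """Extrapolate the fitted power law to a severity outside the data."""
    return math.exp(intercept + slope * math.log(severity))

# Extrapolating to a near-extinction severity gives a (model-dependent)
# estimate of a probability that no contract could settle directly.
print(extrapolated_probability(0.99))
```

The extrapolation is only as good as the power-law assumption, but it illustrates how prices on settleable events could feed an estimate of an unsettleable one.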
I agree that simple, single stage game models do not usually predict important real world outcomes. I also agree that markets change players' incentives to act outside of that market.
However, society usually notices these blind spots and addresses them in one way or another. Szabo describes two real world problems:
1) Audits ignore outside accounts/trading activity
For public companies, deception of this kind is usually illegal. Key employees of a company certify on its annual report that “this report does not contain any untrue statement of a material fact or omit to state a material fact” or something to that effect. Also, insider trading can carry stiff penalties.
2) Prediction markets may become assassination markets
Murder is illegal. Also, profits obtained by illegal acts are subject to disgorgement in the United States.
…a prediction market on a certain person's death is also an assassination market. Which is why a pre-Gulf-War-II DARPA-sponsored experimental "prediction market" included a prop bet on Saddam Hussein's death, but excluded such trading on any other, more politically correct world leaders.
An “assassination market” on normal people exists today. It is called the life settlement market.
Huh?
If I were you, I would consider the possibility that I am envious of those who signal and receive praise, and that I am rationalizing my feelings by claiming to uphold the social standard of "good taste".
Yep.
So three people independently posted the same thing to LW: first as a comment in some thread, then as a top-level comment in the open thread, and finally as a post in Discussion :-)
Yes, that is funny. I'm glad the paper is garnering attention, as I think it's a powerful reminder that we are ALL subject to simple behavioral biases.
I reject the alternative explanation that philosophy and philosophers are crackpots.