META THREAD: what do you think about this project? About Polymath on LW?
The Future of Humanity Institute could make use of your money
Many people have an incorrect view of the Future of Humanity Institute's funding situation, so this is a brief note to correct that; think of it as a spiritual successor to this post. As John Maxwell puts it, FHI is "one of the three organizations co-sponsoring LW [and] a group within the University of Oxford's philosophy department that tackles important, large-scale problems for humanity like how to go about reducing existential risk." (If you're not familiar with our work, this article is a nice, readable introduction, and our director, Nick Bostrom, wrote Superintelligence.) Though we are a research institute in an ancient and venerable institution, this does not guarantee funding or long-term stability.
Polymath-style attack on the Parliamentary Model for moral uncertainty
Thanks to ESrogs, Stefan_Schubert, and the Effective Altruism summit for the discussion that led to this post!
This post is to test out Polymath-style collaboration on LW. The problem we've chosen to try is formalizing and analyzing Bostrom and Ord's "Parliamentary Model" for dealing with moral uncertainty.
I'll first review the Parliamentary Model, then give some of Polymath's style suggestions, and finally suggest some directions that the conversation could take.
It seems that specifying the delegates' informational situation creates a dilemma.
As you write above, we should take the delegates to believe that the Parliament's decision is stochastic: the probability of the Parliament taking action A is proportional to the fraction of votes for A. This is meant to avoid giving the majority bloc absolute power.
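The stochastic rule just described can be sketched in a few lines; the vote weights below are hypothetical, chosen only for illustration:

```python
import random

def stochastic_parliament(votes, rng):
    """Pick an action with probability proportional to its vote share."""
    options = list(votes)
    return rng.choices(options, weights=[votes[o] for o in options])[0]

# Hypothetical vote shares: 60% of delegates back A, 40% back B.
rng = random.Random(0)
samples = [stochastic_parliament({"A": 0.6, "B": 0.4}, rng) for _ in range(10000)]
frac_A = samples.count("A") / len(samples)
# Over many samples, A is chosen roughly 60% of the time, so the 40% bloc
# retains real influence rather than being outvoted on every decision.
```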
However, your suggestion generates its own problems (as long as we take the parliament to go with the option with the most votes):
Suppose an issue the Parliament votes on involves options A1, A2, ..., An and an additional option X. Suppose further that the great majority of theories in which the agent has credence agree that it is very important to perform one of A1, A2, ..., An rather than X. Although each of these theories has a different favourite option, which of A1, A2, ..., An is performed makes little difference to them.
Now suppose that according to an additional hypothesis in which the agent has relatively little credence, it is best to perform X.
Because the delegates who favour A1, A2, ..., An do not know that what actually matters is getting the most votes, they see no value in coordinating and concentrating their votes on one or a few options to make sure X does not end up with the most votes. Accordingly, they will all vote for different options. X may then end up being the option with the most votes if the agent has slightly more credence in the hypothesis favouring X than in any other individual theory, even though the agent is almost sure that this option is grossly suboptimal.
This is clearly the wrong result.
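The vote-splitting failure above can be simulated directly. The credence numbers here are hypothetical, chosen only so that X is the single largest theory while 90% of credence opposes it:

```python
# Hypothetical credences: ten theories each favouring a distinct option
# A1..A10 (9% credence each), plus one theory favouring X (10% credence).
credences = {f"A{i}": 0.09 for i in range(1, 11)}
credences["X"] = 0.10

# Each delegate votes for its own theory's favourite option, with voting
# weight equal to that theory's credence. No coordination occurs, because
# the delegates believe the outcome is stochastic rather than plurality.
votes = dict(credences)

# The parliament actually goes with the option receiving the most votes.
winner = max(votes, key=votes.get)
print(winner)  # X wins the plurality despite 90% of credence opposing it
```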
To me it looks like the main issues are in configuring the "delegates" so that they don't "negotiate" quite like real agents - for example, there's no delegate that will threaten to adopt an extremely negative policy in order to gain negotiating leverage over other delegates.
The part where we talk about these negotiations seems to me like the main pressure point on the moral theory qua moral theory - can we point to a form of negotiation that is isomorphic to the "right answer", rather than just being an awkward tool to get closer to the right answer?
A heuristic I've previously encountered for deciding whether to donate to MIRI or to FHI is to fund whichever one has more room for more funding, or whichever is experiencing more of a funding crunch at a given time. Since Less Wrong is a hub for an unusually large number of donors to both of these organizations, it might be nice to have a (semi-)annual discussion on these matters with representatives from the various organizations. How feasible would this be?
This is worth thinking about in the future, thanks. I think right now, it's good to take advantage of MIRI's matched giving opportunities when they arise, and I'd expect either organization to announce if they were under a particular crunch or aiming to hit a particular target.
$30 donated. It may become quasi-regular, monthly.
Thanks for letting us know. I wanted to donate to x-risk, but I didn't really want to give to MIRI (even though I like their goals and the people) because I worry that MIRI's approach is too narrow. FHI's broader approach, I feel, is more appropriate given our current ignorance about the vast possible varieties of existential threats.
Yes, thank you!
I've recently reconciled my behavior with my ethical intuitions regarding eating animals by deciding to adopt some variation of "don't eat meat". I settled this question long ago but did not act upon it until now.
I notice that there is very confusing information out there about what one should eat in order to avoid negative health impacts, and would like to read correct and useful articles on the subject, because I strongly desire to not be unhealthy. Do you have suggestions?
I am pragmatic. My intuition says that bone ash used to color certain food products has a relatively low cost (in sin-ons), and that there definitely are places I will make trades against sin-ons.
I also recognize that I would like a reasonably fast process to estimate sin-ons, and suggestions about highly impactful considerations (metabolic efficiency, things that might put various horrors on understandable scales) would be appreciated. Also, I am not sure that sin-ons is the word I am looking for as a measure of this sort of badness.
I have checked with my brain, and my brain has decided that cuteness does not particularly matter to it as a factor. Horse sashimi is delicious.
If you have things to say in favor of eating meat, please share them, and explain it to me as if I am a precocious 8 year old.
I am considering adding oysters and mussels to my vegetarian diet as a result of these two blog posts. I don't have Good Information about the nutritional problems that come from avoiding meat or the nutritional benefits of adding oysters and mussels, but it seems like a good way to hedge against deficiencies without spending too much research time, especially since I'm cutting down on eggs (Warning: unpleasant image of chicken having its beak clipped appears relatively high on that page).
That being said, I do consider this kind of thing to be "reconciling daily behaviours with abstract ethical beliefs" more than I consider it an effective form of altruism; it looks to me like poverty and the long-term future are much better places to invest Actual Altruistic Effort.
Thanks, Alex; I think you're right, and am checking into it.
...Vincent has now updated the paper; thanks again!
Consider the following degenerate case: there is only one decision to be made, and your competing theories assess it as follows.
And suppose you find theory 2 just slightly more probable than theory 1.
Then it seems like any parliamentary model is going to say that theory 2 wins, and you choose option A. That seems like a bad outcome.
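The degenerate case can be made concrete with a small sketch. The utility numbers below are entirely hypothetical (the original table is not shown here); they are chosen so that theory 1 regards option A as catastrophic while theory 2 is nearly indifferent:

```python
# Hypothetical utilities: theory 1 thinks A is catastrophic; theory 2
# slightly prefers A. Theory 2 is marginally more probable.
utilities = {
    "theory1": {"A": -1000, "B": 0},
    "theory2": {"A": 1, "B": 0},
}
credences = {"theory1": 0.49, "theory2": 0.51}

# With only one decision, a parliament reduces to majority rule: the
# delegate with more credence simply dictates the outcome.
majority = max(credences, key=credences.get)
parliament_choice = max(utilities[majority], key=utilities[majority].get)

# Expected-utility maximisation over the same credences disagrees.
def expected_utility(option):
    return sum(credences[t] * utilities[t][option] for t in credences)

eu_choice = max(["A", "B"], key=expected_utility)
print(parliament_choice, eu_choice)  # parliament picks A; EU picks B
```

The gap between the two answers is the point of the example: with nothing left to trade across future decisions, the minority delegate has no leverage, however strongly its theory cares.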
Accordingly, I suggest that to arrive at a workable parliamentary model we need to do at least one of the following:
As you might gather, I find the last option the most promising.
Great example. As an alternative to your three options (or maybe this falls under your first bullet), maybe negotiation should happen behind a veil of ignorance about what decisions will actually need to be made; the delegates would arrive at a decision function for all possible decisions.
Your example does make me nervous, though, on the behalf of delegates who don't have much to negotiate with. Maybe (as badger says) cardinal information does need to come into it.