This doesn't work so well if you want to use it as a decision rule. You may end up with some ranking which leaves you indifferent between the top two options, but then you still need to pick one. I think you need to explain why whatever process you use to do that wasn't considered part of the voting system.
It seems to me that decision rules that permit indifference are more useful than those that do not, because fungibility of actions is a useful property. That is, I would view the decision rule as expressing preferences over classes of actions, without specifying which action to take within a class, because it sees no difference between them. Consider Buridan's Ass: it would rather "go eat hay" than "not go eat hay," but it has no high-level preference for the left or right bale of hay, just as it has no preference for whether it starts walking with its right hoof or its left.
Something must have a preference--perhaps the Ass is right-hoofed, and so it leads with its right hoof and goes to the right bale of hay--but treating that decision as its own problem of smaller scope seems better to me than specifying every possible detail in the high-level decision problem.
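To make the point concrete, here is a minimal sketch (all function names and the toy utilities are hypothetical, not from Bostrom and Ord) of a decision rule that only picks out the top indifference class, with the within-class choice delegated to a separate, lower-level tie-breaker:

```python
import random

def top_class(actions, utility):
    """Return the set of actions tied for highest utility --
    the decision rule's top indifference class."""
    best = max(utility(a) for a in actions)
    return [a for a in actions if utility(a) == best]

def decide(actions, utility, tiebreak=random.choice):
    """The high-level rule picks the class; a separate, lower-level
    rule (here: a uniform random choice) picks within it."""
    return tiebreak(top_class(actions, utility))

# Buridan's Ass: both bales beat not eating; the choice between
# the bales is delegated entirely to the tie-breaker.
utility = {"left bale": 1.0, "right bale": 1.0, "no hay": 0.0}.get
print(top_class(["left bale", "right bale", "no hay"], utility))
# -> ['left bale', 'right bale']
```

The point of the separation is that the high-level rule never has to mention hooves or bale positions at all; any tie-breaker can be swapped in without changing the preferences it expresses.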
Thanks to ESRogs, Stefan_Schubert, and the Effective Altruism summit for the discussion that led to this post!
This post is to test out Polymath-style collaboration on LW. The problem we've chosen to try is formalizing and analyzing Bostrom and Ord's "Parliamentary Model" for dealing with moral uncertainty.
I'll first review the Parliamentary Model, then give some of Polymath's style suggestions, and finally suggest some directions that the conversation could take.
The Parliamentary Model
The Parliamentary Model is an under-specified method of dealing with moral uncertainty, proposed in 2009 by Nick Bostrom and Toby Ord. Reposting Nick's summary from Overcoming Bias:
In a comment, Bostrom continues:
It's an interesting idea, but clearly there are a lot of details to work out. Can we formally specify the kinds of negotiation that delegates can engage in? What about blackmail or prisoners' dilemmas between delegates? In what ways does this proposed method outperform other ways of dealing with moral uncertainty?
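As one possible starting point for formalization, here is a toy sketch of just the seat-allocation and voting stage of the model. All names, numbers, and the plurality-vote rule are illustrative assumptions on my part; in particular, the negotiation between delegates, which is the heart of the proposal, is left entirely unmodeled here:

```python
def parliamentary_vote(credences, preferences, seats=100):
    """Toy sketch: allocate seats in proportion to credence in each
    moral theory, then let every delegate cast a plurality vote for
    its theory's top-ranked option.  Negotiation, vote trading, and
    blackmail between delegates are NOT modeled."""
    votes = {}
    for theory, credence in credences.items():
        n = round(seats * credence)   # proportional seat allocation
        top = preferences[theory][0]  # each delegate votes its top pick
        votes[top] = votes.get(top, 0) + n
    return max(votes, key=votes.get)

# Hypothetical example: 60% credence in one theory, 40% in another.
credences = {"utilitarianism": 0.6, "deontology": 0.4}
preferences = {
    "utilitarianism": ["donate", "volunteer"],
    "deontology": ["keep promise", "donate"],
}
print(parliamentary_vote(credences, preferences))  # -> donate
```

Even this crude version makes one open question visible: with sincere plurality voting, the 60% bloc simply wins, so everything interesting about the model has to come from the negotiation dynamics this sketch omits.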
I was discussing this with ESRogs and Stefan_Schubert at the Effective Altruism summit, and we thought it might be fun to throw the question open to LessWrong. In particular, we thought it'd be a good test problem for a Polymath-project-style approach.
How to Polymath
The Polymath comment style suggestions are not so different from LW's. In essence, the idea of a Polymath project is to split the work into minimal chunks among participants and to get most of the thinking to occur in comment threads, as opposed to a process in which one community member goes off for a week, meditates deeply on the problem, and produces a complete solution by themselves. Rules 5 and 6 are particularly instructive:
It seems to us as well that an important part of the Polymath style is to have fun together and to use the principle of charity liberally, so as to create a space in which people can safely be wrong, point out flaws, and build up a better picture together.
Our test project
If you're still reading, then I hope you're interested in giving this a try. The overall goal is to clarify and formalize the Parliamentary Model, and to analyze its strengths and weaknesses relative to other ways of dealing with moral uncertainty. Here are the three most promising questions we came up with:
The original OB post had a couple of comments that I thought were worth reproducing here, in case they spark discussion, so I've posted them.
Finally, if you have meta-level comments on the project as a whole instead of Polymath-style comments that aim to clarify or solve the problem, please reply in the meta-comments thread.