I've been banging my head against this issue: I have jury duty next week (my first time).
The obvious worry is: what if I get put on a drug case? The injustices of the anti-drug laws are well known, and I know about the concept of jury nullification, but I'm bouncing back and forth on whether it's valid.
Some of my thoughts on this:
Thought 1: Just decide whether they did it or not.
Thought 2: But can I ethically bring myself to declare guilty (and thereby expose to potentially serious punishment) someone who didn't actually do anything wrong? That is, can I bring myself to support a seriously unjust law?
Thought 3: (and here's where TDT-style issues come in) On the other hand, the algorithm "if on a jury, don't convict under a law I don't like" seems in general like a potentially really, really bad algorithm (see the toy sketch after this list). One obvious failure mode: homophobic juries that refuse to convict on hate crimes against gays.
Thought 4: But those sorts of people tend not to be serious rationalists. Reasoning as if I should expect correlations between my decision algorithm and theirs seems questionable.
Thought 5: Really? Really? If I wanted to start making excuses like that, I could construct, whenever I felt like it, a reference class of which I am the sole member. Thought-4-style reasoning seems shaky in its own right.
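To make Thought 3's worry concrete, here's a toy sketch (the juror values and case labels are invented, purely illustrative): the same meta-algorithm, "acquit whenever I dislike the governing law", run by jurors with different values, blocks just convictions as readily as unjust ones.

```python
# Toy illustration of Thought 3: one meta-algorithm, many value systems.
# All juror values and case labels here are made up.

from dataclasses import dataclass

@dataclass
class Juror:
    disliked_laws: set[str]

    def votes_to_convict(self, law: str, defendant_did_it: bool) -> bool:
        if law in self.disliked_laws:
            return False  # nullify, regardless of the facts
        return defendant_did_it

me = Juror(disliked_laws={"drug possession"})
bigot = Juror(disliked_laws={"hate crimes"})

# The algorithm does what I want on a drug case...
print(me.votes_to_convict("drug possession", defendant_did_it=True))  # False
# ...but the same algorithm, in other hands, blocks a just conviction.
print(bigot.votes_to_convict("hate crimes", defendant_did_it=True))   # False
```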
So basically, I'm smart enough to have the above sequence of thoughts, but not smart enough to resolve it. What is a rationalist to do? (In other words, I'd greatly appreciate any help untangling my thoughts so I can figure out whether to follow the rule "nullify when appropriate" or "nullification is bad, period, even if the law in question is hateful".)
Yes, a conviction under an unjust law is bad; that's not in dispute. The problem is whether nullifying is an appropriate way to fight that injustice if you reason by TDT, a decision theory that scores well on numerous desiderata which account for exactly these intuitions. Appealing to a specific moral duty doesn't resolve the problem, for the same reason that appealing to greed doesn't justify two-boxing on Newcomb's problem.
For my part, I do have a big problem with drug laws (despite having no plans to use drugs). And before thinking about this as a rationalist, I did favor jury nullification. But if I put my TDT hat on, here are the problems I see with it:
As loqi said, if jurors tended to nullify, then the system, anticipating this, would either stop using juries or resort to extremely stringent screening to weed out nullifiers. That could involve massive violations of jurors' privacy, which in turn would make them very circumspect about discussing their political views; even today, people complain about how intrusive the questions jurors get asked are. Or the system would lower the threshold for conviction so that a conviction could still go through when some jurors acquit "for a bad reason".
(I think this is similar to the old debate about whether you should cut down the law to get at the devil, or Hitler, etc. Or whether a judge in a slavery-supporting society who "sees the light" should suddenly start ignoring the laws and making whatever rulings go against slavery.)
TDT asks you to consider the consequences conditional on you (and all similar instantiations of your algorithm) being the kind of person who would keep quiet about your opinion of nullification and of the particular law, and then nullify if you didn't like the law.
This seems equivalent to having humans adopt the policy "use any discretion I have to bend the application of the laws toward the legal regime I prefer". But if humans deemed this optimal, it would not be possible to have a "rule of law" system: one with definite laws, where people can know when they're breaking them, and which disallows favoritism. It would probably not be possible to enforce any law except the extremely popular ones (which may be a good thing).
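Here's a minimal sketch of that act-level versus policy-level comparison (all payoffs are invented; only the structure of the comparison matters): evaluated in isolation, one nullification looks like a free win, but evaluated as a policy adopted by every relevantly similar juror in a system that anticipates it, the numbers can flip.

```python
# Toy model of the TDT point: evaluate the policy, not the single act.
# All payoff values below are assumptions chosen for illustration.

def act_value(nullify: bool) -> float:
    """One juror's single decision, evaluated in isolation."""
    return 1.0 if nullify else 0.0  # one unjust conviction avoided

def policy_value(nullify_policy: bool) -> float:
    """The same choice, evaluated as a policy adopted by every juror
    running a relevantly similar algorithm, with the legal system
    anticipating it (screening, lowered thresholds, bigoted nullifiers)."""
    if not nullify_policy:
        return 0.0  # baseline: rule of law intact, some unjust convictions
    gain = 1.0  # unjust convictions avoided
    cost = 3.0  # anticipated systemic responses erode the jury system
    return gain - cost

print(act_value(True))     # 1.0: nullifying looks strictly better in isolation
print(policy_value(True))  # -2.0: the policy, adopted widely, comes out behind
```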
So I'm forced to conclude that jury nullification is in the unfortunate position of feeding off its own rarity, like stealing or two-boxing: it only looks profitable so long as most agents refrain from it. While it may appear to be a chance to shift utility toward people you like, your deciding to do so has broader implications.
So at the very least, you should say upfront that you regard it as optimal to nullify unjust laws. That will get you tossed out of the jury pool but leave you otherwise unharmed. (I've heard that merely exhibiting familiarity with the relevant keywords is enough to get you dismissed.)
Thanks, though part of my question is exactly this: given the sorts of issues you bring up, should I consider it optimal for me to run the algorithm "nullify laws that seem unjust to me"?