I'm just tired of the signal pollution, and would like to be able to use karma to honestly appraise the worth of my articles and posts, without seeing 80% of my downvotes come in chunks that correspond precisely to how many posts I've made since the last massive downvote spree.
EDIT to add data points:
Spurious downvoting stopped soon after I named a particular individual. (Not ALL downvoting stopped, but the downvotes I got all seemed on the level.)
One block of potentially spurious downvoting occurred approximately a week ago, but karma patterns then returned to expected levels. I consider this block dubious because it reasonably matches what I'd expect if someone had noticed several of my posts together and disagreed with all of them: it did not follow the usual pattern of starting with my earliest or latest post and downvoting everything (it hit all my posts in a few threads, but none in others). I'm including it only for completeness.
Spurious, indiscriminate downvoting started up again approximately half an hour ago, on Sunday (12/1/2013), around noon MDT.
Edit: And now on Tuesday, 12/3/2013, at 10 AM, I'm watching my karma go down again... about 30 points so far.
Edit: And now on Saturday, 12/14/2013, at 2 PM, I'm watching my karma go down again... about 15 points so far, at a rate of about 1-2 points per second.
Thank you for that nice clear demonstration that there are reasons for not wanting a rule against mass-downvoting that don't involve thinking mass-downvoting isn't a very bad thing.
I think you exaggerate, though. Making good-enough rules might not be an FAI-complete problem. E.g., the rules and/or automatic detection mechanism might leave the matter partly to moderators' discretion (or to other users' discretion, if all that happens on a violation is that a complete description of what you did gets posted automatically).
(The previous paragraph is not intended as an endorsement of having such rules. I'm just observing that it might be possible to have useful ones without needing perfect ones.)
This may be a demonstration that ultimately, if you want to constrain human beings to achieve a complex goal, you need human moderation. (Or, of course, moderation by FAI, but we don't have one of those.)