I'm sure, but from reading the Mechanical Turk literature (e.g. "My Boss Is a Robot"), you need really specific tasks and a framework for working through MT, so you can do majority voting on submissions and that sort of thing.
I have a hard time thinking of things you would want to do that are mechanical enough to get good results out of Turk, and large-scale enough that you would recover all the overhead of using MT and the actual fees (to say nothing of learning how to use MT and your framework!).
The article resonated with me because a number of my own activities are pretty repetitive volunteer work, like reading through the archives of the Evangelion ML looking for forgotten gems and information. But I can't imagine trusting that to Turkers, because so many of the important parts are things I can't explain and whose importance I recognize only serendipitously, and sometimes only in retrospect, after I have learned about a related topic. Or take writing the DNB FAQ, something any volunteer could have done by reading through the DNB ML's emails, seeing what was important, and consolidating it all; but again, that's not something you could really write explicit instructions for.
Another option might be something less mechanized, like freelancer.com (which another LWer recommended as useful). Do you see that as more viable?
From the GiveWell blog, which is often interesting & applicable to our interests, comes a post on the quality of its volunteers:
(The dropout rate is probably not due to the perceived low utility of the work - GiveWell seems to be up-front that the test assignment is a test.)
I draw a few lessons from this: