Interesting thought; I'll admit I hadn't actually considered that (I have a general problem with being too trusting and not seeing ulterior motives, although I suspect most people really aren't very dishonest).
I can see a few reasons why others might not be asking:
1) It's unlikely to get an answer. There hasn't been much willingness to respond to similar requests in the past, and EY has a thing about not giving in to demands. This doesn't really explain why people are still donating, though.
2) The number of genuine Dr Evils in the world is very small. Historically, the most dangerous individuals have been the well-intentioned but deluded rather than the rationally malicious, which is odd, since the latter category seems much more dangerous; that they haven't dominated is itself evidence of their rarity. Maybe people are just making an expected-utility calculation and concluding that the Dr Evil hypothesis is unlikely enough to trust SIAI anyway.
3) Eliezer is not the whole of SIAI; he is not even in charge. Some of the people involved have existing track records, so if there is a conspiracy it runs very deep. I suppose it's possible he has tricked every other member of the organization, but now we are adding a lot of burdensome details to what was already a fairly unlikely hypothesis.
4) If there are any real Dr Evils out there, then SIAI transparency might actually help them: it gives away SIAI's ideas while Dr Evil keeps his own to himself and, as a result, finishes his design first.
5) If I were Dr Evil trying to build an AI, I wouldn't say that was what I was doing, since AI is quite a hard sell and will only attract donations from a limited demographic (even more so for an out-of-the-mainstream idea like FAI). I would found the "Organization for the Protection of Puppies, Kittens, and Bunnies" or something like that, which would probably bring in more donation money (or maybe I would even go into business rather than charity, since current evidence suggests that is overwhelmingly the most effective way to make large amounts of money).
Frankly, rather than a Dr Evil who wants to take over the galaxy (I don't think he's ever said anything about the multiverse), a much more likely prospect is a con man who's found an underused niche. Of course, this wouldn't explain how he got some fairly big names like Jaan Tallinn and Edwin Edward to sponsor his donation drive.
Most of these reasons are quite charitable to LW members, and unfortunately I suspect my own reason is the most common.
Earlier, I lamented that even though Eliezer named scholarship as one of the Twelve Virtues of Rationality, there is surprisingly little interest in (or citing of) the academic literature on some of Less Wrong's central discussion topics.
Previously, I provided an overview of formal epistemology, the field of philosophy that deals with (1) mathematically formalizing concepts related to induction, belief, choice, and action, and (2) arguing about the foundations of probability, statistics, game theory, decision theory, and algorithmic learning theory.
Now, I've written Machine Ethics is the Future, an introduction to machine ethics, the academic field that studies the problem of how to design artificial moral agents that act ethically (along with a few related problems). There, you will find PDFs of a dozen papers on the subject.
Enjoy!