Latest in an irregular series, some of whose previous entries were Edge.org and the Girl Scouts...
I examine the Folding@home distributed computing project with reference to its costs (electricity consumption, whose resulting air pollution causes deaths) and benefits (some published papers): http://www.gwern.net/Charity is not about helping. Additional data on either side of the cost-benefit analysis is welcome.
(I also recently split out my essay describing things I have changed my mind on.)
It means that what you said is true but irrelevant. I would not notice whether they lived or not; that doesn't matter to my ethics. What good are they to me? Probably none; my track record on investing in charitable donations is precisely -100% (none has ever even paid interest!). That doesn't matter to my ethics either.
If you are going to attack consequentialism or valuing people besides oneself, this is entirely the wrong place to do so, and makes about as much sense as discussing Nagarjuna's arguments that nothing exists as a refutation of the assertion 'Obama is a bad president'.
They think that is not true. I gather you have a different opinion on the danger or likelihood of AI, or on how their work affects either one. You should take that up with them.
So why do you say the Folding money would be better spent on starving Africans, then? Shouldn't it be donated to the SIAI instead, if you believe that? If not, why not criticize them on the same basis? I am also not claiming that one shouldn't value other people, just that you don't have to weight all lives equally and shouldn't expect others to. And I don't really believe that anyone truly maximizes "lives of others", or would want to if they knew what that meant.
Also, "Charity X doesn't optimize under my personal ethics" is not the same claim as "Charity is not about helping" — not that I disagree that signaling is important.