"Planet-cancer" environmentalists don't own server farms or make major breakthroughs in computer science, unless they're several standard deviations above the norm in both logistical competence and hypocrisy. Accordingly, they'd be working with techniques someone else developed. It's true that a general FAI would be harder to design than even a specific UFAI, but an AI with a goal along the lines of 'restore earth to it's pre-Humanity state and then prevent humans from arising, without otherwise disrupting the glorious purity of Nature' probably isn't easier to design than an anti-UFAI with the goal 'identify other AIs that are trying to kill us all and destroy everything we stand for, then prevent them from doing so, minimizing collateral damage while you do so,' while the latter would have more widespread support and therefore more resources available for it's development.
You're adding constraints to the "humanity is a cancer" project that make it a lot harder. Why not settle for "wipe out humanity in a way that doesn't cause much damage, and let the planet heal itself"?
The idea of an anti-UFAI is intriguing. I'm not sure it's much easier to design than an FAI.
I think the major barrier to the development of a "wipe out humans" UFAI is that the work would have to be done in secret, which sharply limits the talent and resources available to it.