It's an interesting question that I'm pondering.
Now, while I do question the intellectual honesty of this blog, I'll link to it anyway, since the evidence does seem interesting, at the very least: http://wattsupwiththat.com/2010/01/04/where-are-the-corpses/
http://wattsupwiththat.com/2011/05/19/species-extinction-hype-fundamentally-flawed/
It does seem that environmentalism can mimic some qualities of religion (I know, since I used to be an environmentalist myself). As such, it can cause many extremely intelligent people to reject evidence that goes against their worldview.
Furthermore, it's possible that computational chemistry may soon become our primary route to drug discovery, rather than prospecting for biological compounds in particular ecosystems (that said, drug discovery is entirely different from drug synthesis, and discovering a gene that codes for a particular protein and splicing it into an E. coli bacterium will be far easier than anything computational chemistry can do in the near future).
With all that being said, what now? I do believe that something of value gets lost as habitat gets destroyed. But it's hard to quantify value in these cases. Certain animals, like crows, chimpanzees, orcas, and elephants, are cognitively advanced enough to have their own cultures. If one of their subcultures gets destroyed (which can happen without a full-scale extinction), is anything valuable lost (besides value for scientific research with potential applications elsewhere)? And is it more important to worry about these separate cultures than about different subspecies of the same animal? Certainly, we're now beginning to discover novel social networks in dolphins and crows. But most of these animals are not at risk of extinction, and even the chimpanzees and bonobos will, at worst, only go extinct in the wild. There are other, less cognitively advanced animals at higher risk of permanent extinction.
What we're prone to systematically underestimating, of course, is the possible permanent loss of micro-organisms. And of novel biological structures (and networks) that may be contained within them.
"Timeful/timeless" was what I meant, I confused my terminology. I'm confused because it seems to me that there are an infinite number of logical nodes you could put in your causal graphs that constrain the structure of your causal graphs (in a way that is of course causally explicable, but whose simplest explanation sans logical nodes may have suspiciously high K complexity in some cases), and that 'cause' ferns to grow fractally, say, and certainly 'cause' more interesting events. Arising or passing away events particularly 'caused' by these timeless properties of the universe aren't exactly teleological in that they're not necessarily determined by the future (or the past); but because those timeless properties create patterns that constrain possible futures it's like the structures in the imaginable futures that are timelessly privileged as possible/probable are themselves causing their fruition by their nature. So in my fuzzy thinking there's this conceptual soup of attractors, causality, teleology, timeless properties, and the like, and mish-mashing them is all too easy since they're different perspectives that highlight different aspects rather than compartmentalized belief structures. If I just stick to switching between timeful and timeless perspectives (without confusing myself with attractors) then I have my feet firmly on the ground.
Anyway, to (poorly) elaborate on what I was originally talking about: if might makes right (or to put it approximately, morality flows forward from the past, like naive causal validity semantics in all their arbitrariness), but right enables might (very roughly, morality flows backward from the future, logical truths constrain CDT-like optimizers and at some level of organization the TDT-like optimizers win out because cooperation just wins; an "I" that is big like a human has fewer competitors and greater rewards than an "I" that is small like a paramecium; (insert something about ant colonies?); (insert something about thinking of morality as a Pareto frontier itself moving over time (moving along a currently-hidden dimension in a way that can't be induced from seeing trends in past movements along then-hidden dimensions, say, though that's probably a terrible abstraction), and discount rates of cooperative versus non-cooperative deal-making agents up through the levels of organization and over time, with hyperbolic discounters willingly yielding to exponential discounters and so on over time)), then seeing only one and not the other is a kind of blindness.
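The aside about hyperbolic discounters yielding to exponential discounters rests on a standard result that's easy to make concrete: hyperbolic discounting produces preference reversals as rewards draw near, while exponential discounting is dynamically consistent. A minimal sketch (the reward sizes, delays, and parameters below are illustrative assumptions, not anything from the comment):

```python
# Illustrative sketch: why hyperbolic discounters are dynamically inconsistent
# while exponential discounters are not. All numbers here are made up for
# demonstration.

def hyperbolic(reward, delay, k=1.0):
    """Present value under hyperbolic discounting: reward / (1 + k*t)."""
    return reward / (1.0 + k * delay)

def exponential(reward, delay, d=0.7):
    """Present value under exponential discounting: reward * d**t."""
    return reward * d ** delay

def prefers_larger_later(discount, small=10, large=30, t_small=2, t_large=6):
    """Does the agent currently prefer the larger-later reward?"""
    return discount(large, t_large) > discount(small, t_small)

# Viewed from far away (both rewards still distant), the hyperbolic agent
# prefers the larger-later reward...
far = prefers_larger_later(hyperbolic, t_small=2, t_large=6)
# ...but once the small reward is imminent, its preference reverses:
near = prefers_larger_later(hyperbolic, t_small=0, t_large=4)
print(far, near)  # True, then False: a preference reversal

# The exponential agent's ranking is invariant to shifting both delays,
# since d**(t+s) / d**t = d**s for any shift s, so its plans never reverse.
exp_far = prefers_larger_later(exponential, t_small=2, t_large=6)
exp_near = prefers_larger_later(exponential, t_small=0, t_large=4)
print(exp_far == exp_near)  # True: same ranking at both times
```

The reversal is what makes a hyperbolic discounter exploitable in repeated deal-making, and why it can be in its own interest to precommit to (or defer to) an exponential schedule, which is one reading of "willingly yielding" above.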
Emphasizing "might makes right" may cause one to see the universe as full of (possibly accidental) adversaries, where diverging preferences burn up most of the cosmic commons and the winners (whom one won't identify with and whose preferences one won't transitively value) take whatever computronium is left. This sort of thinking is associated with reductionism, utilitarianism/economics, and disinterest (either passive or active) in morality qua morality or shouldness qua shouldness. I won't hastily list any counterarguments here for fear of giving them a bad name, but will instead opine that the counterarguments seem to me vastly underappreciated (when noticed at all) by what I perceive to be the prototypical median mind of Less Wrong.
Emphasizing "right enables might" may cause one to see the future as timelessly determined to be perfect, full of infinitely intricate patterns of interaction and with every sacrifice seen in hindsight as a discordant note necessary for the enabling of greatest beauty, the unseen object of sehnsucht revealed as the timeless ensemble itself. This sort of thinking is associated with "objective morality" in all its vagueness, "God" in all its/His vagueness, and in a milder form a certain conception of the acausal economy in all its vagueness. Typical objections include: What if this timeless perfection is determined to be the result of your timeful striving? Moral progress isn't inevitable. How does knowing that it will work out help you help it work out? What if, as is likely, this perfection is something that may be approached but never reached? Won't there always be an edge, a Pareto frontier, a front behind which remaining resources should be placed, a causal bottleneck, somewhere in time? Then why not renormalize, and see that war, that battle, that moment where things are still up in the air, as the only world? Are you yourself not almost entirely a conglomeration of imperfections that will get burned away in Pentecostal fire, as will most everything you now love? Et cetera. There are many similar and undoubtedly stronger arguments to gnaw at the various parts of an optimist's mind; the arguments I gave are interior to those the Less Wrong community has cached.
Right view cuts through the drama of either tinted lens to find Tao if doing so is De. It's a hypothesis.