It's not just a matter of pace; this perspective also implies a certain prioritization of the questions.
For example, as you say, it's important to conclude soon whether animal welfare is important. (1) (2) But if we preserve the genetic information that creates new animals, we preserve the ability to optimize animal welfare in the future, should we at that time conclude that it is important. (2) If we don't, then later concluding it's important doesn't get us much.
It seems to follow that preserving that information (either in the form of a breeding population, or some other form) is a higher priority, on this view, than proving that animal welfare is important. That is, for the next century, genetics research might be more relevant to maximizing long-term animal welfare than ethical philosophy research.
Of course, killing off animals is only one way to (hypothetically) irreversibly fail to optimize the future. Building an optimizing system that is incapable of correcting its initially mistaken terminal values -- either because it isn't designed to alter its programming, or because it has already converted all the mass-energy in the universe into waste heat, or whatever -- is another. There are many more.
In other words, there are two classes of questions: the ones where a wrong answer is irreversible, and the ones where it isn't. Philosophical work to determine which is which, and to get a non-wrong answer to the former ones, seems like the highest priority on this view.
===
(1) Not least because humans are already having an impact on it, but that's beside your point.
(2) By "conclude that it's important" I don't mean adopting a new value, I mean become aware of an implication of our existing values. I don't reject adopting new values, either, but I'm explicitly not talking about that here.
Barring a major collapse of human civilization (due to nuclear war, asteroid impact, etc.), many experts expect an intelligence explosion — the Singularity — to occur within 50-200 years.
If they are right, then many philosophical problems, about which philosophers have argued for millennia, are suddenly very urgent.
Those concerned with the fate of the galaxy must say to the philosophers: "Too slow! Stop screwing around with transcendental ethics and qualitative epistemologies! Start thinking with the precision of an AI researcher and solve these problems!"
If a near-future AI will determine the fate of the galaxy, we need to figure out what values we ought to give it. Should it ensure animal welfare? Is growing the human population a good thing?
But those are questions of applied ethics. More fundamental are the questions about which normative ethics to give the AI: How would the AI decide whether animal welfare or large human populations are good? What rulebook should it use to answer novel moral questions that arise in the future?
But even more fundamental are the questions of meta-ethics. What do moral terms mean? Do moral facts exist? What justifies one normative rulebook over another?
The answers to these meta-ethical questions will determine the answers to the questions of normative ethics, which, if we are successful in planning the intelligence explosion, will determine the fate of the galaxy.
Eliezer Yudkowsky has put forward one meta-ethical theory, which informs his plan for Friendly AI: Coherent Extrapolated Volition. But what if that meta-ethical theory is wrong? The galaxy is at stake.
Princeton philosopher Richard Chappell worries about how Eliezer's meta-ethical theory depends on rigid designation, which in this context may amount to something like a semantic "trick." Previously and independently, an Oxford philosopher expressed the same worry to me in private.
Eliezer's theory also employs something like the method of reflective equilibrium, about which there are many grave concerns from Eliezer's fellow naturalists, including Richard Brandt, Richard Hare, Robert Cummins, Stephen Stich, and others.
My point is not to beat up on Eliezer's meta-ethical views. I don't even know if they're wrong. Eliezer is wickedly smart. He is highly trained in the skills of overcoming biases and proportioning beliefs to the evidence. He thinks with the precision of an AI researcher. In my opinion, that gives him large advantages over most philosophers. When Eliezer states and defends a particular view, I take that as significant Bayesian evidence and update my beliefs accordingly.
Rather, my point is that we need lots of smart people working on these meta-ethical questions. We need to solve these problems, and quickly. The universe will not wait for the pace of traditional philosophy to catch up.