Isaac Asimov once described a future in which all technical thought was automated, and the role of humans was reduced to finding appropriate questions to pose to thinking machines. I wouldn't suggest planning for this eventuality, but it struck me as an interesting situation. What would we do if we could get the answer to any question we could formulate precisely? (In the story, questions didn't need to be formulated precisely, but never mind.) For concreteness, suppose we have a box that, every time we ask it a question, brings to bear the equivalent of a million Einsteins cooperating effectively for a century, but which is capable only of solving precisely specified problems.
You can't say "analyze the result of this experiment." You can say "find me the setting for these 10 parameters which best explains this data," or "write me a short program which predicts this data." You can't say "find me a program that plays Go well." You can say "find me a program that beats this particular Go AI, even with a 9-stone handicap." And so on. More formally, let's say you can specify any scoring program and ask the box to find an input that scores as well as possible.
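To make that interface concrete, here is a minimal sketch of what "specify a scoring program" might mean. Everything here is a hypothetical stand-in: the `box` function, the candidate generator, and the toy curve-fitting example are illustrations of the interface, not the box itself, which would presumably search far more cleverly than random sampling.

```python
import random

def box(score, candidates, budget=100_000):
    """Stand-in for the box: given a precisely specified scoring program,
    return the best-scoring input it can find. (The real box would search
    vastly better than this random sampling; only the interface matters.)"""
    best_input, best_score = None, float("-inf")
    for _ in range(budget):
        x = next(candidates)
        s = score(x)
        if s > best_score:
            best_input, best_score = x, s
    return best_input, best_score


# Hypothetical example: "find the setting of these 10 parameters which best
# explains this data," phrased as a scoring program over parameter vectors.
def random_params():
    while True:
        yield [random.uniform(-1, 1) for _ in range(10)]

toy_data = [(0.1, 0.05), (0.5, 0.26), (1.0, 0.49)]

def fit_score(params):
    # Negative squared error of a toy linear model y ~ params[0] * x;
    # higher is better, so the box is asked to maximize it.
    return -sum((y - params[0] * x) ** 2 for x, y in toy_data)

best_params, best = box(fit_score, random_params())
```

The point of the sketch is the shape of the contract: anything you can express as a scoring program is fair game, and anything you can't is off the table.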
What would you do if you got exactly one question? I don't think humanity is poised to get any earth-shattering insights. I don't think we could find a theory of everything, or a friendly AI, or any sort of AI at all, or a solution to any real problem facing us, using just one question. But maybe that is just a failure of my creativity.
What would you plan to do if you had unlimited access? An AGI or brain emulation arguably converts our vague real-world objectives into a precise form implicitly. Are there other ways to bridge the gap between what humans can formally describe and what humans want? Can you bootstrap your way there starting from our current understanding? What would a reasonable first step be?
Modeling physical systems is already hard. I don't think we can yet write down the dynamics of the relevant physical systems well enough (or rather, we don't yet understand which characteristics matter most) to come up with a precise formulation of the major problems in synthetic biology or nanotechnology. I certainly concede that such an optimizer would be helpful in solving many subproblems, and would considerably increase the speed of new developments in pretty much every field. I don't think it solves many problems on its own, though.
But even if you could only solve narrow existing technological problems, or develop new technologies at a steady pace, it seems like you should be able to do more than that. Suppose the box can do in a minute what would take existing humans a million years. Then the only upper bound on our capabilities using the box is whatever we expect from a million years of progress at the current pace. I don't know about you, but I expect pretty much everything.