If changing human nature or building AGI is impossible, we could still explore how close we can get. Research the most efficient forms of education and group cooperation. Research the most powerful forms of non-general artificial intelligence, design better expert systems, etc. These things could still be meta enough to influence many other aspects of human life.
Instead of nanotech, we could improve other forms of automated technology. Even without the ability to manipulate atoms, building things automatically from very small (but larger-than-nano) pieces could be awesome. Instead of bio-immortality, we could still invent better medicine, build artificial limbs, and extend life.
I did not want to ruin your thought experiment, just to say that in the given areas, less than perfect could still be great. Now let's assume that we have already done what we could there, and that there is no artificial intelligence smart enough to give us advice about the strategy humankind should choose. What is next?
It seems to me that this, "Research the most efficient forms of education and group cooperation. Research the most powerful forms of non-general artificial intelligence, design better expert systems, etc. These things could still be meta enough to influence many other aspects of human life," is mostly what Leverage Research is about.
Assume, for the time being, that it will forever remain beyond the scope of science to change human nature. AGI is also impossible, as are nanotech, bio-immortality, and the like.
Douglas Adams' mice have finished their human experiment, giving you, personally, the job of redesigning Earth, and especially human society, according to your wildest utopian dreams; but you can't change the unchangeables above.
You can play with architecture, engineering, gender ratios, clothing, money, science grants, governments, feeding rituals, family structure, the constitution itself, education, etc. Just don't forget that if you slide something too far from what our evolved brains were designed to accept, things may slide back, or instability and catastrophe may ensue.
Finally, if you are not the kind of utilitarian who assigns exactly the same importance to your own desires as to those of others, I want you to create this utopia for yourself and your values, not for everyone.
The point of this exercise: the vast majority of people I know who are not connected to this community, when asked about an ideal world, will not change human nature, or end animal suffering, or anything like that; they'll think about changing whatever the newspaper editors have been writing about for the last few weeks. I wonder whether there is symmetry here, and whether folks from this community, likewise, do not spend much time thinking about the kinds of change that don't rely on transformative technologies. It is just an intuition pump, a Gedankenexperiment if you will. Force your brain to face this counterfactual reality, and make the best world you can given those constraints. Maybe, if sufficiently many people post here, the results might clarify something about CEV, or the sociology of LessWrongers...