This post enumerates texts that I consider (potentially) useful training for making progress on Friendly AI/decision theory/metaethics.
Rationality and Friendly AI
Eliezer Yudkowsky's sequences and this blog provide a solid introduction to the problem statement of Friendly AI: they supply concepts useful for understanding the motivation behind the problem, and disarm the endless failure modes that people often fall into when first trying to think about it.
For a shorter introduction, see
Decision theory
The following book introduces an approach to decision theory that seems closer to what's needed for FAI than the traditional treatments in philosophy or game theory:
Another (more technical) treatment of decision theory from the same cluster of ideas:
The following posts on Less Wrong present ideas relevant to this development of decision theory:
Mathematics
The most relevant tool for thinking about FAI seems to be mathematics, which teaches one to work with precise ideas (mathematical logic in particular). For someone starting from a rusty technical background, the following reading list is one way to begin:
[Edit Nov 2011: I no longer endorse scope/emphasis, gaps between entries, and some specific entries on this list.]