'Museum' and 'library' both imply, to me at least, that the data is being made available to people who might be interested in it. In the case of a paperclipper, that seems rather unlikely - why would it keep us around, instead of turning the planet into an uninhabitable supercomputer that can more quickly consider complex paperclip-maximization strategies? The information about what we were like might still exist, but probably in the form of the paperclipper's 'personal memory' - and more likely than not, it'd be tagged as 'exploitable weaknesses of squishy things' rather than 'good patterns to reproduce', which isn't very useful to us, to say the least.
I see. We have different connotations of the word, then. For me, a museum is just a place where objects of historical interest are stored.
When I talked about humans being "preserved mostly in history books and museums" - I was intending to conjure up an institution somewhat like the Jurassic Park theme park. Or perhaps - looking further out - something like The Matrix. Not quite like the Museum of Natural History as it is today - but more like what it will turn into.
Regarding the utility of existence in a museum - it may be quite a bit better...
A friend of mine is about to launch himself heavily into the realm of AI programming. The details of his approach aren't important; the odds are that he is unlikely to score a major success. He has, however, asked me for advice on how to design a safe(r) AI. I've been pointing him in the right directions and sending him links to useful posts on this blog and at the SIAI.
Do people here have any recommendations they'd like me to pass on? Hopefully, these may form the basis of a condensed 'warning pack' for other AI makers.
Addendum: Advice along the lines of "don't do it" is vital and good, but unlikely to be followed. Coding will almost certainly happen; is there any way of making it less genocidally risky?