If we knew how to build a machine that chooses its outputs so as to maximize some property of the surrounding universe, such a machine would be very dangerous, because maximizing almost any easily defined property leads to a worthless universe (without humans, or with humans living pointless lives, etc.). I believe the preceding statement is uncontroversial, and most arguments around the necessity of Friendly AI are really about how likely we are to build such a machine, whether something else will happen first, and so on.
Instead of adding to the existing arguments, I want to reframe the question thus: what course of action would you recommend to a small group of smart people, assuming for the moment that the danger is real? In other words, what should SingInst do on an alternate Earth where normal human science will eventually build unfriendly AI? In particular:
- How do you craft your message to the public?
- What's your hiring policy?
- Do you keep your research secret?
- Do you pursue alternate avenues like uploads, or focus only on FAI?
For the sake of inconvenience, assume that many (though not all) of the insights required for developing FAI can also be easily repurposed to hasten the arrival of UFAI.
Thanks to Wei Dai for the conversation that sparked this post.
We have an existence proof of intelligences based on the type of systems humans are; we have no such proof for pure maximizers. It is no good developing friendliness theory for a pure, easily-reasoned-about system if you can't actually build an intelligence out of it.
So, even though it is harder, human-like systems may be the sort of system we have to deal with. These are the sorts of questions I wanted to try to answer with the group in my original post.
I'll try to explain why I am sceptical of maximizer-based intelligences in a discussion post; it is not because they are inhuman.