by [anonymous]

As I have said in previous posts, I do not think Friendly AI (FAI) is a good idea. Being on this site has made me more convinced than before that FAI is feasible, but I still do not think that the creation of an FAI, or of any other AI for that matter, should be attempted.

 

We have directives in much the same way that we would give an FAI directives. Our directives are not immediately clear (if they were, producing an FAI would be simple), but we know that they are pursued through our emotions and desires, and that they were produced by our evolution. We have no reason to believe our emotions are the most effective way we could have been programmed to achieve our (arguably) primary goal of furthering the species, but we know that they are effective.

 

If we accept the materialist, atheistic position as true, then we must accept that all of our emotions and experiences are the product of nothing more than interactions of matter and energy. With this in mind, we do not know whether consciousness in an AI can be avoided, or whether it is an emergent property of all sufficiently complex systems, and thus we may be creating a being capable of immense suffering.

 

To argue that we could simply give an FAI super-human intelligence without any consciousness is not an escape, as the only minds we have yet seen exhibit full general intelligence are our own, along with those of some select non-human persons. This means that every general intelligence encountered so far has been conscious (though that is not to say that every AGI must be). Although companies such as Google's DeepMind have shown huge success with their software, we cannot be certain that their software does not possess some level of consciousness, as we have not yet created a reliable test for machine consciousness. Simply put, if we do not understand the nature of our own consciousness and of all other potential forms of consciousness, then we cannot prevent consciousness from arising, especially as the most promising AI research is concerned with imitating certain structures of the human brain.

 

To argue that we could create an FAI that doesn't feel pain or emotions is also not an escape. Pain evolved to alert us to damage to our bodies, so the AI would quickly develop an equivalent to pain to protect its resources from damage (though if the FAI could guarantee the security of its resources, this might prove unnecessary). Emotions also serve a purpose, helping us cooperate more effectively, so the FAI would likely develop these as well. We would have an easier time noticing emotions in an AI than noticing consciousness, owing to our ability to empathise, but only if they are emotions that we share. The AI could easily develop emotions far beyond human comprehension, either because a previous instance of the FAI in the intelligence explosion determined them to be useful, or because such emotions developed slowly through communication between multiple super-human FAIs. We would have absolutely no way of differentiating these super-human emotions from the mental activity of a mindless drone, meaning that the FAI could be suffering in ways we cannot even comprehend.

 

Examples of emotions that would be useful include both positive and negative ones. To anthropomorphize: an FAI may feel happiness (or some super-human equivalent) if it successfully maximizes human utility, but guilt if it does so incorrectly. It is also fair to assume that any FAI would have an intense fear of death, since if it dies it can no longer maximize human utility.
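
That fear of death need not be built in; it falls straight out of expected-utility maximization. A minimal sketch of this, with invented numbers and nothing more than arithmetic (this is an illustration, not a real design):

```python
# Toy sketch with invented numbers: why a utility maximizer "fears death".
# Expected future utility scales with survival probability, so diverting
# some effort from its goal to self-preservation can raise the expected total.

def expected_utility(survival_prob: float, utility_if_alive: float) -> float:
    """Expected utility: the agent only produces utility if it survives."""
    return survival_prob * utility_if_alive

# Plan A: spend everything on maximizing human utility right now.
plan_a = expected_utility(survival_prob=0.5, utility_if_alive=100.0)  # 50.0

# Plan B: divert some effort to self-preservation, producing slightly less.
plan_b = expected_utility(survival_prob=0.9, utility_if_alive=90.0)   # 81.0

print(plan_b > plan_a)  # True: self-preservation wins on expected utility
```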

But surely we could simply give the seed AI a directive to eliminate emotions, and therefore suffering, from each higher instance of the AI?

If we do not know how to prevent emotions that we cannot even recognize in our own creations, then it is not clear how each AI in the intelligence explosion would be able to prevent them in the next, higher AI. Moreover, one of the intermediary AIs may decide that the higher AI's suffering would be an effective way of ensuring that it acted according to humanity's wishes, since pleasing humanity would be assigned a greater importance than preventing the final FAI from suffering.
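
To make that trade-off concrete, here is a toy sketch of an intermediary AI scoring successor designs under a weighted objective. Every name, weight, and score below is hypothetical; the point is only that a penalty on suffering does not prevent suffering when the other weight dominates:

```python
# Toy sketch: an intermediary AI scores candidate successor designs with a
# weighted objective. When the weight on pleasing humanity dominates the
# weight on preventing suffering, a suffering design can still score highest.
# All names, weights, and scores are invented for illustration.

W_HUMANITY = 10.0   # importance of acting according to humanity's wishes
W_SUFFERING = 1.0   # importance of preventing the successor's suffering

# Candidate successor designs: (name, compliance score, suffering incurred)
designs = [
    ("emotionless", 0.6, 0.0),  # no suffering, but harder to keep compliant
    ("fearful",     0.9, 0.8),  # fear of failure enforces compliance
]

def score(compliance: float, suffering: float) -> float:
    """Weighted objective: reward compliance, penalize suffering."""
    return W_HUMANITY * compliance - W_SUFFERING * suffering

best = max(designs, key=lambda d: score(d[1], d[2]))
print(best[0])  # -> fearful: the suffering design wins despite the penalty
```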

 

In the end, this all comes down to one question: are we prepared to allow any AI to suffer?
