LawrenceC comments on Superintelligence 8: Cognitive superpowers - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
The space of possible goals is HUGE compared to the relatively small subset of goals humans actually hold. Humans share the same brain structure and roughly the same goal structure, but there's no reason to expect the first AI to share our neural or goal structure. Innocuous-sounding goals like "Prevent Suffering" and "Maximize Happiness" may not be interpreted and executed the way we wish them to be.
Indeed, gaining superpowers probably would not compromise the AI's moral code; it would only give the AI the ability to fully execute the actions that code dictates. Unfortunately, there's no guarantee that its morals will fall in line with ours.
There is no guarantee, therefore we have a lot of work to do!
Here is another candidate for an ethical precept, from the profession of medicine:
"First do no harm."
The doctor is instructed to begin with this heuristic, even though there are many, many exceptions to it.