MichaelVassar comments on A Less Wrong singularity article? - Less Wrong

28 Post author: Kaj_Sotala 17 November 2009 02:15PM


Comment author: wedrifid 18 November 2009 11:19:44PM *  1 point

An intelligent machine might make one of its first acts the assassination of other machine intelligence researchers - unless it is explicitly told not to do that. I figure we are going to want machines that will obey the law. That should be part of any sensible machine morality proposal.

I absolutely do not want my FAI to be constrained by the law. If the FAI allows machine intelligence researchers to create an uFAI, we will all die. An AI that values the law above the existence of me and my species is evil, not Friendly. I wouldn't want the FAI to kill such researchers unless it was unable to find a more appealing way to ensure future safety, but I wouldn't dream of constraining it to either laws or politics. Come to think of it, I don't want it to be "sensible" either.

The Three Laws of Robotics may be a naive conception, but that Zeroth Law was a step in the right direction.

Comment author: MichaelVassar 19 November 2009 04:34:58PM 0 points

It is a misconception to think of the law as a set of rules, and even more so to think of it as a set of rules that applies to non-humans today. In addition, rules won't be very effective constraints on superintelligences.