One career path I'm sort of musing about is working to create military robots. After all, the goals in designing a military robot are similar to those in designing Friendly AI: the robot must know somehow who it's okay to harm and what "harm" is.
Does this seem like a good sort of career path for someone interested in Friendly AI?
Cognitive neuroscience and cognitive psychology are far more relevant. A Friendly AI is a moral agent; it's more like a judge than a cruise missile. A killer robot must inflict harm appropriately, but it does not need to know what "harm" is; that judgment is left to politicians, generals, and other strategists.
We have to extract the part of the human cognitive algorithm which, on reflection, encodes the essence of rational and moral judgment and action. That's the sort of achievement which FAI will require.