AnnaSalamon comments on The Urgent Meta-Ethics of Friendly Artificial Intelligence - Less Wrong

Post author: lukeprog 01 February 2011 02:15PM


Comments (249)


Comment author: AnnaSalamon 01 February 2011 03:31:27PM 11 points

I'm curious if the SIAI shares that opinion.

I do. More people doing detailed moral psychology research (such as Jonathan Haidt's work), or moral philosophy with the aim of understanding what procedure we would actually want followed, would be amazing.

Research into how to build a powerful AI is probably best not done in public, because it makes it easier to make unsafe AI. But there's no reason not to engage as many good researchers as possible on moral psychology and meta-ethics.

Comment author: XiXiDu 01 February 2011 06:24:58PM 5 points

Research into how to build a powerful AI is probably best not done in public...

Is the SIAI concerned with the data security of its research? Is the latest research saved unencrypted on EY's laptop and shared among all SIAI members? Could a visiting fellow simply walk into the SIAI house, plug in a USB stick, and run off with the draft for a seed AI? These questions arise as soon as you draw a distinction between yourselves and the "public".

But there's no reason not to engage as many good researchers as possible on moral psychology and meta-ethics.

Can that research be detached from decision theory? Since you're working on solutions applicable to AGI, is it actually possible to separate the mathematical formalism of an AGI's utility function from the fields of moral psychology and meta-ethics? In other words, can you learn much by engaging with researchers if you don't share the math? That is why I asked whether the work can effectively be subdivided if you are concerned with security.

Comment author: jacob_cannell 02 February 2011 06:58:26AM 2 points

Research into how to build a powerful AI is probably best not done in public

I find this dubious. Has this belief been explored in public on this site?

If AI research is completely open and public, then more minds and more computational resources will be available to analyze safety. In addition, in the event that a design actually does work, it is far less likely to confer any significant first-mover advantage.

Making SIAI's research open and public also appears to be nearly mandatory for demonstrating progress and joining the larger scientific community.