gwern comments on Evaluating the feasibility of SI's plan - Less Wrong

Post author: JoshuaFox 10 January 2013 08:17AM




Comment author: JoshuaFox 11 January 2013 09:35:13AM 1 point

Yes, it is this layered approach that the OP is asking about -- I don't see that SI is trying it.

Comment author: gwern 11 January 2013 04:42:23PM 0 points

In what way would SI be 'trying it'? The point about multiple layers of security being a good idea for any seed AI project has been made at least as far back as Eliezer's CFAI and brought up periodically since with innovations like the suicide button and homomorphic encryption.
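To make the homomorphic-encryption layer concrete: the idea is that a host can compute over encrypted data without ever seeing the plaintexts, so an AI's state and outputs could in principle stay opaque to the system running it. A minimal sketch using the (additively homomorphic) Paillier cryptosystem with deliberately tiny parameters -- an illustration of the homomorphic property only, not a secure or realistic construction:

```python
import random
from math import gcd

# Toy Paillier cryptosystem. The primes here are far too small for any
# real security; this only demonstrates the additive homomorphism.
p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)  # lcm(p-1, q-1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # modular inverse, Python 3.8+

def encrypt(m):
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# The host multiplies ciphertexts; this adds the underlying plaintexts,
# yet the host never learns either operand.
c_sum = (encrypt(41) * encrypt(1)) % n2
assert decrypt(c_sum) == 42
```

Fully homomorphic schemes (Gentry-style), which would be needed to run arbitrary computation this way, are far costlier; that overhead is part of why the comment treats this as an exotic layer best left to specialist cryptographers.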

Comment author: JoshuaFox 12 January 2013 04:28:24PM 0 points

I agree: that sort of innovation can be researched as additional layers to supplement FAI theory.

Our question was: to what extent should SI invest in this sort of thing?

Comment author: gwern 16 January 2013 02:32:13AM 1 point

My own view is 'not much', unless SI were to launch an actual 'let's write AGI now' project, in which case they should invest as heavily as anyone else would who appreciated the danger.

Many of the layers are standard computer security topics, and the more exotic layers like homomorphic encryption are being handled by academia & industry adequately (and it would be very difficult for SI to find cryptographers who could advance the state of the art); hence, SI's 'comparative advantage', as it were, currently seems to be in the most exotic areas like decision theory & utility functions. So I would agree with the OP summary:

Perhaps the folks who are actually building their own heuristic AGIs are in a better position than SI to develop safety mechanisms for them, while SI is the only organization which is really working on a formal theory on Friendliness, and so should concentrate on that. It could be better to focus SI's resources on areas in which it has a relative advantage, or which have a greater expected impact.

Although I would amend 'heuristic AGIs' to be more general than that.

Comment author: JoshuaFox 16 January 2013 07:19:07AM 2 points

Many of the layers are standard computer security topics, and the more exotic layers like homomorphic encryption are being handled by academia & industry adequately

That's all the more reason to publish some articles on how to apply known computer security techniques to AGI. This is way easier (though far less valuable) than FAI, but not obvious enough to go unsaid.
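One example of the kind of known technique meant here is ordinary OS-level sandboxing: run untrusted code in a child process under hard kernel-enforced resource caps, so a runaway program is killed by the operating system rather than trusted to stop itself. A minimal Unix-only sketch (the `run_boxed` helper and its limits are illustrative choices, not any standard API):

```python
import resource
import subprocess
import sys

def run_boxed(code, cpu_seconds=2, mem_bytes=512 * 1024 * 1024):
    """Run a Python snippet in a child process with CPU and memory caps."""
    def limits():
        # Applied in the child just before exec: hard limits the child
        # cannot raise, enforced by the kernel.
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))
    proc = subprocess.run(
        [sys.executable, "-c", code],
        preexec_fn=limits,
        capture_output=True,
        text=True,
        timeout=cpu_seconds + 5,  # backstop wall-clock limit
    )
    return proc.returncode, proc.stdout

rc, out = run_boxed("print(2 + 2)")
assert rc == 0 and out.strip() == "4"
```

This is of course only one layer, and a weak one against a capable adversary -- which is exactly the point of the layered approach: cheap, well-understood mechanisms stacked beneath the harder guarantees.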

SI's 'comparative advantage'

Yes. But then again, don't forget the 80/20 rule. There may be some low-hanging fruit along other lines than FAI -- and for now, no one else is doing it.