thomblake comments on Thinking soberly about the context and consequences of Friendly AI - Less Wrong Discussion

9 Post author: Mitchell_Porter 16 October 2012 04:33AM

Comment author: thomblake 17 October 2012 06:23:30PM 0 points

Yes, that pretty well captures it.

Comment author: Mitchell_Porter 17 October 2012 10:15:21PM 3 points

That is only a superficial difference, a difference in the scenario considered. If you put a bad actor from ordinary machine ethics into a possible world where it can torture someone forever, or if you put a UFAI into a possible world where the most harm it can do is blow you up once, this difference goes away.

Designing an "ethical computer program" or a "friendly AI" is not about which possible world the program inhabits, it's about the internal causality of the program and the choices it makes. The valuable parts of FAI research culture are all on this level. Associating FAI with the possible world of "post-singularity hell", as if that is the essence of what distinguishes the approach, is an example of what I want to combat in this post.

Comment author: thomblake 18 October 2012 02:01:25PM 1 point

> Designing an "ethical computer program" or a "friendly AI" is not about which possible world the program inhabits, it's about the internal causality of the program and the choices it makes.
The key difference is that in the case of a Seed AI, you need to find a way to make a goal system stable under recursive self-improvement. In the case of a toaster, you do not.

It's useful to keep Friendly AI concerns in mind when designing ethical robots, since they potentially become a risk when they start to get more autonomous. But when you're giving a robot a gun, the relevant ethical concerns are things like whether it will shoot civilians. The scope is relevantly different.

Really, there is a whole field out there of Machine Ethics, and it is pretty well established that it is doing a different sort of thing than what SIAI is doing. While some folks still conflate "Friendly AI" and "Machine Ethics", I think it's much better to maintain the distinction and consider FAI a subfield of Machine Ethics.