billswift comments on Open Thread: September 2009 - Less Wrong

Post author: AllanCrossman 01 September 2009 10:54AM




Comment author: JamesAndrix 01 September 2009 06:00:43PM 3 points [-]

Such a design would be harder to reason about.

Let's say you've got a prototype you want to improve. How do you tell if a proposed change would make it smarter, break it, introduce a subtle cognitive bias, or make the AI want to kill you?

In order to set limits on the kinds of things an AI will do, you need to understand how it works. You can't be experimenting on a structure you only partially understand AND be certain that the experiments won't be fatal.

This is easier when you've got a clearly defined structure to the AI, and know how the parts interact, and why.

Comment author: billswift 01 September 2009 09:40:00PM 0 points [-]

In other words, they are searching where the light's better, rather than where they dropped the keys. Given the track record of correctness proofs in computer science, I don't think provably Friendly AI is even possible. I hope I'm wrong there, but all they are doing is further crippling their chances of achieving AI before some military or business does.