DefectiveAlgorithm comments on Advice for AI makers - Less Wrong
For solving the Friendly AI problem, I suggest the following constraints for your initial hardware system:
1. All outside input (and input libraries) is explicitly user-selected.
2. No means for the system to take physical action (e.g., no robotic arms).
3. No means for the system to establish unexpected communication (e.g., no radio transmitters).
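The three constraints above amount to gating every channel into and out of the system. As a rough sketch (all names here, like ClosedSystem and allowed_inputs, are illustrative, not anything proposed in the comment), each constraint becomes a check the surrounding harness enforces:

```python
class ClosedSystem:
    """Hypothetical wrapper enforcing the three proposed constraints."""

    def __init__(self, allowed_inputs):
        # Constraint 1: only explicitly user-selected input sources.
        self.allowed_inputs = set(allowed_inputs)

    def feed(self, source, data):
        if source not in self.allowed_inputs:
            raise PermissionError(f"input source {source!r} not user-selected")
        return data  # here the data would be handed to the AI

    def actuate(self, *_args):
        # Constraint 2: no means of physical action.
        raise PermissionError("no physical actuators attached")

    def transmit(self, *_args):
        # Constraint 3: no unexpected communication channels.
        raise PermissionError("no outbound communication permitted")
```

For example, a system built with `ClosedSystem(["user_corpus"])` would accept data labeled `user_corpus` and refuse anything else, while any attempt to actuate or transmit raises an error.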
Once this closed system has reached a suitable level of AI, the problem of making it friendly can be worked on much more easily and practically, and without risk of the world ending.
Setting out from the beginning to make a GAI friendly through some other means seems rather ambitious to me. Why not just work on AI now, make sure the AI is suitably restricted as you get close to the goal, and then finally use the AI itself as an experimental testbed for "personality certification"?
(Can someone explain/link me to why this isn't currently espoused?)
Didn't David Chalmers propose that here:
http://www.vimeo.com/7320820
...?
Test harnesses are a standard procedure - but they are not the only kind of test.
Basically, unless you are playing chess or something similar, if you don't test in the real world you won't really know whether it works - and it can't do much to help you with important things, like raising funds to fuel development.
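The chess point is that a fully simulated task can be scored entirely inside a harness. A minimal illustration (the names `run_harness`, `always_seven`, and the toy guessing task are all invented for the example):

```python
def run_harness(agent, trials=100, target=7):
    """Score an agent on a closed, fully simulated task.

    Nothing the agent does here touches the real world - the harness
    only observes its outputs and tallies a success rate.
    """
    score = 0
    for _ in range(trials):
        if agent() == target:
            score += 1
    return score / trials


# A trivial "agent" for demonstration: always guesses 7.
def always_seven():
    return 7
```

The catch, as the comment notes, is that a perfect score inside such a harness says nothing about performance on tasks the simulation doesn't capture.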