LukeStebbing comments on SIAI - An Examination - Less Wrong
If we take those probabilities as given, they strongly favor a strategy that increases the chance that the first seed AI is Friendly.
jsalvatier already had a suggestion along those lines:
A public Friendly design could draw funding, benefit from technical collaboration, and hopefully end up incorporated into whichever seed AI wins. Unfortunately, you'd have to decouple the F part from the AI part, which is impossible.
Isn't CEV an attempt to separate the F and AI parts?
It's half of the F. Between the CEV and the AGI is the 'goal stability under recursion' part.
It's a good first step.
I don't understand your impossibility comment, then.
I'm talking about publishing a technical design for Friendliness that's conserved under self-improving optimization, without also publishing (in math and code) exactly what is meant by self-improving optimization. CEV is a good first step, but a programmatically reusable solution it is not.
On doing the impossible:
OK, I understand that much better now. Great point.