Manfred comments on Why AGI is extremely likely to come before FAI - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Yeah, I was playing pretty fast and loose there. With the goal being an algorithm, standard deviation doesn't make much sense, and there isn't even necessarily a convergent solution, since you might always be able to make people happier by including another special case. But properties of the output should still converge, or else something is wrong, so it probably still makes sense to talk about a rate of convergence there.
Which is probably pretty instant for the human universals, and then human variation can be treated as perturbations to some big complicated human-universal model. And I have no idea what kind of convergence rate for output properties that actually leads to :P
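To make the distinction concrete, here's a minimal sketch (my own toy construction, not anything from the post) of how an output property can converge even when the model itself never does. The "model" is a hypothetical human-universal baseline plus an ever-growing list of per-person special cases; each new case changes the model, but an aggregate property of its output settles down anyway. The baseline value and perturbation scale are arbitrary assumptions for illustration.

```python
import random

random.seed(0)

# Toy setup: a human-universal baseline plus per-person perturbations.
# The model keeps changing (every person adds a special case), but an
# aggregate output property -- here, the running mean -- still converges.

BASELINE = 1.0  # stand-in for the human-universal part of the model

def special_case():
    # hypothetical small individual variation around the universal baseline
    return random.gauss(0.0, 0.3)

total = 0.0
running_mean = 0.0
for n in range(1, 5001):
    total += BASELINE + special_case()
    running_mean = total / n

# After many special cases, the output property sits near the baseline,
# at roughly the usual 1/sqrt(n) Monte Carlo rate.
print(abs(running_mean - BASELINE) < 0.05)
```

This is just the law-of-large-numbers picture of "human variation as perturbations to a human-universal model": the perturbations average out in the output even though the list of special cases never stops growing.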