Manfred comments on Why AGI is extremely likely to come before FAI - Less Wrong

4 Post author: skeptical_lurker 01 August 2012 10:22AM



Comment author: Manfred 01 August 2012 11:32:27PM (1 point)

Yeah, I was playing pretty fast and loose there. Since the goal is an algorithm, standard deviation doesn't make much sense, and there isn't necessarily even a convergent solution (you might always be able to make people happier by adding another special case). But the properties of the output should still converge, or else something is wrong, so it probably still makes sense to talk about a rate of convergence there.

That convergence is probably pretty much instant for the human universals, and human variation can then be treated as perturbations to some big, complicated human-universal model. And I have no idea what kind of convergence rate for output properties that actually leads to :P
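The "parameters diverge, output properties converge" idea above can be sketched with a toy model. Everything here is a hypothetical illustration, not anything from the original discussion: assume each person's value is a shared universal component plus a small individual perturbation, and the model grows one special-case parameter per person. The parameter count never stops growing, but an output property (the population-average prediction) still converges.

```python
import random

random.seed(0)

# Hypothetical toy setup (both constants are assumptions for illustration):
# each person's value = a shared universal component + an individual
# perturbation drawn from a small Gaussian.
UNIVERSAL = 1.0           # assumed shared human-universal component
PERTURBATION_SCALE = 0.1  # assumed size of individual variation

def observe_person():
    """Sample one person's value: universal component plus perturbation."""
    return UNIVERSAL + random.gauss(0.0, PERTURBATION_SCALE)

special_cases = []  # one parameter per person; grows without bound
avg_history = []    # output property: running average prediction

for n in range(1, 2001):
    special_cases.append(observe_person())
    avg_history.append(sum(special_cases) / n)

# The model itself never converges (2000 parameters and counting),
# but the output property settles near the universal component.
print(len(special_cases))                        # 2000 parameters
print(abs(avg_history[-1] - UNIVERSAL) < 0.05)   # True: output converged
```

The output property here converges at roughly the usual 1/sqrt(n) statistical rate; that choice of averaging as the "output property" is itself an assumption made to keep the sketch simple.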