jacob_cannell comments on Muehlhauser-Wang Dialogue - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I think this single statement summarizes the huge rift between the narrow specific LW/EY view of AGI and other more mainstream views.
For researchers who are trying to emulate or simulate brain algorithms directly, it's self-evidently obvious that the resulting AGI will start out like a human child. If they succeed first, your 'antiprediction' is trivially false. Then there are researchers like Wang or Goertzel who are pursuing AGI approaches that are not brain-like at all, yet who still believe the AGI will learn like a human child and specifically use that analogy.
You can label anything an "antiprediction" and thereby convince yourself that you need arbitrary positive evidence to disprove your counterfactual, but in doing so you are really just rationalizing your priors and existing beliefs.