jacob_cannell comments on Muehlhauser-Wang Dialogue - Less Wrong

24 Post author: lukeprog 22 April 2012 10:40PM


Comment author: Grognor 23 April 2012 03:49:33PM *  7 points

Think about how ridiculous your comment must sound to them.

I have no reason to think that other people's use of the absurdity heuristic should cause me to reevaluate every argument I've ever seen.

That a de novo AGI will be nothing like a human child in terms of how to make it safe is an antiprediction in that it would take a tremendous amount of evidence to suggest otherwise, and yet Wang just assumes this without having any evidence at all. I can only conclude that the surface analogy is the entire content of the claim.

That you just assume that they must be stupid

If he were just stupid, I'd have no right to be indignant at his basic mistake. He is clearly an intelligent person.

They have probably thought about everything you know long before you and dismissed it.

You are not making any sense. Think about how ridiculous your comment must sound to me.

(I'm starting to hate that you've become a fixture here.)

Comment author: jacob_cannell 16 June 2012 07:21:29PM *  2 points

That a de novo AGI will be nothing like a human child in terms of how to make it safe is an antiprediction in that it would take a tremendous amount of evidence to suggest otherwise, and yet Wang just assumes this without having any evidence at all.

I think this single statement summarizes the huge rift between the narrow specific LW/EY view of AGI and other more mainstream views.

For researchers who are trying to emulate or simulate brain algorithms directly, it's self-evidently obvious that the resulting AGI will start out like a human child. If they succeed first, your 'antiprediction' is trivially false. And then we have researchers like Wang or Goertzel who are pursuing AGI approaches that are not brain-like at all and yet still believe the AGI will learn like a human child, and who specifically use that analogy.

You can label anything an "antiprediction" and thus convince yourself that you need arbitrary positive evidence to disprove your counterfactual, but in doing so you are really just rationalizing your priors and existing beliefs.