
MrMind comments on Open Thread March 21 - March 27, 2016

Post author: Gunnar_Zarncke, 20 March 2016 07:54PM




Comment author: Vaniver, 23 March 2016 02:39:48PM, 9 points

I wish I had read the ending first; Hall is relying heavily on Deutsch to make his case. Deutsch has come up on LW before, most relevantly here. An earlier comment of mine still seems true: I think Deutsch is pointing in the right direction and diagnosing the correct problems, but underestimates the degree to which other people have diagnosed the same problems and are already working on solutions.

Hall's critique is in multiple parts, so I'm responding part by part. Horizontal lines mark the breaks, like so:

---
It starts off with reasoning by analogy, which is generally somewhat suspect. In this particular analogy, you have two camps:

  1. Builders, who build ever-higher towers, hoping that they will one day achieve flight (though they don't know how that will work theoretically).

  2. Theorists, who think the builders are missing something, maybe to do with air, and that the builders' worries about spontaneous liftoff don't make sense, because height doesn't have anything to do with flight.

But note that when it comes to AI, the dividing lines are different. Bostrom gets flak for not knowing the details of modern optimization and machine learning techniques (and I think that flak is well-targeted), but Bostrom is fundamentally concerned about theoretical issues. It's the builders--the Ngs of the world who focus on adding another layer to their tower--who think that things will just work out okay instead of putting effort into ensuring that things will work out okay.

That is, the x-risk argument is the combination of a few pieces of theory: the Orthogonality Thesis, that intelligence can be implemented in silicon (the universality of intelligence), and that there aren't hard limits on intelligence anywhere near the human level.

---
One paragraph, two paragraphs, three paragraphs... when are we going to get to the substance?

Okay, five paragraphs in, we get the claim that "Bayesian reasoning" is an error. Why? Supposedly he'll tell us later.

The last paragraph is good, as a statement of the universality of computation.

---
And the first paragraph is one of the core disagreements. Hall correctly diagnoses that we don't understand human thought at the level that we could program it. (Once we do, we have AGI, and we don't have AGI yet.) But Hall then seems to claim that, basically, unless we're already there we won't know when we'll get there. Which is true but immaterial; right now we can estimate when we'll get there, and that estimate determines how we should approach the problem.

And then the latter half of this section is just bad. There's some breakdown in communication between Bostrom and Hall; Bostrom's argument, as I understand it, is not that you get enough hardware and then the intelligence problem solves itself. (This is the "the network has become self-aware!" sci-fi model of AGI creation.) The argument is that there's some algorithmic breakthrough necessary to get to AGI, but that the more hardware you have, the smaller that breakthrough is.

(That is, suppose the root of intelligence was calculating matrix determinants. There are slow ways and fast ways to do that--if you have huge amounts of hardware, coming across Laplace expansion is enough, but if you have small amounts of hardware, you can squeak by only if you have fast matrix multiplication.)
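
To make that concrete, here's a toy Python sketch (mine, not Hall's or Bostrom's) of the two regimes: the easy-to-discover O(n!) Laplace expansion that only enormous compute can carry, versus the cleverer O(n^3) elimination that makes modest hardware sufficient:

```python
import numpy as np

def det_laplace(m):
    """Determinant by recursive Laplace (cofactor) expansion: O(n!).
    Easy to discover, but feasible only with enormous compute."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0.0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]  # drop row 0, col j
        total += (-1) ** j * m[0][j] * det_laplace(minor)
    return total

def det_lu(m):
    """Determinant via Gaussian elimination (LU): O(n^3).
    The cleverer algorithm that makes modest hardware sufficient."""
    a = np.array(m, dtype=float)
    n, sign, det = a.shape[0], 1.0, 1.0
    for k in range(n):
        p = k + int(np.argmax(np.abs(a[k:, k])))  # partial pivoting
        if p != k:
            a[[k, p]] = a[[p, k]]
            sign = -sign
        if a[k, k] == 0.0:
            return 0.0
        det *= a[k, k]
        a[k + 1:, k:] -= np.outer(a[k + 1:, k] / a[k, k], a[k, k:])
    return sign * det

m = [[2.0, 1.0, 3.0], [0.0, 4.0, 1.0], [5.0, 2.0, 6.0]]
print(det_laplace(m), det_lu(m))  # same answer, wildly different scaling
```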

One major point of contention between AI experts is, basically, how many software breakthroughs remain until AGI. It could be two; it could be twenty. If it's two, we expect it to happen fairly quickly; if it's twenty, fairly slowly. This uncertainty means we cannot rule out it happening quickly.
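
As a toy model of why (my construction, with an arbitrary 15-year mean wait per breakthrough): if each remaining breakthrough arrives after an independent exponential wait, then being genuinely unsure between "two left" and "twenty left" still puts substantial probability on short timelines:

```python
import random

def years_until_agi(breakthroughs_left, mean_wait=15.0):
    """Toy model: each remaining breakthrough arrives after an
    independent exponential wait (mean_wait years on average)."""
    return sum(random.expovariate(1.0 / mean_wait)
               for _ in range(breakthroughs_left))

# Suppose 'two left' and 'twenty left' seem equally plausible to us.
trials = 100_000
samples = [years_until_agi(random.choice((2, 20))) for _ in range(trials)]
print("P(AGI within 20 years) =", sum(y <= 20.0 for y in samples) / trials)
```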

The claim that programs do not engage in creativity and criticism is simply wrong. This is the heart and soul of numerical optimization, and of metaheuristics in particular. Programs are creative and critical beyond the abilities of humans in the narrow domains we've managed to communicate to them, and the fundamental math of creativity and criticism already exists: sampling from a solution space, especially in ways that build on solutions already considered, and objective functions that evaluate those samples. The question is how easily we will be able to scale from well-defined problems (like routing trucks or playing Go) to poorly-defined problems (like planning marketing campaigns or international diplomacy).
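
As a minimal illustration of what I mean (my toy example, not anything from Hall): simulated annealing "creatively" proposes variations on solutions already considered, and "critically" filters them through an objective function:

```python
import math
import random

def optimize(objective, x0, iters=20000, temp0=1.0):
    """Toy simulated annealing: 'creativity' proposes variations on
    solutions we've already considered; 'criticism' (the objective)
    decides which survive."""
    x, fx = x0, objective(x0)
    best, fbest = x, fx
    for t in range(1, iters + 1):
        temp = temp0 / t
        # Creativity: propose a new candidate near the current one.
        cand = x + random.gauss(0.0, 0.5)
        fcand = objective(cand)
        # Criticism: keep improvements; occasionally accept setbacks
        # so the search can escape local optima.
        if fcand < fx or random.random() < math.exp((fx - fcand) / temp):
            x, fx = cand, fcand
        if fx < fbest:
            best, fbest = x, fx
    return best, fbest

# A bumpy objective with many local minima.
f = lambda x: x * x + 3.0 * math.sin(5.0 * x) + 3.0
print(optimize(f, x0=10.0))
```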

---
Part 4 is little beyond "I disagree with the Orthogonality Thesis." That is, it treats value disagreements as irrationality. Bonus points for declaring Bayesian reasoning false for little reason I can see besides "Deutsch disagrees with it" (a disagreement I suspect stems from Deutsch's low familiarity with the math of causal models, which supply exactly what he correctly senses is missing from EDT-ish Bayesian reasoning).
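
For the curious, here's the standard confounding point in miniature (my own toy numbers): conditioning on an action and intervening on it give different answers when a hidden cause drives both the action and the outcome, which is exactly where EDT-ish conditioning goes wrong:

```python
# Toy confounded decision problem: hidden cause U influences
# both the action A and the outcome Y. All numbers are made up.
P_U = {0: 0.5, 1: 0.5}
P_A1_given_U = {0: 0.1, 1: 0.9}             # P(A=1 | U)
P_Y1_given_AU = {(0, 0): 0.5, (1, 0): 0.6,  # A helps a little...
                 (0, 1): 0.1, (1, 1): 0.2}  # ...but U=1 hurts a lot.

# EDT-style conditioning: P(Y=1 | A=1). Observing A=1 is evidence
# about U, so the confounder leaks into the answer.
num = sum(P_U[u] * P_A1_given_U[u] * P_Y1_given_AU[(1, u)] for u in P_U)
den = sum(P_U[u] * P_A1_given_U[u] for u in P_U)
print("P(Y=1 | A=1)     =", round(num / den, 3))  # 0.24

# Causal intervention: P(Y=1 | do(A=1)) cuts the U -> A arrow,
# so we average over the *prior* on U instead.
do = sum(P_U[u] * P_Y1_given_AU[(1, u)] for u in P_U)
print("P(Y=1 | do(A=1)) =", round(do, 3))         # 0.4
```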

---
Not seeing anything worth commenting on in part 5.

---
Part 6 includes a misunderstanding of Arrow's Theorem. (Arrow's Theorem is a no-go theorem, but it doesn't rule out the thing Hall thinks it rules out: it applies only to deterministic rules that aggregate ordinal rankings. If the AI is allowed to, say, flip a coin when it's indifferent, Arrow's Theorem no longer applies.)
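
To spell that out, here's a small sketch (mine) of the Condorcet cycle that powers Arrow-style impossibility, plus a trivially well-defined randomized rule that falls outside the theorem's deterministic scope:

```python
import random

# The classic Condorcet cycle: three voters, three options.
ballots = [("A", "B", "C"),   # voter 1: A > B > C
           ("B", "C", "A"),   # voter 2: B > C > A
           ("C", "A", "B")]   # voter 3: C > A > B

def majority_prefers(x, y):
    """True if a strict majority ranks x above y."""
    return sum(b.index(x) < b.index(y) for b in ballots) * 2 > len(ballots)

for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(f"majority prefers {x} over {y}:", majority_prefers(x, y))
# All three are True: majority preference is cyclic, so no deterministic
# ranking rule can satisfy all of Arrow's conditions at once.

def random_dictator():
    """A *randomized* rule, outside Arrow's deterministic framework:
    pick a ballot uniformly at random and take its top choice."""
    return random.choice(ballots)[0]

print("randomized choice:", random_dictator())
```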

Comment author: MrMind, 24 March 2016 08:35:06AM, 0 points

Thank you for saving us the time of sifting through the garbage... Why the hell must any discussion of AGI start with a bad flight analogy?