Comment author: DeeElf 15 April 2014 09:12:35PM -1 points [-]

Relevant:

- Anything by David Hume
- Carl G. Hempel, Laws and Their Role in Scientific Explanation: http://www.scribd.com/doc/19536968/Carl-G-Hempel-Laws-and-Their-Role-in-Scientific-Explanation
- Studies in the Logic of Explanation: http://www.sfu.ca/~jillmc/Hempel%20and%20Oppenheim.pdf
- Causation as Folk Science: http://www.pitt.edu/~jdnorton/papers/003004.pdf
- Causation: The elusive grail of epidemiology: http://link.springer.com/article/10.1023%2FA%3A1009970730507
- Causality and the Interpretation of Epidemiologic Evidence: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1513293/
- Studies in the Philosophy of Biology: Reduction and Related Problems: http://books.google.com/books?id=NMAf65cDmAQC&pg=PA3#v=onepage&q&f=false

Comment author: Daermonn 23 April 2014 03:37:33AM 0 points [-]

I don't understand why you're getting downvoted. Those were great links, and indeed relevant. I appreciated them.

Comment author: Daermonn 23 February 2014 02:32:02AM 0 points [-]

I'm in the area and I would have loved to attend, but I just saw this posting. How many people showed up? Is another meetup being planned?

Comment author: Daermonn 11 March 2013 01:59:54AM 3 points [-]

This is something I've been thinking about a lot lately, myself. I totally struggle with akrasia and executive functioning, and I find I have more willpower if I do things socially. I've been using friends to go to the gym for the past few weeks. I've actually been rethinking my relationship with leadership because of it: I used to hate being the leader (preferring to just be left alone), but now I'm thinking that I need to lead in order to do the things I want to do.

Comment author: Yvain 05 July 2012 02:36:54AM 9 points [-]

This was my favorite post on your blog and I'm glad you posted it here.

Comment author: Daermonn 05 July 2012 06:10:16PM 2 points [-]

I agree. I stumbled across this one a week or so ago - without knowing the author was associated with LW - loved it, and have been thinking about it on and off since. I'm glad to see it again. I feel like I should probably start reading your blog regularly.

Comment author: Vaniver 12 June 2012 05:12:08AM 8 points [-]

Commentary (there will be a lot of "to me"s because I have been a bystander to this exchange so far):

I think this post misunderstands Holden's point, because it looks like it's still talking about agents. Tool AI, to me, is a decision support system: I tell Google Maps where I will start from and where I want to end up, and it generates a route using its algorithm. Similarly, I could tell Dr. Watson my medical data, and it would supply a diagnosis and a treatment plan that has a high score based on the utility function I provide.

In neither case are the skills of "looking at the equations and determining real-world consequences" all that necessary. There are no dark secrets lurking in the soul of A*. Indeed, that might be the heart of the issue: tool AI might be those situations where you can make a network that represents the world, identify two nodes, and call your optimization algorithm of choice to find the best path from the start node to the end node.
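To make concrete how little mystery there is in this kind of tool, here is a minimal A* route search over a toy road network. The graph, edge costs, and heuristic values are invented for illustration; this is a sketch of the standard algorithm, not any particular product's implementation:

```python
import heapq

def a_star(graph, heuristic, start, goal):
    """Find a lowest-cost path from start to goal.

    graph: dict mapping node -> list of (neighbor, edge_cost) pairs
    heuristic: dict mapping node -> admissible estimate of remaining cost
    Returns the path as a list of nodes, or None if goal is unreachable.
    """
    # Each frontier entry: (priority, cost_so_far, node, path_so_far)
    frontier = [(heuristic[start], 0, start, [start])]
    best_cost = {start: 0}
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for neighbor, edge_cost in graph[node]:
            new_cost = cost + edge_cost
            if new_cost < best_cost.get(neighbor, float("inf")):
                best_cost[neighbor] = new_cost
                priority = new_cost + heuristic[neighbor]
                heapq.heappush(frontier, (priority, new_cost, neighbor, path + [neighbor]))
    return None

# Toy network: nodes are intersections, edge costs are travel times.
graph = {
    "A": [("B", 1), ("C", 4)],
    "B": [("C", 2), ("D", 5)],
    "C": [("D", 1)],
    "D": [],
}
heuristic = {"A": 3, "B": 2, "C": 1, "D": 0}
print(a_star(graph, heuristic, "A", "D"))  # ['A', 'B', 'C', 'D']
```

The entire "intelligence" of the tool lives in the quality of the graph and the heuristic you feed it; the search itself is transparent.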

Reducing the world to a network is really hard. Determining preferences between outcomes is hard. But Tool AI looks to me like saying "well, the whole world is really too much. I'm just going to deal with planning routes, which is a simple world that I can understand," where the FAI tools aren't that relevant. The network might be out of line with reality, the optimization algorithm might be buggy or clumsy, but the horror stories that keep FAI researchers up at night seem impossible because of the inherently limited scope, and the ability to do dry runs and simulations until the AI's model of reality is trusted enough to give it control.

Now, this requires that AI only be used for things like planning where to put products on shelves, not planning corporate strategy. But if you work from the current stuff up rather than from the God algorithm down, it doesn't look like corporate strategy will be on the table until AI is developed to the point where it could be trusted with that. If someone gave me a black box that spit out plans based on English input, then I wouldn't trust it, and I imagine you wouldn't either - but I don't think that's what we're looking at, and I don't know if planning for that scenario is valuable.

It seems to me that SI has discussed Holden's Tool AI idea: when it made the distinction between AI and AGI. Holden seems to me to be asking, "Well, if AGI is such a tough problem, why even do it?"

Comment author: Daermonn 12 June 2012 07:49:35AM 3 points [-]

This really gets at the heart of what intuitively struck me wrong (read: "confused me") in Eliezer's reply. Both Eliezer and Holden engage with the example "Google Maps AGI"; I'm not sure what the difference is - if any - between "Google Maps AGI" and the sort of search/decision-support algorithms that Google Maps and other GPS systems currently use. The algorithm Holden describes and the neat A* algorithm Eliezer presents seem to do exactly what the GPS on my phone already does. If the Tool AI we're discussing is different from current GPS systems, then what is the difference? Near as I understand it, AGI is intelligent across different domains in the same way a human is, while Tool AI (= narrow AI?) is the sort of simple-domain search algorithm we see in GPS. Am I missing something here?

But if what Holden is talking about by Tool AI is just this sort of simple(r), non-reflective search algorithm, then I understand why he thinks this is significantly less risky; GPS-style Tool AI only gets me lost when it screws up, instead of killing the whole human species. Sure, this tool is imperfect: sometimes it doesn't match my utility function, and returns a route that leads me into traffic, or would take too long, or whatever; sometimes it doesn't correctly model what's actually going on, and thinks I'm on the wrong street. Even still, gradually building increasingly agentful Tool AIs - ones that take more of the optimization process away from the human user - seems like it would be much safer than just swinging for the fences right away.

So I think that Vaniver is right when he says that the heart of Holden's Tool AI point is "Well, if AGI is such a tough problem, why even do it?"

This being said, I still think that Eliezer's reply succeeds. I think his most important point is the one about specialization: both AGI and Tool AI demand domain expertise to evaluate arguments about safety, and the best way to cultivate that expertise is with an organization that specializes in cultivating FAI-grade programmers. The analogy with the sort of optimal-charity work Holden specializes in was particularly weighty.

I see Eliezer's response to Holden's challenge - "why do AGI at all?" - as: "Because you need FAI-grade skills to know if you need to do AGI or not." If AGI is an existential threat, and you need FAI-grade skills to know how to deal with that threat, then you need FAI-grade programmers.

(Though, I don't know if "The world needs FAI-grade programmers, even if we just want to do Tool AI right now" carries through to "Invest in SIAI as a charity," which is what Holden is ultimately interested in.)

Comment author: pkkm 02 June 2012 07:04:03AM *  16 points [-]

People who do great things look at the same world everyone else does, but notice some odd detail that's compellingly mysterious.

Paul Graham, What You'll Wish You'd Known

Comment author: Daermonn 04 June 2012 06:16:55AM *  6 points [-]

This speech was really something special. Thanks for posting it. My favorite sections:

"If it takes years to articulate great questions, what do you do now, at sixteen? Work toward finding one. Great questions don't appear suddenly. They gradually congeal in your head. And what makes them congeal is experience. So the way to find great questions is not to search for them-- not to wander about thinking, what great discovery shall I make? You can't answer that; if you could, you'd have made it.

The way to get a big idea to appear in your head is not to hunt for big ideas, but to put in a lot of time on work that interests you, and in the process keep your mind open enough that a big idea can take roost. Einstein, Ford, and Beckenbauer all used this recipe. They all knew their work like a piano player knows the keys. So when something seemed amiss to them, they had the confidence to notice it."

And:

"Rebellion is almost as stupid as obedience. In either case you let yourself be defined by what they tell you to do. The best plan, I think, is to step onto an orthogonal vector. Don't just do what they tell you, and don't just refuse to. Instead treat school as a day job. As day jobs go, it's pretty sweet. You're done at 3 o'clock, and you can even work on your own stuff while you're there."

Great stuff.

Comment author: Daermonn 11 May 2012 11:07:02PM *  1 point [-]

This is a good one. I definitely sympathize with Eliezer's point that Bayesian probability theory is only part of the solution. For example, in philosophy of science, the deductive-nomological account of scientific explanation is being displaced by a mechanistic view of explanation. In this context, a mechanism is an organization of parts which is responsible for some phenomenon. This change is driven by the inapplicability of D-N to certain areas of science, especially the biomedical sciences, where matters are more complex and we can't really deduce conclusions from universal laws; instead, people are treating law-like regularity as a phenomenon to be explained by appeal to the organized interactions of underlying parts.

For example, instead of explaining, "You display symptoms Y; all people with symptoms Y have disease X; therefore, you have disease X," mechanists explain by positing a mechanism, the functioning of which constitutes the phenomenon to be explained. This seems to me to be intimately related to Eliezer's "reduce-to-algorithm" stance, and an appeal to reduce abstract beliefs to physical mechanisms seems to be a pretty good way to generalize his stance here. In addition, certain mechanistic philosophers have done work to connect mechanisms and mechanistic explanation with Bayesian probability, and with Pearl's work on Bayesian networks and causality. Jon Williamson at Kent has my favorite account: he uses Recursive Bayesian Networks to model this sort of mechanistic thinking quantitatively.
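The contrast with the D-N syllogism can be sketched in probabilistic terms: rather than deducing the disease from a universal law, treat the symptom as evidence produced by an underlying mechanism and update on it with Bayes' rule. All the probabilities below are invented for illustration:

```python
# Two-node network: disease X -> symptom Y (probabilities are made up).
p_x = 0.01               # prior P(X): disease is rare
p_y_given_x = 0.9        # P(Y | X): the mechanism usually produces the symptom
p_y_given_not_x = 0.05   # P(Y | not X): the symptom has other causes

# Total probability of observing the symptom.
p_y = p_y_given_x * p_x + p_y_given_not_x * (1 - p_x)

# Bayes' rule: P(X | Y) = P(Y | X) * P(X) / P(Y)
p_x_given_y = p_y_given_x * p_x / p_y
print(round(p_x_given_y, 3))  # 0.154
```

Observing the symptom raises the probability of the disease from 1% to about 15% - evidence for the mechanism, not a deduction from a law. Williamson's Recursive Bayesian Networks extend this basic picture by letting nodes themselves stand for lower-level networks; that machinery is beyond this sketch.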

Comment author: Bugmaster 05 February 2012 06:59:17PM 0 points [-]

But the best flavor text ever is still Martyrs' Tomb.

I don't know, I find the Wall of Vapor quote inspirational, as well:

Walls of a castle are made out of stone,
Walls of a house out of bricks or of wood.
My walls are made out of magic alone,
Stronger than any that ever have stood.

Comment author: Daermonn 10 February 2012 05:45:35AM 2 points [-]

From Shattered Perception (Discard all the cards in your hand, then draw that many cards.):

"You must shatter the fetters of the past. Only then can you truly act."

I think this one takes the cake, in terms of rationality.