
lukeprog comments on Course recommendations for Friendliness researchers - Less Wrong Discussion

Post author: Louie 09 January 2013 02:33PM


Comment author: lukeprog 09 January 2013 06:56:02PM 24 points

Allow me to add another clarification.

Earlier, I explained to someone that most of the problems in Eliezer's forthcoming Open Problems in Friendly AI sequence are still at the stage of being philosophy problems. Why, then, do Louie and I talk about FAI being "mostly a math problem, not a programming problem"?

The point we're trying to make is that Friendly AI, as we understand it, isn't chiefly about hiring programmers and writing code. Instead, it's mostly about formalizing problems (in reflective reasoning, decision theory, etc.) into math problems, and then solving those math problems. The formalization step itself will likely require the invention of new math — not so much clever programming tricks (though those may be required at a later stage).
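To make the "formalizing problems into math problems" step concrete, here is a minimal sketch of the kind of formalization decision theory already provides: the informal question "which action is best?" becomes the math problem "choose the action with maximal expected utility." The actions, probabilities, and utilities below are invented for illustration; this is not SI's actual formalism.

```python
def expected_utility(probs, utils):
    """Expected utility of an action: sum of P(outcome) * U(outcome)."""
    return sum(p * u for p, u in zip(probs, utils))

def best_action(actions):
    """Pick the action whose expected utility is highest."""
    return max(actions, key=lambda a: expected_utility(a["probs"], a["utils"]))

# Hypothetical choice between a sure thing and a gamble.
actions = [
    {"name": "safe",  "probs": [1.0],      "utils": [50.0]},
    {"name": "risky", "probs": [0.4, 0.6], "utils": [100.0, 10.0]},
]
print(best_action(actions)["name"])  # "safe": 50 > 0.4*100 + 0.6*10 = 46
```

Once the problem is stated in this form, "what should the agent do?" stops being a philosophy question and becomes a calculation, which is the shape of progress being described above.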

Most of the "open FAI problems" are still at the stage of philosophy because, as Louie says, "most of the theory behind self-reference, decision theory, and general AI techniques haven't been formalized... yet." But we think those philosophical problems will be formalized into math problems, sometimes requiring new math.

So we're not (at this stage) looking to hire great programmers. We're looking to hire great mathematicians, even though most of the problems are still at the "philosophy" stage of formalization.

The specific practice of turning philosophy into math is a key feature of the field called formal epistemology, but Louie and I couldn't find any "standard" classes or textbooks on formal epistemology that we would strongly recommend, so we had to leave it off the list for now.

Examples of turning philosophy into math include von Neumann and Morgenstern's formalization of "rationality" as axiomatic utility theory, and Shannon's formalization of informal notions of information as information theory.
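The Shannon example illustrates the payoff of such formalization: a vague notion ("how much information does this message carry?") becomes a computable quantity. A minimal sketch of Shannon entropy in bits:

```python
import math

def entropy(probs):
    """Shannon entropy H = -sum(p * log2(p)), in bits.

    Terms with p == 0 contribute nothing, by the convention 0*log(0) = 0.
    """
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))  # 1.0 -- a fair coin flip carries exactly one bit
print(entropy([0.9, 0.1]))  # less than 1 bit: a biased coin is more predictable
```

Before Shannon, "amount of information" was a philosophical notion; after, it was a theorem-generating definition. That is the transition being hoped for with the open FAI problems.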

Comment author: Vladimir_Nesov 09 January 2013 07:56:12PM 25 points

"will likely require the invention of new fundamental math"

I wish you'd stop saying that (without justification or clarification). Modern math seems quite powerful enough to express most problems, so the words "new" and "fundamental" sound somewhat suspicious. Is this "new fundamental math" something like the invention of category theory? Probably not. Clarifying the topic of Friendly AI would almost certainly involve nontrivial mathematical developments, but in the current state of utter confusion it seems premature to characterize these developments as "fundamental".

We don't know how it will turn out; what we know is that only a mathematical theory would furnish an accurate enough understanding of the topic. So it seems a good heuristic to have mathematicians work on the problem, because non-mathematicians probably won't be able to develop a mathematical theory. In addition, we have some idea about the areas where additional training might be helpful, such as logic, type theory, formal languages, probability, and computability.

Comment author: lukeprog 09 January 2013 11:28:18PM 16 points

You're right, the word "fundamental" might suggest the wrong kinds of things. I'm not at all confident that Friendly AI will require the invention of something like category theory. So, I've removed the word "fundamental" from the above comment.

Comment author: JoshuaFox 09 January 2013 08:20:53PM 3 points

Developing new fundamental math is hard. SI may have to do it, but keep it to a minimum if you want to win!