Gram_Stone comments on FAI Research Constraints and AGI Side Effects - Less Wrong

Post author: JustinShovelain 03 June 2015 07:25PM




Comment author: 27chaos 03 June 2015 07:57:18PM 7 points

This seems like a mathematical write-up of a very simple idea. I dislike papers like this: the theory itself could have been described in one sentence, and nothing beyond the theory is presented here. There is no evidence of the theory's empirical value, and no discussion of what the actual leakage ratio is or what barriers to Friendliness remain. A lot of math used as mere ornamentation.

Comment author: Gram_Stone 03 June 2015 09:04:41PM 5 points

Formalizations can be simple and useful at the same time. I'm reminded of things like Chapter 4 of Superintelligence and Bostrom's GCR model: relatively simple models that make very explicit things we had previously considered only in natural language. Attention is a limited resource, and models like this let us focus it, empirically, on which observations we should be making to estimate the model's inputs, and, theoretically, on what to formalize next. Technological strategy cannot be discussed in natural language forever if we are to make substantial progress, and now we have a better idea of what to measure.

Comment author: 27chaos 03 June 2015 09:28:54PM -2 points

I hope we see such progress soon.

Comment author: EGI 08 June 2015 07:18:14AM 3 points

The problem is that this formalisation is probably bullshit. It reads a bit like a video game in which you generate "research points" for AGI and/or FAI; research IRL does not work like that. You need certain key insights for AGI and a different set for FAI; if some insights are shared between the two sets (they probably are), the above model no longer works. A further problem: how do you quantify G and F? A mathematical model with variables you can't quantify is of, um, very limited use (or should I say ornamentation?).

Comment author: Gram_Stone 08 June 2015 05:27:28PM 1 point

It sounds like we're just rehashing the old arguments over the Drake equation.

You need certain key insights for AGI and a different set for FAI; if some insights are shared between the two sets (they probably are), the above model no longer works.

The model doesn't assume that the sets of research are disjoint. See this thread, where jessicat assumed the model wouldn't work for her conception of FAI research, in which the FAI problem is entirely reduced to AGI research. F_remaining and G_remaining can both include units of FAI or AGI research; see the first paragraph of the section on Model 1.
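To make the discussion concrete, here is a minimal sketch of how I read Model 1; this is my own toy reconstruction, not the authors' code, and the function name, the specific numbers, and the strict-inequality success condition are all my assumptions. The idea being debated is just a race: each unit of FAI research "leaks" some fraction of a unit of AGI research as a side effect, and FAI succeeds only if finishing all remaining FAI research leaks less AGI progress than remains to be done.

```python
def fai_completed_first(f_remaining, g_remaining, leakage_ratio):
    """Toy reading of Model 1's race condition (illustrative, not authoritative).

    f_remaining:   units of FAI research still needed
    g_remaining:   units of AGI research still needed
    leakage_ratio: AGI research units produced as a side effect
                   per unit of FAI research completed
    """
    # AGI progress leaked while finishing all remaining FAI research
    leaked_agi_progress = f_remaining * leakage_ratio
    # FAI finishes first iff the leakage alone does not complete AGI
    return leaked_agi_progress < g_remaining

# With 100 FAI units and 50 AGI units remaining:
print(fai_completed_first(100, 50, 0.4))  # True  (leaks 40 < 50)
print(fai_completed_first(100, 50, 0.6))  # False (leaks 60 >= 50)
```

Even this crude version shows why the shared-insights objection doesn't break the model: shared insights just show up as a higher leakage ratio, not as a case the model can't express.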

How do you quantify G and F? A mathematical model with variables you can't quantify is of, um, very limited use (or should I say ornamentation?).

The point is that this is not a question you would even have asked before. It's just like the criticism of the last four factors in the Drake equation: how many people were thinking about the questions those factors raise before the equation was invented? I think the model is more useful to have than not, and it can be built upon, which the authors apparently intend to do. Instead of asking "How do you quantify G and F?" rhetorically, ask it in earnest. We can pose subquestions that bear on it: What AGI research could depend on FAI research, and vice versa? Are there examples of past technologies in which safety and functionality were at odds, and how analogous are those examples to FAI/AGI research? How did, say, the Manhattan Project, especially in its early days, quantify and estimate its progress against other national nuclear weapons programs? What literature already exists on estimating research progress? And so on.

And then there are questions about how to improve the model, some of which the authors pose in the post itself. Although I don't find that any of your criticisms hold, I would still ask: how would you model this problem?