shminux comments on New report: Intelligence Explosion Microeconomics - Less Wrong
Here is my takeaway from the report. It is not a summary, and some of the implications are my own.
The 4 theses (conjectures, really), if true, imply that AGI is an x-risk, because an AGI emerging in an ad hoc fashion will compete with humans and inevitably win.
The difference between theses 1-3 and thesis 4 is that 1-3 are outside human control, while there is hope of solving 4, hence FAI research.
There are a few outs, which the report considers unlikely:
Given the above, the obvious first step is to formalize each of theses 1-3 in order to evaluate their validity. The report outlines potential steps toward formalizing thesis 1, Intelligence Explosion (IE):
The answer to this last question would then determine the direction of the FAI effort, if any.
This was basically the content of the first 4 chapters, as far as I can tell. (Chapter 2 advocates an outside view, and chapter 3 mostly advocates a hard take-off.) Chapters 5 and 6 are a mix of open questions in IE relevant to Step 3 above, some speculations and MIRI policy arguments, and some musings about the scope, effort, and qualifications required to tackle the problem.