James_Miller comments on Q&A with new Executive Director of Singularity Institute - Less Wrong

Post author: lukeprog 07 November 2011 04:58AM (26 points)


Comments (177)


Comment author: quartz 07 November 2011 07:19:21AM (65 points)

How are you going to address the perceived and actual lack of rigor associated with SIAI?

There are essentially no academics who believe that high-quality research is happening at the Singularity Institute. This is likely to pose problems for your plan to work with professors to find research candidates. It is also likely to be an indicator of little high-quality work happening at the Institute.

In his recent Summit presentation, Eliezer states that "most things you need to know to build Friendly AI are rigorous understanding of AGI rather than Friendly parts per se". This suggests that researchers in AI and machine learning should be able to appreciate high-quality work done by SIAI. However, this is not happening, and the publications listed on the SIAI page (including TDT) are mostly high-level arguments that don't meet this standard. How do you plan to change this?

Comment author: James_Miller 07 November 2011 03:21:35PM (40 points, edited)

There are essentially no academics who believe that high-quality research is happening at the Singularity Institute.

I believe that high-quality research is happening at the Singularity Institute.

James Miller, Associate Professor of Economics, Smith College.

PhD, University of Chicago.

Comment author: XFrequentist 07 November 2011 10:41:55PM (23 points)

To distinguish the above from the statement "I like the Singularity Institute", could you be specific about what research activities you have observed in sufficient detail to confidently describe as "high-quality"?

ETA: Not a hint of sarcasm or snark intended; I'm sincerely curious.

Comment author: James_Miller 08 November 2011 01:25:01AM (24 points, edited)

I'm currently writing a book on the Singularity and have consequently become very familiar with the organization's work. I have gone through most of EY's writings and have an extremely high opinion of them. His research on AI plays a big part in my book. I have also been ending my game theory classes with "rationality shorts" in which I present some of EY's material from the sequences.

I also have a high opinion of the writings of Carl Shulman (an SI employee), including "How Hard is Artificial Intelligence? The Evolutionary Argument and Observation Selection Effects" (co-authored with Bostrom) and Shulman's paper on AGI and arms races.