jsalvatier comments on SIAI - An Examination - Less Wrong

Post author: BrandonReinhart 02 May 2011 07:08AM (143 points)


Comments (203)


Comment author: jsalvatier 03 May 2011 05:55:14PM 7 points

I think this is a legitimate concern. It's probably not a significant issue right now, but it definitely would be one if SIAI started making dramatic progress towards AGI. I don't think it deserves the downvotes it's getting.

Comment author: Vladimir_Nesov 03 May 2011 08:09:38PM 13 points

Note: the comment has been completely rewritten since the original wave of downvoting. It's much better now.

Comment author: BrandonReinhart 03 May 2011 08:05:11PM *  2 points

I agree, this doesn't deserve to be downvoted.

It should be possible for SIAI to build security measures while also providing some transparency into the nature of that security, in a way that doesn't compromise it. I would bet that Eliezer has thought about this, or at least thought about the fact that he needs to think about it in more detail. This would be something to look into in a deeper examination of SIAI plans.

Comment author: JohnH 03 May 2011 07:35:57PM 1 point

I am more concerned about the possibility that random employees at Google will succeed in making an AGI than I am about SIAI constructing one. To begin with, suppose there were only 1000 employees at Google interested in AGI, each interested enough to work only one hour a month on it, and each only 80% as effective as Eliezer (being some of the smartest people in the world doesn't quite put them on the same level as Eli). Then if Eliezer would have AGI in, say, 2031, Google would have it in about 2017.
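The implied arithmetic can be sketched as follows. This is a back-of-the-envelope sketch, not figures from the comment: the 2011 start date and the assumption that Eliezer works eight hours a day, every day of the year, are my own guesses, chosen because they roughly reproduce the 2017 figure.

```python
# Back-of-the-envelope sketch of the comment's arithmetic.
# All inputs are illustrative assumptions, not figures from the comment.

google_workers = 1000   # hypothetical Google employees interested in AGI
hours_per_month = 1     # each works one hour per month on it
effectiveness = 0.8     # each is 80% as effective as Eliezer

# Effective Google person-hours per year, discounted by effectiveness.
google_hours_per_year = google_workers * hours_per_month * 12 * effectiveness  # 9600

# Assume Eliezer works 8 hours a day, every day of the year.
eliezer_hours_per_year = 8 * 365  # 2920

speedup = google_hours_per_year / eliezer_hours_per_year  # ~3.3x

start_year = 2011
eliezer_years = 2031 - start_year        # 20 years of solo effort
google_years = eliezer_years / speedup   # ~6.1 years

print(round(start_year + google_years))  # ~2017
```

Of course, this treats person-hours as linearly additive across contributors, which is exactly the assumption TheOtherDave questions in the reply below.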

Comment author: TheOtherDave 03 May 2011 08:03:06PM 11 points

Personally, I expect even moderately complicated problems -- especially novel ones -- not to scale or decompose at all cleanly.

So, leaving aside all questions about who is smarter than whom, I don't expect a thousand smart people working an hour a month on a project to be nearly as productive as one smart person working eight hours a day.

If you could share your reasons for expecting otherwise, I might find them enlightening.

Comment author: JohnH 03 May 2011 08:20:39PM *  5 points

The idea is that they share their information and findings, so that while they are less efficient than if each worked constantly on the problem, they can point out possible solutions to each other that one person working alone would be less likely to notice except through a longer process. Since four or five people would be working on the project at any one time during the month, I assume they would work as a group and stagger their hours so that a nearly continuous effort is produced. Also, since much of the problem involves thinking about things, by not focusing on the issue constantly they may be more likely to come up with a solution than if they focused on it constantly.

This is a hypothetical; I have no idea how many people at Google are interested in AI or how much time they spend on it. I would imagine that quite a few people at Google are most likely working on AGI, since it relates directly to Google's core business, and that they work on it significantly more than one hour a month each.

(Edit: the comment with intelligence and Eli was a pun.)

Comment author: bogdanb 14 May 2011 09:22:27AM 2 points

the comment with intelligence and Eli was a pun.

I don’t get it. I can haz Xplanation?

Comment author: JohnH 14 May 2011 03:29:05PM 2 points

The word Eli can also be used for God, hence the pun.

Comment author: bogdanb 15 May 2011 01:49:15PM 1 point

Oh :-)