Less Wrong is a community blog devoted to refining the art of human rationality.

XiXiDu comments on SIAI - An Examination - Less Wrong

143 Post author: BrandonReinhart 02 May 2011 07:08AM


Comment author: XiXiDu 03 May 2011 02:56:08PM *  25 points [-]

The organization reported $118,803.00 in theft in 2009, resulting in a year-end asset balance lower than expected. The SIAI is currently pursuing legal restitution.

It isn't much harder to steal code than to steal money from a bank account. Given the nature of research being conducted by the SIAI, one of the first and most important steps would have to be to think about adequate security measures.

If you are a potential donor interested in mitigating risks from AI, then before contributing money you will have to make sure that your contribution does not increase those risks even further.

If you believe that risks from AI are to be taken seriously, then you should demand that any organisation studying artificial general intelligence establish significant measures against third-party intrusion and industrial espionage - measures at least on par with the biosafety level 4 precautions required for work with dangerous and exotic agents.

It might be the case that the SIAI already employs various measures against the possibility of theft of sensitive information, yet any evidence that hints at weak security should be taken seriously. In particular, the possibility that untrustworthy people can access critical material should be examined.

Comment author: CarlShulman 03 May 2011 06:14:59PM 18 points [-]

Upvoted for raising some important points. Ceteris paribus, one failure of internal controls is nontrivial evidence of future ones.

For these purposes one should distinguish between sections of the organization. Eliezer Yudkowsky and Marcello Herreshoff's AI work is a separate 'box' from other SIAI activities such as the Summit, Visiting Fellows program, etc. Eliezer is far more often said to be too cautious and secretive with respect to that than the other way around.

Comment author: wedrifid 04 May 2011 03:47:35AM 4 points [-]

It isn't much harder to steal code than to steal money from a bank account. Given the nature of research being conducted by the SIAI, one of the first and most important steps would have to be to think about adequate security measures.

This would certainly be a critical consideration if or when the SIAI were actually doing work directly on AGI construction. I don't believe that is a focus in the near future. There is too much to be done before that becomes a possibility.

(Not that establishing security anyway is a bad thing.)

Comment author: katydee 04 May 2011 04:49:26AM *  3 points [-]

I am at least 95% confident that the procedures that govern access to money in SIAI are both different from and far less strict than the procedures that would be used to govern access to code. Even the woefully obsolete "So You Want To Be A Friendly AI Programmer" document contains procedures that are more strict than this for dealing with actual AI code, and my suspicion is that these are likely to have been updated only in the direction of more strictness since.

Disclaimer: I am neither an affiliate of nor a donor to the SIAI.

Comment author: jsalvatier 03 May 2011 05:55:14PM 7 points [-]

I think this is a legitimate concern. It's probably not a significant issue right now, but it definitely would be one if SIAI started making dramatic progress towards AGI. I don't think it deserves the downvotes it's getting.

Comment author: Vladimir_Nesov 03 May 2011 08:09:38PM 13 points [-]

Note: the comment has been completely rewritten since the original wave of downvoting. It's much better now.

Comment author: BrandonReinhart 03 May 2011 08:05:11PM *  2 points [-]

I agree, this doesn't deserve to be downvoted.

It should be possible for the SIAI to build security measures while also providing some transparency into the nature of that security in a way that doesn't also compromise it. I would bet that Eliezer has thought about this, or at least thought about the fact that he needs to think about it in more detail. This would be something to look into in a deeper examination of SIAI plans.

Comment author: JohnH 03 May 2011 07:35:57PM 1 point [-]

I am more concerned about the possibility that random employees at Google will succeed in making an AGI than I am about SIAI constructing one. To begin with: even if there were only 1000 employees at Google who were interested in AGI, and they were only interested enough to work 1 hour a month each, and they were only 80% as effective as Eliezer (as being some of the smartest people in the world doesn't quite put them on the same level as Eli), then if Eliezer will have AGI in, say, 2031, Google will have it in about 2017.
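The arithmetic behind the "2031 → 2017" figure is left implicit. One way to reproduce it is to assume Eliezer works 8 hours a day, every day of the month - a guessed schedule, not something the comment states - and compare total effective person-hours. A minimal sketch under those assumptions:

```python
# Back-of-envelope version of the comment's arithmetic. The headcount,
# hours, and effectiveness figures are the comment's own hypotheticals;
# Eliezer's work schedule is an assumed value chosen to match the result.
ELIEZER_HOURS_PER_MONTH = 8 * 30          # assume 8 h/day, every day -> 240 h
GOOGLE_HEADCOUNT = 1000                   # hypothetical interested employees
GOOGLE_HOURS_EACH = 1                     # hours per person per month
GOOGLE_EFFECTIVENESS = 0.8                # 80% as effective as Eliezer

google_effective_hours = GOOGLE_HEADCOUNT * GOOGLE_HOURS_EACH * GOOGLE_EFFECTIVENESS
speedup = google_effective_hours / ELIEZER_HOURS_PER_MONTH   # 800 / 240 ≈ 3.33x

years_for_eliezer = 2031 - 2011           # 20 years of solo effort
years_for_google = years_for_eliezer / speedup               # ≈ 6 years
print(2011 + round(years_for_google))     # → 2017
```

Note this treats effort as perfectly parallelizable and additive, which is exactly the premise TheOtherDave challenges in the reply below.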

Comment author: TheOtherDave 03 May 2011 08:03:06PM 11 points [-]

Personally, I expect even moderately complicated problems -- especially novel ones -- to not scale or decompose at all cleanly.

So, leaving aside all questions about who is smarter than whom, I don't expect a thousand smart people working an hour a month on a project to be nearly as productive as one smart person working eight hours a day.

If you could share your reasons for expecting otherwise, I might find them enlightening.

Comment author: JohnH 03 May 2011 08:20:39PM *  5 points [-]

The idea is that they are sharing their information and findings, so that while they are less efficient than someone working constantly on the problem, they are able to point out possible solutions to each other that one person working alone would be less likely to notice except through a longer process. As there would be between 4 and 5 people working on the project at any one time during the month, I assume they would work in a group and stagger their times so that a nearly continuous effort is produced. Also, as much of the problem involves thinking about things, by not focusing on the issue constantly they may be more likely to come up with a solution than if they focused on it constantly.
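The "4-5 people at any one time" figure follows from spreading the pool's 1000 monthly person-hours across a working day. A quick sanity check, assuming an 8-hour-a-day, 30-day staffing window (an assumed schedule, not stated in the comment):

```python
# Sanity check of the "4-5 people at any one time" figure.
total_person_hours = 1000 * 1      # 1000 people, 1 hour each per month
staffed_hours = 8 * 30             # assumed 8 h/day working window, 30 days
concurrent = total_person_hours / staffed_hours
print(round(concurrent, 1))        # → 4.2
```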

This is a hypothetical; I have no idea how many people at Google are interested in AI or how much time they spend on it. I would imagine there are most likely quite a few people at Google working on AGI, as it relates directly to Google's core business, and that they work on it significantly more than one hour a month each.

(Edit: the comment about intelligence and Eli was a pun.)

Comment author: bogdanb 14 May 2011 09:22:27AM 2 points [-]

the comment with intelligence and Eli was a pun.

I don’t get it. I can haz Xplanation?

Comment author: JohnH 14 May 2011 03:29:05PM 2 points [-]

The word Eli can also be used for god, hence the pun.

Comment author: bogdanb 15 May 2011 01:49:15PM 1 point [-]

Oh :-)

Comment author: katydee 03 May 2011 03:16:32PM *  3 points [-]

Your post employs good parallelism of form, yet poor parallelism of substance.

EDIT: no longer relevant, but kept for context

Comment author: XiXiDu 03 May 2011 05:47:52PM 0 points [-]

I added to the comment and expanded on what I thought would be an obvious inference.

Comment author: katydee 03 May 2011 07:20:21PM *  4 points [-]

I didn't find your inference too oblique; I found it too inaccurate.

Comment author: timtyler 05 May 2011 07:57:52PM *  2 points [-]

If you believe that risks from AI are to be taken seriously then you should demand that any organisation that studies artificial general intelligence has to establish significant measures against third-party intrusion and industrial espionage that is at least on par with the biosafety level 4 required for work with dangerous and exotic agents.

What if you believe in openness and transparency - and feel that elaborate attempts to maintain secrecy will cause your partners to believe you are hiding motives or knowledge from them - thereby tarnishing your reputation - and making them trust you less by making yourself appear selfish and unwilling to share?

Surely, then, the strategies you refer to could easily be highly counter-productive.

Basically, if you misguidedly impose secrecy on the organisations involved, then the good guys have fewer means of honestly signalling their altruism towards each other - and cooperating with each other - which means that their progress is slower and their relative advantage is diminished. That is surely bad news for overall risk.

The "opposite" strategy is much better, IMO. Don't cooperate with secretive non-sharers. They are probably selfish bad guys. Sharing now is the best way to honestly signal that you will share again in the future.