timtyler comments on SIAI - An Examination - Less Wrong

143 Post author: BrandonReinhart 02 May 2011 07:08AM


Comments (203)


Comment author: PhilGoetz 04 May 2011 10:33:38PM *  3 points [-]

These two excerpts summarize where I disagree with SIAI:

Our needs and opportunities could change in a big way in the future. Right now we are still trying to lay the basic groundwork for a project to build an FAI. At the point where we had the right groundwork and the right team available, that project could cost several million dollars per year.

As to patents and commercially viable innovations - we're not as sure about these. Our mission is ultimately to ensure that FAI gets built before UFAI; putting knowledge out there with general applicability for building AGI could therefore be dangerous and work directly against our mission.

So, SIAI plans to develop an AI that will take over the world, keeping their techniques secret, and therefore not getting critiques from the rest of the world.

This is WRONG. Horrendously, terrifyingly, irrationally wrong.

There are two major risks here. One is the risk of an arbitrarily built AI, made not with Yudkowskian methodologies, whatever those turn out to be, but with due diligence and precautions taken by its creators not to build something that will kill everybody.

The other is the risk of building an "FAI" that works, successfully becomes dictator of the universe for the rest of time, and then turns out more poorly than we had hoped.

I'm more afraid of the second than of the first. I find it implausible that building an AI that doesn't kill or enslave everybody is harder than building an AI that does enslave everybody, in a way that wiser beings than us would agree was beneficial.

And I find it even more implausible if the people building the first kind of AI can get advice from everyone else in the world, while the people building the FAI cannot.

Comment author: timtyler 04 May 2011 11:41:04PM -2 points [-]

So, SIAI plans to develop an AI that will take over the world, keeping their techniques secret, and therefore not getting critiques from the rest of the world.

This is WRONG. Horrendously, terrifyingly, irrationally wrong.

It reminds me of this:

if we can make it all the way to Singularity without it ever becoming a "public policy" issue, I think maybe we should.

The plan to steal the singularity.

Comment author: wedrifid 05 May 2011 03:57:54AM *  3 points [-]

The plan to steal the singularity.

Any other plan would be insane! (Or, at least, only sane as a second choice when stealing seems impractical.)

Comment author: timtyler 05 May 2011 08:15:07AM *  0 points [-]

Uh huh. You don't think some other parties might prefer to be consulted?

A plan to pull this off before the other parties wake up may set off some alarm bells.

Comment author: wedrifid 05 May 2011 08:23:56AM 0 points [-]

A plan to pull this off before the other parties wake up may set off some alarm bells.

... The kind of thing that makes 'just do it' seem impractical?

Comment author: timtyler 05 May 2011 08:44:18AM 0 points [-]

The "Plan to Singularity" dates back to 2000. Other parties are now murmuring - but I wouldn't say machine intelligence has yet become a "public policy" issue. I think it will in due course, though. So I don't think the original plan is very likely to pan out.