jacob_cannell comments on Building toward a Friendly AI team - Less Wrong Discussion

24 Post author: lukeprog 06 June 2012 06:57PM




Comment author: Kaj_Sotala 08 June 2012 06:36:35AM 4 points

On the other hand, SI might get taken more seriously if it is able to demonstrate that it actually does know something about AGI design and isn't just a bunch of outsiders to the field doing idle philosophizing.

Of course, this requires that SI is ready to publish part of its AGI research.

Comment author: reup 08 June 2012 09:39:19AM 9 points

I agree, but as I've understood it, they're explicitly saying they won't release any AGI advances they make. What will it do to their credibility to be funding a "secret" AI project?

I honestly worry that this could kill funding for the organization, which doesn't seem optimal in any scenario.

Potential Donor: I've been impressed with your work on AI risk. Now, I hear you're also trying to build an AI yourselves. Who do you have working on your team?

SI: Well, we decided to train high schoolers since we couldn't find any researchers we could trust.

PD: Hm, so what about the project lead?

SI: Well, he's done brilliant work on rationality training and wrote a really fantastic Harry Potter fanfic that helped us recruit the high schoolers.

PD: Huh. So, how has the work gone so far?

SI: That's the best part, we're keeping it all secret so that our advances don't fall into the wrong hands. You wouldn't want that, would you?

PD: [backing away slowly] No, of course not... Well, I need to do a little more reading about your organization, but this sounds, um, good...

Comment author: Kaj_Sotala 08 June 2012 11:01:30AM 0 points

Indeed.

Comment author: V_V 25 January 2013 02:34:58PM 0 points

And did you exchange a walk on part in the war for a lead role in a cage?

"Wish You Were Here" - R. Waters, D. Gilmour

Comment author: private_messaging 11 June 2012 03:22:43PM -1 points

That also requires that SI really isn't just a bunch of outsiders to the field doing idle philosophizing about infinitely powerful, fully general-purpose minds that would be so general-purpose they'd be naturally psychopathic (seeing psychopathy as a type of intelligent behaviour that a fully general intelligence would engage in).

If SI is exactly that, its best course of action is to claim that it does (or would have to do) research so awesome that publishing it would risk mankind's survival, and that to protect mankind it therefore only does philosophizing.