reup comments on Building toward a Friendly AI team - Less Wrong

Post author: lukeprog, 06 June 2012 06:57PM (24 points)


Comments (95)


Comment author: reup 07 June 2012 08:56:34PM 6 points

Either way, I think that building toward an FAI team is good for AI risk reduction, even if we decide (later) that an SI-hosted FAI team is not the best thing to do.

I question this assumption. I think that building an FAI team may damage your overall goal of AI risk reduction for several reasons:

  1. By setting yourself up as a competitor to other AGI research efforts, you strongly decrease the chance that they will listen to you. It will be far easier for them to write off your calls for consideration of friendliness issues as self-serving.

  2. You risk undermining your credibility on risk reduction by tarring yourselves as crackpots. In particular, looking for good mathematicians to work out your theories comes off as "we already know the truth, now we just need people to prove it."

  3. You're a small organization. Splitting your focus is not a recipe for greater effectiveness.

Comment author: Kaj_Sotala 08 June 2012 06:36:35AM 4 points

On the other hand, SI might get taken more seriously if it is able to demonstrate that it actually does know something about AGI design and isn't just a bunch of outsiders to the field doing idle philosophizing.

Of course, this requires that SI is ready to publish part of its AGI research.

Comment author: reup 08 June 2012 09:39:19AM 9 points

I agree, but as I've understood it, they're explicitly saying they won't release any AGI advances they make. What will it do to their credibility to be funding a "secret" AI project?

I honestly worry that this could kill funding for the organization, which doesn't seem optimal in any scenario.

Potential Donor: I've been impressed with your work on AI risk. Now, I hear you're also trying to build an AI yourselves. Who do you have working on your team?

SI: Well, we decided to train high schoolers since we couldn't find any researchers we could trust.

PD: Hm, so what about the project lead?

SI: Well, he's done brilliant work on rationality training and wrote a really fantastic Harry Potter fanfic that helped us recruit the high schoolers.

PD: Huh. So, how has the work gone so far?

SI: That's the best part, we're keeping it all secret so that our advances don't fall into the wrong hands. You wouldn't want that, would you?

PD: [backing away slowly] No, of course not... Well, I need to do a little more reading about your organization, but this sounds, um, good...

Comment author: Kaj_Sotala 08 June 2012 11:01:30AM 0 points

Indeed.

Comment author: V_V 25 January 2013 02:34:58PM 0 points

And did you exchange a walk-on part in the war for a lead role in a cage?

"Wish You Were Here" - R. Waters, D. Gilmour

Comment author: private_messaging 11 June 2012 03:22:43PM -1 points

That also requires that SI really isn't just a bunch of outsiders to the field doing idle philosophizing about infinitely powerful, fully general-purpose minds: minds so general-purpose that they would naturally be psychopathic (treating psychopathy as a type of intelligent behaviour that a fully general intelligence would engage in).

If SI is that, then its best course of action is to claim that it does (or would have to do) research so awesome that publishing it would risk mankind's survival, and so, to protect mankind, it only does philosophizing.