loqi comments on Let's reimplement EURISKO! - Less Wrong

Post author: cousin_it 11 June 2009 04:28PM


Comment author: rwallace 13 June 2009 01:37:22PM 8 points

Basically yes. Civilizations, species, and worlds are mortal; there are rare long-lived species whose environment has remained unchanged for long periods of time, but the environment in which we evolved is long gone, and our current one is not merely unstable, it is not even in equilibrium. And as long as we remain confined to one little planet, running off a dwindling resource base and with everyone in weapon range of everyone else, there is nothing good about our long-term prospects. (For a fictional but eloquent discussion of some of the issues involved, see Permanence by Karl Schroeder.)

To change that, we need more advanced technology, for which we need software tools smart enough to help us deal with complexity. If our best minds start buying into the UFAI meme and turning away from building anything more ambitious than a social networking mashup, we may simply waste whatever chance we had. That is why UFAI belief is not, as its proponents would have it, the road of safety, but the road of oblivion.

Comment author: loqi 13 June 2009 06:50:24PM 1 point

rhollerith raised some reasonable objections to this response that I'd like to see answered, but I'll try to answer your question without that information:

What would you consider to be the minimum estimate of the probability that I'm right, necessary to "reasonably" motivate concern or action?

As far as concern goes, I think my threshold for concern over your proposition is identical to my threshold for concern over UFAI, as they postulate similar results (UFAI still seems marginally worse due to the chance of destroying intelligent alien life, but I'll write this off as entirely negligible for the current discussion). I'd say 1:10,000 is a reasonable threshold for concern of the vocalized form, "hey, is anyone looking into this?" I'd love to see some more concrete discussion on this.

"Action" in your scenario is complicated by its direct opposition to acceptance of UFAI, so I can only give you some rough constraints. To simplify, I'll assume all risks allow equally effective action to compensate for them, even though this is clearly not the case.

Let R = the scenario you've described, E = the scenario in which UFAI is a credible threat. "R and E" could be described as "damned if we do, damned if we don't", in which case action is basically futile, so I'll consider the case where R and E are disjoint. In that case, since acting on R directly opposes guarding against E, action would only be justifiable if p(R) > p(E). My intuition says that the strength of the justification is proportional to p(R) - p(E), but I'd prefer more clarity in this step.

So that's a rough answer... if T is my threshold probability for action in the face of existential risk, T * (p(R) - p(E)) is my threshold for action on your scenario. If R and E aren't disjoint, it looks something like T * (p(R and ~E) - p(E and ~R)).
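
To make the arithmetic concrete, here is a minimal Python sketch of the two threshold formulas above. The value of T and all the probabilities are hypothetical placeholders chosen purely for illustration, not estimates anyone in this thread has endorsed:

    # Hypothetical illustration of the threshold arithmetic above.
    # T and every probability below are made-up placeholder values.

    T = 1e-4  # placeholder: threshold probability for acting on an existential risk

    def action_threshold_disjoint(p_R, p_E):
        """Threshold for action on R when R and E are disjoint scenarios."""
        return T * (p_R - p_E)

    def action_threshold_general(p_R_and_not_E, p_E_and_not_R):
        """General case: only the non-overlapping probability mass matters,
        since in the "R and E" overlap action is futile either way."""
        return T * (p_R_and_not_E - p_E_and_not_R)

    # The sign is what matters: if p(R) <= p(E), action on R is never justified.
    print(action_threshold_disjoint(0.02, 0.01))     # roughly 1e-06
    print(action_threshold_general(0.015, 0.005))    # roughly 1e-06

The only structural point the sketch encodes is that the justification for action scales with the difference between the two scenarios' probabilities, and vanishes (or reverses) when E is at least as likely as R.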

Comment author: rwallace 13 June 2009 08:56:38PM 0 points

A fair answer, thanks.

Though I'm not convinced "R and E" necessarily means "damned either way". If I believed E in addition to R, I think what I would do is:

Forget about memetics in either direction, as it's likely to do more harm than good, and concentrate all available resources on developing Friendly AI as reliably and quickly as possible.

However, provably Friendly AI is still not possible with 2009-vintage tools.

So I'd do it in stages: a series of self-improving AIs, the early ones with low intelligence and crude Friendliness architecture, using them to develop better Friendliness architecture in tandem with increasing intelligence for the later ones. No guarantees, but if recursive self-improvement actually worked, I think that approach would have a reasonable chance of success.