multifoliaterose comments on Should I believe what the SIAI claims? - Less Wrong

23 Post author: XiXiDu 12 August 2010 02:33PM

Comment author: multifoliaterose 12 August 2010 08:31:19PM *  4 points [-]

As I've said elsewhere:

(a) There are other existential risks, not just AGI. I think it more likely than not that one of these other existential risks will hit before an unfriendly AI is created. I have not seen anybody present a coherent argument that AGI is likely to be developed before any other existential risk hits us.

(b) Even if AGI deserves top priority, there's still the important question of how to go about working toward a FAI. As far as I can tell, working to build an AGI right now makes sense only if AGI is actually near (a few decades away).

(c) Even if AGI is near, there are still serious issues of accountability and transparency connected with SIAI. How do we know that they're making a careful effort to use donations in an optimal way? As things stand, I believe that it would be better to start an organization which exhibits high transparency and accountability, fund that, and let SIAI fold. I might change my mind on this point if SIAI decided to strive toward transparency and accountability.

Comment author: mkehrt 12 August 2010 08:51:52PM 2 points [-]

I really agree with both (a) and (b) (although I do not care about (c)). I am glad to see other people around here who think both these things.

Comment author: Vladimir_Nesov 12 August 2010 08:40:31PM *  1 point [-]

My comment was specifically about the importance of FAI irrespective of existential risks, AGI or not. If we manage to survive at all, this is what we must succeed at. It also prevents all existential risks on completion, where theoretically possible.

Comment author: multifoliaterose 12 August 2010 08:47:57PM 1 point [-]

Okay, we had this back and forth before; I didn't understand you then, but now I do. I guess I was being dense. Anyway, the probability of current action leading to FAI might still be sufficiently small that it makes sense to focus on other existential risks for the moment. And my other points remain.

Comment author: Vladimir_Nesov 12 August 2010 08:58:26PM *  4 points [-]

This is the same zero-sum thinking as in your previous post: people are currently not deciding between different causes, they are deciding whether to take a specific cause seriously. If you already contribute everything you could to a nanotech-risk-prevention organization, then we could ask whether switching to SIAI will do more good. But it's not the question usually posed.

As far as I can tell, working to build an AGI right now makes sense only if AGI is actually near (a few decades away).

Working to build AGI right now is certainly a bad idea, at best leading nowhere, at worst killing us all. SIAI doesn't work on building AGI right now, no no no. We need understanding, not robots. Like this post, say.

Comment author: multifoliaterose 12 August 2010 11:32:12PM *  5 points [-]

This is the same zero-sum thinking as in your previous post: people are currently not deciding between different causes, they are deciding whether to take a specific cause seriously. If you already contribute everything you could to a nanotech-risk-prevention organization, then we could ask whether switching to SIAI will do more good. But it's not the question usually posed.

I agree that in general people should be more concerned about existential risk and that it's worthwhile to promote general awareness of existential risk.

But there is a zero-sum aspect to philanthropic efforts. See the GiveWell blog entry titled Denying The Choice.

More to the point, I think that one of the major factors keeping people away from studying existential risk is that many of the people who are interested in existential risk (including Eliezer) have low credibility, on account of expressing confident, apparently sensationalist claims without supporting them with careful, well-reasoned arguments. I'm seriously concerned about this issue.

If Eliezer can't explain why it's pretty obvious to him that AGI will be developed within the next century, then he should explicitly say something like "I believe that AGI will be developed over the next 100 years, but it's hard for me to express why, so it's understandable that people don't believe me" or "I'm uncertain as to whether or not AGI will be developed over the next 100 years."

When he makes unsupported claims that sound like the sort of thing that somebody would say just to get attention, he's actively damaging the cause of existential risk.

Comment author: timtyler 13 August 2010 08:19:20AM 0 points [-]

Re: "AGI will be developed over the next 100 years"

I list various estimates from those interested enough in the issue to bother giving probability density functions at the bottom of:

http://alife.co.uk/essays/how_long_before_superintelligence/

Comment author: multifoliaterose 13 August 2010 10:29:13AM 0 points [-]

Thanks, I'll check this out when I get a chance. I don't know whether I'll agree with your conclusions, but it looks like you've at least attempted to answer one of my main questions concerning the feasibility of SIAI's approach.

Comment author: CarlShulman 13 August 2010 11:58:46AM 1 point [-]

Those surveys suffer from selection bias. Nick Bostrom is going to try to get a similar survey instrument administered to a less-selected AI audience. There was also a poll at the AI@50 conference.

Comment author: timtyler 13 August 2010 08:10:42PM 0 points [-]

http://www.engagingexperience.com/2006/07/ai50_first_poll.html

If the raw data was ever published, that might be of some interest.

Comment author: gwern 13 August 2010 01:37:06PM 0 points [-]

Any chance of piggybacking questions relevant to Maes-Garreau on that survey? As you point out on that page, better stats are badly needed.

Comment author: CarlShulman 13 August 2010 02:01:47PM 1 point [-]

And indeed, I suggested to SIAI folk that all public record predictions of AI timelines be collected for that purpose, and such a project is underway.

Comment author: timtyler 13 August 2010 08:13:03AM 1 point [-]

Re: "Working to build AGI right now is certainly a bad idea, at best leading nowhere, at worst killing us all."

The marginal benefit of making machines smarter seems large - e.g. see automobile safety applications: http://www.youtube.com/watch?v=I4EY9_mOvO8

I don't really see that situation changing much anytime soon - there will probably be such marginal benefits for a long time to come.

Comment author: timtyler 13 August 2010 06:41:44AM 0 points [-]

Re: "There are other existential risks, not just AGI. I think it more likely than not that one of these other existential risks will hit before an unfriendly AI is created."

The humans are going to be obliterated soon?!?

Alas, you don't present your supporting reasoning.

Comment author: multifoliaterose 13 August 2010 10:26:41AM *  1 point [-]

No, no, I'm not at all confident that humans will be obliterated soon. But why, for example, is it more likely that humans will go extinct due to AGI than that humans will go extinct due to a large scale nuclear war? It could be that AGI deserves top priority, but I haven't seen a good argument for why.

Comment author: ciphergoth 13 August 2010 11:17:17AM 4 points [-]

I think AGI wiping out humanity is far more likely than nuclear war doing so (it's hard to kill everyone with a nuclear war) but even if I didn't, I'd still want to work on the issue which is getting the least attention, since the marginal contribution I can make is greater.

Comment author: multifoliaterose 13 August 2010 12:33:04PM *  0 points [-]

Yes, I actually agree with you about nuclear war (and did before I mentioned it!) - I should have picked a better example. How about existential risk from asteroid strikes?

Several points:

(1) Nuclear war could still cause astronomical waste of the kind that I discuss here.

(2) Are you sure that the marginal contribution that you can make to the issue which is getting the least attention is the greatest? The issues getting the least attention may be getting little attention precisely because people know that there's nothing that can be done about them.

(3) If you satisfactorily address my point (a), points (b) and (c) will remain.

Comment author: timtyler 13 August 2010 08:17:14PM 1 point [-]

p(asteroid strike/year) is pretty low. Most are not too worried.

Comment author: multifoliaterose 14 August 2010 09:31:07AM 0 points [-]

The question is whether at present it's possible to lower existential risk more by funding and advocating FAI research than it is by funding and advocating an asteroid strike prevention program. Despite the low probability of an asteroid strike, I don't think that the answer to this question is obvious.

Comment author: timtyler 14 August 2010 10:17:49AM *  1 point [-]

I figure a pretty important thing is to get out of the current vulnerable position as soon as possible. To do that, a major thing we will need is intelligent machines - and so we should allocate resources to their development. Inevitably, that will include consideration of safety features. We can already see some damage when today's companies decide to duke it out - and today's companies are not very powerful compared to what is coming. The situation seems relatively pressing and urgent.

Comment author: xamdam 03 September 2010 03:13:31PM 0 points [-]

To do that, a major thing we will need is intelligent machines

that=asteroids?

If yes, I highly doubt we need machines significantly more intelligent than existing military technology adapted for the purpose.

Comment author: timtyler 03 September 2010 08:13:28PM *  0 points [-]

That would hardly be a way to "get out of the current vulnerable position as soon as possible".

Comment author: multifoliaterose 14 August 2010 10:34:21AM 0 points [-]

I agree that friendly intelligent machines would be a great asset in mitigating future existential risk.

My current position is that, at present, devoting resources to developing safe intelligent machines is so unlikely to substantially increase the probability that we'll develop them that funding and advocating an asteroid strike prevention program is likely to reduce existential risk more than funding and advocating FAI research is.

I may be wrong, but would require a careful argument for the opposite position before changing my mind.

Comment author: timtyler 14 August 2010 10:44:58AM *  1 point [-]

Asteroid strikes are very unlikely, so beating them is a really low standard, which, IMO, machine intelligence projects do with ease. Funding the area sensibly would help make it happen, by most accounts. Detailed justification is beyond the scope of this comment, though.

Comment author: Vladimir_Nesov 14 August 2010 10:38:28AM 0 points [-]

My current position is that, at present, devoting resources to developing safe intelligent machines is so unlikely to substantially increase the probability that we'll develop them that funding and advocating an asteroid strike prevention program is likely to reduce existential risk more than funding and advocating FAI research is.

Considering the larger problem statement (technically understanding what we value, as opposed to actually building an AGI with those values), what do you see as distinguishing a situation where we are ready to consider the problem from one where we are not? How can one come to such a conclusion without actually considering the problem?

Comment author: timtyler 13 August 2010 08:15:57PM 1 point [-]

p(machine intelligence) is going up annually, while p(nuclear holocaust) has been going down for a long time now. Neither is likely to obliterate civilisation, but machine intelligence could nonetheless be disruptive.