multifoliaterose comments on Should I believe what the SIAI claims? - Less Wrong

23 points | Post author: XiXiDu 12 August 2010 02:33PM


Comment author: timtyler 13 August 2010 06:41:44AM 0 points

Re: "There are other existential risks, not just AGI. I think it more likely than not that one of these other existential risks will hit before an unfriendly AI is created hits."

The humans are going to be obliterated soon?!?

Alas, you don't present your supporting reasoning.

Comment author: multifoliaterose 13 August 2010 10:26:41AM * 1 point

No, no, I'm not at all confident that humans will be obliterated soon. But why, for example, is it more likely that humans will go extinct due to AGI than that humans will go extinct due to a large scale nuclear war? It could be that AGI deserves top priority, but I haven't seen a good argument for why.

Comment author: ciphergoth 13 August 2010 11:17:17AM 4 points

I think AGI wiping out humanity is far more likely than nuclear war doing so (it's hard to kill everyone with a nuclear war), but even if I didn't, I'd still want to work on the issue which is getting the least attention, since the marginal contribution I can make is greater.
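A minimal sketch of the neglectedness reasoning here, under the assumption (hypothetical, not from this thread) that total impact in a cause area grows logarithmically with effort, so marginal impact scales inversely with the effort the area already receives; every number below is an illustrative placeholder:

```python
# Hypothetical diminishing-returns model: total impact in a cause area is
# assumed to grow logarithmically with effort, so the marginal impact of
# one more unit of effort is importance / existing_effort.
# All numbers are illustrative placeholders.

def marginal_impact(importance: float, existing_effort: float) -> float:
    """Impact of one additional unit of effort in a cause area."""
    return importance / existing_effort

# Even if nuclear risk were judged 10x as important as AGI risk,
# the far greater attention it already receives can leave a smaller margin.
nuclear = marginal_impact(importance=10.0, existing_effort=10_000.0)
agi = marginal_impact(importance=1.0, existing_effort=10.0)

print(f"marginal impact, nuclear risk: {nuclear:.4f}")  # 0.0010
print(f"marginal impact, AGI risk:     {agi:.4f}")      # 0.1000
```

Whether the diminishing-returns assumption holds is exactly what multifoliaterose questions in point (2) below: a neglected cause may be neglected because it is intractable, which this model does not capture.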

Comment author: multifoliaterose 13 August 2010 12:33:04PM * 0 points

Yes, I actually agree with you about nuclear war (and did before I mentioned it!) - I should have picked a better example. How about existential risk from asteroid strikes?

Several points:

(1) Nuclear war could still cause an astronomical waste in the form that I discuss here.

(2) Are you sure that the marginal contribution you can make to the issue getting the least attention is the greatest? The issues getting the least attention may get little attention precisely because people know that there's nothing that can be done about them.

(3) If you satisfactorily address my point (1), points (2) and (3) will remain.

Comment author: timtyler 13 August 2010 08:17:14PM 1 point

p(asteroid strike/year) is pretty low. Most people are not too worried.

Comment author: multifoliaterose 14 August 2010 09:31:07AM 0 points

The question is whether at present it's possible to lower existential risk more by funding and advocating FAI research than by funding and advocating an asteroid strike prevention program. Despite the low probability of an asteroid strike, I don't think that the answer to this question is obvious.
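To make the comparison concrete, here is a minimal sketch of the expected-value arithmetic behind that question. Every probability, tractability factor, and cost below is a hypothetical placeholder, not an estimate made anywhere in this thread:

```python
# Hypothetical cost-effectiveness comparison of two existential-risk
# interventions. All inputs are illustrative placeholders.

def risk_reduction_per_dollar(p_extinction: float,
                              fraction_averted: float,
                              cost: float) -> float:
    """Expected reduction in extinction probability per dollar spent.

    p_extinction: baseline probability that the risk causes extinction
    fraction_averted: share of that risk the funded program removes
    cost: total program cost in dollars
    """
    return p_extinction * fraction_averted / cost

# Asteroid prevention: tiny baseline risk, but a tractable program.
asteroid = risk_reduction_per_dollar(p_extinction=1e-6,
                                     fraction_averted=0.5,
                                     cost=3e8)

# FAI research: larger assumed baseline risk, but deep uncertainty
# about whether funding actually reduces it (the disputed term).
fai = risk_reduction_per_dollar(p_extinction=1e-2,
                                fraction_averted=1e-4,
                                cost=3e8)

print(f"asteroid prevention: {asteroid:.2e} risk reduction per dollar")
print(f"FAI research:        {fai:.2e} risk reduction per dollar")
```

With these placeholders the two come out within a factor of a few of each other, which illustrates why the answer is not obvious: it turns almost entirely on the hard-to-estimate fraction_averted term for FAI research, the very quantity the rest of this thread disputes.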

Comment author: timtyler 14 August 2010 10:17:49AM * 1 point

I figure a pretty important thing is to get out of the current vulnerable position as soon as possible. To do that, a major thing we will need is intelligent machines - and so we should allocate resources to their development. Inevitably, that will include consideration of safety features. We can already see some damage when today's companies decide to duke it out - and today's companies are not very powerful compared to what is coming. The situation seems relatively pressing and urgent.

Comment author: xamdam 03 September 2010 03:13:31PM 0 points

"To do that, a major thing we will need is intelligent machines"

"that" = asteroids?

If yes, I highly doubt we need machines significantly more intelligent than existing military technology adapted for the purpose.

Comment author: timtyler 03 September 2010 08:13:28PM * 0 points

That would hardly be a way to "get out of the current vulnerable position as soon as possible".

Comment author: multifoliaterose 14 August 2010 10:34:21AM 0 points

I agree that friendly intelligent machines would be a great asset in assuaging future existential risk.

My current position is that, at present, devoting resources to developing safe intelligent machines is so unlikely to substantially increase the probability that we'll actually develop them that funding and advocating an asteroid strike prevention program is likely to reduce existential risk more than funding and advocating FAI research.

I may be wrong, but I would require a careful argument for the opposite position before changing my mind.

Comment author: timtyler 14 August 2010 10:44:58AM * 1 point

Asteroid strikes are very unlikely - so beating them is a really low standard, which, IMO, machine intelligence projects do with ease. Funding the area sensibly would help make it happen - by most accounts. Detailed justification is beyond the scope of this comment, though.

Comment author: multifoliaterose 14 August 2010 10:57:43AM 1 point

Assuming that an asteroid strike prevention program costs no more than a few hundred million dollars, I don't think that it's easy to do better at assuaging existential risk than funding such a program (though it may be possible). I intend to explain why I think it's so hard to lower existential risk through funding FAI research later on (not sure when, but within a few months).

I'd be interested in hearing your detailed justification. Maybe you can make a string of top-level posts at some point.

Comment author: Vladimir_Nesov 14 August 2010 10:38:28AM 0 points

"My current position is that, at present, devoting resources to developing safe intelligent machines is so unlikely to substantially increase the probability that we'll actually develop them that funding and advocating an asteroid strike prevention program is likely to reduce existential risk more than funding and advocating FAI research."

Considering the larger problem statement (technically understanding what we value, as opposed to actually building an AGI with those values), what do you see as distinguishing a situation where we are ready to consider the problem from a situation where we are not? How can one come to such a conclusion without actually considering the problem?

Comment author: multifoliaterose 14 August 2010 10:52:03AM 0 points

I think that understanding what we value is very important. I'm not convinced that developing a technical understanding of what we value is the most important thing right now.

I imagine that for some people, working on developing a technical understanding of what we value is the best thing that they could be doing. Different people have different strengths, and this leads to the utilitarian-optimal course of action varying from person to person.

I don't believe that the best thing for me to do is to study human values. I also don't believe that at the margin, funding researchers who study human values is the best use of money.

Of course, my thinking on these matters is subject to change with incoming information. But to be convinced of what I think you're saying, I'd need to see a more detailed argument than the one that you've offered so far.

If you'd like to correspond by email about these things, I'd be happy to say more about my thinking. Feel free to PM me with your email address.

Comment author: timtyler 13 August 2010 08:15:57PM 1 point

p(machine intelligence) is going up annually - while p(nuclear holocaust) has been going down for a long time now. Neither is likely to obliterate civilisation - but machine intelligence could nonetheless be disruptive.