Jordan comments on Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It) - Less Wrong

Post author: ciphergoth 30 October 2010 09:31AM

Comment author: Jordan 30 October 2010 04:51:48PM 1 point

Certainly, but it is an argument for the goodness of pursuing a course of action that is known to have a chance of being good.

There are roughly two types of options:

1) A plan that, if successful, will yield something good with 100% certainty, but has essentially 0% chance of succeeding to begin with.

2) A plan that, if successful, may or may not be good, with a non-zero chance of success.

Clearly type 2 is a much, much larger class, and it includes plans not worth pursuing. But it may include plans worth pursuing as well. If Friendly AI is as hard as everyone makes it out to be, I'm baffled that type 2 plans aren't given more exposure. Indeed, they should be the default, with reliance on a type 1 plan kept as a fallback, given more weight only on extraordinary evidence that every type 2 plan is as assuredly dangerous as FAI is impossible.
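
To make the comparison concrete, here is a minimal sketch in Python. All the probabilities and payoffs are hypothetical placeholders, chosen only to show how the two plan types trade success odds against certainty of goodness:

```python
# Expected value of a plan under Jordan's two types; all numbers are made up.
def expected_value(p_success, p_good_given_success, value_good=1.0, value_bad=0.0):
    """A plan must first succeed; a successful outcome is then good
    only with some probability."""
    p_bad_given_success = 1 - p_good_given_success
    return p_success * (p_good_given_success * value_good
                        + p_bad_given_success * value_bad)

# Type 1: good with ~100% certainty if it succeeds, but ~0% chance of success.
print(expected_value(p_success=1e-6, p_good_given_success=1.0))   # ~1e-06

# Type 2: uncertain goodness, but a non-trivial chance of success.
print(expected_value(p_success=0.10, p_good_given_success=0.05))  # 0.005
```

On these made-up inputs the type 2 plan wins; the replies below dispute exactly these inputs: that p_good_given_success is effectively zero without a proof, and that value_bad can be large enough to dominate.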

Comment author: jimmy 30 October 2010 09:21:02PM 1 point

The argument isn't that we should throw away good plans because there's some small chance of their being bad even if successful.

The argument is that the target is small enough that anything but a proof still leaves you with a ~0% chance of getting a good outcome.
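
In the terms of the sketch above, this premise says that, absent a proof, the good region of outcome space is so small that p_good_given_success is effectively zero. The fraction below is an arbitrary stand-in for "a small target", not an estimate:

```python
# jimmy's small-target premise, with an illustrative (not estimated) target size.
p_success = 0.10         # hypothetical chance the plan succeeds at all
p_good_unproven = 1e-12  # hypothetical fraction of successful outcomes that are good
print(p_success * p_good_unproven)  # 1e-13: effectively a 0% chance of a good outcome
```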

Comment author: Vladimir_Nesov 30 October 2010 05:03:01PM 1 point

> (1) In any case, his argument that it may not be possible to have provable Friendliness, and that it makes more sense to take an incremental approach to AGI than not to do AGI until Friendliness is proven, seems reasonable.

That it's impossible to find a course of action that is knowably good is not an argument for the goodness of pursuing a course of action that isn't known to be good.

> Certainly, but it is an argument for (2) the goodness of pursuing a course of action that is known to have a chance of being good.

You point out a correct statement (2) for which the incorrect argument (1) apparently argues. But an argument's reaching a correct conclusion doesn't establish that the argument (1) itself is correct.

(A course of action that is known to have a chance of being good is already known to be good, in proportion to that chance (unless it's also known to have a sufficient chance of being sufficiently bad). For an AI to be Friendly doesn't require absolute certainty of its goodness, but beware the fallacy of gray.)
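
Nesov's parenthetical can be put as arithmetic: a chance of goodness contributes expected value in proportion to that chance, unless a sufficiently bad downside swamps it. The payoffs here are again illustrative assumptions, not estimates:

```python
# A 5% chance of a good outcome is worth something in proportion to that
# chance -- unless the other 95% is bad enough to dominate.
def expected_utility(p_good, value_good, value_bad):
    return p_good * value_good + (1 - p_good) * value_bad

print(expected_utility(0.05, value_good=1.0, value_bad=0.0))    #  0.05
print(expected_utility(0.05, value_good=1.0, value_bad=-10.0))  # -9.45
```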