mwaser comments on Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It) - Less Wrong

Post author: ciphergoth 30 October 2010 09:31AM (32 points)




Comment author: mwaser 30 October 2010 02:18:54PM 19 points

If Goertzel's claim that "SIAI's argument is so unclear that he had to construct it himself" can't be disproven by the simple expedient of posting a single link to an immediately available, well-structured, top-down argument, then SIAI should regard producing one as an obvious high-priority, high-value task. If the claim can be disproven by such a link, then that link needs to be advertised much more widely, since it seems that none of us are aware of it.

Comment author: ciphergoth 30 October 2010 04:40:34PM 6 points

The nearest thing to such a link is Artificial Intelligence as a Positive and Negative Factor in Global Risk [PDF].

But of course the argument is a little too large to set out entirely in one paper; the next nearest thing is What I Think, If Not Why, and the title itself shows in what way that's not what Goertzel was looking for.

Comment author: timtyler 31 October 2010 12:29:11PM 4 points

Artificial Intelligence as a Positive and Negative Factor in Global Risk

44 pages. I don't see anything much like the argument being asked for. The lack of an index doesn't help. The nearest thing I could find was this:

It may be tempting to ignore Artificial Intelligence because, of all the global risks discussed in this book, AI is hardest to discuss. We cannot consult actuarial statistics to assign small annual probabilities of catastrophe, as with asteroid strikes. We cannot use calculations from a precise, precisely confirmed model to rule out events or place infinitesimal upper bounds on their probability, as with proposed physics disasters. But this makes AI catastrophes more worrisome, not less.

He also claims that intelligence could increase rapidly with "dominant" probability.

I cannot perform a precise calculation using a precisely confirmed theory, but my current opinion is that sharp jumps in intelligence are possible, likely, and constitute the dominant probability.

This all seems pretty vague to me.

Comment author: timtyler 30 October 2010 02:44:14PM 4 points

Is this an official position in the first place? It seems to me that they want to give the impression that, without their efforts, the END IS NIGH, while not committing to any particular probability estimate, which would then become a target for critics.

Halloween update: It's been a while now, and I think the response has been poor. I think this means there is no such document (which explains Ben's attempted reconstruction). It isn't clear to me that producing such a document is a "high-priority task", since it isn't clear that the thesis is actually correct, or that the SIAI folks actually believe it.

Most of the participants here seem to be falling back on: even if it is unlikely, it could happen, and it would be devastating, so we should care a lot, which seems to be a less unreasonable and more defensible position.

Comment author: [deleted] 28 June 2014 04:42:38PM 0 points

It isn't clear to me that producing such a document is a "high-priority task", since it isn't clear that the thesis is actually correct, or that the SIAI folks actually believe it.

Most of the participants here seem to be falling back on: even if it is unlikely, it could happen, and it would be devastating, so we should care a lot, which seems to be a less unreasonable and more defensible position.

You lost me at that sharp swerve in the middle. Without probabilities attached to the scary idea, it is an absolutely meaningless concept. If its probability were 1/3^^^3, should we still care then? I could think of a trillion scary things that could happen, but without realistic estimates of how likely they are to happen, what does it matter?

Comment deleted 30 October 2010 03:13:22PM
Comment author: ciphergoth 30 October 2010 04:41:11PM 3 points

Multiple links are not an answer; to be what Goertzel was looking for, it has to be a single link that sets out the position.

Comment author: mwaser 30 October 2010 04:52:34PM 7 points

Heh. I've read virtually all of those links, and I still have the following three problems.

  1. Those links are about as internally self-consistent as the Bible.
  2. There are some fundamentally incorrect assumptions that have become gospel.
  3. Most people WON'T read all those links and will therefore be declared unfit to judge anything.

What I asked for was "an immediately available well-structured top-down argument".

It would be particularly useful and effective if SIAI recruited someone with the opposite point of view to co-develop a counter-argument thread, and let the two revolve around each other and resolve some of these issues (or, at least, highlight the basic differences of opinion that prevent their resolution). I'm more than willing to spend a ridiculous amount of time on such a task, and I'm sure that Ben would be more than willing to devote any time he can tear away from his busy schedule.

Comment author: Perplexed 30 October 2010 05:09:16PM 10 points

There are some fundamentally incorrect assumptions that have become gospel.

So go ahead and point them out. My guess is that in the ensuing debate it will be found that 1/4 of them are indeed fundamentally incorrect assumptions, 1/4 of them are arguably correct, and 1/2 of them are not really "assumptions that have become gospel". But until you provide your list, there is no way to know.