lukeprog comments on Self-skepticism: the first principle of rationality - Less Wrong

Post author: aaronsw 06 August 2012 12:51AM 36 points

Comment author: Jonathan_Graehl 06 August 2012 02:05:28AM 0 points

I hope SI will agree that the FAQ answer you linked is inadequate (it either overlooks some common objections, or lumps them together dismissively as unspecified obstacles to be revealed in the future). For example, "Building an AI seems hard: no human (even given a much longer lifespan) or team of humans will ever be smart enough to build something that leads to an intelligence explosion" and "computing devices that can realistically model an entire human brain (even taking shortcuts on parts that turn out to be irrelevant to intelligence) will be prohibitively expensive and slow" are both plausible.

And yes, even if the answer is improved, it does suggest a possible pattern. It could just be a lack of resources available to create high-quality, comprehensive answers to objections. Or it could be that SI is slightly more like Uri Geller than GiveWell is, in not doubting itself.

Is GiveWell really doubting itself or its premise - that it's worth spending extra money evaluating where to give money? (Actually, I think it is worth it, but that's not my point.)

Comment author: lukeprog 06 August 2012 04:22:30AM * 14 points

"I hope SI will agree that the FAQ answer you linked is inadequate"

As Randaly notes, an FAQ of short answers to common questions is the wrong place to look for in-depth analysis and detailed self-skepticism! Also, the FAQ links directly to papers that do respond in some detail to the objections mentioned.

Another point to make is that SI has enough of a culture of self-skepticism that its current mission (something like "put off the singularity until we can make it go well") is nearly the opposite of its original mission ("make the singularity happen as quickly as possible"). The story of that transition is here.

Comment author: Jonathan_Graehl 06 August 2012 08:53:34PM 0 points

Nonetheless, I object to the FAQ answer on the grounds that its ontology of objections to the likelihood of a singularity lacks a category that you'd expect to contain my objections.

I admit that the problem is only that the FAQ entry hasn't been perfected, not that the objections weren't considered in detail elsewhere. Thus, this is no evidence of Uri-Gellerism, and more evidence of a lack of resources spent on the FAQ.

Comment author: DaFranker 06 August 2012 09:34:41PM * 5 points

The SIAI seems very open to volunteer work and to any offered improvement on its current methodologies and strategies, provided the change can be shown to be an improvement.

Perhaps you'd like to curate a large library of objection precedents, along with the historical responses given to those objections, so as to facilitate their work of incrementally answering more and more objections?

Please keep in mind that anyone not trained in The Way who comes across the SIAI, and finds that its claims conflict with their beliefs, will often do their utmost to find the first unanswered criticism and declare the matter "Closed for not taking into account my objection!" If what currently qualifies as "the most common objections" is answered and the answers are displayed prominently, future newcomers will read those and then formulate new objections, which will become that time's "most common objections", and so on.

I'm sure this argument was made in better form somewhere else before, but I'm not sure the inherent difficulty of formulating a comprehensive, objection-proof FAQ has been clearly communicated.

To (very poorly) paraphrase Eliezer*: "The obvious solution to you just isn't. It wasn't obvious to X, it wasn't obvious to Y, and it certainly wasn't obvious to [Insert list of prominent specialists in the field] either, who all thought they had the obvious solution to building "safe" AIs."

This also holds true of objections to SIAI, AFAICT. What seems like an "obvious" rebuttal or objection, or a "common" complaint, to one person might not to the next person who comes along. Perhaps a more comprehensive list of "common objections" and official SIAI responses might help, but is it cost-effective in the overall strategy? Factor in the low likelihood that any particular objector who still objects after reading the FAQ would really be more convinced by a longer list of responses... I believe simple movement-building, or even mere propaganda, might be more cost-effective in raw counts of "people made aware of the issue" and "donors gained", and maybe even "researchers sensitized to the issue".

* Edit: Correct quote given in Grognor's reply below; thanks!

Comment author: Grognor 06 August 2012 10:38:07PM 5 points

Whether or not a non-self-modifying planning Oracle is the best solution in the end, it's not such an obvious privileged-point-in-solution-space that someone should be alarmed at SIAI not discussing it. This is empirically verifiable in the sense that 'tool AI' wasn't the obvious solution to e.g. John McCarthy, Marvin Minsky, I. J. Good, Peter Norvig, Vernor Vinge, or for that matter Isaac Asimov.

- Reply to Holden on Tool AI