SarahC comments on The Importance of Self-Doubt - Less Wrong

Post author: multifoliaterose 19 August 2010 10:47PM


Comment author: [deleted] 20 August 2010 08:57:17PM 15 points

I don't think there's any point doing armchair diagnoses and accusing people of delusions of grandeur. I wouldn't go so far as to claim that Eliezer needs more self-doubt, in a psychological sense. That's an awfully personal statement to make publicly. It's not self-confidence I'm worried about, it's insularity.

Here's the thing. The whole SIAI project is not publicly affiliated with (as far as I've heard) other, more mainstream institutions with relevant expertise. Universities, government agencies, corporations. We don't have guest posts from Dr. X or Think Tank Fellow Y. The ideas related to friendly AI and existential risk have not been shopped to academia or evaluated by scientists in the usual way. So they're not being tested stringently enough.

It's speculative. It feels fuzzy to me -- I'm not an expert in AI, but I have some education in math, and things feel fuzzy around here.

If you want to claim you're working on a project that may save the world, fine. But there's got to be more to show for it, sooner or later, than speculative essays. At the very least, people worried about unfriendly AI will have to gather data and come up with some kind of statistical study that gives evidence of a threat! Look at climate science. For all the foibles and challenges of the climate change movement, those people actually gather data, create prediction models, predict the results of mitigating policies -- it works more or less like science.

If I'm completely off base here and SIAI is going to get to the science soon, I apologize, and I'll shut up about this for a while.

But look. All this advice about the "sin of underconfidence" is all very well (and actually I've taken it to heart somewhat). But if you're going to go test your abilities, then test them. Against skeptics. Against people who'll look at you like you're a rotten fish if you don't have a graduate degree. Get something about FAI peer-reviewed or published by a reputable press. Show us something.

Sorry to be so blunt. It's just that I want this to be something. And I have my doubts because there doesn't seem to be enough in this floating world in the way of unmistakable, concrete achievement.

Comment author: steven0461 20 August 2010 09:48:22PM * 13 points

The whole SIAI project is not publicly affiliated with (as far as I've heard) other, more mainstream institutions with relevant expertise. Universities, government agencies, corporations. We don't have guest posts from Dr. X or Think Tank Fellow Y.

According to the about page, LW is brought to you by the Future of Humanity Institute at Oxford University. Does this count? Many Dr. Xes have spoken at the Singularity Summits.

At the very least, people worried about unfriendly AI will have to gather data and come up with some kind of statistical study that gives evidence of a threat!

It's not clear how one would use past data to give evidence for or against a UFAI threat in any straightforward way. There are various kinds of indirect evidence that could be presented, and SIAI has indeed been trying more in the last year or two to publish articles and give conference talks presenting such evidence.

The points that SIAI would do better if it had better PR, had more transparency, published more in the scientific literature, etc., are all well taken, but these things use limited resources, which to me makes it sound strange to use them as arguments for directing funding elsewhere.

Comment author: [deleted] 20 August 2010 10:06:58PM 5 points

My post was by way of explaining why some people (including myself) doubt the claims of SIAI. People doubt claims when, compared to other claims, they're not justified as rigorously, or haven't met certain public standards. Why do I agree with the main post that Eliezer isn't justified in his opinion of his own importance (and SIAI's importance)? Because there isn't (yet) a lot beyond speculation here.

I understand about limited resources. If I were trying to run a foundation like SIAI, I might do exactly what it's doing, at first, and then try to get the academic credentials. But as an outside person, trying to determine: is this worth my time? Is this worth further study? Is this a field I could work in? Is this worth my giving away part of my (currently puny) income in donations? I'm likely to hold off until I see something stronger.

And I'm likely to be turned off by statements with a tone that assumes anyone sufficiently rational should already be on board. Well, no! It's not an obvious, open-and-shut deal.

What if there were an organization composed of idealistic, speculative types who, unknowingly, got themselves to believe something completely false based on sketchy philosophical arguments? They might look a lot like SIAI. Could an outside observer distinguish fruitful non-mainstream speculation from pointless non-mainstream speculation?

Comment author: timtyler 21 August 2010 06:59:06AM 0 points

I think they are working on their "academic credentials":

http://singinst.org/grants/challenge

...lists some 13 academic papers in various stages of development.

Comment author: torekp 22 August 2010 01:23:12AM 1 point

Thanks for that last link. The paper on "Changing the frame of AI futurism" is extremely relevant to this series of posts.

Comment author: Morendil 20 August 2010 09:10:06PM 5 points

We don't have guest posts from Dr. X or Think Tank Fellow Y.

Possibly because this blog is Less Wrong, positioned as "a community blog devoted to refining the art of human rationality", and not as the SIAI blog, or an existential risk blog, or an FAI blog.

Comment author: multifoliaterose 21 August 2010 04:59:32AM 4 points

I don't think there's any point doing armchair diagnoses and accusing people of delusions of grandeur.

I respectfully disagree with this statement, at least as an absolute. I believe that:

(A) In situations in which people are making significant life choices based on person X's claims and person X exhibits behavior which is highly correlated with delusions of grandeur, it's appropriate to raise the possibility that person X's claims arise from delusions of grandeur and ask that person X publicly address this possibility.

(B) When one raises the possibility that somebody is suffering from delusions of grandeur, this should be done in as polite and nonconfrontational a way as possible given the nature of the topic.

I believe that if more people adopted these practices, it would raise the sanity waterline.

I believe that the situation with respect to Eliezer and portions of the LW community is as in (A) and that I made a good faith effort at (B).

Comment author: WrongBot 20 August 2010 09:13:15PM 7 points

Here's the thing. The whole SIAI project is not publicly affiliated with (as far as I've heard) other, more mainstream institutions with relevant expertise.

LessWrong is itself a joint project of the SIAI and the Future of Humanity Institute at Oxford. Researchers at the SIAI have published these academic papers. The Singularity Summit's website includes a lengthy list of partners, including Google and Scientific American.

The SIAI and Eliezer may not have done the best possible job of engaging with the academic mainstream, but they haven't done a terrible one either, and accusations that they aren't trying are, so far as I am able to determine, factually inaccurate.

Comment author: Perplexed 21 August 2010 05:30:53PM * 6 points

Researchers at the SIAI have published these academic papers.

But those don't really qualify as "published academic papers" in the sense that those terms are usually understood in academia. They are instead "research reports" or "technical reports".

The one additional hoop that these high-quality articles should pass through before they earn the status of true academic publications is to actually be published - i.e., accepted by a reputable (paper or online) journal. This hoop exists for a variety of reasons, including the claim that the research has been subjected to at least a modicum of unbiased review, a locus for post-publication critique (at least a journal letters-to-the-editor column), and a promise of stable curatorship. Plus inclusion in citation indexes and the like.

Perhaps the FHI should sponsor a journal, to serve as a venue and repository for research articles like these.

Comment author: CarlShulman 21 August 2010 05:48:02PM 1 point

Perhaps the FHI should sponsor a journal

There are already relevant niche philosophy journals (Ethics and Information Technology, Minds and Machines, and Philosophy and Technology). Robin Hanson's "Economic Growth Given Machine Intelligence" has been accepted in an AI journal, and there are forecasting journals like Technological Forecasting and Social Change. For more unusual topics, there's the Journal of Evolution and Technology. SIAI folk are working to submit the current crop of papers for publication.

Comment author: Perplexed 21 August 2010 05:53:17PM 1 point

Cool!

Comment author: [deleted] 20 August 2010 09:25:43PM 4 points

Okay, I take that back. I did know about the connection between SIAI and FHI and Oxford.

What are these academic papers published in? A lot of them don't provide that information; one is in Global Catastrophic Risks.

At any rate, I exaggerated in saying there isn't any engagement with the academic mainstream. But it looks like it's not very much. And I recall a post of Eliezer's that said, roughly, "It's not that academia has rejected my ideas, it's that I haven't done the work of trying to get academia's attention." Well, why not?

Comment author: WrongBot 20 August 2010 09:53:51PM 4 points

And I recall a post of Eliezer's that said, roughly, "It's not that academia has rejected my ideas, it's that I haven't done the work of trying to get academia's attention." Well, why not?

Limited time and more important objectives, I would assume. Most academic work is not substantially better than trial-and-error in terms of usefulness and accuracy; it gets by on volume. Volume is a liability in Friendliness research, because even small errors can have outsized effects. (Like the accidental creation of a paperclipper.)

Comment author: Eliezer_Yudkowsky 20 August 2010 09:39:34PM 0 points

If you want it done, feel free to do it yourself. :)

Comment author: wedrifid 21 August 2010 08:52:29AM 0 points

The SIAI and Eliezer may not have done the best possible job of engaging with the academic mainstream, but they haven't done a terrible one either, and accusations that they aren't trying are, so far as I am able to determine, factually inaccurate.

... particularly inasmuch as they have become (somewhat) obsolete.

Comment author: MatthewBaker 05 July 2011 11:08:11PM 0 points

Can you clarify please?

Comment author: wedrifid 07 July 2011 05:11:44PM 1 point

Can you clarify please?

Basically, no. Whatever I meant seems to have been lost to me in the temporal context.

Comment author: MatthewBaker 07 July 2011 05:25:40PM 0 points

No worries, I do the same thing sometimes.

Comment author: wedrifid 21 August 2010 10:16:43PM 2 points

I agree with your conclusion but not this part:

If you want to claim you're working on a project that may save the world, fine. But there's got to be more to show for it, sooner or later, than speculative essays. At the very least, people worried about unfriendly AI will have to gather data and come up with some kind of statistical study that gives evidence of a threat! Look at climate science. For all the foibles and challenges of the climate change movement, those people actually gather data, create prediction models, predict the results of mitigating policies -- it works more or less like science.

I categorically do not want statistical studies of the type you mention done. I do want solid academic research done, but not experiments. Some statistics on, for example, human predictions vs. actual time till successful completion on tasks of various difficulties would be useful. But these do not appear to be the type of studies you are asking for, nor do they target the most significant parts of the conclusion.

You are not entitled to that particular proof.

EDIT: The 'entitlement' link was broken.

Comment author: timtyler 21 August 2010 06:55:20AM * 2 points

We don't have guest posts from Dr. X or Think Tank Fellow Y.

There are these fellows:

Some of them have contributed here:

Comment author: Perplexed 21 August 2010 05:29:59AM 1 point

I only wish it were possible to upvote this comment more than once.