WrongBot comments on The Importance of Self-Doubt - Less Wrong

Post author: multifoliaterose 19 August 2010 10:47PM


Comment author: [deleted] 20 August 2010 08:57:17PM 15 points

I don't think there's any point in doing armchair diagnoses and accusing people of delusions of grandeur. I wouldn't go so far as to claim that Eliezer needs more self-doubt, in a psychological sense. That's an awfully personal statement to make publicly. It's not self-confidence I'm worried about, it's insularity.

Here's the thing. The whole SIAI project is not publicly affiliated with (as far as I've heard) other, more mainstream institutions with relevant expertise. Universities, government agencies, corporations. We don't have guest posts from Dr. X or Think Tank Fellow Y. The ideas related to friendly AI and existential risk have not been shopped to academia or evaluated by scientists in the usual way. So they're not being tested stringently enough.

It's speculative. It feels fuzzy to me -- I'm not an expert in AI, but I have some education in math, and things feel fuzzy around here.

If you want to claim you're working on a project that may save the world, fine. But there's got to be more to show for it, sooner or later, than speculative essays. At the very least, people worried about unfriendly AI will have to gather data and come up with some kind of statistical study that gives evidence of a threat! Look at climate science. For all the foibles and challenges of the climate change movement, those people actually gather data, create prediction models, predict the results of mitigating policies -- it works more or less like science.

If I'm completely off base here and SIAI is going to get to the science soon, I apologize, and I'll shut up about this for a while.

But look. All this advice about the "sin of underconfidence" is all very well (and actually I've taken it to heart somewhat). But if you're going to go test your abilities, then test them. Against skeptics. Against people who'll look at you like you're a rotten fish if you don't have a graduate degree. Get something about FAI peer-reviewed or published by a reputable press. Show us something.

Sorry to be so blunt. It's just that I want this to be something. And I have my doubts because there doesn't seem to be enough in this floating world in the way of unmistakable, concrete achievement.

Comment author: WrongBot 20 August 2010 09:13:15PM 7 points

Here's the thing. The whole SIAI project is not publicly affiliated with (as far as I've heard) other, more mainstream institutions with relevant expertise.

LessWrong is itself a joint project of the SIAI and the Future of Humanity Institute at Oxford. Researchers at the SIAI have published these academic papers. The Singularity Summit's website includes a lengthy list of partners, including Google and Scientific American.

The SIAI and Eliezer may not have done the best possible job of engaging with the academic mainstream, but they haven't done a terrible one either, and accusations that they aren't trying are, so far as I am able to determine, factually inaccurate.

Comment author: Perplexed 21 August 2010 05:30:53PM 6 points

Researchers at the SIAI have published these academic papers.

But those don't really qualify as "published academic papers" in the sense that those terms are usually understood in academia. They are instead "research reports" or "technical reports".

The one additional hoop these high-quality articles should pass through before they earn the status of true academic publications is to actually be published - i.e., accepted by a reputable (paper or online) journal. This hoop exists for a variety of reasons: it certifies that the research has been subjected to at least a modicum of unbiased review, provides a locus for post-publication critique (at least a letters-to-the-editor column), and promises stable curatorship. Plus inclusion in citation indexes and the like.

Perhaps the FHI should sponsor a journal, to serve as a venue and repository for research articles like these.

Comment author: CarlShulman 21 August 2010 05:48:02PM 1 point

Perhaps the FHI should sponsor a journal

There are already relevant niche philosophy journals (Ethics and Information Technology, Minds and Machines, and Philosophy and Technology). Robin Hanson's "Economic Growth Given Machine Intelligence" has been accepted in an AI journal, and there are forecasting journals like Technological Forecasting and Social Change. For more unusual topics, there's the Journal of Evolution and Technology. SIAI folk are working to submit the current crop of papers for publication.

Comment author: Perplexed 21 August 2010 05:53:17PM 1 point

Cool!

Comment author: [deleted] 20 August 2010 09:25:43PM 4 points

Okay, I take that back. I did know about the connection between SIAI, FHI, and Oxford.

What are these academic papers published in? A lot of them don't provide that information; one is in Global Catastrophic Risks.

At any rate, I exaggerated in saying there isn't any engagement with the academic mainstream. But it looks like it's not very much. And I recall a post of Eliezer's that said, roughly, "It's not that academia has rejected my ideas, it's that I haven't done the work of trying to get academia's attention." Well, why not?

Comment author: WrongBot 20 August 2010 09:53:51PM 4 points

And I recall a post of Eliezer's that said, roughly, "It's not that academia has rejected my ideas, it's that I haven't done the work of trying to get academia's attention." Well, why not?

Limited time and more important objectives, I would assume. Most academic work is not substantially better than trial-and-error in terms of usefulness and accuracy; it gets by on volume. Volume is a detriment in Friendliness research, because errors can have large detrimental effects relative to the size of the error. (Like the accidental creation of a paperclipper.)

Comment author: Eliezer_Yudkowsky 20 August 2010 09:39:34PM 0 points

If you want it done, feel free to do it yourself. :)

Comment author: wedrifid 21 August 2010 08:52:29AM 0 points

The SIAI and Eliezer may not have done the best possible job of engaging with the academic mainstream, but they haven't done a terrible one either, and accusations that they aren't trying are, so far as I am able to determine, factually inaccurate.

... particularly in as much as they have become (somewhat) obsolete.

Comment author: MatthewBaker 05 July 2011 11:08:11PM 0 points

Can you clarify please?

Comment author: wedrifid 07 July 2011 05:11:44PM 1 point

Can you clarify please?

Basically, no. Whatever I meant seems to have been lost to me in the temporal context.

Comment author: MatthewBaker 07 July 2011 05:25:40PM 0 points

No worries, I do the same thing sometimes.