lukeprog comments on A Scholarly AI Risk Wiki - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Certainly!
The academic community generally will not usefully engage with AI risk issues unless they (1) hear the arguments and already accept (or are open to) the major premises of the central arguments, or (2) come around to caring by way of personal conversations and relationships. Individual scholarly articles, whether in journals or in a wiki, don't generally persuade people to care. Everyone has their own list of objections to the basic arguments, and you can't answer all of them in a single article. (But again, a wiki format is better for this.)
The main value of journal papers or wiki articles on AI risk is not for people who have strong counter-intuitions (e.g. "more intelligence implies more benevolence," "machines can't be smarter than humans"). Instead, they are mostly of value to people who already accept the premises of the arguments but hadn't previously noticed their implications, or who are open enough to the ideas that with enough clear explanation they can grok it.
As long as you're not picky about which journal you get into, the cost of a journal article isn't much more than that of a good scholarly wiki article. Yes, you have to do more revisions, but in most cases you can ignore the revision suggestions you don't want to make, and just make the revisions you do want to make. (Whaddyaknow? Peer review comments are often helpful.) A journal article has some special credibility value in having gone through peer review, while a wiki article has some special usefulness value in virtue of being linked directly to articles that explain other parts of the landscape.
A journal article won't necessarily get read more than a wiki article, though. More people read Bostrom's preprints on his website than read the same articles in the actual journals. One exception to this is that journal articles sometimes get picked up by the popular media, whereas the media won't write a story about a wiki article. But as I said in the OP, it won't be that expensive to convert material from good scholarly wiki articles into journal articles and vice versa, so we can have both without much extra expense.
I'm not sure I answered your question, though: feel free to ask follow-up questions.
Heck yes. As near as I can tell, what happens today is this:
A scholarly AI risk wiki can help ubermaths (and non-ubermaths like myself) to (1) understand our picture of AI risk better, more quickly, more cheaply, and in a way that requires less personal investment from SI, (2) see that there is enough serious thought going into these issues that maybe they should take them seriously and contact us, (3) see where the bleeding edges of research are so that they might contribute to them, and more.
BTW, an easy way to score a conversation with SI staff is to write one of us an email that simply says "Hi, my name is _ _. I got a medal in the IMO or scored well on the Putnam, and I'm starting to think seriously about AI risk."
We currently spend a lot of time in conversation with promising people, in part because one really can't get a good idea of our current situation from the articles and blog posts that currently exist.
(These opinions are my own and may or may not represent those of other SI staffers, for example people who may or may not be named Eliezer Yudkowsky.)
Would it be useful for SIAI to run a math competition to identify ubermaths, or to try contacting people who have done well in existing competitions?
Yes. It's on our to-do list to reach out to such people, and also to look into sponsoring these competitions, but we haven't had time to do those things yet.
Who? People at FHI? Other AGI researchers?
(And thanks for good answers to the other questions.)
FHI researchers, AGI researchers, other domain experts, etc.