I feel that a lot of what's in LW (written by Eliezer or others) should be in mainstream academia. Not necessarily the most controversial views (the insistence on the many-worlds hypothesis, cryonics, FAI...), but much of the work on overcoming biases should be there, where it can be criticized and improved.

For example, a few of the debiasing methods, and a more formal explanation of LW's peculiar solution to free will (these are only two examples).

I don't really get why LW's content isn't in mainstream academia to be honest. 

I get that peer review is far from perfect (though it's still the best we have, and post-publication peer review is improving; see PubPeer), and that some reviewers would too readily dismiss LW's content. But not all of them. Many would play by the rules and provide genuine criticism during peer review (which would of course lead to revisions of the content), along with further criticism after publication. This is, in my opinion, something that has to happen.

LW, Eliezer, etc, can't stay on the "crank" level, not playing by the rules, publishing books and no papers. Blogs are indeed faster and reach more people, but I'm not arguing for publishing only in academia. Blogs can (and should) continue.

Tell me what you think; I seem to be missing something on this topic.

A lot of LW content is based on stuff already in academia.

In addition to things mentioned in other comments, knowing about biases can hurt people. If you try to teach rationality to people who don't care about rationality... well, most of them will just memorize your passwords and use them to impress people... and most of the remaining ones will twist the lessons to better defend their existing irrational views.

That means, if you could make (a modified version of) LessWrong a part of curriculum, ten years down the line I would expect not multitudes of rationalists, but rather... people writing blogs and books using "Bayesian" arguments for Jesus, people using their "priors" to defend homeopathy against scientific research, and people claiming they can calculate Solomonoff Induction and that it proves whatever crazy idea they want to prove.

Okay, this argument is not completely absurd -- I know a person who, after reading a few LW articles, produced a "theory" containing Tegmark multiverses, quantum mumbo jumbo, et cetera, to prove that evolution does not exist and the Catholic Church was right about everything; and insisted that as a rationalist I should agree with and accept his idea. Then he started promoting his version of "rationality" on Facebook among his religious friends, but luckily, the theory was too complicated for them to follow. However, the same argument could also be used against publishing LessWrong content online, where anyone can read it. And I wouldn't argue against having LW online; but having LW at schools still feels wrong. How rational is this?

One argument could be that LW selects for the kind of people who would voluntarily read LW despite having many other alternatives on the internet. Most people who would be hurt by LW choose to read some other website instead. If we bring LW to schools, we lose this filter. But that does not prove that having LW at schools would be a net loss. Maybe it is better to have 1000 people using LW correctly and 9000 people using LW incorrectly, than having 100 people using LW correctly and 50 people using LW incorrectly. Maybe the absolute number of rationalists is more important than the fraction of people exposed to LW material who get it right. There are already many kinds of fools, but we would benefit from having a larger rationalist community.

Maybe it's using LW material at school that seems wrong. In addition to the influence of the materials themselves, you also have the influence of the teacher, and of the classmates. The teacher may happen to be the "clever arguer" who abuses LW material to support their stupid ideas; or the classmates may come with conclusions the teacher will not disprove. Imagine all kinds of political mindkilling that could use LW as a soldier, insisting that it's only people who disagree with them who are mindkilled. -- But again, this seems like an argument against having schools in general.

Well, I don't know... I guess we could try this once. Just don't get your hopes too high.

I don't really get why LW's content isn't in mainstream academia to be honest.

This is a very good question to think about :).

It probably says some mixture of bad things about academia (e.g. fear of looking silly) and bad things about LW (e.g. insufficient money to run randomized controlled trials, insufficient dedication to cite lots of related literature for every post the way lukeprog does).

Academia has flaws, like publication bias, closed-access journals, expensive textbooks that don't allow comments, slower conversations, credentialism, lousy writing, and maybe tenure. Less Wrong has flaws, like the fact that all its contributors are part-time, people think we're weird, it's not rewarding to write for, and maybe issues with the voting system. A blue-sky kind of question to ask is how you might lay the foundation for something that outdoes both. What would academia look like if it were fully optimized for the internet age? (This question will look increasingly relevant if college enrollments continue declining and/or we start moving to a Coursera-type online education model.)

One interesting thing to notice about academia is that it’s not monolithic the way Less Wrong is. You’ve got philosophers, sociologists, psychologists, economists, etc. each with their own somewhat disjunct body of knowledge. You could argue that they don’t talk to each other as much as they should, and that this is a problem. But I also see some advantages: mastering a single field is a more manageable job for a grad student than mastering many, and separately evolving bodies of thought might form a kind of system of checks against one another (e.g. if it weren’t for the psychologists, maybe economists would still be acting as though humans were perfectly rational agents).

Crazier proposals might have all the academics making their living off of a heavily subsidized prediction market or a giant Bayesian network of all humanity's knowledge that papers do updates on (screw confidence intervals). Having Scholarpedia cover the same range of ground as Wikipedia, but with cutting edge info and more rigor, is a tamer suggestion.

Interesting recent article

I don't think there should be one system of knowledge creation. It's okay to have various different systems in our society that work with different incentives.

I think an organisation like CFAR, provided it's well funded, is more likely to invent effective techniques for rational thinking than academic psychologists are.

GiveWell is also an organisation that creates valuable knowledge. They incentivise nonprofits to run good studies that demonstrate their effectiveness.

a giant Bayesian network of all humanity's knowledge

I think there's room for a crowdsourced version of this that works like Wikipedia.
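For what it's worth, here's a minimal sketch of what a single "update" in such a network might look like (a toy illustration, not a description of any real system; all names and numbers below are hypothetical): each paper reports a likelihood ratio for a claim, and the shared record multiplies it into the current odds.

```python
# Toy model of one claim in a shared "Bayesian knowledge base":
# each paper reports a likelihood ratio P(evidence | H) / P(evidence | not H),
# and the record tracks the resulting posterior instead of confidence intervals.

def update_odds(prior_odds: float, likelihood_ratio: float) -> float:
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

def probability(odds: float) -> float:
    """Convert odds (for:against) to a probability."""
    return odds / (1.0 + odds)

# Hypothetical numbers: the claim starts at 20% credence (odds of 1:4),
# then three papers report likelihood ratios of 3.0, 0.8, and 2.5.
odds = 0.25
for paper_lr in [3.0, 0.8, 2.5]:
    odds = update_odds(odds, paper_lr)

print(f"posterior P(H) = {probability(odds):.2f}")  # prints 0.60
```

The hard parts, of course, are exactly what this toy ignores: correlated evidence, disputed likelihoods, and the structure linking claims to one another.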

Having a science replacement grow out of GiveWell would be pretty interesting. I'll bet they'd do a better job of prioritizing research areas than government agencies like the National Science Foundation.

If you talk about "science replacement" you have to think about what science is. The German word for science is "Wissenschaft": the creation of knowledge, in particular the creation of reliable knowledge. GiveWell is engaged in that project.

According to Kuhn, part of a scientific revolution is that the questions that get asked change.

LW itself isn't an organisation. It's a blog. There are associated organisations such as MIRI, CFAR and FHI. There are also a bunch of members of LW who do publish academic papers.

To the extent that we have working debiasing methods, as far as I understand, CFAR is supposed to formalize them and publish studies showing that they work.

MIRI does publish more papers than it used to.

If you think that an idea on LW that isn't in academia should be in academia, you can also see that as your mission. Write an academic article to present the idea.

MIRI self-publishes, if I'm not wrong.

Why not publish in mainstream academia?

You are wrong; MIRI doesn't only self-publish: https://intelligence.org/all-publications/ There are multiple journal articles and conference papers in the list.

Indeed, I stand corrected.

I still find a rather large number of self-published articles in this list. I find the idea of self-publishing articles a bit self-defeating :/

This is old and it seems like it was a bit controversial, but lukeprog wrote a post with some reasons that publishing in mainstream academia might be suboptimal: Reasons for SIAI to not publish in mainstream journals

There is no dichotomy here: it is possible to both keep blogs and publish in mainstream journals.

work on overcoming biases, LW's peculiar solution to free will

None of that is novel, or even peculiar. The point of LessWrong is to make it fresh, accessible, and well written. The bias writings are derived from academic work on biases, published by people doing experiments. The "solution to free will" is just basic clear thinking; if you were confused about free will when you arrived, you're encouraged not to read the solution but to solve it yourself as an exercise, and then check whether the posted solution matches your own. It's not claiming to be novel philosophy or anything. This is primarily a hub for the dispersal of existing good ideas in a better format.

Novel ideas tend to be more narrow, more specific formulations. Timeless decision theory, for example, is specific and fairly novel, and MIRI published it.

I agree, most writings are derived from academic works.

That may seem weird, but I don't think "basic clear thinking" should be excluded from academia. Philosophy problems should, in my opinion, not simply be something we "solve ourselves"; they should enter academia in as formal a form as possible. I may also simply be unaware of similar existing work on this problem.

That said, I wasn't confused by this problem either; I only became more confused after reading LW and asking people around me what they thought - it turned out it really was something that bothered people.

And TDT was self-published... Why not in mainstream academia?

I may also simply be unaware of similar existing work on this problem.

Recorded compatibilist conceptions of free will are several centuries older than academia, so I don't think it was ever really a publishable insight. (You got it on your own, I got it on my own, and so have a lot of people throughout history - it's just that not everyone agrees.)

I don't know about the second question... assuming the premise is true, I suppose either they did not try or it wasn't accepted. I'm not sufficiently knowledgeable about academic philosophy to speculate!

Which parts of academia? There is zero experimental research that is up to any academic standard. There is no numerical modeling in any area that I can think of. There is some foundational AI alignment work, which is mostly math, and any interesting results there will likely be published. There are some interesting philosophical ideas, I suppose; I'm not sure whether they are good enough to be published in academic journals. There are plenty of "raising the sanity waterline" efforts, but those hardly qualify as research. What am I missing?

LessWrong doesn't use academic terminology for a lot of things, so translation could be an issue. There have been a couple of posts featuring academic-quality experimental research. Of course, most of those were being published concurrently.

LessWrong doesn't use academic terminology for a lot of things, so translation could be an issue.

That's an understatement.

Well, good news: the topic of getting rationality into academia is something I'm actually working on myself as an academic. For example, I just published an op-ed in one of the premier higher-education media channels on how I, as a professor, used rationality-informed strategies to deal with mental illness in the classroom. Earlier, I published some research informed by rationality concepts, such as agency.

As part of my broader project of promoting rationality widely, I'm also starting up a research project on debiasing the planning fallacy using rationality-informed strategies and on finding life goals using rationality strategies. I'm planning to publish research papers on these topics, naturally.

So I'm guessing one reason rationality is only beginning to enter mainstream academia is the very long cycle of conducting research and then publishing it. Another set of reasons has to do with academics like myself taking time to figure out how to integrate rationality into their research fields.

LW, Eliezer, etc, can't stay on the "crank" level, not playing by the rules, publishing books and no papers.

Why not? What's stopping them?

One of the rules is that beginning academics must not publish work like this. They have to publish cutting edge research for a long time before they are allowed to synthesize or popularize.

What's stopping them?

What's stopping them is that by not playing by conventional rules, they will not get official kudos in the field. People like Bostrom, etc., who do play by the rules, will. One might not care about official kudos per se, but one should -- people with official kudos are the ones with actual sway on policy, etc. Important people read Bostrom's book; no one important reads EY's stuff.

I think this is the vital thing: not 'does academia work perfectly', but 'can you work more effectively THROUGH academia'. I don't know for sure that the answer is yes, but it definitely seems like one key way to influence policy. Decision makers in politics and elsewhere aren't going to spend all their time looking at each field in detail; they'll trust whatever systems exist in each field to produce people who seem qualified to give a qualified opinion.

That's not really true. You can write a review article as one of your first publications and use it to lay out what you intend to work on. People won't take your review article as seriously as they will one written by Dr. Bigshot et al., but there certainly aren't any rules against it.

Also, the NSF is thrilled if you're a beginner and you're doing any sort of popular outreach. They love pop science blogs.

NSF requires many things that are bad for your career. This may well be the point, to counterbalance other sources of judgement.

Outside the purview of the NSF, here is an essay on how history is not written, by a historian who was, at the time, blogging anonymously. She was afraid of her colleagues seeing her blog about topics close to her professional interests, while she was open about writing essays on manga.

[Beginning academics] have to publish cutting edge research for a long time before they are allowed to synthesize or popularize.

Indeed, and I think a case can be made that this is exactly backwards (if we must have such "rules" at all).

Ok, but before we turn everything upside down, can we think a little about why academia ended up being the way it is? Hanson had some good status-based explanations about the academic career trajectory.

If you haven't done cutting edge stuff, the worry is you don't know what you are talking about yet, and shouldn't be a public-facing part of science.

Also, there are well-known popularizers who aren't significant academics, e.g. Bill Nye (though he did do some engineering work).

Aren't most popularizers of academia de facto journalists who write popular articles about science?

Journalists and scientists that write popular exposition books. The former are generally terrible (journalists tend to have an education that emphasizes writing, not numeracy).

The former are generally terrible

But that doesn't stop them from doing it or finding an audience.

Yes, but no one important takes them seriously.

I think plenty of politically important people read the science section of the New York Times and of other newspapers.

If important people listened only to scientists to understand science, we would have a different policy on global warming.

For the same reason milesmathis (google it, have fun) isn't taken seriously by the mainstream, and shouldn't be. Not "playing by the rules" usually doesn't work - you end up with an unending amount of crackpottery in what never gets formally published: books, blogs, etc.

Not publishing in the mainstream while publishing books and self-published articles is the crackpot's artillery, unfortunately.

Think like the mainstream: given the amount of crazy stuff present on the internet that couldn't be published because it was, indeed, crazy, should I care about this particular guy who doesn't publish anything but books (or self-published articles)? The unfortunate answer is no.

Ok, but before we turn everything upside down, can we think a little about why academia ended up being the way it is?

Why do you assume I haven't?

Stop expecting short inferential distances!

Because you wrote one sentence without actually giving the argument. So I went with my prior on your argument. And my prior about arguments for drastically changing the existing order of things is that they aren't right.

Because you wrote one sentence without actually giving the argument. So I went with my prior on your argument.

That's what I'm suggesting you not do.

Writing out arguments, and in general, making one's thought processes transparent, is a lot of work. We benefit greatly by not having a norm of only stating conclusions that are a small inferential distance away from public knowledge.

I'm not saying you should (necessarily) believe what I say, just because I say it. You just shouldn't jump to the conclusion that I don't have justifications beyond what I have stated or am willing to bother stating.

Cf. Jonah's remark:

If I were to restrict myself to making claims that I could substantiate in a mere ~2 hours, that would preclude the possibility of me sharing the vast majority of what I know.

I'm not saying you should (necessarily) believe what I say, just because I say it.

If I'm not going to believe what you say, why even bother saying it in the first place? Isn't just saying things "a lot of work", too?

Writing out arguments, and in general, making one's thought processes transparent, is a lot of work.

Guess what, verifying arguments that haven't been written out transparently is a lot more work! And it's often a requirement if what you say is to be useful at all. It is precisely when inferential distances are long that clarifying one's argument becomes critically important!

Well, if your justifications are truly marvelous but the margin of this post is too narrow to contain them, you are basically asking everyone to trust you that you know what you're talking about. This makes it an argument by reputation (or, in a slightly more pronounced form, an argument by authority).

I am fairly confident that you have justifications you haven't bothered stating. But that's not the question; the question is whether they are good justifications, and that is a much more complicated matter.

You don't seem to be engaging with what I said in the grandparent at all. The claim was:

We benefit greatly by not having a norm of only stating conclusions that are a small inferential distance away from public knowledge

Maybe you disagree with this, but you don't even explicitly state disagreement; your comment just looks like an attempt to enforce the very norm that I claimed was undesirable.

I have often been bothered by that norm myself, especially on Less Wrong, but it's not clear what you're proposing to put in its place. Given the fact that human beings are not even close to the kind of ideal reasoners that Aumann's theorem applies to, if you state something very far from what other people think, you cannot expect any sudden change in their probability estimate. They are just going to ignore you at best.

If you're simply saying that people should assume you have reasons, they probably do assume that. But if you say something they think is wrong, they will just assume your reasons are bad ones. It is not clear why or how you can prevent them from doing that, since you probably do the same thing to them.

"Conclusions that are at a huge inferential distance" doesn't look to me like a useful category. It includes both quantum physics and the lizardmen-are-secretly-ruling-the-Earth theory.

You (and anyone else) can, of course, offer such conclusions. But I don't know why you would expect them to necessarily be taken seriously. How do you suggest people filter out rank crackpottery?

How do you distinguish claims in advanced physics from claims about lizardmen? There are ways of judging meaningfulness and truth of conclusions that you can't yet understand or verify. There do exist experts who know things that you don't yet know, but who you can identify as having expertise about those claims. Having the norm of not mentioning such claims is an arbitrary restriction on the kinds of considerations that can be used to think or argue about a point.

How do you distinguish claims in advanced physics from claims about lizardmen?

I can buy books and read papers about advanced physics that will outline the arguments in support of these claims from first principles. In a pinch, I could even refrain from verifying the claims myself, and simply trust that others have done so competently. None of this is true when a claim is simply unsupported!

Isn't there an argument that having a million voices synthesising and popularising and ten doing detailed research is much less productive than the opposite? Feels a bit like Aristophanes:
"Ah! the Generals! they are numerous, but not good for much"

Everyone going around discussing their overarching synthesis of everything sounds like it would produce a lot of talk and little research.

Indeed, and I think a case can be made that this is exactly backwards (if we must have such "rules" at all).

It comes down to funding and prestige. Publishing research in high-profile journals makes the department look good and keeps the grant money flowing. The concern is that an academic who spends time popularizing is wasting time he could have spent doing research. A few decades ago, some departments had a culture where young academics could be looked down upon for being too good at teaching for precisely this reason.

Curious about the downvote.

Is it or isn't it true generally in academia that good teaching is considered lower status than good research?

Is it or isn't it true generally in academia that good teaching is considered lower status than good research?

In US academia it is definitely true, especially for teaching undergrads, which is often enough just relegated to TAs.

At least nowadays many places bother to train TAs. My understanding is that not too long ago, the TA was just handed a syllabus and told to teach a class. Some schools had a reputation for admitting excess graduate students just to serve as TAs for a bit before being shown the door.

However, there are some universities that focus on quality undergraduate education. In those places, teaching ability is a big part of the hiring process and people have been denied tenure over poor teaching. It's the big research universities that have historically been lax in their teaching standards.

Yep. This is a good case to apply the standard heuristic: Look at incentives.

Many would play by the rules and provide genuine criticism during peer review (which would of course lead to revisions of the content), along with further criticism after publication.

Do you have any experience with peer review? In the most functional fields, peer review does a good job of assessing what is valuable, but provides no improvement in quality. In most fields it demands edits to make things worse.

I have experience with peer review, on both ends, and I strongly disagree with you.* What has your experience in peer review been like? In what field?

(*) Peer review can indeed be quite bad, and I have had bad peer reviews before. But that has been the exception rather than the rule for my papers. I understand that I introduce selection bias by looking only at my own papers.

It is often quite good, but it could be vastly better. I remember helping to translate an article into English. It was written mostly by my friend, whom I know to be impatient with phrasing, and in the course of translation he often changed the original text, added caveats, etc. The gist of his meaning remained, but it was framed very differently. Often, however, reviewers are too hurried and point out only the worst mistakes or unclear passages. Perhaps it would be useful to have a 'malicious reviewer', a person unversed in the given field but fluent in data presentation, send his comments to the author before the article is shown to domain experts?

Maybe LW's content would be in mainstream academia if all the grad students on LW didn't drop out of grad school to "overcome the sunk cost fallacy" :P

(I worry about the people who decide they're really interested in rationality self-selecting themselves out of one of society's most powerful institutions.)

Do grad students on LessWrong drop out at higher rates than average?

Edit: PhD completion rates are low to begin with, around 50%.

This should be interesting to look at.

In the process of pushing my first first-author publication out. I can understand the low fraction.

I don't really get why LW's content isn't in mainstream academia to be honest.

Or, people here should read more of the primary sources that EY draws from (and others he doesn't), academic or not.

That's why I came here - I took the references to Jaynes and "The Map is Not the Territory" as indications of good intellectual taste, which gave me enough trust to read other works by Cialdini and Kahneman cited here.

There's a whole sea of work descended from Korzybski of The Map is Not the Territory fame, put out by a couple of General Semantics groups, much of it organized around bite-sized concepts, similar to the Sequences, but more of an organic whole, IMO.