Regarding all the examples of "serendipitous" discoveries that later proved so valuable, I want to propose an analogy.
Consider consumer surplus: the gap between the price you would be willing to pay and the price you actually pay, so you get the same benefit at lower cost. I have not seen it described this way explicitly, but I put it to you that when the benefit later turns out to be greater than you originally expected, that too is consumer surplus.
With that idea in mind, turn now to the grant-issuing process and consider how grants are awarded. In particular, mechanisms like peer review and grant requirements seem driven more by avoiding wasted money than by acquiring knowledge. It feels to me like the current system is, in effect, designed to drive the scientific equivalent of consumer surplus to zero.
Since I am otherwise confident that scientific research doesn't resemble a market very closely, I don't expect this reflects a genuine equilibrium. This lack of surplus therefore seems strictly bad.
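A toy numeric version of the analogy (all numbers hypothetical, just to make the two kinds of surplus concrete):

```python
# Hypothetical numbers illustrating the analogy above.
expected_value = 100   # benefit you expect, i.e. the most you'd be willing to pay
price = 60             # price you actually pay

surplus = expected_value - price
print(surplus)  # 40: the classic consumer surplus

# If the benefit later turns out larger than expected, the realized
# surplus is larger still -- the "serendipity" case:
realized_value = 150
realized_surplus = realized_value - price
print(realized_surplus)  # 90
```

A funder that pays exactly what each project's expected results are "worth" drives the first quantity to zero; the claim is that it squeezes out the second kind too.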
Great to see these points being made to a broader audience. My take, from a similar investigation into science funding, is that there is a common pattern among these really high-impact researchers who have trouble getting funding: they're often doing methods innovation rather than object-level progress in some area.* It's really hard to get grantors to understand the potential value of methods research, even though it underlies scientific advancement. Big shots like the aforementioned Nobel winner, Douglas Engelbart, and many others push for direct methods research only to have it seemingly fall on deaf ears, even given their past accomplishments. I think part of the reason is that the benefits of major methods breakthroughs are basically unbelievable from the perspective of normal scientific work, and that people's ability to think coherently about hits-based research isn't great.

If we want breakthroughs, the world desperately needs a billionaire who understands the value of methods work. I was really hopeful that Moskovitz would be this person, given his blog posts around Asana and solving the meta-problem, but I have been disappointed by OpenPhil seeming to move in the direction of other foundations in the range of grants they give out. What I mean is that, glancing through their grants list, you could transplant most of the entries to another foundation's list and no one would bat an eyelid. Thankfully there are a few exceptions, and people in methods have to take whatever concessions they can get. The Templeton Foundation is another grantor in this space that has at least tried a little.
*Yes, there are arguments to be made about whether methods work is better thought of as something that can be pursued in its own right vs. something that must generally arise out of object-level work. And I'd be thrilled if that argument *was actually happening*.
(QRI is working on the consciousness meter btw ;)
I just listened to Tyler Cowen's interview with Ed Boyden (link and transcript). The second half contained a lot of questions about current scientific infrastructure, and Boyden had a lot of interesting comments, so I've reproduced a few particular quotes here and added headings.
(Things I've not quoted that LWers might be interested in: Boyden said that whole-brain emulations probably work in principle, that he meditates every day, and that he has practiced an internal family systems meditation for 10 years.)
The Surprisingly Poor State of Funding
COWEN: How should we improve the funding of science in this country?
BOYDEN: I like to look at the history of science to learn about its future, and one thing I’ve learned a lot over the last couple years — and it’s even happened to me — is that it’s really hard to fund pioneering ideas.
Brian Kobilka, who recently won the Nobel Prize for solving the structure of the G-protein-coupled receptor — and for context, one-third of all drugs target this class of molecules, so it’s a very, very important class of drugs — he lost his funding because he wasn’t making progress fast enough. If I recall, he had to moonlight as an emergency room physician to keep going on his research.
Doug Prasher, who cloned the gene for green fluorescent protein, which has been used in something like a million biology studies, ballpark — he lost his funding and eventually left science, ended up driving a shuttle bus for, I believe, a rental car facility or something.
Anyway, there’s so many stories. For me, it became personal because when we proposed this expansion microscopy technology, where we blow up brain specimens and other specimens a hundred times in volume to map them, people thought it was nonsense. People were skeptical. People hated it. Nine out of my first ten grants that I wrote on it were rejected.
If it weren’t for the Open Philanthropy Project that heard about our struggles to get this project funded — through, again, a set of links that were, as far as I can tell, largely luck driven — maybe our group would have been out of business. But they came through and gave us a major gift, and that kept us going.
BOYDEN: I read a statistic that 40 percent of the professors at MIT trained at one point in their career at Stanford, Harvard, or MIT.
How to Improve Funding
COWEN: Let’s say you had $10 billion or $20 billion a year, and you would control your own agency, and you were starting all over again, but current institutions stay in place. What would you do with it? How would you structure your grants? You’re in charge. You’re the board. You do it.
BOYDEN: Yeah, three thoughts. The first thing that I thought a lot about — studying these past cases and then going through it myself — is thinking about peer review. What is peer review?
When you propose a project, a bunch of your peers will then critique it. The problem that a lot of these daring-sounding projects encounter is that they sound bad during peer review because they’re so off the wall, or they bring together multiple fields that maybe nobody’s qualified to evaluate them.
One thought is, what if — instead of taking people’s opinions and then just sort of combining those opinions, and then, okay, you’re in or you’re out in terms of getting the money — what if we take a step back, and we think about why the peers are thinking this way?
If somebody critiques a proposal, but they’re doing it from a vantage point that doesn’t see a certain part of the proposal as valuable because they’re missing an underlying piece of knowledge, or they’re evaluating a proposal — based upon opinion — that, if we think about the logical underpinnings of it, the rationale is actually pretty solid in terms of its being linked to ground-truthable sciences, like physics and chemistry.
In other words, if we take a step back and apply more logical principles of evaluation to the outcomes of peer review, can we actually improve the ranking of these proposals? This is something I’m thinking a lot about right now. As I evaluate people and evaluate ideas that people propose to me as well, I’m trying to hone those skills in myself. That’s one of the three things I would do.
[...] The second thing I would do is to be more dynamic in my funding. Right now, maybe there’s a grant that you apply for, and then a year later you get the money.
But what if somebody tries something out one Friday afternoon, and whoa, that could cure disease, or that could yield an amazing new insight into biology, or that could allow us to diagnose brain diseases early, or whatever? Why wait a year? What if one could dynamically allocate funding up and down based upon the real-time metrics of science?
In my own group, sometimes we get a project out of the blue, and hey, that’s pretty cool. Then we’ll dynamically try to understand if we can reallocate resources. That’s another thing I would do.
The third thing I would do is I would go looking for trouble. I would go looking for serendipity. If you look at CRISPR for genome editing — that was found by some scientists working on yogurt. If you look at fluorescent proteins — that was identified by a person who just was obsessed with jellyfish.
In my own field, if you look at our optogenetics work or our expansion microscopy work — these fields owe a debt to basic curiosity about critters living in bodies of water for optogenetics, and expansion microscopy goes back to the 1980s where people were wondering why do certain polymers swell so hugely, with no practical-purpose implications of it.
One idea is, how do we find the diamonds in the rough, the big ideas but they’re kind of hidden in plain sight? I think we see this a lot. Machine learning, deep learning, is one of the hot topics of our time, but a lot of the math was worked out decades ago — backpropagation, for example, in the 1980s and 1990s. What has changed since then is, no doubt, some improvements in the mathematics, but largely, I think we’d all agree, better compute power and a lot more data.
So how could we find the treasure that’s hiding in plain sight? One of the ideas is to have sort of a SWAT team of people who go around looking for how to connect the dots all day long in these serendipitous ways.
COWEN: Does that mean fewer committees and more individuals?
BOYDEN: Or maybe individuals that can dynamically bring together committees. “Hey, you’re a yogurt scientist that’s curious about this weird CRISPR molecule you just found. Here’s some bioinformaticists who are looking to find patterns. Here’s some protein engineers who love — ”
COWEN: But should the evaluators be fewer committees and more individuals? The people doing the work will always be groups, but committees, arguably, are more conservative. Should we have people with more dukedoms and fiefdoms? They just hand out money based on what they think?
BOYDEN: A committee of people who have multiple non-overlapping domains of knowledge can be quite productive.
What if, to evaluate a proposal, I brought together a physicist who can tell me, “You know what? That amount of energy won’t kill the brain.” Then I have a biologist who says, “You know what? That’s a really important problem.” And then a chemist who would say, “You know what? That molecule probably won’t be toxic.” You actually need a committee to judge some of these ideas.
Why is Science Slowing Down?
COWEN: Is progress in science slowing down right now?
BOYDEN: That’s a good question. I think what’s happening is we’re tackling bigger problems. Let me explain what that means.
In physics, there’s a small number of building blocks, like protons and electrons, and a small number of ways they interact, like electromagnetism and so forth. Chemistry — there’s more stuff. There’s a hundred-odd things in the periodic table, although maybe there’s only 30 to 50 that you actually have to work with if you’re trying to make something actually happen. Again, there’s a small number of bonds: covalent and ionic and so forth.
I think the problem right now is that a lot of the scientific questions we’re wrestling with, whether it’s in biology and medicine — but I’m not an expert in this; you know more about some of these things than I do — but in economics and education and so forth, it also seems like — from my distant view — some of these problems relate to this idea that there’s a lot of different building blocks and a lot of ways they interact.
In biology, we have what, 30,000 genes in the human genome, and while we know their sequence, for the most part, we have no idea how these gene products interact with each other, and how they’re architected into cells and tissues and organs, and how those go wrong. The problem is this combinatorial explosion of possibilities is so staggeringly huge that a lot of what we try will fail.
What do we do about it? One point of view is, “Well, if we had better tools, and we could map those building blocks and those interactions, maybe we could reduce the risk of biomedical science.” Again, it’s not my field. You know more about this than I do. I’d love to hear your opinion. But in economics and in other fields, it also seems like people are trying to make better maps of things and how they interact.
That’s one idea. What if we could make these problems . . . Progress might seem to be slower because the problems are so hard. But with better tools, maybe we can level the playing field and make 21st-century sciences more tractable, in the same way that 20th-century sciences gave us lasers and computers and the internet.
COWEN: In economics, we have more good empirical papers than ever before, but virtually no more theoretical breakthroughs, and I’m not sure we’ll ever have them again.
BOYDEN: Oh, how interesting.
COWEN: That may just be diminishing returns. There are so many fundamental ideas, and you learn those, and you stop, and then you measure things.
BOYDEN: Hmm. Well, in biomedicine, systems didn’t evolve to be understood. They evolved to survive and reproduce and all that. One can hope for structure. Biology does give you more structure than we deserve, I think. DNA has a double helix, and you can read out the genetic code.
There’s always this question of why is the universe understandable in the first place, and maybe now we’re entering the realm of complexity where things are less understandable. But again, we have to accept reality for what it is.
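(A rough sense of scale for the explosion of interaction possibilities Boyden describes: the ~30,000 gene count is from the quote above; the rest is just arithmetic.)

```python
from math import comb

# Counting candidate interactions among ~30,000 gene products.
genes = 30_000

pairs = comb(genes, 2)   # unordered pairs of genes
print(pairs)             # 449,985,000 possible pairwise interactions

triples = comb(genes, 3) # three-way interactions are far worse
print(f"{triples:.1e}")  # on the order of 4.5e12
```

Even with only pairwise interactions, exhaustively testing the space is hopeless, which is Boyden's argument for better mapping tools over brute-force trial and error.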
How to Hire Good Scientists
BOYDEN: ...in our group at MIT, I have two PhD students. Neither finished college, actually. I can’t think of any other neuroscience groups on Earth where that’s true.
Later in the interview:
COWEN: What kind of students are you likely to hire that your peers would not hire?
BOYDEN: Well, I really try to get to know people at a deep level over a long period of time, and then to see how their unique background and interests might change the field for the better.
I have people in my group who are professional neurosurgeons, and then, as I mentioned, I have college dropouts, and I have people who . . . We recently published a paper where we ran the brain expansion process in reverse. So take the baby diaper polymer, add water to expand it, and then you can basically laser-print stuff inside of it, and then collapse it down, and you get a piece of nanotechnology.
The co–first author of that paper doesn’t have a scientific laboratory background. He was a professional photographer before he joined my group. But we started talking, and it turns out, if you’re a professional photographer, you know a lot of very practical chemistry. It turns out that our big demo — and why the paper got so much attention — was we made metal nanowires, and the way we did it was using a chemistry not unlike what you do in photography, which is a silver chemistry.
I really try to understand how individual people and their unique background and interests could change the world, but it means that we don’t really have a formula. I try not to have formulas, in general, when it comes to the actual day-to-day of science. I often say to people in my group, “We want to revolutionize the world for the better and do the right thing and be ethical, but beyond that, let’s not try to make any artificial policies.”
How to Find Good Ideas
COWEN: [H]ow do you use discoveries from the past more than other scientists do?
BOYDEN: One way to think of it is that, if a scientific topic is really popular and everybody’s doing it, then I don’t need to be part of that. What’s the benefit of being the 100,000th person working on something?
So I read a lot of old papers. I read a lot of things that might be forgotten because I think that there’s a lot of treasure hiding in plain sight. As we discussed earlier, optogenetics and expansion microscopy both begin from papers from other fields, some of which are quite old and which mostly had been ignored by other people.
I sometimes practice what I call failure rebooting. We tried something, or somebody else tried something, and it didn’t work. But you know what? Something happened that made the world different. Maybe somebody found a new gene. Maybe computers are faster. Maybe some other discovery from left field has changed how we think about things. And you know what? That old failed idea might be ready for prime time.
With optogenetics, people were trying to control brain cells with light going back to 1971. I was actually reading some earlier papers. There were people playing around with controlling brain cells with light going back to the 1940s. What is different? Well, this class of molecules that we put into neurons hadn’t been discovered yet.
COWEN: The same is true in economics, I think. Most of behavioral economics you find in Adam Smith and Pigou, who are centuries old.
BOYDEN: Wow. I almost think search engines like Google often are trying to look at the most popular things, and to advance science, what we almost need is a search engine for the most important unpopular things.
COWEN: Sometimes I try doing searches. I take the words I want, and then I throw in a random word that is not related at all, and I try googling that, or through Google Scholar, and I see what comes up.
BOYDEN: Absolutely. I do that a lot, too. That’s one thing where I really value those six years I spent learning a bit of chemistry and a bit of physics and a bit of electrical engineering, because it allows me to stitch together some facts from different fields, and that can be very helpful for launching a new idea or judging whether an idea’s actually worth pursuing.
In Summary
COWEN: Last question. As a researcher, what could and would you do with more money?
BOYDEN: Well, I’m always looking for new serendipitous things, connecting the dots between different fields. These ideas always seem a bit crazy and are hard to get funded. I see that both in my group but also in many other groups.
I think if I was given a pile of money right now, what I would like to do is to find a way — not just in our group but across many groups — to try to find those unfundable projects where, number one, if we think about the logic of it, “Hey, there’s a non-zero chance it could be revolutionary.” Number two, we can really, in a finite amount of time, test the idea. And if it works, we can dynamically allocate more money to it. But if it doesn’t work, then we can de-allocate money to it.
If I think about optogenetics or expansion microscopy, or these other techniques that we’ve been talking about, the amount of money that we actually invested in it to get it going was not that much. They were actually fairly inexpensive projects.
Then finally, I would like to go out and treasure hunt. Let’s look at the old literature. Let’s look at people who might be on the fringes of science, but they don’t have the right connections, like the people who I talked about earlier. They’re not quite in the right place to achieve the rapid scale-up of the project. But by connecting the dots between people and topics, you know what? We could design an amazing project together.