A few weeks ago I had a fascinating conversation with Ruby about models of the research process and how to improve it. This post outlines one particular model which I took away from that conversation: open problems as the primary factor which creates a paradigm.

There’s a cluster of things like research agendas, open problems, project proposals, and/or challenges; we’ll refer to that whole cluster as “open problems”. The unifying theme here is the function(s) these things serve:

  • Define a problem
  • Provide context: why is the problem interesting/valuable? Why is it hard? What would a solution look like? What background work exists?
  • Provide starting points/footholds/surface area - places for people to start working on the problem
  • Create a status reward for solving the problem - mainly by making the importance and difficulty public knowledge

Let’s walk through each of those pieces.

First, an open problem defines a problem. That sounds obvious, but it’s harder than it looks: defining a problem means setting it up in such a way that anyone who understands the problem-explanation can recognize a solution. For pre-paradigmatic fields, this is hard. What would a solution to e.g. an embedded agency problem look like? If someone proposed a “solution”, and we asked 50 different researchers whether this was actually a solution to the problem, would their answers all agree? Probably not, though the embedded agency sequence brought us a lot closer to that point.

Second, an open problem provides context. Why is the problem interesting/valuable? Why is it hard? This goes hand-in-hand with defining the problem: we define the problem in a particular way because we expect a solution to provide some value. If a solution to a differently-defined problem would not clearly provide the same value, then that’s an argument in support of our particular problem definition.

Third, an open problem provides starting points/footholds/surface area for people to tackle the problem. The problem definition and value-prop inevitably tie to background work and existing techniques, and explain why those techniques are not already sufficient. That provides a jumping-off point for newcomers.

Finally, establishment of an open problem creates a status reward for solving the problem. We define what a solution looks like, so others can easily recognize success. We explain why a solution would be valuable, and why it’s difficult. Once all those things become public knowledge, we have the recipe for a status reward associated with solving the problem.

Key thing to note: it’s not the problem difficulty and importance themselves which create a status reward. Rather, it’s the fact that difficulty and importance are common knowledge. In order to create the status reward, the importance and difficulty of the problem have to be broadcast to many people in an understandable way.

Put all these pieces together, and it looks like a recipe for creating a paradigm within a pre-paradigmatic field. We get common problems, standards for solving those problems, motivation for the problems, starting points for newcomers, and status points to incentivize participation, all in one fell swoop.

Potentially testable prediction: it seems like someone could create a field this way, de novo. The main requirements would be good writing skills and memetic reach - and having a good set of open problems, which we would hope is the hard part. The Sequences or some of MIRI’s work might even be examples. This seems like a testable, standalone model of the process of formation of new research fields.

To wrap it up, one note on the challenges which such a model suggests for research infrastructure. There’s potential for misalignment of incentives: the problems with highest return on investment (in terms of value of solving the problem) will not necessarily be the problems with highest status reward. Problems which are easier to understand will be easier to communicate to more people and therefore more easily achieve a high status reward; problems which are high-value but harder to explain will be harder to attach a large status reward to. This could potentially be addressed at the meta-level: create a framework for identifying important problems and recognizing their solutions, in such a way that people can easily understand the framework without necessarily understanding the individual problems/solutions.

Credit for this model is at least as much Ruby's as it is mine, if not more. He has additional interesting background info and different weights of emphasis, so take a look at his comments below.

Ruby

Thanks, John, for the conversation and the write-up. It's definitely great that you got something up. Here are my own add-ons to the post:

The Existing LW Questions Platform "Failed" Because of Lack of Context

I think a lot about how LessWrong can cause more intellectual progress, and more recently I've been thinking about why LessWrong's existing Open Questions feature didn't live up to our highest hopes for it. Concretely, it hasn't gotten existing full-time researchers outsourcing parts of their work to willing others via LessWrong. Researchers at OpenPhil, FHI, MIRI, AI Impacts, etc. don't post questions that then get great answers via people on LessWrong going off and working for days/weeks/months.

One of the largest factors, I believe, is that it is in fact very difficult to convey the context of a research question: why it is interesting, what kinds of answers would be useful, how to go about it. At best you need to explain a large swathe of your current research project, and at worst someone needs to study for months or years to understand the background. This requires more effort on the part of the question-asker than writing up a few paragraphs, and possibly much more from an answerer, who might then have a long reading list.

This problem of imparting context came up repeatedly in interviews I did with current LW/EA researchers.

Version 2: Research Agendas

Trying to address the problems with the Open Questions feature led me to something I've been calling the "Research Agendas" feature, which I've attempted to model more closely on how research currently gets done.

  1. "Research Agendas" are owned/worked on by a very small number of people who commit to working hard on them, in contrast with the Q&A forum model where a lot of people might each spend a relatively small amount of attention. These people actually put in the hours to properly share context on the research via explaining and/or studying and/or talking at length.
  2. "Research Agendas" are defined by 1) the Open Questions they're trying to answer, and 2) a methodology/paradigm the Research Agenda intends to use to define and/or answer the questions posed. In writing up a "Research Agenda", one is expected to write up the context, or at least enough that someone could go study and then understand it. (A rough sketch of what such a record might contain follows this list.)
  3. If "Research Agendas" caught on, the way you'd know what some researcher was up to is you'd go read their open "Research Agendas" where they explain what they're trying to do. Others could potentially join in.
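To make the shape of this concrete, here is a minimal, purely hypothetical sketch of what a "Research Agenda" record might bundle together. Every name and field below is an illustration for this post, not an actual LessWrong schema or implementation:

```typescript
// Purely illustrative sketch of a "Research Agenda" record.
// All names here are hypothetical, not an actual LessWrong schema.

interface OpenQuestion {
  title: string;
  problemStatement: string;  // stated so that others can recognize a solution
  context: string;           // why it's valuable, why it's hard, what background exists
  startingPoints: string[];  // footholds for newcomers
}

interface ResearchAgenda {
  title: string;
  owners: string[];             // the small group committed to working hard on it
  openQuestions: OpenQuestion[];
  methodology: string;          // the approach/paradigm used to define and answer the questions
  contextWriteups: string[];    // links to posts that impart the necessary background
}

// Example: a minimal agenda with a single question.
const example: ResearchAgenda = {
  title: "Example agenda",
  owners: ["alice", "bob"],
  openQuestions: [{
    title: "Example open question",
    problemStatement: "State the problem so that others can recognize a solution.",
    context: "Why it matters, why it's hard, what background work exists.",
    startingPoints: ["A first foothold for newcomers"],
  }],
  methodology: "The method/paradigm the owners intend to use",
  contextWriteups: ["link-to-background-post"],
};
```

The design point is just that the questions, the methodology, and the shared context live in one place, owned by a small group, rather than being scattered across standalone questions.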

Open Questions -> Research Agendas -> Paradigms

If you bundle up enough questions into "Research Agendas" that share common context, presumed methods, and a sense of what the answers look like, and if the questions are compelling enough, I think you get on track to having a shared paradigm where people broadly have shared context: a sense of what they're trying to answer, how to go about it, and what success looks like. Consequently, they don't need to keep rehashing the fundamentals each time.

I think John makes it sound a bit too easy. To get a whole paradigm going, I think you need some deep Open Questions that generate enough work to keep people busy for a while, and I think those questions need to be quite compelling. I'm going mostly off what Kuhn himself, who popularized this "paradigm" notion, wrote:

In this essay, ‘normal science’ means research firmly based upon one or more past scientific achievements, achievements that some particular scientific community acknowledges for a time as supplying the foundation for its further practice. Today such achievements are recounted, though seldom in their original form, by science textbooks, elementary and advanced. These textbooks expound the body of accepted theory, illustrate many or all of its successful applications, and compare these applications with exemplary observations and experiments. Before such books became popular early in the nineteenth century (and until even more recently in the newly matured sciences), many of the famous classics of science fulfilled a similar function. Aristotle’s Physica, Ptolemy’s Almagest, Newton’s Principia and Opticks, Franklin’s Electricity, Lavoisier’s Chemistry, and Lyell’s Geology—these and many other works served for a time implicitly to define the legitimate problems and methods of a research field for succeeding generations of practitioners. They were able to do so because they shared two essential characteristics. Their achievement was sufficiently unprecedented to attract an enduring group of adherents away from competing modes of scientific activity. Simultaneously, it was sufficiently open-ended to leave all sorts of problems for the redefined group of practitioners to resolve.

Achievements that share these two characteristics I shall henceforth refer to as ‘paradigms,’ a term that relates closely to ‘normal science.’ By choosing it, I mean to suggest that some accepted examples of actual scientific practice—examples which include law, theory, application, and instrumentation together—provide models from which spring particular coherent traditions of scientific research. These are the traditions which the historian describes under such rubrics as ‘Ptolemaic astronomy’ (or ‘Copernican’), ‘Aristotelian dynamics’ (or ‘Newtonian’), ‘corpuscular optics’ (or ‘wave optics’), and so on. The study of paradigms, including many that are far more specialized than those named illustratively above, is what mainly prepares the student for membership in the particular scientific community with which he will later practice. Because he there joins men who learned the bases of their field from the same concrete models, his subsequent practice will seldom evoke overt disagreement over fundamentals. Men whose research is based on shared paradigms are committed to the same rules and standards for scientific practice. That commitment and the apparent consensus it produces are prerequisites for normal science, i.e., for the genesis and continuation of a particular research tradition.

Kuhn, Thomas S. The Structure of Scientific Revolutions (pp. 10-11). University of Chicago Press. Kindle Edition. [Emphasis added]

One comment I made in response to John's draft is that, following Kuhn, you probably need more than writing skill and memetic reach: you need to be building a scientific achievement that's recognizable enough to people as striking at what they really care about, such that they stop what they're doing and come do it your way.

Embedded Agency might actually achieve that, but I'm led to believe it wasn't a small feat.

Methodology

Another comment is that I think shared methodology needs emphasis. It's reasonable to lump methodology under the Open Question definition, but it's a large enough topic to highlight as crucial to establishing paradigms.

Achieving Paradigm-genesis

As far as I can tell, most of the research of interest to the LW/EA cluster is in a pre-paradigmatic state. There's an undercurrent of shared epistemic approach and people are trying to innovate (one, two, three), but there's no sense of "these are the questions we need to answer, this is what an answer looks like, and this is what you should do to get that answer". Existing methods and standards of analysis in history and sociology probably don't cut it for us, but we're not mature enough to have our own. Relatedly, we've got proliferating schools of AI Alignment/Safety (we can't even agree on the name, geeze).

I should clarify: we want multiple paradigms for the multiple different problems we tackle. Predicting the rate of technological progress is a different task from developing a provably safe AGI design.

I don't think reaching enough consensus to form paradigms will be quick, but I'm hopeful that if we can more clearly communicate the problems we're tackling, how we're doing it, and the results we're getting, then we're on track to building ourselves little paradigms and sub-paradigms that make our thoughts precise and greatly accelerate work (up until the point where you discover your paradigm was broken from the beginning).

Communities evolve around shared methodology because of the surprisingly detail-rich nature of reality. Methodology has a lot of degrees of freedom. This then creates correlated blind spots/echo chambers. One way I like to think about this is in terms of common data formats. Hard research problems create new data formats suited to the problem. But if the researchers aren't also doing the extra work to maintain backwards compatibility, then they won't notice when they start rejecting things for not already being in their preferred format. And for reasonable reasons! Research time can be precious, especially in the environment of artificial scarcity that donors en masse believe is healthy.

I'm noticing what might be a miscommunication/misunderstanding between your comment, the post, and Kuhn. It's not that the statement of such open problems creates the paradigm; it's that solutions to those problems create the paradigm.

The problems exist because the old paradigms (concepts, methods, etc.) can't solve them. If you can state some open problems such that everyone agrees those problems matter, and such that a solution could be verified by the community, then you've got a setup for solutions to create a new paradigm. A solution will necessarily use new concepts and methods. If accepted by the community, these concepts and methods constitute the new paradigm.

(Even this doesn't always work if the techniques can't be carried over to further problems and progress. For example, my impression is that Logical Induction nailed the solution to a legitimately important open problem, but it does not seem that the solution has been of a kind which could be used for further progress.)

Nice post and model!

I agree with you and Ruby that this is a big part of how paradigms are born. I also like your decomposition, even if it looks obvious, because that creates exactly the kind of standards you mention. Or to be cheeky, defining new open problems is an open problem, and you provided part of a standard for a solution.

Personally, I love open problems and research agendas. My research, both during my thesis on distributed computing and now on AI Alignment, basically focuses on finding such problems and expressing them. I'm just better at that than at actually solving clean and concrete problems that are already stated.

And that makes me want to point out something missing from both your post and Ruby's comment: how ridiculously hard it is to get people to care about your new open problem if you don't already have status. Or, put differently, there's no status reward for writing up an open problem. I'm not even talking about whether people want to work on it; just getting any feedback whatsoever on what they think about it is incredibly difficult.

This has two negative consequences: first, high-status researchers are usually busy doing their own research, and so they don't have the time to write up an open problem; and second, budding researchers with potential ideas but no status either censor themselves or receive no feedback on their ideas. Once again, I'm not saying that all such ideas are good, merely that they have to wait because they come from someone who hasn't paid their dues.

Finally, establishment of an open problem creates a status reward for solving the problem. We define what a solution looks like, so others can easily recognize success. We explain why a solution would be valuable, and why it’s difficult. Once all those things become public knowledge, we have the recipe for a status reward associated with solving the problem.

Interestingly, it not only provides a status reward for solving problems (which you already have for problems solved with publishable solutions), but it also provides a way to reward people for formulating the problems.

Problems which are easier to understand will be easier to communicate to more people and therefore more easily achieve a high status reward

It seems like it's important that the users of your platform can vote on how important they consider certain problems.

One idea would be to give every user an opportunity to rank problems on their profile.
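As a rough, hypothetical sketch of how such per-user rankings could be aggregated into a community importance score (this assumes a simple Borda count; it does not describe any existing feature, and all names are illustrative):

```typescript
// Hypothetical sketch: aggregate per-user problem rankings into a community
// importance score using a simple Borda count. All names are illustrative.
type ProblemId = string;

// Each user submits an ordered list of problem IDs, most important first.
function bordaScores(rankings: ProblemId[][]): Map<ProblemId, number> {
  const scores = new Map<ProblemId, number>();
  for (const ranking of rankings) {
    ranking.forEach((problem, index) => {
      // Problems ranked higher (smaller index) earn more points.
      const points = ranking.length - index;
      scores.set(problem, (scores.get(problem) ?? 0) + points);
    });
  }
  return scores;
}

// Example: three users rank three open problems.
const community = bordaScores([
  ["problem-a", "problem-b", "problem-c"],
  ["problem-c", "problem-a", "problem-b"],
  ["problem-a", "problem-c", "problem-b"],
]);
console.log([...community.entries()].sort((a, b) => b[1] - a[1]));
```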