" The space of all possible policies is gigantic..."
Well, maybe not necessarily. It depends on the grain of what qualifies as a "difference", which then classifies policies as unique (individual) or distinct. Also, there are many possible moves in a chess game, but far fewer that make sense.
I found the concrete examples to be a bit dumbed-down compared to the cognitive rhetoric behind the main thrust of the piece. Also of questionable relevance. Contrast was a bit high, too.
Great article!
I was thinking of political policies. Government bills can often be 800+ pages, and seem to contain many specific decisions per page. I could easily imagine one having 20 possible decisions per page, each with 10 options, for 500 pages (assuming some pages are padding), meaning 100k simple decisions total. This is obviously a very rough estimate.
I hope that better examples will become obvious in future writing, will keep that in mind. Thanks for the feedback!
Meta: This is meant to be a succinct and high-level overview. I expect to go more into detail on parts of the system in future posts. This work builds on this previous post.
Predictions can be a powerful tool but are typically difficult to use in isolation. Being sure to predict the right things is hard. Organizing information from predictions is hard. Coming up with decision options to be predicted is hard. If we seek to create a crowdsourced system of predictions to give us useful information, it would be really useful if we could figure out solutions to these other problems as well.
In the future, I expect that much of the most important prediction work will exist within larger ecosystems of collective reasoning. We can call such systems "Predictive Reasoning Systems."
To give a motivating example, there's been some speculation on Futarchy, a concept of government that involves using prediction markets to determine which government policies would be optimal. Attempts to do this with existing prediction systems could pose substantial challenges. Which specific prediction questions should be asked? How should resources be allocated among questions? The space of all possible policies is gigantic, so how will this best be ideated about and then narrowed down? Predictive Reasoning Systems offer a high-level construct that may be better equipped to handle such challenges.
Prediction Systems vs. Predictive Reasoning Systems
We can call a system used exclusively to make predictions a Prediction System.
One interacts with a Prediction System by posing and answering prediction questions. Points or subsidization can be assigned by users to prioritize work on what seems like the most useful questions.
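As a minimal sketch of that prioritization mechanism (all names and numbers here are invented for illustration, not taken from any real system), one could pool the subsidies users assign to each question and rank questions by total:

```python
# Toy sketch of subsidy-based question prioritization.
# Question names and amounts are hypothetical.
from collections import defaultdict

def prioritize(subsidies):
    """subsidies: list of (question, amount) pairs from users.
    Returns questions sorted by total subsidy, highest first."""
    totals = defaultdict(float)
    for question, amount in subsidies:
        totals[question] += amount
    return sorted(totals, key=totals.get, reverse=True)

ranked = prioritize([
    ("US GDP in 2030", 50.0),
    ("Next quarter profit", 120.0),
    ("US GDP in 2030", 80.0),   # a second user tops up the same question
])
# ranked[0] == "US GDP in 2030" (total subsidy 130.0)
```

Real systems would of course weigh more than raw subsidy (question quality, resolvability, overlap with existing questions), but the basic aggregation step looks something like this.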
In contrast, one would interact with a Predictive Reasoning System by giving it higher-level goals. For example, "Optimally spend 1000 human hours to come up with whatever information would most help our business." The system would translate these goals into lots of lower-level work, including the use of forecasting.
If a group of people poses multiple questions on a Prediction System, with one overarching goal in mind, one can refer to this [combination of Prediction System + question askers] as a simple Predictive Reasoning System. One can even say that wherever a Prediction System exists, a Predictive Reasoning System of some sort exists around it. That said, just because it formally exists does not mean that it's well optimized.
Primary Functions
There are many possible ways to categorize the functionality and subcomponents of Predictive Reasoning Systems, but a good one to start with may be to categorize them by high-level function. One attempt at this would split functions up as predictions, evaluations, ideation, knowledge management, and ontology development. Each of these encompasses a great deal of existing thought and literature, but could still use considerable additional work for best use in Predictive Reasoning Systems.
Predictions
Quite obviously, Predictive Reasoning Systems should make predictions. There should be many predictable measures for forecasters to work on.
Evaluations
Many predictable questions should eventually be evaluated (judged) in order to measure predictor performance and possibly reward them accordingly. There are many different ways of doing this.
Human-involved evaluations present a challenging problem with substantial literature outside of forecasting. Trustworthy oracles of even simple parameters also present challenges, which have been discussed most recently in the context of blockchain applications.
Example Evaluations:
"What will the GDP be of the United States in 2030, according to Wikipedia?"
"What will the next Quinnipiac poll rate the United States president in 2024?"
"What will be the estimated counterfactual monetary value of project X, according to independent auditor Y?"
Ideation
In some cases decisions and things to predict are obvious. In many others, especially as things scale, they aren't.
For example, say a business is considering a specific project proposal. They can either reject or accept it. In this case, there is no immediate ideation necessary. Predictors could predict the outcome (such as profit in the next quarter) conditional on the business rejecting or accepting the proposal.
But this is not a very realistic scenario. In many cases the proposal could be modified, opening up a very large space of accessible options. Here it could be valuable to have knowledgeable and creative people suggesting improvements. If this were combined with a flexible prediction system, it might converge on a much better proposal than either predictions or ideation could reach independently.
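The simple accept/reject case above can be sketched in a few lines (the option names and forecast figures are invented for illustration): conditional forecasts on a binary decision reduce to comparing the forecast outcome metric under each branch.

```python
def best_option(conditional_forecasts):
    """conditional_forecasts: dict mapping each decision option to the
    crowd's forecast of the outcome metric (e.g. next-quarter profit)
    conditional on taking that option.
    Returns the option with the highest forecast value."""
    return max(conditional_forecasts, key=conditional_forecasts.get)

choice = best_option({
    "accept proposal": 1.4e6,  # forecast profit if accepted
    "reject proposal": 1.1e6,  # forecast profit if rejected
})
# choice == "accept proposal"
```

Ideation enters when the option set itself is open-ended: contributors would keep adding modified proposals as new keys, and forecasters would keep pricing them.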
Crowd ideation has been one of the most successful use cases of online crowdsourcing, so it could be a natural component of an online crowdsourced prediction system.
Knowledge Management
Competent forecasters learn reusable facts and practices during their work. They could also be augmented by non-forecasting researchers who help collect and provide useful information. Financial traders typically work with extensive support staff for operations and research, and it would make sense for great forecasters to be aided by similar help.
On a related note, knowledge management systems are highly valuable to most organizations, so it would make sense that they would also be useful for groups of forecasters working on similar subjects.
With some structure, knowledge management should not only be useful to forecasting, but forecasting could also be useful to knowledge management. Forecasts could be made on the benefits of specific knowledge modifications, leading to increasingly efficient improvements as a Predictive Reasoning System becomes more advanced.
Ontology Development
In order to forecast the most useful things, it's important to organize information and forecasts into reasonably effective ontologies. This includes theoretical domain work in order to arrive at and properly understand the most effective ontologies.
If an organization wanted to identify the most useful business practices to adopt, it would first have to establish a decent taxonomy of the possible options. If it did a poor job of this, spent a lot of effort forecasting, and then later dramatically changed its taxonomy, much of that work could be wasted.
In another frame, ontology selection is very similar to the issue of feature selection in data science. A poor choice of variables to predict would lead to results that are neither decision-relevant nor otherwise interesting.
Optional Functions
The above functions are those that seemed most necessary for Predictive Reasoning Systems in the next 5-15 years. However, there's one more that wouldn't change the reasoning structure but would modify behavior.
Action Follow-Through (Input & Output)
Predictive Reasoning Systems as stated above are meant to create as much value as possible in the form of information, specifically by using predictions. The idea is that the result of this work will be used for future decisions and that those decisions will be made by agents external to these systems. However, that doesn't exactly have to be the case. With some minor changes, it would be possible for Predictive Reasoning Systems to invoke direct agency and trigger actions in the world.
For example, one system may determine, with high certainty, that a good course of action for an organization would be to purchase a new lamp. This could automatically trigger an online order.
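A trigger like that lamp purchase could be sketched as a confidence-gated callback. This is an entirely hypothetical interface; the threshold value and function names are invented, and a real system would need far more safety machinery around it.

```python
def maybe_trigger(action, probability_good, threshold=0.95):
    """Fire the action callback only when the system's forecast that
    the action is net-positive clears a high confidence threshold.
    A toy sketch of the idea; threshold and interface are hypothetical."""
    if probability_good >= threshold:
        action()
        return True
    return False

orders = []
fired = maybe_trigger(lambda: orders.append("lamp"), probability_good=0.97)
# fired is True and orders == ["lamp"]
```

The interesting design questions sit outside this snippet: who sets the threshold, how the probability is aggregated, and what classes of action are allowed to fire at all.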
The capability of Predictive Reasoning Systems to trigger direct actions in the world is here called "Action Follow-Through", and can be thought of as similar to I/O in software systems. It can raise significant safety issues, though it will probably be a while before it ever becomes practical. That said, if it becomes practical, it is possible that Predictive Reasoning Systems with this capability may be much more powerful than ones without it.
Predictive Reasoning Systems with Action Follow-Through could be considered one form of "decentralized autonomous corporation".
Future Work
This document lays out a definition of Predictive Reasoning Systems with very simple descriptions of what currently seem like their main functions. Each of these functions has a rich literature and considerable room for more thought. There could be a good deal of work categorizing this literature and researching the most promising methods in each function for use in these systems. There are also other categorization schemes that could better explore the solution space of Predictive Reasoning Systems. I expect to address some of these topics in future work.
Thanks to Ben Goldhaber and Jacob Lagerros for feedback on this post.