Epistemic Status: I think the axes are straightforward enough. I'm not particularly confident in the names and am happy to hear alternative suggestions.

Scholarly Status: Push/pull systems have been discussed before. I'm sure there is other naming for this sort of thing in the engineering/design literature, but that naming probably wasn't chosen to fit the terminology of Superintelligence. This was another fairly quickly written post.
 

This article builds on ideas from this one, but that one is not a prerequisite. 


In Superintelligence, Nick Bostrom proposed the terminology of oracles, genies, and sovereigns. These metaphors are clear and have achieved some popularity. I recommend this post by Katja Grace for an overview and some further thinking.

This terminology might be incomplete. I think that oracles have a close parallel to genies, but there's no information-only parallel to sovereigns. Here, I propose two options: informers and controllers.

 

Oracles

To reiterate from Superintelligence, an oracle is a system that answers questions. We have similar systems now, though they are weaker than what Bostrom considered. When you enter search queries into Google or many other search programs, you are using them as oracles. Pure functions don't modify the external world and, in essence, answer particular queries. Some consultants are asked to solve particular questions or problems. These all represent types of weak oracles.
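For instance, here's a toy pure function of my own (not from Superintelligence) treated as a very weak oracle: it answers one narrow query without reading or modifying any external state.

```python
from datetime import date

def days_until(deadline_iso: str, today_iso: str) -> int:
    """A pure function as a (very) weak oracle: it answers the query
    "how many days remain until the deadline?" and touches nothing else."""
    return (date.fromisoformat(deadline_iso) - date.fromisoformat(today_iso)).days

# Example query: days_until("2030-01-01", "2029-12-25") returns 7
```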

Informers

In contrast to an oracle, an informer is a system that actively pushes select information to recipients. AI assistants are starting to veer into this territory; they suggest particular actions without being prompted.

Some prompts given by Siri include:

"Dinner with Josh looks important! Turn on Do Not Disturb until the end of this event!" - link

"Call in to Portfolio work Session. Event at 10:00 AM found in Calendar" - link

"Tell Mum Happy Birthday! Birthday found in Contacts" - link

A relevant suggestion that surprised one user.

Technical systems also have many kinds of push-messaging systems; these would also count under the informer category, though they are more mundane. Modern recommender systems (like YouTube and Facebook recommendations) are arguably examples of informers.

Controllers

Lastly, we have the controller. The controller is the extreme form of the informer: at this point, prompts are available for nearly all human or organizational decision making, and they're good enough to dominate any alternatives the recipient would come up with on their own. Imagine a form of Siri that makes suggestions on every decision in your life, in an environment where you don't have better alternatives. Any deviation you might decide on would, in expectation, make you worse off.

For an individual, a controller might make unprompted suggestions like,

"Tomorrow, you should invest $11,305 from your 401K into this new assortment of 21 ETFs."

"Here's your action plan for this week. It includes details about your diet, workout routine, and a list of specific jokes you should make at your upcoming meetings."

"I've identified your top 4 likely romantic partners in the world. Here's the optimal strategy for interacting with them to make the best decision."

"Here's the text of the email you should send to your landlord."

Controllers wouldn't directly have decision-making power, but if they were trusted enough by their recipients, they would effectively control them (with the purpose of optimizing things these recipients help specify). Imagine a well-intentioned mentor or parental figure who is extremely attentive and almost always right.

Businesses have roles called controllers, which have some parallels here. In some settings, the financial controller is the one who has to approve every major expense. More specifically, they are trusted to make judgments about whether each expense will contribute positive net value.

I imagine that on the surface, controller scenarios would seem dystopian to many people. I think the devil is in the details. If people overtrust controllers, they could clearly be harmful, but there are many positive scenarios too. Note that it should be possible to use controllers to help fix some of the problems we might have with controllers.

I don't have any particular line in the sand to differentiate oracles, informers, and controllers. Perhaps one might be something like,

If a system that prompts you with information causes over 10% of the variance in your decisions, it's an informer. If it's over 70%, it's a controller.
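As a toy illustration of that rule (the thresholds are the arbitrary ones above, and the "share of decision variance" input is a hypothetical measurement, not something I know how to compute in practice):

```python
def classify_system(decision_variance_share: float) -> str:
    """Classify a prompting system by the (hypothetical) share of variance
    in the recipient's decisions that its prompts account for."""
    if decision_variance_share > 0.70:
        return "controller"
    if decision_variance_share > 0.10:
        return "informer"
    return "oracle"  # below both thresholds: closer to a plain oracle

# classify_system(0.05) -> "oracle"; classify_system(0.4) -> "informer"; classify_system(0.9) -> "controller"
```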

Decision-Relevant Differences

I'd expect that in a world with competent controllers, oracles would be considered feeble and irrelevant. 

The main factor in deciding whether a system should be an oracle or a controller is relative trust (how much you trust the system over yourself). The more one trusts a system, the more autonomy it will be granted. As capabilities progress, oracles will increasingly be used as controllers.

The concrete technical difference is trivial for general-purpose systems. One could always have a simple function run every minute and ask an oracle, "What is the most important thing I could know now, and is it valuable enough for me to be aware of at this point?" The same trick could trivially convert a genie into a sovereign. In a world with competitive pressures, some agents would likely grant autonomy to such systems quickly, even if doing so might be risky.
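A minimal sketch of that oracle-to-informer wrapper, assuming a hypothetical `ask_oracle` call that returns a dict with `valuable_enough` and `message` fields, and a hypothetical `notify` push channel (all invented for illustration):

```python
import time

def run_as_informer(ask_oracle, notify, interval_seconds=60):
    """Wrap a question-answering oracle in a polling loop so that it
    pushes information unprompted, i.e. behaves as an informer."""
    while True:
        answer = ask_oracle(
            "What is the most important thing I could know now, and is it "
            "valuable enough for me to be aware of at this point?"
        )
        if answer.get("valuable_enough"):
            notify(answer["message"])  # push the information without being asked
        time.sleep(interval_seconds)
```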

Relevance to Forecasting 

Prediction markets and prediction tournaments are typically imagined as oracles. I think that as they become more powerful, such systems will move in the direction of becoming controllers. This could happen through the use of meta-predictions in the trivial case, like a system asking itself, "Which forecasting questions would produce the most value, conditional on being investigated?"
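A toy sketch of that meta-prediction step, where `estimate_conditional_value` is a hypothetical scorer standing in for whatever forecasting machinery would actually estimate the value of investigating a question:

```python
def pick_questions_to_investigate(candidate_questions, estimate_conditional_value, budget=10):
    """Rank candidate forecasting questions by a (hypothetical) estimate of
    the value produced conditional on investigating them; return the top few."""
    ranked = sorted(candidate_questions, key=estimate_conditional_value, reverse=True)
    return ranked[:budget]
```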


Thanks to Nuño Sempere for comments on this piece.

Comments

Reminds me of Scott's story from way back when about the Whispering Earring: http://web.archive.org/web/20121008025245/http://squid314.livejournal.com/332946.html

(I don't have any specific point, I'm not sure the story has any specific lesson that I endorse, it just seemed interestingly related.)

Good find, I just read that. It's clearly an example of the controller.

Note that the story is quite dystopian (as I assumed people would consider such scenarios to be).

One quick point I realized when reading this was that "common sense", or "cultural knowledge", arguably acts like a controller now. Many people lean heavily on what they predict others would be okay with when deciding on an action to take. Just because their own brain is doing some of the computation doesn't change the main interaction.