Followup to: Is Google Paperclipping the Web? The Perils of Optimization by Proxy in Social Systems

tl;dr: In this installment, we look at methods of avoiding the problems related to optimization by proxy. Many potential solutions cluster around two broad categories: Better Measures, and Human Discretion. Distribution of decisions to the local level is a solution that seems more promising and is examined in more depth.

In the previous article I had promised that if there was a good reception, I would post a follow-up article to discuss ways of getting around the problem. That article made it to the front page, so here are my thoughts on how to circumvent Optimization by Proxy (OBP). Given that the previous article was belabored over at least a year and a half, this one will be decidedly less solid, more like a structured brainstorm in which you are invited to participate.

In the comments of the previous article I was pointed to The Importance of Goodhart's Law, a great article, which includes a section on mitigation. Examining those solutions in the context of OBP seems like a good skeleton to build on.

The first solution class is 'Hansonian Cynicism'. In combination with awareness of the pattern, pointing out that various processes (such as organizations) are not actually optimizing for their stated goal but for some proxy creates cognitive dissonance for the thinking person. This sounds more like a motivation to find a solution than a solution itself. At best, knowing what goes wrong, you can use the process in a way that is informed by its weaknesses. Handling with care may mitigate some symptoms, but it doesn't make the problems go away.

The second solution class mentioned is 'Better Measures'. That is indeed what is usually attempted. The 'purist' approach to this is to work hard on finding a computable definition of the target quality. I cannot exclude the possibility of cases where this is feasible, but no immediate examples come to mind. The target qualities that I have in mind are deeply human (quality, relevance, long-term growth) and boil down to figuring out what is 'good'; computing them is no small matter. Coherent Extrapolated Volition is the extreme end of this approach, boiling a few oceans in the process, and certainly not immediately applicable.

A pragmatic approach to Better Measures is to simply monitor better, making the proxy more complex and therefore harder to manipulate. Discussion with Chronos in the comments of the original article was along those lines. By integrating user activity trails, Google makes it harder to game the search engine. I would imagine that if they integrated those logs with Google Analytics and Google Accounts, they would significantly raise the bar for gaming the system, at the expense of user privacy. Of course, by removing most amateur and white/gray-hat SEOs from the pool, and given the financial incentives that exist, they would make it significantly more lucrative to game the system, and therefore the serious black-hat SEOs that can resort to botnets, phishing and networks of hacked sites would end up being the only game in town. But I digress. Enriching the proxy with more and more parameters is a pragmatic solution that should work in the short term as part of the arms race against manipulators, but it does not look like a general or permanent solution from where I'm standing.
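To make the 'enrich the proxy' idea concrete, here is a minimal sketch of a composite score that blends several signals, so that gaming any single one moves the total less. The signal names and weights are invented for illustration; this is not Google's actual ranking function.

```python
# Illustrative sketch only -- these signal names and weights are invented,
# not any real search engine's ranking function.

def composite_score(page):
    """Combine several independent signals into one proxy score.

    `page` is assumed to be a dict of normalized signals in [0, 1].
    The more signals the proxy aggregates, the more of them a
    manipulator has to fake at once.
    """
    signals = {
        "link_authority": 0.4,   # classic PageRank-style signal
        "click_through":  0.3,   # user activity trails
        "dwell_time":     0.2,   # do visitors stay or bounce?
        "freshness":      0.1,
    }
    return sum(weight * page.get(name, 0.0) for name, weight in signals.items())

# A page that games only one signal no longer outranks an honestly good page.
spammy = {"link_authority": 1.0, "click_through": 0.05, "dwell_time": 0.05, "freshness": 0.9}
honest = {"link_authority": 0.6, "click_through": 0.7, "dwell_time": 0.8, "freshness": 0.5}
print(composite_score(spammy), composite_score(honest))  # 0.515 vs 0.66
```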

A special case of 'Better Measures' is that of better incentive alignment. From Charlie Munger's speech A Lesson on Elementary, Worldly Wisdom As It Relates To Investment Management & Business:

From all business, my favorite case on incentives is Federal Express. The heart and soul of their system—which creates the integrity of the product—is having all their airplanes come to one place in the middle of the night and shift all the packages from plane to plane. If there are delays, the whole operation can't deliver a product full of integrity to Federal Express customers.

And it was always screwed up. They could never get it done on time. They tried everything—moral suasion, threats, you name it. And nothing worked.

Finally, somebody got the idea to pay all these people not so much an hour, but so much a shift—and when it's all done, they can all go home. Well, their problems cleared up overnight.

In fact, my initial example was a form of naturally occurring optimization by proxy, where the incentives of the actors are aligned. I guess stock grants and options are another way to align employee incentives with company incentives. As far as I can tell, this has not been generalised either, and does not seem to reliably work in all cases, but where it does work, it may well be a silver bullet that cuts through all the other layers of the problem.

Before discussing the third and more promising avenue, I'd like to look at one unorthodox 'Better Measures' approach that came up while writing the original article. Assume that producing the proxy requires possessing the target quality, and that faking it without that quality is computationally intractable. The only real-world case where I can see an analog to this is cryptography. Perhaps we can stretch OBP so that WWII cryptography can be seen as an example of it. By encrypting with Enigma and keeping their keys secret (the proxies), the Axis forces aimed to maintain the secrecy of their communications (the target quality). When the Allies were able to crack Enigma, this basic assumption stopped being reliable. Modern cryptography makes this actually feasible. As long as the keys don't fall into the wrong hands, and assuming no serious flaws in the cryptographic algorithms used, the fact that a document's signature can be verified with someone's public key (the proxy) authenticates the document as coming from the holder of the corresponding private key (the target quality). While this works in cryptography, it may be stretching the OBP analogy too far. On the other hand, there may be a way to transfer this strategy to solve other OBP problems that I have not yet seen. If you have any thoughts around this, please put them forward in the comments.
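As a minimal sketch of the cryptographic version of this idea (my own illustration, assuming the third-party pyca/cryptography package is installed): producing a valid signature, the proxy, requires possession of the private key, which stands in for the target quality of actually being the claimed sender, while anyone can verify it cheaply.

```python
# Sketch of "a proxy that is infeasible to fake": RSA signatures via the
# pyca/cryptography package (assumed installed; not part of the original post).
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.exceptions import InvalidSignature

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"attack at dawn"
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)

# Only the holder of the private key (the target quality) can produce this proxy.
signature = private_key.sign(message, pss, hashes.SHA256())

# Anyone can check the proxy; forging it without the key is computationally infeasible.
try:
    public_key.verify(signature, message, pss, hashes.SHA256())
    print("authentic")
except InvalidSignature:
    print("forged or tampered")
```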

The third class of solutions is 'Human Discretion'. This divides into two diametrically opposite approaches. One is 'Hierarchical rule', inspired by the ideas of Mencius Moldbug. Managers are the masters of all their subordinates, and the slaves of their higher-ups. No rules are written, so there are no proxies to manipulate, except of course for human discretion itself. Besides the tremendous potential for corruption, this does not transfer well to automated systems. Laws may be a luxury for humans, but for machines, code is everything; there is no law-independent discretion that a machine can apply, even if threatened with obliteration.

The opposite is what the article calls 'Left anarchist Ideas'. I think that puts too much of a political slant on an idea that is much more general, so I call it simply 'distribution'. The idea here is that if decisions are taken locally, there is no big juicy proxy to manipulate; the proxy is splintered into multitudes of local proxies, each different from the others. I think this is how evolution can be seen to deal with this issue. If, for instance, we see the immune system as an optimizer by proxy, the ability of some individuals to survive a virus that kills others demonstrates that the virus has not fooled everyone's immune system. Perhaps the individuals that survived are vulnerable to other threats, but that would mean a perfect storm of diseases exploiting everyone's weaknesses would have to hit a population at the same time to extinguish it; not exactly a common phenomenon. Nature's resilience through diversity usually saves the day.
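As a toy illustration of why splintering one global proxy into many local ones helps (my own sketch, not something from the original article): a manipulator who reverse-engineers a single shared proxy fools everyone at once, while against a diverse population the same trick only fools the small fraction whose local checks happen to coincide with it.

```python
import random

# Toy model (my own, for illustration): each agent accepts an item if it passes
# the agent's local proxy check, modelled here as a random subset of features
# that this particular agent happens to look at.
random.seed(0)
FEATURES = range(10)

def make_local_proxy():
    """Each agent checks a different random subset of features."""
    return set(random.sample(FEATURES, 3))

def is_fooled(checked_features, faked_features):
    """An agent is fooled if every feature it checks has been faked."""
    return checked_features <= faked_features

# A manipulator can only afford to fake a few features.
faked = {0, 1, 2}

# Scenario 1: one global proxy shared by the whole population.
global_proxy = {0, 1, 2}
print("everyone fooled by faking the global proxy:", is_fooled(global_proxy, faked))

# Scenario 2: 1000 agents, each with its own local proxy.
agents = [make_local_proxy() for _ in range(1000)]
fooled_fraction = sum(is_fooled(a, faked) for a in agents) / len(agents)
print("fraction of diverse agents fooled:", fooled_fraction)
```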

So distribution seems to be a promising avenue that deserves further examination. The use case that I usually gravitate towards is that of the spread of news. Before top-down mass media, news spread from mouth to mouth. Humans seem to have a gossip protocol hard-coded into their social function centre that works well for this task. To put it simply, we spread relevant information to gain status, and the receivers of this information do the same, until the information is well known among all those who are reachable and interested. Mass media took over this function for a time, especially with regard to news of general interest, but of course at the level of the social circle the old mechanisms kept working uninterrupted. With the advent of social networks, the old mechanisms are reasserting themselves, at scale. The asymmetric following model of Twitter seems well suited for this scenario, and re-tweeting also helps broadcast news further than the original receivers. Twitter is now often seen as a primary news source, where news breaks before it makes the headlines, even if the signal-to-noise ratio is low.

What is interesting in this model is that there is a human decision at each point of re-broadcast. However, by the properties of scale-free networks, it does not require many decisions for a piece of information to spread throughout the network. Users that spread false information or 'spam' are usually isolated from the graph, and therefore end up with little or no influence (with a caveat for socially advantageous falsities). Bear in mind that Twitter is not built or optimised around this model, so these effects appear only approximately. There are a number of changes that would make these effects much more pronounced, but that is a topic for another post. What should be noted is that, contrary to popular belief, this hybrid man-machine system of news transmission scales pretty well. The fact that human judgment is involved in multiple steps of the process does not necessarily make the system slower, since nobody is on the critical path and nobody has the responsibility of filtering all the content. A few decisions here and there are enough to keep the system working well.
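A rough simulation of the diffusion claim (my own sketch, assuming the networkx package; the forwarding probabilities are made up): each node independently decides whether an item is worth passing on, yet an item most recipients judge relevant still reaches a large share of a scale-free network in a few hops, while junk that almost nobody forwards dies out near its source.

```python
import random
import networkx as nx  # third-party; assumed available for this sketch

random.seed(1)
# Barabasi-Albert graph as a stand-in for a scale-free follower network.
G = nx.barabasi_albert_graph(n=2000, m=3)

def spread(graph, seed_node, p_forward):
    """Breadth-first diffusion where every informed node independently
    decides (with probability p_forward) to re-broadcast to each neighbour."""
    informed = {seed_node}
    frontier = [seed_node]
    hops = 0
    while frontier:
        next_frontier = []
        for node in frontier:
            for neighbour in graph.neighbors(node):
                if neighbour not in informed and random.random() < p_forward:
                    informed.add(neighbour)
                    next_frontier.append(neighbour)
        if next_frontier:
            hops += 1
        frontier = next_frontier
    return len(informed), hops

# Relevant news: most recipients judge it worth passing on.
print("relevant:", spread(G, seed_node=0, p_forward=0.5))
# Spam/noise: almost nobody re-broadcasts, so it stays local.
print("spam:    ", spread(G, seed_node=0, p_forward=0.02))
```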

Transferring this social-graph approach to search is less than straightforward. Again, looking at human societies pre-search-engine, people would develop a reputation for knowledge in a specific field, and questions in their field of expertise would find their way to them sooner or later. If an expert did not have an answer but another did, a shift in subjective, implicit reputation would occur, which, if repeated on multiple occasions, would result in a shift in the relative trust that the community places on the two experts. Applying this to internet search does not seem immediately feasible, but search engines like Aardvark and Q&A sites like StackOverflow and Yahoo! Answers seem to be heading in such a direction. Wikipedia, by having a network of editors trusted in certain fields, also exhibits similar characteristics. The answer isn't as obvious in search as it is in news, and if algorithmic search engines disappeared tomorrow the world wouldn't have a plan B immediately at hand, but the outline of an alternative is beginning to appear.
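A minimal sketch of the implicit reputation shift described above (my own toy model; the update rule and names are hypothetical, not any existing site's algorithm): when one expert answers a question that another could not, the community's relative trust drifts toward the first, and future questions in that topic are routed accordingly.

```python
# Toy model of implicit, per-topic reputation (names and update rule are
# hypothetical illustrations, not any existing site's algorithm).
from collections import defaultdict

trust = defaultdict(lambda: 1.0)  # community trust per (expert, topic)

def record_outcome(expert, topic, answered_well, step=0.2):
    """Nudge trust up when an expert answers well in a topic, down otherwise."""
    key = (expert, topic)
    trust[key] = max(0.0, trust[key] + (step if answered_well else -step))

def best_expert(experts, topic):
    """Route a new question to whoever the community currently trusts most."""
    return max(experts, key=lambda e: trust[(e, topic)])

experts = ["alice", "bob"]
# Bob fails to answer a Python question; Alice answers it well.
record_outcome("bob", "python", answered_well=False)
record_outcome("alice", "python", answered_well=True)
print(best_expert(experts, "python"))  # -> alice
```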

To conclude, the progression I see in both news and search is this:

  1. mouth-to-mouth
  2. top-down human curated
  3. algorithmic
  4. node-to-node (distributed)

In news this is loosely instantiated as: gossip -> newspapers -> social news sites -> Twitter-like horizontal diffusion, and in search the equivalents are: community experts -> libraries / human-curated online directories -> algorithmic search engines -> social(?) search / Q&A sites. There seems to be a pattern where things come full circle from horizontal to vertical and back to horizontal, with the intermediate vertical step serving as a stopgap while our natural mechanisms adapt to the new technology, scale, and vastness of information before ultimately living up to the challenge. There may be some optimism involved in my assessment, as the events described have not really taken place yet. The application of this pattern to other instances of OBP, such as governments and large organizations, is not something I feel I could undertake for now, but I do suspect that OBP can provide a general, if not conclusive, argument for distribution of decision making as the ultimately stable state of affairs, without considering smarter-than-human AGI/FAI singletons.

Update: Apparently, Digg's going horizontal. This should be interesting.

Update2: I had mixed up vertical and horizontal. Another form of left-right dyslexia?

Comments (17)

Temple Grandin has written about the importance of having a measure which is simple enough that it gets applied. Her example was whether cattle fall down -- this tracks health, breeding, nutrition, and flooring (and possibly more -- I don't have the source handy). If there are too many features to keep track of or if the standards are vague, then people doing the work will ignore most or all of the measures.

Grandin suggests that her visual thinking is more apt to lead to usable standards than the more common non-autistic approach of thinking in words, but iirc, she doesn't go further into what it takes to come up with good measures.

Thanks for mentioning Temple Grandin, I saw her TED speech and she is awesome.

There seems to be a tension here: too complex and the metric doesn't get applied, too simple and it gets abused. Of course, in the case of cattle there is no real manipulator, so no problems. But in cases where there are conflicting incentives, this could pose a problem. Tax collection may be one such case: too many criteria and they become inapplicable, too few and you have manipulation issues.

I think you get manipulation from the people who are supposed to put the regulations about cattle into effect.

I agree with your general point.

The use case that I usually gravitate towards is that of the spread of news.

One other domain which is particularly vulnerable to OBP is the organization of software product development projects. Common proxies include "productivity" or "man-hours" or "defects", as opposed to the underlying goal which is generally some instance of the class "actually solving a problem for some number of actual people", which is just as hard to actually measure as "page quality" in the search space.

This has been the subject of an entire discipline, called "software engineering", for a little over 40 years now (the term was coined in 1968), and I'd summarize the results as "disappointing".

Your discussion of "distributed" mitigations to the downsides of OBP brings to mind one promising approach known as the "Theory" of Constraints (scare quotes owed to my not being quite sure the term theory is appropriate), which among other things emphasizes decentralized decision making, in the spirit of the anarcho-syndicalist branch of "human discretion" approaches.

The idea is that the kind of work that software development consists of can be approximated as a steady stream of small but significant decisions, where the consequences of inconsistent decisions can be dire. In this context, the people doing the work have the best access to the kind of information that motivates big ("executive") decisions, but the usual approaches to software engineering grant these people little to no authority over big decisions. Conversely, they are given generally poor guidance on the small but significant decisions.

In practice, this means for instance that instead of "project manager assigns coding tasks to engineers" which is rife with opportunities for OBP, the more promising approaches tend to rely on "engineers sign up for tasks they feel maximize their contribution".

Another implication is that a posteriori control over the quality of decisions is preferred over a priori control: feedback trumps planning. This is again similar to the Wikipedia model where everyone can edit and is trusted to do so, but violators of the implicit contracts are quickly identified and their damage repaired (or, at least, that's how it's supposed to work; there are many who suspect this is an idealization publicized as the real thing for marketing purposes).

Recommended reading: Eliyahu Goldratt's The Goal for the background, and books in the "lean/agile" space for specific applications - I've read and liked the Poppendiecks'.

I have been toying with the idea of making a catalogue of all the instances of OBP that have been identified. Software engineering was on my mind, but not in nearly the amount of detail that you went into. I suppose DVCS systems like Git also take software engineering (open source especially) in that direction.

Also, I love this phrase:

Another implication is that a posteriori control over the quality of decisions is preferred over a priori control: feedback trumps planning.

It gets the point across very clearly. Don't be surprised if I steal it :)

Classic example of optimization by proxy: "fill-in-the-bubble" standardized tests taken by students, such as the SATs.

Not just fill-in-the-bubble - any standardized test, really.

Driving tests are pretty good proxies. Except for the part where they make you parallel park between poles instead of between cars.

This is a great example.

I interpreted RobinZ to mean "any sit down with paper and a writing instrument" test. Driving tests are "standardized" in some sense, but still seem to be good proxies in a way that the SAT is not.

That is what I meant, yes.

Driving tests are "standardized" in some sense, but still seem to be good proxies in a way that the SAT is not.

I think the SAT is a better proxy than driving tests. I see very little effort to produce good driving tests, and little incentive to do so.

Goodhart's law on Less Wrong wiki (new stub article relating this concept to others on the wiki).

taw:

tl;dr summary requested for all articles.

I thought the first paragraph covered that, but I added one anyway.

taw:

I could figure out what problem you're addressing, but I didn't see if you had any new solution to the problem buried somewhere deep in the article or not.

Well, the article discusses a bunch of solutions and, at the end, my current favourite, one which I will probably put some code behind if circumstances allow. I am not sure it is novel (the Goodhart's Law article makes a related mention), but I don't think I've seen it discussed at this level of detail.

[anonymous]:

Another partial solution is to use a metametric that combines many different metrics in a non-linear fashion. So, for example, if you have three metrics of what you want to maximize, say X, Y and Z, then looking at the product XYZ is more likely to work well. The key to using the product here rather than a linear sum is that it rules out 'most efficient' solutions in which two of the metrics are tiny and one is very large. Polynomial combinations of existing metrics, if properly constructed, can be much more effective than simple linear sums. There have been some attempts with genetic algorithms to use this sort of thing to prevent bad optimization in those contexts, but I don't know any of the details.
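A quick numerical illustration of this point (my own sketch): under a linear sum, a solution that nearly zeroes out two metrics and maxes the third can still come out ahead of a balanced one, while under the product it collapses.

```python
# Illustration of the comment above: combining metrics X, Y, Z by product
# rather than by linear sum punishes degenerate "max one, ignore the rest" solutions.

def linear_metametric(x, y, z):
    return x + y + z

def product_metametric(x, y, z):
    return x * y * z

balanced = (5.0, 5.0, 5.0)    # all three goals served moderately well
lopsided = (16.0, 0.1, 0.1)   # one metric gamed hard, the others neglected

print(linear_metametric(*balanced), linear_metametric(*lopsided))    # 15.0 vs 16.2
print(product_metametric(*balanced), product_metametric(*lopsided))  # 125.0 vs 0.16

# Under the sum the lopsided solution wins; under the product it collapses.
```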