That misses the point of the article a bit, IMO: that some "horns" have a threshold beyond which the standard diminishing returns formula is the wrong way to think about it, and that once this threshold is crossed you open up a whole slew of new parameters to toy with, as if you'd just started on something entirely new (which is often the case in some sense).
For instance, you can learn more and more about the rules of poker (to take a very limited field that runs into diminishing returns very fast) and quickly exhaust everything there is to learn... until you master them so completely that a whole new thing suddenly opens up around counting cards and deception. That's a somewhat different thing, not directly about the rules of poker but an implication of them, and upon further analysis it reveals why certain rules work certain ways or why certain rules are of key importance.
Then, once you completely master that, you get into this whole thing about probabilities and game theory and abstract up into the metagame with agents simulating other agents' simulations of you... but that's a bit beyond my own level of understanding, to be honest, so I'll cut the example short here.
Basically, the guy who stopped trying to learn more about poker rules because he saw diminishing returns and never pushed forward enough to "get" the things that lead to counting cards and deception and metagame will have missed out on the point of poker entirely.
At least, if I'm interpreting the main article correctly, there are some fields of knowledge which behave in a similar fashion, and we might benefit from having good heuristics for figuring out which fields will behave like this so that we can improve our research efficiency... or something.
That some "horns" have a threshold beyond which the standard diminishing returns formula is the wrong way to think about it, and that once this threshold is crossed you open up a whole slew of new parameters to toy with, as if you'd just started on something entirely new (which is often the case in some sense).
(paragraphs 1-2) This idea that there's a transition point where returns suddenly start increasing appears nowhere in the physical instance of a powder horn (when you finally turn it over, do you get even more powder than your first easy scoop?). If anything, the metaphor implies that returns hit zero the instant after you turn the horn over, because at that point the horn is completely empty.
(paragraphs 3-4) OP then claims that some things aren't horns because there is some transition effect where returns suddenly start increasing with more investment. But the claim falls completely flat because exactly one example is given, and a pretty dubious one at that.
Basically, the guy who stopped trying to learn more about poker rules because he saw diminishing returns and never pushed forward enough to "get" the things that lead to counting cards and deception and metagame will have missed out on the point of poker entirely.
Those may be interesting and complex, but your description matches up fine with diminishing returns.
At least, if I'm interpreting the main article correctly, there are some fields of knowledge which behave in a similar fashion, and we might benefit from having good heuristics for figuring out which fields will behave like this so that we can improve our research efficiency... or something.
At this point in the discussion, I think we would benefit more from establishing that there are such fields at all.
Yes, all good points, but one minor bump when I read your comment:
Those may be interesting and complex, but your description matches up fine with diminishing returns.
I don't quite understand what you mean. Given an agent who values the kind of fun that comes from playing poker at more advanced levels, I do see a point at which poker suddenly starts generating more fun-return per unit of learning-to-play-poker, even though diminishing returns still applied right up until that point.
Then a bunch of increments give high value before diminishing returns start being apparent again, at which point there may or may not be more deviations from the standard pattern (I'm not aware of any personally, though I have some minor evidence that there might be one more "boost" point).
So yeah. If you'd care to explain how this actually matches with standard diminishing returns theory, I'd be curious to learn about it. If I imagine plotting what I observe of poker-learners on a graph and what a standard diminishing returns model would predict, I don't see the same curves at all.
Diminishing returns doesn't mean that you can't get more out of additional investment - that'd be 'zero returns', after all. But the returns you get out of each additional increment of investment will be less than the previous returns. The thrill you get from your first poker game where you successfully bluffed your way to victory is greater than the 'return' you get millions of hands later from noticing that you are bluffing a decimal point too often, or whatever. (I don't play poker, so I don't know what the relevant examples would be.)
Besides utility, you could also express it in terms of expected value or winning probability; since poker is partially random, there's always going to be a limit on performance where even the best player will lose, and initial skill gains will move you closer to that limit than later refinements of computer modeling or whatever. Think Pareto.
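In symbols, if it helps (my own notation, introduced purely for this illustration and not anything from the article): let R(x) be the cumulative value gained after x units of investment. Then the cases being argued over look like this:

```latex
% R(x) = cumulative value after x units of investment (my notation, illustrative only).
% Uses amsmath's \text.
% Diminishing returns: each increment still pays, but less than the one before.
\[ \text{diminishing returns:}\quad R'(x) > 0 \text{ and } R''(x) < 0 \]
% Zero returns: further investment pays nothing at all.
\[ \text{zero returns:}\quad R'(x) = 0 \]
% The claimed "spike": past some threshold x_0, marginal returns grow again for a while.
\[ \text{spike:}\quad R''(x) > 0 \text{ on some interval beyond a threshold } x_0 \]
```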
Hmm. I'm not sure whether I'm confused about this, or whether I didn't adequately express the theoretical value unit I was picturing.
If I naively picture the cost as a flat "number of rules or tactics learned", and the returns as an ideal "fun-value per hand played", then for most people there's going to be a "spike" at some point where the fun-value-per-tactic goes up much faster than a diminishing-returns model would predict at that point in the graph, almost as fast as (if not faster than) the first few increments at the very beginning.
Depending on how saturated the new player is with low-complexity gameplay value (and the diminishing returns thereof), it might even be that the curve actually accelerates up to that spike, and after the spike finally starts looking more like a diminishing returns graph.
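To make the shape I'm picturing concrete, here is a toy sketch in Python (the numbers, the `threshold` location, and the size of the bump are all made up purely for illustration, not fitted to anything):

```python
# Toy comparison of marginal "fun-return" per unit of learning under a
# standard diminishing-returns curve versus one with a spike after a
# mastery threshold. All numbers are arbitrary and purely illustrative.
import numpy as np

effort = np.arange(0, 30)          # units of learning invested so far

# Standard diminishing returns: every increment is worth less than the last.
standard = 10.0 / (1.0 + effort)

# Spiked variant: the same decay, plus a temporary boost once enough
# rules/tactics have been mastered (threshold arbitrarily placed at 15).
threshold = 15
bump = 8.0 * np.exp(-0.5 * ((effort - threshold) / 2.0) ** 2)
spiked = standard + bump

for e, s, k in zip(effort.tolist(), standard.tolist(), spiked.tolist()):
    print(f"effort={e:2d}  standard={s:5.2f}  spiked={k:5.2f}")
```

Under the standard curve the marginal return only ever shrinks; the spiked variant shrinks, swells again around the threshold, and then resumes shrinking, which is the shape I'm describing above.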
I suppose there could be spikes like that - if one knows n-1 rules of chess, it's not fun at all, while at n rules one can actually start playing. But I don't know of any games where this spike would come after, say, months or years of practice.
Any game where increased playskill changes the shape of the tactical space, I'd think. For example, Street Fighter 2. Yeah, the arcade game.
It's easier to show than tell, but basically there's a strategy, made up of grabs and weak attacks, that's easy to execute but hard-ish to defend against. Two players who are skilled enough to use that strategy but not skilled enough to defeat it will find the game degenerate and boring, but once they're skilled enough to get past that gate they'll find a space of viable tactics that's a lot broader and more engaging.
Yes, this is a very good example. The Street Fighter games change completely in landscape once you get past several key difficulty walls, each of which can require months of training or more for people not already adept at the genre.
Thanks for the good example.
fezziwig gives a pretty good example; the Street Fighter series in general can be considered to have an uncommonly high number of instances like this.
That said, I agree with your earlier point that the real question is whether there are any fields of knowledge that behave like this (and would be useful to us), or whether the effect depends on key patterns of game logic that aren't present in nature, or whether some other explanation makes these cases too limited in scope to be worth exploring the way I thought the OP was suggesting.
I downvoted on account of the use of "The Way" as a name for a set of useful techniques in the art of human rationality. It won't be understood by casual readers, and it sounds very cultish.
Incidentally, the article would be greatly improved by the addition of specific examples of the "huge range of benefits" supposedly available to people with mastery of the popular rationality techniques promoted on LessWrong, but not to those struggling at the narrow end of the horn.
It would probably be useful to compile a list of times in the past when coming out the other side of the bull's horn was worth it, if you're trying to find a common thread.
What immediately comes to mind as an obvious example is Newtonian physics. There was a period in the history of science when it looked like we had figured out almost everything worth knowing in this field. That turned out not to be the case, in a big way. The inconsistencies in the observational data of the time were clues that there might be a deeper, more general theory, and it seems like this would be a good place to start.
What fields are there that seem mostly figured out but with a few nagging inconsistencies? Continuing the physics theme, the standard model does a heck of a job, but there are still dark matter/energy and gravity to figure out. It's clear from the number of top-level minds devoted to studying these things that people already think this horn is worth bulldozing through, though. It's not that people don't think it's worth digging; it's just really hard.
Maybe there are fields that are similarly saturated theory-wise but have inconsistencies that aren't being thought about a lot?
The key is confirmed experimental results that differ from what established theory predicts. When a theory is very well established, there is a tendency to dismiss contradictory results out of hand as probable errors. Sometimes that "theory of error" is accepted without the errors ever being identified. This can especially happen when there is mixed success in confirmation, which is likely when a phenomenon is not understood and is difficult to set up.
Nuclear physics is such a field: quantum mechanics is incredibly successful at making accurate predictions when the environment is simple, i.e., in a plasma.
In the solid state, however, applying quantum mechanics (notably, to predict fusion probabilities) requires simplifying assumptions.
Seeking to test the accuracy of these assumptions, Pons and Fleischmann, starting in about 1984, found a heat anomaly. The effect was difficult to set up: it required loading deuterium into palladium at a ratio higher than was normally considered possible, and most palladium samples didn't work.
They were not ready to announce the work, but the University of Utah forced them, for intellectual property reasons, to hold a press conference. All hell broke loose; it is said that for a few months the bulk of the U.S. discretionary research budget was spent trying to reproduce their results.
Most of these efforts were based on inadequate information about the original research, most failed (for reasons that are now understood), and a cascade of opinion developed that there was nothing behind the finding but incompetence.
However, some researchers persisted, and eventually there were many independent confirmations, and the heat effect was found, by a dozen research groups, to be correlated with the production of helium, at the ratio expected for deuterium fusion to helium, within experimental error. Helium was not expected to be a normal product of deuterium fusion (it's a rare branch), and when normal (hot) fusion does result in helium, there is always a gamma ray, required by conservation of momentum. No gamma rays.
The mechanism is not known. What I've written here is what you will find if you look for recent reviews of the field in mainstream journals. (See especially Storms, "Status of cold fusion (2010)," Naturwissenschaften.)
But the opinion is still extremely common that the whole thing is "pathological science," or worse.
Until the mechanism is known, this might be a laboratory curiosity, or it could open up a whole new territory, with vast implications. More research is needed.
Some time ago I learned of the metaphor of 'digging the bull's horn'. This might sound a little strange, since horns are mostly hollow, but imagine a bull's horn used to store black powder. In the beginning the work is easy and you can scoop out a lot of powder with very little effort. As you dig down, though, each scoop yields less powder as you dig into the narrow part of the horn, until the only way you can get out more powder is to turn the horn over and dump it out.
It's often the same way with learning. When you start out in a subject there is a lot to be learned (both in quantity of material you have not yet seen and in quantity of benefits you have to gain from the information), but as you dig deeper into a subject the useful insights come less often or are more limited in scope. Eventually you dig down so far that the only way to learn more is to discover new things that no one has yet learned (to stretch the metaphor, you have to add your own powder back into the horn to have anything left to dig out).
It's useful to know that you're digging the bull's horn when learning because, unless you really enjoy a subject or have some reason to believe that contributing to it is worthwhile, you can know in advance that most of the really valuable insights you'll gain will come early on. If you want to benefit from knowing about as much stuff as possible, you'll often want to stop actively pursuing a subject unless you want to make a career out of it.
But, for a few subjects, this isn't true. Sometimes, as you continue to learn the last few hard things that don't seem to provide big, broadly-useful insights, you manage to accumulate a critical level of knowledge about the subject that opens up a whole new world of insights to you that were previously hidden. To push the metaphor, you eventually dig so deep that you come out the other side to find a huge pile of powder.
The Way seems to be one of those subjects you can dig past the end of: there are some people who have mastered The Way to such an extent that they have access to a huge range of benefits not available to those still digging the horn. But when it comes to other subjects, how do you know? Great insights could be hiding beyond currently obscure fields of study because no one has bothered to dig deep enough. Aside from having clear examples of people who came out the other side to give us reason to believe it's worthwhile to dig really deep on some subjects, is there any way we can make a good prediction about which subjects may be worth digging to the end of the bull's horn?