Time spent with fiction set in some coherent body of work, be it a video game, book, TV series, etc. Usually, the more time you spend with that coherent fictional body, the more immersed you become, which means you can enjoy it more.
I think, like a lot of things, this is an S-curve: it takes a while to get into it before you enjoy your time the most, but eventually you start to get sick of it.
Yes. Incidentally, this is a reason why fanfiction is immensely more rewarding than would be suspected by someone who only looked at the average quality of prose involved.
Railroads in Monopoly.
Railroads in an actual railroad monopoly.
Time spent with individuals - I'd rather spend time with friends than strangers.
I don't understand the last one. Is the thing that is measured here the quality of individuals you spend time with, or the quality of time you spend with individuals, or the amount of time? In any case, you should elaborate.
That clears it up, thanks. The sentence "I'd rather spend time with friends than strangers." just confused me a little because I wasn't sure if you were comparing time spent with friends vs. strangers.
Edit: Now I understand. You were talking about the whole timespan from the start of the friendship until the last moment. I thought at first that you were talking about a single session spent with an individual.
Railroads in an actual railroad monopoly only have this property at small sizes, not at the limit, because the value of new stops decreases as you exploit less and less economically active areas. The fact that the network that's able to reach route N+1 includes route N doesn't make up for the fact that no one was going to N+1 anyway. Plus there are costs to the network from new lines, like new switches needing to be installed, the complexity of managing routes, etc. If you were a railroad exec and you had unlimited resources (so it wasn't merely a question of the costs increasing faster than the benefits), you still wouldn't snap your fingers and cover the surface of the earth with railroad tracks. True examples in the realm of commerce and physical items are pretty much impossible, unless you are a paperclipper.
Other things that have network effects but don't have increasing marginal utility are markets (the marginal stock trader provides no liquidity and makes no trades), Facebook (the marginal account has no friends), telephone networks (the marginal customer makes and receives no calls), etc. Decreasing marginal utility is close to universal. Even trust, which is a very good example, is probably more like one of these tipping-point things than a case of increasing marginal utility in an absolute sense. The marginal value of a trust increment may always be positive, but it decreases past the tipping point.
In the case of a monopoly on something (railroads aren't really the greatest thing to have a monopoly on, because taking the train has so many substitutes; the ideal would be more like water and air), the number of sources that you wish to own is "all of them." If you lose even one source of that something, that's quite bad, worse than losing a second source would be.
In general, there are two ways of avoiding the un-realism of increasing marginal utility - either have there be some upper limit on the valuable stuff that prevents it from getting out of hand, or have the marginal utility only be increasing within some common domain but decreasing eventually. A monopoly is more like the first of these than the second.
But a monopoly wanting all of something isn't the same as increasing marginal utility, it just means that marginal utility is always positive. For increasing marginal utility it has to be the case that each unit increases the value of the monopoly more than the last unit. Once a network has become large enough, you can ignore the existing network for the purposes of comparing the marginal utility of additional nodes in it. For monopolies that aren't based on network effects but pricing power, you get most of the pricing power at market shares significantly less than 100%. So there is some market share increment where you get the benefits of monopoly pricing with your normal cost structure, and the next market share increments don't allow you to increase your prices but still have your cost structure in place, ergo they have a marginal utility less than the monopolizing increment.
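A minimal sketch of that last point, with made-up numbers (the 70% threshold and the margins are my illustrative assumptions, not anything from the comment):

```python
# Toy model: pricing power arrives at some threshold market share,
# so the increment that crosses the threshold is worth far more than
# the increments after it.

def profit(share, threshold=0.7, base_margin=0.05, monopoly_margin=0.20):
    """Profit per unit of total market demand at a given market share."""
    margin = monopoly_margin if share >= threshold else base_margin
    return share * margin

for lo, hi in [(0.6, 0.7), (0.7, 0.8), (0.9, 1.0)]:
    print(f"{lo:.0%} -> {hi:.0%}: marginal profit {profit(hi) - profit(lo):+.3f}")
# 60% -> 70%: +0.110 (this increment unlocks monopoly pricing)
# 70% -> 80%: +0.020
# 90% -> 100%: +0.020 (always positive, but no longer increasing)
```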
Lots of things have increasing marginal utility at some hypothetical margin. But very few things have increasing marginal utility at the margin on which they are utilised, precisely because if people notice that increasing marginal utility, they will increase their consumption, until they hit a new point on the utility curve where the marginal utility is no longer increasing.
For example, shminux, above, talks about education. We can well imagine that education has steeply increasing marginal utility at some levels; once you have made the investment in learning to read, using that knowledge to learn some more things is very cheap compared to the benefits. But people are already aware of this, and so have already acted to do far more than just learn some basics, to the extent that, at the margin, educational consumption appears to be a costly signaling race.
I am not sure about the attention example; there looks to be an issue with units. For example, if we think in terms of percentages, going from juggling 10 things to 9 gives ~11% more attention to the nine remaining things. Going from 2 things to 1 gives 100% more attention to the single remaining thing. And that's just math, not increasing marginal utility.
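To spell out that arithmetic (a quick check, treating attention as a fixed budget split evenly across tasks):

```python
# Relative gain in per-task attention when dropping from n tasks to n-1,
# assuming a fixed attention budget split evenly across tasks.
for n in [10, 5, 3, 2]:
    gain = (1 / (n - 1)) / (1 / n) - 1
    print(f"{n} -> {n - 1} tasks: {gain:.0%} more attention per task")
# 10 -> 9 tasks: 11% more attention per task
# 2 -> 1 tasks: 100% more attention per task
```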
And if we're talking about resources to be amassed by societies, pretty much anything with a network effect qualifies.
Going from 2 things to 1 gives 100% more attention to the single remaining thing.
The effect will be much higher than that:
Because the brain cannot fully focus when multitasking, people take longer to complete tasks and are predisposed to error. When people attempt to complete many tasks at one time, “or [alternate] rapidly between them, errors go way up and it takes far longer—often double the time or more—to get the jobs done than if they were done sequentially,” states Meyer.[9] This is largely because “the brain is compelled to restart and refocus”.[10] A study by Meyer and David Kieras found that in the interim between each exchange, the brain makes no progress whatsoever. Therefore, multitasking people not only perform each task less suitably, but lose time in the process.
So, by focusing your attention on a single task instead of trying to do two at the same time, you'll be done with that task in less than a quarter of the time (and not half, as one would expect).
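Where the "quarter" comes from, under the quoted "double the time or more" figure and assuming two equal-length tasks (illustrative numbers):

```python
# Two tasks of length t each. Interleaving doubles the total time (per the
# quote), and neither task is finished until near the end of that total.
t = 1.0
focused_finish = t                # focus: the task is done at time t
interleaved_finish = 2 * (t + t)  # interleaved: done at ~4t
print(focused_finish / interleaved_finish)  # 0.25 -> a quarter of the time
```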
Multipurpose components, be they Lego, 80-20 pieces (the industrial version of Lego), electronics components, or disk space for a computer program - the number of things you can build from them grows rapidly as the number of them you have available increases, until you literally have more than you know what to do with.
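A rough way to see the growth rate (counting every nonempty combination of parts as a potential "thing to build" is a loose upper bound, my assumption here):

```python
from math import comb

# Number of distinct nonempty part-combinations from n parts on hand:
# sum over k of C(n, k), which equals 2**n - 1 and grows far faster
# than linearly in n.
for n in [4, 8, 16, 32]:
    print(n, sum(comb(n, k) for k in range(1, n + 1)))
# 4 15
# 8 255
# 16 65535
# 32 4294967295
```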
Intelligence, on both an individual and societal level. Fooming AI is based on that idea. However, increasing the amount of this resource is a hard problem.
Perhaps rationality?
The early stages of any new thing with a lot of potential will behave that way, not only through network effects, but through people figuring out better ways of doing whatever it is, until both aspects reach saturation.
For every thing with increasing marginal returns, is there a saturation point, and what does it look like?
Intelligence, on both an individual and societal level. Fooming AI is based on that idea. However, increasing the amount of this resource is a hard problem.
I wonder if this is really true. The world doesn't seem to be dominated by super high g people. If anything it seems like we see diminishing returns from extra intelligence past the 130-140 level. If there were increasing returns from each added IQ point, it seems like we would see vast resources and power controlled by super geniuses.
It seems like easier self-modification is what makes AIs potentially foomy.
The world doesn't seem to be dominated by super high g people.
Consider the implications of the Ivies having mean SATs ~>2100.
I don't think most of the LW population would regard that as "super high." Plus, most people in the Ivy League having IQs upwards of 130 doesn't equate to most people with IQs upwards of 130 making it into the Ivy League. I'd be interested to know what the correlation with financial success is for additional IQ above the mean among Ivy Leaguers.
I don't think most of the LW population would regard that as "super high."
Then they should better consider what percentile of the population that corresponds to, and what a mean SAT in that range implies about the tails.
Plus, most people in the Ivy League having IQs upwards of 130 doesn't equate to most people with IQs upwards of 130 making it into the Ivy League.
Irrelevant to the question as asked. Just as you pointed out about the Ivy League and IQs: the world being dominated by super high g people doesn't equate to most people with super high g dominating the world.
I haven't been keeping track of the results of each yearly survey, but I recall that Less Wrong, if it doesn't still, at least used to have mean SAT scores over 2100 as well. Maybe I'm mistaken and most of the membership here views Less Wrong as a "super high g" community, but I don't.
There are people at the tails who I would regard as having "super high g," but this brings us back to knb's comment above about the appearance of diminishing returns above the 130-140 IQ level. I'm not sold on this being the case, but still, for Ivy Leaguers, who have a high level of clout in our society, to have an average IQ around that level does not address the question of whether additional IQ above that level has diminishing impact.
Maybe I'm mistaken and most of the membership here views Less Wrong as a "super high g" community, but I don't.
How much time do you spend with normal people? What's your score on Murray's high-IQ bubble checklist?
but still, for Ivy Leaguers, who have a high level of clout in our society, to have an average IQ around that level does not address the question of whether additional IQ above that level has diminishing impact.
No, but the original claim was clearly wrong. Society is dominated by high-IQ people. Diminishing returns seems to be weirdly interpreted as 'no returns' in a lot of people's minds.
It may help if I quote a bit of what I've written on a similar issue before about diminishing returns to research:
The Long Stagnation thesis can be summarized as: "Western civilization is experiencing a general decline in marginal returns to investment". That is, every $1 or other resource (such as 'trained scientist') buys less in human well-being or technology than before, aggregated over the entire economy.
This does not imply any of the following:
- No exponential curves exist (rather, they are exponential curves which are part of sigmoids which have yet to level off; Moore's law and stagnation can co-exist)
- No sudden dramatic growth spikes can exist (they can, even amid an economy of diminishing marginal returns; but to overturn the overall curve, such a spike would have to be a massive society-wide revolution that can make up for huge shortfalls in output)
- Any metrics in absolute numbers have ceased to increase or have begun to fall (patents can continue growing each year if the amount invested in R&D or number of researchers increases)
- We cannot achieve meaningful increases in standards of living or capabilities (the Internet is a major accomplishment)
- Specific scientific or technological advances will not be achieved (e.g. AI or nanotech), or will not be achieved by certain dates
- The stagnation will be visible in a dramatic way (e.g. barbarians looting New York City)
Similarly, arguing over diminishing returns to IQ is building in a rather strange premise to the argument: that the entities in discussion will be within a few standard deviations of current people. It may be true that people with IQs of 150 are only somewhat more likely to be billionaires ruling the world than 140, but how much does that help when you're considering the actions of people with IQs much much higher? The returns can really add up.
To take an example I saw today: Hsu posted slides from an April talk, which on p. 10 point out that the estimates of the additive genetic influence on intelligence (the kind we can most easily identify and do stuff like embryo selection with) and estimates of the number of minor alleles imply a potential upper bound of +25 SD if you can select all beneficial variants, or in more familiar notation, IQs of 475 (100 + 15 * 25). Suppose I completely and totally grant all assumptions about diminishing marginal returns to IQ based on the small samples we have available of 130+; what happens when someone with an IQ of 475 gets turned loose? Who the heck knows; they'll probably rule the world, if they want.
One of the problems with discussing this is that IQ scores and all research based on it is purely an ordinal scale based on comparing existing humans, while what we really want is a measure of intelligence on a cardinal scale which lets us compare not just humans but potential future humans and AIs too.
For all we know, diminishing returns in IQ is purely an artifact of human biology: maybe each standard deviation represents less and less 'objective intelligence', and the true gains to objective intelligence don't diminish at all or in some cases increase (chimps vs humans)!
(Hsu likes to cite a maize experiment where "over 100 generations of selection have produced a difference in oil content between the high and low selected strains of 32 times the original standard deviation!"; so when we're dealing with something that's clearly on a cardinal scale - oil content - the promised increases can be quite literal. Intelligence is not a fluid, so we're not going to get 25x more 'brain fluid', but that doesn't help us calculate the consequences: an intelligent agent is competing against humans and other software, and small absolute edges may have large consequences. A hedge fund trader who can be right 1% more of the time than his competition may be able to make a huge freaking fortune. Or, a researcher 1% better at all aspects of research may, under the log-normal model of research productivity proposed by Shockley, be much more than 1% more productive than his peers.)
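A sketch of that Shockley-style point, assuming research output is the product of several independent stages (the stage count and the size of the edge below are illustrative, not from Shockley's paper):

```python
# If output is multiplicative across stages, a 1% edge at every stage
# compounds into a much larger output edge.
stages = 8     # illustrative number of multiplicative stages
edge = 1.01    # 1% better at each stage
print(f"overall edge: {edge ** stages - 1:.1%}")  # ~8.3%, not 1%
```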
We know 'human' is not an inherent limit on possible cognition or a good measurement of all activities/problems: e.g. chess programs didn't stagnate in strength after Deep Blue beat Kasparov, as if they had hit the ceiling on possible performance; they kept getting better. Human performance turned out not to run the gamut from worst to best-possible, but rather marked out a fairly narrow window that the chess programs were in for a few decades but passed out of, on their trajectory upwards on whatever 'objective chess intelligence' metric there may be.
(I think this may help explain why some events surprise a lot of observers: when we look at entities below the human performance window, we just see it as a uniform 'bad' level of performance, we can't see any meaningful differences and can't see any trends, so our predictions tend to be hilariously optimistic or pessimistic based on our prior views; then, when they finally enter the human performance window, we can finally apply our existing expertise and become surprised and optimistic; and then the entities can, with small objective increases in performance, move out of the human window entirely, and it becomes an activity humans are now uncompetitive at, like chess, but may still contribute a bit on the margin in things like advanced chess, and eventually becomes truly superhuman, as computer chess will likely soon be.)
One of the problems with discussing this is that IQ scores and all research based on it is purely an ordinal scale based on comparing existing humans, while what we really want is a measure of intelligence on a cardinal scale which lets us compare not just humans but potential future humans and AIs too.
For this reason, it seems to me that conjectures about people with no negative variants getting a 25 SD IQ gain are untestable. How would one distinguish such people from someone with a gain of only(!) 15 SD or 10 SD or even 7 SD, when the population available to norm IQ tests consists of only 7 billion people?
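A quick tail calculation makes the norming problem vivid (using a plain normal model of the population, which is itself questionable at the extremes):

```python
from statistics import NormalDist

# Expected number of people, out of ~7 billion, above k standard deviations
# under a normal model; past ~6 SD there is no one to norm a test against.
pop = 7e9
for k in [4, 5, 6, 7]:
    print(f"+{k} SD: ~{pop * (1 - NormalDist().cdf(k)):,.0f} people")
# +4 SD: ~221,672 ... +5 SD: ~2,007 ... +6 SD: ~7 ... +7 SD: ~0
```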
Create enough people at 15SD to test the 25SD subjects. :)
More seriously, this may be practically untestable but I think it's also the sort of thing which doesn't need to be tested - if we're ever in a position that the answer might matter, we have bigger fish to fry.
I never argued that intelligence beyond the range accessible by human deviation is impossible, or that differences beyond that range would not be highly determinative, but this is still not the same as increasing marginal returns on intelligence. If an individual had hundreds of trillions of dollars at their disposal, there would be numerous problems that they could resolve that people with fortunes in the mere tens of billions could not, but that doesn't mean that personal fortunes have increasing marginal returns. It seems to me that you are looking for reasons to object to my comments that are not provided in their content.
but this is still not the same as increasing marginal returns on intelligence.
Half my comment was pointing out why, if there were increasing returns, that was consistent with our observations and supported by non-human examples.
It seems to me that you are looking for reasons to object to my comments that are not provided in their content.
No. I am objecting to the same line of thought that I have been objecting to from the start:
The world doesn't seem to be dominated by super high g people.
To repeat myself: this is empirically false, the domination is as we would expect for both increasing & decreasing marginal returns, and more broadly does not help us in putting anything but a lower bound on future developments such as selected humans or AIs.
And yet, NYC is still there, and unlike Rome post-barbarians, has only grown in population.
EDIT: and to expand on my point with Rome, disturbances are very common in great metropolises and imperial capitals; pointing to a blackout from over a third of a century ago as indicating the decline of America is like pointing to the Marian or Gracchian riots in Rome as indicating the fall of the Roman empire. (What, you don't remember either? Exactly.)
As it happens, I am familiar with the Gracchian riots; they certainly weren't indicative of the fall of the Roman Empire, as the Roman Empire didn't exist then; however, the riots were most definitely indicative of the collapse of the Roman Republic.
however, the riots were most definitely indicative of the collapse of the Roman Republic.
The 'collapse' of the Roman Republic didn't involve barbarians. Which was the point of the observation. Should America one day 'collapse', may God send us a collapse as dire and apocalyptic and with terrible outcomes as the collapse of the Roman Republic...
I'd be interested to know what the correlation with financial success is for additional IQ above the mean among Ivy Leaguers.
I'm pretty sure I've seen a paper discussing this and probably you can find data if you google around for "iq income correlation" and similar.
The world doesn't seem to be dominated by super high g people.
There aren't all that many of them. But consider, say, Jobs, Gates, Peter Thiel, and the like.
it seems like we would see vast resources and power controlled by super geniuses.
Jobs, Gates, and Thiel again, depending on how vast and how much power. But why would a genius necessarily go for vast resources and power? Would that have helped Einstein think about physics?
Btw, this is a reason I find Batman completely implausible. I'm willing to suspend that and be entertained, but he seems to spring into existence as an adult, fully formed with several lifetimes worth of knowledge, experience, wealth, and power. The only backstory I can make up to explain that is that in a former life as a genius he cracked the problems of how to retain all one's memories through rebirth, and how to ensure an auspicious rebirth. He really does have several lifetimes' worth of knowledge and experience, and then got himself reborn in a position to inherit vast wealth and power as soon as he reached legal adulthood.
But cheating on spouses in general undermines the trust that spouses should have in each other, and the cumulative impact of even 1% of spouses cheating on the institution of marriage as a whole could be quite negative.
In the comments on Scott's blog, I've recently seen the claim that this is the opposite of how traditional marriage actually worked; there used to be a lot more adultery in old times, and it acted as a pressure valve for people who would've divorced nowadays, but naturally it was all swept under the rug.
I am not sure that trustworthiness has increasing marginal utility. Think about eBay or Amazon: what is the difference between 99% positive and 100% positive? Or 97% positive and 100% positive? It would seem to me that with trustworthiness there is a tipping point, at which there is a huge spike in marginal utility, and all other increases don't really add much utility.
100% positive on Amazon isn't the same as what 100% trust means. 100% on Amazon really is just a bit higher than 99%. 100% trust can't be expressed by Amazon ratings, as the underlying rating can still be hacked or 'optimized'.
Both your examples are actually just about diminishing marginal penalties as you add more attention demands, moving away from 1, or as you add more defections, moving away from 0. The real question is whether there's a resource with no natural maximum that increases in marginal utility; and this shall perhaps be difficult to find.
That's a good way of putting it. I had a vague thought pointing in this direction but wasn't able to verbalize it.
Time spent doing any kind of work with a high skill cap.
Edit: Well, okay, not any kind of work meeting that criterion, to preempt the obvious LessWrongian response. Any kind you can get paid for is closer to true.
A related concept is that of the threshold good. (Perhaps someone with more economics schooling can help out with the formally correct term.) It's something that is useless until a certain threshold amount is obtained.
An example is the length of a bridge. A bridge that goes 90% of the way across a ravine is not twice as good as one that goes 45% across. Both are equally useless (for most purposes). Another example would be the stones in an arch: the final stone, or keystone, is a sine qua non.
The existence of threshold goods is what motivates the concept of assurance contracts, according to which people pledge money iff enough other people pledge enough money to get a project done.
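A minimal sketch of a threshold good's utility curve, with the bridge as the running example (the step-at-100% shape is the idealization, not a claim about real bridges):

```python
# Utility of a bridge as a function of how much of the span is built:
# zero until complete, then the full value all at once.
def bridge_utility(fraction_built):
    return 1.0 if fraction_built >= 1.0 else 0.0

for f in [0.45, 0.90, 1.00]:
    print(f"{f:.0%} built -> utility {bridge_utility(f)}")
# 45% and 90% are equally useless; the final increment carries all the
# value, which is why assurance contracts release pledges only once the
# total pledged is enough to finish the project.
```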
Knowledge, esp. math knowledge. It is difficult to measure the amount as well as the benefit, but it feels like one additional year of math education (which builds upon previous math knowledge) allows one to model (and thus understand in depth) significantly more phenomena and structures than the previous years did.
The question may be how valuable this ability is. I get the impression that it significantly simplifies understanding concrete practical domains (which can be modelled by the math in question).
This is related to the more general education comment.
it feels like one additional year of math education (which builds upon previous math knowledge) allows one to model (and thus understand in depth) significantly more phenomena and structures than the previous years did.
Since math professors don't look like bodhisattvas, I rather suspect there is a turning point where the marginal utility starts to decrease.
Generally speaking, when you start learning an unfamiliar skill the first steps have close to zero marginal utility and only when you can actually achieve something does your utility increase. Once you achieve competence, however, I doubt that your marginal utility will continue to increase.
A society where people abide by contracts 80% of the time is not 80% as good as a society where people abide by contracts 100% of the time; most of the societal value of trust (e.g. decreasing transaction costs) doesn't seem to manifest until people are pretty close to 100% trustworthy.
I don't agree since a society without contracts would be very, very bad. Still you ask an overall excellent question.
Yes, a society without contracts is very very bad. But the difference in badness between 100% and 99% compliance is much greater than between 80% and 79% compliance.
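One way to make that concrete is a convex value curve for compliance; the exponent below is purely illustrative:

```python
# Toy convex "societal value" of a contract-compliance rate.
def societal_value(compliance, k=20):
    return compliance ** k   # value collapses rapidly below ~100%

print(f"{societal_value(1.00) - societal_value(0.99):.3f}")  # ~0.182
print(f"{societal_value(0.80) - societal_value(0.79):.3f}")  # ~0.003
```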
What other resources have increasing marginal utility?
Matter, if negentropy translates to utility by more than its square root, for example if negentropy translates linearly to increased lifespan and/or population, and we value lifespan/population linearly as well.
How undervalued are they?
I'm guessing that most people do not realize the above, and therefore underestimate just how high the maximum utility of the universe can be.
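Presumably the square root comes from the available negentropy scaling as the square of the mass gathered (a Bekenstein/black-hole-style scaling; my assumption, not stated above). A quick numerical check of when matter's marginal utility is increasing under that assumption:

```python
# If negentropy S ~ M**2 and utility U ~ S**q, then U ~ M**(2*q),
# which is convex in M (increasing marginal utility) exactly when q > 1/2.
def utility(M, q):
    return (M ** 2) ** q

for q in [0.4, 0.5, 0.6]:
    marginals = [utility(M + 1, q) - utility(M, q) for M in (10, 100, 1000)]
    print(f"q={q}: marginal utility increasing? "
          f"{marginals[0] < marginals[1] < marginals[2]}")
# q=0.4: False   q=0.5: False (marginal utility is constant)   q=0.6: True
```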
This seems to be more about value thresholds vs increasing marginal utility. Once you have 10 hours a day of free time, the 11th is not gonna be that much more valuable, if we measure how much stuff needs your attention every day by time it takes instead of by number of things you have to pay attention to.
It's important to trust your spouse a lot, but on a numerical level, going from 98 to 99 percent isn't going to change what you do much.
Having 100 Lego pieces is probably more than 10 times as good as having 10, and maybe 1000 is proportionally better still, but I don't think 2000 is more than twice as good as 1000.
Getting immersed in a fictional world makes further fiction in that world more interesting, up until you've read enough that you start seeing repetition.
Does this just mean that marginal utility is non-linear at the minima and maxima?
While the change from zero control over a supply chain for any significantly complicated product (e.g. a computer) up to fractional control may impart a high initial utility (I make all the mice, so everyone needs to come to me for their mice), the utility of gathering further control (I also make all the keyboards, so everyone also needs to come to me for the keyboards) is a much smaller increase. The same goes for screens, motherboards, RAM, and the N pieces required to create a computer, up until the last several, where control of the final pieces will give you the status of computer-master-overlord, like none before you...
Come to think of it: resources when they are below a threshold for high-volume production automation. For example, wool. One sheep may produce between 5 and 10 kg of wool. In the hands of any single person the wool has a certain low-level utility, but as one person amasses enough of the resource for a production line to make use of it, the utility increases, and we can get yarn and socks at an efficiency that no small amount of the resource could match.
Where 1 kg of coal will provide little utility to anyone but Santa, having enough coal to run a power station is quite high utility in comparison to making many sad children...
Does this just mean that marginal utility is non-linear at the minima and maxima?
Mathematically, everything is non-linear at its minima and maxima. Linear functions do not have minima or maxima.
Linear functions on closed bounded domains can (and on finite dimensional closed bounded domains must, IIRC) have minima and maxima. This seems to be Elo's implicit assumption in the first paragraph, that we were just talking about resources which are available in quantities between 0% and 100%.
Most resources you might think to amass have decreasing marginal utility: for example, a marginal extra $1,000 means much more to you if you have $0 than if you have $100,000. That means you can safely apply the 80-20 rule to most resources: you only need to get some of the resource to get most of the benefits of having it.
At the most recent CFAR workshop, Val dedicated a class to arguing that one resource in particular has increasing marginal utility, namely attention. Initially, efforts to free up your attention have little effect: the difference between juggling 10 things and 9 things is pretty small. But once you've freed up most of your attention, the effect is larger: the difference between juggling 2 things and 1 thing is huge. Val also argued that because of this funny property of attention, most people likely undervalue the value of freeing up attention by orders of magnitude.
During a conversation later in the workshop I suggested another resource that might have increasing marginal utility, namely trust. A society where people abide by contracts 80% of the time is not 80% as good as a society where people abide by contracts 100% of the time; most of the societal value of trust (e.g. decreasing transaction costs) doesn't seem to manifest until people are pretty close to 100% trustworthy. The analogous way to undervalue trust is to argue that e.g. cheating on your spouse is not so bad, because only one person gets hurt. But cheating on spouses in general undermines the trust that spouses should have in each other, and the cumulative impact of even 1% of spouses cheating on the institution of marriage as a whole could be quite negative. (Lots of things about the world make more sense from this perspective: for example, it seems like one of the main practical benefits of religion is that it fosters trust.)
What other resources have increasing marginal utility? How undervalued are they?