"Spreading quicker" may not be the best question to ask. The question I'm more interested in is: what is the relationship between the speed of communication and the curve that describes innovation over time?
A good model for this is the degree of genetic isolation in a genetic algorithm. Compare two settings for a GA. One allows mating between any two organisms in the population. Another has many subpopulations, and allows genetic exchange between subpopulations less frequently.
Plot the fitness of the most-fit organism in each population by generation. The first GA, which has fast genetic communication, will initially outstrip the second, but it will plateau at a lower fitness level: all the organisms in the population become identical and evolution stops. This is called premature convergence.
The second GA, with restricted genetic communication, will catch up and pass the fitness of the first GA, usually continuing on to a much higher optimum, because it maintains homogeneous subpopulations (which allows adaptation) but a diverse global population (which prevents premature convergence).
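The two settings can be sketched in a few dozen lines. This is a minimal illustration, not a serious experiment: the one-max fitness function, tournament selection, and the ring-topology migration scheme are all stand-in choices of mine, and on a problem this easy the island model's advantage won't necessarily show.

```python
import random

def fitness(genome):
    """Toy fitness: count of 1-bits (one-max), a stand-in for a real problem."""
    return sum(genome)

def step(pop, mutation_rate=0.01):
    """One generation: tournament selection, uniform crossover, bit-flip mutation."""
    new = []
    for _ in range(len(pop)):
        a = max(random.sample(pop, 3), key=fitness)
        b = max(random.sample(pop, 3), key=fitness)
        child = [x if random.random() < 0.5 else y for x, y in zip(a, b)]
        child = [1 - g if random.random() < mutation_rate else g for g in child]
        new.append(child)
    return new

def panmictic_ga(pop_size=60, genes=32, gens=50):
    """Setting 1: any organism may mate with any other."""
    pop = [[random.randint(0, 1) for _ in range(genes)] for _ in range(pop_size)]
    history = []
    for _ in range(gens):
        pop = step(pop)
        history.append(max(fitness(g) for g in pop))
    return history  # best fitness per generation

def island_ga(n_islands=4, island_size=15, genes=32, gens=50, migrate_every=10):
    """Setting 2: isolated subpopulations with occasional migration."""
    islands = [[[random.randint(0, 1) for _ in range(genes)]
                for _ in range(island_size)] for _ in range(n_islands)]
    history = []
    for gen in range(1, gens + 1):
        islands = [step(pop) for pop in islands]
        if gen % migrate_every == 0:
            # Each island sends its best organism to its neighbour (ring topology),
            # replacing a random resident.
            migrants = [max(pop, key=fitness) for pop in islands]
            for i, pop in enumerate(islands):
                pop[random.randrange(len(pop))] = migrants[i - 1]
        history.append(max(fitness(g) for pop in islands for g in pop))
    return history
```

Plotting the two histories against each other gives exactly the curves described above: `migrate_every` is the knob that controls how much genetic communication the second setting allows.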
Think about the development of pop music. As communication technology improved, pop stars like Elvis could be heard, seen, and marketed across the entire country more efficiently than local musicians could be promoted, and recorded music replaced live performers. On one hand, you could live in Peoria and listen to the most popular musicians in the country. On the other, by 1990, American pop music had nearly stopped evolving. Rebecca Black could become popular across the nation in a single week, but the amount of innovation or quality she produced was negligible.
Basically, rapid communication gives people too much choice. They choose things comfortably similar to what they know. Isolation is needed to allow new things to gain an audience before they're stomped out by the dominant things.
You need to state your preferences as a function of the long-term trajectory of the entropy of ideas, rather than as any instantaneous quantity.
I have a maths question. Suppose that we are scoring n individuals on their performance in an area where there is significant uncertainty. We are categorizing them into a small number of categories, say 4. Effectively we're thereby saying that for the purposes of our scoring, everyone with the same score performs equally well. Suppose that we say that this means that all individuals with that score get assigned the mean actual performance of the individuals with that score. For instance, if there were three people who got the highest score, and their performance equals 8, 12 and 13 units, the assigned performance is 11 units.
Now suppose that we want our scoring system to minimise information loss, so that the assigned performance is on average as close as possible to the actual performance. The question is: how do we achieve this? Specifically, how large a proportion of all individuals should fall into each category, and how does that depend on the performance distribution?
It would seem that if performance is linearly increasing as we go from low to high performers, then all categories should have the same number of individuals, whereas if the increase is exponential, then the higher categories should have a smaller number of individuals. Is there a theorem that proves this, and which exactly specifies how large the categories should be for a given shape of the curve? Thanks.
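For what it's worth, the setup can be explored numerically. The sketch below alternates between assigning each individual to the nearest category mean and recomputing the means (the Lloyd-style iteration used for one-dimensional quantization); the synthetic data and the choice of squared error as the "information loss" are my own illustrative assumptions, not part of the question.

```python
import random

def quantize(values, k=4, iters=50):
    """Choose k category means that (locally) minimise the mean squared
    error between actual and assigned performance, by alternating:
      1. assign each value to the nearest category mean;
      2. recompute each mean as its category's average."""
    means = sorted(random.sample(values, k))
    for _ in range(iters):
        buckets = [[] for _ in range(k)]
        for v in values:
            j = min(range(k), key=lambda j: (v - means[j]) ** 2)
            buckets[j].append(v)
        # An empty category keeps its old mean.
        means = [sum(b) / len(b) if b else m for b, m in zip(buckets, means)]
    return sorted(means)

def mse(values, means):
    """Mean squared error when each value is assigned its nearest mean."""
    return sum(min((v - m) ** 2 for m in means) for v in values) / len(values)

# Compare a uniform ("linear") performance distribution with an
# exponential one and count how many individuals land in each category.
uniform_perf = [random.uniform(0, 10) for _ in range(1000)]
expo_perf = [random.expovariate(0.5) for _ in range(1000)]
for label, data in [("uniform", uniform_perf), ("exponential", expo_perf)]:
    means = quantize(data, k=4)
    sizes = [sum(1 for v in data
                 if min(range(4), key=lambda j: (v - means[j]) ** 2) == i)
             for i in range(4)]
    print(label, "category sizes:", sizes, "MSE:", round(mse(data, means), 3))
```

On the uniform data the four categories come out roughly equal in size, while on the exponential data the high-performance categories are smaller, which matches the intuition in the question.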