by jow

Has anyone thought about Kremer/Jones-like economic growth models (where larger populations generate more ideas, leading to superexponential growth), but where some ideas are bad? I think there's an interesting, loose analogy between these growth models and a model of the "tug of war" between driver and passenger mutations in cancer. In the absence of deleterious mutations, the tumor in that model grows superexponentially. The fact that fixation of a driver makes the whole population grow faster is a bit like the non-rival nature of ideas. But the growth models seem to have no analog of the deleterious passengers: bad ideas that might still fix, stochastically, and reduce the technology prefactor "A".

Such a model might then exhibit a "critical population size" (as for lesion size) below which there is techno-cultural decline (ancient Tasmania?). And is there a social analog of "mutational meltdown"? In population genetics, if mutations arrive too quickly, beneficial and deleterious mutations get trapped in the same lineages (clonal interference) and cannot be independently selected. Perhaps cultural/technological change that comes too rapidly leads to memeplexes containing both good and bad ideas, which are linked and so cannot be independently selected for or against…
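A minimal sketch of the kind of model I mean (everything here is invented for illustration: the parameter values, the `cost` term, and the use of Kimura's fixation probability from population genetics, which is my own splice rather than anything from the growth-model literature). Ideas arrive at a rate proportional to population; each fixes with a probability that depends on its selection coefficient and the current population size; fixed ideas multiply the prefactor A up or down; and the population grows at per-capita rate A − cost:

```python
import numpy as np

rng = np.random.default_rng(0)

def p_fix(s, N):
    """Kimura's fixation probability for a new mutation with selection
    coefficient s in a population of (effective) size N."""
    if abs(s) < 1e-12:
        return 1.0 / N  # neutral limit
    x = -2.0 * N * s
    if x > 700.0:  # deleterious idea in a huge population: never fixes
        return 0.0
    return np.expm1(-2.0 * s) / np.expm1(x)

def simulate(N0, A0=1.0, T=500.0, dt=0.1,
             mu=0.02,      # ideas per person per unit time (made up)
             p_good=0.1,   # fraction of ideas that are beneficial
             s_good=0.02,  # selective advantage of a good idea
             s_bad=-0.02,  # selective cost of a bad idea
             cost=1.0):    # per-capita maintenance cost (made up)
    """Ideas arise at rate mu*N; each fixes with Kimura's probability
    and multiplies the technology prefactor A; the population then
    grows at per-capita rate A - cost."""
    N, A, t = float(N0), A0, 0.0
    while t < T and 2.0 < N < 1e6:
        for _ in range(rng.poisson(mu * N * dt)):
            s = s_good if rng.random() < p_good else s_bad
            if rng.random() < p_fix(s, N):  # drift can fix bad ideas when N is small
                A *= 1.0 + s
        N *= np.exp((A - cost) * dt)  # growth rate tracks technology level
        t += dt
    return N, A

for N0 in (20, 50, 200, 1000):
    N, A = simulate(N0)
    print(f"N0 = {N0:4d}: final N ~ {N:.0f}, A ~ {A:.3f}")
```

With these made-up numbers, the expected drift of log A flips sign somewhere around N ~ 10², so small populations ratchet downward (the Tasmania case) while large ones purge bad ideas and take off superexponentially. Capturing the clonal-interference/meltdown analogy would additionally require linkage between ideas, which this sketch omits.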

It seems like the growth models already take much of that into account, the same way they do crime or war: if new technologies create new crime (which of course they often do), then that simply offsets some of the benefits of those technologies, and it is the net benefit which shows up in long-term growth rather than some 'pure' benefit free of any drawbacks. And likewise for technologies as a whole: if you're inventing some unknown grab-bag of technologies each time period, then it's the net of all the good ideas, offset slightly by the bad ones, that is getting measured and driving the growth in the next time period, and so on. It would be like measuring the growth of an actual tumor: whatever growth you observe, well, that must be the net growth after the defectors inside the tumor have done their worst, by definition.

So you'd have to invoke some sort of non-constancy or non-proportionality: "yes, the bad ideas are only an offset, up until some threshold value like 'inventing nuclear bombs'" (like Bostrom's 'black balls'). But then your results seem dangerously circular: if you assume some fat-tailed payoff from the bad ideas past a certain threshold, or one increasing with time, you are building in conclusions like "we should halt all technological progress forever".

Journal to myself as I read Volume III of The Feynman Lectures on Physics (as a commitment mechanism).

Chapter 1

Feynman begins by noting that physics at very small scales is nothing like everyday experience, which means we will have to rely on an abstract approach. He then presents the double-slit experiment, first imagining bullets fired at a wall with two holes, then water waves, and finally the quantum behavior of electrons. I found myself checking that I could still derive the law of cosines. He emphasizes that all things, in fact, behave in the quantum way electrons do, although for large objects it is very hard to tell. I enjoyed the "practicality" of his descriptions, for example describing the electron gun as a heated tungsten wire in a box with a small hole in it. He concludes by introducing the uncertainty principle.
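(Writing out the water-wave case for myself, since that's where the law of cosines comes in; notation mine, not a quote from the chapter: h_1 and h_2 are the complex amplitudes arriving from the two holes, and δ is their phase difference.)

```latex
I_{12} = |h_1 + h_2|^2
       = |h_1|^2 + |h_2|^2 + 2|h_1||h_2|\cos\delta
       = I_1 + I_2 + 2\sqrt{I_1 I_2}\,\cos\delta
```

The cross term is the interference; dropping it gives the bullet (probability) case, and the middle expression is just the law of cosines for adding the two amplitudes as vectors in the complex plane.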

Chapter 2

This chapter is largely devoted to worked examples of the uncertainty principle. For instance, if particles pass through a slit of width L, we know their transverse position with an uncertainty of order L; the slit then gives rise to diffraction, which reflects the induced uncertainty in the particle's transverse momentum. If we narrow the slit, the diffraction pattern gets wider. The uncertainty principle is also used for a heuristic estimate of the size of the hydrogen atom. We write an energy for the electron, E = p^2/2m - q^2/r, where m and q are the electron's mass and charge. Taking the momentum to be of the order given by the uncertainty relation, p ≈ ℏ/r, we can substitute it into E and find the distance r that minimizes the energy. This yields a figure on the order of an angstrom, which is the correct scale for atoms. The chapter concludes with a brief philosophical discussion of what is real, and of indeterminacy in quantum and classical mechanics.
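(Spelling out the minimization for myself; Gaussian units, so q^2 here stands for the SI e^2/4πε_0.)

```latex
E(r) = \frac{\hbar^2}{2 m r^2} - \frac{q^2}{r}, \qquad
\frac{dE}{dr} = -\frac{\hbar^2}{m r^3} + \frac{q^2}{r^2} = 0
\;\Longrightarrow\;
r_0 = \frac{\hbar^2}{m q^2} \approx 0.53\,\text{\AA},
\qquad
E(r_0) = -\frac{m q^4}{2\hbar^2} \approx -13.6\,\text{eV}.
```

The binding energy comes out right too, though given the order-of-magnitude input for p, the exact numerical agreement shouldn't be taken too seriously.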