William_Newman
William_Newman has not written any posts yet.

Eliezer Yudkowsky wrote of ideas one can't see the value of, and teachers who don't seem to understand their teachings, "Sounds like either a cult or a college."
I dunno; at least for many technical fields, and for some other endeavors too (like learning to communicate effectively in writing), one can see that many of the teachers can do some handy, hard-to-fake real-world stuff, and that the students emerging from the pipeline tend to be able to do it too. When I was an undergraduate, the EEs in my residence hall traditionally maintained a little hand-made, custom-programmed telephone PBX which ran from the two official college phone jacks in the lobby to a...
If you ever get as seriously curious about electronics as you were about physics, look at Horowitz and Hill, The Art of Electronics. Very very useful for someone who already knows the math and wants to understand electronics principles and the practicalities of one-off discrete circuit design.
Yeah, what Adam Ierymenko said :-) about hitting a complexity limit being not at all synonymous with stopping progress. Except that I was going to say "computer programmers" instead of "engineers", and I was going to use the example that when duplicate functionality in the mitochondrial genome and the main-cell genome gets replaced by shared functionality, the organism tends to win back some ground from the Williams limit you described. And, incidentally, the mitochondrial example is very closely analogous to something that practicing computer programmers pay a lot of attention to: Google for "once and only once" or "OAOO" to see endless discussion.
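(To make "once and only once" concrete without wading through the Google results, here is a minimal sketch in Python; all the function names are invented for illustration. The "before" half duplicates one rule in two places, like the duplicated mitochondrial/nuclear functionality; the "after" half states it once and lets both callers share it.)

    # Before: the same rule lives in two places, so the copies can drift apart
    # (analogous to duplicate functionality in two genomes).
    def bill_customer(amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        # ... charge the card ...

    def refund_customer(amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        # ... refund the card ...

    # After: the shared rule is expressed once and only once.
    def validate_amount(amount):
        if amount <= 0:
            raise ValueError("amount must be positive")

    def bill_customer(amount):
        validate_amount(amount)
        # ... charge the card ...

    def refund_customer(amount):
        validate_amount(amount)
        # ... refund the card ...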
I don't see the problem. There seems to be no logical reason that local laws can't change because of arbitrarily complicated nonlocal rules. You can even see nontrivial examples of this in practice in some modern technology. Various of Microsoft's operating systems have reportedly contained substantial amounts of code to recognize usage patterns characteristic of particular old applications, and to change the rules so the old application continues to work even though it depends on old behavior which has otherwise disappeared from the new operating system. Vaguely similar principles of global patterns changing local decision rules also appear, in less nauseating ways, in all sorts of software for solving hard optimization problems (optimizing compilers, ...
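(For concreteness, a toy sketch in Python of the general pattern; all names are invented, and nothing here is actual Microsoft code. A nonlocal fact, which application we appear to be serving, selects which local rule a call obeys.)

    # Hypothetical compatibility shim: a global usage pattern changes a local rule.
    LEGACY_APPS = {"oldapp.exe"}  # invented detection list

    def allocate(size, caller):
        if caller in LEGACY_APPS:
            # Preserve the old behavior this application depends on:
            # the old allocator never returned fewer than 64 bytes.
            return bytearray(max(size, 64))
        # Current behavior for everyone else.
        return bytearray(size)

    print(len(allocate(10, "oldapp.exe")))  # 64: legacy rule still in force
    print(len(allocate(10, "newtool")))     # 10: current rule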
There's no particular reason that constant improvement needs to surpass a fixed point. In theory, see Achilles and the tortoise. In practice, maybe you can't slice things infinitely fine (or at least you can't detect progress when you do), but you could still go on for a very long time incrementally improving military practice in the Americas while, without breakthroughs to bronze and/or cavalry, remaining solidly stuck behind Eurasia. More science-fictionally, people living beneath the clouds of Venus could go for a long time incrementally improving their knowledge of the universe before catching up with Babylonian astronomy, and if a prophet from Earth brought them a holy book of astronomy, it...
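(To pin the Achilles point down with numbers, a deliberately simplified model: suppose each successive improvement is a fixed fraction $r$ of the one before. Then total progress is bounded by a geometric series,

$$c + cr + cr^2 + \cdots = \frac{c}{1-r}, \qquad 0 < r < 1,$$

so an unending stream of genuine improvements never passes any fixed point beyond $c/(1-r)$. That is the sense in which the Americas could keep improving indefinitely and still stay behind Eurasia.)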
Note that when someone reads your "if people have a right to be stupid, the market will respond by supplying all the stupidity that can be sold", it does sound rather as though you're making a point about market decisions in particular, not just one of a spectrum of points like "if people have a right to vote for stupid policies, then ambitious politicians will supply all the stupid policies that people can be convinced to vote for." Also, it's not too uncommon for people to play rhetorical (and perhaps internal doublethink) games in which people's rationality is judged differently in market decision-making than in politics.
Similarly, you could state specifically "when we let...
One reason I dislike many precautionary arguments is that they seem to undervalue what we learn by doing things. Very often in science, when we chase down a new phenomenon, we detect it through relatively small effects before those effects get big enough to be dangerous. For potentially dangerous phenomena, what we learn by exploring around the edges of the pit can easily be worth more than the risk we faced of inadvertently landing in the pit in some early step, before we knew it was there. Among other things, what we learn from poking around the edges of the pit may protect us from stuff there that we didn't know...