Is Less Wrong discouraging less nerdy people from participating?
Less Wrong is, as a community, extremely nerdy. That's true under almost any definition of "nerd" that captures anyone's intuition for the word. However, many aspects of nerdiness are not connected to rationality at all, even if nerdiness is associated with rationality in some limited respects. For example, fantasy literature is probably not connected in any deep way to either intelligent or rational thinking, except for historical reasons.
Yet LW is full of references to science fiction, fantasy literature, anime and D&D. In one recent example, a post started with an only marginally connected tidbit from Heinlein. Moreover, substantial subthreads have arisen bashing aspects of other subcultures. For example, see this subthread where multiple users discuss how spectator sports are "banal" and "pointless". I suspect that this attitude may be turning away not only non-nerds but even the somewhat nerdy who enjoy watching sports and see it as harmless tribalist fun, not very different from friends arguing over whether Star Wars or Star Trek is superior, which has about the same degree of actual value.
There's a related issue which is a serious point about rationality and human cognition: our hobbies are to a large extent functions of our specific upbringings and surrounding culture. That some people prefer one form of fantastic escapism involving imaginary spaceships isn't, at some level, very different from the escapism of watching other people throw and catch objects. Looking down on people because of these sorts of preferences is unhelpful tribalism. It might feel good, and it might be fun, but it isn't helpful.
Starcraft AI Competition
Ars Technica has an article about a Starcraft AI competition. While this is clearly narrow AI, there are some details which may interest people at LW. The article is about the best-performing AI, the "Berkeley Overmind." (The AI in question only played as Zerg, one of the three possible sides in Starcraft. In fact, it seems that the entries were in general each specialized for a single one of the three sides. While human players are often much better at one specific side, they are rarely this specialized.)
Highlights from the article:
StarCraft was released in 1998, an eternity ago by video game standards. Over those years Blizzard Entertainment, the game’s creator, has continually updated it so that it’s one of the most finely tuned and balanced Real Time Strategy (RTS) games ever made. It has three playable races: the human-like Terrans, with familiar tanks and starships, the alien Zerg, with large swarms of organic creatures, and the Protoss, technologically advanced aliens reliant on powerful but expensive units. Each race has different units and gameplay philosophies, yet no one race or combination of units has an unbeatable advantage. Player skill, ingenuity, and the ability to react intelligently to enemy actions determine victory.
This refinement and complexity makes StarCraft an ideal environment for conducting AI research. In an RTS game, events unfold in real-time and players’ orders are carried out immediately. Resources have to be gathered so fighting units can be produced and commanded into battle. The map is shrouded in fog-of-war, so enemy units and buildings are only visible when they’re near friendly buildings or units. A StarCraft player has to acquire and allocate resources to create units, coordinate those units in combat, discover, reason about and react to enemy actions, and do all this in real-time. These are all hard problems for a computer to solve.
Note that using the interface that humans need to use was not one of the restrictions. This was an advantage that the Berkeley group used to full effect, as did other AIs in the competition.
We had to limit ourselves. David Burkett, another of Dan’s PhD students and the other team lead, says, “It turns out building control nodes for units is hard, so there’s a huge cost associated with building more than one [type of] unit. So we started asking: what one unit type [would be] the most effective overall?”
We focused our efforts on Zerg mutalisks: fast, dragon-like flying creatures that can attack both air and ground targets. Their mobility is unmatched, and we suspected they would be particularly amenable to computer control. Mutalisks are cheap for their strength, but large numbers are rarely seen in human play because it’s hard for a human to manage mutalisks without clumping them and making them easy prey for enemies with area attacks (attacks that do damage to all units in an area instead of a single target). A computer would have no such limitations.
The programmers then used a series of potential fields to control what the mutalisks did, with different entities and events creating different potential fields. A major issue became how to weight these fields:
Using StarCraft’s map editor, we built Valhalla for the Overmind, where it could repeatedly and automatically run through different combat scenarios. By running repeated trials in Valhalla and varying the potential field strengths, the agent learned the best combination of parameters for each kind of engagement.
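The article doesn't spell out the field definitions, but the basic idea of combining weighted potential fields to steer a unit can be sketched as follows. All the field shapes, weights, and positions here are illustrative assumptions on my part, not the Overmind's actual parameters:

```python
import math

def attraction(unit_pos, target_pos, weight):
    """Attractive field: a unit-length pull toward a target (e.g. an enemy)."""
    dx, dy = target_pos[0] - unit_pos[0], target_pos[1] - unit_pos[1]
    dist = math.hypot(dx, dy) or 1e-9  # avoid division by zero
    return (weight * dx / dist, weight * dy / dist)

def repulsion(unit_pos, threat_pos, weight, radius):
    """Repulsive field: pushes the unit away from a threat (e.g. an
    area-attack unit), falling off linearly to zero at the given radius."""
    dx, dy = unit_pos[0] - threat_pos[0], unit_pos[1] - threat_pos[1]
    dist = math.hypot(dx, dy) or 1e-9
    if dist > radius:
        return (0.0, 0.0)
    strength = weight * (radius - dist) / radius
    return (strength * dx / dist, strength * dy / dist)

def move_vector(unit_pos, targets, threats, w_attack, w_avoid, avoid_radius):
    """Sum every field acting on a unit; the unit then moves along the
    resulting vector. The weights are the parameters that get tuned."""
    vx = vy = 0.0
    for t in targets:
        ax, ay = attraction(unit_pos, t, w_attack)
        vx, vy = vx + ax, vy + ay
    for t in threats:
        rx, ry = repulsion(unit_pos, t, w_avoid, avoid_radius)
        vx, vy = vx + rx, vy + ry
    return (vx, vy)

# A mutalisk at the origin, one target ahead, one area-attack threat behind:
v = move_vector((0, 0), targets=[(10, 0)], threats=[(-2, 0)],
                w_attack=1.0, w_avoid=3.0, avoid_radius=5.0)
```

Because each field contributes independently, adding a new entity type just means adding another field, which is presumably part of why this design scales to controlling a whole swarm at once.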
The article unfortunately doesn't go into great detail about the exact learning mechanism. Note, however, that this implies the Overmind should be able to learn how to respond to other unit types.
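Since the article leaves the mechanism unspecified, the following is only a hypothetical stand-in consistent with the description: score candidate weight settings over repeated scripted trials and keep the best. The scoring function here is a dummy with a known optimum, standing in for an actual simulated engagement:

```python
import itertools
import random

def simulate_engagement(weights, scenario_seed):
    """Stand-in for one Valhalla trial. The real system would run a scripted
    StarCraft combat scenario and return a score (e.g. surviving units);
    this dummy peaks at weights (1.0, 3.0, 5.0) plus seeded noise."""
    random.seed(scenario_seed)
    target = (1.0, 3.0, 5.0)
    noise = random.uniform(-0.1, 0.1)
    return -sum((w - t) ** 2 for w, t in zip(weights, target)) + noise

def tune_weights(candidates, n_trials=20):
    """Grid search: average each candidate's score over repeated trials and
    return the best, as one might tune potential-field strengths."""
    def avg_score(ws):
        return sum(simulate_engagement(ws, s) for s in range(n_trials)) / n_trials
    return max(candidates, key=avg_score)

# Candidate strengths for three hypothetical fields:
grid = list(itertools.product([0.5, 1.0, 2.0], [1.0, 3.0], [4.0, 5.0, 6.0]))
best = tune_weights(grid)  # → (1.0, 3.0, 5.0)
```

Even this crude scheme would let the agent discover, per engagement type, how strongly to weight avoidance against aggression without anyone hand-tuning the numbers.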
There are other details in the article that are also interesting. For example, they replaced the standard pathfinding algorithm that units use automatically with their own algorithms.
The final form of the AI can play well against very skilled human players, but it isn't at the top of the game. Note also that the Overmind is designed for one-on-one games. It will be interesting to see how this AI and similar AIs improve over the next few years. I'd be very curious to see how an AIXI-style agent would do in this sort of situation.
Non-trivial probability distributions for priors and Occam's razor
Assume we have a countable set of hypotheses described in some formal way, with a prior distribution such that 1) our prior for each hypothesis is non-zero and 2) our formal description system has only finitely many hypotheses of any fixed length. Then, I claim, under just this set of weak constraints our priors behave in a way that informally acts a lot like Occam's razor. In particular, let h(n) be the probability mass assigned to the event that a hypothesis with description length exactly n is correct. (ETA: fixed from earlier statement.) Then, as n goes to infinity, h(n) goes to zero: by condition 2 each h(n) is a finite sum, and the h(n) are the terms of a series summing to at most 1, so they must tend to zero. So, looking at the large scale, complicated hypotheses must have low probability. This suggests that one doesn't need any appeal to computability or anything similar to accept some form of Occam's razor. One only needs a countable hypothesis space, no hypothesis with probability zero or one, and a non-stupid way of writing down hypotheses.
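To make this concrete, here is a toy instance of the argument. The particular prior (weight 3^(-n) for each binary-string hypothesis of length n) is purely illustrative; any prior satisfying the two constraints behaves the same way, since the values h(n) are terms of a series summing to at most 1:

```python
from fractions import Fraction

def h(n):
    """Probability mass on all hypotheses of description length exactly n,
    under the toy prior w(H) = 3^(-len(H)) over binary-string hypotheses."""
    # There are 2^n binary strings of length n, each with weight 3^(-n),
    # so the unnormalized mass at length n is (2/3)^n.
    unnormalized = Fraction(2, 3) ** n
    # Total weight over all lengths n >= 1 is sum of (2/3)^n = 2.
    return unnormalized / 2

masses = [h(n) for n in range(1, 30)]
# Every individual hypothesis has nonzero prior, yet the per-length mass
# shrinks geometrically: long (complicated) hypotheses collectively carry
# almost no probability.
```

Note that nothing here mentions computability or simplicity directly; the Occam-like decay falls out of normalization alone.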
A few questions: 1) Am I correct in seeing this as Occam-like or is this just an indication that I'm using too weak a notion of Occam's razor?
2) Is this point novel? I'm not as familiar with the Bayesian literature as other people here so I'm hoping that someone can point out if this point has been made before.
ETA: This was apparently a point made by Unknowns in an earlier thread, which I had totally forgotten but probably read at the time. Thanks also for the other pointers.
Certainty estimates in areas outside one's expertise
One issue that I've noticed in discussions on Less Wrong is that I'm much less certain about the likely answers to specific questions than some other people here. But the questions where this seems to be most pronounced are mathematical questions that are close to my area of expertise (such as whether P = NP). In areas outside my expertise, my confidence is often higher. Thus, for example, at a recent LW meet-up I expressed a much lower probability estimate that cold fusion is real than others in the conversation did. This suggests that I may be systematically overestimating my confidence in areas that I don't study as much, essentially a variant of the Dunning–Kruger effect. Have other people here noticed the same pattern in their own confidence estimates?
A possible example of failure to apply lessons from Less Wrong
One issue that has been discussed here before is whether Less Wrong is causing readers and participants to behave more rationally or is primarily a time-sink. I recently encountered an example, worth pointing out to the community, that suggests mixed results. The entry for Less Wrong on RationalWiki says: "In the outside world, the ugly manifests itself as LessWrong acolytes, minds freshly blown, metastasising to other sites, bringing the Good News for Modern Rationalists, without clearing their local jargon cache." RationalWiki has a variety of issues that I'm not going to discuss in detail here (such as a healthy dose of motivated cognition pervading the entire project and serious mind-killing problems), but this sentence should be a cause for concern. What they are essentially describing is LWians not realizing (or not internalizing) that there's a serious problem of inferential distance between people who are familiar with many of the ideas here and people who are not. Since inferential distance is an issue that has been discussed here a lot, this suggests that some people who have read a lot here are not applying the lessons even when they are consciously talking about material related to those lessons. Of course, there's no easy way to tell how representative or common this is, and given RW's inclination to list every possible thing they don't like about something, no matter how small, it may not be a serious issue at all. But it did seem serious enough to point out here.
What Science got Wrong and Why
An article at The Edge has scientific experts in various fields give their favorite examples of theories that were wrong in their fields. Most relevantly to Less Wrong, many of those scientists discuss what their disciplines did wrong that resulted in the misconceptions. For example, Irene Pepperberg, not surprisingly, discusses the failure of scientists to appreciate avian intelligence. She emphasizes that this failure resulted from a combination of factors, including the lack of appreciation that high-level cognition could occur without the mammalian cortex, and the fact that many early studies used pigeons, which just aren't that bright.
Recent results on lower bounds in circuit complexity
There's a new paper which substantially improves lower bounds in circuit complexity. The paper, by Ryan Williams, proves that NEXP does not have ACC circuits of third-exponential size.
This is a somewhat technical result (and I haven't read the proof yet), but there's a summary of what it implies at Scott Aaronson's blog. The main upshot is that this is a substantial improvement over prior circuit complexity bounds. This is relevant because circuit complexity bounds look to be one of the most promising methods for potentially showing that P != NP. Even with these results, circuit complexity bounds are still very far from showing that. But this result looks like it may in some ways get around the relativization and natural proofs barriers, which are major obstacles to resolving P ?= NP.