A hypothesis testing video game

6 Swimmy 01 April 2013 05:41AM

The Blob Family is a simple game by Leon Arnott. At heart, it's a game about testing hypotheses and getting the right answer with as little evidence as possible.

The mechanics work like so: Balls bounce around the screen randomly and you control a character who needs to avoid them. You can aim the mouse anywhere and activate a sonar. On the right side are rules for how various balls will react to this, and your goal is to figure out which ball is which. As you use the sonar more, the balls speed up, so it becomes more difficult to stay alive, thus giving an incentive to test your hypothesis in as few clicks as possible.

It very nicely illustrates the principle that, to test a hypothesis, you must design tests to falsify your intuitions rather than to confirm them. For example, in one level, when you use the sonar:

  • 1 ball heads toward the center
  • 1 ball heads away from the center
  • 1 ball heads away from the mouse
  • 1 ball heads away from you

I found myself mistakenly clicking in the center of the screen to test hypothesis 1, but this is insufficient. To design a proper test, you need to keep the mouse out of the center, keep it away from your character, and, depending on where the balls are, keep it off the straight line between you and them (otherwise "away from the mouse" and "away from you" predict the same motion).
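To make this concrete, here's a toy sketch (the function names and the one-ball, straight-line model are my own simplification, not the game's actual code): each hypothesis predicts a direction of motion for a given ball, and a click only discriminates between them if every hypothesis predicts a distinct direction.

```python
import math

def predict_direction(hypothesis, ball, mouse, player, center=(0.5, 0.5)):
    """Unit vector a ball would move along under each hypothesis."""
    targets = {
        "toward_center": (center, +1),
        "away_center":   (center, -1),
        "away_mouse":    (mouse,  -1),
        "away_player":   (player, -1),
    }
    (tx, ty), sign = targets[hypothesis]
    dx, dy = sign * (tx - ball[0]), sign * (ty - ball[1])
    norm = math.hypot(dx, dy) or 1.0  # avoid division by zero if ball sits on target
    return (dx / norm, dy / norm)

def discriminates(ball, mouse, player):
    """A click discriminates only if all four hypotheses predict distinct directions."""
    dirs = [predict_direction(h, ball, mouse, player)
            for h in ("toward_center", "away_center", "away_mouse", "away_player")]
    return len({(round(x, 6), round(y, 6)) for x, y in dirs}) == len(dirs)
```

Clicking in the center fails because "away from the center" and "away from the mouse" then predict identical motion; a click off-center, off the you-ball line, separates all four.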

It could also demonstrate the ability of a fast brain to test hypotheses quickly. For many levels, if you could slow time down and set up a very good test, you could solve the problem with a single click. But we humans aren't usually so attentive.

Just thought the LW crowd might enjoy it.

Is intelligence explosion necessary for doomsday?

5 Swimmy 12 March 2012 09:12PM

I searched for articles on the topic and couldn't find any.

It seems to me that intelligence explosion makes human annihilation much more likely, since superintelligences will certainly be able to outwit humans, but that a human-level intelligence that could process information much faster than humans would certainly be a large threat itself without any upgrading. It could still discover programmable nanomachines long before humans do, gather enough information to predict how humans will act, etc. We already know that a human-level intelligence can "escape from the box." Not 100% of the time, but a real AI will have the opportunity for many more trials, and its processing abilities should make it far more quick-witted than we are.

I think a non-friendly AI would only need to be 20 years or so more advanced than the rest of humanity to pose a major threat, especially if self-replicating nanomachines are possible. Skeptics of intelligence explosion should still be worried about the creation of computers with unfriendly goal systems. What am I missing?

Awful Austrians

34 Swimmy 12 April 2009 06:06AM

Response to: The uniquely awful example of theism

Why is theism such an ever-present example of irrationality in this community? I think ciphergoth overstates the case. Even theism is not completely immune to evidence, as the acceptance of, say, evolution by so many denominations over time will testify. Theism is a useful whipping boy because it needs no introduction.

But I think the case is overstated for another reason. There are terrible epistemologies out there that are just as bad as theism's. Allow me to tell you a tale, of how I gave up my religion and my association with a school of economics at the same time.

I grew up in a southern Presbyterian church in the U.S. While I was taught standard pseudo-evidential defenses for belief, such as "creation science" and standard critiques of evolution, my church was stringently anti-evidentialist. Their preferred apologetic was something called presuppositionalism. It's certainly a minority apologetic among major defenders of Christianity today, especially compared to the cosmological or morality arguments. But it's a particularly rigorous attempt to defend beliefs against evidence nonetheless.

Presuppositionalism (in some forms) hangs on the problem of induction. We cannot ultimately justify any of our beliefs without first making some assumptions, otherwise we end in solipsism. Christianity, then, justifies itself not on evidence, but on internal consistency. It is ok for an argument to be ultimately circular, because all arguments are ultimately circular. Christianity alone maintains perfect worldview consistency when examined through this lens, and is therefore correct.

Since I've spent a lot of time thinking about this--it can take considerable effort to change one's mind, after all--I can imagine innumerable things wrong with it, but they're not the focus of this entry.

First, I just want to note how close it is to a kind of intro-level Bayesian understanding. Bayesians admit that we must have priors, and that it's indeed nonsense to think we can even have an argument with someone who doesn't. We must ultimately admit that certain justifications are going to be either recursive or based on priors. We believe that we should update our priors based on evidence, but there's nothing in the math that tells us we can't start with a prior of 0% or 100% for some position. (There is something in the math that tells us such probability assignments are very bad ideas, and we have more than enough cognitive-bias literature telling us we shouldn't be so damn overconfident. But then, what if you have a prior that keeps you from accepting such evidence?) Presuppositionalism has none of the mathematical rigor, but it comes very close on a few major points.
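The point about extreme priors falls straight out of Bayes' rule. A minimal sketch (the function name is mine, purely for illustration) shows why a prior of exactly 0% or 100% can never be moved, no matter how lopsided the evidence:

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H|E) from the prior P(H) and the two likelihoods."""
    numerator = prior * p_e_given_h
    denominator = numerator + (1 - prior) * p_e_given_not_h
    return numerator / denominator

# Evidence favoring H nine-to-one moves a 50% prior up to 90%...
print(bayes_update(0.5, 0.9, 0.1))  # 0.9
# ...but priors of exactly 0 and 1 are fixed points: the evidence is ignored.
print(bayes_update(0.0, 0.9, 0.1))  # 0.0
print(bayes_update(1.0, 0.9, 0.1))  # 1.0
```

Since the prior multiplies the likelihood, a zero prior zeroes out any evidence; a prior of one zeroes out the alternative. That's the formal analogue of a worldview built to be immune to disconfirmation.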


Secret Identities vs. Groupthink

19 Swimmy 09 April 2009 08:26PM

From Marginal Revolution:

A new meta-analysis (pdf) of 72 studies, involving 4,795 groups and over 17,000 individuals has shown that groups tend to spend most of their time discussing the information shared by members, which is therefore redundant, rather than discussing information known only to one or a minority of members. This is important because those groups that do share unique information tend to make better decisions.

Another important factor is how much group members talk to each other. Ironically, Jessica Mesmer-Magnus and Leslie DeChurch found that groups that talked more tended to share less unique information.

A result that shouldn't surprise this group. I've noticed obvious attempts to avoid this tendency on Less Wrong (for instance, Yvain's avoiding further Christian-bashing). We've had at least one post asking specifically for unique information. And I don't know about the rest of you, but I've already had plenty of new food for thought on Less Wrong.

But are we tapping the full potential? Each of us has, or should have, a secret identity. The nice thing about those identities is that they give us access to unique knowledge. We've been asked (though I can't find the link) to avoid large posts applying learned rationality techniques to controversial topics, for fear of killing minds, which seems reasonable to me. Is there a better way to allow discipline-specific knowledge to be shared among Less Wrong readers without setting off our politicosensors? It seems beneficial not only for improved rationality training, but also to enhance our secret identities. For instance, I, as an economist-in-training, would like to know not just what an anthropologist can tell me, but what a Bayesian-trained anthropologist can tell me.