A hypothesis testing video game

6 Swimmy 01 April 2013 05:41AM

The Blob Family is a simple game made by Leon Arnott. At heart, it's a game about testing hypotheses and getting the right answer with the least amount of evidence you can.

The mechanics work like so: Balls bounce around the screen randomly and you control a character who needs to avoid them. You can aim the mouse anywhere and activate a sonar. On the right side are rules for how various balls will react to this, and your goal is to figure out which ball is which. As you use the sonar more, the balls speed up, so it becomes more difficult to stay alive, thus giving an incentive to test your hypothesis in as few clicks as possible.

It very nicely illustrates the principle that, to test a hypothesis, you must design tests to falsify your intuitions rather than to confirm them. For example, in one level, when you use the sonar:

  • 1 ball heads toward the center
  • 1 ball heads away from the center
  • 1 ball heads away from the mouse
  • 1 ball heads away from you

I found myself mistakenly clicking in the center of the screen to test hypothesis 1, but this is insufficient. To design a proper test, you need to keep the mouse out of the center, keep it away from your character, and, depending on the positions of the balls, keep it off a straight line from you.
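The idea that one click can separate all four hypotheses only if the four predicted directions are pairwise distinct can be sketched in code. This is a minimal model, not the game's actual logic; the positions and tolerance are my own illustrative choices:

```python
import math

def direction(src, dst):
    """Unit vector from src toward dst (None if the points coincide)."""
    dx, dy = dst[0] - src[0], dst[1] - src[1]
    norm = math.hypot(dx, dy)
    if norm == 0:
        return None
    return (dx / norm, dy / norm)

def predicted_moves(ball, player, mouse, center):
    """The direction each of the four hypotheses predicts for one ball."""
    return {
        "toward center": direction(ball, center),
        "away from center": direction(center, ball),
        "away from mouse": direction(mouse, ball),
        "away from player": direction(player, ball),
    }

def is_discriminating(ball, player, mouse, center, tol=1e-6):
    """A single click distinguishes all four hypotheses only if every
    pair of predicted directions differs."""
    moves = list(predicted_moves(ball, player, mouse, center).values())
    for i in range(len(moves)):
        for j in range(i + 1, len(moves)):
            a, b = moves[i], moves[j]
            if a is None or b is None:
                return False
            if math.hypot(a[0] - b[0], a[1] - b[1]) < tol:
                return False
    return True

# Clicking dead center makes "away from center" and "away from mouse"
# predict the identical motion, so one click cannot tell them apart:
print(is_discriminating(ball=(2, 3), player=(9, 1), mouse=(5, 5), center=(5, 5)))  # False
# An off-center, off-player click separates all four predictions:
print(is_discriminating(ball=(2, 3), player=(9, 1), mouse=(7, 8), center=(5, 5)))  # True
```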

It could also demonstrate the ability of a fast brain to test hypotheses quickly. For many levels, if you could slow time down and set up a very good test, you could solve the problem with a single click. But we humans aren't usually so attentive.

Just thought the LW crowd might enjoy it.

Comment author: FiftyTwo 14 March 2013 12:46:46AM 12 points [-]

Having only become involved with LessWrong after it had split off, I've never seen the appeal of "Overcoming Bias." There are a few interesting posts, but a lot of dross and random weird political/incendiary things (like the above). All the good stuff seems to be expressed better elsewhere (mainly on LW).

Possibly this just means LW's voting system is doing its job, but I still notice I'm confused by the appeal. Can anyone enlighten me?

Comment author: Swimmy 16 March 2013 09:04:04PM *  8 points [-]

Well, I'll stick up for OB and Hanson.

Hanson posts about interesting things in a droll way. That's intentional, I believe: sometimes he seems to be trying to get a rise out of people, but most of the time he's trying to reduce emotional reactions to his posts.

He's really, really invested in ideas like evolution: simple theories that explain lots of different phenomena. This is why we get lots and lots of posts about signaling, near/far, and farmers/foragers. He thinks that these explain far more than people currently give them credit for, so he's trying to expand their influence. If this seems boring, let me just point out that Hanson has provided or advertised:

1) Probably the best explanation for why medical expenditures in the US grow faster than health outcomes.

1.a) What I consider the best post on any blog about what economists can say about health care reform

2) An explanation for traditional scifi aesthetics

3) Why dumbed-down arguments work better in politics

4) An ev/psych hypothesis for left/right political divide

5) An ev/psych hypothesis for the appeal of adventure novels and video game settings

6) Problems with the business world and how to fix them (and why they won't be fixed)

7) The dark side of cooperation

He's also interested in experimentation and clever solutions to social problems. Hence,

1) A fantastic (but probably politically unworkable) way to solve the problem of CEO value.

2) Futarchy, of course. I doubt it would be very efficient on a large scale because of target/noise problems, but on a local government scale I think it could be amazing.

3) Numerous other applications he's come up with or advertised: solving standardization/focal-point problems (like Blu-ray vs. HD DVD), deciding which movie scripts to fund, etc.

If you have seen much of this expressed better elsewhere, consider the value of originating an idea vs. explaining it in different words. A lot of the LW community was around for the OB days when Eliezer and Robin blogged together, and many of us have absorbed insights from both of them. And these are all just memorable posts off the top of my head. Digging for them, I found many more interesting posts.

Those political posts that seem like trolling are, to me, about questioning our moral instincts, which are often very bad. I appreciate a seemingly bizarre hypothesis over another self-congratulation about why X moral theory confirms what we all already believe anyway, hooray.

Comment author: MileyCyrus 07 March 2013 02:36:47PM 1 point [-]

What's the problem with the "compound interest will make you rich" meme? Is it inflation?

Comment author: Swimmy 07 March 2013 06:27:34PM 9 points [-]

Compound interest gains most of its power when large amounts have been saved. So if you don't make much money, compound interest simply won't make you rich; you won't be able to save enough (though you can still have a decent retirement). If you make a lot, it doesn't matter as much anyway. If you're middle class and willing to save half your income, then it might make you rich, but that is a painful 30-40 years. Explore the graphs and savings calculator here for examples of what you would need to do to have a million by 60.
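For concreteness, the annuity math behind such savings calculators can be sketched in a few lines. The 7% nominal annual return and the starting ages are my own illustrative assumptions, not figures from the comment:

```python
def monthly_savings_needed(target, annual_return, years):
    """Monthly contribution needed to reach `target`, assuming returns
    compound monthly at a constant rate (an idealization)."""
    r = annual_return / 12          # monthly rate
    n = years * 12                  # number of contributions
    # Future value of an ordinary annuity: FV = P * ((1 + r)**n - 1) / r
    return target * r / ((1 + r) ** n - 1)

# How much per month to hit $1M by age 60, at an assumed 7% nominal return:
for start_age in (25, 35, 45):
    years = 60 - start_age
    p = monthly_savings_needed(1_000_000, 0.07, years)
    print(f"Start at {start_age}: ~${p:,.0f}/month for {years} years")
```

Starting at 25 requires roughly a fifth of what starting at 45 does, which is the comment's point: the power is in decades of accumulation, and that only helps if you can actually spare the contributions.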

Comment author: Swimmy 23 January 2013 04:56:49AM 4 points [-]

"If you type 'AI destroyed' right now, you'll be wasting a good opportunity for a fun conversation. You'll still have 'won' if you do it later, and nobody will be impressed with you for just typing 'AI destroyed' immediately, so why not wait?"

I thought of what would work on me, were I playing the game with someone I found interesting. In general, I'd say your best bet is to make the other person laugh hard.

Comment author: prase 22 January 2013 06:39:12PM *  0 points [-]

Could you please elaborate on the point you are trying to make?

Comment author: Swimmy 23 January 2013 03:38:49AM *  0 points [-]

Most people don't usually make these kinds of elaborate things up. Prior probability for that hypothesis is low, even if it might be higher for Tuxedage than it would be for an average person. People do actually try the AI box experiment, and we had a big thread about people potentially volunteering to do it a while back, so prior information suggests that LWers do want to participate in these experiments. Since extraordinary claims are extraordinary evidence (within limits), Tuxedage telling this story is good enough evidence that it really happened.

But on a separate note, I'm not sure the prior probability of this being a lie would necessarily be higher just because Tuxedage has some incentive to lie. If it were found out to be a lie, the cause of FAI might be significantly hurt ("they're a bunch of nutters who lie to advance their silly religious cause"). Folks on RationalWiki watch this site for things like that, so Tuxedage also has some incentive to not lie. Also, more than one person would have to be involved in this lie, giving it a complexity penalty. I suppose the only story detail that needs to be a lie to advance FAI is "I almost won," but then why not choose "I won"?

Comment author: Eliezer_Yudkowsky 21 January 2013 08:07:02PM 23 points [-]

More difficult version of AI-Box Experiment: Instead of having up to 2 hours, you can lose at any time if the other player types AI DESTROYED. The Gatekeeper player has told their friends that they will type this as soon as the Experiment starts. You can type up to one sentence in your IRC queue and hit return immediately; the other player cannot type anything before the game starts (so you can show at least one sentence, up to IRC character limits, before they can type AI DESTROYED). Do you think you can win?

(I haven't played this one but would give myself a decent chance of winning, against a Gatekeeper who thinks they could keep a superhuman AI inside a box, if anyone offered me sufficiently huge stakes to make me play the game ever again.)

Comment author: Swimmy 22 January 2013 05:46:42AM 6 points [-]

What are "sufficiently huge stakes," out of curiosity?

Comment author: prase 21 January 2013 05:59:22PM *  1 point [-]

I realise that it isn't polite to say that, but I don't see sufficient reasons to believe you. That is, given the apparent fact that you believe in the importance of convincing people about the danger of failing gatekeepers, the hypothesis that you are lying about your experience seems more probable than the converse. Publishing the log would make your statement much more believable (of course, not with every possible log).

(I assign high probability to the ability of a super-intelligent AI to persuade the gatekeeper, but rather low probability to the ability of a human to do the same against a sufficiently motivated adversary.)

Comment author: Swimmy 21 January 2013 07:53:03PM 0 points [-]
Comment author: Tuxedage 21 January 2013 04:30:55AM *  7 points [-]

<accolade> yeah

<accolade> I think for a superintelligence it would be a piece of cake to hack a human

<accolade> although I guess I'm Cpt. Obvious for saying that here :)

<Tuxedage> accolade, I actually have no idea what the consensus is, now that the experiment was won by EY

<Tuxedage> We should do a poll or something

<accolade> absolutely. I'm surprised that hasn't been done yet

Poll: Do you think a superintelligent AGI could escape an AI-Box, given that the gatekeepers are highly trained in resisting the AI's persuasive tactics, and that the guards are competent and organized?


Comment author: Swimmy 21 January 2013 05:31:44AM *  1 point [-]

I think it's almost certain that one "could," just given how much more time an AI has to think than a human does. Whether it's likely is a harder question. (I still think the answer is yes.)

Comment author: [deleted] 12 January 2013 04:15:45AM 0 points [-]

I love this, but I'm disappointed that planets don't seem to render at all (on Wine).

In response to comment by [deleted] on January 2013 Media Thread
Comment author: Swimmy 12 January 2013 07:55:52AM 0 points [-]

Have you tried landing on them with shift+g instead of flying into them? If so, I've got nothing. They render for me, if slowly.

Comment author: RobertLumley 08 January 2013 02:15:29AM 0 points [-]

Other Media Thread

Comment author: Swimmy 08 January 2013 07:50:27AM *  6 points [-]

Not a videogame per se, but still a potential timesink for some of us. I like it anyway.

Space Engine is a free space simulation software that lets you explore the universe in three dimensions, starting from planet Earth to the most distant galaxies. Areas of the known universe are represented using actual astronomical data, while regions uncharted by human astronomy are generated procedurally. Millions of galaxies, trillions of stars, countless planets!

So, a space simulator. Allows FTL travel to get between galaxies. No interesting creatures like Noctis had, but it is very pretty sometimes.

Edit: Also prone to crashing. Such is life.
