Comments

ctwardy · 40

NB: we've been told we won't be able to pay people on the new S&T market. The required scale (1,000 to 10,000 active forecasters) means it would have been a small percentage of participants anyway.

ctwardy · 20

Thanks for the shout! We're excited to move into Science & Tech, which lets us be much more open about recruiting and sourcing. I really hope we get lots of people from LW.

As noted in my reply below, please use the new registration site (it has an atom logo, not a dagger). That will put you in the queue to be notified when we bring up the new site. The old site will go dark this Saturday, though we will still have occasional updates on the blog. The new site will be in testing over the summer (there is an email link if you're interested) and go live in the autumn.

ctwardy · 50

Whoa whoa whoa! That's the old registration site. Go to signup.daggre.org to register for the S&T market (like the original post said).

And, er, sorry about the disrepair of the old intake questions. I thought that had been taken care of ages ago. (The Cognitive Reflection Test questions are actually a pretty good screen, though: fewer than half the people answer them correctly.) Anyway, the main site is going down shortly while we retool. Use signup.daggre.org to get in the queue for the Science & Technology forecasting site. We're going to take advantage of the downtime this summer to make some fundamental changes we simply couldn't do while running a production site in a tournament. (We did do substantially better than a baseline opinion pool and a baseline prediction market on the geopolitical questions.)

I'll also ask our participant coordinator to check everyone who has signed up to the main site in the last week. Thanks for posting your problems.

Charles Twardy
PI for the (soon-to-be-renamed) DAGGRE project

ctwardy · 20

Regarding superfluous division by P(A): it is omitted in some applications of Bayesian reasoning.

Search theory begins with a probability map and updates the map based on searching and not finding. The calculations are done unnormalized, both for speed and to preserve information. If the map started with 10,000 points distributed over all the regions in the search area, it is useful to know that searching has reduced the total to 1,000: it suggests you have exhausted the area and should reconsider the hypothesis that the target is in it at all.

There are equivalent approaches using normalization, such as including "Rest of World" as an area. But unnormalized is faster.
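
To make the arithmetic concrete, here is a minimal Python sketch of the unnormalized update on a toy map; the region names, point counts, and detection probabilities are made up for illustration, not taken from any real search.

```python
# Minimal sketch of unnormalized Bayesian updating as used in search theory.
# Region names, point counts, and detection probabilities are illustrative only.

# Prior "map": unnormalized points spread over regions of the search area.
points = {"ridge": 4000, "valley": 3500, "lake_shore": 2500}  # total = 10,000

def search_and_miss(points, region, pod):
    """Update the map after searching `region` and not finding the target.

    `pod` is the probability of detection had the target been there.
    Bayes' rule multiplies the region's mass by (1 - pod); we skip the
    division by the probability of not finding, so the map stays unnormalized.
    """
    updated = dict(points)
    updated[region] *= (1.0 - pod)
    return updated

# A few unsuccessful sweeps.
points = search_and_miss(points, "ridge", pod=0.8)
points = search_and_miss(points, "valley", pod=0.7)
points = search_and_miss(points, "lake_shore", pod=0.9)

print(points)                # relative mass still gives the odds between regions
print(sum(points.values()))  # if 10,000 has shrunk toward 1,000, the area is
                             # nearly exhausted and "target is here" looks doubtful
```

The normalized version gives the same posterior odds between regions; keeping the raw totals just preserves how much the overall "target is in this area" hypothesis has been worn down.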

ctwardy · 70

Thanks for the vote of confidence. I should say that while I think my paper presents things well, I cannot take credit for the statistics or experimental design. Tim van Gelder already had the machinery in place to measure pre-post gains, and had done so for several semesters. The results were published in Donohue et al. 2002. The difference here was that I took over teaching, and we continued the pre-post tests.

Although argument maps are usually used to map existing natural language arguments, one could start with the map. I like to think that the more people use these maps, the more their thinking naturally follows such a structure. I'm sure I could use more practice myself.

Just a note on terminology: the tree does have two kinds of nodes, but "bipartite graph" isn't quite the right description. Strictly speaking, every tree is bipartite; the point is that the tree structure is more restrictive than a general bipartite graph.

I think arguments in argument maps can be made probabilistic and converted to Bayesian networks. But as it is, it takes long enough just to make an argument map. I've recently discovered Gheorghe Tecuci's work; he's just down the hall from me, but I didn't know his work until I heard him give a talk. He has an elaborate system that helps analysts create structures very much like argument maps by filling in schemas, and then reasons quantitatively with them. The tree structure and the simplicity of the combination rules (min, max, average, etc.) are more limited than a full Bayesian network, but it seems to be a very nice extension of argument maps.
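
For what it's worth, here is a toy Python sketch of that kind of tree evaluation with min/max/average combiners; the node layout, rule names, and numbers are my own illustration, not Tecuci's actual system.

```python
# Toy sketch: propagate scores up a tree of claims with simple combination
# rules (min, max, average). Structure and numbers are purely illustrative.

def evaluate(node):
    """Recursively combine children's scores into a score for `node`."""
    if "score" in node:                      # leaf: assessed directly
        return node["score"]
    kids = [evaluate(child) for child in node["children"]]
    rule = node["combine"]
    if rule == "min":                        # conjunctive: as weak as its weakest premise
        return min(kids)
    if rule == "max":                        # alternatives: as strong as its best line
        return max(kids)
    return sum(kids) / len(kids)             # "average" as a middle-ground rule

# Hypothetical argument: a conclusion supported by two lines of reasoning.
argument = {
    "combine": "max",                        # either line suffices
    "children": [
        {"combine": "min",                   # needs both premises
         "children": [{"score": 0.9}, {"score": 0.6}]},
        {"combine": "average",
         "children": [{"score": 0.4}, {"score": 0.8}]},
    ],
}

print(evaluate(argument))                    # 0.6 for this toy example
```

A full Bayesian network would let evidence interact in richer ways; the tree-plus-simple-rules version trades that expressiveness for being much easier to elicit.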

ctwardy · 10

Luke Hope and Karl Axnick did most of the work on Causal Reckoner. I have used it, but I did very little to develop it. However, I believe it is GPL, so it could be posted.