Comment author: HonoreDB 04 January 2013 06:53:55PM 1 point [-]

I came here to refer you to John Holt, but since User:NancyLebovitz already did that, I'll just add that I'm amused that your handle is Petruchio.

Comment author: wedrifid 05 July 2012 03:47:55AM 18 points [-]

They do slightly worse than simply averaging a bunch of estimates, and would be blown out of the water by even a naive histocratic algorithm (weighted average based on past predictor performance using Bayes)

Fantastic. Please tell me which markets this applies to and link to the source of the algorithm that gives me all the free money.

Comment author: HonoreDB 05 July 2012 03:57:57AM 1 point [-]

Unfortunately you need access to a comparably-sized bunch of estimates in order to beat the market. You can't quite back it out of a prediction market's transaction history. And the amount of money to be made is small in any event because there's just not enough participation in the markets.

Comment author: HonoreDB 05 July 2012 03:32:58AM 5 points [-]

Irrationality Game

Prediction markets are a terrible way of aggregating probability estimates. They only enjoy the popularity they do because of a lack of competition, and because they're cheaper to set up due to the built-in incentive to participate. They do slightly worse than simply averaging a bunch of estimates, and would be blown out of the water by even a naive histocratic algorithm (weighted average based on past predictor performance using Bayes). The performance problems of prediction markets are not just due to liquidity issues, but would inevitably crop up in any prediction market system due to bubbles, panics, hedging, manipulation, and either overly simple or dangerously complex derivatives. 90%

Hanson and his followers are irrationally attached to prediction markets because they flatter libertarian sensibilities. 60%
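The "histocratic algorithm" gestured at above is only described in one parenthetical, but it can be made concrete. Here is a minimal sketch, assuming a simple scheme where each forecaster's weight is the Bayesian likelihood of their past track record (the probability they assigned to each outcome that actually occurred). The function names and the example numbers are my own illustrative assumptions, not anything specified in the comment.

```python
def likelihood_weight(past_probs, past_outcomes):
    """Likelihood of a forecaster's track record: the product of the
    probabilities they assigned to the outcomes that actually happened."""
    w = 1.0
    for p, happened in zip(past_probs, past_outcomes):
        w *= p if happened else (1.0 - p)
    return w

def histocratic_average(current_estimates, histories):
    """Weighted average of current probability estimates, each forecaster
    weighted by the likelihood of their past predictions."""
    weights = [likelihood_weight(probs, outcomes)
               for probs, outcomes in histories]
    total = sum(weights)
    return sum(w * e for w, e in zip(weights, current_estimates)) / total

# Two hypothetical forecasters with identical past confidence levels but
# opposite track records: the well-calibrated one dominates the average.
histories = [
    ([0.9, 0.8, 0.7], [True, True, True]),     # good track record
    ([0.9, 0.8, 0.7], [False, False, False]),  # poor track record
]
estimate = histocratic_average([0.8, 0.2], histories)
```

The aggregate lands close to the reliable forecaster's 0.8, since the poor forecaster's likelihood weight (0.1 × 0.2 × 0.3 = 0.006) is dwarfed by the good one's (0.9 × 0.8 × 0.7 = 0.504).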

Comment author: JGWeissman 25 May 2012 09:34:20PM 3 points [-]

If the prize for correctly answering "true" is 10 times as good as the prize for correctly answering "false", then you really should be about 91% confident the correct answer is "false" before you give that answer.
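The 91% figure follows from a one-line expected-value comparison. Taking the prizes to be worth 10 and 1 (the ratio is all that matters), with probability p that the answer is "true", answering "true" pays 10p in expectation and answering "false" pays 1 − p, so "false" is the better answer only when 1 − p > 10p, i.e. when your confidence in "false" exceeds 10/11 ≈ 90.9%:

```python
# Indifference point between answering "true" (prize worth 10) and
# "false" (prize worth 1): answer "false" only when 1 - p > 10 * p.
good, less_good = 10, 1
threshold = good / (good + less_good)  # required confidence in "false"
```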

Comment author: HonoreDB 25 May 2012 09:52:02PM 0 points [-]

Yup. The propositions need to be such that you can get more confident than that.

Comment author: HonoreDB 25 May 2012 07:50:00PM 8 points [-]

My girlfriend says that a common case of motivated cognition is witnesses picking someone out of a lineup. They want to recognize the criminal, so given five faces they're very likely to pick one even if the real criminal's not there, whereas if people are leafing through a big book of mugshots they're less likely to make a false positive identification.

She suggests a prank-type exercise where there are two plants in the class. Plant A, who wears a hoodie and sunglasses, leaves to go to the bathroom, whereupon Plant B announces that they're pretty sure Plant A is actually $FAMOUS_ACTOR here incognito. Plant A pokes his head in, says he needs to go take a call, and leaves. See who manages to talk themselves into thinking that really is the celebrity.

Comment author: HonoreDB 25 May 2012 07:14:56PM 1 point [-]

This seems like it'll be easiest to teach and test if you can artificially create a preference for an objective fact. Can you offer actual prizes? Candy? Have you ever tried a point system, and did people react well?

Assume you have a set of good prizes (maybe chocolate bars, or tickets good for 10 points) and a set of less-good prizes (Hershey's kisses, or tickets good for 1 point).

Choose a box: Have two actual boxes, labeled "TRUE" and "FALSE". Before the class comes in, the instructor writes a proposition on the blackboard, such as "The idea that carrots are good for your eyesight is a myth promoted as part of a government conspiracy to cover up secret military technology" or "A duck's quack never echoes, and nobody knows why." If the instructor believes that the proposition is true, the instructor puts a bunch of good prizes in the TRUE box and nothing in the FALSE box. Otherwise, the instructor fills the FALSE box with less-good prizes. The class comes in, and the instructor explains the rules. Then she spends 5 minutes trying to persuade the class that she believes the proposition. After that, people who think she actually believes it line up at the TRUE box, and everyone else lines up at the FALSE box. Everyone who guessed right gets a prize from their box. If you guess TRUE and you're right, your prize is better than if you guess FALSE and are right. Repeat this for a few propositions, and it's at least a useful test for whether you can separate what you want from what seems plausible.

Comment author: [deleted] 03 May 2012 05:58:39PM *  6 points [-]

That sounded like something right out of a Jorge Luis Borges novel...

But where does the recursion stop? Can we hypothesize that it's Turtles All The Way Down?

Comment author: HonoreDB 03 May 2012 11:03:53PM 0 points [-]

It seems likely that God would create multiple realities, populated by different sorts of people and/or with different True Religions, to feed a diverse set of people into a shared heaven. So the recursive realities would have a pyramid or lattice structure. If God has limited knowledge of the realities he's created, there could even be cycles.

Comment author: HonoreDB 03 May 2012 05:09:49PM 35 points [-]

God is, himself, in a world filled with vague, ambiguous, sometimes contradictory hints towards a divine meta-reality. He's confused, anxious, and doesn't trust his own judgment. So he's created the Abrahamic world in order to identify the people who somehow manage to arrive at the truth given a similar lack of information. One of our religions is correct--guess right and you go to Heaven to help God try to get to Double Heaven.

Comment author: HonoreDB 26 April 2012 04:01:02PM 4 points [-]

Comment author: paulfchristiano 26 April 2012 05:06:54AM 5 points [-]

We are trying to formally specify the input-output behavior of an idealized computer, running some simple program. The mathematical definition of a Turing machine with an input tape would suffice, as would a formal specification of a version of Python running with unlimited memory.

Comment author: HonoreDB 26 April 2012 03:32:51PM *  2 points [-]

Okay, I see that that's what you're saying. The assumption then (which seems reasonable but needs to be proven?) is that the simulated humans, given infinite resources, would either solve Oracle AI [edit: without accidentally creating uFAI first, I mean] or just learn how to do stuff like create universes themselves.

There is still the issue that a hypothetical human with access to infinite computing power would not want to create or observe hellworlds. We here in the real world don't care, but the hypothetical human would. So I don't think your specific idea for brute-force creating an Earth simulation would work, because no moral human would do it.
