Comment author: casebash 05 January 2016 11:36:12PM 0 points [-]

"We can still say, e.g., that one course of action is more rational than another, even in situations where no course of action is most rational." - True.

"But I don't know of any reason to adopt that definition" - perfect rationality means to me more rational than any other agent. I think that is a reasonable definition.

Comment author: evand 29 January 2016 04:23:58AM 0 points [-]

Seeing as this is an entire article about nitpicking and mathematical constructs...

perfect rationality means to me more rational than any other agent. I think that is a reasonable definition.

Surely that should be "at least as rational as any other agent"?

Comment author: evand 30 December 2015 02:12:43AM 1 point [-]

I think you're pessimistic about tech regression.

Assuming survival of some libraries, I think basically any medium-sized functional village (thousands of people, or hundreds with a dash of trade) is adequate to maintain iron age technology. That's valuable enough that any group that survived in a fixed location for more than a couple years could see the value in the investment. (You might not even need the libraries if the right sort of person survived; I suspect I could get a lot of it without that, but it would be a lot less efficient.)

It doesn't take all that much more beyond that to get to some mix of 17th to 19th century tech. Building a useful early 19th-century machine shop is the work of one or two people, full time, for several years. Even in the presence of scavenging, I think such technology is useful enough that it won't take that long to be worth spending time on.

Basically I think anything that's survivable is unlikely to regress to before 17th century tech for a period longer than a few years.

Comment author: evand 08 November 2015 09:30:35PM 2 points [-]

So, this is exactly the sort of thing prediction markets should do well at, right? People without structural incentives to ignore a problem can make accurate predictions and make money. People who care about it can point to the market prices when making their point.

In the black swan case, I think prediction markets will do only somewhat better than alternatives, but here they should do vastly better. Right?

Comment author: Raemon 08 November 2015 06:47:16AM *  1 point [-]

My impression is that Main currently doesn't get much visibility - unless things get promoted, you have to actively go looking for articles there, whereas most people who come to LW at all see discussion by default.

Comment author: evand 08 November 2015 09:27:04PM 2 points [-]

Agreed. It's silly. This site needs more active tending in general, in my opinion.

In the meantime, you can bookmark this link.

Comment author: entirelyuseless 16 September 2015 12:53:08PM *  4 points [-]

If you don't want to violate the independence axiom (which perhaps you did), then you will also need bounded utility when considering deals with non-PEST probabilities.

In any case, if you effectively give probability a lower bound, unbounded utility loses any specific meaning. The whole point of saying an outcome has double the utility is that you will accept it at half the probability. Once you won't accept it at half the probability (as will happen in your situation), there is no point in saying that something has twice the utility.

Comment author: evand 16 September 2015 11:14:20PM 4 points [-]

It's weird, but it's not quite the same as bounded utility (though it looks pretty similar). In particular, there's still a point in saying it has double the utility even though you sometimes won't accept it at half the probability. Note the caveat "sometimes": at other times, you will accept it.

Suppose event X has utility U(X) = 2 * U(Y). Normally, you'll accept X instead of Y at anything over half the probability. But if you reduce the probabilities of both events enough, that changes. If you simply had a bound on utility, you would get a different behavior: you'd always accept X at over half the probability of Y, for any P(Y), unless the utility of Y was too high. These behaviors are both fairly weird (except in a universe where there's no possible construction of an outcome with double the utility of Y, or a universe where you can't construct a sufficiently low probability for some reason), but they're not the same.
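The difference between the two behaviors can be made concrete with a toy sketch. The floor and bound values below are my own illustrative assumptions, not anything from the thread:

```python
# Two toy expected-utility agents; the cutoff values are illustrative assumptions.
P_FLOOR = 1e-6   # probability-floor agent: probabilities below this count as zero
U_BOUND = 100.0  # bounded-utility agent: utilities are clipped at this value

def eu_floor(p, u):
    """Expected utility when sub-floor probabilities are treated as zero."""
    return 0.0 if p < P_FLOOR else p * u

def eu_bounded(p, u):
    """Expected utility with clipped (bounded) utility."""
    return p * min(u, U_BOUND)

u_y, u_x = 10.0, 20.0  # U(X) = 2 * U(Y)

# At ordinary probabilities, both agents prefer X at just over half of P(Y):
prefers_x_normal = eu_floor(0.26, u_x) > eu_floor(0.5, u_y)

# Scale both probabilities down by ~2e6: the floor agent flips to Y (X's
# probability falls below the floor), while the bounded agent, whose utilities
# are still under the bound, sticks with X:
floor_prefers_x_small = eu_floor(5.2e-7, u_x) > eu_floor(1e-6, u_y)
bounded_prefers_x_small = eu_bounded(5.2e-7, u_x) > eu_bounded(1e-6, u_y)
```

So the two agents agree at ordinary probabilities but diverge at extreme ones, which is the sense in which the probability floor is not simply bounded utility in disguise.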

Comment author: Lumifer 09 September 2015 05:02:09PM 0 points [-]

A fair point, though I don't think it makes any difference in the context. And I'm not sure the utility function is amenable to MCMC sampling...

Comment author: evand 10 September 2015 03:30:35AM 0 points [-]

I basically agree. However...

It might be more amenable to MCMC sampling than you think. MCMC is basically a series of operations of the form "make a small change and compare the result to the status quo", which, now that I phrase it that way, sounds a lot like human ethical reasoning. (Maybe the real problem with philosophy is that we don't consider enough hypothetical cases? I kid... mostly...)

In practice, the symmetry constraint isn't as nasty as it looks. For example, you can use Metropolis-Hastings to sample a random node from a graph while knowing only the local topology (you need some connectivity constraints on the walk length to get good diffusion properties). Basically, I posit that the hard part is coming up with a sane definition of "nearby possible world" (and that the symmetry constraint and other parts are pretty easy after that).
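For what it's worth, here is a minimal sketch (the graph and step counts are my own illustrative choices) of the MH-on-a-graph trick: propose a uniform random neighbor and accept with probability deg(current)/deg(proposal), which corrects the walk's bias toward high-degree nodes and makes the stationary distribution uniform over nodes:

```python
import random
from collections import Counter

# Small example graph as an adjacency list (undirected, connected).
graph = {
    'a': ['b', 'c'],
    'b': ['a', 'c', 'd'],
    'c': ['a', 'b'],
    'd': ['b'],
}

def mh_node_sample(graph, start, steps, rng=random):
    """Approximately uniform node sample using only local topology."""
    node = start
    for _ in range(steps):
        proposal = rng.choice(graph[node])
        # Metropolis-Hastings correction: accept with min(1, deg(node)/deg(proposal)).
        if rng.random() < len(graph[node]) / len(graph[proposal]):
            node = proposal
    return node

random.seed(0)
counts = Counter(mh_node_sample(graph, 'a', 50) for _ in range(4000))
# Each of the four nodes should appear roughly 1000 times.
```

Note that the sampler never needs the global node list, only each node's neighbors, which is the point being made about local topology.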

Comment author: Lumifer 09 September 2015 03:44:38AM *  1 point [-]

I came up with an algorithm that compromises between them.

I am not sure of the point. If you can "sample ... from your probability distribution" then you fully know your probability distribution including all of its statistics -- mean, median, etc. And then you proceed to generate some sample estimates which just add noise but, as far as I can see, do nothing else useful.

If you want something more robust than the plain old mean, check out M-estimators which are quite flexible.

Comment author: evand 09 September 2015 02:37:58PM 0 points [-]

If you can "sample ... from your probability distribution" then you fully know your probability distribution

That's not true. (Though it might well be in all practical cases.) In particular, there are good algorithms for sampling from unknown or uncomputable probability distributions. Of course, any method that lets you sample from a distribution also lets you estimate its parameters, but that's exactly the process the parent comment is suggesting.
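A standard illustration (mine, not the commenter's): Metropolis sampling lets you draw from a density known only up to a constant, and then estimate statistics like the mean from the samples, without ever computing the normalizer:

```python
import random
import math

def unnormalized_density(x):
    # Proportional to a standard normal; the normalizing constant is never used.
    return math.exp(-x * x / 2)

def metropolis(density, steps, step_size=1.0, rng=random):
    """Random-walk Metropolis sampler for a 1-D unnormalized density."""
    x = 0.0
    samples = []
    for _ in range(steps):
        proposal = x + rng.uniform(-step_size, step_size)
        # Symmetric proposal, so the acceptance ratio is just the density ratio.
        if rng.random() < density(proposal) / density(x):
            x = proposal
        samples.append(x)
    return samples

random.seed(0)
samples = metropolis(unnormalized_density, 20000)
mean_estimate = sum(samples) / len(samples)
# mean_estimate should land near 0, the true mean of the target distribution.
```

Knowing how to evaluate the density pointwise (even unnormalized) is much weaker than "fully knowing" the distribution's statistics, yet it suffices for sampling.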

Comment author: Anders_H 03 September 2015 02:56:31PM 1 point [-]

Also, they have a really dumb system where each candidate has both yes and no shares, instead of each election having shares per candidate. Which means there are more different prices than there should be, and no system-enforced rule that the sum of the probabilities = 1.

Actually, the "yes" and "no" shares are the same contracts: Buying a "yes" contract is exactly the same thing as selling a "no" contract. The best offer for "buy yes" plus the best offer for "sell no" will always equal 1, without requiring arbitrage or any action on the part of the market participants.

For some reason they have chosen a counterintuitive user interface such that these contracts appear to be different from each other, but they are the same.

Comment author: evand 03 September 2015 03:03:07PM 0 points [-]

Yes, I suppose my comment wasn't clear. There are twice as many distinct prices as there should be, not 4x. There should only be one price per candidate (plus an additional price for "other" in many cases). The "buy no" price for a single candidate should be equal to the sum of the "buy yes" prices for all the other candidates, and that relationship should be fully enforced by the exchange.
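A toy sketch of the relationship being described, with hypothetical candidate names and prices summing to 1 (my own numbers, not PredictIt's):

```python
# One "yes" price per candidate, exchange-enforced to sum to 1.
yes_prices = {'Alice': 0.55, 'Bob': 0.30, 'Other': 0.15}
assert abs(sum(yes_prices.values()) - 1.0) < 1e-9

def implied_no_price(prices, candidate):
    """Fair "no" price for a candidate in a sum-to-1 market.

    "No" on a candidate pays off exactly when some other candidate wins,
    so its price is the total "yes" price of everyone else, equivalently
    1 - yes(candidate). The exchange can enforce this identity directly.
    """
    return sum(p for name, p in prices.items() if name != candidate)

# implied_no_price(yes_prices, 'Alice') is approximately 0.45, i.e. 1 - 0.55.
```

Under this scheme there is no separate "no" order book to drift out of line with the "yes" prices, which is the enforcement being asked for.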

Comment author: Clarity 03 September 2015 09:06:42AM 1 point [-]

What are the 'best buys' in warm fuzzies?

I want to satisfice my urges for the least cost. Perhaps there's a kind of GiveWell for warm fuzzies out there.

I feel like 'purchasing status' is part of my warm fuzzies calculation, which complicates things.

Perhaps buying coffees for people in line around me?

Comment author: evand 03 September 2015 02:59:23PM 1 point [-]

Perhaps buying coffees for people in line around me?

That seems like a cheap experiment. Have you tried it? What else have you tried for purchasing warm fuzzies?

Comment author: Douglas_Knight 03 September 2015 02:05:33AM 1 point [-]

It is common across prediction markets that the fee structure makes it not worth it to push extreme events to further extremes. Thus unlikely candidates have too much mass and the total adds up to more than 1. But maybe Predictit is even worse for the reasons Anders gives.

Comment author: evand 03 September 2015 02:53:36PM 0 points [-]

You can build systems that preserve sum of probabilities = 1. They'll still see bias away from the extremes, because of fees and because of the time value of money. But you can do a lot better than PredictIt. (One thing that helps on the fees side is to make fees go down for trades near the extremes; I argued for that in detail on Augur here.)
