This only lets you get causal probabilities for actions performed by people who opt in. Idk if that's a problem for futarchy but it makes it more limited than I think we'd like. E.g. I'm worried Apple might sue me for something, so I want a market for P(Apple wins lawsuit | do(they sue me)), and I can't commit them to following this scheme.
This has some similarities with early smallpox variolation, right? (And some differences, like the numbers.)
EV on A’s life expectancy is strongly positive.
Depending on your AI timelines :p
Why not?
Mostly "priors on this kind of thing".
(I might be able to get something more specific but that comment won't come for a week minimum, if ever.)
Oh, that sounds right. I confess I wasn't thinking at that kind of scale.
My intuitive reaction is:
With 0.1% chance, transparently take a random decision among available decisions (the randomness is pre-set and independent of specific decisions/market data/etc.)
As a note, I think this doesn't need to be uniformly random. So if there's a decision that you a priori think is a terrible idea, you can downweight it in the random choice, as long as the market prices don't affect that.
A trader will probably want considerably more than 1000x the payout if the probability of the bet actually counting goes down by 1000x, right?
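To make that worry concrete, here's a back-of-the-envelope sketch (the prices, the edge, and the 1000x multiplier are illustrative numbers I've made up, not from the thread): even if executed trades pay out exactly 1000x, the expected profit only matches the ordinary market while the variance explodes and collateral stays locked the whole time.

```python
import math

def profit_stats(price, true_prob, trigger_prob, payout_multiplier):
    """Mean and standard deviation of the profit from buying one share at
    `price` when the trader believes the true probability is `true_prob`.

    trigger_prob: chance the randomization actually takes this decision,
        so the market resolves rather than refunding the bet.
    payout_multiplier: how much gains/losses are scaled up when the
        decision is executed.
    """
    outcomes = [
        (trigger_prob * true_prob,       payout_multiplier * (1 - price)),  # executed, share pays out
        (trigger_prob * (1 - true_prob), payout_multiplier * (0 - price)),  # executed, share worthless
        (1 - trigger_prob,               0.0),                              # not executed, bet refunded
    ]
    mean = sum(p * x for p, x in outcomes)
    var = sum(p * (x - mean) ** 2 for p, x in outcomes)
    return mean, math.sqrt(var)

# Ordinary conditional market: 10-point mispricing.
print(profit_stats(0.40, 0.50, trigger_prob=1.0, payout_multiplier=1))
# ≈ (0.10, 0.50)

# Same mispricing under the 0.1%-execution scheme with a 1000x payout.
print(profit_stats(0.40, 0.50, trigger_prob=0.001, payout_multiplier=1000))
# ≈ (0.10, 16.1): same expected profit, but the spread is ~32x larger,
# and collateral to cover the scaled-up downside is locked either way.
```

So in this toy setup a bare 1000x only restores the mean; whatever extra a trader demands is compensation for the variance and the locked-up collateral, which is why "considerably more than 1000x" seems plausible.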
(Re the "missed the point" reaction, I claim that it's not so much that I missed the point as that I wasn't aiming for the point. But I recognize that reactions aren't able to draw distinctions that finely.)
and neighboring ones, like “two key insights” or whatever
I... kinda feel like there's been one key insight since you were in the community? Specifically I'm thinking of transformers, or whatever it is that got us from pre-GPT era to GPT era.
Depending on what counts as "key", of course. My impression is there have been significant algorithmic improvements since then, but not on the same scale. To be fair, it sounds like Random Developer has a lower threshold than I took the phrase to mean.
But I do think someone guessing "two key insights away from AGI" in say 2010, and now guessing "one key insight away from AGI", might just have been right then and be right now?
(I'm aware that you're not saying they're not, but it seemed worth noting.)
Suppose in 2025, the median prediction is that it'll happen in 2027. Suppose in 2028, the median prediction is that it'll happen in 2030.
Will that be enough empirical evidence for you to conclude that the crowd is repeatedly predicting short timelines which never materialize?
Anecdote: my recollection is that by 2022, Ethereum had been planning to switch to proof of stake for years, and the project had been repeatedly delayed. In June 2022, my brother bet me that it wouldn't happen for at least another two years. It actually happened in September 2022.
Another thing I'm not sure this gives us, which we might want: P(X | do(some difficult action)).
E.g. P(AI safety movement grows | do(I write the most highly rated Narnia fanfic)).
We could separate it into an attempt we can commit to plus conditioning on success, but I'm pretty sure that's not equivalent (e.g. maybe it's more likely to grow in worlds where writing the fic is hard).
Not sure how useful it is to have this - maybe we more often prefer to elicit P(X | do(attempt some difficult action)).
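To see the non-equivalence concretely, here's a toy simulation (a model I've made up for illustration; in it the fic itself has no direct causal effect on growth, so any gap between the two numbers is pure selection effect):

```python
import random

# Toy model: the only world variable is whether the fic turns out to be
# hard to write. Success is less likely in hard worlds; growth of the
# movement is (by assumption) more likely in hard worlds.

random.seed(0)
N = 1_000_000

succeeded = grows_given_success = grows_overall = 0

for _ in range(N):
    hard = random.random() < 0.5                           # is the fic hard to write?
    grows = random.random() < (0.60 if hard else 0.30)     # growth likelier in hard worlds
    success = random.random() < (0.05 if hard else 0.50)   # success less likely in hard worlds

    grows_overall += grows       # under do(success), every world still counts
    if success:
        succeeded += 1
        grows_given_success += grows

print("P(grows | do(attempt), success) ≈", grows_given_success / succeeded)  # ≈ 0.33
print("P(grows | do(success))          ≈", grows_overall / N)                # ≈ 0.45
```

Conditioning on success after committing to the attempt re-weights toward worlds where the fic was easy to write, which is exactly the worry in the parenthetical above.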