I work at the Alignment Research Center (ARC). I write a blog on stuff I'm interested in (such as math, philosophy, puzzles, statistics, and elections): https://ericneyman.wordpress.com/
Oh, I don't think it was at all morally bad for Polymarket to make this market -- just not strategic, from the standpoint of having people take them seriously.
Top Manifold user Semiotic Rivalry said on Twitter that he knows the top Yes holders, that they are very smart, and that the Time Value of Money hypothesis is part of the story, but not the whole story. The other part has to do with how Polymarket structures rewards for traders who provide liquidity.
Yeah, I think the time value of Polymarket cash doesn't track the time value of money in the global economy especially closely:
If Polymarket cash were completely fungible with regular cash, you'd expect the Jesus market to reflect the overall interest rate of the economy. In practice, though, getting money into Polymarket is kind of annoying (you need crypto) and illegal for Americans. Plus, it takes a few days, and trade opportunities often evaporate in a matter of minutes or hours! And that's not to mention the regulatory uncertainty: maybe the US government will freeze Polymarket's assets and traders won't be able to get their money out.
And so it's not unreasonable to have opinions on the future time value of Polymarket cash that differ substantially from your opinions on the future time value of money.
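To make the implied rate concrete, here's a rough sketch of the calculation (the 3% YES price and six-month horizon below are made-up numbers for illustration, not actual market data):

```python
# Hypothetical sketch: the implied annualized return from buying NO shares
# in a market that is essentially certain to resolve NO (like the Jesus market).
# All numbers are made up for illustration.

def implied_annual_rate(yes_price: float, years_to_resolution: float) -> float:
    """A NO share costs (1 - yes_price) and pays out $1 at resolution.
    Solve (1 - yes_price) * (1 + r)**years = 1 for the annualized rate r."""
    no_price = 1.0 - yes_price
    return (1.0 / no_price) ** (1.0 / years_to_resolution) - 1.0

# E.g., if YES trades at 3% with half a year until resolution, locking up
# cash in NO shares earns roughly a 6.3% annualized return -- an implied
# interest rate on Polymarket cash specifically, not on dollars generally.
print(f"{implied_annual_rate(0.03, 0.5):.1%}")
```

If that implied rate sits well above prevailing interest rates, it's telling you something about the frictions above, not about Jesus.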
Yeah, honestly I have no idea why Polymarket created this question.
Do you think that these drugs significantly help with alcoholism (as one might posit if the drugs help significantly with willpower)? If so, I'm curious what you make of this Dynomight post arguing that so far the results don't look promising.
I think that large portions of the AI safety community act this way. This includes most people working on scalable alignment, interp, and deception.
Are you sure? For example, I work on technical AI safety because it's my comparative advantage, but agree at a high level with your view of the AI safety problem, and almost all of my donations are directed at making AI governance go well. My (not very confident) impression is that most of the people working on technical AI safety (at least in Berkeley/SF) are in a similar place.
We are interested in natural distributions over reversible circuits (see e.g. footnote 3), where we believe that circuits that satisfy P are exceptionally rare (probably exponentially rare).
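For intuition only (and emphatically not our actual distribution or property), here's a toy sketch of what sampling reversible circuits and estimating the rarity of a property could look like; the Toffoli gate set and the stand-in property `satisfies_P` below are hypothetical choices of mine:

```python
# Toy illustration: sample reversible circuits as random sequences of
# Toffoli gates (universal for reversible computation) and empirically
# estimate how rare a property is. The property here is a stand-in.
import random

def random_circuit(n_bits: int, n_gates: int):
    """A circuit is a list of Toffoli gates (c1, c2, t): flip bit t iff
    control bits c1 and c2 are both 1. Each gate permutes {0,1}^n_bits."""
    return [tuple(random.sample(range(n_bits), 3)) for _ in range(n_gates)]

def run(circuit, x: int) -> int:
    for c1, c2, t in circuit:
        if (x >> c1) & 1 and (x >> c2) & 1:
            x ^= 1 << t
    return x

def satisfies_P(circuit, n_bits: int) -> bool:
    """Toy stand-in for P: the circuit maps the all-ones input to itself."""
    all_ones = (1 << n_bits) - 1
    return run(circuit, all_ones) == all_ones

n_bits, n_gates, trials = 8, 100, 10_000
hits = sum(satisfies_P(random_circuit(n_bits, n_gates), n_bits)
           for _ in range(trials))
print(f"fraction satisfying P: {hits / trials:.4f}")  # small for rare properties
```

For a property that's exponentially rare, naive sampling like this finds essentially nothing, which is part of why you need arguments rather than brute-force search.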
Probably don't update on this too much, but when I hear "Berkeley Genomics Project", it sounds to me like a project that's affiliated with UC Berkeley (which it seems like you guys are not). Might be worth keeping in mind, in that some people might be misled by the name.
Ohh I see. Do you have a suggested rephrasing?