I would like to note that this dataset is not as hard as it might look. Humans did not perform especially well because of a strict time limit; I don't remember exactly, but it was something like 1 hour for 25 tasks (and IIRC the medalist only made arithmetic errors). I am pretty sure any IMO gold medalist would typically score 100% given (say) 3 hours.
Nevertheless, it's very impressive, and AIMO results are even more impressive in my opinion.
Thanks, I think I understand your concern well now.
I am generally positive about the potential of prediction markets if we somehow resolve the legal problems (which seems unrealistic in the short term but realistic in the medium term).
Here is my perspective on "why should a normie who is somewhat risk-averse, doesn't enjoy wagering for its own sake, and doesn't care about the information externalities engage with prediction markets?"
First, let me try to tackle the question at face value:
Second, I am not sure it has to be a thing for the masses. Normies usually don't have much valuable information, so why would we want them to participate? Of course, a market will attract professionals who will correct mispricings and make money, but ordinary people losing money is a negative externality which can even outweigh the positive ones.
I consider myself at least a semi-professional market participant. I have bet on Manifold and used Metaculus a lot for a few years. I used Polymarket before but don't anymore, and have resorted to funny-money markets despite their problems (and the fact that, of course, they can't make me money).
Why I am not using Polymarket anymore:
I do agree with your point: "internalize the positive information externalities generated by them" is definitely something prediction markets should aspire to, and an important (and interesting!) problem.
However, I don't believe it's essential for "making prediction markets sustainably large", unless we have very different understandings of "sustainably large". I am confident it would be possible to capture 1% of the global gambling market, which would mean billions in revenue and a lot of utility. That even seems like a modest goal, given that prediction markets are a serious instrument. But unfortunately, they are "basically regulated out of existence" :(
Sidenote on funny money market problems:
Metaculus's problem is that it's not a market at all. Perhaps that's the correct decision, but it makes the platform boring, less competitive, and less accurate (there are many caveats here; making Metaculus a market right now would probably make it less accurate, but from the highest-level perspective markets are the better mechanism).
Manifold's problem is that serious markets draw serious people and unserious markets draw unserious people. As a result, serious markets are priced significantly more accurately, which disincentivises competitive users from participating in them. That rather defeats the whole point. Also, and perhaps even more importantly, users are not engaged enough (because they don't have real money at stake), so winning at Manifold is mostly information arbitrage, which is tedious and unfulfilling.
Good to know :)
I do agree that subsidies run into a tragedy-of-the-commons scenario. So although subsidies are beneficial, they are not sufficient.
But do you find my solution to be satisfactory?
I thought about it a lot; I even seriously considered launching my own prediction market and wrote some code for it. I strongly believe that simply allowing the use of other assets solves most of the practical problems, so I would be happy to hear any concerns or to clarify my point further.
Or another, perhaps easier, solution (I updated my original answer): just allow the market company/protocol to invest the money that is "locked" until resolution in some profit-generating strategy and share the profit with users. Of course, it should be diversified, both in terms of the investment portfolio and across individual markets (users get the same annual rate of return no matter what particular thing they bet on). It has some advantages and disadvantages, but I think it's a more clear-cut solution.
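To make the "same annual rate for everyone" idea concrete, here is a toy sketch (all numbers and names are made up, not any platform's actual implementation): the operator invests the pool of locked funds, keeps a cut, and credits every user the same rate of return on their locked stake, regardless of which market their money sits in.

```python
# Hypothetical pooled-yield settlement for locked prediction-market funds.
def settle(stakes, pool_return_rate, operator_cut=0.2, years=1.0):
    """stakes: {user: locked_amount}. Returns the yield credited per user."""
    total_locked = sum(stakes.values())
    gross_yield = total_locked * pool_return_rate * years
    shared = gross_yield * (1 - operator_cut)  # operator keeps a cut
    # Every user gets the same rate of return on their locked stake,
    # no matter which individual market they bet on.
    rate = shared / total_locked
    return {user: amount * rate for user, amount in stakes.items()}

credits = settle({"alice": 1000.0, "bob": 250.0}, pool_return_rate=0.04)
# Alice and Bob both earn 4% * 80% = 3.2% on their locked money.
```

The point of crediting a uniform rate (rather than per-market returns) is exactly the diversification mentioned above: no user's yield depends on which particular market they happened to bet in.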
Isn't this just changing the denominator without changing the zero- or negative-sum nature?
I feel like you are mixing two problems here: an ethical problem and a practical problem. UPD: on second thought, maybe you just meant the second problem, but I still think my response is clearer if it considers them separately.
The ethical problem is that prediction markets look like they don't generate income, and thus aren't useful, shouldn't be endorsed, and don't differ much from gambling.
While it's true that they don't generate income and are zero-sum games in a strictly monetary sense, they do generate positive externalities. For example, there could be a prediction market about the increase of <insert a metric here> after implementing some policy. The market would let us estimate the policy's effect and make better decisions. Therefore, the market would be positive-sum because of the "better judgement" externality.
The practical problem is that the zero-sum monetary nature of prediction markets disincentivises participation (especially in year+ long markets) because, on average, it's more profitable to invest in something else (e.g. the S&P 500). It can be solved by allowing bets in other assets: people would bet their S&P 500 shares and, on average, keep the same expected value, so the disincentive disappears.
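A toy calculation (made-up numbers, a fair 50/50 bet for simplicity) shows the opportunity-cost argument: with a cash stake locked for a year, the bettor forgoes the index return, while a share-denominated stake keeps appreciating, so the expected value matches simply holding.

```python
# Assumed annual index return; purely illustrative.
index_return = 0.08
stake = 100.0

# Cash-denominated fair bet: expected payout is just the stake (zero-sum),
# while investing the same money would have grown it.
ev_cash_bet = 0.5 * (2 * stake) + 0.5 * 0.0   # expected 100.0
ev_invest = stake * (1 + index_return)        # expected 108.0

# Share-denominated fair bet: the stake appreciates while locked, so the
# expected value equals holding the shares -- the disincentive disappears.
ev_share_bet = (0.5 * (2 * stake) + 0.5 * 0.0) * (1 + index_return)
```

In expectation the share-denominated bettor does no worse than an index investor, which is the whole point of allowing bets in appreciating assets.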
Also, there are many cases where positive externalities can be beneficial for some particular entity. For example, an investment company may want to know about the risk of a war in a particular country to decide if they want to invest in the country or not. In such cases, the company can provide rewards for market participants and make it a positive-sum game for them even from the monetary perspective.
This approach is beneficial and is used in practice; however, it is not always applicable, and it can also be combined with other approaches.
Additionally, I would like to note that there is no difference between ETH and "giving a loan to a business" from a mechanism-design perspective: you could tokenize your loan (and this is not crypto-specific; you could do it in traditional finance as well, I am just not sure what the traditional-finance word for it is) and use the tokenized loan to bet on the prediction market.
but once all the markets resolve, the total wealth would still be $1M, right
Yes, the total amount will still be the same. However, your money will not be locked for the duration of the market, so you will be able to use it for something else, be it buying a nice home or giving a loan to a real company.
Of course, not all your money will be unlocked, and probably not immediately, but that doesn't change much. Even if only 1% were unlocked, and only under certain conditions, it would still be an improvement.
Also, I encourage you to look at it from another perspective:
What problem do we have? Users don't want to use prediction markets.
Surely they would be more interested if they had free loans (of course, the loans would not actually be free, but they could be much cheaper than ordinary uncollateralized loans).
Meta-comment: it's very common in finance to put money through multiple stages. Instead of just buying stock, you could buy stock, then use it as collateral to get a loan, then buy a house on this loan, rent it to somebody, sell the rent contract and use the proceeds to short the original stock to get into a delta-neutral position. Risks multiply after each stage, so it should be done carefully and responsibly. Sometimes the house of cards crumbles, but it's not a bad strategy per se.
Why does it have to be "safe enough"? If all market participants agree to bet using the same asset, it can bear any degree of risk.
I think I should have said that a good prediction market allows users to choose which asset a particular "pair" uses. That causes a liquidity split, which is also a problem, but a manageable one, and in my opinion it would be much closer to an imaginary perfect solution than "bet only USD".
I am not sure I understand your second sentence, but my guess is that this problem also goes away if each market "pair" uses a single (but customizable) asset. If I got it wrong, could you please clarify?
In a good prediction market design, users would not bet USD but instead something which appreciates over time or generates income (e.g. ETH, gold, an S&P 500 ETF, Treasury notes, or liquid and safe USD-backed positions in some DeFi protocol).
Another approach would be to use the funds held in the market to invest in something profit-generating and distribute part of the income to users. This is the same model that non-algorithmic stablecoins (USDT, USDC) use.
So it's a problem, but definitely a solvable one, even easily solvable. The major problem is that prediction markets are basically illegal in the US (and probably some other countries as well).
Also, Manifold solves it in a different way: positions can be used as collateral for loans, so you can free your liquidity from long-running markets and use it to, e.g., take leverage. The loans are automatically repaid when you sell your positions. This is easy for Manifold because it doesn't use real money, but the same concept can be implemented in "real" markets, although it would be more challenging (there will be occasional losses for the provider due to bad debt, but that's the same as with any other kind of credit, and it can be managed).
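The loan-against-position mechanic above can be sketched in a few lines. This is a minimal illustrative model with invented numbers and a hypothetical loan-to-value cap, not Manifold's actual implementation:

```python
# Hypothetical position-backed loan: a stake locked in a long-running
# market collateralizes a loan, which is repaid automatically on sale.
class Position:
    def __init__(self, cost):
        self.cost = cost   # amount locked in the market
        self.loan = 0.0    # outstanding loan against this position

    def borrow(self, ltv=0.5):
        """Free up liquidity now, capped at a loan-to-value ratio."""
        available = max(self.cost * ltv - self.loan, 0.0)
        self.loan += available
        return available

    def sell(self, proceeds):
        """The loan is repaid automatically out of the sale proceeds."""
        repay = min(self.loan, proceeds)
        self.loan -= repay
        return proceeds - repay  # net cash to the user

pos = Position(cost=100.0)
cash_now = pos.borrow()         # 50.0 freed immediately
net = pos.sell(proceeds=120.0)  # 70.0 after the 50.0 loan is repaid
```

Note that if the position is sold for less than the outstanding loan, `sell` leaves residual debt; this is exactly the bad-debt risk a real-money provider would have to manage.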
Regarding 9: I believe it's when you are successful enough that your AGI doesn't kill you instantly, but it can still kill you in the process of using it. It's in the context of a pivotal act, so it assumes you will operate the AGI to do something significant and potentially dangerous.
I am currently job hunting, trying to get a job in AI Safety, but it seems to be quite difficult, especially outside the US, so I am not sure I will manage it.
If I don't land a safety job, one obvious option is to get hired by an AI company and learn more there, in the hope that I will either be able to contribute to safety internally or eventually move to the field as a more experienced engineer.
I am conscious of why pushing capabilities could be bad, so I will try to avoid it, but I am not sure how far that extends. I understand that being a Research Scientist at OpenAI working on GPT-5 is definitely pushing capabilities, but what about doing frontend at OpenAI, or building infrastructure at a strong but not leading (and hopefully a bit more safety-oriented) company such as Cohere? Or working at a hedge fund which invests in AI? Or working at a generative AI company which doesn't build in-house models but generates profit for OpenAI? Or working as an engineer at Google on non-AI stuff?
I do not currently see myself as an independent researcher or an AI safety lab founder, so I will definitely need to find a job. And nowadays too many things seem to touch AI one way or another, so I am curious whether anybody has an idea about how I could evaluate career opportunities.
Or am I taking it too far and the post simply says "Don't do dangerous research"?
The British are, of course, determined to botch this like they are botching everything else, and busy drafting their own different insane AI regulations.
I am far from being an expert here, but I skimmed the current preliminary UK policy and it seems significantly better than the EU's. It even mentions x-risk!
Of course, I wouldn't be surprised if it eventually turns out to be EU-level insane, but I think it's plausible that it will be more reasonable, at least from the mainstream (not alignment-centred) point of view.
I think you might find this paper relevant/interesting: https://aidantr.github.io/files/AI_innovation.pdf
TL;DR: research on LLMs' productivity impact in materials discovery.
Main takeaways: