There are two arguments frequently offered for a free market economy over a centrally planned economy: an argument based around knowledge, sometimes called the socialist calculation problem; and another argument based on incentives. Briefly: the knowledge argument says a central planner cannot gather and process the dispersed, local information that individual actors hold about their own circumstances, while the incentive argument says participants in a planned economy have little reason to exert effort or report honestly, since they don't capture the gains from doing so.
A point I've not seen anyone else make is that the argument from knowledge is really itself an argument from incentives in the following sense: the sensory and computational capabilities of human civilization are naturally distributed among individual humans who have a high degree of agency over their own actions. An efficient planner ought to leverage this whole base of data and compute when making decisions, but this requires giving each individual human the incentive to participate in this distributed computing process.
Human communication bandwidth is tiny (on the order of bytes per second) compared to human computational power (on the order of 1e15 ops per second for the brain). Setting up such a distributed computing scheme therefore requires most decisions to be made locally, and this gives individual participants many opportunities to shirk the duties an economic planner would assign to them: not only through the work-effort channel (where shirking is more obvious in many industries and can be cracked down on using coercion) but also by falsifying the results of their local computations.
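To put rough numbers on this mismatch (both figures here are order-of-magnitude assumptions, not measurements):

```python
# Order-of-magnitude comparison of human compute vs. communication bandwidth.
# Both figures are rough assumptions, not measurements.
brain_ops_per_sec = 1e15     # estimated computational throughput of one brain
speech_bytes_per_sec = 10.0  # human communication bandwidth, order of bytes/sec

ratio = brain_ops_per_sec / speech_bytes_per_sec
print(f"~{ratio:.0e} local ops per byte communicated")  # ~1e+14
```

On these numbers, each byte a participant reports upward summarizes something like 10^14 local operations, which is why the planner cannot audit the computation behind the report and must instead trust (or incentivize) the participant.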
So the knowledge problem for the central planner can also be understood as an incentive problem for the participants in the centrally planned economy. The free market gets around this problem by enabling each person or group to profit from inefficiencies they find in the system, thereby incentivizing them to contribute to the aggregate economic optimization task. Because individual optimizations can be made locally without approval from a central authority, less pressure is put on the economy's scarce communication bandwidth, which can then be reserved for transmitting the most important information. The price mechanism plays a significant role here, as Hayekians would argue, but compressed information about what drives changes in prices can be just as important.
Recently I saw that Hypermind is offering a prediction market on which threshold BTC will hit first: $40k or $60k? You can find the market on this list.
I find this funny because for this kind of question it's a very good approximation to assume BTC is a martingale, and then the optional stopping theorem gives the answer immediately: if BTC is currently priced at $X with $40k < X < $60k, the stopped price must have expectation X, so p · $40k + (1 − p) · $60k = X, giving a probability p = ($60k − X)/$20k of hitting $40k first. Since BTC itself is going to be priced much more efficiently than this small-volume prediction market, the prediction market adds no additional useful information.
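As a quick sanity check, here's a sketch (in Python; the walk step size, trial count, and seed are arbitrary assumptions of mine, not anything from the market) comparing the closed-form answer against a simulated symmetric random walk, the simplest martingale:

```python
import random

def p_hit_low_first(x, low=40_000.0, high=60_000.0):
    # Optional stopping: the stopped price has expectation x, so
    # p * low + (1 - p) * high = x  =>  p = (high - x) / (high - low).
    return (high - x) / (high - low)

def simulate(x, low=40_000.0, high=60_000.0, step=100.0, trials=2_000, seed=0):
    # Monte Carlo check using a symmetric +/- step random walk (a martingale).
    # Overshoot past the barriers introduces a small bias at this step size.
    rng = random.Random(seed)
    hits_low = 0
    for _ in range(trials):
        price = x
        while low < price < high:
            price += step if rng.random() < 0.5 else -step
        hits_low += price <= low
    return hits_low / trials

x = 41_850.0  # spot price at the time of writing
print(p_hit_low_first(x))  # 0.9075, i.e. a 9.25% chance of hitting $60k first
print(simulate(x))         # close to 0.9075, up to Monte Carlo noise
```

Note the closed form needs no assumption about the walk's step distribution beyond the martingale property, which is exactly why the prediction market is redundant given the spot price.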
The market currently seems to exhibit some longshot bias: as of this writing BTC is trading at $41.85k, which implies a 9.25% chance of hitting $60k first, but the prediction market prices this at 12.4%. Anyone have any ideas on the reasons behind this?