@Dagon perhaps I should have placed the emphasis on "transfer". The key thing is that we are able to reliably transfer ownership in exchange for remuneration, and that the resource on which our goals are contingent at least needs to be excludable. If we cannot prevent arbitrary counter-parties from consuming the resource in question without paying for it, then we can't have a market for it.
@johnswentworth OK, but we can achieve Pareto-optimal allocations using central planning, and one wouldn't normally call that a market?
"And that’s the core concept of a market: a bunch of subsystems at pareto optimality with respect to a bunch of goals."
The other key property is that the subsystems are able to reliably and voluntarily exchange the resources that relate to their goals. This is not always the case, especially in biological settings, because there is not always a way to enforce contracts: e.g. there needs to be a mechanism to prevent counter-parties from reneging on deals.
The anonymous referees for our paper Economic Drivers of Biological Complexity came up with ...
What is the epistemic status of this claim? E.g. is it based on well-established evidence beyond hearsay and personal opinion? How does it relate to other well-established cognitive biases such as the sunk-cost fallacy? Alternatively, are you conjecturing that there is a new kind of cognitive bias (one that psychologists have not yet hypothesized) that causes people to persist with failing projects when it is irrational to do so? If so, how could it be experimentally tested?
If somebody is finding it difficult to move on from a failed project, I would tend to suggest that they "be mindful of the sunk-cost fallacy" rather than that they "stare into the abyss".
https://www.lesswrong.com/tag/sunk-cost-fallacy
The main problems are the number of contracts and the relationship-management problem. Once upon a time, drawing up and enforcing the required number of contracts would have been prohibitively expensive in terms of fees for lawyers. In the modern era, Web 3.0 promised smart contracts to solve this kind of problem. But smart contracts don't solve the problem of incomplete contracts (https://en.m.wikipedia.org/wiki/Incomplete_contracts), and this in itself can be seen as a transaction cost in the form of a risk premium, and so we are stuck with companies. In ...
The point is that all the people making cars in the company could, in principle, do the same job as self-employed freelance contractors rather than as employees. Instead of a company, you have lots of contracts between, e.g., assembly-line workers, salespeople, and end customers, without any companies. The same number of cars could in theory be built by the same number of people in each case. The physical scenario would be identical in each case. The machinery would be identical in each case. But in the freelancer case you still have lots of people building cars with no invisible company to coordinate the activity; instead you are relying on the market.
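To make the coordination burden concrete, here is a back-of-the-envelope sketch (a toy calculation of my own, not anything from the literature): with n workers, fully pairwise freelance contracting requires on the order of n² contracts, while a firm acting as a single contractual hub needs only n.

```python
# Toy comparison: pairwise freelance contracting vs. a firm acting as a
# single contractual hub. Purely illustrative.

def pairwise_contracts(n: int) -> int:
    """Every freelancer contracts with every other freelancer: n choose 2."""
    return n * (n - 1) // 2

def firm_contracts(n: int) -> int:
    """Each worker signs one employment contract with the firm."""
    return n

for n in (10, 100, 1000):
    print(n, pairwise_contracts(n), firm_contracts(n))
# 10 45 10
# 100 4950 100
# 1000 499500 1000
```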
Perhaps 'The Theory of the Firm'? The very existence of large companies is a puzzle if you are a naive believer in the power of free markets, because if the market is efficient then individuals can simply contract with other individuals through the market to achieve their desired inputs and outputs, and there is no economic advantage to amassing individuals into higher-level entities called companies. The reason this doesn't work is transaction costs. An example transaction cost could be the time invested in finding partner...
This problem has been studied extensively by economists within the field of organizational economics, and is called the principal-agent problem (Jensen and Meckling, 1976). In a principal-agent problem, a principal (e.g. a firm) hires an agent to perform some task. Both the principal and the agent are assumed to be rational expected-utility maximisers, but the utility function of the agent and that of the principal are not necessarily aligned, and there is an asymmetry in the information available to each party. This situation can lead to a...
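To make the misalignment concrete, here is a minimal numerical sketch (all wages, effort costs, and output values are invented for illustration): with a flat wage and unobservable effort, the agent's best response is to shirk, even when higher effort would maximise joint surplus.

```python
# Toy principal-agent model; all numbers are invented for illustration.
# The agent picks an effort level the principal cannot observe; the
# principal pays a flat wage regardless of output.

EFFORT_COST = {0.0: 0.0, 1.0: 1.0, 2.0: 4.0}   # convex cost of effort
WAGE = 3.0                                      # flat wage, not tied to output
VALUE_PER_EFFORT = 4.0                          # value of output per unit effort

def agent_utility(effort: float) -> float:
    return WAGE - EFFORT_COST[effort]

def principal_utility(effort: float) -> float:
    return VALUE_PER_EFFORT * effort - WAGE

# With a flat wage the agent's best response is to shirk entirely...
best_for_agent = max(EFFORT_COST, key=agent_utility)
print(best_for_agent, principal_utility(best_for_agent))    # 0.0 -3.0

# ...even though effort 2.0 maximises joint surplus (output minus effort cost):
joint = {e: VALUE_PER_EFFORT * e - EFFORT_COST[e] for e in EFFORT_COST}
print(max(joint, key=joint.get))                             # 2.0
```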
The biggest problem with the argument is that, given our current knowledge about the specific details of extraterrestrial civilizations, the term 'aliens' in P[aliens] does not fulfill the hard-to-vary criterion of a good explanation.
Skeptic: "If it's aliens, why haven't they been trying to contact us?"
Post-hoc variation: "Because of the Prime Directive"
Skeptic: "If it's a physical vehicle, why does it not obey the laws of physics?"
Post-hoc variation: "Because the aliens have discovered new physics which we don't know about".
etc., etc.
Any unexplained...
From David Chapman's "Better without AI", section "Fear AI Power":
"The AI risks literature generally takes for granted that superintelligence will produce superpowers, but which powers and how this would work is rarely examined, and never in detail. One explanation given is that we are more intelligent than chimpanzees, and that is why we are more powerful, in ways chimpanzees cannot begin to imagine. Then, the reasoning goes, something more intelligent than us would be unimaginably more powerful again. But for hundreds of thousands of years humans were no...
I have now submitted the PR which for future reference can be found here: https://github.com/openai/evals/pull/1073
When presenting claims that the cognitively superior agent wins, the AI safety community often makes analogies with two-player zero-sum games such as Chess and Go, where the smartest and most ruthless players prevail. However, most real-world interactions are best modeled by repeated non-zero-sum games.
In an ecological context, Maynard Smith and Price introduced the Hawk-Dove game to try to explain the fact that, in nature, many animals facing contests over scarce resources such as mates or food engage in only limited conflict rather than wiping out ...
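To make the contrast with zero-sum games concrete, here is a minimal sketch of the Hawk-Dove payoffs and its mixed evolutionarily stable strategy, using the standard textbook parametrisation (the particular values of V and C below are illustrative):

```python
# Hawk-Dove game (Maynard Smith & Price): V = value of the resource,
# C = cost of an escalated fight. Payoffs are to the row strategy.

V, C = 2.0, 6.0   # illustrative values with C > V, so all-Hawk is not an ESS

payoff = {
    ("H", "H"): (V - C) / 2,   # both escalate: expected share minus fight cost
    ("H", "D"): V,             # hawk takes the resource from a retreating dove
    ("D", "H"): 0.0,           # dove retreats, gets nothing
    ("D", "D"): V / 2,         # doves share (or settle by display)
}

# For C > V the unique ESS is mixed: play Hawk with probability V/C.
p_hawk = V / C
print(p_hawk)   # 0.333...

# Sanity check: at the ESS, Hawk and Dove earn the same expected payoff.
ev_hawk = p_hawk * payoff[("H", "H")] + (1 - p_hawk) * payoff[("H", "D")]
ev_dove = p_hawk * payoff[("D", "H")] + (1 - p_hawk) * payoff[("D", "D")]
print(ev_hawk, ev_dove)   # both 0.666...
```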
I was just thinking the same. Below is my attempt using chain-of-thought and multiple simulacra. Not sure it's much improved, but note that all the ideas were generated by GPT, not by me, and the template is in principle reusable.
--
You are Liu Cixin. You are writing a novella which starts "a group of scientists has discovered that Troodon dinosaurs were intelligent species who have created a technologically advanced civilization, suddenly destroyed. The year-long path to the scientific discovery starts with the group stumbling upon a strange ou...
GPT-4 is also capable of writing good literary criticism. Below is a GPT-generated review.
--
The novella, tentatively titled "Echoes in the Stone", audaciously attempts to delve into a preposterous hypothesis - that an intelligent dinosaur civilization might have existed before humankind even set foot on this Earth. It brazenly ventures into the realm of speculative fiction, presenting a tale that bristles with palaeontological intrigue and daring conjectures.
In this outrageous narrative, Dr. Ada Worthington, a stoic palaeontologist, and her me...
The idea is reminiscent of quasi-species models: https://en.wikipedia.org/wiki/Quasispecies_model
These became topical in virology during the SARS-CoV-2 pandemic, with some researchers hypothesizing that SARS-CoV-2 variants were part of a larger quasispecies, but I've no idea what the eventual consensus was, if any. Full disclaimer: I am neither a virologist nor a biologist, so consider the epistemic status of this comment as pure hand-waving.
Bader W, Delerce J, Aherfi S, La Scola B, Colson P. Quasispecies Analysis of SARS-CoV-2 of ...
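For readers unfamiliar with the model, here is a minimal discrete-time sketch of the replicator-mutator dynamics behind the quasispecies idea (the fitness values and mutation kernel below are made up for illustration):

```python
import numpy as np

# Minimal discrete-time quasispecies (replicator-mutator) sketch.
# x[i]: frequency of genotype i; f[i]: its fitness; Q[j, i]: probability
# that replication of genotype j yields genotype i. All numbers illustrative.

f = np.array([1.0, 1.5, 2.0])          # genotype fitnesses
Q = np.array([[0.90, 0.05, 0.05],      # rows sum to 1: mutation kernel
              [0.05, 0.90, 0.05],
              [0.05, 0.05, 0.90]])
x = np.array([1.0, 0.0, 0.0])          # start as a pure population

for _ in range(200):
    x = (f * x) @ Q                    # replicate, then mutate
    x /= x.sum()                       # normalise (divides out mean fitness)

print(np.round(x, 3))  # stationary 'cloud' of mutants, not a single genotype
```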
@Richard_Ngo I notice this has been tagged as "Internal Alignment (Human)", but not "AI". Do you see trust-building in social dilemmas as a human-specific alignment technique, or do you think it might also have applications to AI safety? The reason I ask is that I am currently researching how large language models behave in social dilemmas and other non-zero-sum games. We started with the repeated Prisoner's Dilemma, and we are also investigating how LLM-instantiated simulacra behave in the ultimatum game, public goods, donation-g...
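For concreteness, the scoring in this kind of experiment boils down to standard repeated-game payoff accounting. Here is a minimal sketch of such a harness (the two hand-coded policies are stand-ins for LLM-generated moves; the payoff values are the conventional T > R > P > S ones):

```python
# Minimal repeated Prisoner's Dilemma harness; the policies below are
# stand-ins for moves produced by an LLM.

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}   # R=3, S=0, T=5, P=1

def tit_for_tat(my_hist, their_hist):
    return "C" if not their_hist else their_hist[-1]

def always_defect(my_hist, their_hist):
    return "D"

def play(p1, p2, rounds=10):
    h1, h2, s1, s2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = p1(h1, h2), p2(h2, h1)
        r1, r2 = PAYOFF[(m1, m2)]
        h1.append(m1); h2.append(m2); s1 += r1; s2 += r2
    return s1, s2

print(play(tit_for_tat, always_defect))   # (9, 14): TFT is exploited once,
                                          # then mutual defection sets in
```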
Yes, that is me (sorry, I should have put a disclaimer). Feel free to get in touch if you want to discuss 1-1. Thanks for the pointer re mutability-trading; I will take a look, but full disclaimer: I am not a biologist by training.
Further to my original comment, this idea has also been discussed in non-human animals in the context of biological markets (Noë & Hammerstein 1995). In nature, many forms of cooperation can be described in terms of trade: e.g. primate allo-grooming effort can be used as a medium of exchange to obtain not just reciprocal grooming but also other goods and services (Barrett et al. 1999).
In artificial markets, counter-party risk can be mitigated through institutions which enforce contracts, but in biological markets this is not pos...
An idea along these lines was first proposed by Roberts and Sherratt in 1998, and since then there have been numerous studies investigating the idea empirically in both human and non-human animals (cf. Roberts & Renwick 2003).
Roberts, G., Sherratt, T. Development of cooperative relationships through increasing investment. Nature 394, 175–179 (1998). https://doi.org/10.1038/28160
Roberts, G., & Renwick, J. S. (2003). The development of cooperative relationships: an experiment. Proceedings of the Royal Society, 270, 2279–2283. http://www.pubmedcentral.n...
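For intuition, here is a minimal sketch of the increasing-investment ("raise the stakes") dynamic that Roberts and Sherratt analyse; the escalation rule and the numbers are my own simplification, not their model:

```python
# Two 'raise the stakes' players: each starts with a token investment and
# escalates only while the partner keeps pace. Numbers are illustrative.

def next_investment(my_last: float, partner_last: float) -> float:
    if partner_last >= my_last:
        return my_last + 1.0      # reciprocated: raise the stakes
    return partner_last           # short-changed: drop to the partner's level

a, b = 1.0, 1.0                   # round 1: both risk only a token amount
history = [(a, b)]
for _ in range(4):
    a, b = next_investment(a, b), next_investment(b, a)
    history.append((a, b))

print(history)   # [(1,1), (2,2), (3,3), (4,4), (5,5)]: investment escalates
```

The point of the rule is that a defector can only ever gain the partner's current stake, which stays small until trust has been built through mutual escalation.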
This hypothesis is equivalent to stating that if the Language of Thought Hypothesis is true, and if natural language is very close to the LoT, then by encoding a lossy compression of natural language you are also encoding a lossy compression of the language of thought, and have therefore obtained an approximation of thought itself. As such, the argument hinges on the Language of Thought Hypothesis, which is still an open question in cognitive science. Conversely, if it is empirically observed that LLMs are indeed able to re...
Yes: if you take a particular side in the socialist calculation debate, then a centrally-planned economy is isomorphic to "a market". And yes, if you ignore the Myerson–Satterthwaite theorem (and other impossibility results), then we can sweep aside the fact that most real-world "market" mechanisms do not yield Pareto-optimal allocations in practice :-)
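For reference, the impossibility result being invoked, stated informally (the paraphrase is mine):

```latex
% Myerson--Satterthwaite (1983), informal statement.
\textbf{Theorem.} Let a seller's cost $c$ and a buyer's valuation $v$ be
private information, drawn from distributions with overlapping supports.
Then no bilateral-trade mechanism is simultaneously
(i) Bayesian incentive-compatible,
(ii) interim individually rational,
(iii) ex-post budget-balanced, and
(iv) ex-post efficient.
```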