This is a special post for quick takes by Tamay.

Here’s a rough description of an idea for a betting procedure that enables people who disagree about long-term questions to make bets, despite not wanting to commit to waiting until the long-term questions are resolved. 

Suppose person A and person B disagree about whether P, but can’t find any clear concrete disagreements related to this question that can be decided soon. Since they want to bet on things that pay out soon (for concreteness say they only want to bet on things that can pay out within 5 years), they don’t end up betting on anything.

What they can do is agree to bet on P anyway, and enter into a contract (or a good-faith agreement) that requires them, after a period of 5 years, to report their true odds about P. The contract would then enable either bettor to unilaterally back out of the bet, at which point the payouts would be distributed according to the difference between the odds they originally agreed to and the average of the odds they currently report. In other words, the bettor who ends up closer to the consensus after 5 years is paid out in proportion to how much closer they were.
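To make this concrete, here is a minimal sketch of one possible settlement rule. It assumes a simple linear cash-settlement at the average of the two reported credences, and that A took the "P is true" side; the exact payout function is left open above, so the linear rule and the function name here are illustrative choices, not part of the proposal:

```python
def settle_early(stake: float, agreed_prob: float,
                 report_a: float, report_b: float) -> float:
    """Net transfer from B to A when the bet is unwound early.

    Assumes A took the 'P is true' side at implied probability
    `agreed_prob`, and that the bet is cash-settled at the average
    of the two simultaneously reported credences. (A linear rule;
    other payout functions would also fit the scheme.)
    """
    consensus = (report_a + report_b) / 2.0
    # If the consensus credence has drifted above the agreed
    # probability, A's side of the bet looks better in hindsight,
    # and A is paid in proportion to the drift.
    return stake * (consensus - agreed_prob)

# A bets $100 at even odds; five years later A reports 0.8 and
# B reports 0.6, so the consensus is 0.7 and B pays A $20.
payout = settle_early(100.0, 0.5, 0.8, 0.6)
```

Note that the rule is zero-sum and symmetric: if the consensus instead drifts below the agreed odds, the same formula pays B.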

To ensure that bettors report their odds about P approximately truthfully after the 5-year horizon, the contract requires A and B to report their odds to a trusted intermediary (who announces these odds simultaneously), and requires each party to accept follow-up bets at (some function of) these reported credences.

Bettors might agree ahead of time on the range of acceptable follow-up bet sizes, though importantly, the expected follow-up bet sizes need to be relatively large (say, a non-trivial fraction of the existing bet) to ensure that bettors have an incentive to report something close to their true beliefs.
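The incentive here can be sketched with a toy expected-value calculation (my own illustration, under the simplifying assumption that a counterparty bets the full follow-up stake on whichever side of your quote they consider mispriced):

```python
def expected_misreport_loss(true_p: float, reported_p: float,
                            follow_up_stake: float) -> float:
    """Expected cost, by your own lights, of misreporting your
    credence when you must accept follow-up bets at your reported odds.

    If you quote `reported_p` while believing `true_p`, a counterparty
    can take whichever side of your quote is mispriced relative to your
    true belief, so your subjective expected loss per unit staked is
    |true_p - reported_p|.
    """
    return follow_up_stake * abs(true_p - reported_p)

# Believing 0.7 but reporting 0.5 costs you, in expectation, $20
# of a $100 follow-up bet; honest reporting costs you nothing.
loss = expected_misreport_loss(0.7, 0.5, 100.0)
```

The larger the follow-up stake relative to the original bet, the more a shaded report costs, which is why the follow-up bets need to be non-trivial in size.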

Follow-up bets could be revisited in the same way after another 5 years, and this would continue until P resolves, or until the bettors settle. However, because bettors are required to take follow-up bets, they also have an incentive to develop accurate beliefs about P, so we might expect disagreements to usually be resolved before P resolves. They furthermore have an incentive to arrive at a consensus if they want to avoid making follow-up bets.

Under this mechanism, bettors know that they can expect to fairly resolve their bets on a short horizon, as each will have an incentive to end the bet according to their consensus view of who was closer to the truth. Hence, bettors would be keen to bet with each other about P if they think that they're directionally right, even when they don't want to wait until P is completely decided.

Anything predicated on "true odds" that are different from "odds actually encoded in wagers" is going to fail.  The whole reason any bet is available is because people's beliefs ("true odds") differ.  And in many (MANY!) cases, each believes the other to be at least somewhat irrational, or at least weighting evidence incorrectly.  Why would we expect such a counterparty to get closer to truth over time, for a proposition that isn't testable inside a reasonable time window?

A much better mechanism is to dive into cruxes and agree on shorter-term outcomes that you have different predictions for, based on your models.  Bet on those.

To ensure that bettors report their odds about P approximately truthfully after the 5-year horizon, the contract requires A and B to report their odds to a trusted intermediary (who announces these odds simultaneously), and requires each party to accept follow-up bets at (some function of) these reported credences.


Are you thinking of requiring each party to accept bets on either side? And including from other parties, or only with each other? Being forced to bet both sides could ensure honesty, assuming they haven't found other bets on the same or highly correlated outcomes they can use for arbitrage.

Are you thinking of requiring each party to accept bets on either side?

Being forced to bet both sides could ensure honesty, assuming they haven't found other bets on the same or highly correlated outcomes they can use for arbitrage.

Yes. Good point.

And including from other parties, or only with each other?

I was thinking that betting would be restricted to the initial two parties (i.e. A and B), but I can imagine an alternative in which it's unrestricted.

Interesting. It reminds me of Glen Weyl's property tax idea.

I'm not convinced this can't be manipulated or at least won't be very misleading, though.

You could imagine one party betting at odds they consider very favourable to them, and the other party betting at odds they consider only slightly favourable, based on their respective beliefs. Then, even if they don't change their credences, one party has more room to move their reported odds towards their own true credences, and so drag the average towards them and take the intermediate payments.

If you can't find better intermediate outcomes, it might be better to use a betting market and allow people to cash out early as odds change. Or, bet on how the odds on a market or Metaculus or whatever will change in a few years (with high enough volume so that it's hard to manipulate).

You could imagine one party betting at odds they consider very favourable to them, and the other party betting at odds they consider only slightly favourable, based on their respective beliefs. Then, even if they don't change their credences, one party has more room to move their reported odds towards their own true credences, and so drag the average towards them and take the intermediate payments.

Sorry, I'm confused. Isn't the 'problem' that the bettor who takes relatively more favourable odds has higher expected returns a problem with betting in general?

Hmm, ya, fair. Still, who pays who in the intermediate steps isn't necessarily very informative about where the average credence is or where it's going.

It is, unless it's clear that one side made a mistake in entering a lopsided bet. I guess the rule-of-thumb is to follow big bets (which tend to be less clearly lopsided) or bets made by two people whose judgment you trust.

I don't see how this follows. How would you know ahead of time that a bet is too lopsided in an adversarial setting with one side or both sides withholding private information, their true credences? And how lopsided is enough? Aren't almost all bets somewhat lopsided?

Since one party will almost surely have more room between the implied credences of the first bet and their own credences, we should expect directional influence in the second bet (or set of bets) whether or not anyone's beliefs changed. And if their credences aren't actually changing, we would still expect payments from one side to the other.


Short version: The claim that AI automation of software engineering will erase NVIDIA's software advantage overlooks the fact that, as markets expand, the rewards for further software improvements grow substantially. While AI may lower the cost of matching existing software capabilities, overall software project costs are likely to keep increasing as the returns to optimization rise. Matching the performance frontier in the future will still be expensive and technically challenging, and access to AI does not necessarily equalize production costs or eliminate NVIDIA's moat.

I often see the argument that, since NVIDIA's advantage is largely software, when AI automates software engineering, NVIDIA will have no moat, and therefore NVIDIA is a bad AI bet. The argument goes something like: AI drives down the cost of software, so the barriers to entry will be much lower. Competitors can "hire" AI to generate the required software, for example by tasking LLMs with porting application-level code into appropriate low-level instructions, which would eliminate NVIDIA's competitive advantage stemming from CUDA.

However, while the cost of matching existing software capabilities will decline, the overall costs of software projects are likely to continue increasing, as is the usual pattern. This is because, with software, the returns to optimization increase with the size of the addressable market. As the market expands, companies have greater incentives to invest intensely because even small improvements in performance or efficiency can yield substantial overall benefits. These improvements impact a large number of users, and the costs are amortized across this extensive user base. 
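As a toy calculation (the numbers are made up for illustration): an optimization whose benefit accrues per user, while its engineering cost is paid once, flips from unprofitable to hugely profitable as the user base grows:

```python
def optimization_payoff(per_user_gain: float, users: int,
                        engineering_cost: float) -> float:
    """Net value of a software optimization whose benefit accrues
    to every user while the engineering cost is paid only once."""
    return per_user_gain * users - engineering_cost

# A $1/user win loses money for a 100k-user product, but is worth
# nearly a billion dollars at 1B users, even when the engineering
# effort costs 100x as much.
small_market = optimization_payoff(1.0, 100_000, 1_000_000)          # negative
large_market = optimization_payoff(1.0, 1_000_000_000, 100_000_000)  # large positive
```

This is why a bigger addressable market justifies ever-larger optimization budgets, even as the cost of merely matching old capabilities falls.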

Consider web browsers and operating systems: while matching 2000s-era capabilities now takes >1000x fewer developer hours using modern frameworks, the investments that Google makes in Chrome and Microsoft in Windows vastly exceed what tech companies spent in the 2000s. Similarly, as AI becomes a larger part of the overall economy, I expect the investments needed for state-of-the-art GPU firmware and libraries to be greater than those today.

When software development is mostly AI-driven, there will be opportunities to optimize software with more spending, such as by spending on AI inference, building better scaffolding, or producing better ways of testing and verifying potential improvements. This just seems to match our understanding of inference scaling for other complex reasoning tasks, such as programming or mathematics.

It’s also unlikely that the relative cost of producing the same software will become much more equal, i.e. that anyone can hire the same "AI" to do the engineering. Just having access to the raw models is often not sufficient for getting state-of-the-art results (good luck matching AlphaProof's IMO performance with the Gemini API).

To be clear, I am personally not too optimistic about NVIDIA's long term future. There are good reasons to expect their moat won't persist:

  • Dethroning NVIDIA is now a trillion-dollar proposition, and their key customers are all trying to produce GPU substitutes
  • Rapid technological progress tends to erode competitive advantages by enabling substitute technologies
  • NVIDIA has had issues adopting new technologies, such as CoWoS-L packaging, and therefore appears less well-positioned to stay ahead of its competition

My claim is narrower: the argument that "when AI can automate software engineering, companies whose moat involves software will be outcompeted" seems incorrect.

There is an insightful literature that documents and tries to explain why large incumbent tech firms fail to invest appropriately in disruptive technologies, even when they played an important role in their invention. I speculatively think this sheds some light on why we see new firms such as OpenAI, rather than incumbents such as Google and Meta, leading the deployment of recent innovations in AI, notably LLMs.

Disruptive technologies—technologies that initially fail to satisfy existing demands but later surpass the dominant technology—are often underinvested in by incumbents, even when these incumbents played a major role in their invention. Henderson and Clark, 1990 discuss examples of this phenomenon, such as Xerox's failure to exploit their technology and transition from larger to smaller copiers:

 Xerox, the pioneer of plain-paper copiers, was confronted in the mid-1970s with competitors offering copiers that were much smaller and more reliable than the traditional product. The new products required little new scientific or engineering knowledge, but despite the fact that Xerox had invented the core technologies and had enormous experience in the industry, it took the company almost eight years of missteps and false starts to introduce a competitive product into the market. In that time Xerox lost half of its market share and suffered serious financial problems

and RCA’s failure to embrace the small transistorized radio during the 1950s:

In the mid-1950s engineers at RCA's corporate research and development center developed a prototype of a portable, transistorized radio receiver. The new product used technology in which RCA was accomplished (transistors, radio circuits, speakers, tuning devices), but RCA saw little reason to pursue such an apparently inferior technology. In contrast, Sony, a small, relatively new company, used the small transistorized radio to gain entry into the US market. Even after Sony's success was apparent, RCA remained a follower in the market as Sony introduced successive models with improved sound quality and FM capability. The irony of the situation was not lost on the R&D engineers: for many years Sony's radios were produced with technology licensed from RCA, yet RCA had great difficulty matching Sony's product in the marketplace

A few explanations of this "Innovator's curse" are given in the literature:

  • Christensen (1997) suggests this is due to, among other things:
    • Incumbents focus on innovations that address existing customer needs rather than serving small markets. Customer bases usually ask for incremental improvements rather than radical innovations.
    • Disruptive products are simpler and cheaper; they generally promise lower margins, not greater profits
    • Incumbents’ most important customers usually don’t want radically new technologies, as they can’t immediately use these
  • Reinganum (1983) shows that under conditions of uncertainty, incumbent monopolists will rationally invest less in innovation than entrants will, for fear of cannibalizing the stream of rents from their existing products
  • Leonard-Barton (1992) suggests that the same competencies that have driven incumbents’ commercial success may produce ‘competency traps’ (ingrained habits, procedures, equipment, or expertise that make change difficult); see also Henderson, 2006
  • Henderson, 1993 highlights that entrants have greater strategic incentives to invest in radical innovation, and incumbents fall prey to inertia and complacency

After skimming a few papers on this, I’m inclined to draw an analogue here for AI: Google produced the Transformer, and labs at Google, Meta, and Microsoft have long been key players in AI research; and yet the creation of explicitly disruptive LLM products that aim to do much more than existing technologies has been led mostly by relative newcomers (such as OpenAI, Anthropic, and Cohere for LLMs, and StabilityAI for generative image models).

The same literature also suggests how to avoid the "innovator's curse", such as by establishing independent sub-organizations focused on disruptive innovations (see Christensen, 1997 and Christensen, 2003), which is clearly what companies like Google have done, as their AI labs have a large degree of independence. And yet this seems not to have been sufficient to establish the dominance of these firms at the frontier of LLMs and the like.

I suspect that from the inside it seems like the company uses various metrics to evaluate its employees, and new inventions usually do not look good from this perspective. Like, when you start your own company, you can accept that during the first year or two you will only eat ramen, if it means that in five or ten years you have a chance to become rich. In someone else's company, this simply means that your KPIs suck, so the project will get cancelled, or a new manager will be assigned who will change the original idea into something that seems good in the short term.

Another reason would be company politics and bureaucracy. Like, you cannot use the best tools for the job, but instead what the rest of the company is using, even if your needs are different... and in the worst case the company standard will be some internally developed tool with lots of bugs and no documentation, which no one can complain about because the person who developed it 5 or 10 years ago is currently too high in the company hierarchy.

(That is basically what you said, the first is the "incremental improvements, immediate use", the second is the "engrained habits and procedures". I guess my point is that from near mode it will appear much less rational than the abstract scientific descriptions.)