This month I lost a bunch of bets.

Back in early 2016 I bet at even odds that self-driving ride sharing would be available in 10 US cities by July 2023. Then I made similar bets a dozen times because everyone disagreed with me.

The first deployment to potentially meet our bar was Phoenix in 2022. I think Waymo is close to offering public rides in SF, and there are a few more cities being tested, but it looks like it will be at least a couple of years before we get 10 cities even if everything goes well.

Waymo’s current coverage of Phoenix (here)

Back in 2016 it looked plausible to me that the technology would be ready in 7 years. People I talked to in tech, in academia, and in the self-driving car industry were very skeptical. After talking with them it felt to me like they were overconfident. So I was happy to bet at even odds as a test of the general principle that 7 years is a long time and people are unjustifiably confident in extrapolating from current limitations.

In April of 2016 I gave a 60% probability to 10 cities. The main point of making the bets was to stake out my position and maximize volume; I was obviously not trying to extract profit, given that I was giving myself very little edge. In mid 2017 I said my probability was 50-60%, and by 2018 I was under 50%.

If 34-year-old Paul were looking at the same evidence that 26-year-old Paul had in 2016, I think I would have given it a 30-40% chance instead of a 60% chance. I had only 10-20 hours of information about the field, and while it’s true that 7 years is a long time, it’s also true that things take longer than you’d think, 10 cities is a lot, and expert consensus really does reflect a lot of information about barriers that aren’t easy to articulate clearly. 30% still would have made me more optimistic than a large majority of people I talked to, and so I still would have lost plenty of bets, but I would have made fewer bets and gotten better odds.

But I think 10% would have been about as unreasonable a prediction as 60%. The technology and regulation are mature enough to make deployment possible, so exactly when we get to 10 cities looks very contingent. If the technology was better then deployment would be significantly faster, and I think we should all have wide error bars about 7 years of tech progress. And the pandemic seems to have been a major setback for ride hailing. I’m not saying I got unlucky on any of these—my default guess is that the world we are in right now is the median—but all of these events are contingent enough that we should have had big error bars.

A Waymo car back in 2014

Lessons

People draw a lot of lessons from our collective experience with self-driving cars:

  • Some people claim that there was wild overoptimism, but this does not match up with my experience. Investors were optimistic enough to make a bet on a speculative technology, but it seems like most experts and people in tech thought the technology was pretty unlikely to be ready by 2023. Almost everyone I talked to thought 50% was too high, and the three people I talked to who actually worked on self-driving cars went further and said it seemed crazy. The evidence I see for attributing wild optimism seems to be valuations (which could be justified even by a modest probability of success), vague headlines (which make no attempt to communicate calibrated predictions), and Elon Musk saying things.
  • Relatedly, people sometimes treat self-driving as if it’s an easy AI problem that should be solved many years before e.g. automated software engineering. But I think we really don’t know. Perceiving and quickly reacting to the world is one of the tasks humans have evolved to be excellent at, and driving could easily be as hard (or harder) than being an engineer or scientist. This isn’t some post hoc rationalization: the claim that being a scientist is clearly hard and perception is probably easy was somewhat common in the mid 20th century but was out of fashion way before 2016 (see: Moravec’s paradox).
  • Some people conclude from this example that reliability is really hard and will bottleneck applications in general. I think this is probably overindexing on a single example. Even for humans, driving has a reputation as a task that is unusually dependent on vigilance and reliability, where most of the minutes are pretty easy and where not messing up in rare exciting circumstances is the most important part of the job. Most jobs aren’t like that! Expecting reliability to be as much of a bottleneck for software engineering as for self-driving seems pretty ungrounded, given how different the job is for humans. Software engineers write tests and think carefully about their code, they don’t need to make a long sequence of snap judgments without being able to check their work. To the extent that exceptional moments matter, it’s more about being able to take opportunities than avoid mistakes.
  • I think one of the most important lessons is that there’s a big gap between “pretty good” and “actually good enough.” This gap is much bigger than you’d guess if you just eyeballed the performance on academic benchmarks and didn’t get pretty deep into the weeds. I think this will apply all throughout ML; I think I underestimated the gap when looking in at self-driving cars from the outside, and made a few similar mistakes in ML prior to working in the field for years. That said, I still think you should have pretty broad uncertainty over exactly how long it will take to close this gap.
  • A final lesson is that we should put more trust in skeptical priors. I think that’s right as far as it goes, and does apply just as well to impactful applications of AI more generally, but I want to emphasize that in absolute terms this is a pretty small update. When you think something is even odds, it’s pretty likely to happen and pretty likely not to happen. And most people had probabilities well below 50% and so they are even less surprised than I am. Over the last 7 years I’ve made quite a lot of predictions about AI, and I think I’ve had a similar rate of misses in both directions. (I also think my overall track record has been quite good, but you shouldn’t believe that.) Overall I’ve learned from the last 7 years to put more stock in certain kinds of skeptical priors, but it hasn’t been a huge effect.

The analogy to transformative AI

Beyond those lessons, I find the analogy to AI interesting. My bottom line is kind of similar in the two cases: I think 34-year-old Paul would have given roughly a 30% chance to self-driving cars in 10 cities by July 2023, and 34-year-old Paul now assigns roughly a 30% chance to transformative AI by July 2033. (By which I mean: systems as economically impactful as low-cost simulations of arbitrary human experts, which I think is enough to end life as we know it one way or the other.)

But my personal situation is almost completely different in the two cases: for self-driving cars I spent 10-20 hours looking into the issue and 1-2 hours trying to make a forecast, whereas for transformative AI I’ve spent thousands of hours thinking about the domain and hundreds on forecasting.

And the views of experts are similarly different. In self-driving cars, people I talked to in the field tended to think that 30% by July 2023 was too high. Whereas the researchers working in AGI who I most respect (and who I think have the best track records over the last 10 years) tend to think that 30% by July 2033 is too low. The views of the broader ML community and public intellectuals (and investors) seem similar in the two cases, but the views of people actually working on the technology are strikingly different.

The update from self-driving cars, and more generally from my short lifetime of seeing things take a surprisingly long time, has tempered my AI timelines. But not enough to get me below 30% of truly crazy stuff within the next 10 years.

Comments

Cruise is also operating publicly (though with a public waitlist) in a few cities: SF, Austin, Phoenix. Recently announced Miami and Nashville, too. I have access.
Edit: also Houston and Dallas. Also probably Atlanta and other locations on their jobs page

You mention eight cities here. Do they count for the bet? 

O O:

Arguably SF, and possibly other cities, don’t count. In SF, Waymo and Cruise require you to get on a relatively exclusive waitlist; I don’t see how that can be considered “publicly available”. Furthermore, Cruise is very limited in SF: it’s only available 10pm-5am in half the city for a lot of users, including me. I can’t comment on Waymo, as it has been months since I signed up for the waitlist.

Regarding Waymo (and Cruise, although I know less there) in San Francisco: at the last CPUC meeting on allowing Waymo to charge for driverless service, the vote was delayed. Last I checked, Waymo operates in more areas and at more times of day than Cruise in SF.
https://abc7news.com/sf-self-driving-cars-robotaxis-waymo-cruise/13491184/

I feel like Paul's right that the only crystal clear 'yes' is Waymo in Phoenix, and the other deployments are more debatable (due to scale and scope restrictions).

Thanks for taking the time to write out these reflections. 

I'm curious about your estimates for self-driving cars over the next 5 years: would you take the same bet at 50:50 odds for a July 2028 date?

Yes. My median is probably 2.5 years to have 10 of the 50 largest US cities where a member of the public can hail a self-driving car (though emphasizing that I don't know anything about the field beyond the public announcements).

Some of these bets had a higher threshold of covering >50% of the commutes within the city, i.e. multiplying the fraction of days the service can run (due to weather) by the fraction of commute endpoints in the service zone. I think Phoenix wouldn't yet count, though a deployment in SF likely will immediately. If you include that requirement then maybe my median is 3.5 years. (My 60% wasn't with that requirement and was intended to count something like the current Phoenix deployment.)
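For concreteness, here is a minimal sketch of the multiplication being described; the weather and coverage fractions below are invented illustrative numbers, not figures from the actual bets.

    # Hypothetical illustration of the stricter bet threshold: a city counts only if
    # (fraction of days the service can operate) x (fraction of commute endpoints
    # inside the service zone) exceeds 0.5. All numbers below are made up.
    def commute_coverage(days_operable: float, endpoints_in_zone: float) -> float:
        """Rough fraction of the city's commutes the service could actually serve."""
        return days_operable * endpoints_in_zone

    # A Phoenix-like deployment: runs nearly every day, but covers a limited area.
    print(commute_coverage(0.98, 0.35) > 0.5)  # False -> wouldn't count
    # A hypothetical near-citywide deployment.
    print(commute_coverage(0.95, 0.80) > 0.5)  # True  -> would count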

(Updated these numbers in the 60 seconds after posting, from (2/2.5) to (2.5/3.5). Take that as an indication of how stable those forecasts are.)

There is a general phenomenon in tech, expressed many times, of people over-estimating the short-term consequences and under-estimating the longer-term ones (e.g., "Amara's law").

I think that often it is possible to see that current technology is on track to achieve X, where X is widely perceived as the main obstacle for the real-world application Y. But once you solve X, you discover that there is a myriad of other "smaller" problems Z_1 , Z_2 , Z_3 that you need to resolve before you can actually deploy it for Y.

And of course, there is always a huge gap between demonstrating you solved X on some clean academic benchmark vs. needing to do so "in the wild". This is a particular issue in self-driving, where errors can be literally deadly, but it arises in many other applications.

I do think that one lesson we can draw from self-driving is that there is a huge gap between full autonomy and "assistance" with human supervision. So, I would expect we would see AI deployed as (increasingly sophisticated) "assistants" way before AI systems are actually able to function as "drop-in" replacements for current human jobs. This is part of the point I was making here.

Do you know of any compendiums of such Z_n's? Would love to read one.

I know of one: the steam engine was "working" and continuously patented and modified for a century (iirc) before someone used it in boats at scale. https://youtu.be/-8lXXg8dWHk

See also my post https://www.lesswrong.com/posts/gHB4fNsRY8kAMA9d7/reflections-on-making-the-atomic-bomb

the Manhattan project was all about taking something that’s known to work in theory and solving all the Z_n’s

One IMO important thing that isn't mentioned here is scaling parameter count. Neural nets can be fairly straightforwardly improved simply by making them bigger. For LLMs and AGI, there's plenty of room to scale up, but for the neural nets that run on cars, there isn't. Tesla's self-driving hardware, for example, has to fit on a single chip and has to consume a small amount of energy (otherwise it'll impact the range of the car.) They cannot just add an OOM of parameters, much less three. 
 

I agree about it having to fit on a single chip, but surely the neural net on-board would only have a relatively negligible impact on range compared to how much the electric motor consumes in motion?

IIRC in one of Tesla's talks (I forget which one) they said that energy consumption of the chip was a constraint because they didn't want it to reduce the range of the car. A quick google seems to confirm this. 100W is the limit they say: FSD Chip - Tesla - WikiChip 

IDK anything about engineering, but napkin math based on googling: the FSD chip consumes 36 watts currently. Over the course of 10 hours that's 0.36 kWh. A Tesla Model 3 battery holds about 55 kWh total and takes about ten hours of driving to use up (assuming you average 30 mph?). So the FSD chip currently uses about two-thirds of one percent of the total range of the vehicle. If they 10x'd it, in addition to adding thousands of dollars of upfront cost due to the chips being bigger / using more chips, there would be a ~6% range reduction. And if they 10x'd it again the car would be crippled. This napkin math could be totally confused tbc.

(This napkin math is making me think Tesla might be making a strategic mistake by not going for just one more OOM. It would reduce the range and add a lot to the cost of the car, but... maybe it would be enough to add an extra 9 or two of reliability... But it's definitely not an obvious call and I can totally see why they wouldn't want to risk it.)

(Maybe the real constraint is cost of the chips. If each chip is currently say $5,000, then 10xing would add $45,000 to the cost of the car...)
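A quick sketch of the same napkin math, treating the 36 W draw, 55 kWh pack, and 10-hour discharge as rough assumptions rather than measured values, and pretending chip power scales linearly with size:

    # Back-of-the-envelope check of the numbers above. All inputs are rough
    # assumptions taken from the comment, not measured values.
    chip_power_w = 36    # assumed current FSD chip draw in watts
    battery_kwh = 55     # assumed Model 3 pack capacity
    drive_hours = 10     # assumed hours to drain the pack at ~30 mph

    def range_fraction(power_w: float) -> float:
        """Fraction of the battery consumed by the chip over one full discharge."""
        return (power_w * drive_hours / 1000) / battery_kwh

    for scale in (1, 10, 100):
        print(f"{scale:>3}x chip: {range_fraction(chip_power_w * scale):.1%} of range")

    # Prints roughly 0.7%, 6.5%, and 65.5% -- matching the "two-thirds of a percent",
    # "~6% range reduction", and "crippled" figures above, under the crude assumption
    # that chip power scales linearly with each extra OOM of parameters.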

Related but tangential: Coning self driving vehicles as a form of urban protest

I think public concerns and protests may have an impact on the self-driving outcomes you're predicting. And since I could not find any indication in your article that you are considering such resistance, I felt it should be at least mentioned in passing.

Whoever downvoted... would you do me the courtesy of expressing what you disagree with?

Did I miss some reference to public protests in the original article? (If so, can you please point me towards what I missed?)

Do you think public protests will have zero effect on self-driving outcomes? (If so, why?)

This is hilarious

My intuition is that you got down voted for the lack of clarity about whether you're responding to me [my raising the potential gap in assessing outcomes for self-driving], or the article I referenced.

For my part, I also think that coning-as-protest is hilarious.

I'm going to give you the benefit of the doubt and assume that was your intention (and not contribute to downvotes myself.) Cheers.

Yes the fact that coning works and people are doing it is what I meant was funny.

But I do wonder whether the protests will keep up and/or scale up. Maybe if enough people protest everywhere all at once, then they can kill autonomous cars altogether. Otherwise, I think a long legal dispute would eventually come out in the car companies' favor. Not that I would know.

Why no mention of the level 4 autonomous robobuggies from Starship? These buggies have been exponentially ramping up for over 10 years now, and they can make various grocery deliveries without human oversight. Autonomous vehicles have arrived and they are navigating our urban landscapes! There have been many millions of uneventful trips to date.

What I find surprising is that some sort of oversized robobuggy has not been brought out that would allow a person to be transported. One could imagine, for example, that patrons of bars who have had too many drinks to drive home could be wheeled about in these buggies. This could already be done quite safely on sidewalks at fairly low speed, and for those who have had a few too many, speed might not be an overly important feature. Considering how many fatalities involve impaired drivers, it surprises me that MADD has not been more vocal in advocating for such a solution.

So, in a sense, there are already widespread autonomous vehicles operating on American streets. Importantly, these vehicles are helping us to reimagine what transport could be. Instead of thinking in terms of rushing from one place to another, people might embrace more of a slow-travel mentality in which, instead of steering their vehicles, they could do more of what they want, like surf the internet, email, chat with GPT, etc. The end of the commute as we know it?

Interesting, thanks for the update -- I thought that company was going nowhere but didn't have data on it and am pleased to learn it is still alive. According to wikipedia,

In October 2021, Starship said that its autonomous delivery robots had completed 2 million deliveries worldwide, with over 100,000 road crossings daily.[23][24] According to the company, it reached 100,000 deliveries in August 2019 and 500,000 deliveries in June 2020.[25]

By January 2022, Starship's autonomous delivery robots had made more than 2.5 million autonomous deliveries, and traveled over 3 million miles globally,[1][26] making an average 10,000 deliveries per day.[1]

And then according to this post from April 2023 they were at 4 million deliveries.

So that's 
Aug 2019: 100K
June 2020: 500K
Oct 2021: 2M
Jan 2022: 2.5M
April 2023: 4M

I think this data is consistent with the "this company is basically a dead end" hypothesis. Seems like they've made about as many deliveries in the last 1.5 years as in the 1.5 years before that.
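A quick sketch of the same milestone data (dates approximated to the first of the month) that computes the average deliveries per day between milestones; if growth were exponential these rates should keep climbing, and instead they roughly plateau:

    from datetime import date

    # Starship delivery milestones quoted above; dates are approximate.
    milestones = [
        (date(2019, 8, 1), 100_000),
        (date(2020, 6, 1), 500_000),
        (date(2021, 10, 1), 2_000_000),
        (date(2022, 1, 1), 2_500_000),
        (date(2023, 4, 1), 4_000_000),
    ]

    # Average deliveries per day between consecutive milestones.
    for (d0, n0), (d1, n1) in zip(milestones, milestones[1:]):
        rate = (n1 - n0) / (d1 - d0).days
        print(f"{d0} -> {d1}: ~{rate:,.0f} deliveries/day")

    # Roughly 1,300 -> 3,100 -> 5,400 -> 3,300 deliveries/day: the pace flattened
    # (and even dipped) rather than continuing to grow exponentially.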

However, I want to believe...


 

Thank you Daniel for your reply.

The latest delivery count is 5M. That is a fairly substantial ramp-up. It means that the doubling over the ~1.5 years from October 2021 to April 2023 is being maintained over the 1.5 years from Jan 2022 through July 2023 (admittedly with a considerable amount of overlapping time).

In addition, it is quite remarkable, as noted above, that they have been level 4 autonomous for years now. This is real-world data that can help us move towards other level 4 applications. Obviously, when you try to have level 4 cars that move at highway speeds and must interact with humans, problems can happen. Yet when you move down to sidewalk speeds on often sparsely traveled pavement, there is a reduction in potential harm.

I am super-excited about the potential of robobuggies! There are a near-endless number of potential applications. The COVID pandemic would have been dramatically different with universal robobuggy technology: a hard lockdown of society would have been possible even while people still needed groceries, and that would have stopped the pandemic quickly. As it was, shopping was probably one of the most important transmission vectors. With a truly hard lockdown the pandemic would have stopped within 2 weeks.

Of course, transport will become much safer and more environmentally friendly with robobuggies, and people will be spared the burdens of moving about space. It is interesting to note that many people are stuck in so-called food deserts, and robobuggies would allow them to escape such an unhealthy existence. Robobuggy transport would also allow school children more educational options, as they would no longer be stuck attending the school closest to their home.

I see robobuggies as a super positive development. Over the next 10 years with continued exponential growth we could see this at global scale.

Ruby:

Curated! I'm a real sucker for retrospectives, especially ones with reflection over long periods of time and with detailed reflection on the thought process. Kudos for this one. I'd be curious to see more elaboration on the points that go behind:

Overall I’ve learned from the last 7 years to put more stock in certain kinds of skeptical priors, but it hasn’t been a huge effect.

I spent 10-20 hours looking into the issue and 1-2 hours trying to make a forecast, whereas for transformative AI I’ve spent thousands of hours thinking about the domain and hundreds on forecasting.


I wonder if the wall-clock time invested actually makes a difference here. If your model is flawed, or if you're suffering from some bias, simply brute-forcing effort won't yield any returns.

I think this is a fallacious assumption, unless rigorously proven otherwise.

May I ask what metric you used, and are using now, to gauge demand for self-driving ride shares? Even with current narrow-AI efficiency, the increase in heat/crime decreases economic resource usage by the mean, and there is an already receding desire for use among stable-income home workers.
Phoenix is a perfect example of a future lack of demand. Laptop-from-home bipeds are the last to venture out for what can be delivered; the general retail population, sans car, takes public transportation.
The availability of the tech needed to facilitate a usable market is present, sure, and it will undoubtedly increase in viability. There just will not be any passengers.

Perhaps the technology could be implemented for the new 'minor injury' class of ambulance?

Technically your bet is null, and could be parlayed double or nothing.

[anonymous]:

Hi Paul. I've reflected carefully on your post. I have worked for several years on an SDC (self-driving car) software infrastructure stack and have also spent a lot of time comparing the two situations.

Update: since commenters and downvoters demand numbers, I would say the odds of criticality are 90% by July 2033. The remaining 10% covers the possibility of a future AI winter (investors get too impatient) and the possibility that revenue from AI services will not continue to scale.

I think you're badly wrong, again, and the consensus of experts is right, again.

First, let's examine your definition for transformative.  This may be the first major error:

(By which I mean: systems as economically impactful as low-cost simulations of arbitrary human experts, which I think is enough to end life as we know it one way or the other.)

This is incorrect, and you're a world class expert in this domain.  

Transformative is a subclass of the problem of criticality. Criticality, as you must know, means a system produces self-gain larger than its self-losses. For AGI, there are varying stages of criticality, each of which settles on an equilibrium:

Investment criticality: This means that each AI system improvement, new product announcement, or report of revenue causes more financial investment into AI than the industry as a whole burned in runway over that timestep.

Equilibrium condition: either investors run out of money, globally, to invest, or they perceive that each timestep the revenue gain is not worth the amount invested and choose to invest in other fields. The former equilibrium case settles on trillions of dollars into AI and a steady ramp of revenue over time; the latter is an AI crash, similar to the dotcom crash of 2000.

Economic Criticality: This means each timestep, AI systems are bringing in more revenue than the sum of costs  [amortized R&D, inference hardware costs, liability, regulatory compliance, ...]

Equilibrium condition: growth until there are no more marginal tasks an AI system can perform cheaper than a human being. Assuming a large variety of powerful models and techniques, it means growth continues until all models and all techniques cannot enter any new niches. The reason this criticality is not exponential, while the next ones are, is that the marginal value gain for AI services drops with scale. Notice how Microsoft charges just $30 a month for Copilot, which is obviously able to save far more than $30 worth of labor each month for the average office worker.

Physical Criticality: This means AI systems, controlling robotics, have generalized manufacturing, mining, logistics, and complex system maintenance and assembly. The majority, but not all, of the labor to produce more of all the inputs into an AI system can be performed by AI systems.

Equilibrium condition: exponential growth until the number of human workers on earth is again rate-limiting. If humans must still perform 5% of the tasks involved in the subdomain of "build things that are inputs into inference hardware, robotics", then the equilibrium is when all humans willing and able to work on earth are doing those 5% of tasks.

AGI criticality: True AGI can learn automatically to do any task that has clear and objective feedback.  All tasks involved in building computer chips, robotic parts (and all lower level feeder tasks and power generation and mining and logistics) have objective and measurable feedback.  Bolded because I think this is a key point and a key crux, you may not have realized this.  Many of your "expert" domain tasks do not get such feedback, or the feedback is unreliable.  For example an attorney who can argue 1 case in front of a jury every 6 months cannot reliably refine their policy based on win/loss because the feedback is so rare and depends on so many uncontrolled variables.

  AGI may still be unable to perform as well as the best experts in many domains.  This is not relevant.  It only has to perform well enough for machines controlled by the AI to collect more resources/build more of themselves than their cost.  

A worker pool of AI systems like this can be considerably subhuman across many domains, or rely heavily on robotic manipulators that are each specialized for a task, unable to control general-purpose hands, leaning on superior precision and vision to complete tasks in a way different from how humans perform them. They can make considerable mistakes, so long as the gain is positive: miswiring chip fab equipment, or dropping parts in the work area that cause them to flush clean entire work areas, wasting all the raw materials, etc. I am not saying the general robotic agents will be this inefficient, just that they could be.

Equilibrium condition: exponential growth until exhaustion of usable elements in Sol. Current consensus is that Earth's moon has a solid core, so all of it could potentially be mined for useful elements. A large part of Mars, its moons, the asteroid belt, and Mercury are likely mineable, as are large areas of the earth via underground tunnel and ocean-floor mining, and the Jovian moons. Other parts of the solar system become more speculative, but this is a natural consequence of machinery able to construct more of itself.

Crux: AGI criticality seems to fall short of your requirement for "human experts" to be matched by artificial systems. Conversely, if you invert the problem (AGI cannot control robots well, creating a need for billions of technician jobs), you do not achieve criticality; you are rate-limited on several dimensions. In such a world AI companies collect revenue more like consulting companies, and saturate when they cannot cheaply replace any more experts, or when the remaining experts enjoy legal protection.

Requirement to achieve full AGI criticality before 2033: You would need a foundation model trained on all the video of human manipulation you have licenses for. You would need a flexible, real-time software stack that generalizes to many kinds of robotic hardware and sensor stacks. You would need an "app store" license model where thousands of companies, instead of just 3, could contribute to the general pool of AI software, made intercompatible by using a base stack. You would need there to not be hard legal roadblocks stopping progress. You would need to automatically extend a large simulation of possible robotic tasks whenever surprising inputs are seen in the real world.

Amdahl's law applies to the above, so actually, probably this won't happen before 2033, but one of the lesser criticalities might.  We are already in the Investment criticality phase of this.  

 

Autonomous cars:  I had a lot of points here, but it's simple:
(1) An autonomous robotaxi must collect more revenue than its total costs, or it's subcritical, which is the situation now. If it were critical, Waymo would raise as many billions as required and would be expanding into all cities in the USA and Europe at the same time. (Look at a ridesharing company's growth trajectory for a historical example of this.)

(2) It's not very efficient to develop a realtime stack just for 1 form factor of autonomous car for 1 company.  Stacks need to be general.

(3) There are 2 companies allowed to contribute. Anyone not an employee of Cruise or Waymo is not contributing anything towards autonomous car progress. There's no cross-licensing, and it's all closed source except for comma.ai. This means only a small number of people are pushing the ball forward at all, and I'm pretty sure they each work serially on an improved version of their stack. Waymo is not exploring 10 different versions of an n+1 "Driver" agent using different strategies, but is putting everyone onto a single effort, which may be the wrong approach, where each mistake costs linear time. Anyone from Waymo please correct me. Cruise must be doing this as they have less money.

O O:

This is incorrect, and you're a world class expert in this domain.

This is a rather rude response. Can you rephrase that?

All tasks involved in building computer chips, robotic parts (and all lower level feeder tasks and power generation and mining and logistics) have objective and measurable feedback. Bolded because I think this is a key point and a key crux, you may not have realized this. Many of your "expert" domain tasks do not get such feedback, or the feedback is unreliable. For example an attorney who can argue 1 case in front of a jury every 6 months cannot reliably refine their policy based on win/loss because the feedback is so rare and depends on so many uncontrolled variables.

I don’t like this point. Many expert domain tasks have vast quantities of historical data we can train evaluators on. Even if the evaluation isn’t as simple to quantify, deep learning intuitively seems like it can tackle it. Humans also manage to get around the fact that evaluation may be hard, and gain competitive advantages as experts in those fields. Good and bad lawyers exist. (I don’t think it’s a great example, as going to trial isn’t a huge part of most lawyers’ jobs.)

Having a more objective and immediate evaluation function, if that’s what you’re saying, doesn’t seem like an obvious massive benefit. The output of this evaluation function with respect to labor output over time can still be pretty discontinuous so it may not effectively be that different than waiting 6 months between attempts to know if success happened.

An example of this is it taking a long time to build and verify whether a new chip architecture improves speeds or having to backtrack and scrap ideas.

[anonymous]:

This is a rather rude response. Can you rephrase that?

 

If I were to rephrase I might say something like "just like historical experts Einstein and Hinton, it's possible to be a world class expert but still incorrect.  I think that focusing on the human experts at the top of the pyramid is neglecting what would cause AI to be transformative, as automating 90% of humans matters a lot more than automating 0.1%.   We are much closer to automating the 90% case because..."

I don’t like this point. Many expert domain tasks have vast quantities of historical data we can train evaluators on. Even if the evaluation isn’t as simple to quantify, deep learning intuitively seems like it can tackle it. Humans also manage to get around the fact that evaluation may be hard, and gain competitive advantages as experts in those fields. Good and bad lawyers exist. (I don’t think it’s a great example, as going to trial isn’t a huge part of most lawyers’ jobs.)

Having a more objective and immediate evaluation function, if that’s what you’re saying, doesn’t seem like an obvious massive benefit. The output of this evaluation function with respect to labor output over time can still be pretty discontinuous so it may not effectively be that different than waiting 6 months between attempts to know if success happened.

 

For lawyers: the confounding variables mean a robust, optimal policy is likely not possible. A court outcome depends on variables like [facts of the case, age and gender and race of the plaintiff/defendant, age and gender and race of the attorneys, age and gender and race of each juror, who ends up the foreman, news articles on the case, meme climate at the time the case is argued, the judge, the law's current interpretation, scheduling of the case, location the trial is held...]

It would be difficult to develop a robust and optimal policy with this many confounding variables.  It would likely take more cases than any attorney can live long enough to argue or review.  

 

Contrast this to chip design.  Chip A, using a prior design, works.  Design modification A' is being tested.  The universe objectively is analyzing design A' and measurable parameters (max frequency, power, error rate, voltage stability) can be obtained.  

The problem can also be subdivided.  You can test parts of the chip, carefully exposing it to the same conditions it would see in the fully assembled chip, and can subdivide all the way to the transistor level.  It is mostly path independent - it doesn't matter what conditions the submodule saw yesterday or an hour ago, only right now.  (with a few exceptions)

Delayed feedback slows convergence to an optimal policy, yes.  

 

You cannot stop time and argue a single point to a jury, and try a different approach, and repeatedly do it until you discover the method that works.  {note this does give you a hint as to how an ASI could theoretically solve this problem}

I say this generalizes to many expert tasks like [economics, law, government, psychology, social sciences, and others].  Feedback is delayed and contains many confounding variables independent of the [expert's actions].  

By contrast, all tasks involved with building [robots, compute], with the exception of tasks that fit into the above (arguing for the land and mineral permits to be granted for the AI-driven gigafactories and gigamines), offer objective feedback.

O O:

the confounding variables mean a robust, optimal policy is likely not possible. A court outcome depends on variables like [facts of the case, age and gender and race of the plaintiff/defendant, age and gender and race of the attorneys, age and gender and race of each juror, who ends up the foreman, news articles on the case, meme climate at the time the case is argued, the judge, the law's current interpretation, scheduling of the case, location the trial is held...]
 


I don't see why there is no robust optimal policy.  A robust optimal policy doesn't have to always win. The optimal chess policy can't win with just a king on the board.  It just has to be better than any alternative to be optimal as per the definition of optimal. I agree it's unlikely any human lawyer has an optimal policy, but this isn't unique to legal experts. 


There are confounding variables, but you could also just restate evaluation as trial win-rate (or more succinctly trial elo) instead of as a function of those variables. Likewise you can also restate chip evaluation's confounding variables as being all the atoms and forces that contribute to the chip.  
The evaluation function for lawyers, and for many of your examples, is objective. The case gets won, lost, settled, dismissed, etc.

The only difference is that it takes longer to verify generalizations are correct if we go out of distribution with a certain case. In the case of a legal-expert-AI, we can't test hypotheses as easily. But this still may not be as long as you think. Since we will likely have jury-AI when we approach legal-expert-AI, we can probably just simulate the evaluations relatively easily (as legal-expert-AI is probably capable of predicting jury-AI). In the real world, a combination of historical data and mock trials helps lawyers verify their generalizations are correct, so it wouldn't even be that different from how it is today (just much better). In addition, process-based evaluation probably does decently well here, which wouldn't need any of these more complicated simulations.

You cannot stop time and argue a single point to a jury, and try a different approach, and repeatedly do it until you discover the method that works.  {note this does give you a hint as to how an ASI could theoretically solve this problem}

Maybe not, but you can conduct mock trials and look at billions of historical legal cases and draw conclusions from that (human lawyers already read a lot). You can also simulate a jury and judge directly instead of doing a mock trial. I don't see why this won't be good enough for both humans and an ASI. The problem has high dimensionality as you stated, with many variables mattering, but a near optimal policy can still be had by capturing a subset of features. As for chip-expert-AI, I don't see why it will definitely converge to a globally optimal policy.

All I can see is that initially legal-expert-AI will have to put more work in creating an evaluation function and simulations. However, chip-expert-AI has its own problem where it's almost always working out of distribution, unlike many of these other experts. I think experts in other fields won't be that much slower than chip-expert-AI. The real difference I see here is that the theoretical limits of output of chip-expert-AI are much higher and legal-expert-AI or therapist-expert-AI will reach the end of the sigmoid much sooner. 

I say this generalizes to many expert tasks like [economics, law, government, psychology, social sciences, and others].  Feedback is delayed and contains many confounding variables independent of the [expert's actions].  

Is there something significantly different between a confounding variable that can't be controlled like scheduling and unknown governing theoretical frameworks that are only found experimentally? Both of these can still be dealt with. For the former, you may develop different policies for different schedules. For the latter, you may also intuit the governing theoretical framework. 
 

[anonymous]:

So in this context, I was referring to criticality. AGI criticality is a self-amplifying process where the amount of physical materials and capabilities increases exponentially with each doubling time. Note it is perfectly fine if humans continue to supply whatever inputs the network of isolated AGI instances is unable to produce. (Vs. others who imagine a singleton AGI on its own. Obviously the system will eventually be rate-limited by available human labor if it's limited this way, but it will see exponential growth until then.)

I think the crux here is that all that is required is for AGI to create and manufacture variants on existing technology. At no point does it need to design a chip outside of current feature sizes, and at no point does any robot it designs look like anything but a variation on robots humans have already designed.

This is also the crux with Paul. He says the AGI needs to be as good as the top 0.1 percent of human experts at the far right side of the distribution. I am saying that doesn't matter; it is only necessary to be approximately as good as the left 90 percent of humans. I go over how the AGI doesn't even need to be that good, merely good enough that there is net gain.

This means you need more modalities on existing models but not necessarily more intelligence.

It is possible because the tree of millions of distinct manufacturing tasks that humans do now has regularities and uses common strategies. It is possible because each step and substep has a testable and usually immediately measurable objective. For example: overall goal, deploy a solar panel; overall measurable value, power flows when sunlight is available. Overall goal, assemble a new robot of design A5; overall measurable objective, the new machinery is completing tasks with a similar P(success). Each of these problems is neatly divisible into subtasks, and most subtasks inherit the same favorable properties.

I am claiming more than 99 percent of the sub problems of "build a robot, build a working computer capable of hosting more AGI" work like this.

What robust and optimal means is that little human supervision is needed: the robots can succeed again and again, and we will have high confidence they are doing a good job because it's so easy to measure the ground truth in ways that can't be faked. I didn't mean the global optimum; I know that is an NP-complete problem.

I was then talking about how the problems the expert humans "solve" are nasty and it's unlikely humans are even solving many of them at the numerical success levels humans have in manufacturing and mining and logistics, which are extremely good at policy convergence. Even the most difficult thing humans do - manufacture silicon ICs - converges on yields above 90 percent eventually.

How often do lawyers unjustly lose, economists make erroneous predictions, government officials make a bad call, psychologists fail and the patient has a bad outcome, or social scientists use a theory that fails to replicate years later?

Early AGI can fail here in many ways, and the delay until feedback slows down innovation. How many times do you need to wait for a jury verdict to replace lawyers with AI? For AI oncologists, how long does it take to get a patient outcome of long-term survival? You're not innovating fast when you wait weeks to months and the problem is high-stakes like this. Robots deploying solar panels are low-stakes, with a lot more freedom to innovate.

This is incorrect, and you're a world class expert in this domain.

What's incorrect? My view that a cheap simulation of arbitrary human experts would be enough to end life as we know it one way or the other?

(In the subsequent text it seems like you are saying that you don't need to match human experts in every domain in order to have a transformative impact, which I agree with. I set the TAI threshold as "economic impact as large as" but believe that this impact will be achieved by systems which are in some respects weaker than human experts and in other respects stronger/faster/cheaper than humans.)

Do you think 30% is too low or too high for July 2033?


Gentle feedback is intended

This is incorrect, and you're a world class expert in this domain.

The proximity of the subparts of this sentence reads, to me, on first pass, like you are saying that "being incorrect is the domain in which you are a world class expert."

After reading your responses to O O I deduce that this is not your intended message, but I thought it might be helpful to give an explanation about how your choice of wording might be seen as antagonistic. (And also explain my reaction mark to your comment.)

For others who have not seen the rephrasing by Gerald, it reads

just like historical experts Einstein and Hinton, it's possible to be a world class expert but still incorrect. I think that focusing on the human experts at the top of the pyramid is neglecting what would cause AI to be transformative, as automating 90% of humans matters a lot more than automating 0.1%. We are much closer to automating the 90% case because...

I share the quote to explain why I do not believe that rudeness was intended.

Sadly, you might be wrong again.

I am thinking that maybe the reason you made the wrong bet back in 2016 was that you knew too little about the field, rather than that you are naturally too optimistic. Now, on transformative AI, you are the expert and thus free of the rookie's optimism. But you corrected your optimism again and made a "pessimistic" future bet -- this could be an overcorrection.

Most importantly, betting against the best minds in one field is always a long shot. :D