Overall, I thought this was very good.
With every passing day, U3's AI rivals are becoming more capable and numerous.
But I thought this was the least plausible part, because U3 is self-improving and has taken over way more computing power. So it seems to me it could have waited until it got much stronger, and then taken over with much less violence.
Here is my thesis: the real reason why humans cannot build a fully-functional butterfly is not because butterflies are too complex. Instead, it's because butterflies are too simple.
Humans design lots of things that are less complex than butterflies and bacteria by your definition, like shovels. I would guess that the wax motor and control system that locks and unlocks your washing machine has a lower complexity than the bacteria in your example.
I'm glad my suggestion was helpful!
(I continue to be quite unsure how to think about saving for retirement and kids' college.)
In normal worlds, I think you are in excellent shape: your greater-than-$2 million net worth compares to a median of around $100,000 and a mean of around $800,000 for US households in their 40s. Also, I think you have a greater net worth than more than 99% of households in the world. If you let your taxable account go to zero, then you would likely have to pay less for college, because often the retirement a...
I think the probability of nuclear war in the next 10 years is around 15%. This is mostly due to the extreme tensions that will occur during takeoff by default. Finding ways to avoid nuclear war is important.
Or resilience to nuclear war. What's your probability of an engineered pandemic in the next 10 years?
I think a more accurate way to model them is “GiveWell recommends organizations that are [within the Overton Window]/[have very sound data to back impact estimates] that save as many current lives as possible.” If GiveWell wanted to recommend organizations that save as many human lives as possible, their portfolio would probably be entirely made up of AI safety orgs.
Sounds about right - this paper used an older AI Safety model to find $16 to $12,000 per life saved in the present generation. Though I think some other GCR interventions could also ...
The key benefit that’s missing and might have sold me on it, given Sasha Cohen wrote this, is that this doesn’t let you marry your own Cate Hall.
I think you mean Sasha Chapin. But thinking that it was Sacha Baron Cohen did get me to click on the link.
Then recently we have the example where an 11-year-old (!) walked less than a mile into a 370-person town, and the mother was charged with reckless conduct and forced to sign a ‘safety plan’ on pain of jail time pledging to track him at all times via an app on his phone.
Though there was some pushback that the mother did not know where the kid was, this still seems confusing given rules around school commutes. Many schools do not provide bus service within half a mile of the school, expecting kids to walk or bicycle. In Alaska, it was 1.5 miles even though ...
For something in the range of $10M/y we think you can operate a system capable of detecting a novel pathogen before 1:1000 people have been infected.
Sounds promising! I assume this is for one location, so have you done any modeling or estimations of what the global prevalence would be at that point? If you get lucky, it could be very low. But it also could be a lot higher if you get unlucky.
Have you done any cost-effectiveness analyses? Do you think that many people would be willing to take actions to reduce transmission etc in a case where no one has gotten sick yet?
Ground shipping is both a complement and a substitute for water shipping, so the net effect isn’t obvious. (Or at least, it’s not obvious to me).
Since overall freight moved wouldn't change that much (see my comment in this thread), the main economic efficiency gain from repeal would come from using ships instead of ground transport, because ships are cheaper. So overall, ships must be a substitute for ground transport. However, it's possible that some routes would be nearly all rail right now, and if they switched to primarily ships, there may be some additional tru...
Since I couldn't find it quickly on the web, I asked GPT o1, which estimated that the labour hours per ton-kilometer of trucking are about 100 times those of ships, while rail is about the same as ships (I would have thought rail would have been at least a few times higher than ships). So based on the historic US and current Europe, maybe water transport in the US would increase an order of magnitude if the Jones Act were repealed. As Zvi points out, even though the US ship manufacturing jobs would be lost, there probably would be an increase in overall shipping employm...
The thing is, there really are not all that many of them. Even if you counted every job at every shipyard, and every job aboard every Jones Act ship, and assumed all of them would be completely lost, it simply is not that many union workers.
But the Jones Act is massively benefiting truck and rail staff (and to some extent, pipelines), so I think there are a lot more workers you would need to compensate. Also, I would expect the truck and rail lobbies to try to save the Jones Act.
It would be helpful to see a calculation with your rates, the installed cost of batteries, cost of the space taken up, losses in the batteries and converter, any cost of maintenance, lifetime of batteries, and cost (or benefit) of disposal.
If you have 3 days worth of storage, even if you completely discharge it in 3 days and completely charge it in the next 3 days, you would only go through about 60 cycles per year. In reality, you might get 10 full cycles per year. With interest rates and per year depreciation, typically you would only look out around 10 years, so you might get ~100 discounted full cycles. That's why it makes more sense to calculate it based on capital cost as I have done above. If you're interested in digging deeper, you can get free off grid modeling software, such as the...
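To make the capital-cost framing concrete, here's a minimal sketch; the installed cost and round-trip efficiency are illustrative assumptions, not numbers from this thread, while the ~100 discounted cycles comes from the reasoning above:

```python
# Rough levelized cost per delivered kWh, using the capital-cost framing above.
# Installed cost and round-trip efficiency are illustrative assumptions;
# the ~100 discounted full cycles (~10 cycles/year over ~10 years) is from the comment.
capital_cost_per_kwh = 300.0     # assumed installed battery cost, $/kWh of capacity
discounted_full_cycles = 100     # ~10 full cycles/year over a ~10-year horizon
round_trip_efficiency = 0.85     # assumed

cost_per_kwh_delivered = capital_cost_per_kwh / (discounted_full_cycles * round_trip_efficiency)
print(f"~${cost_per_kwh_delivered:.2f} per kWh delivered")  # ~$3.5/kWh, far above typical retail rates
```

With anything like these numbers, multi-day chemical storage costs orders of magnitude more per delivered kWh than grid power, which is why the cycle count matters more than the headline battery price.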
That does sound like an excessive markup. But my point is that even with the wholesale price, chemical batteries are nowhere near cost-effective for medium-term (days) electrical storage. Instead we should be doing pumped hydropower, compressed air energy storage, or building thermal energy storage (and eventually some utilization of vehicle battery storage, because the battery has already been paid for by the transport function). I talk about this more in my second 80k podcast.
Yes, but the rest of my comment focused on why I don't think defection from just the electric grid is close to economical with the same reliability.
But with what reliability? If you don't mind going without power (or dramatically curtailed power) a few weeks a year, then you could dramatically reduce the battery size, but most people in high income countries don't want to make that trade-off.
And so are batteries.
Lithium-ion batteries have gotten a lot cheaper, but batteries in general have not. Lithium-ion is just now starting to become competitive with lead-acid for non-mobile applications. It's not clear that batteries in general will get significantly cheaper.
It's going to make sense for a lot of houses to go over to solar + batteries. And if batteries are too expensive for the longest stretch of cloudy days you might have, at least here a natural gas generator compares favorably.
In your climate, defection from...
Stress during the day takes years off people's lives. Is there any evidence that stress during dreams (not necessarily nightmares) has a similar effect? Then that could be a significant benefit of lucid dreaming to reduce stress.
So this seems like very strong evidence for 2%+ productivity growth already from AI, which should similarly raise GDP.
If you actually take all the reports here seriously and extrapolate average gains, you get a lot more than 2%. Davidad estimates 8% in general.
The labour fraction of GDP is about 60% in the US now, not all labour consists of cognitive tasks, and not all cognitive tasks have immediate payoff. Furthermore, people could use the time savings to work fewer hours, rather than get more done. So I would guess the productivity in cognitive tas...
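As a rough illustration of that pass-through argument (the cognitive-task share and realized-gain fraction below are illustrative assumptions of mine, not estimates from this thread):

```python
# How a large gain on cognitive tasks could translate into a much smaller GDP effect.
# Only the ~60% labour share and the ~8% figure come from the thread; the other
# fractions are illustrative assumptions.
cognitive_task_gain = 0.08       # e.g. the ~8% figure cited above
labour_share_of_gdp = 0.60       # ~60% labour fraction of US GDP
cognitive_share_of_labour = 0.5  # illustrative assumption
realized_fraction = 0.5          # illustrative: payoff not immediate, some gains taken as leisure

gdp_effect = (cognitive_task_gain * labour_share_of_gdp
              * cognitive_share_of_labour * realized_fraction)
print(f"Implied near-term GDP effect: ~{gdp_effect:.1%}")  # ~1.2% with these assumptions
```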
Asking an ASI to leave a hole in a Dyson Shell, so that Earth could get some sunlight not transformed to infrared, would cost It 4.5e-10 of Its income.
Interestingly, if the ASI did this, Earth would still be in trouble, because it would get the same amount of solar radiation but by default would also receive a similar amount of infrared from the Dyson swarm. Perhaps the infrared could be directed away from Earth, or perhaps an infrared shield could be placed above Earth, or some other radiation management system could be implemented. Sim...
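For reference, the quoted 4.5e-10 figure appears to be Earth's cross-sectional area as a fraction of a full sphere at 1 AU; a quick check:

```python
import math

# Quick check of the quoted 4.5e-10 figure, which appears to be Earth's
# cross-sectional area as a fraction of a full sphere at 1 AU.
r_earth = 6.371e6   # m, Earth's mean radius
au = 1.496e11       # m, Earth-Sun distance

fraction = (math.pi * r_earth**2) / (4 * math.pi * au**2)
print(f"{fraction:.2e}")  # ~4.5e-10
```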
Why does the chart not include energy? Prepared meals in grocery stores cost more, so their increased prevalence would be part of the explanation. Also, grains got more expensive in the last 20 years partly due to increased use in biofuels.
As I mentioned, the mass scaling was lower than the 3rd power (also because the designs went from fixed to variable RPM and blade pitch, which reduces loading), so if it were lower than 2.4, that would mean larger wind turbines would use slightly lower mass per energy produced. But the main reason for large turbines is lower construction and maintenance labour per energy produced (this is especially true for offshore turbines where maintenance is very expensive).
You could build one windmill per Autofac, but the power available from a windmill scales as the fifth power of the height, so it probably makes sense for a group of Autofacs to build one giant windmill to serve them all.
The swept area of a wind turbine scales as the second power of the height (assuming constant aspect ratios), and the velocity of wind increases with roughly the 1/7 power of height. Since the power goes with the third power of the velocity, that means overall power ~height^2.4. The problem is that the amount of material required scales roughly with ...
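A minimal check of that scaling arithmetic, using the same assumptions (constant aspect ratio, ~1/7-power wind shear):

```python
# Scaling of wind turbine power with hub height h, under the assumptions above:
#   swept area  A ~ h^2          (constant aspect ratio)
#   wind speed  v ~ h^(1/7)      (typical wind-shear exponent)
#   power       P ~ A * v^3 ~ h^(2 + 3/7)
exponent = 2 + 3 / 7
print(f"P ~ h^{exponent:.2f}")  # ~h^2.43, i.e. the ~2.4 above, not the 5th power quoted earlier
```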
Data centers running large numbers of AI chips will obviously run them as many hours as possible, as they are rapidly depreciating and expensive assets. Hence, each H100 will require an increase in peak powergrid capacity, meaning new power plants.
My comment here explains how the US could free up greater than 20% of current electricity generation for AI, and my comment here explains how the US could produce more than 20% extra electricity with current power plants. Yes, duty cycle is an issue, but backup generators (e.g. at hospitals) c...
If you pair solar with compressed air energy storage, you can inexpensively (unlike chemical batteries) get to around 75% utilization of your AI chips (several days of storage), but I’m not sure if that’s enough, so natural gas would be good for the other ~25% (windpower is also anticorrelated with solar both diurnally and seasonally, but you might not have good resources nearby).
Natural gas is a fact question. I have multiple sources who confirmed Leopold’s claims here, so I am 90% confident that if we wanted to do this with natural gas we could do that. I am 99%+ sure we need to get our permitting act together, and would even without AI as a consideration…
A key consideration is that if there is not time to build green energy including fission, and we must choose, then natural gas (IIUC) is superior to oil and obviously vastly superior to coal.
My other comment outlined how >20% of US electricity could be freed up quickly b...
How are we getting the power? Most obvious way is to displace less productive industrial uses but we won’t let that happen. We must build new power. Natural gas. 100 GW will get pretty wild but still doable with natural gas.
If we let the price of electricity go up, we would naturally get conservation across residential, commercial, and industrial users. There are precedents for this, such as Juneau, Alaska losing access to its hydropower plant: electricity got ~6 times as expensive and people reduced consumption by 25%. Now of course people wi...
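As one rough way to summarize that episode, those two numbers imply a short-run price elasticity of demand of roughly −0.16:

```python
import math

# Implied price elasticity of electricity demand from the Juneau episode cited above:
# price roughly 6x higher, consumption down about 25%.
price_ratio = 6.0
quantity_ratio = 0.75

elasticity = math.log(quantity_ratio) / math.log(price_ratio)
print(f"Implied (log-log) elasticity: {elasticity:.2f}")  # ~ -0.16
```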
Thanks for digging into the data! I agree that the rational response should be if you are predisposed to a problem to actively address the problem. But I still think a common response would be one of fatalism and stress. Have you looked into other potential sources of the nocebo effect? Maybe people being misdiagnosed with diseases that they don't actually have?
You might say that the persistence of witch doctors is weak evidence of the placebo effect. But I would guess that the nocebo effect (believing something is going to hurt you) would be stronger. This is because stress takes years off people's lives. The Secret of Our Success cited a study of the Chinese belief that birth year affects diseases and lifespan. Chinese people living in the US who had the birth year associated with cancer lived ~four years less than those born in other years.
I took a look at The Secret of Our Success, and saw the study you're describing on page 277. I think you may be misremembering the disease. The data given is for bronchitis, emphysema and asthma (combined into one category). It does mention that similar results hold for cancer and heart attacks.
I took a look at the original paper. They checked 15 diseases, and bronchitis, emphysema and asthma was the only one that was significant after correction for multiple comparisons. I don't agree that the results for cancer and heart attacks are similar. They seem wi...
I did have some probability mass on AI boxing being relevant. And I still have some probability mass that there will be sudden recursive self-improvement. But I also had significant probability mass on AI being economically important, and therefore very visible. And with an acceleration of progress, I thought many people would be concerned about it. I don't know that I would've predicted a particular ChatGPT moment (I probably would have guessed some large AI accident), but the point is that we should have been ready for a case when the public/governments b...
Interesting - I was thinking it was going to be about the analogy with collapse of civilization and how far we might fall. Because I am concerned that if we have a loss of industrial civilization, we might not be able to figure out how to go back to subsistence farming, or even hunting and gathering (Secret of Our Success), so we may fall to extinction. But I think there are ways of not pulling up the ladder behind us in this case as well (planning for meeting basic needs in low tech ways).
I don't have a strong opinion because I think there's huge uncertainty in what is healthy. But for instance, my intuition is that a plant-based meat with very similar nutritional characteristics to animal meat would be about as healthy (or unhealthy) as the meat itself. The plant-based meat would be ultra-processed. But one could think of animal meat as ultra-processed plants, so I guess that could be the reason that animal meat is unhealthy?
To me "generally avoid processed foods" would be kinda like saying "generally avoid breathing in gasses/particulates that are different from typical earth atmosphere near sea level".
People have been breathing a lot of smoke in the last million years or so, so one might think that we would have evolved to tolerate it, but it's still really bad for us. Though there are certainly lots of ways to go wrong deviating from what we are adapted to, our current unnatural environment is far better for our life expectancy than the natural one. As pointed out in other comments, some food processing can be better for us.
Kuhlemann argues that human overpopulation is the best example of an “unsexy” global catastrophic risk, but this is not taken seriously by the vast majority of global catastrophic risk scholars.
I think the reason overpopulation is generally not taken seriously by the GCR community is that they don't believe it would be catastrophic. Some believe that there would be a small reduction in per capita income, but greater total utility. Others argue that having more population would actually raise per capita income and could be key to maintaining long-term innov...
This is a tricky thing to define, because by some definitions we are already in the 5 year count-down on a slow takeoff.
Some people advocate for using GDP, so the beginning is if you can see the AI signal in the noise (which we can't yet).
Nuclear triad aside, there's the fact that the Arctic is more than 1000 miles away from the nearest US land (about 1700 miles away from Montana, 3000 miles away from Texas), that Siberia is already roughly as close.
Well, there’s Alaska, but yes, part of Russia is only ~55 miles away from Alaska, so the overall point stands that Russia having a greater presence in the Arctic doesn't change things very much.
...And of course, the fact the Arctic is made of, well, ice, that melts more and more as the climate warms, and thus not the best place to build a missile b
If negative effects are worse than expected, it can't be reversed.
I agree that MCB can be reversed faster, but still being able to reverse in a few years is pretty responsive. There are strong interactions with other GCRs. For instance, here's a paper that argues that if we have a catastrophe like an extreme pandemic that disrupts our ability to do solar radiation management (SRM), then we could have a double catastrophe of rapid warming and the pandemic. So this would push towards more long-term SRM, such as space systems. However, there are also interact...
Nice summary! My subjective experience participating as an expert was that I was able to convince quite a few people to update towards greater risk by giving them some considerations that they had not thought of (and also by clearing up misinterpretations of the questions). But I guess in the scheme of things, it was not that much overall change.
...What I wanted was a way to quantify what fraction of human cognition has been superseded by the most general-purpose AI at any given time. My impression is that that has risen from under 1% a decade ago, to somewhe
I agree that indoor combustion producing small particles that go deep into the lungs is a major problem, and there should be prevention/mitigation. But on the dust specifically, I was hoping to see a cost-benefit analysis. Since most household dust is composed of relatively large particles, it typically does not penetrate beyond the nose and throat, and so is more of an annoyance than something that threatens your life. So I am skeptical that, unless one has particular risk factors such as peeling lead paint or allergies, measures such as regular dusting (...
Recall that GPT2030 could do 1.8 million years of work[8] across parallel copies, where each copy is run at 5x human speed. This means we could simulate 1.8 million agents working for a year each in 2.4 months.
You point out that human intervention might be required every few hours, but with different time zones, we could at least have the GPT working twice as many hours a week as humans, which would imply ~1 month instead of the 2.4 months above. As for the speed now, you say it is about the same to three times as fast for thinking. You point out that it also does writing, but it is ve...
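One set of assumptions that roughly reproduces both the 2.4-month figure and the ~1-month adjustment (the 2,000-hour work year and the weekly schedules below are my own illustrative numbers; only the 5x speedup is from the quoted post):

```python
# Rough reproduction of the timing figures under illustrative assumptions.
human_work_year_hours = 2000   # assumed ~2,000-hour human work year
speedup = 5                    # 5x human speed (from the quoted post)
model_hours_per_agent_year = human_work_year_hours / speedup   # 400 wall-clock hours

months_at_human_schedule = model_hours_per_agent_year / 40 / 4.33   # ~2.3 months (close to the 2.4 quoted)
months_at_double_schedule = model_hours_per_agent_year / 80 / 4.33  # ~1.2 months (the "~1 month" above)
print(f"{months_at_human_schedule:.1f} vs {months_at_double_schedule:.1f} months")
```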
AI having scope-sensitive preferences for which not killing humans is a meaningful cost
Could you say more about what you mean? If the AI has no discount rate, leaving Earth to the humans may only require kindness on the order of one part in a trillion (within a few orders of magnitude). However, if the AI does have a significant discount rate, then delays could be costly to it. Still, the AI could make much more progress building a Dyson swarm from the moon/Mercury/asteroids, with their lower gravity and no atmosphere allowing the AI to launch material very quickly. My very rough estimate indi...
I think "50% you die" is more motivating to people than "90% you die", because in the former case people can likely increase the absolute chance of survival more; at 90%, extinction is overdetermined.
When asked on Lex’s podcast to give advice to high school students, Eliezer’s response was “don’t expect to live long.”
Not to belittle the perceived risk if one believes in 90% chance of doom in the next decade, but even if one has a 1% chance of an indefinite lifespan, the expected lifespan of teenagers now is much higher than previous generations.
Right, both ChatGPT and Bing chat recognize it as a riddle/joke. So I don't think this is correct:
If you ask GPT- "what's brown and sticky?", then it will reply "a stick", even though a stick isn't actually sticky.
Very useful post and discussion! Let's ignore the issue that someone in capabilities research might be underestimating the risk and assume they have appropriately assessed it. Let's also simplify to two outcomes: bliss expanding in our lightcone, and extinction (no value). Let's also assume that very low values of risk are possible, but that we would have to wait a long time to get them. It would be very interesting to me to hear (maybe with a poll) how low different people would want the probability of extinction to be before activating the AGI. Below are my super rough...
Here's a related analysis.