Nice list! I'm skeptical of 2, 3, 4, 7, 9, 10, 21, 22, 25, 28. I'd be interested to hear more about them and also about 29 and the hydra markets stuff. Also, clearly protein folding will be solved to a significant extent, but how solved? Enough for molecular nanotech? Can you say more about what you have in mind?
I think 10x decrease in energy prices is too much. My reasons are:
Like rayom I also noticed you did not mention anything about biology and medicine. I think there will be some advances from that side. A malaria vaccine seems probable by 2040 (maybe ~80%?) and would be a big thing for large parts of the world. Some improvements in cancer therapy also seem to have relatively high probability (nothing even remotely like "cure all cancer", to be clear). We might get some improvements for Alzheimer's, dementia, or other age-related illnesses, but my "business as usual" expectation is that only moderate advancements will be widely deployed by 2040. Nevertheless, they might be sufficient to significantly improve the quality of life of elderly people in rich countries.
Expanding on point 2: if we want to talk about a price drop, then we need to think about the relative elasticity of supply vs demand - i.e. how sensitive demand is to price, and how sensitive supply is to price. Thinking about the supply side alone is not enough: price could drop a lot, but then demand just shoots up until some new supply constraint becomes binding and the price goes back up.
(Also, I would be surprised if supercomputers and AI are actually the energy consumers which matter most for pricing. Air conditioning in South America, Africa, Ind...
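The elasticity point above can be made concrete with a toy constant-elasticity model (all numbers here are illustrative assumptions, not estimates of real energy markets): even a 10x outward shift in supply need not translate into a 10x lower price if demand is elastic.

```python
# Toy constant-elasticity model (illustrative numbers only):
#   demand: Q = A * P**(-e_d)
#   supply: Q = B * P**(e_s)
# Setting them equal gives the equilibrium price P = (A / B) ** (1 / (e_d + e_s)).

def equilibrium_price(A, B, e_d, e_s):
    """Equilibrium price where constant-elasticity demand meets supply."""
    return (A / B) ** (1 / (e_d + e_s))

A, B = 100.0, 1.0
e_d, e_s = 2.0, 1.0  # assumption: fairly elastic demand

p_before = equilibrium_price(A, B, e_d, e_s)
p_after = equilibrium_price(A, 10 * B, e_d, e_s)  # supply shifts out 10x

print(f"Price falls by ~{p_before / p_after:.1f}x")  # ~2.2x, not 10x
```

With elastic demand, most of the supply shift is absorbed by a quantity increase rather than a price drop, which is exactly the mechanism the comment describes.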
I understand that malaria resists attempts at vaccination, but regarding your 80% prediction by 2040, did you see the news that a malaria vaccine candidate reached 77% efficacy in phase II trials just last month? Quote: "It is the first vaccine that meets the World Health Organization's goal of a malaria vaccine with at least 75% efficacy."
People have argued that metal prices would be a problem for a long time, and those predictions usually failed to come true.
Thanks! I edited my thing on energy to clarify, I'm mostly interested in the price of energy for powering large neural nets, and secondarily interested in the price of energy in general in the USA, and only somewhat interested in the price of energy worldwide.
I am not convinced yet that the increased demand from AI will result in increased prices. In fact I think the opposite might happen. Solar panels are basically indefinitely scalable; there are large tracts of empty sunny land in which you can just keep adding more panels basically indefinitely. A...
When I look back twenty years, it seems amazing how little has changed or improved since then. Basically just the same, but some things are less slow.
The arrival of the internet in the nineties was the only real change. The arrival of AI will be the next change, whenever that happens.
And in twenty years the looming maw of death will be closer for most of us, like a bowling ball falling into a black hole.
I share this sentiment. Shockingly little has happened in the last 20 years, good or bad, in the grand scheme of things. Our age might become a blank spot in the memory of future people looking back at history; the time where nothing much happened.
Even this recent pandemic can't shake up the blandness of our age. Which is a good thing, of course, but still.
I expect people to find 1 wild. The rest are pretty straightforward extrapolations of trends, and they're the sort of trends which have historically been quite predictable.
Definitely, and the Nate Silver piece in particular is 8 years out of date. But these are long-term trends, and the predictions don't require much precision - COVID might shift some demographic numbers by 10% for a decade, but that's not enough to substantially change the predictions for 2040.
Sure. Here's a graph from wikipedia with global fertility rate projections, with global rate dropping below replacement around 2040. (Note that replacement is slightly above 2 because people sometimes die before reproducing - wikipedia gives 2.1 as a typical number for replacement rate.)
Here's another one from wikipedia with total population, most likely peaking after 2050.
On the budget, here's an old chart from Nate Silver for US government spending specifically:
The post in which that chart appeared has lots more useful info.
For Chinese GDP, there are some decent answers on this quora question about how soon Chinese GDP per capita will catch up to the US. (Though note that I do not think Chinese GDP per capita will catch up to the US by 2040 - just to other first world countries, most of which have much lower GDP per capita than the US. For instance, the EU was around $36k nominal in 2019, vs $65k nominal for the US in 2019.) You can also eyeball this chart of historical Chinese GDP growth:
In terms of electricity, transmission and distribution make up 13% and 31% of costs respectively. Even if solar panels were free, I am not confident that reliable electricity would become 10x cheaper: unless each house has quite a few days of storage cheaply, houses would still need distribution. Industrial electricity might get close to that cheap, but I think it would depend on location and space availability; otherwise at least some of the transmission and distribution costs would still exist.
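A quick back-of-the-envelope check makes this concrete. Treating the 13% transmission and 31% distribution shares quoted above as fixed costs that don't fall with panel prices (an assumption), even free generation can't get close to a 10x retail price drop:

```python
# Sketch: how much can retail electricity prices fall if generation becomes
# free but transmission and distribution (T&D) costs stay fixed?
# Cost shares are the ones quoted in the comment; treating them as fixed
# is an assumption for illustration.

transmission_share = 0.13
distribution_share = 0.31
generation_share = 1 - transmission_share - distribution_share  # 0.56

# With free generation, the remaining bill is just T&D.
remaining_fraction = transmission_share + distribution_share  # 0.44
max_price_drop = 1 / remaining_fraction

print(f"Best-case price drop with free generation: {max_price_drop:.1f}x")
# Roughly 2.3x: far short of 10x unless T&D costs also fall.
```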
Thanks, this is a good point. I've edited my post to be less confident in non-AI energy uses. Also see my reply to Jacopo.
You cover most of the interesting possibilities on the military technology front, but one thing you don't mention that might matter - especially considering the recent near-breakdowns of some of the nuclear weapon treaties, e.g. New START - is the further proliferation of nuclear weapons, including fourth-generation nuclear weapons like nuclear shaped-charge warheads, pure fusion devices, and sub-kiloton or tactical nuclear weapons. More countries fitting cruise missiles or drones with nuclear capability might also be a destabilizing factor. If laser technology is sufficiently developed, we may also see other forms of directed-energy weapons becoming more common, such as electron beam weapons or electrolasers.
The incentive here is scientific discovery.
You didn't answer the question about who you think would engage in that. It's interesting that you ignore the question.
Oh I guess you bring up Russians because they are the bad guys and most other countries are the good guys?
No, because they have another culture with regards to science. You have people like Dmitry Itskov who are willing to pursue projects that are neither for profit (filing patents) nor for status in the academic world.
It's interesting that you see Americans being greedy as synonymous for them being the good guys. It suggests to me that you haven't thought hard about who does what for what reasons.
If there were any incentive to keep nukes secret, they would've been kept secret, but the incentive to publicize nukes outweighs the incentive to keep them secret.
I have no idea what you mean with that argument. Who's "they"? What time are you speaking about?
Anything related to biotech is not included here - care to explain the reason why?
I haven't thought much about biotech and don't know much about it. This is why I made this a question rather than a post, I'm super interested to hear more things to add to the list!
It's only 19 years away; do you mean to say that there are already designer babies being born?
The 3 babies from He Jiankui will be adults by then, definitely; one might quibble about how 'designer' they are, but most people count selection as 'designer' and GenPred claims to have at least one baby so far selected on their medical PGSes (unclear if they did any EDU/IQ PGSes in any way, but as I've always pointed out, because of the good genetic correlations of those with many diseases, any selection on complex diseases will naturally also boost those).
Government agencies benefit from all forms of technologies.
This assumes we are living in a world where the US government has the ability to fund far-out biomedical research that's not beneficial to big pharma or another group that can lobby for it.
In reality the US government isn't even able to stock enough masks for a pandemic. I'd love to live in a world where the US government would be able to fund science purely for the sake of scientific discovery independent from any interest groups but there's no reason to believe that's the world in which we are living.
Nukes are deterrents. That's the only reason to invest in them.
Again you fail to point out what time and which actors you are talking about, which suggests you don't have a good model.
If we look at the US government, the US government pretended for a long time that only the president can order nuclear strikes while giving that ability to a bunch of military commanders and setting the nuclear safety codes to 00000000.
If the only reason you invest in nukes is deterrence, it makes no sense to have more people able to launch nukes than the other side knows about. In that world the US government would have had no reason to set the safety codes to 00000000 when ordered by Congress to have safety codes.
You might also watch the Yes, Prime Minister episode about nuclear deterrence for more reasons (while it's exaggerated comedy, they did a lot of background research and talked to people inside the system about how the UK political system really worked at the time).
Most comments on here are just pure conjectures by people with mostly ML backgrounds. I can't say I'm educated enough to make these wild guesses about what it'll be like in 2040.
I have enough expertise to make wild guesses about the future to have been paid for it. In the past I was invited, by people funded by my government, as an expert to participate in a scenario-planning exercise that involved modeling scenarios about medical progress.
johnswentworth, whom you replied to, earns his money studying the history of progress and how it works, so he's someone with a fairly detailed, relevant model of how scientific progress works, not just someone with an ML background.
LessWrong isn't a random Reddit forum. It's not a place where it's safe to assume that the people you are talking to lack relevant experience with what they are talking about.
The key question here is incentives. What incentive is there to produce human clones (likely with more genetic defects than the original) if you can't publish papers afterwards or sell a product?
I don't see any player that had the necessary ability 18 years ago and the incentive to make it happen. Which players do you consider to have both ability and incentive?
Russian billionaires come to mind, but if one of them clones himself and treats the clone as his child, that seems hard to keep secret.
Bold claim! Perhaps you should make a post (or shortform, or even just separate answer to this question) where you lay out your reasoning & evidence? I'd be interested in that.
The constant improvements in nuclear tech will lead to multiple small terrorist organizations possessing portable nuclear bombs. We'll likely see at least a few major cities suffering drastic losses from terrorist threats.
Gene therapy will be strongly encouraged in some developed nations. Near the same level of encouragement as vaccines receive.
Pollution of the oceans will take over as the most popular pressing environmental issue.
I'm especially interested in the nuclear bomb and gene therapy predictions; care to elaborate & explain your reasoning / evidence?
Very cool prompt and list. Does anybody have predictions on the level of international conflict about AI topics and the level of "freaking out about AI" in 2040, given the AI improvements that Daniel is sketching out?
There will still be wars in Europe. I think conflicts will move west of Ukraine, if Ukraine still exists by that point.
It is certainly possible but what kind of scenario are you thinking about?
For conflicts to move west of Ukraine, they would have to involve EU or NATO countries, almost certainly both. So that would mean either an open Russia-NATO war or the total breakdown of both NATO and the EU. Either scenario would have huge consequences for the world as a whole, nearly as much as a war between China and the US and its allies.
AI-written books will be sold on Amazon, and people will buy them. Specialty services will write books on demand based on customer specifications. At least one group, and probably several, will make real money in erotica this way. The market for hand-written mass market fiction, especially low-status stuff like genre fiction and thrillers, will contract radically.
...academia will suffer even more from the influx of papers which were not written by their official authors. The job of the scientific editor will become that much harder.
Epistemic effort: I thought about this for 20 minutes and dumped my ideas, before reading others' answers
discrimination against groups that are marginalised in 2021 has reduced somewhat
Does that prediction include poor white people, BDSM people, and generally everybody who has to strongly hide part of their identity when living in cities - or only those groups that are compatible with intersectional thinking?
There's been 20 years of "Prompt programming" now, and so loads of apps have been built using it and lots of kinks have been worked out. Any thoughts on what sorts of apps would be up and running by 2040 using the latest models?
Prompt programming isn't as good as it was cracked up to be. In the past, as old timers never shut up about:
"Programmers* just wrote programs and they *** worked! Or at least gave errors that made sense! And there was some consistency! When programs crashed for 'no reason' there used to a reason! None of this 'neural network gets cancer from the internet's sheer stupidity and dies.' crap!"
In the present, programming is a more...mixed role. Debugging is a nightmare, brute force to find good (inputs to get good) outputs remains a distressingly large amount of the puzzle, despite all the fancy techniques that dress it up as something - anything - else in a world that no longer makes sense.
The people working on maintenance deal with the edge cases and see things they never wanted to see, as the price paid for wonderful tech. "The fat tails of the distribution" as Taleb put it, leads to more disillusionment, burnout, etc. in the associated professions when people try to do things too ambitious. This isn't some grand narrative about Atlantis, and hubris - just more of the same, with technology that can do wonderful things, but is consistently over budget, over-hyped - the dream of AGI remains hacked together, created half by 'machines' and half by people.
Imagine an ice cream machine that seems (far too often) to stop working just when you need it the most, a delicate piece of machinery that is always a pain to deal with. This is the future of AI. (One second it works great; the next, a shitshow - a move nicknamed 'the treacherous swan dive'. Commercial applications secretly, under the hood, involve way too much caching - saving good outputs - and a lot of people working to create stuff that extrapolates from past good outputs, and reasons by input similarity to cover the holes that inevitably pop up. In other words, AI will secretly be 20% humans trying to answer the question 'what would the AI do if it worked right on this input instead of spewing nonsense?', when there's enough stuff to cover the holes. The other 80% of the time, it works amazingly well, and performs feats once considered miraculous, though amazement quickly fades and consumers soon return to having ridiculous expectations of this rather brittle technology.)
*original sentence with typos: Where once programs just wrote programs and they *** worked!
Some examples of products and services:
More people try to do things - like write books, due to encouragement from AIs. Editors are astounded by the dramatic variation in quality over the course of manuscripts.
Fanfiction enters a new age. When AIs alone write stories, new genres that play to their strengths result, but they are...difficult to understand. For instance, descendants of 'Egg Salad', which was when a story is rewritten with one of the characters...as an egg.
Eventually AIs get good, but conflicts arise, like: Is the 'unofficial ending' written by GPZ really how the author would have wrapped things up? Or is GPL's ending, though less exciting, more true to the themes of the story? Debates emerge about whether an author is or isn't human (and to what extent), and whether it's really art if it's not made by a person. Could a human being really have solved the whodunit, given that it wasn't written by a human being? These arguments over the souls of fiction are taken seriously by some. Fans of Sherlock Holmes seem to care about the legacy/the future of mysteries. For other genres, it varies.
Starlink internet is fast, reliable, cheap, and covers the entire globe.
Starlink isn't super cheap. But the quality, given the price, is a great deal, and eventually it becomes very popular as people get tired of 'the internet being slow' or not working even for short periods of time. In order to cut costs, however, businesses** that 'don't really care about their customers' don't always invest in it, and remain a source of complaints.
**also schools, K-12.
3D printing is much better and cheaper now. Most cities have at least one "Additive Factory" that can churn out high-quality metal or plastic products in a few hours and deliver them to your door, some assembly required. (They fill up downtime by working on bigger orders to ship to various factories that use 3D-printed components, which is most factories at this point since there are some components that are best made that way)
A battle begins for the label 'artisanal'.
"But it looks like it was made by a real person!"
"It looks too good. The errors there, and here, and there - it's too authentic."
Unspeakable things happen in fashion, which becomes way more varied. In some places/groups, waste and conspicuous consumption (ridiculous number of clothes, changing all the time to keep up with fashion changing with speed unimaginable today) grow so extreme that it accidentally creates a competitor out of 'minimalism'*** (a small set of amazing clothes, possibly designed to be tweaked (regularly) a little bit to fit in, but not too much, and not too obviously).
***As a result of very popular/fashionable people being left behind, and fighting the trends, so they don't have to work at keeping up literally every second of every day.
Small props start to creep in.
And ridiculous hats make an astounding comeback!
An all out war between China and the USA over Taiwan has crippled the whole world.
Good news is AI alignment is not an issue anymore.
Are you saying that's the only scenario that would prevent the singularity, or are you saying it's generally a probable scenario?
Bold claim! Perhaps you should make a post (or shortform, or even just separate answer to this question) where you lay out your reasoning & evidence? I'd be interested in that. If you think it's infohazardous, maybe just a gdoc?
The predictions about AI-adjacent things seem weird when we condition on AGI not taking off by 2040. Conditional on that, it seems like the most likely world is one where the current scaling trends play out on the current problems, but current methods turned out to not generalize very well to most real-world problems (especially problems without readily-available giant data sets, or problems in non-controlled environments). In other words, this turns out pretty similar to previous AI/ML booms: a new class of problems is solved, but that class is limited, and we go into another AI winter afterwards.
In that world, I'd expect deep learning to be used commercially for things which we're already close to: procedural generation of graphics for games and maybe some movies, auto-generation of low-quality written works (for use-cases which don't involve readers paying close attention) or derivative works (like translations or summaries), that sort of thing. In most cases, it probably won't be end-to-end ML, just tools for particular steps. Prompt programming mostly turns out to be a dead end, other than a handful of narrow use-cases. Automated cars will probably still be right-around-the-corner, with companies producing cool demos regularly but nobody really able to handle the long tail. People will stop spending large amounts on large models and datasets, though models will still grow slowly as compute & data get cheaper.
I was trying hard to do exactly what you recommend doing here, and focus on only the AI-related stuff that seems basically "locked in" at this point and will happen even if no AGI etc. I think +5 OOMs of compute to train AIs by 2040 makes sense in this framework because +2 will come from reduced cost and it's hard for me to imagine no one spending a billion dollars on an AI training run by 2040. I guess that could happen if there's an AI winter, but that would be a trend-busting event... Anyhow, it seems like spending & self-driving-cars are the two cases where we disagree? You think they are more closely connected to AGI than I did, such that conditionalizing on AGI not happening means those things don't happen either? Would you then agree e.g. that in 2025 we have self-driving cars, or billion-dollar models, you'd be like "well fuck AGI is near?" (Or maybe you already have short timelines?)
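The +5 OOMs figure decomposes into hardware getting cheaper plus people spending more. Here's a rough sketch of that arithmetic; the ~2.5-year price-performance halving time and the $10M-to-$10B spending range are my illustrative assumptions, not figures from the thread:

```python
import math

# Assumption: cost per FLOP halves roughly every ~2.5 years, a commonly
# cited ballpark for recent ML hardware price-performance trends.
years = 2040 - 2021
halving_time = 2.5
cost_ooms = (years / halving_time) * math.log10(2)  # ~2.3 OOMs from cheaper compute

# Assumption: largest training-run spending grows from ~$10M to ~$10B.
spend_ooms = math.log10(10e9 / 10e6)  # 3 OOMs from bigger budgets

print(f"From cheaper hardware: ~{cost_ooms:.1f} OOMs")
print(f"From bigger budgets:  ~{spend_ooms:.1f} OOMs")
print(f"Total: ~{cost_ooms + spend_ooms:.1f} OOMs")
```

Under these assumptions the two effects combine to a bit over 5 OOMs, matching the estimate above; the point of the sketch is just that only the spending component depends on continued willingness to invest.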
You think they are more closely connected to AGI than I did, such that conditionalizing on AGI not happening means those things don't happen either? Would you then agree e.g. that in 2025 we have self-driving cars, or billion-dollar models, you'd be like "well fuck AGI is near?"
Self-driving cars would definitely update me significantly toward shorter timelines. Billion-dollar models are more a downstream thing - i.e. people spending billions on training models is more a measure of how close AGI is widely perceived to be than a measure of how close it actually is. So upon seeing billion-dollar models, I don't think I'd update much, because I'd already have updated on the things which made someone spend a billion dollars on a model (which may or may not actually be strong evidence for AGI being close).
In this world, I'd also expect that models are not a dramatic energy consumer (contra your #6), mainly because nobody wants to spend that much on them. I'd also expect chatbots to not have dramatically more usage than today (contra your #7) - it will still mostly be obvious when you're talking to a chatbot, and this will mostly be considered a low-status/low-quality substitute for talking to a human, and still only usable commercially for interactions in a very controlled environment (so e.g. no interactions where complicated or free-form data collection is needed). In other words, chatbot use-cases will generally be pretty similar to today's, though bot quality will be higher. Similar story with predictive tools - use-cases similar to today, limitations similar to today, but generally somewhat better.
I would expect a lot of chatbot use cases to be a mix of humans and bots. The bot can auto-generate text, and then a human can check whether it's correct, which takes less time than the human writing everything themselves.
Interesting. I think what you are saying is pretty plausible... it's hard for me to reason about this stuff since I'm conditionalizing on something I don't expect to happen (no singularity by 2040).
On point 12, Drone delivery: If the FAA is the reason, we should expect to see this already happening in China?
My hypothesis is, the problem is noise. Even small drones are very loud, and ones large enough to lift the larger packages would be deafening. This is something that's very hard to engineer away, since transferring large amounts of energy into the air is an unavoidable feature of a drone's mode of flight. Aircraft deal with this by being very high up, but drones have to come to your doorstep. I don't see people being ok with that level of noise on a constant, unpredictable basis.
Good point. OTOH, I feel like there are some cities in the world (maybe in China?) where it's super noisy most of the time anyway, with lots of honking cars and whatnot. Also there are rural areas where you don't have neighbors to annoy.
There is at least one firm doing drone delivery in China and they just approved a standard for it.
Lawnmowers are also very loud yet widely tolerated (more or less). Plus, delivery drones only need to drop off the package and fly away; the noise pollution will only last a few seconds. I also don't see why it would necessarily be unpredictable; drones don't get stuck in traffic. Maybe a dedicated time window each day becomes an industry standard.
But the real trouble I see with delivery drones is: what's the actual point? What problem is being solved here? Current delivery logistics work very well, I don't see much value being squeezed out of even faster/more predictable delivery. Looks like another solution in search of a problem to me.
To me, the most important thing isn't speed or predictability. It's price. Current delivery methods require a human being. People are expensive. If a delivery drone removes the human being from the equation then that could remove a significant fraction of the price.
There are multiple land-based delivery methods that don't require a human: https://www.gearbrain.com/autonomous-food-delivery-robots-2646365636.html
Why couldn't land-based delivery vehicle become autonomous though? That would also cut out the human in the loop.
One reason might be that autonomous flying drones are easier to realize. It is true that air is an easier environment to navigate than the ground, but landing and taking off at the destination could involve very diverse and unpredictable situations. You might run into the same long-tail problem as self-driving cars, especially since a drone that can lift several kilos has dangerously powerful propellers.
Another problem is that flying vehicles in general are energy inefficient due to having to overcome gravity, and even more so at long distances (tyranny of the rocket equation). Of course you could use drones just for the last mile, but that's an even smaller pool to squeeze value out of.
In general, delivery drones seem less well-suited for densely populated urban environments where landing spots are hard to come by and you only need a few individual trips to serve an entire apartment building. And that's where most of the world will be living anyway.
The underlying assumption of this post is looking increasingly unlikely to obtain. Nevertheless, I find myself back here every once in a while, wistfully fantasizing about a world that might have been.
I think the predictions hold up fairly well, though it's hard to evaluate, since they are conditioning on something unlikely, and because it's only been 1.5 years out of 20, it's unsurprising that the predictions look about as plausible now as they did then. I've since learned that the bottleneck for drone delivery is indeed very much regulatory, so who knows whether it'll exist in 2040. We still don't have flying cars, after all, for basically-regulatory reasons. The military technology I outlined is looking ever-more-plausible thanks to the war in Ukraine illustrating the importance of drones of various kinds.
Thanks for this, really interesting!
Meta question: when you wrote this list, what did your thought process/strategies look like, and what do you think are the best ways of getting better at this kind of futurism?
More context:
Thanks! Good idea to make your own list before reading the rest of mine--I encourage you to post it as an answer.
My process was: I end up thinking about future technologies a lot, partly for my job and partly just cos it's exciting. Through working at AI Impacts I've developed a healthy respect for trend extrapolation as a method for forecasting tech trends; during the discontinuities project I was surprised by how many supposedly-discontinuous technological developments were in fact bracketed on both sides by somewhat-steady trends in the relevant metric. My faith in trend extrapolation has made successful predictions at least once, when I predicted that engine power-to-weight ratios would form a nice trend over two hundred years and yep.
As a result of my faith in trend extrapolation, when I think about future techs, the first thing I do is google around for relevant existing trends to extrapolate. Sometimes this leads to super surprising and super important claims, like the one about energy being 10x cheaper. (IIRC extrapolating the solar energy trend gets us to energy that is 25x cheaper or so, but I was trying to be a bit conservative).
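For readers who want to reproduce that kind of extrapolation, here's a minimal sketch. The ~16%/year decline rate is an illustrative assumption in the rough ballpark of historical solar PV cost trends, not a figure taken from the comment above:

```python
# Minimal trend-extrapolation sketch: project a cost forward assuming a
# constant exponential decline, as with historical solar PV prices.
# The 16%/year decline rate is an illustrative assumption.

def extrapolate_cost(current_cost, annual_decline, years):
    """Project cost after `years` at a constant fractional annual decline."""
    return current_cost * (1 - annual_decline) ** years

years = 2040 - 2021
factor = 1 / extrapolate_cost(1.0, 0.16, years)
print(f"Implied cost reduction by 2040: ~{factor:.0f}x")
# A ~16%/year decline sustained for 19 years implies a ~25-30x drop,
# which is how naive extrapolation produces such aggressive numbers.
```

The fragility of the method is also visible here: small changes to the assumed annual rate compound into large differences over two decades, which is one reason to round such estimates down, as the comment above does.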
As for the specific list I came up with: This list was constructed from memory, when I was having trouble focusing on my actual work one night. The things on the list were things I had previously concluded were probable, sometimes on the basis of trend extrapolation and sometimes not.
I wouldn't be surprised if I'm just wrong about various of these things. I don't consider myself an expert. Part of why I made the post is to get pushback, so that I could refine my view of the future.
I don't know what your bottleneck is, I'm afraid. I haven't even seen your work, for all I know it's better than mine.
I agree feedback by reality would be great but alas takes a long time to arrive. While we wait, getting feedback from each other is good.
I'm looking for a list such that for each entry on the list we can say "Yep, probably that'll happen by 2040, even conditional on no super-powerful AGI / intelligence explosion / etc." Contrarian opinions are welcome but I'm especially interested in stuff that would be fairly uncontroversial to experts and/or follows from straightforward trend extrapolation. I'm trying to get a sense of what a "business as usual, you'd be a fool not to plan for this" future looks like. ("Plan for" does not mean "count on.")
Here is my tentative list. Please object in the comments if you think anything here probably won't happen by 2040, I'd love to discuss and improve my understanding.
My list is focused on technology because that's what I happened to think about a bunch, but I'd be very interested to hear other predictions (e.g. geopolitical and cultural) as well.