This was a good catch! I did actually mean world GDP, not world GDP growth. Because people have already predicted on this, I added the correct questions above as new questions, and am leaving the previous questions here for reference:
If you're the question author, you can resolve your question on Elicit by clicking 'Yes' or 'No' in the expanded question!
How to add your own questions:
See our launch post for more details!
You can search for the question on elicit.org/binary and see the history of all predictions made! E.g. if you copy the question title in this post and search by clicking Filter, then paste the title into "Question title contains," you can find the question here.
Yeah, this seems pretty reasonable. It's actually stark looking at the Our World in Data figures – that seems really high per year. Do you have your model somewhere? I'd be interested to see it.
A rough distribution (on a log scale) based on the two points you estimated for wars (95% < 1B people die in wars, 85% < 10M people die in wars) gives a median of ~2,600 people dying. Does that seem right?
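For reference, here's a sketch of that fit, assuming a single lognormal pinned to the two stated quantiles (my version lands around ~3,900, the same ballpark as ~2,600, so the original fit was presumably slightly different):

```python
# A minimal sketch: solve for the lognormal whose 85th and 95th percentiles
# match the two stated points, then read off its median. Illustrative only.
import math
from scipy.stats import norm

q85, q95 = 10e6, 1e9                      # 85% < 10M deaths, 95% < 1B deaths
z85, z95 = norm.ppf(0.85), norm.ppf(0.95)

# Solve mu + z * sigma = ln(quantile) for the two (z, quantile) pairs.
sigma = (math.log(q95) - math.log(q85)) / (z95 - z85)
mu = math.log(q85) - z85 * sigma

print(f"median = {math.exp(mu):,.0f}")    # ~3,900
```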
I noticed that your prediction and jmh's prediction are almost the exact opposite:
(I plotted this here to show the difference, although this makes the assumption that you think the probability is ~uniformly distributed from 2030 – 2100). Curious why you think these differ so much? Especially jmh, since 90% by 2030 is more surprising - the Metaculus prediction for when the next human being will walk...
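For concreteness, a minimal sketch of that uniform-spread assumption; the probabilities below are placeholders, not either forecaster's actual numbers:

```python
# Spread each forecaster's remaining probability mass uniformly over
# 2030-2100, so the CDF rises linearly from P(by 2030) to 1.0 at 2100.
import numpy as np
import matplotlib.pyplot as plt

years = np.linspace(2030, 2100, 200)

def uniform_tail_cdf(p_by_2030):
    return p_by_2030 + (1 - p_by_2030) * (years - 2030) / (2100 - 2030)

for p in (0.9, 0.1):                      # placeholder probabilities
    plt.plot(years, uniform_tail_cdf(p), label=f"P(by 2030) = {p}")
plt.xlabel("year"); plt.ylabel("cumulative probability"); plt.legend()
plt.show()
```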
Thank you for putting this spreadsheet database together! This seemed like a non-trivial amount of work, and it's pretty useful to have it all in one place. Seeing this spreadsheet made me want:
I thought the 2008 GCR questions were really interesting, and plotted the median estimates here. I was surprised by / interested in:
This is a really great conditional question! I'm curious what probability everyone puts on the assumption (GPT-N gets us to TAI) being true (i.e. do these predictions have a lot of weight in your overall TAI timelines)?
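To make the weighting concrete, here's a toy law-of-total-probability calculation; every number is a made-up placeholder:

```python
# How a conditional forecast feeds into an unconditional TAI timeline.
p_gpt_n_path = 0.3                 # P(GPT-N gets us to TAI) -- the assumption
p_tai_by_2040_given_path = 0.7     # the conditional prediction
p_tai_by_2040_otherwise = 0.2      # P(TAI by 2040 via any other route)

p_tai_by_2040 = (p_gpt_n_path * p_tai_by_2040_given_path
                 + (1 - p_gpt_n_path) * p_tai_by_2040_otherwise)
print(p_tai_by_2040)  # 0.35 -- the conditional's weight scales with P(assumption)
```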
I plotted human_generated_text and sairjy's answers:
I also just discovered BERI's x-risk prediction market question set and Jacobjacob & bgold's AI forecasting database, which seem really helpful for this!
Here's a colab you can use to do this! I used it to make these aggregations:
The Ethan + Datscilly distribution is a calculation of:
- 25% * Your inside view of prosaic AGI
- 60% * Datscilly's prediction (renormalized so that all the probability mass falls before 2100)
- 15% * We get AGI > 2100 or never
This has an earlier median (2040) than your original distribution (2046).
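For concreteness, a minimal sketch of that weighted mixture; the two input arrays are uniform placeholders standing in for the actual Elicit snapshots the colab loads:

```python
import numpy as np

years = np.arange(2021, 2100)
inside_view = np.full(len(years), 1 / len(years))   # placeholder distribution
datscilly = np.full(len(years), 1 / len(years))     # placeholder distribution
datscilly /= datscilly.sum()       # renormalize so all mass falls before 2100

# 25% inside view + 60% Datscilly + 15% reserved for "> 2100 or never"
mixture = 0.25 * inside_view + 0.60 * datscilly
p_after_2100_or_never = 0.15
assert abs(mixture.sum() + p_after_2100_or_never - 1.0) < 1e-6

# Median: first year where the cumulative mass reaches 0.5
median_year = years[np.searchsorted(np.cumsum(mixture), 0.5)]
print(median_year)
```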
(Note for the colab: You can use this to run your own aggregations by plugging in Elicit snapshots of the distributions you want to aggregate. We're actively working on the Elicit API, so if th...
Daniel and SDM, what do you think of a bet with 78:22 odds (roughly 3.5:1) based on the differences in your distributions, i.e.: if AGI happens before 2030, SDM owes Daniel $78; if AGI doesn't happen before 2030, Daniel owes SDM $22.
This was calculated by:
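(The derivation is cut off above, but as a sanity check on the stated stakes: 78:22 is the zero-expected-value point at P(AGI before 2030) = 0.22, so the bet favors whichever of you is on the right side of that probability.)

```python
# Expected value of the bet from Daniel's side, given the stated stakes.
daniel_wins, daniel_loses = 78, 22   # $78 if AGI before 2030, -$22 otherwise

def ev_for_daniel(p_agi_before_2030):
    return p_agi_before_2030 * daniel_wins - (1 - p_agi_before_2030) * daniel_loses

print(ev_for_daniel(0.22))           # ~0: the break-even probability
```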
Oh yeah that makes sense, I was slightly confused about the pod setup. The approach would've been different in that case (still would've estimated how many people in each pod were currently infected, but would've spent more time on the transmission rate for 30 feet outdoors). Curious what your current prediction for this is? (here is a blank distribution for the question if you want to use that)
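For anyone re-running this, here's a rough sketch of the structure I'd have used; every number below is a placeholder assumption, not an actual estimate:

```python
# P(catch it) ~= P(someone in the pod is currently infected) x P(transmission
# at 30 ft outdoors, given an infected person present). Placeholder numbers.
pod_size = 10                  # hypothetical pod size
p_person_infected = 0.005      # assumed current prevalence per person
p_transmit_outdoors = 0.01     # assumed 30-ft-outdoors transmission risk

p_someone_infected = 1 - (1 - p_person_infected) ** pod_size
print(f"{p_someone_infected * p_transmit_outdoors:.3%}")
```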
Here’s my prediction for this! I predicted a median of March 1, 2029. Below are some of the data sources that informed my thinking.
Here's my prediction, and here's a spreadsheet with more details (I predicted expected # of people who would get COVID). Some caveats/assumptions:
In a similar vein to this, I found several resources that make me think it should be higher than 1% currently and in the next 1.5 years:
If people don't have a strong sense of who these people are/would be, you can look through this Google Scholar citation list (this is just the top AI researchers, not AGI researchers).
An update: We've set up a way to link your LessWrong account to your Elicit account. By default, all your LessWrong predictions will show up in Elicit's binary database, but you won't be able to add notes or filter for your predictions.
If you link your accounts, you can:
* Filter for and browse your LessWrong predictions on Elicit (you'll be able to see them by filtering for 'My predictions')
* See your calibration for LessWrong predictions you've made that have resolved
* Add notes to your LessWrong predictions on Elicit
* Predict on LessWrong questions in the Elic...