Would a neutral end make sense? What would that even mean? E.g., there is a superintelligence, but it leaves us alone.
The closest thing I can think of off the top of my head is The Demiurge's Older Brother.
- It seems strange to expect effective solutions to problems of the physical world without any input from the physical world.
To human intuition, yes, but fundamentally, no; not if the needed input is already implicit in the training data, or if the AI can do enough pure math to find better ways to run physical simulations and extrapolate from the data it already has. You don't need to be able to predict General Relativity from three frames of a falling apple to think that maybe, with simultaneous access to the results of every paper and experiment ever published and the specs of every product, instrument, and device ever made, you could reliably come up with some importantly useful new things.
As a minor example, we already live in a world (and have for a decade or more) where materials informatics and computational chemistry tools, with no AGI at all, can cut in half the number of experiments a company needs to develop a new material. Or where generative design software, with no AGI, run by someone who is not an expert in the relevant field of engineering, can greatly improve the performance of mechanical parts, which can then be made immediately with 3D printers. That's a very small subset of STEM, for sure, but it exists today.
The following contains resources that I (Eleni) curated to help the AI Science team of AI Safety Camp 2023 prepare for the second half of the project, i.e., forecasting science capabilities. Suggestions for improvement of this guide are welcome.
Key points and readings
- For forecasting in general:
- For forecasting AI in particular:
If you’d like to read some posts to inform your thinking about explanation, I highly recommend:
Specific changes to consider:
How does change happen?
Scaling laws: more high-quality data gets us better results than more parameters (see the sketch below).
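As a concrete, deliberately rough illustration of that point: the Chinchilla paper (Hoffmann et al., 2022) fits pre-training loss as L(N, D) = E + A/N^α + B/D^β, where N is parameter count and D is training tokens. The sketch below plugs the paper's reported constants into that formula; the "Gopher-like" and "Chinchilla-like" numbers are approximations, and the point is only qualitative: at roughly equal compute, the smaller model trained on more data is predicted to reach lower loss.

```python
# Rough sketch of the Chinchilla scaling-law fit (Hoffmann et al., 2022):
# L(N, D) = E + A / N**alpha + B / D**beta
# Constants below are the paper's reported fits; treat the outputs as
# illustrative approximations, not precise predictions.

E, A, B = 1.69, 406.4, 410.7   # irreducible loss and fitted coefficients
ALPHA, BETA = 0.34, 0.28       # fitted exponents for parameters and data

def predicted_loss(n_params: float, n_tokens: float) -> float:
    """Predicted pre-training loss for N parameters and D training tokens."""
    return E + A / n_params**ALPHA + B / n_tokens**BETA

# Two settings at roughly equal training compute (FLOPs ~ 6 * N * D):
# a big model on little data vs. a smaller model on much more data.
for name, n, d in [("Gopher-like (280B params, 300B tokens)", 280e9, 300e9),
                   ("Chinchilla-like (70B params, 1.4T tokens)", 70e9, 1.4e12)]:
    print(f"{name}: predicted loss ~ {predicted_loss(n, d):.2f}")
```

Running this gives a predicted loss of about 1.99 for the Gopher-like setting versus about 1.94 for the Chinchilla-like one, which is the sense in which more high-quality data beats more parameters at a fixed compute budget.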
Concrete AI stories
Understanding STEM AGI scenarios
Forecasting AGI and forecasting science models overlap; consider claims like:
1) that AIs will be able to publish correct science before they can load dishwashers;
2) that the world ends before more than 10% of cars on the road are autonomous.
Continuity vs. discontinuity in AI progress
Meta-scientific considerations