I'm imagining a scenario in which OpenAI etc. continue to scale up their language models, and eventually we get GPT-6, which has the following properties:
--It can predict random internet text better than the best humans
--It can correctly answer questions that seem to require long chains of reasoning
--With appropriate prompts it can write novel arguments, proofs, code, etc. of roughly the same quality as the material it has read on the internet (the best of that material, if the prompt is designed correctly)
--With appropriate prompts it can give advice about arbitrary situations, including advice about strategies and plans. Again, the advice is about as good as the material it has read on the internet, or the best of it, if prompted correctly.
--It costs $200 per page of output, because just running the model requires a giant computing cluster.
My question is, how does this transform the world? I have the feeling that the world would be transformed pretty quickly. At the very least, the price of running the model would drop by orders of magnitude over the next few years due to algorithmic and hardware improvements, and then we'd see lots of jobs getting automated. But I'm pretty sure stuff would go crazy even before then. How?
(CONTEXT: I'm trying to decide whether "Expensive AGI" is meaningfully different from the usual AGI scenarios. If we get AGI but it costs $200 per page instead of $2, and thus isn't economically viable for most jobs, does that matter? EDIT: What if it costs $2,000 or $20,000 per page? Do things go FOOM soon even in that case?)
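For intuition on the EDIT question, here is a toy projection of how long "expensive" lasts. The one-year halving time is purely an assumption for illustration, not a forecast:

```python
# Toy projection: years for the per-page cost to fall to ~$2,
# assuming (purely as an illustration) that inference costs halve
# once per year from combined hardware and algorithmic progress.
for start in (200, 2_000, 20_000):  # starting dollars per page
    cost, years = start, 0
    while cost > 2:
        cost /= 2
        years += 1
    print(f"${start:>6,}/page -> ~$2/page in about {years} years")
```

Under that assumption, even the $20,000 scenario reaches $2/page in about 14 years; the interesting question is what happens during the expensive interval.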
I believe the central impact will be a powerful compression of knowledge and a flood of legibility, which will be available to institutions and leadership first. Examples include:
Even the highest figure, $20,000 per page, is a good deal for something like Wikipedia, where a page is read by millions of people, or for things like the Stanford Encyclopedia of Philosophy. This will have a big impact on high-traffic reference works of that kind.
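A back-of-envelope check on why even the pessimistic price works out for high-traffic pages; the reader count below is an assumed, illustrative figure, not data:

```python
# Back-of-envelope amortization of a high per-page cost over a
# large readership. The reader count is an illustrative assumption.
page_cost = 20_000   # dollars, the most pessimistic price above
readers = 1_000_000  # assumed readers over the page's lifetime
print(f"cost per reader: ${page_cost / readers:.2f}")  # -> $0.02
```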
While this could easily be used to generate high-quality propaganda, I feel it still weighs much more heavily in favor of the truth. Bullshit's biggest advantage is that it is fast, cheap, and easy to vary, whereas reality is inflexible and we comprehend it slowly. But under the proposed conditions, advanced bullshit and the truth cost the same amount and move at a similar speed, which leaves reality's inflexible pressure on every dimension of every problem as a decisive advantage for the truth. This has a big impact on any domain where the two compete.
Especially if it is at the lower end of the price scale, it becomes trivial to feed it multiple prompts and get multiple interpretations of the same question. This will give us a lot of information both in terms of compression and in terms of method, letting us redirect resources toward the most successful methods and drop inefficient ones. I further expect this to become very transparent very quickly, through mechanisms like the side-by-side comparison sketched below.
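A minimal sketch of what "multiple prompts, multiple interpretations" could look like. Everything here is hypothetical: `query_gpt6` is a stand-in for an imagined API, and the question and framings are made up for illustration:

```python
# Sketch: pose the same question under several framings and compare
# the answers. `query_gpt6` is a hypothetical stand-in; no such API
# exists.
from collections import Counter

def query_gpt6(prompt: str) -> str:
    # Placeholder for the imagined GPT-6 service.
    return "yes (placeholder answer)"

QUESTION = "Should the city build the new reservoir?"
FRAMINGS = [
    "As a civil engineer, answer: {q}",
    "As a budget analyst, answer: {q}",
    "As an ecologist, answer: {q}",
]

answers = [query_gpt6(f.format(q=QUESTION)) for f in FRAMINGS]

# Agreement across framings is weak evidence the answer is robust;
# disagreement shows which framing the answer is sensitive to.
print(Counter(answers))
```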
It will see heavy use by the intelligence community. A huge problem we have in the United States is our general lack of language capability; if, for example, GPT-6 knows Mandarin as well as any native speaker and translates to English as well as any professional translator, we suddenly break through that bottleneck and gain access to good information about Chinese attitudes. I expect this same mechanism will make foreign investment much more attractive almost universally, since domestic and foreign firms will now be working on an almost level playing field in any country with widespread internet access. If this prediction holds, I expect a large boom in investment in otherwise underdeveloped countries, because the opportunities will finally be legible.
Another interesting detail: if GPT-6 can provide the best summaries of the available knowledge, most of the world's institutions will be working from a much more uniform knowledge base than they do currently. My initial reaction was that this is clearly for the best, because the biggest roadblock to coordination is getting on the same page with the other stakeholders; but it also occurs to me that it makes transparent to everyone the cases where certain stakeholders hold an untenable position. I suspect this in turn makes it more likely that some parties get the screws put to them, and further that when a party a) understands its own position and b) understands that everyone else understands it, it is more likely to try something radical to shift the outcome. Consider North Korea, for example.