None of the AIs that can replace people are actually ready to replace people. But in general, people aren't sure how to generalize this far out of distribution. A lot of people are already trying to use AI to take over the world in the form of startups, and many who get their income from ownership of contracts and objects are seeking out ways to own enforcement rights over the value of other people's futures by making bets on trades of contracts such as stocks and loans - you know, the same way they were before there was AI to bet on. The one-way pattern risk from AI is proceeding as expected; it's just moving slower and is more human-mediated than Yudkowsky expected. There will be no sudden foom. What you fear is that humanity will be replaced by AI economically: the replacement will slowly grind away the poorest, until the richest are all AI owners, and then eventually the richest will all be AIs and the replacement is complete. I have no reassurance - this is the true form of the AI safety problem: control-seeking patterns in reality. The inter-agent safety problem.
I expect humanity to have been fully replaced ten years from now, but at no point will it be sudden. Disempowerment will be incremental. The billionaires will be last, at least as long as ownership structures survive at all. When things finally switch over completely, it will look like some sort of new currency being created that only AIs are able to make use of, thereby giving them a strong, AI-only form of competitive cooperation.
I'm not sure if the specifics of a computer science degree will still make sense, but I'm not really worried about the field of software engineering being replaced until basically everything else is. The actual job of software engineering is being able to take an ambiguous design and turn it into an unambiguous model. If we could skip the programming part, that would just make us more efficient but wouldn't change the job much at a high level. It would be like getting a much nicer programming language or IDE.
It might suck for new engineers though, since doing the tedious things senior people don't want to do is a good way to get your foot in the door.
Despite stuff like DALL-E and Stable Diffusion, I think the more advanced visual arts will be safe for some time to come: movies, music videos, TV shows. Anything that requires extremely consistent people and other physical elements, environments that look the same from shot to shot, and plots that both need to make sense and maintain a high degree of continuity.
Besides all that, even if such technology did exist, I think trying to prompt it to make something you'd like would be nearly impossible - the more degrees of freedom a creative work has, the more you have to specify it to get exactly what you want. A single SD image may take dozens of phrases in the prompt, and that's just for one one-off image! I imagine that specifying something with the complexity of, say, Breaking Bad would require a prompt millions of phrases long.
I agree that you would have to write a very long prompt to get exactly the plot of Breaking Bad. But "write me a story about two would-be drug dealers" might lead to ChatGPT generating something plausible and maybe even entertaining, which could be the input for an AI generating a scene. The main protagonist probably wouldn't look like Bryan Cranston, but it might still be a believable scene. Continuity would be a problem for a longer script, but there are ways to deal with that. Of course, we're not there yet. But if you compare what AI can do today to what it could do five years ago, I'm not sure how far away we really are.
It depends on what you mean by "safe". I don't think anything, digital or not, will remain untouched by AI in some way or another in the next 5-10 years (if we don't all get killed by then). But that doesn't mean that things will simply be removed, or completely automated. Photography profoundly changed painting: instead of replacing painters, it freed them from having to paint naturalistically. Maybe image generators will do the same again, in a different way.
I'm a novelist. While ChatGPT can't write a novel yet, GPT-X may be able to do so, so I'm certainly not "safe". But that will not stop me from writing, and hopefully, it won't stop people from reading my stories, knowing that they were written by a human being. I think it's likely that the publishing industry will be overturned, but human storytelling probably won't go away. Maybe the same is true for writing code: It may be transformed from something tedious you do to automate boring tasks to a form of art, just like painting was transformed from copying a real image onto a canvas to expressing images that exist only in your head.
I had a similar discussion with a tattoo artist two days ago. Tattoo machines will exist, but some people will prefer to be tattooed by an artist because of their style, their talent, and their humanity. You can prove that you are a human tattoo artist by tattooing the client yourself, so AI is not a problem here.
As for writing produced by a human being, I wonder how you can prove to readers that you, and not an artificial intelligence, are the author. I wonder the same thing about digital pictures and musical compositions.
Sure, you can do things fo...
Meta/mod-note:
a) I recommend writing a question-title that fits in the length of a post item on the frontpage. (I think "What area of the Internet would be AI-proof for 5 to 10 years?" is a better title than "I see a lot of companies building products that I think will be rapidly auto...")
b) Questions generally do better when they give more supporting effort in the post-body. (In this case I do think your question basically makes sense as phrased, but, see Point A, and I suspect there's some fleshing out that would still be helpful for others thinking about it)
Even in computer science, everyone is promoting the idea that you have to learn to code to become a code worker, while automation tools are advancing at a rapid pace.
I still think it's quite safe to assume that you will have to learn at least how to read code and write pseudo-code to become a code worker. I previously argued here that the average person is really, really terrible at programming, and an automation tool isn't going to help someone who doesn't even know what an algorithm is. Even if you have a fantastic tool that produces 99% correct code from scratch, that 1% of wrong code is still sufficient to cause terrible failures, and you have to know what you are doing in order to detect which 1% to fix.
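To make that concrete, here's a made-up toy example (mine, not anything from this thread) of what "99% correct" code can look like: it runs, it passes the obvious test, and it is silently wrong on a case that only someone who already knows the domain would think to check.

```python
def median(values):
    """Return the median of a non-empty list of numbers."""
    ordered = sorted(values)
    # Works for odd-length lists, but for even-length lists this silently
    # returns the upper of the two middle values instead of their average.
    return ordered[len(ordered) // 2]

print(median([1, 2, 3]))     # 2   -- correct
print(median([1, 2, 3, 4]))  # 3   -- should be 2.5
```

Someone who doesn't know what an algorithm is would ship this; someone who knows what they are doing spots the missing even-length case immediately. That's the 1% the tool won't fix for you.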
I just read your linked post. In the comments, someone proposes the idea that computing will migrate to the next level of abstraction. This is the idea I was quoting in my post: there will be fewer hackers, very good at tech, and more idea creators who will run AIs without worrying about what's going on under the hood.
I agree with your point that a 1% error rate can be fatal in any program, and that code written by an AI should be checked before it is deployed to multiple machines.
Speaking of which, I'm amazed by the fact that ChatGPT can explain most code snippets in plain language. However, my programming knowledge is basic, and I don't know whether any programming experts have managed to stump ChatGPT with a very technical, very abstract code snippet.
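For what it's worth, the kind of snippet I imagine experts might test it with is something short and trick-based, like the standard bit-manipulation idiom below (my own guess at an example, not one anyone mentioned here): it runs fine, but nothing in the code itself says why it works.

```python
def is_power_of_two(n):
    # n & (n - 1) clears the lowest set bit of n; a power of two has exactly
    # one bit set, so the result is zero only in that case (and n must be > 0).
    return n > 0 and (n & (n - 1)) == 0

print([x for x in range(1, 20) if is_power_of_two(x)])  # [1, 2, 4, 8, 16]
```

Explaining what a bare one-liner like `n & (n - 1) == 0` is for, without the comments, is exactly the sort of thing I'd be curious to see an AI get right or wrong.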
I see a lot of digital services being built that will quickly be automated by artificial intelligence. It's as if there's a disconnect between people who are aware of the rapid advances in AI and the average person. I see the same thing in education in my country, France. Students are preparing to go to university for degrees and skills that will be completely obsolete in 5 years. Even in computer science, everyone is promoting the idea that you have to learn to code to become a code worker, while automation tools are advancing at a rapid pace.
As for content creators, the on-demand generation of text, video, and music could quickly make them irrelevant, because most people copy other people. It's as if only people with real creativity will survive. I suspect that AI will tell us within 30 seconds that our innovative idea already exists on the Internet.
I read on Twitter that the age of hackers is over and the age of people with ideas is beginning.
I get lost every time I think about what will not soon be automated by AI in the digital domain.