TL;DR: Post-AGI career advice needed (asking for a friend).
Let's assume, for the sake of discussion, that Leopold Aschenbrenner is correct that at some point in the fairly near future (possibly even, as he claims, this decade) AI will be capable of acting as a drop-in remote worker: as intelligent as the smartest humans, capable of doing basically any form of intellectual work that doesn't have in-person requirements, and able to do it as well as or better than pretty much all humans. Let's also assume that it's at least two or three orders of magnitude cheaper than current pay for intellectual work (so at least an order of magnitude cheaper than a subsistence income), and probably decreasing in cost as well.
Let's also assume for this discussion that at some point after that (perhaps not very long after, given the increased capacity to do intellectual work on robotics), developments in robotics overcome Moravec's paradox, and mass production of robots greatly decreases their cost, to the point where a robot (humanoid or otherwise) can do basically every job that requires manual dexterity, hand-eye coordination, and/or bodily agility, again for significantly less than a human subsistence wage. Let's further assume that some of the new robots are well waterproofed, so that even plumbers, lifeguards, and divers are out of work, and also that some of them can be made to look a lot like humans, for tasks where that appearance is useful or appealing.
I'd also like to assume for this discussion that the concept of a "human job" is still meaningful, i.e. that the human race doesn't go extinct or get entirely disempowered, and that we don't to any great extent merge with machines: some of us may get very good at using AI-powered tools or collaborating with AI co-workers, but we don't effectively plug AI in as a third hemisphere of our brain to the point where it dramatically increases our capabilities.
So, under this specific set of assumptions, what types of paying jobs (other than being on UBI) will then still be available to humans, even if only to talented ones? How long-term are the prospects for these jobs (after the inevitable economic transition period)?
[If you instead want to discuss the probability/implausibility/timelines of any or all of my three assumptions, rather than the economic/career consequences if all three of them occurred, then that's not an answer to my question, but it is a perfectly valid comment, and I'd love to discuss that in the comments section.]
So the criterion here is basically "jobs for which being an actual real human is a prerequisite".
Here are the candidate job categories I've already thought of:
(This is my original list, plus a few minor edits: for a list significantly revised in light of all the discussion from other people's answers and comments, see my answer below.)
1. Doing something that machines can do better, but that people are still willing to pay to watch a very talented/skilled human do about as well as any human can (on TV or in person).
Examples: chess master, Twitch streamer, professional athlete, Cirque du Soleil performer.
Epistemic status: already proven for some of these: the first two are things that machines have been able to do better than humans for a while, yet people are still willing to pay to watch a human do them about as well as any human can. Also seems very plausible for the others, where current robotics is not yet up to doing better.
Economic limits: If you're not in the top O(1000) people in the world at some specific activity that plenty of people in the world are interested in watching, then you can make roughly no money off this. Despite the aspirations of a great many teenaged boys, being an unusually good (but not amazing) video gamer is not a skill that will make you any money at all.
2. Doing some intellectual and/or physical work that AI/robots can now do better, but for some reason people are willing to pay at least an order of magnitude more to have it done less well by a human, perhaps because they trust humans more. (Could also be combined with item 3. below.)
Examples: doctor, veterinarian, lawyer, priest, babysitter, nurse, primary school teacher.
Epistemic status: Many people tell me "I'd never let an AI/a robot do <high-stakes intellectual or physical work> for me/my family/my pets…" They are clearly quite genuine in this opinion, but it's unclear how deeply they have considered the matter. It remains to be seen how long this opinion will last in the presence of a very large price differential when the AI/robot-produced work is actually, demonstrably, just as good if not better.
Economic limits: I suspect there will be a lot of demand for this at first, and that it will decrease over time, perhaps even quite rapidly. Requires being reliably good at the job, and at appearing reassuringly competent while doing so.
I'd be interested to know if people have specific examples of this that they believe will never go away, or at least will take a very long time to go away. (Priest is my personal strongest candidate.)
3. Giving human feedback/input/supervision to/of AI/robotic work/models/training data, in order to improve, check, or confirm its quality.
Examples: current AI-training crowd-workers, Wikipedian (currently unpaid), acting as a manager or technical lead to a team of AI white-collar workers, focus group participant, filling out endless surveys on the fine points of Human Values.
Epistemic status: seems inevitable, at least at first.
Economic limits: I imagine there will be a lot of demand for this at first. I'm rather unsure whether that demand will gradually decline as the AIs get better at doing things/self-training without needing human input, or whether it will increase over time because the overall economy is growing so fast, and/or more capable models need more training data, and/or society keeps moving out of the previous distribution. [A lot of training data is needed, more training data is always better, and the resulting models can be used a great many times; however, there is clearly an element of diminishing returns as more data is accumulated, and we're already getting increasingly good at generating synthetic training data.]
4. In-person sex work where the client is willing to pay a (likely order-of-magnitude) premium for a real human provider.
Epistemic status: human nature.
Economic limits: Requires rather specific talents.
5. Providing some nominal economic value while being a status symbol, where the primary point is to demonstrate that the employer has so much money they can waste some of it on employing a real human ("They actually have a human maid!")
Examples: (status symbol) receptionist, maid, personal assistant
Epistemic status: human nature (assuming there are still people this unusually rich).
Economic limits: There are likely to be relatively few positions of this type, at most a few per person so unusually rich that they feel a need to show this fact off. (Human nobility used to do a lot of this, centuries back, but there the servants were supplying real, significant economic value, and the being-a-status-symbol component of it was mostly confined to the uniforms the servants wore while doing so.) Requires rather specific talents, including looking glamorous and expensive, and probably also being exceptionally good at your nominal job.
6. Providing human-species-specific reproductive or medical services.
Examples: Surrogate motherhood, wet-nurse, sperm/egg donor, blood donor, organ donor.
Epistemic status: still needed.
Economic limits: Significant medical consequences for the provider, low demand, and improvements in medicine may reduce that demand further.
So, what other examples can people think of?
One category whose long-term viability I'm personally really unsure about is being an artist/creator/influencer/actor/TV personality. Just being fairly good at drawing, playing a musical instrument, or other creative skills is clearly going to be automated out of having any economic value, and being really rather good at it is probably going to turn into "your primary job is to create more training data for the generative algorithms", i.e. become part of item 3. above. What is less clear to me is whether (un-human-assisted) AIs will ever become better than world-class humans (who are using AI tools and/or working with AI coworkers) at the original-creativity aspect of this sort of work (they will, inevitably, get technically better at performing it than unassisted humans), and, if they do, to what extent and for how long people will still want content from an actual human instead, just because it's from a human, even if it's not as good (thus making this another example of either item 1. or 5. above).
Regarding category 2, and the specific example of "lawyer", I personally think that most of this category will go away fairly quickly. Full disclosure: I'm a lawyer (mostly intellectual-property-related work), currently working for a big corporation, so my impression is anecdotal, but not uninformed.
TL;DR - I think most lawyer-work is going away with AIs, pretty quickly. Only creating policy and judging seem to be the kinds of things that people would pay other humans to do. (For a while, anyway.)
I'd characterize legal work as falling primarily into three main categories, which in short are: paper-pushers, sharks, and judges.
Note that I would consider most political positions that lawyers usually fill to be in one of these categories. For instance, legislators (who obviously need not be lawyers, but often are) do transactional legal work. Courtroom judges are clearly the third group. Prosecutors/DAs are sharks.
Paper-pushers:
I see AI taking over this category almost immediately. (It's already happening, IMO.)
A huge amount of this work is preparing appropriate documents to make everyone feel that their position is adequately protected. LLMs are already superficially good at this, and the fact that there are books out there providing basic template forms for so many transactional legal matters suggests that this is an easily templatized category.
As far as trusting the AI to do the work in place of a human, this is the type of work that most corporations or individuals feel very little emotion over. I have rarely been praised for producing a really good legal framework document or contract, and the one real exception was when it encapsulated good risk analysis (judging).
Sharks:
My impression is that this will take longer to be taken over, but not all that long. (I think we could see it within a few years, even without real AGI coming into existence.)
This work is about aggressively collecting and arguing for a specific side, pre-identified by the client, so there is no judgment or human value necessarily associated with it. I don't think the lack of a human presence will feel very significant to someone choosing an advocate.
At the moment, this is (IMHO) the category requiring the most creativity in its approach, but given what I see from current LLMs, I think it remains essentially a word/logic game, and I can imagine AI being specifically trained to do it well.
My biggest concern here is regarding hallucination. I'm curious what others with a real technical sense of how this can be limited appropriately would think about this.
Judges:
I think that this is the last bastion of human lawyering. It's the most closely tied to specific human desires, and I think relinquishing judgment to a machine will FEEL hardest for people.
Teaching a machine to judge against a specific set of criteria should be easy-ish. Automated sentencing guidelines are intended to do exactly this, and we already use them in many places. And an AI should be able to create a general sense of what risks are presented by a given set of facts, I suspect.
But the real issue in judging is deciding which of those risks carry the most significant likelihood and consequence, BASED ON EXPECTED HUMAN RESPONSES. That's what an in-house counsel at a company spends a lot of time advising on, and what a courtroom judge bases decisions that extend or expand existing law on.
And while I think that AI can do that, I also think that most people will see the end result as being very dependent on the subjective view of the judge/counselor as to what is really important and really risky. And that level of subjectivity is something that may well be too hard to trust to an AI that is not really transparent to the client community (either the company leadership, or the public at large).
So, I don't think it's a real lack of capability here, but that this role hits humans in a soft spot and they will want to retain this under visible human control for longer. Or at least we will all require more experience and convincing to believe that this type of judging is being done with a human point of view.
Basically, this is already a space where a lot of people feel political pressure has a significant impact on results, and I don't see anyone being comfortable letting a machine of possibly alien/inscrutable political ideology make these judgments or give this advice.
So I think the paper-pushers and sharks are short-lived in the AI world.
Counselors/judges will last longer, I think, since they are roles that specifically reflect human desire as expressed in law. But even then, most risk-evaluating starts with analysis that I think AIs will be tasked to do, much like interns do today for courtroom judges. So I don't think we'll need nearly as many.
On a personal note, I hope to be doing more advising (rather than paper-pushing and negotiating) to at least slightly future-proof my current role.
Thanks for the detailed, lengthy (and significantly self-deprecating) analysis of that specific example; clearly you've thought about this a lot. I obviously know far less about this topic than you do, but your analysis, both of likely future AI capabilities and of human reactions to them, sounds accurate to me.
Good luck with your career.