TL;DR: Post-AGI career advice needed (asking for a friend).
Let's assume, for the sake of discussion, that Leopold Aschenbrenner is correct that at some point in the fairly near future (possibly even, as he claims, this decade) AI will be capable of acting as a drop-in remote worker as intelligent as the smartest humans, capable of doing basically any form of intellectual work that doesn't have in-person requirements, and doing it as well as or better than pretty much all humans. Assume also that it's at least two or three orders of magnitude cheaper than current pay for intellectual work (so at least an order of magnitude cheaper than a subsistence income), and probably still decreasing in cost.
Let's also assume for this discussion that at some point after that (perhaps not very long after, given the increased capacity to do intellectual work on robotics), developments in robotics overcome Moravec's paradox, and mass production of robots greatly decreases their cost, to the point where a robot (humanoid or otherwise) can do basically every job that requires manual dexterity, hand-eye coordination, and/or bodily agility, again for significantly less than a human subsistence wage. Let's further assume that some of these new robots are well-waterproofed, so that even plumbers, lifeguards, and divers are out of work, and that some of them can be made to look a lot like humans, for tasks where that appearance is useful or appealing.
I'd also like to assume for this discussion that the concept of a "human job" is still meaningful: that the human race doesn't go extinct or get entirely disempowered, and that we don't to any great extent merge with machines. Some of us may get very good at using AI-powered tools or collaborating with AI co-workers, but we don't effectively plug AI in as a third hemisphere of our brain to the point where it dramatically increases our capabilities.
So, under this specific set of assumptions, what types of paying jobs (other than being on UBI) will then still be available to humans, even if only to talented ones? How long-term are the prospects for these jobs (after the inevitable economic transition period)?
[If you instead want to discuss the probability/implausibility/timelines of any or all of my three assumptions, rather than the economic/career consequences if all three of them occurred, then that's not an answer to my question, but it is a perfectly valid comment, and I'd love to discuss that in the comments section.]
So the criterion here is basically "jobs for which being an actual real human is a prerequisite".
Here are the candidate job categories I've already thought of:
(This is my original list, plus a few minor edits; for a list significantly revised in light of all the discussion from other people's answers and comments, see my answer below.)
Doing something that machines can do better, but that people are still willing to pay to watch a very talented/skilled human do about as well as any human can (on TV or in person).
Examples: chess master, Twitch streamer, professional athlete, Cirque du Soleil performer.
Epistemic status: already proven for some of these: the first two are things that machines have been able to do better than humans for a while, but people are still interested in paying to watch a human do them very well (for a human). It also seems very plausible for the others, where current robotics is not yet up to doing better.
Economic limits: If you're not in the top O(1000) people in the world at some specific activity that plenty of people in the world are interested in watching, then you can make roughly no money off this. Despite the aspirations of a great many teenaged boys, being an unusually good (but not amazing) video gamer is not a skill that will make you any money at all.
Doing some intellectual and/or physical work that AI/robots can now do better, but for some reason people are willing to pay at least an order of magnitude more to have it done less well by a human, perhaps because they trust humans more. (Could also be combined with item 3. below.)
Example: Doctor, veterinarian, lawyer, priest, babysitter, nurse, primary school teacher.
Epistemic status: Many people tell me "I'd never let an AI/a robot do <high-stakes intellectual or physical work> for me/my family/my pets…" They are clearly quite genuine in this opinion, but it's unclear how deeply they have considered the matter. It remains to be seen how long this opinion will last in the presence of a very large price differential when the AI/robot-produced work is actually, demonstrably, just as good if not better.
Economic limits: I suspect there will be a lot of demand for this at first, and that it will decrease over time, perhaps even quite rapidly. Requires being reliably good at the job, and at appearing reassuringly competent while doing so.
I'd be interested to know whether people can think of specific examples of this that they believe will never go away, or at least will take a very long time to go away. (Priest is my personal strongest candidate.)
Giving human feedback/input/supervision to/of AI/robotic work/models/training data, in order to improve, check, or confirm its quality.
Examples: current AI-training crowd-workers, Wikipedians (currently unpaid), acting as a manager or technical lead to a team of AI white-collar workers, focus-group participant, filling out endless surveys on the fine points of Human Values.
Epistemic status: seems inevitable, at least at first.
Economic limits: I imagine there will be a lot of demand for this at first. I'm rather unsure whether that demand will gradually decline, as the AIs get better at doing things/self-training without needing human input, or whether it will increase over time because the overall economy is growing so fast and/or more capable models need more training data and/or society keeps moving out-of-previous-distribution. [A lot of training data is needed, and more training data is always better, and the resulting models can be used a great many times; however, there is clearly an element of diminishing returns as more data is accumulated, and we're already getting increasingly good at generating synthetic training data.]
In-person sex work where the client is willing to pay a (likely order-of-magnitude) premium for a real human provider.
Epistemic status: human nature.
Economic limits: Requires rather specific talents.
Providing some nominal economic value while being a status symbol, where the primary point is to demonstrate that the employer has so much money they can waste some of it on employing a real human ("They actually have a human maid!")
Examples: (status symbol) receptionist, maid, personal assistant
Epistemic status: human nature (assuming there are still people this unusually rich).
Economic limits: There are likely to be relatively few positions of this type, at most a few per person so unusually rich that they feel a need to show this fact off. (Human nobility used to do a lot of this, centuries back, but there the servants were supplying real, significant economic value, and the being-a-status-symbol component of it was mostly confined to the uniforms the servants wore while doing so.) Requires rather specific talents, including looking glamorous and expensive, and probably also being exceptionally good at your nominal job.
Providing human-species-specific reproductive or medical services.
Examples: Surrogate motherhood, wet-nurse, sperm/egg donor, blood donor, organ donor.
Epistemic status: still needed.
Economic limits: Significant medical consequences, low demand, improvements in medicine may reduce demand.
So, what other examples can people think of?
One category whose long-term viability I'm personally really unsure about is being an artist/creator/influencer/actor/TV personality. Just being fairly good at drawing, playing a musical instrument, or other creative skills is clearly going to get automated out of having any economic value, and being really rather good at it is probably going to turn into "your primary job is to create more training data for the generative algorithms", i.e. become part of item 3. above. What is less clear to me is whether (un-human-assisted) AIs will ever become better than world-class humans (who are using AI tools and/or working with AI coworkers) at the original-creativity aspect of this sort of work (they will, inevitably, get technically better at performing it than unassisted humans), and if they do, to what extent and for how long people will still want content from an actual human instead, just because it's from a human, even if it's not as good (thus making this another example of either item 1. or 5. above).
There has been a lot of useful discussion in the answers and comments, which has caused me to revise and expand parts of my list. So that readers looking for practical career advice don't have to read the entire comments section to find the actual resulting advice, it seemed useful to me to give a revised list. Doing this as an answer in the context of this question seems better than either making it a whole new post, or editing the list in the original post in a way that would remove the original context of the answers and comments discussion.
This is my personal attempt to summarize the answers and comments discussion: other commenters may not agree (and are of course welcome to add comments saying so). As the discussion continues and changes my opinion, I will keep this version of the list up to date (even if that requires destroying the context of any comments on it).
List of Job Categories Safe from AI/Robots (Revised)
Doing something that machines can do better, but that people are still willing to pay to watch a very talented/skilled human do about as well as any human can (on TV or in person).
Examples: chess master, Twitch streamer, professional athlete, Cirque du Soleil performer.
Epistemic status: already proven for some of these: the first two are things that machines have been able to do better than humans for a while, but people are still interested in paying to watch a human do them very well (for a human). It also seems very plausible for the others, where current robotics is not yet up to doing better.
Economic limits: If you're not in the top O(1000) people in the world at some specific activity that plenty of people in the world are interested in watching, then you can make roughly no money off this. Despite the aspirations of a great many teenaged boys, being an unusually good (but not amazing) video gamer is not a skill that will make you any money at all.
Doing some intellectual and/or physical work that AI/robots can now do better, but for some reason people are willing to pay at least an order of magnitude more to have it done less well by a human, perhaps because they trust humans more. This could include jobs where people's willingness to pay is due to a legal requirement that certain work be done or supervised by a (suitably skilled/accredited) human (and these requirements have not yet been repealed).
Examples: Doctor, veterinarian, lawyer, priest, babysitter, nanny, nurse, primary school teacher.
Epistemic status: Many people tell me "I'd never let an AI/a robot do <high-stakes intellectual or physical work> for me/my family/my pets…" They are clearly quite genuine in this opinion, but it's unclear how deeply they have considered the matter. It remains to be seen how long this opinion will last in the presence of a very large price differential when the AI/robot-produced work is actually, demonstrably, just as good if not better.
Economic limits: I suspect there will be a lot of demand for this at first, and that it will decrease over time, perhaps even quite rapidly (though perhaps slower for some such jobs than others). Requires being reliably good at the job, and at appearing reassuringly competent while doing so.
Giving human feedback/input/supervision to/of AI/robotic work/models/training data, in order to improve, check, or confirm its quality.
Examples: current AI-training crowd-workers, Wikipedians (currently unpaid), acting as a manager or technical lead to a team of AI white-collar workers, focus-group participant, filling out endless surveys on the fine points of Human Values.
Epistemic status: seems inevitable, at least at first.
Economic limits: I imagine there will be a lot of demand for this at first. I'm rather unsure whether that demand will gradually decline, as the AIs get better at doing things/self-training without needing human input, or whether it will increase over time because the overall economy is growing so fast and/or more capable models need more training data and/or society keeps moving out-of-previous-distribution, so new data is needed. [A lot of training data is needed, and more training data is always better, and the resulting models can be used a great many times; however, there is clearly an element of diminishing returns as more data is accumulated, and we're already getting increasingly good at generating synthetic training data.] Another question is whether a lot of very smart AIs could extract this sort of data from humans without needing their explicit paid cooperation; indeed, granting permission for this, and not intentionally sabotaging it, might even become a condition for receiving part of UBI (at which point whether to call allowing this a career is a bit unclear).
Skilled participant in an activity that heavily involves interactions between people, where humans prefer to do this with other real humans, are willing to pay a significant premium to do so, and you are sufficiently more skilled/talented/capable/willing to cater to others' demands than the average participant that you can make a net profit off this exchange.
Examples: director/producer/lead performer for amateur/hobby theater, skilled comedy-improv partner, human sex-worker
Epistemic status: seems extremely plausible
Economic limits: Net earning potential may be limited, depending on just how much better/more desirable you are as a fellow participant than typical people into this activity, and on the extent to which this can be leveraged in a one-producer-to-many-customers way; however, making the latter factor high is challenging, because it conflicts with the human-to-real-human interaction requirement that allows you to out-compete an AI/robot in the first place. Often a case of turning a hobby into a career.
Providing some nominal economic value while being a status symbol, where the primary point is to demonstrate that the employer has so much money they can waste some of it on employing a real human ("They actually have a human maid!")
This can either be full-time employment as a status symbol for a specific status-signaler, or you can be making high-status "luxury" goods where owning one is a status signal, or at least has cachet. For the latter, like any luxury good, they need to be rare: this could be because they are individually handmade and/or were specifically commissioned by a particular owner, or because they are reproduced only in a "limited edition".
Examples: (status symbol) receptionist, maid, personal assistant; (status-symbol maker) "High Art" artist, Etsy craftsperson, portrait or commission artist, mechanical watch hand-assembler.
Epistemic status: human nature (for the full-time version, assuming there are still people this unusually rich).
Economic limits: For the full-time-employment version, there are likely to be relatively few positions of this type, at most a few per person so unusually rich that they feel a need to show this fact off. (Human nobility used to do a lot of this, centuries back, but there the servants were supplying real, significant economic value, and the being-a-status-symbol component of it was mostly confined to the uniforms the servants wore while doing so.) Requires rather specific talents, including looking glamorous and expensive, and probably also being exceptionally good at your nominal job.
For the "maker of limited edition human-made goods" version: pretty widely applicable, and can provide a wide range of income levels depending on how skilled you are and how prestigious your personal brand is. Can be a case of turning a hobby into a career.
Providing human-species-specific reproductive or medical services.
Examples: Surrogate motherhood, wet-nurse, sperm/egg donor, blood donor, organ donor.
Epistemic status: still needed.
Economic limits: Significant medical consequences, low demand, improvements in medicine may reduce demand.
Certain jobs could manage to combine two (or more) of these categories. Arguably categories 1. and 5. are subsets of category 2.
Note also that for categories 1, 4, 5, and 6, the only likely customer base is other humans: so if all other humans are impoverished, these will not help you avoid being impoverished. Category 3, on the other hand, provides value to the AI portion of the economy, and category 2 might (especially in the regulatory-capture variant of it) sometimes have part of the AI portion of the economy as a customer. So how long jobs in category 3, and the "legally required" variant of category 2, continue to exist is rather key to how money flows between the AI and human portions of the economy. Modeled as two separate trading blocks, and ignoring any taxation/redistribution, the AI economy has a great deal to offer the humans (goods, services, entertainment, knowledge), while all the humans have to offer are 2 (if legally required) and 3.