TL;DR: Post-AGI career advice needed (asking for a friend).

Let's assume, for the sake of discussion, that Leopold Aschenbrenner is correct that at some point in the fairly near future (possibly even, as he claims, this decade), AI will be capable of acting as a drop-in remote worker: as intelligent as the smartest humans, capable of doing basically any form of intellectual work that doesn't have in-person requirements, and doing it as well as or better than pretty much all humans. Assume also that it's at least two or three orders of magnitude cheaper than current pay for intellectual work (so at least an order of magnitude cheaper than a subsistence income) — and probably decreasing in cost as well.

Let's also assume for this discussion that at some point after that (perhaps not very long after, given the increased capacity to do intellectual work on robotics), developments in robotics overcome Moravec's paradox, and mass production greatly decreases the cost of robots, to the point where a robot (humanoid or otherwise) can do basically every job that requires manual dexterity, hand-eye coordination, and/or bodily agility, again for significantly less than a human subsistence wage. Let's further assume that some of the new robots are well-waterproofed, so that even plumbers, lifeguards, and divers are out of work, and also that some of them can be made to look a lot like humans, for tasks where that appearance is useful or appealing.

I'd also like to assume for this discussion that the concept of a "human job" is still meaningful: the human race doesn't go extinct or get entirely disempowered, and we don't to any great extent merge with machines. Some of us may get very good at using AI-powered tools or collaborating with AI co-workers, but we don't effectively plug AI in as a third hemisphere of our brain to the point where it dramatically increases our capabilities.

So, under this specific set of assumptions, what types of paying jobs (other than being on UBI) will then still be available to humans, even if only to talented ones? How long-term are the prospects for these jobs (after the inevitable economic transition period)?

[If you instead want to discuss the probability/implausibility/timelines of any or all of my three assumptions, rather than the economic/career consequences if all three of them occurred, then that's not an answer to my question, but it is a perfectly valid comment, and I'd love to discuss that in the comments section.]

So the criterion here is basically "jobs for which being an actual real human is a prerequisite".

Here are the candidate job categories I've already thought of:

(This is my original list, plus a few minor edits: for a list significantly revised in light of all the discussion from other people's answers and comments, see my answer below.)

  1. Doing something that machines can do better, but that people are still willing to pay to watch a very talented/skilled human do about as well as any human can (on TV or in person).

    Examples: chess master, Twitch streamer, professional athlete, Cirque du Soleil performer.

    Epistemic status: already proven for some of these: the first two are things that machines have been able to do better than humans for a while, but people are still interested in paying to watch a human do them very well, for a human. It also seems very plausible for the others, where current robotics is not yet up to doing better.

    Economic limits: If you're not in the top O(1000) people in the world at some specific activity that plenty of people in the world are interested in watching, then you can make roughly no money off this. Despite the aspirations of a great many teenaged boys, being an unusually good (but not amazing) video gamer is not a skill that will make you any money at all.

  2. Doing some intellectual and/or physical work that AI/robots can now do better, but for some reason people are willing to pay at least an order of magnitude more to have it done less well by a human, perhaps because they trust humans more. (Could also be combined with item 3. below.)

    Examples: Doctor, veterinarian, lawyer, priest, babysitter, nurse, primary school teacher.

    Epistemic status: Many people tell me "I'd never let an AI/a robot do <high-stakes intellectual or physical work> for me/my family/my pets…" They are clearly quite genuine in this opinion, but it's unclear how deeply they have considered the matter. It remains to be seen how long this opinion will last in the presence of a very large price differential when the AI/robot-produced work is actually, demonstrably, just as good if not better.

    Economic limits: I suspect there will be a lot of demand for this at first, and that it will decrease over time, perhaps even quite rapidly. Requires being reliably good at the job, and at appearing reassuringly competent while doing so.

    I'd be interested to know whether people think there are specific examples of this that will never go away, or at least will take a very long time to. (Priest is my personal strongest candidate.)

  3. Giving human feedback/input/supervision to/of AI/robotic work/models/training data, in order to improve, check, or confirm its quality.

    Examples: current AI training crowd-workers, wikipedian (currently unpaid), acting as a manager or technical lead to a team of AI white-collar workers, focus group participant, filling out endless surveys on the fine points of Human Values.

    Epistemic status: seems inevitable, at least at first.

    Economic limits: I imagine there will be a lot of demand for this at first. I'm rather unsure whether that demand will gradually decline, as the AIs get better at doing things/self-training without needing human input, or increase over time, because the overall economy is growing so fast and/or more capable models need more training data and/or society keeps moving out of the previous distribution. [A lot of training data is needed, more training data is always better, and the resulting models can be used a great many times; however, there is clearly an element of diminishing returns as more data is accumulated, and we're already getting increasingly good at generating synthetic training data.]

  4. In-person sex work where the client is willing to pay a (likely order-of-magnitude) premium for a real human provider.

    Epistemic status: human nature.

    Economic limits: Requires rather specific talents.

  5. Providing some nominal economic value while being a status symbol, where the primary point is to demonstrate that the employer has so much money they can waste some of it on employing a real human ("They actually have a human maid!")

    Examples: (status symbol) receptionist, maid, personal assistant

    Epistemic status: human nature (assuming there are still people this unusually rich).

    Economic limits: There are likely to be relatively few positions of this type, at most a few per person so unusually rich that they feel a need to show this fact off. (Human nobility used to do a lot of this, centuries back, but there the servants were supplying real, significant economic value, and the being-a-status-symbol component of it was mostly confined to the uniforms the servants wore while doing so.) Requires rather specific talents, including looking glamorous and expensive, and probably also being exceptionally good at your nominal job.

  6. Providing human-species-specific reproductive or medical services.

    Examples: Surrogate motherhood, wet-nurse, sperm/egg donor, blood donor, organ donor.

    Epistemic status: still needed.

    Economic limits: Significant medical consequences, low demand, improvements in medicine may reduce demand.

So, what other examples can people think of?

One category whose long-term viability I'm personally really unsure about is being an artist/creator/influencer/actor/TV personality. Being fairly good at drawing, playing a musical instrument, or other creative skills is clearly going to get automated out of having any economic value, and being really rather good at it is probably going to turn into "your primary job is to create more training data for the generative algorithms", i.e. become part of item 3. above. What is less clear to me is whether (un-human-assisted) AIs will ever become better than world-class humans (who are using AI tools and/or working with AI coworkers) at the original-creativity aspect of this sort of work (they will, inevitably, get technically better at performing it than unassisted humans), and, if they do, to what extent and for how long people will still want content from an actual human instead, just because it's from a human, even if it's not as good (thus making this another example of either item 1. or 5. above).


6 Answers

alexgieg


This probably doesn't generalize beyond very niche subcultures, but in the one I'm a member of, the Furry Fandom, art drawn by real artists is such a core aspect that, even though furries use generative AI for fun, we don't value it. One reason behind this is that, unlike more typical fandoms, in which members are fans of something specific made by a third party, in the Furry Fandom members are fans of each other.

Given that, and assuming the Furry Fandom continues existing in the future, I expect members will continue commissioning art from each other or, at the very least, will continue wanting to be able to, and will use AI-generated art as a temporary stand-in while they save to commission real pieces from the actual artists they admire.

gwern

I was surprised to hear this, given how the fur flew back when we released This Pony Does Not Exist & This Fursona Does Not Exist, and how well AstraliteHeart went on to create furry imagegen with PonyDiffusion (now v6); I don't pay any attention to furry porn per se but I had assumed that it was probably going the way regular stock photos / illustrations / porn / hentai were going, as the quality of samples rapidly escalated over time & workflows developed - the bottom was falling out of the commission market with jobs cratering and AI-only 'artists' muscling in. So I asked a furry acquaintance I expected would know.

He agreed inasmuch as he said that there was a remarkable lack of AI furry porn on e621 & FurAffinity and just in general, and what I had expected hadn't happened. (Where is all the furry AI porn you'd expect to be generated with PonyDiffusion, anyway? Civitai?* That site is a nightmare to navigate, and in no way a replacement for a proper booru.) But it was not for a lack of quality.

He had a more cynical explanation, though: that despite huge demand (lots of poorer furries went absolutely nuts for TFDNE - at least, until the submissions were deleted by mod... (read more)

A supporting data point: I made a series of furry illustrations last year that combined AI-generated imagery with traditional illustration and 3D modelling: compositing together parts of a lot of different generations with some Blender work and then painting over that. Each image took maybe 10-15 hours of work, most of which was just pretty traditional painting with a Wacom tablet.

When I posted those to FurAffinity and described my process there, the response from the community was extremely positive. However, the images were all removed a few weeks later for violating the site's anti-AI policy, and I was given a warning that if I used AI in any capacity in the future, I'd be banned from the site.

So, the furiously hardline anti-AI sentiment you'll often see in the furry community does seem to be more top-down than grassroots- not so much about demand for artistic authenticity (since everyone I interacted with seemed willing to accept my work as having had that), but more about concern for the livelihood of furry artists and a belief that generative AI "steals" art during the training process. By normalizing the use of AI, even as just part of a more traditional process, my work was seen as a threat to other artists on the site.

the gears to ascension
I'd guess your work is in the blended category, where the people currently anti-AI are being incorrect by their own lights: your work did not in fact risk the thing they are trying to protect. I'd guess purely AI-generated art will remain unpopular even with the periphery, but high-human-artistry AI art will become more appreciated by the central groups as it becomes more apparent that it doesn't compete the way they thought it did. I also doubt it will displace human-first art, as that's going to stay mildly harder to create with AI as long as there's a culture of using AI in ways that are distinct from human art, and therefore lower availability of AI designed specifically to imitate the always-subtly-shifting most recent made-by-hand human-artist style. It's already possible to imitate, but it would require different architectures.
the gears to ascension
I don't think AIs are able to produce the product that is being sold at all, because the product contains the causal chain that produced it. This is a preference that naturally generates AI-resistant jobs, as long as the people who hold it can themselves pay for the things. I mean, you might be able to value-drift the people involved away if you experience-machine at them hard enough, but as long as what they're doing is trying to ensure that a physically extant biological being who is a furry made the thing, and not just that the thing has high sensory intensity at furry-related features, it seems like it would actually resist that. Now, you might propose preference falsification. If that's the case, I'm wrong. I currently think I'm right at, like, 70% or so.
RogerDearnaley
I think you're right: I have heard this claimed widely about Art, that part of the product and its value is the story of who made it, when and why, who's in it, who commissioned it, who previously owned it, and so forth. This is probably more true at the expensive pinnacles of the Art market, but it's still going to apply within specific subcultures. That's why forgeries are disliked: objectively they look just like the original artist's work, but the story component is a lie. More generally, luxury goods have a number of weird economic properties, one of which is the requirement that they be rare. Consider the relative value of natural diamonds or other gemstones vs. synthetic ones that are objectively of higher clarity and purity with fewer inclusions: the latter is an objectively better product, but people are willing to pay a lot more for the former because they're "natural", which really means because they're rare and thus a luxury/status symbol. I think this is an extension of my category 5. — rather than the human artist acting as your status symbol in person as I described above, a piece of their art that you commissioned, which took them a couple of days to make just for you, is hanging on your wall (or hiding in your bedroom closet, as the case may be). There are basically three reasons to own a piece of art: 1) it's nice to look at; 2) I feel proud of owning it; 3) other people will think better of me because I have it and show it off. The background story doesn't affect 1), but it's important for 2) and 3).
RogerDearnaley
This might also be part of why there's a tendency for famous artists to be colorful characters: that enhances the story part of the value of their art.
RogerDearnaley
In my attempted summary of the discussion, I rolled this into Category 5.
alexgieg
From my experience, it's in Telegram groups (maybe Discord ones too, but I don't use it myself). There are furries who love to generate hundreds of images around a certain theme, typically on their own desktop computers where they have full control and can tweak parameters until they get what they wanted exactly right. They share the best ones, sometimes with the recipes. People comment, and quickly move on. At the same time, when someone gets something with meaning attached, such as a drawing they commissioned from an artist they like, or that someone gifted them, it has more weight both for themselves and for friends who share in their emotional attachment to it. I guess the difference is similar to the one many (a few? most?) notice between a handcrafted vs. an industrialized good: even if the industrialized one is better by objective parameters, the handcrafted one is perceived as qualitatively distinct. So I can imagine a scenario in which there are automated, generative websites for quick consumption -- especially video, as you mentioned -- and Etsy-like made-by-a-real-person premium ones, with most of the associated social status geared towards the latter. I don't know about sexual toys specifically, but something like that has been attempted with fursuits. There are cheap, knockoff Chinese fursuit sellers on sites such as Alibaba, and there's a market for those somewhere, otherwise those wouldn't be advertised, but I've never seen someone wearing one at either big cons or small local meetups I attended, nor have I heard of someone who does. As with handcrafted art, it seems furries prefer handcrafted fursuits made either by the user themselves, or by artisan fursuit makers. I suppose that might all change if the fandom grows to the point of becoming fully mainstream. If at some point there are tens to hundreds of millions of furries, most of whom carry furry-related fetishes (sexual or otherwise), real industries might form around us to the poin
gwern

People comment, and quickly move on.

That's the problem, of course, and why it can't replace the mainstream sites. It's trapped in fast mode and has no endurance or cumulative effect. So it sounds like there is plenty of demand (especially allowing for how terrible Telegram is as a medium for this), it's just suppressed and fugitive - which is what we would expect from the cartel model.

At the same time, when someone gets something with meaning attached, such as a drawing they commissioned from an artist they like, or that someone gifted them, it has more weight both for themselves and for friends who share in their emotional attachment to it. I guess the difference is similar to the one many (a few? most?) notice between a handcrafted vs. an industrialized good

Ah yes, the profoundly human and irreplaceable experience of 'Paypaling some guy online $1000 for drawings of your fursona'...

How can AI ever compete with the deeply meaningful and uncommodifiable essence of the furry experience in 'commissioning from an artist you like for your friend'? Well, it could compete by 'letting you create the art for your friend instead of outsourcing it to the market'. What's more meaningful... (read more)

RogerDearnaley
The sheer number of Geek Points that This Pony Does Not Exist wins is quite impressive.

Good one! I think I can generalize from this to a whole category (which also subsumes my sex-worker example above):


4. (v2) Skilled participant in an activity that heavily involves interactions between people, where humans prefer to do this with other real humans, are willing to pay a significant premium to do so, and you are sufficiently more skilled/talented/capable/willing to cater to others' demands than the average participant that you can make a net profit off this exchange.
Examples: Furry Fandom artist, director/producer/lead performer for amateur/ho... (read more)

Vladimir_Nesov


The recent Carl Shulman podcast (part 1, part 2) is informative on this question (though it should be taken in the spirit of exploratory engineering, not forecasting). In particular, in a post-AGI magically-normal world, jobs that humans are uniquely qualified to do won't be important to the industry and will be worked around. What remains of them will have the character of billionaires hiring other billionaires as waiters, so treating this question as being about careers seems noncentral.

I'm still watching this (it's interesting, but 6 hours long!), and will have more comments later.

From his point of view, in what I've watched so far, what matters most about the categories of jobs above is the extent to which they are critical to AI/robotic economic growth and could end up being a limiting bottleneck on it.

My categories 1. and 4.–6. (for both the original version of 4. and the expanded v2 version in a comment) are all fripperies: if these jobs went entirely unfilled, and the demand for them unfulfilled, the humans would be less happy ... (read more)

Myron Hedderson


I think my answer would depend on your answer to "Why do you want a job?". When AI and robotics have advanced to the point where all physical and intellectual tasks can be done better by AI/robots, we've reached a situation where things change very rapidly, and "what is a safe line of work long term?" is hard to answer because we could see rapid changes over a period of a few years, and who knows what the end state will look like? Also, any line of work which at time X it is economically valuable to have humans do will have a lot of built-in incentive to become automatable, so "what is it humans can make money at because people value the labour?" could change rapidly. For example, you suggest that sex work is one possibility, but if you have 100,000 genius-level AIs devising the best possible sex robot, pretty quickly they'd be able to come up with something where the people who are currently paying for sex would feel they're getting better value for money out of the sex robot than out of a human they could pay for sex. Of course people will still want to have sex with people they like who like them back, but that isn't typically done for money.

We'll live in a world where the economy is much larger and people are much richer, so subsistence isn't a concern, provided that there are decent redistributive mechanisms of some sort in place. Say we keep the tax rate the same but GDP has gone up 1,000x: then tax revenue has gone up 1,000x, and UBI is easy. If we can't coordinate to get a UBI in place, it would still only take 1 in 1,000 people to have somehow lucked into resources and say "I wish everyone had a decent standard of living", and they could set up a charitable organization that gave out free food and shelter with the resources under their command. So you won't need a job. Meaning, any work people got other people to do for them would have to pay an awful lot if it was something a worker didn't intrinsically want to do (if someone wanted a ditch dug by humans who didn't like digging ditches, they'd have to make those humans an offer they found worthwhile when all of their needs are already met: how much would you have to pay a billionaire to dig you a ditch? There's a price, but it's probably a lot). Otherwise, you can just do whatever "productive" thing you want because you want to, you enjoy the challenge, it's a growth experience for you or whatever, and it likely pays 0, but that doesn't matter because you value it for reasons other than the pay.
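To make that redistribution arithmetic concrete, here's a toy calculation. The 1,000x multiplier is from the scenario above; the world-GDP figure, tax rate, and population are my own illustrative assumptions, not claims:

```python
# Toy UBI arithmetic for the "GDP goes up 1,000x, tax rate unchanged" scenario.
# All inputs are illustrative assumptions, not forecasts.

gdp_today = 100e12        # rough order of magnitude for current world GDP, in dollars
growth_multiplier = 1000  # the 1,000x growth scenario from the paragraph above
tax_rate = 0.25           # assumed flat effective tax rate, held constant
population = 8e9          # assumed roughly constant population

revenue = gdp_today * growth_multiplier * tax_rate
ubi_per_person = revenue / population
print(f"annual UBI per person: ${ubi_per_person:,.0f}")
# -> annual UBI per person: $3,125,000 (in today's dollars, under these assumptions)
```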

I guess it could feel like a status or dignity thing, to know that other people value the things you can do, enough to keep you alive with the products of your own labour? And so you're like "nah, I don't want the UBI, I want to earn my living". In that case, keep in mind "enough to keep you alive with the products of your own labour" will be very little, as a percentage of people's income. So you can busk on a street corner, and people can occasionally throw the equivalent of a few hundred thousand of today's dollars of purchasing power into your hat, because you made a noise they liked, in the same way that I can put $5 down for a busker now because that amount of money isn't particularly significant to me, and you're set for a few years at least, instead of being able to get yourself a cup of coffee as is the case now.

Or, do you want to make a significant amount of money, such that you can do things most people can't do because you have more money than them? In that case, I think you'd need to be pushing the frontier somehow - maybe investing (with AI guidance, or not) instead of spending in non-investy ways would do it. If the economy is doubling every few years, and you decide to live on a small percentage of your available funds and invest the rest, that should compound to a huge sum within a short time, sufficient for you to, I dunno, play a key role in building the first example of whatever new technology the AI has invented recently which you think is neat, and get into the history books?
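As a minimal sketch of that compounding claim (the doubling time and the fraction you live on are assumptions picked purely for illustration):

```python
# Toy model: wealth invested in an economy that doubles every `doubling_years`
# years, while you spend a small fixed fraction of your wealth each year.
# Illustrative assumptions only; not a prediction or investment advice.

doubling_years = 3                             # assumed post-AGI economic doubling time
annual_growth = 2 ** (1 / doubling_years) - 1  # ~26% growth per year
spend_fraction = 0.02                          # assumed: live on 2% of your funds per year

wealth = 1.0  # starting wealth, normalized to 1
for year in range(1, 31):
    wealth *= (1 + annual_growth) * (1 - spend_fraction)
    if year % 10 == 0:
        print(f"year {year}: {wealth:,.0f}x starting wealth")
# -> prints roughly 8x after 10 years, ~68x after 20, ~560x after 30:
#    growth swamps spending as long as the economy keeps doubling this fast.
```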

Or do you just want to do something that other people value? There will be plenty of opportunities to do that. When you're not constrained by a need to do something to survive, you could, if you wanted, make it your goal to give your friends really good and thoughtful gifts - do things for them that they really appreciate, which yes they could probably train an AI agent to do, but it's nice that you care enough to do that, the fact that you put in the thought and effort matters. And so your relationships with them are strengthened, and they appreciate you, and you feel good about your efforts, and that's your life.

Of course, there are a lot of problems in the world that won't magically get fixed overnight even if we create genius-level AIs and highly dexterous robots and for whatever reason that transition causes 0 unexpected problems. Making it so that everybody's life, worldwide, is at least OK, and we don't cause a bunch of nonhuman animal suffering, is a heavy lift to do from where we are, even with AI assistance. So if your goal is to make the lives of the people around you better, it'll be a while before you have a real struggle to find a problem worth solving because everything worthwhile has already been done, I'd think. If everything goes very well, we might get there in the natural un-extended lifetimes of people alive today, but there will be work to do for at least a decade or two even in the best case that doesn't involve a total loss of human control over the future, I'd think. The only way all problems get solved in the short term and you're really stuck for something worthwhile to do, involves a loss of human control over the situation, and that loss of control somehow going well instead of very badly.

I didn't say this, but my primary motivation for the question actually has more to do with surviving the economic transition process: if-and-when we get to a UBI-fueled post-scarcity economy, a career becomes just a hobby that also incidentally upgrades your lifestyle somewhat. However, depending on how fast the growth rates during the AGI economic transition are, how fast the government/sovereign AI puts UBI in place, and so forth, the transition could be long-drawn out, turbulent, and even unpleasant, even if we eventually reach a Good End. While personally navigating that period, understanding categories of jobs more or less safe from AGI competition seems like it could be very valuable.

Myron Hedderson
Ok. I thought, after I posted my first answer, that one of the things that would be really quite valuable during the turbulent transition is understanding what's going on and translating it for people who are less able to keep up, because of lacking background knowledge or temperament. While it will be the case after a certain point that AI can give people reliable information, there will be a segment of the population that will want to hear the interpretation of a trustworthy human; also, the cognitive flexibility to deal with a complex and rapidly changing environment and provide advice to people based on their specific circumstances will be a comparative advantage that lasts longer than most. Acting as a consultant to help others navigate the transition could work, particularly if that incorporates other expertise you have (there may be a better generic advice-giver in the world, and you're not likely to be able to compete with Zvi in terms of synthesizing and summarizing information, but if you're, for example, well enough versed in the current situation, plus you have some professional specialty, plus you have local knowledge of the laws or business conditions in your geographic area, you could be the best consultant in the world with that combination of skills). Also, generic advice for turbulent times: learn to live on as little as possible, stay flexible and willing to move, and save up as much as you can, so that you have some capital to deploy when that could be very useful (if interest rates go sky-high because suddenly everyone wants money to build chip fabs or mine metals for robots or something, having some extra cash pre-transition could mean having plenty post-transition), but also so you have some free cash in case things go sideways and a well-placed wad of cash can get you out of a jam on short notice, let you quit your job and pivot, or do something else that has a short-term financial cost but you think is good under the circumstances. Basically make yourself m
RogerDearnaley
That sounds like good advice — thanks!

J


For arbitrary time horizons nothing is 'safe', but that just means our economy shifts to a new model; it doesn't mean the outcome is bad for humans. I don't know if it makes sense to worry about which part of the ship will become submerged first, because everyone will rush for the other parts and those jobs will be too competitive. It might be better to worry about how to pressure the political system into taking proactive action to rearchitect our economy. UBI and/or a shorter workweek are inevitable, and the sooner we sort out how to implement that, the better.

For the sake of understanding the roadmap for the autopocalypse, I think you can consider these factors:

The most obvious: can the work be entirely computer-based? A corollary to this is whether an unskilled human with the assistance of an AI can replace the skilled human (e.g. healthcare roles requiring both knowledge work and physical work).

Regulatory environment. Even if it were possible for software to replace a human worker, licensing requirements and other laws may protect human workers for a while beyond that point.

Eventually machines will be able to transcend software to perform every physical task a human can now perform. Mostly that won't be anthropomorphic robots, but rather further automation of machines that are currently operated by humans, like cars and drones. The anthropomorphic robots will appear (on the job) the furthest into the future.

But again, everyone will be racing away from these jobs (and toward the remaining 'safe' ones) at the same time. Financial investment may be the safest source of income. Owning your own business may possibly benefit from the autopocalypse, but at the extreme you will just be an investor in a company run by machines.

TLDR:

So the best advice is probably to build an investment portfolio (and the knowledge to do that well). If you own the companies it doesn't matter who the workers are.

I think active stock-market investing, or running your own company, in a post-AGI world is about as safe as rubbing yourself down in chum before jumping into a shark feeding frenzy. Making money on the stock market is about being better than the average investor at making predictions. If the average investor is an ASI, then you're clearly one of the suckers.

One obvious strategy would be to just buy stock and hold it (which I think may be what you were actually suggesting). But in an economy as turbulent as a post-AGI FOOM, that's only going to work for a c... (read more)

J
Hmmm I guess I don't really use the terms 'investing' and 'trading' interchangeably.

RogerDearnaley


There has been a lot of useful discussion in the answers and comments, which has caused me to revise and expand parts of my list. So that readers looking for practical career advice don't have to read the entire comments section to find the actual resulting advice, it seemed useful to me to give a revised list. Doing this as an answer in the context of this question seems better than either making it a whole new post, or editing the list in the original post in a way that would remove the original context of the answers and comments discussion.

This is my personal attempt to summarize the answers and comments discussion: other commenters may not agree (and are of course welcome to add comments saying so). As the discussion continues and changes my opinion, I will keep this version of the list up to date (even if that requires destroying the context of any comments on it).

List of Job Categories Safe from AI/Robots (Revised)

  1. Doing something that machines can do better, but that people are still willing to pay to watch a very talented/skilled human do about as well as any human can (on TV or in person).

    Examples: chess master, Twitch streamer, professional athlete, Cirque du Soleil performer.

    Epistemic status: already proven for some of these: the first two are things that machines have been able to do better than humans for a while, but people are still interested in paying to watch a human do them very well, for a human. It also seems very plausible for the others, where current robotics is not yet up to doing better.

    Economic limits: If you're not in the top O(1000) people in the world at some specific activity that plenty of people in the world are interested in watching, then you can make roughly no money off this. Despite the aspirations of a great many teenaged boys, being an unusually good (but not amazing) video gamer is not a skill that will make you any money at all.

  2. Doing some intellectual and/or physical work that AI/robots can now do better, but for some reason people are willing to pay at least an order of magnitude more to have it done less well by a human, perhaps because they trust humans more. This could include jobs where people's willingness to pay is due to a legal requirement that certain work be done or supervised by a (suitably skilled/accredited) human (and these requirements have not yet been repealed).

    Examples: Doctor, veterinarian, lawyer, priest, babysitter, nanny, nurse, primary school teacher.

    Epistemic status: Many people tell me "I'd never let an AI/a robot do <high-stakes intellectual or physical work> for me/my family/my pets…" They are clearly quite genuine in this opinion, but it's unclear how deeply they have considered the matter. It remains to be seen how long this opinion will last in the presence of a very large price differential when the AI/robot-produced work is actually, demonstrably, just as good if not better.

    Economic limits: I suspect there will be a lot of demand for this at first, and that it will decrease over time, perhaps even quite rapidly (though perhaps slower for some such jobs than others). Requires being reliably good at the job, and at appearing reassuringly competent while doing so.

  3. Giving human feedback/input/supervision to/of AI/robotic work/models/training data, in order to improve, check, or confirm its quality.

    Examples: current AI training crowd-workers, wikipedian (currently unpaid), acting as a manager or technical lead to a team of AI white-collar workers, focus group participant, filling out endless surveys on the fine points of Human Values.

    Epistemic status: seems inevitable, at least at first.

    Economic limits: I imagine there will be a lot of demand for this at first. I'm rather unsure whether that demand will gradually decline, as the AIs get better at doing things/self-training without needing human input, or increase over time, because the overall economy is growing so fast and/or more capable models need more training data and/or society keeps moving out of the previous distribution so new data is needed. [A lot of training data is needed, more training data is always better, and the resulting models can be used a great many times; however, there is clearly an element of diminishing returns as more data is accumulated, and we're already getting increasingly good at generating synthetic training data.] Another question is whether a lot of very smart AIs could extract much of this sort of data from humans without needing their explicit paid cooperation — indeed, granting permission for this and not intentionally sabotaging it might even become a condition of part of UBI (at which point whether to call allowing it a career is a bit unclear).

  4. Skilled participant in an activity that heavily involves interactions between people, where humans prefer to do this with other real humans, are willing to pay a significant premium to do so, and you are sufficiently more skilled/talented/capable/willing to cater to others' demands than the average participant that you can make a net profit off this exchange.

    Examples: director/producer/lead performer for amateur/hobby theater, skilled comedy-improv partner, human sex-worker

    Epistemic status: seems extremely plausible

    Economic limits: Net earning potential may be limited, depending on just how much better/more desirable you are as a fellow participant than typical people into this activity, and on the extent to which this can be leveraged in a one-producer-to-many-customers way — however, making the latter factor high is challenging, because it conflicts with the human-to-real-human interaction requirement that allows you to out-compete an AI/robot in the first place. Often a case of turning a hobby into a career.

  5. Providing some nominal economic value while being a status symbol, where the primary point is to demonstrate that the employer has so much money they can waste some of it on employing a real human ("They actually have a human maid!")

    This can either be full-time employment as a status symbol for a specific status-signaler, or you can be making high-status "luxury" goods where owning one is a status signal, or at least has cachet. For the latter, like any luxury good, they need to be rare: this could be because they are individually hand-made and/or were specifically commissioned by a specific owner, or because they are reproduced only in a "limited edition".

    Examples: (status symbol) receptionist, maid, personal assistant; (status-symbol maker) "High Art" artist, Etsy craftsperson, portrait or commissions artist, mechanical watch hand-assembler.

    Epistemic status: human nature (for the full-time version, assuming there are still people this unusually rich).

    Economic limits: For the full-time-employment version, there are likely to be relatively few positions of this type, at most a few per person so unusually rich that they feel a need to show this fact off. (Human nobility used to do a lot of this, centuries back, but there the servants were supplying real, significant economic value, and the being-a-status-symbol component of it was mostly confined to the uniforms the servants wore while doing so.) Requires rather specific talents, including looking glamorous and expensive, and probably also being exceptionally good at your nominal job.

    For the "maker of limited edition human-made goods" version: pretty widely applicable, and can provide a wide range of income levels depending on how skilled you are and how prestigious your personal brand is. Can be a case of turning a hobby into a career.

  6. Providing human-species-specific reproductive or medical services.

    Examples: Surrogate motherhood, wet-nurse, sperm/egg donor, blood donor, organ donor.

    Epistemic status: still needed.

    Economic limits: Significant medical consequences, low demand, improvements in medicine may reduce demand.

Certain jobs could manage to combine two (or more) of these categories. Arguably categories 1. and 5. are subsets of category 2.

Note also that for categories 1, 4, 5, and 6, the only likely customer base is other humans: so if all other humans are impoverished, these will not help you avoid being impoverished. Category 3, on the other hand, provides value to the AI portion of the economy, and 2 might (especially in the regulatory-capture variant of it) sometimes have part of the AI portion of the economy as a customer. So how long jobs in category 3, and the "legally required" variant of 2, continue to exist is rather key to how money flows between the AI and human portions of the economy. Modeled as two separate trading blocs, and ignoring any taxation/redistribution, the AI economy has a great deal to offer the humans (goods, services, entertainment, knowledge), while all the humans have to offer are 2 (if legally required) and 3.

clone of saturn


You missed what I think would be by far the largest category, regulatory capture: jobs where the law specifically requires a human to do a particular task, even if it's just putting a stamp of approval on an AI's work. There are already a lot of these, but it seems like it would be a good idea to create even more, and add rate limits to existing ones.

I intended to capture that under category 2. "…but for some reason people are willing to pay at least an order of magnitude more to have it done less well by a human, perhaps because they trust humans more…" — the regulatory capture you describe (and those regulations not yet having been repealed) would be one reason why (and an expression of the fact that) people are willing to pay more. Evidently that section wasn't clear enough, and I should have phrased this better or given it as an example.

As I said above under category 2., I expect this to be common at first but to decrease over time, perhaps even quite rapidly, given the value differentials involved.

11 comments
jbash

I don't understand why everybody seems to think it's desirable for there to keep being jobs or to have humans "empowered". If AI runs the world better than humans, and also provides humans with material wealth and the ability to pursue whatever hobbies they feel like, that seems like a huge win on every sane metric. Sign me up for the parasitic uberpet class.

I am scared of the idea of very powerful AI taking orders from humans, either individually or through some political process. Maybe more scared of that than of simply being paperclipped. It seems self-evidently hideously dangerous.

Yet an awful lot of people seem obsessed with avoiding the former and ensuring the latter.

I didn't say this, but my primary motivation for the question actually has more to do with surviving the economic transition process: if-and-when we get to a UBI-fueled post-scarcity economy, a career becomes just a hobby that also incidentally upgrades your lifestyle somewhat. However, depending on how fast the growth rates during the AGI economic transition are, how fast the government/sovereign AI puts UBI in place, and so forth, the transition could be long-drawn out, turbulent, and even unpleasant, even if we eventually reach a Good End. While personally navigating that period, understanding categories of jobs more or less safe from AGI competition seems like it could be very valuable.

J

Humans are the most destructive entity on Earth, and my only fear with AI is that it ends up being too human.

The most dangerous currently on Earth, yes. That AI which picked up unaligned behaviors from human bad examples could be extremely dangerous, yes (I've written other posts about that). That that's the only possibility we need to worry about, I disagree — paperclip maximizers are also quite a plausible concern and are absolutely an x-risk.

J

True... I don't know why I used the word 'only' there, actually. Bad habit of using hyperbole, I guess. There are certainly even many unknown unknown threats that inspire the idea of a 'singularity'. Every step humanity takes to develop AI feels like a huge leap of faith now.

Personally, I'm optimistic, or at least unworried, but that's probably partly because I know I'm going to die before things could get to a point where e.g. humans are in slave camps or some other nightmarish scenario transpires. But I just don't think a superintelligence would choose a path that humans would be clearly resistant to, when it could simply incentivize us to voluntarily do what it wants. Humans are far easier to deal with when they're duped into doing something they think they want to do, and it shouldn't be that hard for a superintelligence to figure out how to manipulate us that way. Using force or fear to control humans is probably the least efficient option.

I also have little doubt that corporations and state actors are already exploring how to use GPT-type AI for e.g. propaganda and other kinds of social and psychological manipulation. I mean, that's what marketing is, and algorithms designed to manipulate our behavior already drive the internet.

J

This was intended as agreement with the post it's replying to.

Regarding category 2, and the specific example of "lawyer", I personally think that most of this category will go away fairly quickly.  Full disclosure, I'm a lawyer (mostly intellectual property related work), currently working for a big corporation.  So my impression is anecdotal, but not uninformed.

TL;DR - I think most lawyer-work is going away with AIs, pretty quickly. Only creating policy and judging seem to be the kinds of things that people would pay other humans to do. (For a while, anyway.)

 

I'd characterize legal work as falling primarily into three main categories: 

  1. transactional work (creating contracts, rules, and systems for people to sign up to, to advance a particular legal goal: protecting parties in a purchase, fairly sharing rights in something parties work together on, creating rules for appropriate hiring practices, etc.);
  2. legal advocacy (representing clients in adversarial proceedings, e.g., in court, or with an administrative agency, or negotiations with another party); and
  3. legal risk-analysis (evaluating a current or proposed factual situation, and determining what risks are presented by existing legal regimes (either law or contract), deciding on a course of action, and then handing an appropriate task to the transactional or adversarial folks to carry out).

 

So in short: paper-pushers; sharks; and judges.

 

Note that I would consider most political positions that lawyers usually fill to be in one of these categories.  For instance, legislators (who obviously need not be lawyers, but often are) do transactional legal work.  Courtroom judges are clearly the third group.  Prosecutors/DAs are sharks.  

 

Paper-pushers:

I see AI taking over this category almost immediately.  (It's already happening, IMO.)

A huge amount of this work is preparing appropriate documents to make everyone feel that their position is adequately protected. LLMs are already superficially good at this, and the fact that there are books out there providing basic template forms for so many transactional legal matters suggests that this is an easily templatized category.

As far as trusting the AI to do the work in place of a human: this is the type of work that most corporations or individuals feel very little emotion over. I have rarely been praised for producing a really good legal framework document or contract, and the one real exception is when it encapsulated good risk analysis (judging).

 

Sharks:

My impression is that this will take longer to be taken over, but not all that long. (I think we could see it within a few years, even without real AGI coming into existence.)

This work is about aggressively collecting and arguing for a specific side, pre-identified by the client, so there is no judgment or human value necessarily associated with it. I therefore don't think that the lack of a human presence will feel very significant to someone choosing an advocate.

At the moment, this is (IMHO) the category requiring the most creativity in its approach, but ...  Given what I see from current LLMs, I think that this remains essentially a word / logic game, and I can imagine AI being specifically trained to do this well.

My biggest concern here is regarding hallucination.  I'm curious what others with a real technical sense of how this can be limited appropriately would think about this.

 

Judges:

I think that this is the last bastion of human-lawyering.  It's most closely tied to specific human desires and I think people will feel that relinquishing judgment to a machine will FEEL hardest.

Teaching a machine to judge against a specific set of criteria should be easy-ish.  Automated sentencing guidelines are intended to do exactly this, and we already use them in many places.  And an AI should be able to create a general sense of what risks are presented by a given set of facts, I suspect.

But the real issue in judging is deciding which of those risks present the most significant likelihood and consequence, BASED ON EXPECTED HUMAN RESPONSES. That's what an in-house counsel at a company spends a lot of time advising on, and it's what a courtroom judge bases decisions that extend or expand existing law on.

And while I think that AI can do that, I also think that most people will see the end result as being very dependent on the subjective view of the judge/counselor as to what is really important and really risky.  And that level of subjectivity is something that may well be too hard to trust to an AI that is not really transparent to the client community (either the company leadership, or the public at large).

So, I don't think it's a real lack of capability here, but that this role hits humans in a soft spot and they will want to retain this under visible human control for longer.  Or at least we will all require more experience and convincing to believe that this type of judging is being done with a human point of view.

Basically, this is already a space where a lot of people feel political pressure has a significant impact on results, and I don't see anyone being comfortable letting a machine of possibly alien / inscrutable political ideology make these judgments / give this advice.

 

 

So I think the paper-pushers and sharks are short-lived in the AI world.

Counselors/judges will last longer, I think, since they are roles that specifically reflect human desire as expressed in law. But even then, most risk evaluation starts with analysis that I think AIs will be tasked to do, much like interns do today for courtroom judges. So I don't think we'll need nearly as many.

 

On a personal note, I hope to be doing more advising (rather than paper-pushing and negotiating) to at least slightly future-proof my current role.

Thanks for the detailed, lengthy (and significantly self-deprecating) analysis of that specific example — clearly you've thought about this a lot. I obviously know far less about this topic than you do, but your analysis, both of likely future AI capabilities and human reactions to them, both sound accurate to me.

Good luck with your career.

1. All related to parenting and childcare. Most parents may not want a robot to babysit their children.

2. Art history and museums. There is a lot of physical work and non-text knowledge involved, and demand may remain. This includes art restoration (until clouds of nanobots do it).

Most parents may not want a robot to babysit their children.

Assuming that stays true, your friends and family, who also don't have jobs, can do that in an informal quid-pro-quo. And you'll need it less often. Seems unlikely to need any meaningful kind of money economy.

Art history and museums. There is a lot of physical work and non-text knowledge involved, and demand may remain. This includes art restoration (until clouds of nanobots do it).

If the robots are fully embodied and running around doing everything, they'll presumably get that knowledge. There's a lot of non-text knowledge involved in plumbing, too, but the premise says that plumbing is done by machines.

1. All related to parenting and childcare. Most parents may not want a robot to babysit their children.

Babysitting (and also primary school teaching) were explicitly listed as examples under my item 2. So yes, I agree, with the caveats given there.