Baudrillard's language seems quite religious, so I suspect a religious example might relate to his claims more directly. I haven't really read Baudrillard, but here's how I'd explain my current understanding:
Stage 1: People pray faithfully in public because they believe in God and follow a religion. Those who witness this prayer experience a window into the transcendent.
Stage 2: People realise that they can gain social status by praying in public, so they pretend to believe. Many people are aware of this, so witnessing an apparently sincere prayer ceases to be the same experience, since you don't know whether it is genuine or not. It still represents the transcendent to some degree, but the experience of witnessing it just isn't the same.
Stage 3: Enough people have started praying insincerely that almost everyone jumps on the bandwagon. Public prayer has ceased to be an indicator of religiosity or faith, but some particularly naive people still haven't realised the pretence. People still gain status for praying sufficiently elegantly. They can't be too obviously fake, though, or they'll be punished either by the few still naive enough to buy into it or by those who want to keep up the pretence. While at stage 2 the lie was that people claimed to be faithful when they weren't, at stage 3 the lie is the very existence of the congregation of the faithful.
Stage 4: Praying is now seen purely as a social move which operates according to certain rules. It's no longer necessary, in and of itself, to convince people that you are real, but part of the game may include punishments for making certain moves. For example, swearing during your prayer might be punished as inappropriate, even though no one cares about religion any more, because it's seen as cheating or breaking the rules of the game. However, you can be obviously fake in ways that don't violate these rules, as the spirit of the rules has been forgotten. Maybe people pray for vain things like becoming wealthy. Or they go to church one day, then post pictures of themselves getting smashed the next day on Facebook, which all their church friends see, and none of them care. The naive are too few to matter, and if they say anything, people will make fun of them.
I'll admit that I've added something of my own interpretation here, especially in terms of how strongly you have to pretend to be real at the various stages.
There are two aspects of this post worth reviewing: as an experiment in a different mode of discourse, and as a description of the precession of simulacra, a schema originally advanced by Baudrillard.
As an experiment in a different mode of discourse, I think this was a success on its own terms, and a challenge to the idea that we should be looking for the best blog posts rather than the behavior patterns that lead to the best overall discourse.
The development of the concept occurred over email quite naturally, without forceful effort. I would have written this post much later, and possibly never, had I held it to the standard of "written specifically as a blog post." I have many unfinished drafts, emails, and tweets that might have advanced the discourse had I compiled them into rough blog posts like this one. The description was sufficiently clear and compelling that others, including my future self, were motivated to elaborate on it later with posts drafted as such.
My friends and I have found this schema - especially as we've continued to refine it - a very helpful compression of social reality, allowing us to compare different modes of speech and action.
As a description of the precession of simulacra, it differs both from Baudrillard's description and from the later refinement of the schema among people using it actively to navigate the world. It would be very useful to have a clear description of the updated schema from my circle somewhere to point to, and of some historical interest for this description to clearly mark its deviations from Baudrillard's account. I might get around to drafting the former sometime, but the latter seems likely to take more time than I'm willing to spend reading and empathizing with Baudrillard.
Over time it's become clear that the distinction between stages 1 and 2 is not very interesting compared with the distinctions among 1&2, 3, and 4. A mature naming convention would probably give these more natural names than the numbers we're using now, but my later attempt to name them was forced and premature.
Let me check I'm following with some simple claims:
Yes
"Outplay" is a bit strong - you can parasitize them. Not obvious Charles Ponzi "outplayed" the system, or that actors that feel the need to use level-3 tactics are particularly dominant individually.
What seems to happen is that at a sufficient level of parasitism, the resources necessary to support a lower simulacrum level become scarcer for people trying to operate at the earlier, less legible stages, because they're outcompeted by simulacra of themselves. Lower-level players may actually be able to outplay higher-level players in most direct contests, but find it much harder to reproduce themselves, instead getting internally metabolized by parasitic strategies into higher-level players. (A toy simulation of this dynamic follows - all parameters invented for illustration, a sketch of the qualitative claim rather than a calibrated model.)
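In the sketch below, `contest_edge` and `conversion_rate` are made-up parameters: level-1 players win every direct contest for resources, yet their population share still shrinks, because level-3 strategies skim their output and metabolize some of them each generation.

```python
# Toy model of the parasitism dynamic described above. All numbers are
# invented for illustration; only the qualitative shape matters.

def simulate(generations=12, n1=90.0, n3=10.0,
             contest_edge=1.3,       # level-1 players gather more real resources
             conversion_rate=0.15):  # fraction of level-1 output/players metabolized
    for t in range(generations):
        # Resource gathering: level 1 wins at the object level.
        r1 = n1 * contest_edge
        r3 = n3 * 1.0
        # Parasitism: level 3 skims level-1 output and recruits level-1 players.
        skim = conversion_rate * r1
        converted = conversion_rate * n1
        # Next generation's population shares are proportional to net resources,
        # plus the direct conversion of level-1 players into level-3 players.
        total = r1 + r3
        n1 = 100.0 * (r1 - skim) / total - converted
        n3 = 100.0 * (r3 + skim) / total + converted
        print(f"gen {t:2d}: level-1 share = {n1:5.1f}%, level-3 share = {n3:5.1f}%")

simulate()
```

Despite winning every direct contest (`contest_edge > 1`), the level-1 share declines toward zero; the parasitic strategy never has to beat it head-on.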
Yes
Once level 1 is no longer the majority, it's not clear that there's a natural coalition to want to do either of these. Interactions that aren't just about producing good feelings will all seem zero-sum to players at level 2 and higher, so most level 2 players will see the most advantage in profiting by accommodating level 3, rather than trying to produce a world where they have more formidable adversaries, even if they're happier and healthier in that world.
Mad Men is an interesting case study of all four simulacrum levels interacting in the same social setting. It suggests that level-2 players find it disgusting and dispiriting to be too involved with level-3 even if it's profitable - like factory farming (or like being factory farmed) - the cattle get fat but aren't happy to be there.
The main thing that seems to roll this back in practice is competition on some more fundamental substrate. Level-1 armies (with the capacity to map terrain, their enemies, and themselves accurately, and act based on those maps) can massively outperform higher-level armies. The leveraged buyout trend in the 1980s seems to have empowered, for a while, people with a modus operandi of acquiring level-3 businesses and cutting out everything not necessary for a level-1 or -2 business.
Yeah, I think the right language around this probably moves away from "world". You also starve pretty soon once you're 100% in world 4 - these are stereotyped worldviews, not literally evenly distributed worlds. [ETA: Not quite - Michael Vassar pointed out to me that selection pressures can be strong enough for stage 4 to be sustainable in sufficiently forgiving environments - you can do the things that sustain life as rituals within a power game rather than through understanding what they do.]
I think the main way to move back to world 1 in a world not dominated by it is to construct systems that internally coordinate according to world 1 and use their superior ability to build and use shared maps to outmaneuver more socially constrained systems and take territory from them.
The right way for world 1 to hold onto power is by (a) taking care of everyone involved unconditionally, so there's no particular reason to manufacture a story where you're useful, and (b) prioritizing maintaining shared maps about who's faking.
This came out in April 2019, and bore a lot of fruit especially in 2020. Without it, I wouldn't have thought about the simulacra concept and developed the ideas, and without those ideas, I don't think I would have made anything like as much progress understanding 2020 and its events, or how things work in general.
I don't think this was an ideal introduction to the topic, but it was highly motivating regarding the topic. It's also a very hard topic to introduce or grok, and this was the first attempt, which made the later attempts possible. I think we should reward all of that.
Note: a while ago I added this to our "potential things to curate" list.
I don't really feel good about curating it as-is because a) it's not super optimized to be a clear introduction to the topic, b) I don't think the terminology as it currently stands is that good, and c) I think it could be clearer about its epistemic status. (Or rather, the epistemic status here is clearly "random un-distilled conversation", but if it were distilled, it'd then become necessary to be clearer about what people are supposed to take away from it.)
The ideas here have come up in other related discussions often enough that I think there'd be value in putting in the work to clarify things better. I think it's valuable for "Curated" to be the place people read to keep up with the "new building blocks" of current discussion, but another important characteristic is clarity.
None of this is meant as an obligation, just a note that "if Ben (or perhaps Jessica) were to put the time into optimizing this to be a good introduction to the medium-term discourse, I'd be interested in curating that." (I checked with Habryka before posting this; I did not run this particular comment by him, but he indicated a similar viewpoint.)
I realize that elsethread you mentioned:
I think this isn't well-enough-understood to have a set of standardized terms yet. The implied request is to try to model these things in the world yourself, and start trying to talk about them, so that *eventually* the discourse self-corrects into a crisp model that doesn't imply absurdities like the "worlds" framework, and we can start naming things definitively. I don't expect this to happen in public.
Which makes sense. But since it's been brought up repeatedly as part of ongoing conversations, it seems worth making some kind of intermediate progress on helping people to grok the concept.
This seems to be where simulacra first started to appear in LW discourse? There doesn't seem to be a polished general post on the subject until 2020, but I feel like the concepts and classification were floating around in 2019, and some credit probably belongs to this post.
This seems like a useful frame.
I notice myself feeling somewhat frustrated that there aren't currently names for the 4 stages that are succinct but meaningful (lies/bullshit/power-games doesn't feel like it cleaves things in a way that parses for me). But this is perhaps indicative of the intrinsic difficulty of communicating about this class of problem (because it doesn't want to be easily communicated about).
I think this isn't well-enough-understood to have a set of standardized terms yet. The implied request is to try to model these things in the world yourself, and start trying to talk about them, so that *eventually* the discourse self-corrects into a crisp model that doesn't imply absurdities like the "worlds" framework, and we can start naming things definitively. I don't expect this to happen in public.
It might be useful to make such requests more explicit.
It might be useful to make explicit how much progress has been made. Most of the discussion has anchored on Baudrillard and the number 4, but it's not clear that you wanted that. Is this even supposed to be a discrete qualitative model, or is it continuous and the stages are just for verbal convenience? ("The system wireheads itself" vs "the system wireheads employees" is the only thing that jumped out at me as object-level qualitatively distinct.)
I'm generally in favour of including posts containing concepts which have seen regular use. Given the different ways this concept has been used, I'd love to see this post expanded to address them.
I think Raemon’s comments accurately describe my general feeling about this post: intriguing, but not well-optimized for a post.
However, I also think that this post may be the source of a subtle misconception in simulacra levels that the broader LessWrong community has adopted. Specifically, I think the distinction between 3 and 4 is blurred in this post, which tries to draw the false analogy 1:2::3:4. Going from 3 (masks the absence of a profound reality) to 4 (no profound reality) is more clearly described not as a “widespread understanding” that the signs don’t mean anything, but as the lack of an attempt to promulgate a vision of reality in which they do mean something. In the jobs example, this means that a title like “Vice President of Sorting”, when your company doesn’t sort but sorting is a viable job, is level 3. In this case, the sign is meant to be interpreted by interviewers as a kind of fabrication of a profound reality, masking the absence thereof. In my mind, this is pretty clearly where Trump falls: “successful businessman” is not a profound reality, but he’d like it to be interpreted as one. On the other hand, SL4 refers to a world in which titles aren’t designed to refer to anything at all: rather, they become their own self-referential ecosystem. In that case, all “Vice-president of laundering hexagons” means is that I was deemed worthy of being a “vice-president of laundering hexagons” - not that I’m attempting to convince someone that I actually laundered hexagons while everyone knows that’s not true.
ETA: As strongly implied above, I do not support upvoting this post.
I think this points to a mismatch between Benquo and Baudrillard, but not to a problem with the version of the concept Benquo uses. Given how successful the (modified, slightly different) concept has been, I consider this more of a problem with Baudrillard's book than a problem with Benquo's post.
I very much like that this topic is being explored, but I fear you're on the wrong track in thinking that these worlds are distinct. Jessica's critique - that this assumes uniformity of use, and knowledge of that use - doesn't go far enough. In fact, all these worlds are simultaneously overlaid on one another, among different people and often among different parts of the same conversation. Sometimes people are aware of the ambiguity or outright misleading use of words, sometimes they're not, and sometimes they think they are but it still has emotional impact. And we should probably add world 0, brutal honesty, where titles are conservative estimates of value rather than broad categories of job, and world -1, where labels don't exist and people are referred to by the sum total of what they've done in their lives.
It should be clearer that language is _ALWAYS_ a mix of cooperative and adversarial games. Everyone is trying to put ideas into other people's heads that benefit the speaker. Some of those ideas also benefit the listener, and that's great. But it's impossible to separate those cases from the times when the goals diverge.
On the object level of your example, I do a fair bit of interviewing, and I guarantee you that competent recruiters know what different companies' titles generally mean, and even then take them with a grain of salt. Competent hiring managers focus on impact and capability in interviews, not on titles. Agreed that titles and self-described resume entries carry a lot of weight in getting and framing the interview. But outright lies won't get you anything, even if a small amount of puffery likely gets you a small improvement in success rate and initial offer.
I think you're grading what constitutes an "outright lie" on a curve. This is the usual thing to do, but it's also literally false, insofar as "lie" has an objective meaning at all.
Important to track both what's usually done and what's denotatively true.
For concreteness, can you point to an example of something that you believe is an outright lie, but that Dagon might have been considering not an outright lie?
(Ideally something that is as ambiguous as possible according to your-model-of-Dagon's-schema. "Minimum viable lie" might be one way to think about it, although I could imagine that in your own schema that might be a disingenuous framing. I use it here mostly to gesture at what I'm trying to ask, rather than claiming it's the right way to describe it.)
Or, to help a bit with the interpretive labor - Benquo, does this seem like a question that matches your conception of an outright lie, and Dagon, does this match your conception of "not an outright lie":
"I'm a director at [nonprofit]" (where "director" is a title that the nonprofit gives up somewhat freely to people that it wants to feel more motivated)
or
"I'm the executed director / CEO of [organization]" (where the organization is actually just you in a basement, maybe with one additional employee or a couple contractors)
Yes, especially the second one. It’s the kind of lie that I would recommend that a friend tell if they were in that situation, but it is a lie.
Thanks for the specificity! These examples ("director" being used for prestige, without any connection to actual effort/impact/power) are good ones to explore how context plays into things.
There exist idiots who will take the introduction at face value. There exist particularly insane organizations who will only accept contracts signed by a director, and for those, one kind of has to play the game. This isn't universal, or even common - SIMULTANEOUSLY, anyone who talks to or works with these directors will understand something closer to the truth.
These examples are none of the listed worlds - they have superficial elements of world 3 or 4 for the title of "director", but nobody seriously cares about that. It's world 1 for the interactions between people, where no single word carries much weight and what matters is how they behave and what they can get done.
The examples are also nowhere near universal (they're not distinct "worlds", they're examples that the world is diverse in use of words). They don't remove any of the weight from someone saying "I'm the director of a 150-person research group at X fortune-500 company".
"No single word carries much weight and what matters is how they behave and what they can get done" is really not game-1. Game-1 is all about efficient denotative communication so that you don't have to personally inspect what's going on, and can use the map instead of directly inspecting the territory. You're describing a situation in which people can privately model objective reality, not one in which words help much with this by default. There's mutual knowledge in some circumstances about how Game-4 is being played with these words, but generally with deniability. The near-universal conflation of Game-1 with many participants not being confused about Game-4 is part of this mechanism.
"I'm the director of a 150-person research group at X fortune-500 company" is a much more specific descriptor of both the size and genre of someone's territory/fiefdom, but in practice doesn't strongly indicate that an enterprise of that size has genuine research needs such that a 150-person research group is functionally necessary. So you can't make strong inferences about competence to coordinate research from that description, only competence at playing the game (assuming it's not a somewhat more legible lie).
By contrast, you would be able to make those strong inferences about a nonfinancialized enterprise in a tightly competitive field.
"No single word carries much weight and what matters is how they behave and what they can get done" is really not game-1. Game-1 is all about efficient denotative communication so that you don't have to personally inspect what's going on, and can use the map instead of directly inspecting the territory.
Wow. I really missed that. I suspect because I don't see how anyone can claim that sort of game-1 is possible outside of technical topics (which STILL take many thousands of words to communicate concepts) or very small groups of high-trust shared-context participants (where the thousands of words are implicit). I guess I start in game-2, and I don't see much difference between games 2-4.
Language, and especially common short words and phrases, just doesn't carry that kind of precision. More generally, language is just as subject to Goodhart's Law as any other knowledge proxy.
To some extent "technical" might be the word for a domain where Game-1 is stably dominant. There are technical subjects, political subjects, and subjects somewhere in between. If so, then "this is only possible in technical domains" is a truism, and the question is whether we can make other important domains technical. Economics was in a sense an attempt to make politics, or a large part of politics technical. Philosophy was an attempt to make a different part of politics that touched civic religion and origin myths into a technical subject.
I think that armies under severe short-term performance pressure can use Game-1 within their domains, even though there's a lot of managing humans involved. Preserving Game-1 under conditions of local abundance is harder. It was known to be a major unsolved problem as early as the writing of Plato's Republic.
I think I'd filter my "technical" requirement a bit further. Not "only possible in technical domains", but "only possible for those parts of technical domains for which jargon and terms of art have been developed and accepted". Technical domains that are changing or being explored require a lot of words and interactive probing before any sort of terse communication is possible.
Even armies and trained emergency workers are very limited in the types of information they can transfer quickly and correctly, and that's AFTER a whole lot of training and preparation so that most commands are subroutine triggers, not idea transfers.
I sympathize with the desire to "make important domains technical", but I suspect it's a mix of levels that is ultimately incoherent. In domains where there is a natural feedback loop to precision, it'll happen by itself. In domains where the feedback loops _don't_ favor precision and territory-matching, it won't and can't. One could claim that is the difference between an "important" domain and one that isn't, but one would be falling for the very same problem we're discussing: the word "important" doesn't mean the same thing to each of us.
Note that small groups of shared-context individuals _CAN_ have technical discussions on topics that are otherwise imprecise and socially constructed. It's just impossible for larger or more heterogeneous groups to do so.
Nod.
Speaking for myself now: I think the overall dynamic you're pointing at makes sense, and I'd classify the process by which these words get distorted as 'deceptive', but whether I think it makes sense to call the use of director 'lying' depends on context.
The way you're framing lies/not-lies feels... too prescriptivist to feel like a robust foundation to me. (I generally do not think language prescriptivism makes sense)
Words change over time. Sometimes they change as part of a deceptive game, sometimes just because a new concept came up and someone grabbed an existing word that was close. Sometimes two languages smash into each other because of trade or conquest.
What counts as lying, and what counts as just using the new definition of a word?
I'd count the first several decades of people misusing 'literally' to mean 'a lot' as 'lying, but not deceptive' (assuming they didn't expect their listeners to believe them, and understand it as an exaggeration).
I'd count people who literally (lolsad) don't know what the word 'literally' means and start using it to mean a hyperbolic 'figuratively', because they've only heard the misused version... to not be lying, just using a different word. Which may or may not be bad.
For words like 'manager', whose primary role is to describe a social relationship that only really has meaning insofar as we agree on social reality, it feels even less clear cut than literally.
If you begin in world 1, I might describe the first few people to start calling themselves or their employees 'manager' as lying. By the time there's common knowledge that we're in world 3 I'm not sure that makes sense, if there's a widely agreed upon new definition of manager that includes 'promoted person at a particular point in the hierarchy.' I'm not sure about world 2.
It's possible, admittedly I don't expect this to be the case, for the word to have transformed without a lie_raemon ever being told. I think words are almost always pointing to a cluster rather than a concrete thing, even in most technical domains (where the clusters are tighter but rarely perfect).
[edit: I'm not that confident about the percentage of technical jargon that is perfectly precise.]
So, it seems most useful to me to define lying as 'saying a thing that is outside the current cluster of what the word might reasonably mean.'
If manager initially means 'manage a team of people' and then later means 'manage some people + get some perks' and then means 'only sorta-kinda-manage those people while getting some perks' and eventually just means 'get the perks'...
...I think it's fair to say something distorted and deceptive has happened, but I'm not sure it makes sense to classify any given instance as a lie.
There are "Vice Presidents" of things who don't preside or directly assist in presiding over any assembly of people. There are "Managers" who don't manage anyone, "Chief X officer" where there is no other officer for X, or anyone else doing X. Etc. Many of these terms are formed from words originally denoting a specific function, or explicitly making a comparison with other people, not just indicating a "level" of status, trust, bigshotness, etc.
This feels like it needs some context or background. What’s a “good job”, a “bad job”, a “good title”, or a “bad title”? Could we get examples of these things?
SVP of Operations vs Operations Analyst vs Warehouse Worker
Person who is responsible for coordinating operations for the entire company, vs someone who does much smaller-scale object-level work that affects only a small part of the company's operations, vs someone who does manual labor and mostly just contributes the ability to take simple instructions and operate a human body in lieu of a robot.
This is not the clearest or the best explanation of simulacrum levels on LessWrong, but it is the first. The later posts on the subject (Simulacra and Subjectivity, Negative Feedback and Simulacra, Simulacra Levels and Their Interactions) are causally downstream of it, and are some of the most important posts on LessWrong. However, those posts were written in 2020, so I can't vote for them in the 2019 review.
I have applied the Simulacrum Levels concept often. I made spaced-repetition cards based on them. Some questions are easy to notice and ask in simulacrum-level terms, and impossible to ask otherwise: What things drive the level higher or lower? What level is my conversational partner at? Can I make things more object-level? These questions were hard to notice before, but I think I've been able to answer them in contexts where I otherwise couldn't have.
For someone who reads the Best of 2019 Review books, I think failing to mention the simulacrum levels would be a grave disservice, both because they're a really key concept for understanding the conversations that happened on LessWrong, and for understanding the world in general.
So I'm voting for inclusion. It's not the best of the explanations, but it's good enough, and it's the one we've got.
This concept has definitely proven to be a meaty topic. It is interesting in that it has already been the subject of some ongoing discussion/review in 2020, and I'm not entirely sure how the 2019 Review should work for evaluating Simulacra.
I'm nominating this post in particular, but I think it'd be good during the Review Phase to do some meta-thinking on how to process Simulacra in light of later posts by Zvi.
Given all of the discussion around simulacra, I would be disappointed if this post wasn't updated in light of that discussion.
When I was halfway through this and read about the 4 stages, they immediately seemed to me to correspond to four types of news reporting.
Which isn't exactly what the post is about, but might be a useful analogy, or source of terminology.
Simulacra as free-floating Schelling points could actually be good if they represent mathematical truths about coordination between agents within a reference class, intended to create better outcomes in the world?
But if a simulacrum corresponds to truth because people conform their future behavior to its meaning in the spirit of cooperation does it still count as a simulacrum?
It feels like you're trying to implicitly import all of good intent, in its full potential, stuff it into the word "truth", and claim it's incompatible with the use of Schelling points via the distortions:
In other words I think you're assuming:
good intent = truth = in-principle CDT-verifiable truth (fair)
I don't think [2] is accurate. Certainly some people are using simulacra "cooperatively", but only as a larger defection - that's the whole point: to receive benefits that are unearned per object-level reality.
I agree that all of this behavior isn't a good (central) example of 'immoral behavior', but it's certainly not good. It might be to some degree inevitable, but so are lots of bad things.
I've been discoursing more privately about the corruption of discourse lately, for reasons that I hope are obvious at least in the abstract, but there's one thing I did think was shareable. The context is another friend's then-forthcoming blog post about the politicization of category boundaries.
In private communication, quoted with permission, Jessica Taylor wrote:
In a world where half the employees with bad jobs get good titles, aren't their titles predictively important in that they predict how likely they are to be hired by outside companies? Their likelihood of getting hired is, under these assumptions, going to be the same as that of people with good jobs and good titles, and higher than that of people with bad jobs and bad titles. So, in terms of things like ability to exit (and therefore negotiating ability), there are natural clusters of "people with good titles" and "people with bad titles". (Title is going to have less effect on the likelihood of getting a job than it did before the bullshit titles, but it still has a significant effect.)
Somewhat related to the previous point, bullshit titles actually might end up being in the interest of the people with bad jobs, in the sense that they might want others not to know their job is bad, and destroying the language here makes actual job quality harder to infer from title. People do often have a sense that covering up embarrassing information about people is benevolent to them. It doesn't seem like the argument you have presented directly challenges this sense.
Related, we have words for "rich" vs "poor", which simply name someone's position in the social system of money (similar to title), and we don't have concise ways of talking about the extent to which their wealth reflects material value that they have created, which money is at least partially about tracking (but, is also about cronyism, class interests, and theft at the same time). But, "rich" and "poor" are undeniably predictively useful, even though tracking value creation is also important.
There's some danger that uncritically using the language corresponding to the Schelling points chosen by an unjust equilibrium contributes to maintaining that equilibrium, by making these Schelling points more narratively salient; I think that's more clear in the jobs example than the money example, but it applies to both.
The bullshit title example quite reminds me of Simulacra and Simulation, which I haven't read yet. From Wikipedia:
A possible interpretation here is that signifiers originally achieve meaning in social systems by corresponding with reality (stage 1), but once they're used in a social system, if the system doesn't protect itself, lies will outcompete truth (stage 2); since social systems involve Schelling games, the lies can be important pieces on the playing field even when no one expects them to correspond with reality (stage 3), and eventually people just start treating the statements as pieces on the gameboard, not even as lies (stage 4). Thus, language is destroyed.
Stage 1 is honesty, stage 2 is lies, stage 3 is bullshit, stage 4 is pure power games.
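Jessica's first point can be made concrete with a quick Bayes check; the numbers below (half the workforce in good jobs, everyone with a good job getting a good title, and so on) are illustrative assumptions rather than anything from our exchange.

```python
# Quick Bayes check of the clustering claim above, with assumed numbers:
# half the workforce has good jobs, everyone with a good job gets a good
# title, and half the people with bad jobs also get good (bullshit) titles.

p_good_job = 0.5
p_good_title_given_good_job = 1.0
p_good_title_given_bad_job = 0.5   # the title inflation

p_good_title = (p_good_title_given_good_job * p_good_job
                + p_good_title_given_bad_job * (1 - p_good_job))

# What does a title tell an outside company about the underlying job?
p_good_job_given_good_title = p_good_title_given_good_job * p_good_job / p_good_title
p_good_job_given_bad_title = 0.0   # under these assumptions, no good job has a bad title

print(f"P(good job | good title) = {p_good_job_given_good_title:.2f}")  # 0.67
print(f"P(good job | bad title)  = {p_good_job_given_bad_title:.2f}")   # 0.00
```

The title still carries real information (0.67 vs 0.00), just less than the 1.00 vs 0.00 it carried before the bullshit titles - matching the parenthetical at the end of the first paragraph.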
I replied:
Let's apply this to the specific example.
In world 1, companies need supervisors to coordinate projects, promote people who seem generally good at things to those roles because it's important for profitability to have smart conscientious people in charge, and have different titles for supervisory and managerial roles vs direct labor roles in order to keep track of who's doing what. As a side effect, companies hoping to hire someone for a higher-paying supervisory role will favor applicants whose title reflects that they've already (a) been selected for such a role by someone with skin in the game, and (b) done some learning on the job so they already know how to manage. As another side effect, job title is used for external social sorting, since people on similar life trajectories have more in common, people who want to extract money will want to pay more attention to people with higher wages and expected lifetime income, etc.
In world 2, companies have started offering managerial titles to employees as a perk so that they can benefit from the desirable side effects, lessening the title's usefulness for tracking who's doing what work, but possibly increasing its correlation with some of the side effects, since the good (i.e., effective at producing the desired side effects) titles go to the people who are most skilled at playing the game. It's common advice that one of the things you should negotiate if you're an earlyish hire at a startup is job title, since a sufficiently impressive title will create path-dependency making it awkward not to make you a major executive if and when the startup successfully grows.
In short, in world 2, the system is wireheading itself with respect to titles, but in a way that comes with real resource commitments, so people who can track the map and reality separately, and play on both gameboards simultaneously, can extract things through judicious acquisition of titles.
In world 3, the system starts using titles to wirehead its employees. Titles like "Vice President of Sorting" are useless and played out in the industry, interviewers know to ask what you actually did (and probably just look at your body language, and maybe call around to get your reputation, or just check what parties you've been to), but maybe there's some connotative impressiveness left in the term, and you feel better getting to play the improv game as a Vice President rather than a Laborer. You're given social permission to switch your inner class affiliation and feel like a member of the managerial class. Probably mom and dad are impressed.
In world 4, some of the practices from world 3 are left, and it's almost universally understood emotionally that they don't refer to anything, but there's nothing real to contrast them with, so if you tell a story about yourself well enough, people will go along with it even though they know that all the "evidence" is meaningless. E.g. Trump manages to play a great businessman on TV, and this is (plus a starting endowment of money and some basic primate cunning) enough to start off his presidential run in the genre "successful businessman coming to clean up Washington." Elizabeth Holmes was also playing in world 4.
Note that as we progress through these worlds, the title becomes less useful to people like [friend]. I think this needs to be made very explicit for the argument to register to LessWrongers. The sort of person who can hold a bullshit job maybe does better in world 2 than in world 1, but [friend] doesn't play that game, he wants to do work that matters on the object level and be justly rewarded for it. (Though he's currently, understandably too distracted by cultural forces threatening to destroy world 1 altogether to focus on his object-level work.)
If World 3 were to arrive uniformly it wouldn't be very useful to anyone, but it doesn't - it always arrives unevenly, so that in the early stages while cynical managers are still metabolizing world 2 into world 3, people who can most savvily leverage class privilege into bullshit jobs know which titles to stay away from, and in the late stages outright con artists bring about world 4 when enough of the power landscape has been metabolized into world 3.
This is notably similar to the stages of a financial speculative bubble, though I think there are some differences that would be worth modeling.
Jessica's reply:
This all seems right, thanks for the additional explanation. The naive version of the Baudrillard formulation (which is naive since I haven't read the book) unfortunately assumes that worlds are uniform, and which world everyone is in is mutual knowledge, when actually some people are much more savvy than others (in terms of both knowing what game is being played and skill at playing the game), exploiting the labor of people who think they are in world 1 when actual material/informational work is necessary, or when improv of such is called for.
*****
Related: There is a war, The Scams are Winning, Anatomy of a Bubble, On the construction of beacons, Actors and Scribes, words and deeds, Naive epistemology, savvy epistemology