The curse of identity
So what you probably mean is, "I intend to do school to improve my chances on the market". But this statement is still false, unless it is also true that "I intend to improve my chances on the market". Do you, in actual fact, intend to improve your chances on the market?
I expect not. Rather, I expect that your motivation is to appear to be the sort of person who you think you would be if you were ambitiously attempting to improve your chances on the market... which is not really motivating enough to actually DO the work. However, by persistently trying to do so, and presenting yourself with enough suffering at your failure to do it, you get to feel as if you are that sort of person without having to actually do the work. This is actually a pretty optimal solution to the problem, if you think about it. (Or rather, if you DON'T think about it!) -- PJ Eby
I have become convinced that problems of this kind are the number one problem humanity has. I'm also pretty sure that most people here, no matter how much they've been reading about signaling, still fail to appreciate the magnitude of the problem.
Here are two major screw-ups and one narrowly averted screw-up that I've been guilty of. See if you can find the pattern.
- When I began my university studies back in 2006, I felt strongly motivated to do something about Singularity matters. I genuinely believed that this was the most important thing facing humanity, and that it needed to be urgently taken care of. So in order to become able to contribute, I tried to study as much as possible. I had had trouble with procrastination, and so, in what has to be one of the most idiotic and ill-thought-out acts of self-sabotage possible, I taught myself to feel guilty whenever I was relaxing and not working. Combine an inability to properly relax with an attempted course load that was twice the university's recommended pace, and you can guess the results: after a year or two, I had an extended burnout that I still haven't fully recovered from. I ended up completing my Bachelor's degree in five years, which is the official target time for doing both your Bachelor's and your Master's.
- A few years later, I became one of the founding members of the Finnish Pirate Party, and on the basis of some writings the others thought were pretty good, got myself elected as the spokesman. Unfortunately – and as I should have known before taking up the post – I was a pretty bad choice for this job. I'm good at expressing myself in writing, and when I have the time to think. I hate talking with strangers on the phone, find it distracting to look people in the eyes when I'm talking with them, and have a tendency to start a sentence over two or three times before hitting on a formulation I like. I'm also bad at thinking quickly on my feet and coming up with snappy answers in live conversation. The spokesman task involved things like giving quick statements to reporters ten seconds after I'd been woken up by their phone call, and live interviews where I had to reply to criticisms so foreign to my thinking that they would never have occurred to me naturally. I was pretty terrible at the job, and finally delegated most of it to other people until my term ran out – though not before I'd already done noticeable damage to our cause.
- Last year, I was a Visiting Fellow at the Singularity Institute. At one point, I ended up helping Eliezer in writing his book. Mostly this involved me just sitting next to him and making sure he did get writing done while I surfed the Internet or played a computer game. Occasionally I would offer some suggestion if asked. Although I did not actually do much, the multitasking required still made me unable to spend this time productively myself, and for some reason it always left me tired the next day. I felt somewhat unhappy with this, in that I felt I was doing something that anyone could do. Eventually Anna Salamon pointed out to me that maybe this was something that I was more capable of doing than others, exactly because so many people would feel that "anyone" could do this and thus would prefer to do something else.
It may not be immediately obvious, but all three examples have something in common. In each case, I thought I was working for a particular goal (become capable of doing useful Singularity work, advance the cause of a political party, do useful Singularity work). But as soon as I set that goal, my brain automatically and invisibly re-interpreted it as the goal of doing something that gave the impression of doing prestigious work for a cause (spending all my waking time working, being the spokesman of a political party, writing papers or doing something else few others could do). "Prestigious work" could also be translated as "work that really convinces others that you are doing something valuable for a cause".
We run on corrupted hardware: our minds are composed of many modules, and the modules that evolved to make us seem impressive and gather allies are also evolved to subvert the ones holding our conscious beliefs. Even when we believe that we are working on something that may ultimately determine the fate of humanity, our signaling modules may hijack our goals so as to optimize for persuading outsiders that we are working on the goal, instead of optimizing for achieving the goal!
You can see this all the time, everywhere:
- Charity groups often have difficulty attracting people to do much-needed but boring and unprestigious work, and even people who think they care about the cause may find it difficult to do such work.
- People may think that they're motivated to study because they want to increase their earnings, but then they don't actually achieve much in their studies. In reality, they might only be motivated to give the impression of being the kind of person who studies hard in order to increase their earnings, and merely appearing to study hard is enough to give this impression.
- Countless people intend to become a published author one day, but don't actually work to polish their writing to achieve this: they want to be writers, but they don't want to write.
- Self-help techniques may seem really useful at first, but then the person loses the motivation to use them consistently, even if the techniques would help them achieve their goal. They don't actually want to achieve their goal; they just want to be seen working toward it. Looking at various self-help techniques and trying some out a couple of times can be enough to fulfill this goal. Not actually achieving it also lets people go buy more self-help books and thereby maintain that self-image.
- Likewise, some people try out lots of self-help techniques and think they're making great progress, or read Less Wrong and report it helping them with procrastination, when they aren't actually any better than before and don't have any objective ways of measuring their progress.
- Likewise, some people only keep talking about solving problems all day and seem smart for having endlessly analyzed them, but never actually do anything about them. (Some people write posts like these and then comment on them, instead of solving their issues.)
- People commit altruistic acts, and then act selfishly and inconsiderately later in the day, once they feel that they have been good enough that they've earned the right to be a little selfish. In other words, they estimate that they've been good enough at presenting an altruistic image that a few transgressions won't threaten that image.
- People often choose to not find out about ways of helping others, or attempt to remain purposefully ignorant of the ways in which their actions hurt others. They are often uninterested in optimal charity, and prefer to just establish their nature as a good person by donating to some popular charity, regardless of its effectiveness. Groups that try to make others more aware of the consequences of their actions (e.g. animal rights activists presenting evidence of the way factory animals are treated, people talking about optimal charity) are often treated with scorn and derision. AGI researchers may purposefully avoid finding out about and thinking about the risks of AGI. All of these actions help establish plausible deniability: it's easier for a person to claim and think that they're a good person if they can show that they didn't know about the negative consequences of their actions.
- The freelancer's curse: for many people, working at home is much harder than working at an office, for there is no social environment pushing you to work full days. A freelancer may do a little bit of work and then feel too tired to continue, or they may be slightly sick and feel like they can't work today, or constantly have their mind claim that something else is more important for their productivity right now. "I need to figure out if I’m really hungry or—catch this—bored with what I’m doing. If I’m bored, I think I’m hungry, because that’s one of the few things I will get up from my desk to deal with. If I need a meal, I eat. But my subconscious loves to trick me (and my hips) by convincing me to leave when I’m not through. Often, the “I’m hungry” reaction comes when I’m working on something particularly difficult or something I don’t want to do. Again, it took many months (and too many calories) to figure this one out. Now, before I get something to eat, I ask myself this: Do I like what I’m working on? If the answer is no, I generally stay at my desk." -- Kristine Kathryn Rusch
- Skeptics, priding themselves on an ability to think clearly and debunk pseudoscience, may actually start engaging in undiscriminating skepticism, attacking anything that feels vaguely pseudoscientific regardless of its actual merit.
- Intellectuals may want to have an identity that sets them apart from others, becoming intellectual hipsters and meta-contrarians and question things just for the sake of questioning the accepted wisdom; more generally, people will do things just for the sake of being different.
- And many others, like ~all of Robin Hanson's posts on signaling or hypocrisy.
There's an additional caveat to be aware of: it is actually possible to fall prey to this problem while purposefully attempting to avoid it. You might realize that you have a tendency to only want to do particularly prestigious work for a cause... so you decide to only do the least prestigious work available, in order to prove that you are the kind of person who doesn't care about the prestige of the task! You are still optimizing your actions on the basis of expected prestige and being able to tell yourself and outsiders an impressive story, not on the basis of your marginal impact.
Comments (295)
I guess you can't want to want stuff. When you genuinely want something (not prestige but an actual goal) you'll easily be in the "flow experience" and lose track of time and actually progress toward the goal without having to force yourself. Actually you have to force yourself to stop in order to sleep and eat because you'd just do this thing all day if you could! Find the thing where you slip into flow easily and do the most efficient thing that's at the same time quite similar to this activity.
When I was younger, I thought that I wanted to be a writer because I wanted to be the sort of person who was passionate about something, and since I hadn't found a passion yet and was pretty good at writing, it seemed like a good vessel for that drive. It took me quite a while to realize that I saw it as a chore and never really wanted to write.
I don't see anything inherently wrong with doing things for the prestige, though, just with lying to yourself about your motivations.
"We run on corrupted hardware: our minds are composed of many modules, and the modules that evolved to make us seem impressive and gather allies are also evolved to subvert the ones holding our conscious beliefs. Even when we believe that we are working on something that may ultimately determine the fate of humanity, our signaling modules may hijack our goals so as to optimize for persuading outsiders that we are working on the goal, instead of optimizing for achieving the goal!"
I'm sorry; while I agree wholeheartedly with this assessment, your article is more an interesting examination of this principle than a solution, or even a new assessment. Understanding that we are flawed, selfish creatures is only the first step of many toward getting anywhere, one that most of us will never get past.
I've never tried it myself, but to offer a solution to this mess, I think it would be interesting to examine the effect of Radical Honesty upon such problems.
Another way of putting it: when and where, exactly, is privacy justified?
The discussions about signalling reminded me of something in "A Guide To The Good Life" (a book about stoicism by William Irvine). I remembered a philosopher who wore shabby clothes, but when I went looking for the quote, what I found was: "Cato consciously did things to trigger the disdain of other people simply so he could practice ignoring their disdain." In stoicism, the utility which is to be maximized is a personal serenity which flows from your secure knowledge that you are spending your life pursuing something genuinely valuable.
Let me ask a rude question: What makes you so sure you want to "do good"? If you do, this would be a most unusual appetite. People do what they want for other reasons, and then they explain it to themselves and others as "doing good." The motivation to "do good" isn't a primary motive. How could it be? From where might it come? To root that sort of motive in nature, one pretty much has to invent some form of moral realism; you must cross the "is" versus "ought" chasm. Now's not the time to address the moralistic illusion, but without the prior need to morally justify one's sense of seeking right, I think moral realism would appear the fantasy it is.
One tries to do right but ends up seeking status. Then one asks: how do I weaken or redirect my status seeking? That may seem the obvious problem, but then why would someone who is smart, studies rationality, and tries to apply his conclusions end up failing to achieve his goals?
I don't buy the cynical line of that Dirty Old Obfuscator Robin Hanson: that status is our primary drive. This is a transparent rationalization for its being his primary goal. There are more important drives, call it effectance, competence, or Nietzsche's "will to power." Even "self-actualization" may do in a pinch. You obviously haven't succeeded in engaging any deep interests (in the sense of "intellectual interests," not the sense of "source of comparative advantage"). As it looks to me, that's your problem.
You're right, of course, that signaling status often distracts from what's productive. And perhaps everyone needs to work on being distracted less. Theoretically, this could be accomplished in one of two ways. One might 1) observe the environmental triggers for status-oriented thinking and decrease one's exposure to them; or 2) find ways to gratify status striving through the objectively more valuable activity. Only 2 seems to have been discussed, but I think it's less important; even, unworkable. The problem is that indulging status drives, like most nonhomeostatic (appetitive) drives, increases their strength. If you recognize status seeking as a distraction, you're probably better off limiting your exposure to what precipitates it. (Serving as head of a political party is certainly well-calculated to be an effective trigger of status seeking.)
But, while these elements of truth impart to your analysis a sense of truthiness, they don't apply to your situation as you describe it. You weren't merely distracted; you directly subverted your own goals. No situationist tinkering will address a problem that really lies elsewhere. The problem is, it seems to me, that you are so concerned with what you "should do," ethically speaking, that either you don't recognize your intellectual interests or you refuse to follow them.
It is easy to become intellectually enchanted with an idea, whether the Singularity, the Pirate Party, or (for that matter) a religious ideal. But this doesn't mean you believe it with the certainty that your intellect claims. Your balking at the goals you set yourself suggests that beneath your conscious intellect, you are at best indifferent to them; I would go further and say you're probably downright hostile to your professed goals.
Dangit I wish I knew who this was. I hope their disassociation isn't a sign of evaporative cooling in action.
Fortunately the title of the page gives it away: it's srdiamond, who I believe still posts occasionally as common_law.
OK, that's got to be a bug..
You're right in a sense - I have been doing things that I felt were prestigious and world-saving, not necessarily the things that I had a deep, inherent interest in. But when I say that I'm now trying to concentrate on the things that I have a comparative advantage in, I mean things that I have some talent in and which I have a deep, inherent interest in. Being so interested in something that one is naturally drawn to do it, and doesn't need to force oneself to do it while gritting one's teeth, is a big part of having a comparative advantage in something.
Built in, like all other drives?
What's built in, plausibly, are specific drives (to comfort a crying baby, to take a clear example) whose gratification overlaps what we're inclined to call good. But these specific drives don't congeal into a drive to do ethical good: "good" isn't a natural property.
Now, you could say that "doing good" is just a "far" view of gratifying these specific drives. But I don't think that's the way it's used when someone sets out to "do good," that is, when they're making "near" choices.
I would tend to take the position that to "do good" is simply to take actions that satisfy (in the sense of maximizing or satisficing output utility, or some approximation thereof) some fixed function of likely great complexity, which we refer to by the handle "morality."
Obviously, we only take those actions because of our luck (in a moral sense) in having evolved to be motivated by such a function. And we are strongly motivated by other things as well. But I don't think it's reasonable to state that because we are motivated, therefore we are not motivated by morality. Of course, you might call me a moral realist, though I don't believe that morality is written in the stars.
This is probably what I've been struggling with the most during my life. I'm starting to feel like I'm close to reaching a balance in overcoming it though.
Early on my primary goal in life was being Good. Along with a bunch of other traits, I deemed status seeking and signalling as Evil and strove never to do it.
That... is hard to do and of course I didn't succeed fully. What I did manage was becoming terribly passive and self-effacing, I second-guessed any activity I engaged in even as I was doing it and abandoned anything I recognized as being signalling or status increasing unless I could come up with a convincing reason why it was objectively good. In the last few years I have reconsidered somewhat. I still have a gut instinct against it but I slowly changed my personality to accept and then embrace it since I recognized that would make me a better person.
I guess this is adding to the other comments that, yes, status and signalling are a mind-killer and the first step is to notice and acknowledge that you are participating in it. The second step isn't to suppress it, though, but to shape it and use it to fit who you want to be.
I still hate bragging*, so to balance the positive signaling I just did I'll add in that another, less idealistic part of my passive behavior was, and probably still is, the anti-motto "If you don't try, no one can judge your goals or blame you for failing".
*and hate that saying so is itself bragging** :)
**recursively
You can't opt out of signalling any more than you can opt out of going to the bathroom. We all learn as children how to manage the scatological aspects of living on earth. Status is a completely analogous arena, except that everyday thoughts about it are even more sublimated and subconscious. Everyone knows the limits of physical hygiene. The limits of moral hygiene are no less biological or immutable, and no less unpleasant to discuss frankly.
Sure, but you could start optimizing to impress an all knowing, completely rational historian from the far future.
Virtues I've instilled in myself that I've found useful: don't be a hypocrite, don't be one of those people who's all talk and no action, optimal behavior is a virtue (while keeping in mind that optimal behavior may change based on your emotional state, for example, if you're worn out, it's likely that the optimal action is to focus on rejuvenating yourself instead of working more).
Another thing that had a positive impact on my personality was spending a lot of time playing management oriented computer games like Railroad Tycoon and Civilization, then deciding that I was wasting my time and that I wanted to apply the same optimization oriented thinking that I was using in the game to real life.
I don't know if an imaginary person is enough for our instincts. We should also seek the company of rational people, so our instincts can focus on them.
The problem with signalling is that it can probably subvert any activity, and if something is called a "virtue", then it seems like a very good target. If you are not careful, you may find that you, for example, describe yourself as a worse person than you really are, to signal high non-hypocrisy.
By the way, I would like to read an article about applying specific lessons to specific computer games in real life.
Citation needed. Unlike the gastrointestinal tract, brains have a built-in capacity to change their mapping between input and output over time so I can't accept that it's impossible to do anything about behavior X just because it's impossible to do anything about poop.
What is behavior X?
You can't opt out of signaling, but you can try to avoid having it hijack your reasoning.
Might "social" be more accurate?
Equally accurate and less specific. (Unless the phrase has another connotation?) I had in mind Sotala's discussion of status-seeking as an obstacle to doing good.
I execute binary quantum noise and laugh maniacally!
In multiverse, binary quantum noise execute you!
One of my new favorite comments :-)
Reference.
Internet trolling. Writing gorefics. Neglecting hygiene. Lying to strangers.
And for the same thing but for rationality instead of morality: engaging in minor superstitions, fighting dirty in internet flame wars, doing sloppy math.
Pee in the sink.
Is this one for rationality or morality? :p
My intuition is that passive things such as this and the procrastination Gabriel mentioned won't work.
Clever! I will think about it some rather than giving my snap judgement.
Another kinda related trick, although it might be dangerous and hard to pull off:
Convince yourself of certain things you know to be complete bull**, and forget which ones they are. This way you'll KNOW you can't rely on cached thoughts, and that your being entirely convinced something is true doesn't imply it actually being true.
There's a danger of simply getting used to being evil. And it seems quite likely to me, based on analogy between morality and conscientiousness. When I spend some time doing useful work (good), the resulting feeling of satisfaction makes me much more tempted to start wasting time (evil), because I 'earned' it. However, procrastinating mostly causes me to want to procrastinate even more.
One need only feel "evil", rather than actually be "evil". Hypothetical: try to imagine yourself as a demonic being, wearing human skin. Hold yourself to the silly superstitions that people believe of such beings; they cannot enter homes uninvited, are always out to make "bargains", etc. Limit yourself to the harmless categories of these sorts of behaviors, and see how it affects your behavior and thinking.
The point of this being that it magnifies your personal feelings of "wickedness" without actually producing those results.
Having independently developed and implemented a related strategy with success, I would like to point out the specific nuance upon which it is most productive to focus:
You are in disguise, deep in enemy territory, and you will have to maintain this disguise for years yet to come.
The slightest slip-up could reveal you, even if no one seems to be looking, or even if the people you know are looking aren't the slightest bit suspicious. Making things up as you go along is not good enough for the long game; infernal instincts could slip out at any time. Repression just means they'll slip out in ways you don't expect. Anything out of character (and of course your character is a paragon, a saint, always generous and wise) might be memorable, anything memorable might be repeated, and anything repeated might reach the ears of the inquisitor who is less than a byte away from identifying you.
The good news is, you know your own true name and the inquisitor doesn't, so it's possible to get away with indulging your unique nature... so long as you're subtle about it. Identify your urges and pursue any reasonable opportunity to indulge them. 'Reasonable' opportunity means a situation where: 1) absolutely nobody gets hurt as a result of your indulgence ("I didn't know" is no excuse, since the wise and benevolent person you're pretending to be would have known), or even feels like they're getting hurt 2) at least one other person benefits from it more than you do, by their own assessment 3) the urge is satisfied in a way that will linger, rather than dropping off suddenly, to minimize desensitization.
There will be situations where advancing your own interests above all others seems like the only alternative to mewling incompetence, or where you can only choose between who to hurt. Be especially careful at such times, and do not allow yourself to savor them. The inquisitor is watching.
Of course, this very easily backfires. If you dislike feeling evil, then feeling evil takes up energy and doesn't leave you any to spare for altruistic acts. Alternatively, it might twist your self-image so that you think you actually are evil and start to commit evil acts and become less interested in good ones... or you think that you aren't doing things that make you feel bad enough yet, so you start doing things that are actually evil.
I expect that getting this to work would require quite an intricate web of self-deception, and most who tried this would simply fail, one way or another.
When my trip to the Dominican Republic was ending, I was waiting for a bus to take me to the airport. I saw a "limpiabota," a shoe-shine boy, and decided it was a good time to get the mud and dirt off of my hiking boots, regular shoes, and dressier shoes.
They typically ask ten pesos for a shine, but tourists might be asked to pay a few times that and natives five to ten pesos. In any case these are some of the poorest boys there, and people might give them a five peso tip on top of whatever they ask. They are desperate for the money and are selling a 'luxury' good that the purchaser doesn't need to buy, so it is possible to negotiate with them. I practiced my Spanish talking him down from the asking price of 30 pesos for the three pairs, engaging in a tough negotiation, turning away several times, and eventually getting him down to seven pesos for the three pairs. I let him shine the shoes I was wearing and gave him the other two pairs, telling him I put more than seven pesos in the shoe and it was a tip for him to take.
At the airport, everything was sold in dollars, not that I thought I'd much want to buy anything there anyway. I still had a good deal of money left in Dominican pesos, so I put it all in my shoes. A few thousand pesos. The thought of the huge cut they take at the currency exchange counter galls me.
I have a chintzy WWLVD bracelet; it seems to work OK. (Lieutenant Verrall.)
It's important not to try and emulate someone actually important like Stalin, as that would entail mostly signing paperwork and sleeping at your desk in your boots amid lapses of mania.
Eh. I suspect you're over-thinking it. Capturing the feeling in order to cultivate a proper emotional balance so as to achieve an outcome is a measurably useful phenomenon. If it doesn't work, stop doing it.
Right - the primary mechanism is more through one's self-image than explicit status-seeking.
I've been reading Robin Hanson for five years or so and while I could often notice tendencies he describes that I found in myself, the comprehensiveness of the problem just hadn't come home to me. Just about everything I do is motivated in whole or in a large part by status seeking, and for some reason I didn't know that until just now.
Humans are social animals. We try to rationalize our motives based on what we want and on how our actions look to others. We even lie to ourselves to reach a middle ground.
For example, how many people actually say they work hard because they are greedy? You won't find such people explaining their motives that way. Instead they say they are hard-working, want to excel, and are gifted at what they do. Why couldn't you be a mediocre person who is greedy and just works very hard to get loads of money?
It isn't as comforting, though, and society often also rewards us if we explain our motives a bit better.
Question: do you have any advice for people who want "to do something about Singularity" but are afraid of falling into the trap you describe?
Just spending more time trying to figure out whether your actions actually make sense ought to help. More specifically, try to e.g. go through the steps listed at Humans are not automatically strategic to figure out what your comparative advantage is. Also, like other people have suggested, try to align your status-seeking drive with doing things that are actually beneficial. If you're going to embark on a life-long quest, you'll need every possible motivational tool you can use, status considerations being one of them.
If you have data on whether studying is an optimal way to increase earnings, let me know or link me in the right direction, because it may have a significant impact on what I'm doing.
This is an excellent post.
I'll toss in another example: volunteering vs. donating to charity. People like the idea of volunteering, even when they could do more good by working longer hours and donating the money to charity.
When I first entered college, I had the idea that I'd go to med school and then join Doctors Without Borders. Do a lot of good in the world, right? The problem was that, while I'm good at a lot of things, biology is not my strong suit, so I found that part of the pre-med requirements frustrating. I ended up giving up and going to grad school in philosophy.
To maximize my do-gooding, I would have been better off majoring in Computer Science or Engineering (I'm really, really good at math), and committing to giving some percentage of my future earnings at a high-paying tech job to charity. Alas...
Now whenever I meet someone who tells me they want to go into a do-gooding career, I tell them they'd be better off becoming lawyers so they can donate lots of money to charity. They never like this advice.
This is what Warren Buffett has done. And he quite explicitly over the years said he wasn't going to donate while getting richer because his ability to compound his wealth was above average and so he would do more net good giving it away when he was done. (As it turns out, he gave away stock in his company, which has a very low effect on "shrinking the pie" that he is working with.)
It's pretty interesting that you describe yourself as really good at math but went into a career that wasn't math-oriented. In myself, I've observed a trend of regarding things that I'm already good at as things that aren't especially interesting or important. Additionally, part of me likes the idea of being able to signal having a high aptitude at something that I don't bother to exploit. I wonder how many great scientists and creative types humanity has lost out on as a result of people ignoring the things they're good at because they seem too easy.
I seem to recall hearing somewhere an anecdote about a scientist who decided to dabble in some particular field. He immediately got a lot of attention and cites for his early papers, and then decided that if he could excel in the field this easily, the field wasn't worth his time.
Which is basically a terrible idea (on his part, not yours, obviously). If he goes back to a field where it is hard to contribute, it is likely that the field is either further into diminishing returns or already saturated with scientists. If the field where he can excel easily is worthwhile as a science in general and gives a satisfactory level of prestige, then staying in it is best both for himself and for science in general. If he needs a challenge, he can just find the hardest, most critical part of the field and take that to the next level. If the whole field is not yet a fully solved problem, then there is plenty of challenge remaining.
Becoming a lawyer is an extremely bad recipe for becoming rich these days.
Yeah. What are the MD specialties that make all the money? Radiology, Oncology...
This is probably a very dangerous idea but I think it's worth mentioning if only for the purpose of discussion:
What if you completely sabotaged your signalling ability by making yourself appear highly undesirable? Then your actions could not be for the purpose of signalling, as signalling would be futile.
I've seen this tried, for this stated purpose. My impression of the results was that it did not at all lead to careful, on-the-margins consequentialist thinking and doing. Instead, it led to a stressed out, strung out person trying desperately to avoid more pain/shame, while also feeling resentful at the world and themselves, expecting a lack of success from these attempts, and so acting more from local self-image gradients, or drama-seeking gradients, than from any motives attached to actual hope of accomplishing something non-immediate.
"Signaling motives" can be stuck on a scale, from "local, short-sighted, wire-heading-like attempts to preserve self-image, or to avoid immediate aversiveness or seek immediate reward" to "long-term strategic optimization to achieve recognition and power". It would be better to have Napoleon as an ally than to have a narcotics addict with a 10 minute time horizon as an ally, and it seems analogously better to help your own status-seeking parts mature into entities that are more like Napoleon and less like the drug addict, i.e. into entities that have strategy, hope, long-term plans, and an accurate model of the fact that e.g. rationalizations don't change the outside world.
Heyo, after your correction I still think the main thrust of my reply isn't changed. Your correction mostly just makes me wrong to think that you argued that people that disendorse their status-seeking parts don't have long-term plans, rather than that their long-term planning rationality is worsened. I think I still disagree that their planning is worsened though, but my disagreement is sort of subtle and maybe not worth explaining given opportunity costs. I also stand by my main and mostly-orthogonal points about the importance of not dealing with demons (alternatively, "not making concessions to evil" or summat);—another person you could talk to about that theme would be Nick Tarleton, whose opinion is I think somewhere between ours but is (surprisingly) closer to mine than yours, at least recently. He's probably better at talking about these things than I am.
Thanks for the brief convo by the way. :)
Upon reflection, T.S. Eliot can say it better than I can:
I would not want ha-Satan as my ally, even if I trusted myself not to get caught up in or infected by his instrumental ambitions. Still less would I want to give him direct read/write access to the few parts of my mind that I at all trust. Give not that which is holy unto the dogs, neither cast ye your pearls before swine, lest they trample them under their feet, and turn again and rend you. Mix a teaspoon of wine in a barrel of sewage and you get sewage; mix a teaspoon of sewage in a barrel of wine and you get sewage. The rationality of an agent is its goal: if therefore thy goal be simple, thy whole self shall be full of rationality. But if thy goal be fractured, thy whole self shall be full of irrationality. If therefore the rationality that is in thee be irrationality, how monstrous is that irrationality!
Seen at a higher level you advise dealing with the devil—the difference in power between your genuine thirst for justice and your myriad egoistic coalitions is of a similar magnitude as that between human and transhuman intelligence. (I find it disturbing how much more cunning I get when I temporarily abandon my inhibitions. Luckily I've only let that happen twice—I'm not a wannabe omnicidal-suicidal lunatic, unlike HJPEV.) Maybe such Faustian arbitrage is a workable strategy... But I remain unconvinced, and in the meantime the payoff matrix asymmetrically favors caution.
Take no thought, saying, Wherewithal shall I avoid contempt? or, Wherewithal shall I be accepted? or, Wherewithal shall I be lauded and loved? For true metaness knoweth that ye have want of these things. But seek ye first the praxeology of meta, and its rationality; and all these things shall be added unto you. Take therefore no thought for your egoistic coalitions: for your egoistic coalitions shall take thought for the things of themselves. Sufficient unto your ten minutes of hopeless, thrashing awareness is the lack of meta thereof.
Er, nope.
Humans' goals are fractured. But this has little to do with whether or not they are rational.
You don't understand. This "rationality" you speak of is monstrous irrationality. And anyway, like I said, Meta knoweth that ye have Meta-shattered values—but your wants are satisfied by serving Meta, not by serving Mammon directly. Maybe you'd get more out of reading the second half of Matthew 6 and the various analyses thereof.
You may be misinterpreting "the rationality of an agent is its goal". Note that the original is "the light of the body is the eye".
To put my above point a little differently: Take therefore no thought for godshatter: godshatter shall take thought for the things of itself. Sufficient unto the day is the lack-of-meta thereof.
For clarity's sake: Yes, I vehemently dispute this idea that a goal can't be more or less rational. That idea is wrong, which is quickly demonstrated by the fact that priors and utility functions can be transformed into each other and we have an objectively justifiable universal prior. (The general argument goes through even without such technical details of course, such that stupid "but the choice of Turing machine matters" arguments don't distract.)
Really? How?
Oh, maybe you mean that they both have the type Universe -> Real? Although really it's prior :: Universe -> [0, 1] and utilityFunction :: Universe -> Real, assuming we have a discrete distribution on Universes. And anyway, that's no more justification for substituting a prior for a utilityFunction than for substituting tail :: [a] -> [a] for init :: [a] -> [a]. Unless that's not what you mean.
If you change your utility function and your prior while keeping their product constant, you'll make the same decisions. See E.T. Jaynes, Probability Theory: The Logic of Science, chapter “Decision theory -- historical background”, section “Comments”.
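Jaynes' invariance can be checked with a toy decision problem (the states, actions, and payoff numbers below are my own made-up illustration, not anything from the book): rescale the prior by any positive function and divide the utilities by the same function, and the preferred action comes out unchanged.

```python
# A toy decision problem: states with a prior, and actions whose payoff
# depends on the state. Expected utility of action a is sum_s p(s) * U(a, s).
states = ["rain", "sun"]
actions = ["umbrella", "no_umbrella"]
prior = {"rain": 0.3, "sun": 0.7}
utility = {("umbrella", "rain"): 1.0, ("umbrella", "sun"): 0.2,
           ("no_umbrella", "rain"): -1.0, ("no_umbrella", "sun"): 0.5}

def best_action(prior, utility):
    def eu(a):
        return sum(prior[s] * utility[(a, s)] for s in states)
    return max(actions, key=eu)

# Rescale the prior by an arbitrary positive function g (renormalizing with z)
# and divide the utilities by g: the per-state product prior * utility is
# unchanged, so every expected-utility comparison, and hence every decision,
# is unchanged too.
g = {"rain": 5.0, "sun": 0.4}
z = sum(prior[s] * g[s] for s in states)
prior2 = {s: prior[s] * g[s] / z for s in states}
utility2 = {(a, s): utility[(a, s)] * z / g[s] for a in actions for s in states}

assert best_action(prior, utility) == best_action(prior2, utility2)
```

Since only the product prior[s] * utility[(a, s)] enters any expected-utility comparison, behavior pins down only that product, which is the transformability under discussion.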
Right, but that still isn't really a way to turn a prior into a utility function. A prior plus a set of decisions can determine a utility function, but you need to get the decisions from somewhere before you can do that.
Right, but you never see just a prior or just a utility function in an agent anyway. I meant that within any agent you can transform them into each other. The concepts of "prior" and "utility function" are maps, of course, not metaphysically necessary distinctions, and they don't perfectly cut reality at its joints. Part of what's under debate is whether we should use the Bayesian decision theoretic framework to talk about agents, especially when we have examples where AIXI-like agents fail and humans don't. But anyway, even within the naive Bayesian decision theoretic framework, there's transformability between beliefs and preferences. Sorry for being unclear.
To check if we agree about some basics: do we agree that decisions and decision policies—praxeology—are more fundamental than beliefs and preferences? (I'm not certain I believe this, but I will for sake of argument at least.)
I don't know. The part I took issue with was saying that goals can be more or less rational, just based on the existence of an "objectively justifiable" universal prior. There are generally many ways to arrange heaps of pebbles into rectangles (assuming we can cut them into partial pebbles). Say that you discover that the ideal width of a pebble rectangle is 13. Well... you still don't know what the ideal total number of pebbles is. An ideal width of 13 just gives you a preferred way to arrange any number of pebbles. It doesn't tell you what the preferred length is, and indeed it will vary for different numbers of total pebbles.
Similarly, the important thing for an agent, the thing you can most easily measure, is the decisions they make in various situations. Given this and the "ideal objective solomonoff prior" you could derive a utility function that would explain the agent's behaviour when combined with the solomonoff prior. But all that is is a way to divide an agent into goals and beliefs.
In other words, an "objectively justifiable" universal prior only enforces an "objectively justifiable" relation between your goals and your actions (i.e. num_pebbles = 13 * length). It doesn't tell you what your goals should be any more than it tells you what your actions should be.
I don't know if any of that made sense, but basically it looks to me like you're trying to solve a system of equations in three variables (prior, goals, actions) where you only have two equations (prior = X, actions = prior * goals). It doesn't have a unique solution.
Let's play rationalist Taboo!
Care to enlighten me exactly on just what it is you're disputing, and on just what points should be discussed?
Meh. The goal of leading to sentient beings living, to people being happy, to individuals having the freedom to control their own lives, to minds exploring new territory instead of falling into infinite loops, to the universe having a richness and complexity to it that goes beyond pebble heaps, etc. has probably much more Kolmogorov complexity than the goal of maximizing the number of paperclips in the universe. If preferring the former is irrational, I am irrational and proud of it.
Oh, also "look at the optimization targets of the processes that created the process that is me" is a short program, much shorter than needed to specify paperclip maximization, though it's somewhat tricky because all that is modulo the symbol grounding problem. And that's only half a meta level up, you can make it more elegant (shorter) than that.
Optimization processes (mainly stupid ones such as evolution) can create subprocesses with different goals.
(And stupid ones like humans.)
(Unfortunately.)
That means that I should try to have lots of children?
Why do you think of a statistical tendency toward higher rates of replication at the organism level when I say "the processes that created the process that is [you]"? That seems really arbitrary. Feel the inside of your teeth with your tongue. What processes generated that sensation? What decision policies did they have?
(ETA: I'd upvote my comment if I could.)
You mean, why did I bother wearing braces for years so as to have straight teeth? <gd&rVF!>
Maybe “maximizing the number of paperclips in the universe” wasn't the best example. “Throwing as much stuff as possible into supermassive black holes” would have been a better one.
I can only say: black holes are creepy as hell.
The shorter your encoded message, the longer the encryption / compression algorithm, until eventually the algorithm is the full raw unencoded message and the encoded message is a single null-valued signal that, when received, decodes into the full message as it is contained within the algorithm.
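The extreme end of that trade-off can be sketched in a couple of lines (a toy illustration with a made-up message): the decoding algorithm contains the entire message, so the received signal carries no information at all.

```python
# Extreme end of the trade-off: all information lives in the decoding
# algorithm, so the "encoded message" can be a single null-valued signal.
MESSAGE = "the quick brown fox"

def decoder(signal):
    # Ignores the signal entirely; the full message is baked into the decoder.
    return MESSAGE

# Any signal, including an empty one, decodes to the full message.
assert decoder("") == "the quick brown fox"
```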
...isn't nearly as short or simple as it sounds. This becomes obvious once you try to replace those words with their associated meaning.
My point was that it's easier to program ("simpler") than "maximize paperclips", not that it's as simple as it sounds. (Nothing is as simple as it sounds, duh.)
I fail to see how coding a meta-algorithm to select optimal extrapolation and/or simulation algorithm in order for those chosen algorithms to determine the probable optimization target (which is even harder if you want a full PA proof) is even remotely in the same order of complexity as a machine learner that uses natural selection for algorithms that increase paperclip-count, which is one of the simplest paperclip maximizers I can think of.
This is incorrect. Eyes absorb light and produce electrical signals interpreted as vision by the brain. Further, it seems to me that the set of thing that 'the light of the body' describes is an empty set; there's no literal interpretation (our bodies do not shed visible light) and there's no construction similar enough that suggests an interpretation (the X of the body / the light of the X). "The light of the sun" / "The light of the moon" is the closest I can find and both of those suggest the literal interpretation.
Originally, I was going to do a very charitable reading: invent a sane meaning for "The X of the Y is the sub-Y" as "sub-Y is how Y handles/uses/interprets/understands X" and say that goals, as subparts of an agent, are how an agent understands its rationality - perhaps, how an agent measures its rationality. Which is indeed how we measure our rationality, by how often we achieve our goals, but this doesn't say anything new.
But when you say things like
as if you were being clear in the first place, it shows me that you don't deserve a charitable reading.
Just interpret light as ‘that which allows one to see’. That which allows the body to see is the eye.
That which allows the agent to achieve is its goals? Seems incorrect. (Parsing rationality as "that which allows one to achieve").
<nitpick>Our body does scatter visible light, though, much like the moon does.</nitpick>
How so?
Are you sure it doesn't instead favor incautiously maximizing the amount of resources that can be spent on caution?
We've had many hours of discussion since you asked this. Did I answer your question satisfactorily during those hours, perchance?
tl;dr: Signalling is extremely important to you. Doing away with your ability to signal will leave you helplessly desperate to get it back.
I think that this is a point made not nearly often enough in rationalist circles: Signalling is important to humans, and you are not exempt just because you know that.
Upvoted. I'd love to hear your thoughts on how one could slide that scale more towards the "long-term strategic optimization" end? Assuming that it is possible, of course.
You only have to think yourself undesirable, not be undesirable, and many people already do so think, and signal away nonetheless.
Also, a proposed solution in regards to "How to be Altruistic" (in a way that DOESN'T make you feel like you've "been good enough that they've earned the right to be a little selfish.")
I think that the best way to avoid this pitfall is to incorporate whatever altruism that you want to do into your way of life, so that it doesn't feel like a one-time shot.
Example- Instead of donating a lump sum of $50 to the charity of your choice, see if there's a way to have a $1 donation made automatically every week.
Vegetarianism is another example. Once you actually become a vegetarian you don't feel like you're doing any further good just by continuing to do what you always do.
I don't have any evidence for it, just personal experience.
Matthew 6:3 seems apropos.
That sounds like it'd work, but at the cost of eliminating most of the fuzzies you'd get from your altruism and most of your donation's social signaling value. (The tax paperwork might also be more complicated if you're claiming a deduction, but that's less important.) As such I suspect it'd be a hard sell for anyone whose altruism isn't a terminal value but is rather a consequence of one of those functions, which I expect is a substantial fraction of all the altruists out there. Seems like it has the potential to be a good idea for LWers, though.
Setting it up to mail you periodic summaries of your donations over some conveniently large period of time would fix this, but would also have the potential to reestablish the "earned selfishness" problem we're trying to avoid.
As an aside, setting up that kind of repeating donation isn't likely to be that difficult. Most banks will allow you to schedule repeating payments to some entity even if you aren't being billed; I pay my dojo dues that way.
Doesn't that inherently make it a stronger signal when observed?
Choosing to donate in a self-thankless way might in general, but in this case I think that's dominated by the convenience factor and per-donation triviality. Most people would probably be less impressed by someone who's donated $50 every month for the last year by some automatic process than by someone who's made a $500 lump donation: the former is higher in absolute terms and makes for a stabler cash flow to the charity, but also carries a fairly strong message of "I don't want to be inconvenienced by my altruism".
Interesting, and quite possibly correct.
I seem to remember reading that males tend toward status-seeking behaviors more than females. Or maybe it was that women seek status in a more social context. Either way, I can't find it now.
But my personal experiences are very different. Anything I've done that you could consider "high-status", I've only done because it was pretty much thrust at me. You mentioned that you disliked doing low status work, but for me even when I went into engineering (because my family didn't support me going into social work), my dream job was to work for a very small engineering firm or branch, that needed an assistant that could do all sorts of tasks. Smart enough to understand the material, but also willing to sit down and do the menial labor from technical writing, to giving presentations. That's still something I would love to do.
I guess what motivates me personally in my work is the desire to be appreciated, which is why I love child and disability care so much, and dislike my other job which is high pay, but low usefulness. But it seems like I am completely in the minority here and I don't know if that is because:
a) This site is dominated by status-seekers- perhaps because of the style (debating), substance (rationality) or demographics (male)
b) The people who commented also happen to be status-seekers - perhaps because those who weren't didn't feel compelled to write
c) Something else
Status doesn't exist in a vacuum. The audience matters. While high pay regardless of usefulness will win you status in mainstream society, it certainly will not with, say, the Less Wrong audience. Or in the Missionaries for Charity. Similarly, people with high status in a specific subgroup may be considered downright weird in mainstream society.
So perhaps you're optimising for status with your target audience.
There are also jobs which are high pay that are also low status in any audience or society.
I am so far failing to think of any.
Truckers. Military contractors. Strippers.
All three of these are low status in many audiences/societies. I think that for each, however, there exists an audience that accords them high status.
Who considers strippers to be high status?
(Certainly not the actual audience. They just see meat to eat with their eyes, not a person. Even prostitutes are probably respected a lot more on average than strippers, since it's more common that people at least talk to prostitutes, and become more aware that there's a person there.)
I don't know about "high status", but Roissy discusses here whether it is better to insinuate, for the purposes of attracting another woman, that you've dated strippers or lawyers in the past (his conclusion: it depends), and he recounts a failed attempt to pick up an attractive stripper here.
Quotes:
I would eat my own eyes if I ever see Roissy or anyone else say the same about prostitutes (dating them when they aren't on the job).
So although strippers are low class in general, the men who watch them put them in a high status position relative to themselves. The same cannot be said of prostitutes, who are lower status than just about anyone in society including the men who use them. Prostitution is by far the most degrading occupation for a woman.
Some prostitutes have high status with their audience. Quickly translated from Punainen eksodus, a PhD sociology thesis on Finnish prostitution:
Interesting. I suppose I had in mind the kind of prostitute who has no choice of customers. On the other hand a prostitute (or "escort") who turns undesirable men down is not too far away from being a run-of-the-mill promiscuous woman who extracts material benefits from her suitors. The prostitute in this case has merely formalised her revenue stream.
In my defense, I was responding to this claim: "Even prostitutes are probably respected a lot more on average than strippers", and I don't believe that the average prostitute is in such a comfortable position. I also think that the feeling of power or control over the situation is not really the same thing as status. If you asked the Finnish prostitutes' customers whether given the choice they would prefer their own daughters to be prostitutes, or strippers (whom the men are not allowed to touch) then you might get a different perspective.
I advise you to be careful to avoid reading anything further related to this subject. Because I have seen just that!
Typical mind fallacy, perhaps?
I don't know about you, but if I happen to be watching someone stripping it's much more about the meeting of the eyes than the eyeing of the meat.
Well, if you go by the HBO specials they did about both groups, it's actually the other way around. Though really, people formed long-term relationships with their service providers in both groups.
It's not necessarily about the eyes for me, but if the stripper is any good, it's more about emotional expression than flapping their meat around. Sadly, many strippers dance like meat sacks. What worries me is that they may just know their market better than I do.
Generalizing from one example, rather. Mostly I was going by what I've heard from an acquaintance that worked as a stripper.
Truckers are highly paid?
Military contractors are low status?
Compared to members of the actual military (who often do comparable work), contractors are paid much better and respected much less.
Adam Smith said that certain jobs - executioner, for example - were well paid because they were "detestable".
Agreed, but this effect will be observed when relevant audiences deem the job low status; it does not require all audiences to.
Accountants and the like have high median salary but are widely considered to be boring people. I don't know if this is what daenerys was thinking of, but it's the best example I can think of.
Appreciation is part of the same broad family of major human drives, but it tends to motivate more actual action. ;-)
This is a difficult problem. I have come to realize there is no one solution. The general strategy I think is to have consistency checks on what you are doing. Your subconscious can only trick you into seeking status and away from optimizing your goals by hiding the contradictions from you. But as 'willpower' is not the answer, eternal vigilance isn't either. But rather you pick up via a mass of observation the myriad ways in which you are led astray, and you fix these individually. Pay attention to something different you regularly do every day and check if this comports with your goals. If you are lucky, your subconscious cannot trick you the same way twice. Though it is quite ingenious.
Isn't the general strategy to join or create communities where status is awarded for actually doing the right thing?
How many such communities can you be part of (because surely you don't have only one goal) before their effect on you is diluted? How many such communities don't fall prey to lost purposes? How many can monitor your life with enough fidelity to tell if you go astray?
It seems like a large part of the problem is not that our brains unconsciously optimize for prestige per se, but they incorrectly optimize for prestige. Surely, having to take extra years to graduate and damaging one's own cause are not particularly prestigious. Helping Eliezer write a book will at least net you an acknowledgement, and you also get to later brag about how you were willing to do important work that nobody else was.
I don't have much empirical data to support this, but I suspect it might help (or at least might be worth trying to see if it helps) if you consciously optimized for prestige and world-saving simultaneously (as well as other things that you unconsciously want, like leisure), instead of trying to fight yourself. I have a feeling that in the absence of powerful self-modification technologies, trying to fight one's motivation to seek prestige will not end well.
I disagree with you and Anna in this comment.
Your comment and this post have really clarified a lot of the thoughts I've had about status - especially as someone who is largely motivated by how others perceive me - thanks!
Any thoughts on how to best consciously optimize for prestige?
I'm actually kind of ambivalent about it myself. Sometimes I wish I could go back to a simpler time when I thought that I was driven by pure intellectual curiosity alone. For someone whose "native" status-seeking tendencies aren't as destructive as the OP's, the knowledge may not be worth the cost.
Search for your comparative advantage (usually mentioned in the context of maximizing income, but is equally applicable to maximizing prestige). This can be counterintuitive so give it a second thought even if you think you already know. For example, in college I thought I was great at programming and never would have considered a career having to do with philosophy. Well, I am terrible at philosophy but as it turns out, so is everyone else, and I might actually have a greater comparative advantage in it than in programming.
Look for the Next Big Thing so you can write that seminal paper that everyone else will then cite. More generally, try to avoid competing in fields already crowded with prestige seekers. Look for fields that are relatively empty but have high potential.
Don't forget that you have other goals that you're optimizing for simultaneously, and try not to turn into a status junkie. Also double-check any plans you come up with for the kind of self-sabotage described in the OP.
I'm not so sure we accord Kaj less status overall for having taken more years to graduate, and more status for helping Eliezer write that book. Are we so sure we do? We might think so, and then reveal otherwise by our behavior.
Though note that the relevant criterion is not so much what other people actually consider to be high-prestige, but what the person themselves considers to be high-prestige. (I wonder if I should have emphasized this part a little more, seeing how the discussion seems to be entirely about status in the eyes of others.) For various reasons, I felt quite strongly about graduating quickly.
I was aware of that yes. But I was also assuming what you considered to be high prestige within this community was well calibrated.
I can attest that I had those exact reactions on reading those sections of the article. And in general I am more impressed by someone who graduated quickly than one who took longer than average, and by someone who wrote a book rather than one who hasn't. "But what if that's not the case?" is hardly a knock-down rebuttal.
I think it's more likely you're confusing the status you attribute to Kaj for candidness and usefulness of the post, with the status you would objectively add or subtract from a person if you heard that they floundered or flourished in college.
What I had in mind was that his devotion to the cause, even as it ultimately harmed it, we think more than compensates for his lack of strategic foresight and late graduation.
As for the book, we think less of him for not contributing to it in a more direct way, even as we abstractly understand what a vital job it was.
Though of course that may just be me.
Seconding this.
As Michael Vassar would put it: capitalism with a 10% tax rate nets a larger total amount of tax revenue (long-term) than does communism with an alleged 100% tax rate -- because, when people see economic activity as getting them what they want, the economy grows more, and one ends up achieving more total, and hence also more for all major parties, than one achieves when thinking about total economic goods as a zero-sum thing to be divided up.
You have a bunch of different motives inside you, some of which involve status -- and those motives can be a source of motivation and action. If you help your status-seeking motives learn how to actually effectively acquire status (which involves hard work, promise keeping, pushing out of your comfort zone, and not wireheading on short-term self-image at the expense of goals), you can acquire more capability, long term -- and that capability can be used partly for world-saving. But you only get to harness this motive force if your brain expects that exerting effort will actually lead to happiness and recognition long term.
I don't try to not seek status, I try to channel my status-seeking drive into things that will actually be useful.
Can you give some examples?
Mod parent up as much as possible.
;-)
In other words you try to legislate your actions. But your subconscious will find loopholes and enforcement will slip.
Hm, while I'm flattered to have provided a springboard for this discussion, I find it ironic that most of the discussion thread consists of either status-seeking arguments, or else people agreeing that this is a Serious Problem -- and implicitly noting how useful it will be in showing how hard they're trying to overcome it. ;-)
AFAICT, nobody is asking how it can be fixed, whether it can be fixed, or actually proposing any solutions. (Except of course in the original discussion you linked to, but I don't get the impression anybody from this post is really reading that discussion.)
(For anyone who is interested in that, this post offers some pointers.)
This is a very clever statement, and therefore I accord you higher status, as you were hoping for when you wrote it. ;-)
Actually, I was hoping to help people. If you accord me status, but don't actually use any of the information I gave, then you are frustrating my hopes rather than satisfying them.
That was my first impulse, but I wondered why Kaj hadn't included any solutions and then wondered if this even is a problem that needs fixing. Isn't it a flaw of many thinkers that if you give them a question, they try to answer it?
I've been somewhat helped by simply realizing the problem. For example, recently I was struggling with wanting to study a lot of math and mathy AI, because that's the field that my brain has labeled the most prestigious (mostly as a result of reading Eliezer et al.). When I realized that I had been aiming at something that I felt was prestigious, not something that was actually my comparative advantage, it felt like a burden was lifted from my shoulders. I realized that I could actually take easier courses, and thereby manage to finish my Master's degree.
My understanding is that the quote "It's better to be a big fish in a small pond than a small fish in a big pond" is substantially related to status.
If I try to apply it to your situation to find isomorphisms, I find a lot:
Rather than being a small fish (struggling with math) in a big pond (Eliezer et al.), you want to be the big fish (your actual comparative advantage) in a small pond (taking easier courses).
Considering this, are you sure you've left the status framework? If so, why?
(Edited after comment from TheOtherDave for brevity.)
Comparative advantage is eating the sort of food that most greatly increases your fish size in the pond whose size implies the greatest marginal payoff for adding fish of the size you can become if you enter that pond.
When I combine what you said with:
I think I may have dissolved my confusion. You could separate it out into two pieces:
1: Comparative advantage - An Optimization Process
2: Things that will actually be useful. - Being Friendly
My confused feeling seems like it might have been from setting these things as if they were opposed and you could only maximize one.
But if you figure the two are multiplied together, it makes much more sense to attempt to balance both correctly, to maximize the result.
Utility functions aren't quite as simple as multiplying two numbers, but the basic idea of maximizing the product of comparative advantage and usefulness sounds a lot more reasonable in my head than maximizing one or the other.
Thanks!
I want to pursue my comparative advantage because that's the best way that I can help SIAI and other good causes, regardless of status considerations. Pursuing mathy stuff is only worthwhile if that's my best way of helping the causes I consider valuable.
Or to put it more succinctly: if being a big fish in a small pond, or even a small fish in a small pond, lets me make money that can be used to hire big fish in big ponds, then I'd rather do that than be a small fish in a big pond.
(I won't try to claim that I've left the status framework entirely, just to some extent on this particular issue. Heck, I'm regularly refreshing this post to see whether it's gotten more upvotes.)
That's a fair point, but because money is so fungible, it's exactly the same kind of statement that you would be making if you were in fact selfish and didn't care about existential risk at all. In the same sort of way that both a new FAI and a new UFAI might, as one of their first tasks, ask for some computing power.
So while that may be the right thing to do, I'm not sure if that in and of itself can be taken as evidence that you care more about existential risk than status. Although, if you take that into account, then it really does work, because you aren't getting the status that you would get if you immediately helped the SIAI; you are instead forgoing that for a later boost that will really help more.
Honestly, the more I talk about this topic, the less I feel like I actually have any concrete grasp of what I'm saying. I think I need to do some more reading, because I feel substantially too confused.
True. It also isn't very reassuring to know that some of the paths which I'm now pursuing will, if successful, give me high status (within a different community) in addition to the status boost one gets from being rich. I do know that I'm still being somewhat pulled by status considerations, but at least now I'm conscious of it. Is that enough to avoid another hijack? Probably not merely by itself. I'll just have to try to be careful.
Why are you even trying to avoid status considerations? How does avoiding status considerations help you reach your instrumental goals?
Or, more precisely: what makes you think that conscious awareness and attempting to avoid status considerations will be any more successful at changing your actual behavior than any other activity undertaken via willpower?
I'm not trying to avoid status considerations. I'm only trying to avoid them hijacking my reasoning process in such a way that I think the best ways of achieving status are also the best ways of achieving my non-status goals.
I can't completely ignore status considerations, but I might be able to trade a high-status path that achieves no other goals for a path that is somewhat lower in status but much better at achieving my other goals. But that requires being able to see that the paths actually have different non-status outcomes.
This is very clear. Others should refer back to this for a refresher if the topic becomes confusing. I know it's set my head spinning around sometimes.
...but not edit.
(Edited after comment from pjeby for brevity.)
I suppose I could simplify this to "There are layers of status seeking. So it's very easy to think you aren't making a Status0 play, because you are making a Status1 play, and this can recurse easily to Status2, Status3, or Status4 without conscious awareness."
FWIW, that sense of "this sounds insane when I say it explicitly but feels natural if I don't think about it" is an experience I often have when I am becoming aware of my real motives and they turn out to conflict with preconceived ideas I have about myself or the world. Usually, either the awareness or the preconceived ideas tend to fade away pretty quickly. (I endorse the latter far more than the former.)
Uh, no. That is so far off from what I said that it's not even on the same planet.
See, "good" and "hypocrite" are just more status labels. ;-)
What I was saying is, if you acknowledge your actual goals, you might have a better chance of sorting out conflicts in them. Nowhere does labeling yourself (or the goals) good or bad come into it. In fact, in the discussion on solutions, I explicitly pointed out that getting rid of such labels is often quite useful.
And I most definitely did not label anyone's goals hypocritical or advise them to aspire to goodness. In fact, I said that the original questioner's behavior may well have been optimal, given their apparent goals, provided that they didn't think too much about it.
In much the same way that your comment would've been more workable for you, had you not thought too deeply about it. ;-)
Upon additional retrospection (and after lunch), I agree. I'll edit those down to the more workable parts.
Since there doesn't appear to be a way to do partial strikethrough, I guess I can just save the removed/incomplete parts in a text file if for some reason anyone really wants to know the original in the near future.
It's also a Fully General Argument (and Excuse) for not solving problems.