
Rationality: Common Interest of Many Causes

Post author: Eliezer_Yudkowsky 29 March 2009 10:49AM

Previously in series: Church vs. Taskforce

It is a not-so-hidden agenda of this site, Less Wrong, that there are many causes which benefit from the spread of rationality—because it takes a little more rationality than usual to see their case, as a supporter, or even just a supportive bystander.  Not just the obvious causes like atheism, but things like marijuana legalization—where you could wish that people were a bit more self-aware about their motives and the nature of signaling, and a bit more moved by inconvenient cold facts.  The Institute Which May Not Be Named was merely an unusually extreme case of this, wherein it got to the point that after years of bogging down I threw up my hands and explicitly recursed on the job of creating rationalists.

But of course, not all the rationalists I create will be interested in my own project—and that's fine.  You can't capture all the value you create, and trying can have poor side effects.

If the supporters of other causes are enlightened enough to think similarly...

Then all the causes which benefit from spreading rationality, can, perhaps, have something in the way of standardized material to which to point their supporters—a common task, centralized to save effort—and think of themselves as spreading a little rationality on the side.  They won't capture all the value they create.  And that's fine.  They'll capture some of the value others create.  Atheism has very little to do directly with marijuana legalization, but if both atheists and anti-Prohibitionists are willing to step back a bit and say a bit about the general, abstract principle of confronting a discomforting truth that interferes with a fine righteous tirade, then both atheism and marijuana legalization pick up some of the benefit from both efforts.

But this requires—I know I'm repeating myself here, but it's important—that you be willing not to capture all the value you create.  It requires that, in the course of talking about rationality, you maintain an ability to temporarily shut up about your own cause even though it is the best cause ever.  It requires that you not regard those other causes, nor they you, as competing for a limited supply of rationalists with a limited capacity for support—but rather as jointly creating more rationalists and increasing their capacity for support.  You only reap some of your own efforts, but you reap some of others' efforts as well.

If you and they don't agree on everything—especially priorities—you have to be willing to agree to shut up about the disagreement.  (Except possibly in specialized venues, out of the way of the mainstream discourse, where such disagreements are explicitly prosecuted.)

A certain person who was taking over as the president of a certain organization once pointed out that the organization had not enjoyed much luck with its message of "This is the best thing you can do", as compared to e.g. the X-Prize Foundation's tremendous success at conveying to rich individuals, "Here is a cool thing you can do."

This is one of those insights where you blink incredulously and then grasp how much sense it makes.  The human brain can't grasp large stakes and people are not anything remotely like expected utility maximizers, and we are generally altruistic akrasics.  Saying, "This is the best thing" doesn't add much motivation beyond "This is a cool thing".  It just establishes a much higher burden of proof.  And invites invidious motivation-sapping comparison to all other good things you know (perhaps threatening to diminish moral satisfaction already purchased).

If we're operating under the assumption that everyone by default is an altruistic akrasic (someone who wishes they could choose to do more)—or at least, that most potential supporters of interest fit this description—then fighting it out over which cause is the best to support, may have the effect of decreasing the overall supply of altruism.

"But," you say, "dollars are fungible; a dollar you use for one thing indeed cannot be used for anything else!"  To which I reply:  But human beings really aren't expected utility maximizers, as cognitive systems.  Dollars come out of different mental accounts, cost different amounts of willpower (the true limiting resource) under different circumstances, people want to spread their donations around as an act of mental accounting to minimize the regret if a single cause fails, and telling someone about an additional cause may increase the total amount they're willing to help.

There are, of course, limits to this principle of benign tolerance.  If someone has a project to help stray puppies get warm homes, then it's probably best to regard them as trying to exploit bugs in human psychology for their personal gain, rather than as pursuing a worthy sub-task of the great common Neo-Enlightenment project of human progress.

But to the extent that something really is a task you would wish to see done on behalf of humanity... then invidious comparisons of that project to Your-Favorite-Project may not help your own project as much as you might think.  We may need to learn to say, by habit and in nearly all forums, "Here is a cool rationalist project", not, "Mine alone is the highest-return-in-expected-utilons-per-marginal-dollar project."  If someone is cold-blooded enough to maximize expected utility of fungible money without regard to emotional side effects, and explicitly asks, we could perhaps steer them to a specialized subforum where anyone willing to make the claim of top priority fights it out.  Though if all goes well, those projects that have a strong claim to this kind of underservedness will get more investment, their marginal returns will go down, and the winner of the competing claims will no longer be clear.

If there are many rationalist projects that benefit from raising the sanity waterline, then their mutual tolerance and common investment in spreading rationality could conceivably exhibit a commons problem.  But this doesn't seem too hard to deal with: if there's a group that's not willing to share the rationalists they create, or to mention to them that other Neo-Enlightenment projects exist, then any common, centralized rationalist resources could remove the mention of their project as a cool thing to do.

Though all this is an idealistic and future-facing thought, the benefit—for all of us—could be finding some important things we're missing right now.  So many rationalist projects have supporters who are few and far-flung; if we could all identify as elements of the Common Project of human progress, the Neo-Enlightenment, there would be a substantially higher probability of finding ten of us in any given city.  Right now, a lot of these projects are just a little lonely for their supporters.  Rationality may not be the most important thing in the world—that, of course, is the thing that we protect—but it is a cool thing that more of us have in common.  We might gain much from identifying ourselves also as rationalists.

 

Part of the sequence The Craft and the Community

Next post: "Helpless Individuals"

Previous post: "Church vs. Taskforce"

Comments (39)

Comment author: Demosthenes 29 March 2009 06:30:56PM 12 points

In a nutshell, it might be cool to make a website and organization that promotes data collection and debate.

Rationalism requires access to high quality empirical evidence. Holding your hypotheses up to constantly changing data is a major theme of this site.

We can only rationally discuss our hypotheses and beliefs when we have something to test, and the datasets floating around on the internet are often low-quality or inaccessible.

A good rationalist project might be to highlight resources for empirical evidence; run "data debates" where experts attack and defend each other's datasets; build a wiki for best practices in data collection; or a wiki for navigating popular issues through good datasets (try, as a non-expert, to find which studies on taxation and inequality are best, and you can end up running in circles).

I think you would want to tailor this kind of project toward non-experts, giving people (especially journalists) a good starting place for finding meaningful, well-collected data that can serve as a good jumping-off point for rational analysis.

A project like this also leaves the door open to many interpretations and many goals, so it isn't necessarily cutting down on the number of voices out there.

I would also be interested in cataloging failed attempts. More and more I have been trying to look at survivorship biases (http://en.wikipedia.org/wiki/Survivorship_bias) behind all my beliefs.

Are there any good examples of projects like this in existence? Maybe we can leverage the community here to throw our weight behind one.

Comment author: Nebu 13 April 2009 03:34:23PM 5 points

"I think you would want to tailor this kind of project toward non-experts, giving people (especially journalists)..."

I like most of your ideas, but I wonder how many journalists are willing to sacrifice readability or sensationalism for truth and accuracy.

Comment author: RickJS 08 July 2010 12:35:15AM 8 points

Thanks, Eliezer!

As one of your supporters, I have sometimes been concerned that you are doing blog posts instead of working out the Friendly AI theory. Much more concerned than I show. I do try to hold it down to an occasional straight question, and hold myself back from telling you what to do. The hypothesis that I know better than you is at least -50dB.

This post is yet another glimpse into the Grand Strategy behind the strategy, and helps me dispel the fear from my less-than-rational mind.

I find it unsettling that " ... after years of bogging down I threw up my hands and explicitly recursed on the job of creating rationalists."

You learned that, “The human brain can't grasp large stakes and people are not anything remotely like expected utility maximizers, and we are generally altruistic akrasics.” Evolution didn’t deliver rationality any more than it delivered arithmetic. They have to be taught and executed as procedures. They aren’t natural. And I wonder if they can be impressed into System 1 through practice, to become semi-automatic. Right now, my rational side isn’t being successful at getting me to put in 8-hour work days to save humanity.

You learned that, "Dollars come out of different mental accounts, cost different amounts of willpower (the true limiting resource) under different circumstances ... " That makes some of MY screwy behavior start to make sense! It's much more explanatory than, "I'm cheap!" or lazy, or insensitive, or rebellious, or contrary. That looks to me like a major, practical breakthrough. I will take that to my coach and my therapist, we will use it if we can.

I don't think my psychologist ever said it. I doubt it is taught in undergraduate Psychology classes. Am I just out of touch? Has this principle been put into school curricula? That you had to learn it the hard way, that it isn't just common knowledge about people, " ... paints a disturbing picture."

You’ve done it again. In a little over one thousand words, you have given me a conceptual tool that makes the world (and myself) more intelligible and perhaps a little more manageable. Perhaps even a LOT more manageable. We shall see, in real practice.

I would appreciate any links to information on the mental accounts and amounts of willpower.

Thank you, Eliezer.

--RickJS

Comment author: RobinHanson 29 March 2009 04:34:38PM 11 points

Since Less Wrong doesn't have trackbacks, I'll note that I responded to this post here: http://www.overcomingbias.com/2009/03/missing-alliances.html

Comment author: thomblake 02 April 2009 08:52:33PM 4 points

A stylistic note: you have a parenthetical note defining "altruistic akrasic", but not with the first mention of the term. If you feel it needs explanation, it needs explanation when you first mention it.

Comment author: CronoDAS 31 March 2009 09:20:36PM 4 points

Well, to be honest, I like dogs, and when one is in front of me, I care about it. I don't think it's particularly important to rescue stray dogs (at least, not when compared to people), but I might join such a project because I thought I might have fun doing it.

Comment author: Daniel_Burfoot 29 March 2009 02:30:55PM 4 points

Something about these last couple of posts has been bothering me and I think it is this: you have a misperception regarding the comparative advantages of the group you have assembled. It seems that we are youngish and neither terribly wealthy nor well-connected, but we possess a certain quality of strength or at least the desire to obtain it. Sure, if you convinced us all to donate lots of money to project X that might do some good. But it seems likely that this group can achieve some far greater good by proceeding in a less obvious way. I can only speculate upon what that larger good might be.

Comment author: Psy-Kosh 29 March 2009 06:18:07PM 3 points

Hrm... May be worthwhile compiling a list of not just causes, but organizations/groups/communities (online or otherwise) that explicitly care about rationalists (or, for a starting point, at least claim to) so that we can begin to actually try to set such a mutual-rationality-project up.

Comment author: JulianMorrison 29 March 2009 01:08:44PM 5 points

I've noticed that even Enlightenment projects such as atheism can get dragged down by the low sanity waterline and kept spinning in small and useless circles by the dearth of rationalists and the lack of skill among any rationalist novices present (I include myself as such).

I am very much on board with any project to raise it.

Comment author: Furcas 29 March 2009 05:11:27PM 2 points

"If you and they don't agree on everything - especially priorities - you have to be willing to agree to shut up about the disagreement."

Right. Just like anti-theists should shut up about their anti-theism for the benefit of theistic evolutionists.

That was sarcasm, if you couldn't tell.

Comment author: Psy-Kosh 29 March 2009 06:22:51PM 8 points

Well, it's more a case of, for our purposes here "does everyone involved agree that actual rationality is a good thing? Then let's not argue out stuff like which of our causes is the bestest in the common area. Rather, let's hash out basic rationality concepts, or, more to the point, basic methods of training in rationality/creating rationalists. Let's reserve the 'my cause has most utility per effort/dollar/etc' type arguments for elsewhere."

Think of it as more analogous to the way many online forums explicitly ban religious and political arguments, since those tend to go kablewy, and the forum is primarily about something else...

i.e., it's not "shut up about it" but "let's agree to shut up about it HERE"

Comment author: Furcas 29 March 2009 06:58:21PM 2 points

Okay, I can see the sense in that. I just wouldn't want this sort of agreement to devolve into a kind of communally-imposed censorship of certain topics 'for the cause of Reason', in the same way that many evolutionists (theistic and atheistic) keep telling anti-theists to shut up 'for the cause of Evolution'.

In other words, I'm all for restraint if the goal is to focus on rationality, but I'm against it if the goal is to avoid offending people and preserve harmony.

Comment author: ciphergoth 29 March 2009 12:17:52PM 2 points

Sounds like it might be worth a go.

The reason everyone tries to shape a very specific message to fit existing biases rather than challenging the biases directly is that no-one wants to stick their necks out. For example, there are a whole load of charities that would benefit if scope insensitivity were reduced, but for any of them to fight it would be perceived by scope insensitivity sufferers as talking about people's lives in the cold language of cost-effectiveness.

Comment deleted 29 March 2009 11:05:55AM
Comment author: Psy-Kosh 29 March 2009 06:27:13PM 4 points

I think the puppies thing was more "while valuable, we may perhaps undervalue various other compassionate causes relative to that specific one due to the 'awwwww, puppies' effect, which we don't necessarily have for all creatures."

Comment author: infotropism 29 March 2009 05:12:45PM 5 points

When I read that, I understood it as:

"Ooooh cute puppies, I so want to save them" "Hey, there's also a hobo in the alley, he looks like he's dead or in bad condition" "bleh, dirty hobo, I don't even want to have a look"

I understand that it is more pleasant to look upon puppies, that it generates a warm feeling, and, my, aren't they just cute? While a hobo in a dark alley may well just engender repulsion in many people. But putting the wellbeing of a pup before that of a fellow human, even in ways not quite so obvious as the hobo case, is a fairly repulsive idea in itself too. Beyond the fact that this idea is repulsive, which may not be the main concern of a rationalist, there's the fact that the hobo is likely to have more complex feelings, and more consciousness, and that the shades of pain and distress he can feel are just worse than those of the pup; and even beyond that, you're supposed to feel empathy for your fellow human being.

You see, we have some machinery up there in the fleshy attic for just that purpose. It helped, in the environment of adaptation, to have other human beings care for you and your misery. So if you see something with certain cue traits and shapes, which you think is alive and warm, and in danger, that triggers those feelings of compassion, love, and the will to help, secure, protect. Too bad that pups have neotenic traits that trigger the very same machinery in our brain, without there being a real purpose to that. There would lie the bug, I'd reckon. (See http://en.wikipedia.org/wiki/Neoteny ; http://en.wikipedia.org/wiki/Cuteness )

Of course, that's forgetting the part about someone exploiting a bug for their own gain, as is also said. I'd suppose that some people would do such a thing, for instance using puppy-love, but that one didn't strike me much. Well, more precisely, I never encountered such a case, so that's why it looks as though it is rare to me. I may be wrong, and such misdoing may be prevalent.

Comment deleted 29 March 2009 05:40:22PM
Comment author: infotropism 29 March 2009 06:22:50PM 4 points

I may have to try having a dog for myself. Note, I've had dogs at home, several, and other pets, including cats, bunnies, pigeons, etc.; of course they were my parents', so it's a bit different.

As for hobos, nope, they didn't really "choose" their state. There's a limit to what a human mind can withstand, and possibly it isn't the same set point for everyone, but past a certain amount of hardships, people break down. Then they fall. They fall because they don't have much willpower or love of life left. And they may try to make that pain feel a tad lighter by using drugs, alcohol. Or simply to have a good time, when they can't really expect anything else to cheer them much.

I hesitated before adding this, so please don't take it as an ad hominem attack or anything, but I think one in your case may have to experience being a hobo for himself to realize how it feels. I don't mean that as in "ironical retribution: now it's your turn", but rather to say that you perhaps won't ever come close to understanding how it feels unless you live it yourself.

And this may indeed just happen to anyone. You included. Some people will choose suicide instead; others won't want to and will simply fall down, unresponsive to most obligations or opportunities, even to get better. I'll remind you of some recent cases, such as

http://www.cbsnews.com/stories/2009/01/06/ap/business/main4702909.shtml

Finally, I'd rather know that my fellow human would care more for me than for his pet. I'd try to reciprocate the favor as well, insofar as possible. If I can't even help another human being because I think that my dog or, who knows, a material possession, a book perhaps, has more value than him, then it goes the other way too. I don't want to live in such a world. Whether I want it or not may not change the world, and I may be worse off if I care about others in a world where others don't care about me, but I'll be worse off in any of those than in a world where we each care for each other, with some passion.

Comment deleted 29 March 2009 07:00:56PM
Comment author: loqi 29 March 2009 07:48:20PM 1 point

"I work with drug addicts every day. Believe me - they choose."

I don't believe you. Define "choose".

EDIT: My above objection is unclear. I should have replied ADBOC.

Comment author: jimrandomh 29 March 2009 08:37:27PM 2 points

They are presented with situations in which multiple alternatives are possible, and they select one. That is choosing. Their choices may be explained by psychological, environmental and/or chemical factors, but not explained away. See Explaining vs Explaining Away.

Comment author: infotropism 29 March 2009 09:11:26PM 3 points

That is too easy. When you see "someone chose to X", you'll usually take it to mean that the bloke could've done otherwise; ergo, if he chose to do something that did him wrong, he's responsible and hence deserves the result he's obtained.

Maybe you can stretch the definition of responsibility too (stretch away: http://yudkowsky.net/rational/the-simple-truth ), but the idea that people could do otherwise, or deserve their fate if they chose to do something 'wrong', knowing it was wrong to do it... even barring the idea of a deserved fate, people often fall back on human nature, resorting to their heuristics and "general feeling about doing this or that".

That's not a choice. That's more like a curse, when your environmental conditions are just right and strong enough to give such processes PREDOMINANCE over your own 'rational', sophisticated mind. In such cases, you won't act rationally anymore; you've already been taken over, even if temporarily. We've been discussing akrasia and ego depletion a bit lately; this falls in the same category. Rationality is but the last layer of your mind. It floats over all those hardwired components of your mind. It is pretty fragile and artificial, at least when it comes to acting rationally, as opposed to thinking rationally, or even easier, thinking about rationality.

So whether someone "chose it" or not, whatever meaning is bestowed upon the word choice, is not the most important thing. What matters is to understand why someone did himself a disservice, and how he could be helped out of it, and whether he should be helped out of it in the first place.

Comment author: ciphergoth 29 March 2009 09:21:16PM 1 point

This question is totally meaningless for materialists and consequentialists. The entire business of attaching blame and deserts must be abandoned in favour of questions either to do with predictions about the world or to do with what will give the best total effect.

I'll do a post on this when I've composed it, but the start of it is the case of Phineas Gage.

Comment author: Nick_Tarleton 30 March 2009 05:29:28AM 3 points

I think this is too extreme. Maybe blame and desert are best dispensed with, but it seems likely that we (our volitions) terminally disvalue interference with deliberate, 'responsible' choices, even if they're wrong, but not interference with compulsions. Even if that's not the case, it also seems likely that something like our idea of responsible vs. compulsive choice is a natural joint, predicting an action's evidential value about stable, reflectively endorsed preferences, which is heuristically useful in multiple ways.

Comment author: loqi 29 March 2009 09:43:30PM 1 point

"They are presented with situations in which multiple alternatives are possible, and they select one."

This does not cut it. First, you need to additionally specify that they are fully aware of these multiple possibilities. When a man decides to cross the street and gets killed by junk falling from the sky, he didn't choose to die.

Second, the word "select" fully encapsulates the mystery of the word "choose" in this context.

Third (I didn't originally make this clear), I'm not looking for a fully reductive explanation of choice, so the "explaining away" discussion isn't relevant. The statement "Believe me - they choose" appears to be attempting to communicate something, and I believe the payload is hiding in the connotation of "choose" (because the denotation is pretty tautological: people's actions are the result of their choices).

Belatedly I recall prior discussion of connotation. I've edited my original reply to include the flashy LW keyword "ADBOC".

Comment deleted 30 March 2009 04:26:41AM
Comment author: loqi 30 March 2009 08:18:07PM 5 points

"Sounds like a sucker's game to me and I doubt I have any responsibility for your connotations [...] I have no idea where you connotations are - so you have all the cards."

This isn't an adversarial game. How do you know I will disagree with you? Even if you did know, why avoid it?

"In a sense you are butting in on another conversation - where connotations are linked and have grown in context.. My use of the word choose is related to infotropism's descripiton of a hobo."

You uttered a statement of the form "Believe me - <tautology>" in direct reply to a comment I found fairly insightful. Infotropism laid out the connotation of his use of the word "choose" quite well, IMO, and your statement seems at odds with that.

"When you are in the middle of an orgasium, come out of it, you want in again - Big Time."

If you define "choice" to cover this scenario in the context of a "who is worth helping" discussion, I question the value of the definition.

"Connotations on the word 'choice' are many, dialectical and necessarily VAGUE."

When that is the case for a word, you cannot use it to make a clear point without a supporting explanation. I'm assuming you didn't intend to make a vague point.

Comment deleted 31 March 2009 04:41:26PM
Comment author: loqi 31 March 2009 06:53:28PM 1 point

I found the above comment to be mostly incoherent, so I'll reply to the meaningful parts.

"I have not at any point participated in the discussion 'who is worth helping'."

Infotropism made a comment that essentially said people are more likely to help "cute puppies" than "dirty hobos" due to buggy hardware. You replied that your dog's senses are probably sharper than a hobo's, and that the hobo chose his or her condition. I deem that "participation", even if you didn't understand what was being discussed or implied.

"My answer to that is too radical for your ears."

How very condescending. Spare me your posturing.

Comment author: JulianMorrison 29 March 2009 01:04:04PM 4 points

I severely doubt you've tried extending compassion to any nonhuman things on their own terms rather than on ours. We're smart enough versus a dog to play a kind of "coherent extrapolated volition" game. What ought I to want, if I were a dog, but given what I know as a human? "Intelligence. Urgently. Food can wait, I don't give a damn if I'm cold or have fleas, fix my brain. I'm a fucking cripple."

That is what the starting point of compassion for all life would look like. So, when you consider the puppy-rehomers, it's pretty obvious they are just letting their cuteness instincts hijack their higher minds, and they want to spread the contagion.

Comment author: MichaelGR 29 March 2009 05:22:13PM 2 points

"Eliezer you are not creating a posse of rationalists."

Could you please elaborate on this statement?

Comment deleted 29 March 2009 06:01:50PM
Comment author: Annoyance 29 March 2009 06:04:51PM -1 points

"I just don't see how another person can create a rationalist. This work you do yourself - with a little help from your friends."

You can lead a brain to data, but you can't make it think. Thinking is a choice you make for yourself. No one else can do it for you.

Comment author: Sebastian_Hagen 29 March 2009 11:39:10AM 1 point

"I do not regard the extension of compassion to all living things as a bug."

So, you think we should feel compassion for Fusobacterium necrophorum specimens?

I don't feel any, I'm happy not feeling any, and I'm also happy knowing my immune system doesn't hold back against them.

I strongly expect future versions of humane morality won't include any particular compassion for microorganisms, either.

Comment author: PhilGoetz 29 March 2009 03:36:20PM 2 points

You are using the Dark Arts. Dogs are not parasitic microorganisms. Marshall did not specify the function that maps organisms into appropriate levels of compassion. His statement does not imply the absurdity you are trying to reduce it to.

Comment author: steven0461 29 March 2009 04:29:45PM 4 points

It's pretty clear-cut. Bacteria are living things, therefore compassion for all living things implies compassion for bacteria. If it's appropriate to feel compassion for dogs but not bacteria, the property that makes it so is not life, but something else.

Comment author: PhilGoetz 29 March 2009 04:31:58PM 5 points

It's pretty clear-cut. He spoke of showing a particular level of compassion to a dog. He also spoke of showing some compassion to all living things. He did not say to show the same level of compassion to all living things. I believe you fail to understand that your argument is not logical because you are thinking in terms of binary distinctions. Your mention of "the property that makes it so" demonstrates this.

Comment author: steven0461 29 March 2009 04:36:24PM 2 points

Zero or nonzero is a binary distinction. Do you disagree that it's appropriate to feel zero compassion for bacteria?

Comment author: PhilGoetz 29 March 2009 09:30:22PM 5 points

You're still thinking in binary terms. Zero or non-zero is a distinction that can be made arbitrarily useless.

If someone said that they wanted everyone in the world to have shoes, you would not assume that they also wanted people with no feet to have shoes. If bacteria qualitatively lack the feelings that are necessary for you to feel compassion for them, you assume they are not included.

If the universe were colonized by nothing but bacteria, I would not sterilize it, even if those bacteria could never evolve into anything else.

Comment author: dclayh 29 March 2009 05:15:49PM 2 points

Your use of "parasitic" is also Dark: it serves no purpose other than to trigger the negative emotional associations of the word.

Comment author: PhilGoetz 01 April 2009 03:45:10AM 3 points

I used the word parasitic because he gave, as his example, a specific parasitic organism.

Comment author: thomblake 02 April 2009 08:57:06PM 1 point

"gaining the grandeur of delusion"

Is this supposed to mean something? It seems opaque, and quickly checking Google tells me that the phrase "grandeur of delusion" is so uncommon that this comment shows up as result #3. Was it merely a typo? Even if it was, how does one 'gain' anything of the sort?