It is a not-so-hidden agenda of this site, Less Wrong, that there are many causes which benefit from the spread of rationality—because it takes a little more rationality than usual to see their case, as a supporter, or even just a supportive bystander.  Not just the obvious causes like atheism, but things like marijuana legalization—where you could wish that people were a bit more self-aware about their motives and the nature of signaling, and a bit more moved by inconvenient cold facts.  The Institute Which May Not Be Named was merely an unusually extreme case of this, wherein it got to the point that after years of bogging down I threw up my hands and explicitly recursed on the job of creating rationalists.

    But of course, not all the rationalists I create will be interested in my own project—and that's fine.  You can't capture all the value you create, and trying can have poor side effects.

    If the supporters of other causes are enlightened enough to think similarly...

    Then all the causes which benefit from spreading rationality, can, perhaps, have something in the way of standardized material to which to point their supporters—a common task, centralized to save effort—and think of themselves as spreading a little rationality on the side.  They won't capture all the value they create.  And that's fine.  They'll capture some of the value others create.  Atheism has very little to do directly with marijuana legalization, but if both atheists and anti-Prohibitionists are willing to step back a bit and say a bit about the general, abstract principle of confronting a discomforting truth that interferes with a fine righteous tirade, then both atheism and marijuana legalization pick up some of the benefit from both efforts.

    But this requires—I know I'm repeating myself here, but it's important—that you be willing not to capture all the value you create.  It requires that, in the course of talking about rationality, you maintain an ability to temporarily shut up about your own cause even though it is the best cause ever.  It requires that you not regard those other causes, and they not regard you, as competing for a limited supply of rationalists with a limited capacity for support, but rather as creating more rationalists and increasing their capacity for support.  You only reap some of your own efforts, but you reap some of others' efforts as well.

    If you and they don't agree on everything—especially priorities—you have to be willing to agree to shut up about the disagreement.  (Except possibly in specialized venues, out of the way of the mainstream discourse, where such disagreements are explicitly prosecuted.)

    A certain person who was taking over as the president of a certain organization once pointed out that the organization had not enjoyed much luck with its message of "This is the best thing you can do", as compared to e.g. the X-Prize Foundation's tremendous success in conveying to rich individuals the message "Here is a cool thing you can do."

    This is one of those insights where you blink incredulously and then grasp how much sense it makes.  The human brain can't grasp large stakes and people are not anything remotely like expected utility maximizers, and we are generally altruistic akrasics.  Saying, "This is the best thing" doesn't add much motivation beyond "This is a cool thing".  It just establishes a much higher burden of proof.  And invites invidious motivation-sapping comparison to all other good things you know (perhaps threatening to diminish moral satisfaction already purchased).

    If we're operating under the assumption that everyone by default is an altruistic akrasic (someone who wishes they could choose to do more)—or at least, that most potential supporters of interest fit this description—then fighting it out over which cause is the best to support, may have the effect of decreasing the overall supply of altruism.

    "But," you say, "dollars are fungible; a dollar you use for one thing indeed cannot be used for anything else!"  To which I reply:  But human beings really aren't expected utility maximizers, as cognitive systems.  Dollars come out of different mental accounts, cost different amounts of willpower (the true limiting resource) under different circumstances, people want to spread their donations around as an act of mental accounting to minimize the regret if a single cause fails, and telling someone about an additional cause may increase the total amount they're willing to help.

    There are, of course, limits to this principle of benign tolerance.  If someone has a project to help stray puppies get warm homes, then it's probably best to regard them as trying to exploit bugs in human psychology for their personal gain, rather than a worthy sub-task of the great common Neo-Enlightenment project of human progress.

    But to the extent that something really is a task you would wish to see done on behalf of humanity... then invidious comparisons of that project to Your-Favorite-Project, may not help your own project as much as you might think.  We may need to learn to say, by habit and in nearly all forums, "Here is a cool rationalist project", not, "Mine alone is the highest-return in expected utilons per marginal dollar project."  If someone cold-blooded enough to maximize expected utility of fungible money without regard to emotional side effects explicitly asks, we could perhaps steer them to a specialized subforum where anyone willing to make the claim of top priority fights it out.  Though if all goes well, those projects that have a strong claim to this kind of underserved-ness will get more investment and their marginal returns will go down, and the winner of the competing claims will no longer be clear.

    If there are many rationalist projects that benefit from raising the sanity waterline, then their mutual tolerance and common investment in spreading rationality could conceivably exhibit a commons problem.  But this doesn't seem too hard to deal with: if there's a group that's not willing to share the rationalists they create or mention to them that other Neo-Enlightenment projects might exist, then any common, centralized rationalist resources could remove the mention of their project as a cool thing to do.

    Though all this is an idealistic and future-facing thought, the benefits—for all of us—could include finding some important things we're missing right now.  So many rationalist projects have supporters who are few and far-flung; if we could all identify as elements of the Common Project of human progress, the Neo-Enlightenment, there would be a substantially higher probability of finding ten of us in any given city.  Right now, a lot of these projects are just a little lonely for their supporters.  Rationality may not be the most important thing in the world—that, of course, is the thing that we protect—but it is a cool thing that more of us have in common.  We might gain much from identifying ourselves also as rationalists.


    In a nutshell, it might be cool to make a website and organization that promotes data collection and debate.

    Rationalism requires access to high-quality empirical evidence. Holding your hypotheses up to constantly changing data is a major theme of this site.

    We can only rationally discuss our hypotheses and beliefs when we have something to test, and the datasets floating around on the internet are often low-quality or inaccessible.

    A good rationalist project might be to highlight resources for empirical evidence: run "data debates" where experts attack and defend each other's datasets, build a wiki for best practices in data collection, or a wiki for navigating popular issues through good datasets (try, as a nonexpert, to find which studies on taxation and inequality are best, and you can end up running in circles).

    I think you would want to tailor this kind of project toward non-experts, giving people (especially journalists) a good starting place for finding meaningful, well-collected data that can form a good jumping-off point for rational analysis.

    A project like this also leaves the door open to many interpretations and many goals, so it isn't necessarily cutting down on the number of voices out there.

    I would also be interested in cataloging failed attempts. More and more, I have been trying to look at the survivorship biases (http://en.wikipedia.org/wiki/Survivorship_bias) behind all my beliefs.

    Are there any good examples of projects like this in existence? Maybe we can leverage the community here to throw our weight behind one.

    I think you would want to tailor this kind of project toward non-experts, giving people (especially journalists)

    I like most of your ideas, but I wonder how many journalists are willing to sacrifice readability or sensationalism for truth and accuracy.

    Thanks, Eliezer!

    As one of your supporters, I have been sometimes concerned that you are doing blog posts instead of working out the Friendly AI theory. Much more concerned than I show. I do try to hold it down to an occasional straight question, and hold myself back from telling you what to do. The hypothesis that I know better than you is at least -50dB.

    This post is yet another glimpse into the Grand Strategy behind the strategy, and helps me dispel the fear from my less-than-rational mind.

    I find it unsettling that " ... after years of bogging down I threw up my hands and explicitly recursed on the job of creating rationalists."

    You learned that, “The human brain can't grasp large stakes and people are not anything remotely like expected utility maximizers, and we are generally altruistic akrasics.” Evolution didn’t deliver rationality any more than it delivered arithmetic. They have to be taught and executed as procedures. They aren’t natural. And I wonder if they can be impressed into System 1 through practice, to become semi-automatic. Right now, my rational side isn’t being successful at getting me to put in 8-hour work days to save humanity.

    You learned that, "Dollars come out of different mental accounts, cost different amounts of willpower (the true limiting resource) under different circumstances ... " That makes some of MY screwy behavior start to make sense! It's much more explanatory than, "I'm cheap!" or lazy, or insensitive, or rebellious, or contrary. That looks to me like a major, practical breakthrough. I will take that to my coach and my therapist, we will use it if we can.

    I don't think my psychologist ever said it. I doubt it is taught in undergraduate Psychology classes. Am I just out of touch? Has this principle been put into school curricula? That you had to learn it the hard way, that it isn't just common knowledge about people, " ... paints a disturbing picture."

    You’ve done it again. In a little over one thousand words, you have given me a conceptual tool that makes the world (and myself) more intelligible and perhaps a little more manageable. Perhaps even a LOT more manageable. We shall see, in real practice.

    I would appreciate any links to information on the mental accounts and amounts of willpower.

    Thank you, Eliezer.

    --RickJS

    Since Less Wrong doesn't have trackbacks, I'll note that I responded to this post here: http://www.overcomingbias.com/2009/03/missing-alliances.html

    I've noticed that even Enlightenment projects such as atheism can get dragged down by the low sanity waterline and kept spinning in small and useless circles by the dearth of rationalists and the lack of skill among any rationalist novices present (I include myself as such).

    I am very much on board with any project to raise it.

    Well, to be honest, I like dogs, and when one is in front of me, I care about it. I don't think it's particularly important to rescue stray dogs (at least, not when compared to people), but I might join such a project because I thought I might have fun doing it.

    A stylistic note: you have a parenthetical note defining "altruistic akrasic", but not with the first mention of the term. If you feel it needs explanation, it needs explanation when you first mention it.

    Something about these last couple of posts has been bothering me and I think it is this: you have a misperception regarding the comparative advantages of the group you have assembled. It seems that we are youngish and neither terribly wealthy nor well-connected, but we possess a certain quality of strength or at least the desire to obtain it. Sure, if you convinced us all to donate lots of money to project X that might do some good. But it seems likely that this group can achieve some far greater good by proceeding in a less obvious way. I can only speculate upon what that larger good might be.

    Hrm... May be worthwhile compiling a list of not just causes, but organizations/groups/communities (online or otherwise) that explicitly care about rationalists (or, for a starting point, at least claim to) so that we can begin to actually try to set such a mutual-rationality-project up.

    Any comments on why drug legalization is to be considered good, other than the many historical failures of prohibiting alcohol and tobacco and libertarian talk of addicts' rights (which, for anyone who does believe "people are to be forcibly protected from themselves" to at least some degree, is a non-argument)?

    "If you and they don't agree on everything - especially priorities - you have to be willing to agree to shut up about the disagreement."

    Right. Just like anti-theists should shut up about their anti-theism for the benefit of theistic evolutionists.

    That was sarcasm, if you couldn't tell.

    Well, it's more a case of, for our purposes here: "Does everyone involved agree that actual rationality is a good thing? Then let's not argue out stuff like which of our causes is the bestest in the common area. Rather, let's hash out basic rationality concepts, or, more to the point, basic methods of training in rationality/creating rationalists. Let's reserve the 'my cause has most utility per effort/dollar/etc' type arguments for elsewhere."

    Think of it as more analogous to the way many online forums explicitly ban religious and political arguments, since those tend to go kablewy and the forum is primarily about something else...

    I.e., it's not "shut up about it" but "let's agree to shut up about it HERE"

    Okay, I can see the sense in that. I just wouldn't want this sort of agreement to devolve into a kind of communally-imposed censorship of certain topics 'for the cause of Reason', in the same way that many evolutionists (theistic and atheistic) keep telling anti-theists to shut up 'for the cause of Evolution'.

    In other words, I'm all for restraint if the goal is to focus on rationality, but I'm against it if the goal is to avoid offending people and preserve harmony.

    Sounds like it might be worth a go.

    The reason everyone tries to shape a very specific message to fit existing biases, rather than challenging the biases directly, is that no one wants to stick their neck out. For example, there are a whole load of charities that would benefit if scope insensitivity were reduced, but for any of them to fight it would be perceived by scope-insensitivity sufferers as talking about people's lives in the cold language of cost-effectiveness.

    The effective altruism movement, and the 80,000 Hours project in particular, seem to be stellar implementations of this line of thinking.

    Also seconding the doubts about the advice to refrain from saving puppies - at the very least, extending compassion to other clusters in mindspace not too far from our own seems necessary from a consistency standpoint. It may not be the -most- cost-effective, but that's no reason to just call it a personal interest.

    [anonymous]:

    "But of course, not all the rationalists I create will be interested in my own project - and that's fine. You can't capture all the value you create, and you shouldn't try."

    Eliezer you are not creating a posse of rationalists. This is a delusion of grandeur.

    Your continual quoting of yourself (a useful technique at OB) is also gaining the grandeur of delusion.

    "If someone has a project to help stray puppies get warm homes, then it's probably best to regard them as trying to exploit bugs in human psychology for their personal gain, rather than a worthy sub-task of the great common Neo-Enlightenment project of human progress."

    I do not regard the extension of compassion to all living things as a bug. Surely moral progress is making the circle of inclusion ever bigger.

    When I read that, I understood it as:

    "Ooooh, cute puppies, I so want to save them." "Hey, there's also a hobo in the alley; he looks like he's dead or in bad condition." "Bleh, dirty hobo, I don't even want to have a look."

    I understand that it is more pleasant to look upon puppies, that it generates a warm feeling, and, my, aren't they just cute? While a hobo in a dark alley may well just engender repulsion in many people. But putting the wellbeing of a pup before that of a fellow human, including in ways not quite so obvious as the hobo case, is a fairly repulsive idea in itself too. Beyond the fact that this idea is repulsive, which may not be the main concern of a rationalist, there's the fact that the hobo is likely to have more complex feelings, and more consciousness; that the shades of pain and distress he can feel are just worse than those of the pup; and even beyond that, you're supposed to feel empathy for your fellow human being.

    You see, we have some machinery up there in the fleshy attic for just that purpose. It helped to have other human beings care for you and your misery in the environment of adaptation. So if you see something that has some cue traits and shapes, which you think is alive and warm, and in danger, that would trigger those feelings of compassion, love, and the will to help, secure, protect. Too bad that pups have neotenic traits that trigger the very same machinery in our brain, without there being a real purpose to it. There would lie the bug, I'd reckon. (See http://en.wikipedia.org/wiki/Neoteny ; http://en.wikipedia.org/wiki/Cuteness)

    Of course, that's forgetting the part about someone exploiting a bug for their own gain, as is also said. I'd suppose that some people would do such a thing, for instance using puppy-love, but that one never struck me much. Well, more precisely, I never encountered such a case, so that's why it looks rare to me. I may be wrong, and such misdoing may be prevalent.

    [anonymous]:

    I am not sure what your point is.

    I would suggest getting a dog at some time in your life (doesn't have to be a puppy). They are good company. You get lots of fresh air. They are a constant responsibility. Their life and their comfort are dependent on you. You will enjoy.....growing compassion.

    Hobos seem to be an unavoidable part of the distribution of human life. Drugged or alcoholised I doubt if their feelings or their senses are as sharp as my dog's. Somewhere along the line, they have chosen.

    My dog did not choose me - I chose her.

    I may have to try the dog experience. Note, I've had dogs at home, several, and other pets, including cats, bunnies, pigeons, etc.; of course they were my parents', so it's a bit different.

    As for hobos, nope, they didn't really "choose" their state. There's a limit to what a human mind can withstand, and possibly it isn't the same set point for everyone, but past a certain amount of hardship, people break down. Then they fall. They fall because they don't have much willpower or love of life left. And they may try to make that pain feel a tad lighter by using drugs or alcohol. Or simply to have a good time, when they can't really expect anything else to cheer them much.

    I hesitated before adding this, so please don't take it as an ad hominem attack or anything, but I think someone in your case may have to go through the hobo experience himself to realize how it feels. I don't mean that as in "ironic retribution: now it's your turn", but rather to say that you perhaps won't ever come close to understanding how it feels unless you live it yourself.

    And this may indeed just happen to anyone. You included. Some people will choose suicide instead; others won't want to and will simply fall down, unresponsive to most obligations or opportunities, even to get better. I'll remind you of a recent case, such as

    http://www.cbsnews.com/stories/2009/01/06/ap/business/main4702909.shtml

    Finally, I'd rather know that my fellow human would care more for me than for his pet. I'd try to reciprocate the favor as well, insofar as possible. If I can't even help another human being because I think that my dog or, who knows, my material possession, a book perhaps, has more value than him, then it goes the other way too. I don't want to live in such a world. Whether I want it or not may not change the world, and I may be worse off if I care about others in a world where others don't care about me, but I'll be worse off in any of those than in a world where we each care for each other, with some passion.

    [anonymous]:

    Thanks infotropism - I like your reply!

    There is a humanness to it which is so often absent in the other posts.

    I work with drug addicts every day. Believe me - they choose.

    And I agree - "There but for the grace of god go I".

    I work with drug addicts every day. Believe me - they choose.

    I don't believe you. Define "choose".

    EDIT: My above objection is unclear. I should have replied ADBOC.

    They are presented with situations in which multiple alternatives are possible, and they select one. That is choosing. Their choices may be explained by psychological, environmental and/or chemical factors, but not explained away. See Explaining vs Explaining Away.

    That is too easy. When you see "someone chose to X", you'll usually take it to mean that the bloke could've done otherwise; ergo, if he chose to do something that did him wrong, he's responsible and hence deserves the result he's obtained.

    Maybe you can stretch the definition of responsibility too (stretch away: http://yudkowsky.net/rational/the-simple-truth), but the idea that people could do otherwise, or deserve their fate if they chose to do something 'wrong', knowing it was wrong to do it ... even barring the idea of a deserved fate, people often fall back on human nature, resorting to their heuristics and "general feeling about doing this or that".

    That's not a choice. That's more like a curse, when your environmental conditions are just right and strong enough to give such processes PREDOMINANCE over your own 'rational' sophisticated mind. In such cases, you won't act rationally anymore; you've already been taken over, even if temporarily. We've been discussing akrasia and ego depletion a bit lately; this falls in the same category. Rationality is but the last layer of your mind. It floats over all those hardwired components of your mind. It is pretty fragile and artificial, at least when it comes to acting rationally, as opposed to thinking rationally, or, even easier, thinking about rationality.

    So whether someone "chose it" or not, whatever meaning is bestowed upon the word choice, is not the most important thing. It's to understand why someone did himself a disservice, and how he could be helped out of it, and whether he should be helped out of it in the first place.

    This question is totally meaningless for materialists and consequentialists. The entire business of attaching blame and deserts must be abandoned in favour of questions either to do with predictions about the world or to do with what will give the best total effect.

    I'll do a post on this when I've composed it, but the start of it is the case of Phineas Gage.

    I think this is too extreme. Maybe blame and desert are best dispensed with, but it seems likely that we (our volitions) terminally disvalue interference with deliberate, 'responsible' choices, even if they're wrong, but not interference with compulsions. Even if that's not the case, it also seems likely that something like our idea of responsible vs. compulsive choice is a natural joint, predicting an action's evidential value about stable, reflectively endorsed preferences, which is heuristically useful in multiple ways.

    I don't think that needs to be a terminal value. People's deliberate choices provide information about what will actually make them happy; with compulsions, we have evidence that those things won't really make them happy.

    I agree that it's useful to have words to distinguish what we want long-term when we think about it, and what we want short-term when tempted, and I've just done a post on that subject. However, I don't see how that helps rescue the idea of blame and deserving.

    However, I don't see how that helps rescue the idea of blame and deserving.

    In case there was any confusion, I didn't mean to say it does.


    They are presented with situations in which multiple alternatives are possible, and they select one.

    This does not cut it. First, you need to additionally specify that they are fully aware of these multiple possibilities. When a man decides to cross the street and gets killed by junk falling from the sky, he didn't choose to die.

    Second, the word "select" fully encapsulates the mystery of the word "choose" in this context.

    Third (I didn't originally make this clear), I'm not looking for a fully reductive explanation of choice, so the "explaining away" discussion isn't relevant. The statement "Believe me - they choose" appears to be attempting to communicate something, and I believe the payload is hiding in the connotation of "choose" (because the denotation is pretty tautological: people's actions are the result of their choices).

    Belatedly I recall prior discussion of connotation. I've edited my original reply to include the flashy LW keyword "ADBOC".

    [anonymous]:

    So my statement was technically true, but you disagree with the connotations. If I state them explicitly, you will explain why you think they are wrong. Sounds like a sucker's game to me, and I doubt I have any responsibility for your connotations (I am assuming that you agree denotationally but object connotatively).

    In a sense you are butting in on another conversation - where connotations are linked and have grown in context. My use of the word choose is related to infotropism's description of a hobo.

    I have no idea where your connotations are - so you have all the cards.

    However.

    If you tell an addict - "One more injection in that vein and we will amputate your leg," then he will inject.

    When you are in the middle of an orgasium and come out of it, you want in again - Big Time.

    If a thin and hungry wolf meets a well-fed and groomed dog, and the dog says, "Come with me. Humans are wonderful, they feed me. Their houses are warm. They are kind." The wolf follows along. Sounds like a good deal. But then he sees the collar. "What's that?" he says. "Oh," the dog says, "they use a leash when we go walkies".

    The wolf turns around and "chooses" freedom.

    Connotations on the word "choice" are many, dialectical and necessarily VAGUE.

    Sounds like a sucker's game to me, and I doubt I have any responsibility for your connotations [...] I have no idea where your connotations are - so you have all the cards.

    This isn't an adversarial game. How do you know I will disagree with you? Even if you did know, why avoid it?

    In a sense you are butting in on another conversation - where connotations are linked and have grown in context. My use of the word choose is related to infotropism's description of a hobo.

    You uttered a statement of the form "Believe me - " in direct reply to a comment I found fairly insightful. Infotropism laid out the connotation of his use of the word "choose" quite well, IMO, and your statement seems at odds with that.

    When you are in the middle of an orgasium and come out of it, you want in again - Big Time.

    If you define "choice" to cover this scenario in the context of a "who is worth helping" discussion, I question the value of the definition.

    Connotations on the word "choice" are many, dialectical and necessarily VAGUE.

    When that is the case for a word, you cannot use it to make a clear point without a supporting explanation. I'm assuming you didn't intend to make a vague point.

    [anonymous]:

    Interesting!

    Things don't have to be adversarial and they don't have to not be adversarial. I do not know if we are being adversarial. Nor do I know if I was being adversarial. I do however think my forecast of a possible dissection of my answer has been met by your answer.

    In a sense I would think that everything we say is tautological for ourselves and seldom tautological for others. I would certainly think my usage of the word "choice" is personal and idiosyncratic, as choosing and its fundament is after all anchored in the moment of existence. The object of choosing is undefined until defined, and this "search-result" is charged with the web of individuality.

    "If you define "choice" to cover this scenario (reentrance to the orgasium) in the context of a "who is worth helping" discussion, I question the value of the definition."

    I have not at any point participated in the discussion "who is worth helping". (My answer to that is too radical for your ears.) In other words your dissection is wrong - as it has been at every stage. As such a mechanical deconstruction based on your own hobby horses must necessarily be.

    I am trying to express some complicated truths. You, kind sir, do not have access to these truths. Nor can I give you access. You are looking at my finger.

    I am looking at the moon.

    Or mabye vice-versa.

    I found the above comment to be mostly incoherent, so I'll reply to the meaningful parts.

    I have not at any point participated in the discussion "who is worth helping".

    Infotropism made a comment that essentially said people are more likely to help "cute puppies" than "dirty hobos" due to buggy hardware. You replied that your dog's senses are probably sharper than a hobo's, and that the hobo chose his or her condition. I deem that "participation", even if you didn't understand what was being discussed or implied.

    My answer to that is too radical for your ears.

    How very condescending. Spare me your posturing.

    [anonymous]:

    I think we two are in a personal dialogue here - off the beaten track as it were, as I don't think others will come visiting. Fine. We are also weaving a thread of mutual "connotations" (or in our case "misconnotations"). Also fine. A little case-study - just for us!

    A case study in disagreement! (he he)

    As you point out - Infotropism made a point about helping cute puppies contra hobos. This was a reply to an earlier point from me on "compassion". Compassion is not directly related to helping. I work in the helping profession - this does not mean I am theoretically interested in "helping". I was not participating in a discussion on helping. Originally I was attacking Eliezer for his arrogance, which has been sidestepped to diverse diversions. Infotropism and I went on to riff on common and separate themes. Then you came along.

    I am neither posturing nor condescending. I am simply making the point that what I say will appear incoherent to you, just as I do not understand why you are so sure you are right, why you think you can find meaningful parts in a whole you find incoherent, and why you think you have the right to deconstruct or correct another's expression. What you assume to be a rational and impartial analysis (by you of me) appears posturing and condescending to me! Your analysis postulates a greater intelligence, a sharper insight, a greater stringency. None of which I recognize.

    Yours is also the conceit of Eliezer, who wants to program intelligence and thus thinks that things have to be reduced to algorithms before one has understood them. It is a mechanical intelligence.

    Does one wish to dance with a jerky doll or talk to a human? For that is how your deconstruction seems to me!

    Let's just say that I am divergent and you are convergent in our ways of thinking???

    You win more points than I do from our audience. This is because convergence is the dominant style here on LW. Once in a while I get a little snarky over it. And probably I should be moving on to greener pastures. Even though I think these two styles should be able to find a synthesis or at least improve on each other.

    Best regards

    why you think you have the right to deconstruct or correct another's expression

    I think I have that right because I do have that right.

    [anonymous]:

    Ah!

    You must be talking about epistemic accuracy. Getting things right and not just less wrong.

    I am actually rather curious.... who or what has given you this right? What does it correlate with?

    Sky-hooks?

    I severely doubt you've tried extending compassion to any nonhuman things on their own terms rather than on ours. We're smart enough versus a dog to play a kind of "coherent extrapolated volition" game. What ought I to want, if I were a dog, but given what I know as a human? "Intelligence. Urgently. Food can wait, I don't give a damn if I'm cold or have fleas, fix my brain. I'm a fucking cripple."

    That is what the starting point of compassion for all life would look like. So, when you consider the puppy-rehomers, it's pretty obvious they are just letting their cuteness instincts hijack their higher minds, and they want to spread the contagion.

    [anonymous]:

    "if I were a dog, but given what I know as a human?"

    Intelligence (you're joking). Urgently (why are they always rushing about?). Food can wait (you're joking). Fix their brain? THEY are cripples - oh look at that rabbit - I'm outta here.

    Why should I wish to extend compassion on non-human terms? I do not want a human dog, nor do they want to be human. They like being dogs - and now I'm talking nonsense. It's sorta contagious.

    I think the puppies thing was more "while valuable, we may perhaps undervalue various other compassionate causes relative to that specific one due to the 'awwwww, puppies' effect, which we don't necessarily have for all creatures."

    "Eliezer you are not creating a posse of rationalists."

    Could you please elaborate on this statement.

    [anonymous]:

    Sure!

    According to my first quotation from Eliezer, he seems to think that HE is "creating rationalists".

    I just don't see how another person can create a rationalist. This work you do yourself - with a little help from your friends.

    Eliezer's monologues on OB were fun. They were useful fragments and mirrors. Hints and metaphors. In a sense he coined the word "rationalist" as a useful suggestion for an identity.

    I am still hanging around here 'cos I too think there is some utility in choosing to be "rational".

    But I do not accept Eliezer's checklist as being exhaustive - annasalamon's questionnaire is an absurd example of its narrowness. And unfortunately also an example of Eliezer being right - he is creating "rationalists" after his own image.

    So beware the creation of rationalists. And let's have a little bit of independent thinking around here instead.

    "I just don't see how another person can create a rationalist. This work you do yourself - with a little help from your friends."

    You can lead a brain to data, but you can't make it think. Thinking is a choice you make for yourself. No one else can do it for you.

    gaining the grandeur of delusion

    Is this supposed to mean something? It seems opaque, and quickly checking Google tells me that the phrase "grandeur of delusion" is so uncommon that this comment shows up as result #3. Was it merely a typo? Even if it was, how does one 'gain' anything of the sort?

    [anonymous]:

    Thanks for replying!

    I think I have mostly given up on this site....

    But I like to do things thoroughly, so let me try to explain.

    English is no longer my first language, so my language use can probably become a little private. A private language is of course no language at all. So maybe you are right that I wrote something without any meaning at all (other than to me).

    But on the other hand - it does mean a lot to me.

    I am using a rhetorical device - playing on the earlier sentence, "delusion of grandeur". Such rhetorical reversals add something or other, though I would gladly agree that it is something of a darkish art and perhaps a type of meaning-illusion - adding meaning without in itself having meaning. But meaning is such a slippery thing, based on leaps of intuition and not in any way digital, that such devices and "armies of metaphors" are the very stuff of communication.

    And what was I trying to communicate?

    What I am more and more coming to regard as Eliezer's megalomania, which surely must be based on an illusion. But saying the "deluded largeness of self-importance" is not a very effective phrase.

    So I was indulging in the art of "suggestion" - in some uncontrolled sense priming the reader's associations.

    Which many times gives a boomerang-effect.

    Also here.

    OK?

    I do not regard the extension of compassion to all living things as a bug.

    So, you think we should feel compassion for Fusobacterium necrophorum specimens?

    I don't feel any, I'm happy not feeling any, and I'm also happy knowing my immune system doesn't hold back against them.

    I strongly expect future versions of humane morality won't include any particular compassion for microorganisms, either.

    You are using the Dark Arts. Dogs are not parasitic microorganisms. Marshall did not specify the function that maps organisms into appropriate levels of compassion. His statement does not imply the absurdity you are trying to reduce it to.

    It's pretty clear-cut. Bacteria are living things, therefore compassion for all living things implies compassion for bacteria. If it's appropriate to feel compassion for dogs but not bacteria, the property that makes it so is not life, but something else.

    It's pretty clear-cut. He spoke of showing a particular level of compassion to a dog. He also spoke of showing some compassion to all living things. He did not say to show the same level of compassion to all living things. I believe you fail to understand that your argument is not logical because you are thinking in terms of binary distinctions. Your mention of "the property that makes it so" demonstrates this.

    Zero or nonzero is a binary distinction. Do you disagree that it's appropriate to feel zero compassion for bacteria?

    You're still thinking in binary terms. Zero or non-zero is a distinction that can be made arbitrarily useless.

    If someone said that they wanted everyone in the world to have shoes, you would not assume that they also wanted people with no feet to have shoes. If a bacterium qualitatively lacks the feelings that are necessary for you to feel compassion for it, you assume it is not included.

    If the universe were colonized by nothing but bacteria, I would not sterilize it, even if those bacteria could never evolve into anything else.

    Your use of "parasitic" is also Dark: it serves no purpose other than to trigger the negative emotional associations of the word.

    I used the word parasitic because he gave, as his example, a specific parasitic organism.

    [anonymous]:

    If I reply, "And humans will develop compassion for robots who have been designed to mimic a few of our human traits", would this refute your refutation?

    No it wouldn't. Because it is irrelevant.

    Thus your reply adds a distinction to which I must agree - but my argument still stands.

    Let's call it the Pars Pro Toto fallacy....