I am passionately fond of the idea of creating an “Art of Rationality” sensibility/school as described in the [A Sense That More is Possible](http://lesswrong.com/lw/2c/a_sense_that_more_is_possible/) article. 

The obstacle I see as most formidable in such an undertaking is that, no matter how much “rational software” our brains absorb, we cannot escape the fact that we exist within the construct of “irrational hardware”.

My physical body binds me to countless irrational motivations. Just to name a few:

1) Sex. In an overpopulated world, what is the benefit of yearning for sexual contact on a daily basis? How often does the desire for sex influence rational thought? Is “being rational” sexy? If not, it is in direct conflict with my body’s desire and therefore undesirable (whereas being able to “kick someone’s ass” is definitely sexy in cultural terms).

2) Mortality. Given an expiration date, it becomes fairly easy to justify immediate, individually beneficial behavior over long-term, expansively beneficial behavior that I will not be around long enough to enjoy.

3) Food, water, shelter. My body needs a bare minimum in order to survive. If being rational conflicts with my ability to provide my body with its basic needs (because I exist within an irrational construct)… what are the odds that rationality will be tossed out in favor of irrational compliance that assures my basic physical needs will be met?

As far as I can tell, being purely rational is in direct opposition to being human.  In essence, our hardware is in conflict with rationality. 

The reason there is not a “School of Super Bad Ass Black Belt Rationality” could be as simple as…. It doesn't make people want to mate with you.  It’s just not sexy in human terms. 

I’m not sure being rational will be possible until we transcend our flesh-and-blood bodies, at which point creating “human friendly” AI would be rather irrelevant. If AI materializes before we transcend, it seems more likely that human beings will cause a conflict than the purely rational AI, so shouldn't the focus be on human transcendence rather than FAI?


41 comments

The obstacle I see as most formidable in such an undertaking is that, no matter how much “rational software” our brains absorb, we cannot escape the fact that we exist within the construct of “irrational hardware”.

It's possible to design some 'rational software' that corrects for known or expected 'irrationalities' in the hardware. (A trivial example: if our hardware makes decisions by picking randomly from the set of possible decisions, software that provides a list of possible decisions ranked from best to worst will obviously fail. So the software should order all possible decisions, then make only the top of the list available to the hardware.)
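A minimal sketch of how that filtering layer could look, in Python (the function names and the toy scoring rule are my own placeholders, not anything from the comment):

```python
import random

def hardware_choose(options):
    # Stand-in for the "irrational hardware": it just picks uniformly
    # at random from whatever options it is shown.
    return random.choice(options)

def software_filter(options, score, keep=1):
    # "Rational software" layer: rank the options by a scoring function
    # and expose only the top few to the hardware, so that even a random
    # picker ends up with a good choice.
    return sorted(options, key=score, reverse=True)[:keep]

options = [1, 5, 3, 9, 2]
print(hardware_choose(options))                                      # any of the five
print(hardware_choose(software_filter(options, score=lambda x: x)))  # always 9
```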

As far as I can tell, being purely rational is in direct opposition to being human.

This makes no sense. It might be helpful for me to say that there's no such thing as "being purely rational": one can be a rational human, or a rational alien, or a rational computer, but one can't just be "a rational". "Rational" is necessarily an adjective of some condition; it's a two-place word.

The reason there is not a “School of Super Bad Ass Black Belt Rationality” could be as simple as…. It doesn't make people want to mate with you.

It probably would give you the skills you'd need to become the kind of person people do want to mate with, if only because it would give you the skills to become the kind of person you want to be in general.

1) Sex ... 2) Mortality ... 3) Food, water, shelter.

If these are your desires and you wish to have them, then use rationality's skills to obtain them most efficiently: polyamory, relationship skill, and birth control / protection can drastically increase the frequency and intensity of sex. Organisations like SENS are attacking the mortality problem; support them while learning about existing life extension methods like healthy eating. Determine how much food, water, and shelter you need and optimise your environment so you receive enough of each with arbitrarily high probability and arbitrarily low effort.

If being rational conflicts with my ability to provide my body with its basic needs...

Then you have named the art, and failed the 12th virtue.

polyamory, relationship skill, and birth control / protection can drastically increase the frequency and intensity of sex

He probably meant that sex is something a rational being would avoid rather than maximise.

I managed to distract myself! I meant to follow that paragraph with "If these are your desires and you do not wish to have them, self-modify. Pavlovian conditioning and operant conditioning are keywords for more research in this area."

My physical body binds me to countless irrational motivations.

Motivations are not inherently rational or irrational. Being rational is not like being a blank slate. It's like being an... effective human.

But is being an effective human all that "rational"?

When I look at humans who are hugely successful, I do not see rationality as their guiding principle.

Do you see some other principle that is regularly the guiding principle of hugely successful people, and is not regularly the guiding principle of not hugely successful people?

(If you do, please share! I'd like to be hugely successful, so it would be rational for me to adopt that principle if it existed.)

The book "How to Win Friends and Influence People" is sort of the go-to text here. The upshot is that the guiding principle is twofold: helping people and requesting help from people. If you wish to maximize your own wealth and power, apply this principle especially to the rich and powerful.

and is not regularly the guiding principle of not hugely successful people?

Why the dichotomy? A principle can be used by different people with different abilities, leading to different levels of success, but still remain fundamentally flawed, leading to suboptimal achievement for both gifted and non-gifted people.

Short-term benefits vs. long-term benefits.

Why the dichotomy?

If a test regularly returns 'you have cancer' when I have cancer, and regularly returns 'you have cancer' when I don't have cancer, it's not a good test.

Similarly, if a principle guides some people to be successful, and it guides other people to be unsuccessful, it is not a good principle.

For example: it could be said that "eat food at least daily, drink water at least daily, and sleep daily or close to it" is a principle that hugely successful people follow. It is also a principle that not hugely successful people follow. Following this principle will not make me hugely successful.

I could just say "Pr(not successful | follows principle) needs to be low, otherwise the base rate makes it meaningless".
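A toy calculation of why the base rate matters here (the numbers are invented purely for illustration): if essentially everyone, successful or not, follows the principle, then conditioning on "follows the principle" leaves the probability of success at the base rate, so the principle tells you nothing.

```python
# Invented illustrative numbers: 0.1% of people are hugely successful,
# and 95% of people (successful or not) follow the principle
# "eat, drink, and sleep daily".
p_success = 0.001
p_follows_given_success = 0.95
p_follows_given_failure = 0.95

# Bayes' rule: P(success | follows) = P(follows | success) * P(success) / P(follows)
p_follows = (p_follows_given_success * p_success
             + p_follows_given_failure * (1 - p_success))
p_success_given_follows = p_follows_given_success * p_success / p_follows

print(p_success_given_follows)  # 0.001, i.e. no better than the base rate
```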

If everybody had to choose between investing his savings in the lottery or in index funds, then if you look at the very rich, most of them would be lottery players, even though it was the worse choice.

Can you evidence that?

[This comment is no longer endorsed by its author]

I don't have real-world stats, but here's a hypothetical scenario. Say there's a world where there are two options for making money: a lottery with a 0.0001 percent (1 in 1,000,000) chance of making a billion dollars (EV $1,000), or an investment with a 100 percent chance of making a million dollars. The rational thing to do is invest, but the richest people will have bought lottery tickets. So will a great many broke people, but you won't see them on the news.


Careful with your utility function. You need utility to be linear in money before you can just take the expected dollar value.

If your goal is to be a billionaire, the EV of the lottery is 1e-6 and the EV of the solid investment is 0 (assigning utility 1 to the state of being a billionaire and 0 otherwise).
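A quick side-by-side of both calculations, using only the toy numbers from the hypothetical scenario above (none of this is real financial data):

```python
# Toy numbers from the hypothetical scenario above.
p_lottery_win = 1e-6              # a 0.0001% chance of winning
lottery_payout = 1_000_000_000    # a billion dollars
investment_payout = 1_000_000     # a million dollars, with certainty

def expected_utility(outcomes, utility):
    # outcomes: list of (probability, dollar amount) pairs
    return sum(p * utility(x) for p, x in outcomes)

lottery = [(p_lottery_win, lottery_payout), (1 - p_lottery_win, 0)]
investment = [(1.0, investment_payout)]

linear = lambda x: x                             # utility linear in dollars
billionaire = lambda x: 1 if x >= 1e9 else 0     # only "being a billionaire" counts

print(expected_utility(lottery, linear))          # 1000.0
print(expected_utility(investment, linear))       # 1000000.0  -> invest
print(expected_utility(lottery, billionaire))     # 1e-06
print(expected_utility(investment, billionaire))  # 0          -> buy the ticket
```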

What is "Can you evidence that?" supposed to mean? Especially when talking about a hypothetical scenario ...

Could you please make an effort to communicate clear questions?

(If you're asking for clarification, then Normal_Anomaly's explanation is what I meant)

Ah, I misread your comment, my apologies. I'll retract my question.

For what it's worth, I had stealthily edited my question ("If everybody had" instead of "If everybody has"); I was trying to find a short illustration of the fact that a choice with a low expected value but a high variance will be overrepresented among those who got the highest value. It seems like I failed at being concise and clear :P

Heh, well I've got dyslexia so every now and then I'll end up reading things as different to what they actually say. It's more my dyslexia than your wording. XD

It seems like I failed at being concise and clear :P

Hmm, I wonder if being concise is all it's cracked up to be. Concise messages usually have lower information content, so they're actually less useful for narrowing down an idea's location in idea-space. Thanks, I'm looking into effective communication at the moment and probably wouldn't have realized the downside to being concise if you hadn't said that.

It’s just not sexy in human terms.

Er...speak for yourself.

The reason there is not a “School of Super Bad Ass Black Belt Rationality” could be as simple as…. It doesn't make people want to mate with you. It’s just not sexy in human terms.

You're saying people only do sexy things. Can you really not think of the many things that many people do even though those things make them definitely unsexy?

  • Enter monasteries, the Catholic priesthood, and other long-term organizations where they forswear sex entirely.
  • Publicly proclaim affiliations, and work for causes, which make them persecuted (often killed by states and organized religions) and which do not have enough supporters to make up for it ingroup-style. Extreme examples: self-immolation as protest, or hunger strikes in prisons where the expected outcome is that they die of hunger.
  • Dedicate their lives to obscure pursuits and personal goals which no-one else appreciates. Like many now-famous scientists and researchers, and like many more still-not-famous people (aka "geeks" or "nerds") whose work didn't turn out to be extremely useful a few centuries later...

I could go on...

"Definitely unsexy" seems like a very questionable idea given widespread variance in human sexuality.

I wouldn't say so. There's not much you can do short of killing yourself (in such a way as to destroy any viable genetic material) which outright sets your expected reproductive success to zero, but statistically speaking there's a lot you can do to handicap yourself -- including most of DanArmak's examples. "Unsexy" seems as good a way of describing this as anything.

(I'm not sure about the persecuted causes example, though. Suicidal devotion has obvious issues, but short of that identifying yourself with a persecuted group strikes me as high risk/high reward.)

That's why I qualified that example with "which do not have enough supporters to make up for it ingroup-style", i.e. causes which don't have enough supporters who will then find you hyper-sexy.

Granted, but you only need a few. Dunbar's Number being what it is, a group so small that no one's ever heard of it outside can still generate enough internal status to make up for whatever you lose in external status; just look at all the cults founded essentially to get their founders laid.

And since external persecution is somewhat proportional to perceived threat, and perceived threat's somewhat proportional to size, I think you'd need a very small and very weird group for the fitness equation to work out negative purely on those grounds. Ideally a group of one, Ted Kaczynski style.

just look at all the cults founded essentially to get their founders laid.

That worked out well for the founders; not so well, I'd argue, for most of the followers.

Your pool of actual mates (with whom you have sex, let alone with whom you have children) is typically much smaller than Dunbar's number, but your pool of potential mates is typically much bigger - especially in modern large cities. A bigger pool means people can find more individually suitable mates (i.e. different people value the same prospective mate differently). It also means greater chance of mating for those with low prospects (you can find others who also have low mate-value, and you have a bigger pool to just get lucky with).

Are there actual fitness calculations that tell us at which sizes ingroup mating tends to be a winning strategy?

Of course, if the group is preselected on the basis of e.g. sharing rare sexual fetishes, then it's going to be successful. But if it's preselected for e.g. supporting an outlawed political cause, I would by default assume a distribution of sexual preferences that is broadly similar to that of the general population. (Exceptions certainly exist, like political causes related to reproductive rights.)

There is still a unique definition of "sexiness", the evolutionary one: expected number of grandchildren or some such. That's what I had in mind, and so did the OP, I believe.

The strongest impression I get from this is that you should, to use the colloquial phrase, "lurk more". The local version is "read the sequences", but all I mean is, read around long enough to figure out what "rational" means here.

But how does he recognise when he already knows what "rational" means here?

Same way anyone else does? How do you recognize when you know what "ornithopter" means?

Depends on the desired certainty. I haven't yet written a confused post about ornithopters on a forum whose understanding of "ornithopter" differs significantly from the popular usage, so this example isn't all that illuminating.

The best way to ascertain that I interpret a word the same way others do is to engage in discussions. By reading alone, it's going to be more difficult.

I wouldn't think it'd be all that difficult in absolute terms in this case, but even if it is, then read a bit, ask questions and engage in discussions, and hold off on making speeches until you can reasonably infer you know what's going on.


You toil within the valley of irrationality caused by partial rationality. I have been a member of this site for about six months, and this is something like the sixth post along the lines of "Yeah, but irrationality is good/the only possibility/whatever."

Go read the sequences. In chronological order. At least past Meta-Ethics (and you can in principle skip most of the mathy quantum mechanics). If you are still confused, THEN go make a post like this.

Humans are physical beings. Even if humans were transformed into digital beings, there would still be an equivalent dilemma: forgoing the bare minimum of electricity in order to more perfectly satisfy some cognitive ritual. Totally disregarding your infrastructure would only be an option for an agent with no physical parts. But those agents can't take actions at all, and that includes attempts to satisfy any rituals.

Is there such a thing as a desire that's actually just stupid? I'm not sure that's possible in a world where humans are just one possible sort of evolved intelligence and metamorality is per-species. But within humanity, can we say someone's values are in fact stupid? Irrational?

One possible example suggested by Parfit is "Future Tuesday Indifference". Suppose Bob cares about his future self and in particular prefers not to have pain in the future, as normal people do, except on any future Tuesday. He does not care at all if he is going to suffer pain on a future Tuesday. He is happy to choose on Sunday to have an extremely painful operation scheduled for Tuesday instead of a mild one for either Monday or Wednesday. It is not that he does not suffer the pain when Tuesday comes: he is only (irreducibly, arbitrarily) indifferent to pain on Tuesdays that are in the future. Parfit argues that Bob has irrational preferences.

Of course this is a very contrived and psychologically unrealistic example. But Parfit argues that once it is granted that brute preferences can be irrational, the question is open whether many of our actual ones are (maybe because they are subtly grounded on distinctions that, when examined, are as arbitrary as that between future Tuesday and Monday).

Yes. The obvious one to me is that it is totally irrational of me to want to eat a pile of sweets that I know from previous experience will make me feel bad about myself ten minutes after eating it, and which I don't rationally need nutritionally. I can make myself not do it, but to make myself not want to is like trying not to see an optical illusion...

In my experience, there's lots of ways to make myself not want to eat those sweets.

For example, I can get out of the house and go for a brisk walk, or better yet go to the gym and work out. IME, while I'm exercising I rarely find myself craving food of any sort unless I'm genuinely hungry.
Or I can make myself a large portion of something else to eat, and eat it until I'm stuffed. IME, I don't want to eat sweets when I'm actively full.
Or I can go to sleep. IME, I don't want to eat sweets while I'm sleeping.
Or I can douse the sweets with urine. IME, I don't want to eat sweets doused in urine.
Or many other possibilities.

The problem is I don't want to do any of those things, either.
Which is a remarkable coincidence, when I stop to think about it.
In fact, a neutral observer might conclude that I want to want to eat the sweets.

Wants are like pains: sometimes they're useful information that something should be attended to, and sometimes they're irrelevant distractions because there are more important things to do, and just have to be endured and otherwise ignored.

Hm.

Suppose I value risk, and I value longevity, and I live in a world such that high-risk environments are always low-longevity and high-longevity environments are always low-risk.

I want to say something disparaging about that arrangement, but I'm not sure what it means to call it either "stupid" or "irrational".

Certainly, it doesn't follow from the fact that I have those values that I'm stupid.

It is certainly true that my behavior will predictably be less effective at optimizing my environment for my values in that case than it would be otherwise, but the same could be said for valuing happiness over paperclips.

(shrug) I think the word I want here is "inelegant."

But within humanity, can we say someone's values are in fact stupid? Irrational?

Yes.

Non-metonymically? Why is this an interesting question?