
Comment author: Manfred 22 May 2012 07:30:01AM 6 points [-]

> My physical body binds me to countless irrational motivations.

Motivations are not inherently rational or irrational. Being rational is not like being a blank slate. It's like being an... effective human.

Comment author: tygorton 22 May 2012 07:33:27AM -1 points [-]

But is being an effective human all that "rational"?

When I look at humans who are hugely successful, I do not see rationality as their guiding principle.

Irrational hardware vs. rational software

-10 tygorton 22 May 2012 06:52AM

I am passionately fond of the idea of creating an “Art of Rationality” sensibility/school as described in the [A Sense That More is Possible](http://lesswrong.com/lw/2c/a_sense_that_more_is_possible/) article. 

The obstacle I see as most formidable in such an undertaking is the fact that, no matter how much “rational software” our brains absorb, we cannot escape the fact that we exist within the construct of “irrational hardware”. 

My physical body binds me to countless irrational motivations. Just to name a few:

1) Sex. In an overpopulated world, what is the benefit of yearning for sexual contact on a daily basis? How often does the desire for sex influence rational thought? Is "being rational" sexy? If not, it is in direct conflict with my body's desire and therefore undesirable (whereas being able to "kick someone's ass" is definitely sexy in cultural terms).

2) Mortality. Given an expiration date, it becomes fairly easy to justify immediate, individually beneficial behavior over long-term, expansively beneficial behavior that I will not be around long enough to enjoy.

3) Food, water, shelter. My body needs a bare minimum in order to survive. If being rational conflicts with my ability to provide my body with its basic needs (because I exist within an irrational construct), what are the odds that rationality will be tossed out in favor of irrational compliance that assures my basic physical needs will be met?

As far as I can tell, being purely rational is in direct opposition to being human.  In essence, our hardware is in conflict with rationality. 

The reason there is not a "School of Super Bad Ass Black Belt Rationality" could be as simple as this: it doesn't make people want to mate with you. It's just not sexy in human terms.

I'm not sure being rational will be possible until we transcend our flesh-and-blood bodies, at which point creating "human friendly" AI would be rather irrelevant. If AI materializes before we transcend, it seems more likely that human beings will cause a conflict than the purely rational AI will, so shouldn't the focus be on human transcendence rather than FAI?

Comment author: [deleted] 20 May 2012 06:50:10PM *  1 point [-]

If you plan to never, ever live in a non-English-speaking place, yeah, learning languages other than English is not terribly useful. Anyway, the utility function is not up for grabs, so if I'm learning Irish simply because I want to, considerations of usefulness are not too relevant.

In response to comment by [deleted] on Learn A New Language!
Comment author: tygorton 22 May 2012 03:01:28AM 1 point [-]

Aren't there lateral benefits to learning something as complex as a new language? The level of mental focus and commitment required must have cognitive rewards, and I would think any level of cognitive improvement would be of great value.

Learning any language requires a certain level of immersion in cultural concepts and perspectives outside your own. Broadening cultural awareness and gaining new perspectives certainly contributes to an individual's ability to see the world with increased clarity.

It seems to me that measuring the worth of learning anything solely by how directly one might make use of it cannot capture its total value.

Comment author: Jack 20 May 2012 06:08:22AM 3 points [-]

> Kagan does feel that death is "bad", but he only throws this in at the very end after spending the entirety of the article arguing the opposite.

He's not arguing the opposite. He's doing philosophy by the Socratic method. I really hope this wasn't a common misinterpretation here.

Comment author: tygorton 20 May 2012 06:22:16AM 2 points [-]

No, I"m sure it is just my lack of knowledge regarding philosophy and the associated methods of discussing it. I never actually believed that the author was trying to convince me that death was not bad, but (as I stated above) playing devil's advocate in order to explore ideas and challenge the reader. I simply wouldn't know enough about it to name it the "Socratic method". My bad.

Comment author: DanielLC 20 May 2012 04:58:50AM 0 points [-]

A purely rational person would be nigh omniscient. If a combustion engine does more good than bad (which it does), a purely rational person would realize this.

If you want to know how we'd act if we just weren't biased about risks, but were just as imprecise, consider: would it have been worthwhile to be substantially more cautious? Barring nuclear weapons, I doubt it. The lives lost due to technological advancements have been dwarfed by the lives saved. A well-calibrated agent would realize this, and proceed with a lesser level of caution.

There are areas where we're far too cautious, such as medicine. Drugs aren't released until the probability of killing someone is vastly below the probability of saving someone. Human testing is avoided until it's reasonably safe, rather than risking a few lives to get a potentially life-saving drug out years earlier.
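To make that trade-off concrete, here is a minimal expected-value sketch in Python. Every number in it is invented for illustration; only the structure of the comparison is the point.

```python
# Toy expected-value comparison: release a drug earlier vs. wait for
# more safety testing. All figures below are made up for illustration.

p_harm = 0.001        # assumed probability the drug kills a given patient
p_benefit = 0.05      # assumed probability the drug saves a given patient
patients_per_year = 100_000
years_of_delay = 3    # extra years spent on cautious testing

# Net lives per year once the drug is available
net_lives_per_year = patients_per_year * (p_benefit - p_harm)

# Expected lives forgone by waiting out the delay
cost_of_caution = net_lives_per_year * years_of_delay

print(f"Net lives saved per year once released: {net_lives_per_year:,.0f}")
print(f"Expected lives lost to the delay: {cost_of_caution:,.0f}")
```

With these made-up figures the delay costs about 14,700 expected lives; the argument above is that regulators implicitly weight `p_harm` far more heavily than `p_benefit`.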

Comment author: tygorton 20 May 2012 05:29:38AM *  -2 points [-]

"A purely rational person would be nigh omniscient"

Even at current human intelligence levels? I don't see how pure rationality without the ability to crunch massive amounts of data extremely fast would make someone omniscient, but I may be missing something.

"If a combustible engine does more good than bad (which it does)"

Of course, I'm playing devil's advocate with this post a bit, but I do have some uncertainty about... well, your certainty about this :)

What if a purely rational mind decides that while there is a high probability that the combustion engine would bring about more "good" than "bad", the probable risks compel it to reject production in favor of first improving the technology into something with a better reward/risk ratio? A purely rational mind would certainly recognize that, over time, reliance on gasoline derived from oil would lead to shortages and potential global warfare. This is a rather high-probability risk. Perhaps a purely rational mind would opt to continue development until a more sustainable technology could be mass-produced, greatly reducing the potential for war, pollution, and so on. Keep in mind, we have yet to see the final aftermath of our combustion-engine reliance.

"The lives lost due to technological advancements have been dwarfed by the lives saved."

How does a purely rational mind feel about the inevitable overpopulation issue that will occur if more and more lives are saved and/or extended by technology? How many people lead very low-quality lives today due to overpopulation? Would a purely rational mind make decisions to limit population rather than help it explode?

Does a purely rational mind value life less or more? Are humans MORE expendable to a purely rational mind so long as it is 51% beneficial, or is there a rational reason to value each individual life more passionately?

I feel that we tend to associate pure rationality with a rather sci-fi notion of robotic intelligence. In other words, pure rationality is cold and mathematical and would consider compassion a weakness. While this may be true, a purely rational mind may have other reasons than compassion to value individual life MORE rather than less, even when measured against a potential benefit.

The questions seem straightforward at first, but is it possible that we lean toward the easy answers, which may be highly influenced by very irrational cultural assumptions?

Comment author: tygorton 20 May 2012 05:06:51AM 0 points [-]

Nice, I'm going to experiment with this. It is like a thought experiment that intentionally creates the opportunity for an "accidental discovery", and those are usually the most useful.

Is a Purely Rational World a Technologically Advanced World?

-3 tygorton 20 May 2012 04:40AM

What would our world be today if humans had started off with a purely rational intelligence?

It seems as though a dominant aspect of rationality deals with risk management. For example, an irrational person might feel that the thrill of riding a zip line for a few seconds is well worth the risk of injuring themselves, contracting a flesh-eating bug, and losing a leg along with both hands (sorry, but that story has been freaking me out the past few days; I in no way mean to trivialize the woman's situation). A purely rational person would (I'm making an assumption here because I am certainly not a rational person) recognize the high probability of something going wrong and determine that the risks were too steep when compared with the minimal gain of a short-lived thrill.
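To make the zip-line reasoning concrete, here is a toy expected-utility calculation in Python. The probabilities and utilities are entirely invented; the point is only that the verdict turns on how those numbers are estimated.

```python
# Toy expected-utility check for the zip-line decision.
# All probabilities and utilities below are invented for illustration.

p_catastrophe = 1e-6        # assumed chance of a catastrophic outcome per ride
u_thrill = 1.0              # utility of a few seconds of thrill
u_catastrophe = -500_000.0  # utility of losing limbs to an infection

expected_utility = (1 - p_catastrophe) * u_thrill + p_catastrophe * u_catastrophe
print(f"Expected utility of riding: {expected_utility:+.3f}")
```

With these numbers riding comes out barely positive (about +0.5); assume a tenfold higher catastrophe probability and the sign flips. The "purely rational" verdict hinges entirely on those two estimates.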

But how does a purely rational intelligence, even one at the current human level with a limited ability to analyze probabilities, impact the advancement of technology? As an example, would humanity have moved forward with the combustion engine and motor vehicles as purely rational beings? History shows us that humans tend to leap headlong into technological advancements with very little thought regarding the potential damage they may cause. Every technological advancement of note has had negative impacts whose probability-weighted costs might have been deemed too steep from a purely rational perspective.

Would pure rationality have severely limited the advancement of technology?

Taken further, would a purely rational intelligence far beyond human levels be so burdened by risk probabilities as to be rendered paralyzed, suspended in a state of infinite stagnation? Or would a purely rational mind simply ensure that more cautious advancement took place (which would certainly have slowed things down)?

Many of humanity's great success stories began as highly irrational ventures with extremely low chances of positive results. Humans, being irrational and not all that intelligent, are very capable of ignoring risk or simply not recognizing the level of risk inherent in any given situation. But to what extent would a purely rational approach limit a being's willingness to take action?

*I apologize if these questions have already been asked and/or discussed at length.  I did do some searches but did not find anything that seemed specifically related to this line of thought.*

Comment author: [deleted] 19 May 2012 09:38:32PM 2 points [-]

> Kagan does feel that death is "bad", but she only throws this in at the very end after spending the entirety of the article arguing the opposite.

He.

Also, everything up to the paragraph starting with "Alternatively, if all facts can be dated..." is an argument for the badness of death in the presence of undateable facts (which seems to me the more reasonable position).

So no, he didn't spend the entirety of the article arguing the opposite.

In response to comment by [deleted] on Oh, mainstream philosophy.
Comment author: tygorton 19 May 2012 09:58:23PM 0 points [-]

Right... :) Oops. Fixed.

But the same paragraph continues with: "But that, of course, returns us to the earlier puzzle. How could death be bad for me when I don't exist?"

It feels like the article is playing devil's advocate, but I perceived that the bulk of it was arguing that the sentiment of death being "bad" is rather irrational.

Comment author: tygorton 19 May 2012 09:14:18PM *  2 points [-]

The last lines of the article:

> So is death bad for you? I certainly think so, and I think the deprivation account is on the right track for telling us why. But I have to admit: Puzzles still remain.

Kagan does feel that death is "bad", but he only throws this in at the very end after spending the entirety of the article arguing the opposite.

One of his dominant questions is: why do we feel bad about the loss of time after our death, as opposed to feeling bad about the loss of time before our birth? I won't go into detail here about the article's content, but I do have a thought about it.

This is just me running with an idea in the moment, so I apologize if it is not well organized:

Let's say we have just purchased tickets to a concert. It features a band we have always wanted to see play live and the concert is several months away. We may certainly feel impatient and agonize over the wait, but in some sense the anticipation is a build-up to the inevitable moment of pleasure we feel when the actual day arrives, followed by the actual moment when we are at the concert hearing the band play in a crowd of people. Once the concert is over, it is over in every sense. The anticipation--having something to look forward to--is over, AND the event itself is over.

If we look at being born and subsequently dying as though they are similar to buying tickets to a concert and attending the concert, I think we can define why the time before the concert is not perceived as "bad" but the time after the concert has ended could certainly be perceived as "bad". Before we are born, the events of the world can be perceived as the build-up, the anticipation phase, or "buying the ticket". The world is being prepped for our entry. Life itself is the concert; it is the show we all want to be a part of... we want to be in that crowd hearing the music. When the concert is over, there is an inevitable sense of loss. Everything leading up to the concert fueled the ultimate enjoyment of the concert itself. What comes after the concert can only be seen as "less appealing", or "bad", in comparison to the build-up to and excitement of the concert itself.

In other words, we see the events leading up to something we "want" as being positive, even if they present some level of agitation due to impatience or a strong desire to just get there already. We inherently know that the waiting will make it all that much sweeter. Yet the end of something we "want" to continue is difficult to define as anything but "bad".

Being upset about the time lost BEFORE our birth would be like being upset about missing a concert we never wanted to buy tickets for in the first place.

Comment author: tygorton 19 May 2012 08:14:48PM *  1 point [-]

I have never been good at math and a high percentage of content discussed here is over my head. However, I am hoping this does not exclude me from a sincere attempt to grasp the general concepts and discuss them as best I can. In other words, I'm hoping my enthusiasm makes up in some way for my total ignorance.

My take on this is that, within a mathematical equation, if a specific variable does not have a discernible impact on the resulting value, it is irrelevant to the equation. Such a variable may exist merely as a conceptual "comfort" to the human method of perceiving the universe, but that doesn't mean it serves any meaningful/rational purpose within the equation. If pure rationality is the ideal, then all "truths" should be reduced to their absolute smallest value. In other words, trim the fat no matter how damn tasty it is.

If all possibilities exist at all times as variable probabilities, I can begin to grasp the irrelevance of time as being necessary to arrive at meaningful insights about the universe. If time always exists as an infinite quantity, it may as well be zero, because along an infinite timeline all possibilities, even those with extremely small probability, will occur.

I am wholly new to all of these concepts and as I stated, math might as well be a rapid-fire auctioneer speaking a foreign language. The above thoughts are the best I could solidify and I would love to know if I'm even in A ballpark... not THE ballpark, but at least A ballpark that is somewhere near relevant.
