Irrational hardware vs. rational software
I am passionately fond of the idea of creating an “Art of Rationality” sensibility/school as described in the [A Sense That More is Possible](http://lesswrong.com/lw/2c/a_sense_that_more_is_possible/) article.
The obstacle I see as most formidable in such an undertaking is that, no matter how much “rational software” our brains absorb, we cannot escape the fact that we run on “irrational hardware”.
My physical body binds me to countless irrational motivations. Just to name a few:

1) Sex. In an overpopulated world, what is the benefit of yearning for sexual contact on a daily basis? How often does the desire for sex influence rational thought? Is “being rational” sexy? If not, it is in direct conflict with my body’s desires and therefore undesirable (whereas being able to “kick someone’s ass” is definitely sexy in cultural terms).

2) Mortality. Given an expiration date, it becomes fairly easy to justify immediate, individually beneficial behavior over long-term, expansively beneficial behavior that I will not be around long enough to enjoy.

3) Food, water, shelter. My body needs a bare minimum in order to survive. If being rational conflicts with my ability to provide my body with its basic needs (because I exist within an irrational construct), what are the odds that rationality will be tossed out in favor of irrational compliance that assures those needs will be met?
As far as I can tell, being purely rational is in direct opposition to being human. In essence, our hardware is in conflict with rationality.
The reason there is not a “School of Super Bad Ass Black Belt Rationality” could be as simple as this: it doesn’t make people want to mate with you. It’s just not sexy in human terms.
I’m not sure being rational will be possible until we transcend our flesh-and-blood bodies, at which point creating “human-friendly” AI would be rather irrelevant. And if AI materializes before we transcend our bodies, it seems more likely that human beings will cause a conflict than that the purely rational AI will. So shouldn’t the focus be on human transcendence rather than FAI?
Is a Purely Rational World a Technologically Advanced World?
What would our world be today if humans had started off with a purely rational intelligence?
It seems as though a dominant aspect of rationality deals with risk management. For example, an irrational person might feel that the thrill of riding a zip line for a few seconds is well worth the risk of injuring themselves, contracting flesh-eating bacteria, and losing a leg along with both hands (sorry, but that story has been freaking me out the past few days; I in no way mean to trivialize the woman’s situation). A purely rational person would (I’m making an assumption here, because I am certainly not a rational person) recognize the high probability of something going wrong and determine that the risks were too steep when compared with the minimal gain of a short-lived thrill.
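The trade-off above can be sketched as a toy expected-utility calculation. All of the numbers below are invented purely for illustration (they are not real injury statistics, and the utility scale is arbitrary); the point is only the structure of the comparison:

```python
# Toy expected-utility comparison: ride the zip line vs. skip it.
# Every number here is an illustrative assumption, not a real estimate.

p_injury = 0.0001          # assumed chance of a serious accident
u_thrill = 1.0             # assumed utility of a few seconds of excitement
u_injury = -100000.0       # assumed utility of a catastrophic injury

# Expected value of riding: the thrill if nothing goes wrong,
# weighted against the small chance of a huge loss.
ev_ride = (1 - p_injury) * u_thrill + p_injury * u_injury
ev_skip = 0.0              # baseline: nothing gained, nothing risked

print(f"EV(ride) = {ev_ride:.4f}")
print(f"EV(skip) = {ev_skip:.4f}")
```

With these made-up numbers the tiny probability of a catastrophic outcome swamps the certain small gain, so the calculation comes out against riding; with a sufficiently small `p_injury` or milder `u_injury`, it would flip the other way.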
But how does a purely rational intelligence—even an intelligence at the current human level with a limited ability to analyze probabilities—impact the advancement of technology? As an example, would humanity have moved forward with the combustion engine and motor vehicles as purely rational beings? History shows us that humans tend to leap headlong into technological advancements with very little thought regarding the potential damage they may cause. Every technological advancement of note has had negative impacts whose probability-weighted costs might have been deemed too steep from a purely rational perspective.
Would pure rationality have severely limited the advancement of technology?
Taken further, would a purely rational intelligence far beyond human levels be so burdened by risk probabilities as to be rendered paralyzed, suspended in a state of infinite stagnation? Or would a purely rational mind simply ensure that more cautious advancement took place (which would certainly have slowed things down)?
Many of humanity’s great success stories begin as highly irrational ventures that had extremely low chances for positive results. Humans, being irrational and not all that intelligent, are very capable of ignoring risk or simply not recognizing the level of risk inherent in any given situation. But to what extent would a purely rational approach limit a being’s willingness to take action?
*I apologize if these questions have already been asked and/or discussed at length. I did do some searches but did not find anything that seemed specifically related to this line of thought.*