If we were purely rational, we could be trusted a lot more with dangerous technologies. The tough case is when everyone smart enough to invent something big is made rational, while ordinary people stay as they are. So let's suppose just the inventors are rational.
Motor vehicles are not that dangerous, compared to their benefits.
http://www.infoplease.com/ipa/A0922292.html
The death rate from motor vehicles peaked at 269 per million in 1970. The motorization of ambulances alone saves far more than one in a thousand.
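For what it's worth, here's a quick back-of-the-envelope comparison of those two figures; note that the one-in-a-thousand ambulance figure is my claim above rather than an established statistic, and the two rates may not even be on the same time basis:

```python
# Back-of-the-envelope comparison of the two figures in this thread.
# 269 per million is the cited peak annual motor-vehicle death rate (1970);
# "more than one in a thousand" is the claimed rate of lives saved by
# motorized ambulances. Both are treated as given here, not verified.

peak_deaths_per_person = 269 / 1_000_000   # 0.000269
claimed_ambulance_savings = 1 / 1_000      # 0.001

print(f"peak death rate:        {peak_deaths_per_person:.6f}")
print(f"claimed lives saved:    {claimed_ambulance_savings:.6f}")
print(f"ratio (saved / killed): {claimed_ambulance_savings / peak_deaths_per_person:.1f}x")
# -> under these figures, the claimed savings exceed the peak death rate by roughly 3.7x
```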
It would further help that we'd have had seat belts, air bags, crumple zones, forward-radar auto-braking, automatic lane centering, etc. much sooner. And, for that matter, we'd have been working on efficiency and alternative energy sources, thereby mitigating the other drawbacks.
We probably wouldn't have dismantled our light rail system, either.
The death rate from motor vehicles peaked at 269 per million in 1970. The motorization of ambulances alone saves far more than one in a thousand.
Interesting. I had always thought of cars as being among the most dangerous things ever (they're the leading cause of death for people of my age, sex, and country), but I had never thought of looking at the flip side.
A lot of people who used horse-powered travel in the late 19th century used carriages and the like. Taxis with horses were pretty common. So a direct comparison to the dangers of horseback riding may not be called for. On the other hand, horses also created a highly unsanitary environment due to their excrement and urine. I don't know whether that had a substantial impact on disease. I'd expect it not to have that large an impact, but I'm not sure even what the first steps would be in making an estimate for that.
I'm not sure even what the first steps would be in making an estimate for that.
Correlating recorded disease rates with recorded horses per capita would be a place to start, though of course there are many confounding factors.
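Something like the following is the kind of first step I have in mind; the disease, the cities, and all the numbers are purely hypothetical, just to show the shape of the calculation:

```python
import pandas as pd

# Hypothetical city-level historical data; real figures would have to be
# assembled from census records and public-health archives.
df = pd.DataFrame({
    "city": ["A", "B", "C", "D"],
    "horses_per_capita": [0.10, 0.18, 0.05, 0.22],
    "typhoid_deaths_per_100k": [30, 45, 18, 52],
})

# A raw correlation is only a starting point: it ignores confounders such as
# population density, sanitation infrastructure, and water-supply quality.
print(df["horses_per_capita"].corr(df["typhoid_deaths_per_100k"]))
```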
Probably. On the other hand, it'd be quite impractical for most people in a several-million-inhabitant city to have a horse.
The death rate from motor vehicles peaked at 269 per million in 1970. The motorization of ambulances alone saves far more than one in a thousand.
This is a good statistic and I am pleased to have learned it.
The death rate from motor vehicles peaked at 269 per million in 1970. The motorization of ambulances alone saves far more than one in a thousand.
How many of those ambulance uses are for fairly old people? A lot of motor accidents occur among a fairly young cohort, so I'm not sure this is a great comparison. Still, the basic point seems strong.
We probably wouldn't have dismantled our light rail system, either.
Can you expand on your logic for this? Light rail has heavy upkeep costs.
I personally would have died at 20 without ambulance motorization, for instance, and I don't think I'm a 1 in 4000 outlier. Appendicitis doesn't always happen at a convenient time, nor is it always recognized promptly. Right there I'd guess we're talking over one in a thousand.
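As a very rough sanity check on that guess, here is a back-of-the-envelope estimate; every number in it is an assumption pulled out of the air for illustration, not a researched figure:

```python
# Rough sanity check of the "over one in a thousand" guess, using
# ASSUMED illustrative numbers -- not researched figures.
lifetime_appendicitis_risk = 0.07      # assumed: roughly 7% of people
fatal_without_fast_transport = 0.05    # assumed: 5% of cases would be fatal
                                       # without rapid (motorized) transport

lives_saved_per_person = lifetime_appendicitis_risk * fatal_without_fast_transport
print(round(lives_saved_per_person, 4))  # 0.0035, i.e. about 3.5 in a thousand
# Appendicitis alone, under these assumptions, already clears 1 in 1000,
# before counting strokes, heart attacks, trauma, etc.
```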
As for light rail, it does have costs. So do buses. The numbers I can find on buses put them ahead only where there is no existing rail system.
I would have been at least permanently brain-damaged, if not dead, without fast ambulances. Rapid response is the difference between recovering from a stroke and, well, not.
How many of those ambulance uses are for fairly old people? A lot of motor accidents occur among a fairly young cohort, so I'm not sure this is a great comparison. Still, the basic point seems strong.
Yeah, the first thing I thought was to compare QALYs rather than the number of lives, too. But then I thought that ambulances are more useful for 'sudden' emergencies such as accidents than for 'slower' ones such as cancer, so maybe a higher fraction of 'lives saved by ambulances' are young, otherwise healthy people than is immediately obvious.
Would pure rationality have severely limited the advancement of technology?
I doubt it. Technology is a tool, and turning down tools doesn't sound that rational as rules of thumb go. If there's some strict advantage to not using technology a lot of the time (like playing to our evolutionary preferences or something), you should still have it around even if you keep it locked in a vault. Still, you'll need it eventually to stave off some mortal sickness or asteroid or to avoid the volcano that's set to explode in six months, etc. I imagine our hypothetical purely rational agent would be far better than us at managing dangerous tech.
Strategies would be different for individuals as opposed to societies. Both would, as a first approximation, only be as cautious as they need to be in order to preserve themselves. That's where the difference between local and global disasters comes into play.
A disaster that can kill an individual won't usually kill a society. The road to progress for society has been paved by countless individual failures, some of which took a heavy toll, but in the end they never destroyed everything. It may be a gamble for an individual to take a risk that could destroy them, and risk-averse people will avoid it. But for society as a whole, non-risk-averse individuals will sometimes strike the motherlode, especially as the risk to society (the loss of one or a few individuals out of the adventurous group at a time) is negligible. Such individuals could therefore conceivably be an asset: they'd explore avenues past certain local optima, for instance. This would also benefit those few individuals who'd be incredibly successful from time to time, even if most people like them are destined to remain in the shadows.
Of course, nowadays even one person could fail hard enough to take everything down with them. That may be why you get the impression that rational people are perhaps too cautious and could hamper progress. The rules of the game have changed; you can't just be careless anymore.
Would pure rationality have severely limited the advancement of technology?
It depends on what the utility function of said pure rationality was.
(I've seen the argument that average utilitarians would never have invented agriculture, as the average human in the 19th century was worse off than the average human in Palaeolithic times.)
What does it even mean to be purely rational? How narrowly one defines that will probably impact the question a lot. I'm not aware of any good definition of a purely rational agent of limited intelligence.
A purely rational person would be nigh omniscient. If a combustion engine does more good than bad (which it does), a purely rational person would realize this.
If you want to know how we'd act if we just weren't biased about risks, but were just as imprecise, consider: would it have been worthwhile to be substantially more cautious? Barring nuclear weapons, I doubt it. The lives lost due to technological advancements have been dwarfed by the lives saved. A well-calibrated agent would realize this, and proceed with a lesser level of caution.
There are areas where we're far too cautious, such as medicine. Drugs aren't released until the probability of killing someone is vastly below the probability of saving someone. Human testing is avoided until it's reasonably safe, rather than risking a few lives to get a potentially life-saving drug out years earlier.
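To make the shape of that trade-off concrete, here's a toy expected-value calculation; every number in it is invented for illustration and is not a claim about any real drug or approval process:

```python
# Toy expected-value comparison of releasing a drug earlier vs. later.
# All numbers below are made up purely to illustrate the trade-off described
# above; none are claims about any real drug or approval process.

patients_per_year      = 100_000   # assumed: people who would take the drug
p_saved_per_patient    = 0.01      # assumed: chance the drug saves a given patient
p_killed_per_patient   = 0.001     # assumed: chance the drug kills a given patient
years_of_extra_testing = 5         # assumed: delay imposed by extra caution

net_lives_per_year = patients_per_year * (p_saved_per_patient - p_killed_per_patient)
lives_lost_to_delay = net_lives_per_year * years_of_extra_testing
print(net_lives_per_year)    # 900 net lives per year under these assumptions
print(lives_lost_to_delay)   # 4500 net lives forgone by the 5-year delay
```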
A purely rational person would be nigh omniscient. If a combustion engine does more good than bad (which it does), a purely rational person would realize this.
That's not the definition of rationality usually used around here. The one used here is much more conservative about the scale of counterfactual ability it implies.
A purely rational person would be nigh omniscient
I agree with the rest of your point, but I think I'm misinterpreting this statement, because it seems like an overstatement to me. That is, I'd restate your second sentence as "If there was dispute over whether use of a combustion engine did more good than bad, a purely rational person would be able to effectively investigate and correctly determine the answer." As you say, I'm fairly certain that the combustion engine created more benefit than harm to humanity.
"A purely rational person would be nigh omniscient"
Even at current human intelligence levels? I don't see how pure rationality without the ability to crunch massive amounts of data extremely fast would make someone omniscient, but I may be missing something.
"If a combustible engine does more good than bad (which it does)"
Of course, I'm playing devil's advocate with this post a bit, but I do have some uncertainty about.... well, your certainty about this :)
What if a purely rational mind decides that, while there is a high probability that the combustion engine would bring about more "good" than "bad", the probable risks compel it to reject its production in favor of first improving the technology into something with a better reward/risk ratio? A purely rational mind would certainly recognize that, over time, reliance on gasoline derived from oil would lead to shortages and potential global warfare. That is a rather high-probability risk. Perhaps a purely rational mind would opt to continue development until a more sustainable technology could be mass-produced, greatly reducing the potential for war/pollution/etc. Keep in mind, we have yet to see the final aftermath of our reliance on the combustion engine.
"The lives lost due to technological advancements have been dwarfed by the lives saved."
How does a purely rational mind feel about the inevitable overpopulation issue that will occur if more and more lives are saved and/or extended by technology? How many people lead very low-quality lives today due to overpopulation? Would a purely rational mind make decisions to limit population rather than help it explode?
Does a purely rational mind value life less or more? Are humans MORE expendable to a purely rational mind so long as it is 51% beneficial, or is there a rational reason to value each individual life more passionately?
I feel that we tend to associate pure rationality with a rather sci-fi notion of robotic intelligence. In other words, pure rationality is cold and mathematical and would consider compassion a weakness. While this may be true, a purely rational mind may have other reasons than compassion to value individual life MORE rather than less, even when measured against a potential benefit.
The questions seem straightforward at first, but is it possible that we lean toward easy answers that may or may not be highly influenced by very irrational cultural assumptions?
How does a purely rational mind feel about the inevitable overpopulation issue that will occur if more and more lives are saved and/or extended by technology?
Overpopulation isn't caused by technology. It's caused by having too many kids, and not using resources well enough. Technology has drastically increased our efficiency with resources, allowing us to easily grow enough to feed everyone.
Does a purely rational mind value life less or more?
The utility function is not up for grabs. Specifying that a mind is rational does not specify how much it values life.
I was answering based on the idea that these are altruistic people. I really don't know what would happen in a society full of rational egoists.
In other words, pure rationality is cold and mathematical and would consider compassion a weakness. While this may be true...
Does a purely rational mind value life less or more?
Specifying that a mind is rational does not specify how much it values life.
That is correct, but it is also probably the case that a rational mind would propagate better from its other values to the value of its own life. For instance, if your arm is trapped under a boulder, a human as they are would either be unable to cut off their own arm or would do it at a suboptimal time (too late), compared to an agent that can propagate everything it values in the world into the value of its life, and have that huge value win against the pain. Furthermore, it would correctly propagate the later pain (assuming it knows it will eventually have to cut off its own arm) into the decision now. So it would act as if it valued life more and pain less.
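To make the propagation point concrete, here is the boulder case as a toy decision problem; the utilities and probabilities are invented purely for illustration, not meant to be realistic:

```python
# The boulder example as a toy decision problem. Utilities are invented
# solely to illustrate how propagating future consequences into the present
# decision changes the answer.

U_PAIN_OF_CUTTING  = -100     # immediate pain/horror of severing the arm
U_LIFE_AND_GOALS   = 10_000   # everything the agent values that requires surviving
P_SURVIVE_IF_CUT_NOW  = 0.95  # assumed survival probability if acting promptly
P_SURVIVE_IF_CUT_LATE = 0.50  # assumed survival probability after a long delay

cut_now  = U_PAIN_OF_CUTTING + P_SURVIVE_IF_CUT_NOW  * U_LIFE_AND_GOALS
cut_late = U_PAIN_OF_CUTTING + P_SURVIVE_IF_CUT_LATE * U_LIFE_AND_GOALS

print(cut_now, cut_late)  # 9400.0 vs 4900.0: the pain term is dwarfed either way,
                          # but delaying throws away half the expected value
```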
Well, 'the value of life', lacking specifiers, should be able to refer to the total value of life (as derived from other goals, plus intrinsic value if any); my post is rather explicit in that it speaks of the total. Of course you can take 'value life' to mean only the intrinsic value of life, but it is pretty clear that is not what the OP meant if we assume the OP is not entirely stupid. He is correct in the sense that the full value of life is affected by rationality. A rational person should only commit suicide in a very few circumstances where it truly results in maximum utility, given the other values left unaccomplished if you are dead (e.g. so that your children can cook and eat your body, or, as in "28 Days Later", killing yourself in the ten seconds after infection to avoid becoming a hazard, that kind of thing). It can be said that an irrational person can't value life correctly (due to incorrect propagation).
Well, one has to distinguish between a purely rational being that has the full set of possible propositions with correct probabilities assigned to them, and a bounded agent that has only a partial set of possible propositions, generated gradually by exploring outward from some of the most probable ones; the latter can't even do Bayesian statistics properly, due to the complex non-linear feedback through the proposition-generation process, and is not omniscient enough to foresee things as well as your argument requires.
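To illustrate the distinction (this is my own toy construction, with invented likelihoods): a reasoner with the full proposition set updates toward the right answer, while a bounded agent that never generated the right proposition renormalizes over the wrong ones and ends up confidently mistaken:

```python
# Contrast between an agent with the full hypothesis set and a bounded agent
# that has only generated a subset of the hypotheses. Likelihoods are invented
# for illustration.

def posterior(priors, likelihoods):
    """Normalized Bayesian update over whatever hypotheses the agent has."""
    joint = {h: priors[h] * likelihoods[h] for h in priors}
    z = sum(joint.values())
    return {h: p / z for h, p in joint.items()}

likelihood_of_evidence = {"H1": 0.10, "H2": 0.20, "H3": 0.90}  # H3 fits the data best

# Full agent: uniform prior over all three hypotheses.
full = posterior({"H1": 1/3, "H2": 1/3, "H3": 1/3}, likelihood_of_evidence)

# Bounded agent: never generated H3, so it renormalizes over H1 and H2 only.
bounded = posterior({"H1": 1/2, "H2": 1/2},
                    {"H1": 0.10, "H2": 0.20})

print(full)     # H3 dominates (posterior ~0.75), as it should
print(bounded)  # H2 looks like a 2:1 favorite, because the right answer isn't in the set
```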
What would our world be today if humans had started off with a purely rational intelligence?
It seems as though a dominant aspect of rationality deals with risk management. For example, an irrational person might feel that the thrill of riding a zip line for a few seconds is well worth the risk of injuring themselves, contracting a flesh-eating bug, and losing a leg along with both hands (sorry, but that story has been freaking me out the past few days; I in no way mean to trivialize the woman's situation). A purely rational person would (I'm making an assumption here because I am certainly not a rational person) recognize the high probability of something going wrong and determine that the risks were too steep when compared with the minimal gain of a short-lived thrill.
But how does a purely rational intelligence (even an intelligence at the current human level, with a limited ability to analyze probabilities) impact the advancement of technology? As an example, would humanity have moved forward with the combustion engine and motor vehicles as purely rational beings? History shows us that humans tend to leap headlong into technological advancements with very little thought regarding the potential damage they may cause. Every technological advancement of note has had negative impacts whose probability-weighted costs might have been deemed too steep from a purely rational perspective.
Would pure rationality have severely limited the advancement of technology?
Taken further, would a purely rational intelligence far beyond human levels be so burdened by risk probabilities as to be rendered paralyzed… suspended in a state of infinite stagnation? Or would a purely rational mind simply ensure that more cautious advancement took place (which would certainly have slowed things down)?
Many of humanity's great success stories began as highly irrational ventures with extremely low chances of a positive result. Humans, being irrational and not all that intelligent, are very capable of ignoring risk or simply not recognizing the level of risk inherent in any given situation. But to what extent would a purely rational approach limit a being's willingness to take action?
*I apologize if these questions have already been asked and/or discussed at length. I did do some searches but did not find anything that seemed specifically related to this line of thought.*