You're asking, then, whether I would trade off the satisfaction of having other tribe-members help me punish the one who hurt me, against the probability of being hurt? In the specific circumstances you have set up, I absolutely would. The reason is that, if there's no punishment for murder, there is presumably likewise no punishment for retaliation in kind. So I don't need my tribe to enact any punishment, I can do it myself, gaining even more satisfaction. This really seems like the best of both worlds: You get to take personal vengeance on anyone who wrongs you, with no dilution through 'official' coalitions; while the probability of actually being wronged goes down.
What jobs are safe in an automated future?
The trends are clear: more and more work that was previously done by humans is being shifted to automated systems. Factories with thousands of workers have been replaced by highly efficient facilities containing industrial robots and a few human operators, bank tellers by online banking, and most parts of any logistics chain by various kinds of automatic sorting, moving, and shipping mechanisms. Offices are run by fewer and fewer people as we handle and process fewer and fewer physical documents. In every area, fewer people than before are needed to do the same work as before. The world is becoming automated.
These developments are not only here to stay - they are accelerating. Most of what is done by humans today could easily be done by computers in the near future. I would personally guess that most professions existing today could be replaced by affordable automated equivalents within 30 years. My question is: what jobs will be the last ones to go, and why?
Education is often pointed out as a safe bet for ensuring you are needed in the future, and while that is true, it's not the whole story. First, in basically all parts of the world the fraction of the population with an academic degree is growing fast, so higher education will probably not be as good a differentiator in the future. Second, while degrees in the fields that will be hot in the future will indeed be hot then, there is no guarantee that the degrees hot today will be of any use later on. Third, there is a misconception that highly theoretical tasks done by skilled experts will be among the last to go. But due to their theoretical nature, such tasks are fairly easy to represent virtually.
Of course, as we progress technologically, new doors are opening, and the hottest job of 2030 might not even exist today. Any suggestions?
Yes, of course, you are free to do it yourself, but it is assumed that on the large scale, even including retaliations (which are crimes), crime rates would go down. And in a society with no punishments, would it be rational to do that? (Given that the friends or relatives of that guy could come after you for coming after him for coming after you, and so on?)
The punishment dilemma
Here is a thought experiment for you. It rests on some bold assumptions, which may be regarded as unrealistic. I am aware of that; the purpose of this query is not to propose truths about society in general, but to isolate certain characteristics of preferences regarding the societal institutions of law enforcement and punishment.
Assume that there existed a highly trustworthy model showing beyond reasonable doubt that crime rates anti-correlated with the harshness of punishments imposed on criminals. So basically: if policies changed towards shorter sentences, lower fines, and lighter penalties, the number of criminal acts decreased (in every category).
Further assume that this was empirically tested, and each time penalties went down, fewer and fewer crimes were committed. But the dependence was not linear, so if we got rid of punishments altogether, there would still be murders, rapes, robberies, etc. But crime rates would be minimized in that case. To summarize: we would know that crime rates would be at a minimum if there were no consequences at all.
With no penalties, somebody could simply kill or rape your mother, sister, or child, move in next door, and live a nice and happy life in front of your very eyes, without society doing anything about it! Bear in mind that this is the situation where the probability of your mother, sister, or child being abused, robbed, or killed is minimized!
Would it be reasonable to go through with this demobilization, which would spare lots of innocent people the pain of getting robbed and abused, given that those criminals still out there could do anything they want and go free?
Good! Now I have two recently published, very interesting books to read: Kahneman's, and Michael Nielsen's Reinventing Discovery. (I'll submit a review of M.N.'s as soon as I've read it.)
This does not match how I observe my own brain to work. I see the guaranteed million vs the 1% risk of nothing and think "Oh no, what if I lose. I'd sure feel bad about my choice then.". Of course, thinking more deeply I realize that the 10% chance of an extra $4 million outweighs that downside, but it is not as obvious to my brain even though I value it more. If I were less careful, less intelligent, or less introspective, I feel that I would have 'gone with my instinct' and chosen 1A. (It is probably a good thing I am slightly tired right now, since this process happened more slowly than usual, so I think that I got a better look at it happening.)
You see, the reason why it is discussed as an "effect" or "paradox" is that even if your risk aversion ("oh no, what if I lose") is taken into account, it is strange to take 1A together with 2B. A risk-averse person might "correctly" choose 1A, but for that person to be consistent in their choices, they have to choose 2A. Not 1A and 2B together.
My suggestion is that the slight increase in complexity in 1A adds to your risk (external risk+internal risk) and therefore within your given risk profile makes 1A and 2B a consistent combination.
Test method for the hypothesis: use two samples of people, those who have reason to trust their mathematical ability more (say, undergraduate math majors) and those who don't (the general undergrad population). If your hypothesis is correct, then the math majors should display less of an irrationality in this context. It would be hard to distinguish that from them simply being more rational in general, so this should be controlled for in some way, using other tests of rationality that aren't as mathematical (such as, say, vulnerability to the conjunction fallacy in story form).
This seems worth testing. I hypothesize that if one does so and controls for any increase in general rationality, one won't get a difference between the math majors and the general undergraduates. Moreover, I suspect, but am much less certain, that even without controlling for any general increase in rationality, the math majors will show about as much of an Allais effect as the other undergrads.
One way of testing: have two questions just like in the Allais experiment. Make the experiment in five different versions where choice 1B has increasing complexity but the same expected utility. See if 1B-aversion correlates with increasing complexity.
Yep. We shouldn't use "rational" when we merely mean "correct", "optimal", "winning", or "successful".
Rationality is a collection of techniques for improvement of beliefs and actions. It is not a destination.
'Rational' as in rational agent is a pretty well defined concept in rational choice theory/game theory/decision theory. That is what I refer to when I use the word.
Rational to distrust your own rationality?
A number of experiments over the years have shown that expected utility theory (EUT) fails to predict the actual observed behavior of people in decision situations. Take, for example, the Allais paradox. Whether or not an average human being can be considered a rational agent has been under debate for a long time, and critics of EUT point out the inconsistency between theory and observation and conclude that the theory is flawed. I will begin from the Allais paradox, but the aim of this discussion is actually to reach something much broader: asking whether distrust in one's own ability to reason should itself be included in a chain of reasoning.
From Wikipedia:
The Allais paradox arises when comparing participants' choices in two different experiments, each of which consists of a choice between two gambles, A and B. The payoffs for each gamble in each experiment are as follows:
Experiment 1:
  Gamble 1A: $1 million with 100% chance
  Gamble 1B: $1 million with 89% chance; nothing with 1% chance; $5 million with 10% chance

Experiment 2:
  Gamble 2A: nothing with 89% chance; $1 million with 11% chance
  Gamble 2B: nothing with 90% chance; $5 million with 10% chance

Several studies involving hypothetical and small monetary payoffs, and recently involving health outcomes, have supported the assertion that when presented with a choice between 1A and 1B, most people would choose 1A. Likewise, when presented with a choice between 2A and 2B, most people would choose 2B. Allais further asserted that it was reasonable to choose 1A alone or 2B alone.
However, that the same person (who chose 1A alone or 2B alone) would choose both 1A and 2B together is inconsistent with expected utility theory. According to expected utility theory, the person should choose either 1A and 2A or 1B and 2B.
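To see the numbers behind this, here is a minimal sketch in Python (the payoffs and probabilities are exactly those of the four gambles above). It shows that 1B and 2B have the higher raw expected value, and that both experiments embody the very same trade: giving up an 11% shot at $1 million for a 10% shot at $5 million.

```python
# Expected-value check for the Allais gambles described above.
# Each gamble is a list of (payoff_in_dollars, probability) pairs.

def expected_value(gamble):
    return sum(payoff * prob for payoff, prob in gamble)

g1a = [(1_000_000, 1.00)]
g1b = [(1_000_000, 0.89), (0, 0.01), (5_000_000, 0.10)]
g2a = [(0, 0.89), (1_000_000, 0.11)]
g2b = [(0, 0.90), (5_000_000, 0.10)]

print(round(expected_value(g1a)))  # 1000000
print(round(expected_value(g1b)))  # 1390000
print(round(expected_value(g2a)))  # 110000
print(round(expected_value(g2b)))  # 500000

# Both experiments differ by the same $390,000 in expected value,
# which is why EUT says your choices in E1 and E2 must agree.
gap_e1 = expected_value(g1b) - expected_value(g1a)
gap_e2 = expected_value(g2b) - expected_value(g2a)
print(abs(gap_e1 - gap_e2) < 1e-6)  # True
```

So a pure expected-value maximizer takes 1B and 2B; the puzzling empirical pattern is 1A together with 2B.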
I would say that there is a difference between E1 and E2 that EUT does not take into account: in E1, understanding 1B is a more complex computational task than understanding 1A, while in E2, understanding 2A and understanding 2B are about equally hard. There could therefore exist a bunch of semi-rational people out there who have difficulties understanding the details of 1B and therefore assign a certain level of uncertainty to their own "calculations". 1A involves no calculations; they are sure to receive $1,000,000! This uncertainty then makes it rational to choose the alternative they are more comfortable with. In E2, by contrast, the task is simpler, almost a no-brainer.
Now, if by "rational agent" we mean any information-processing entity capable of making choices, human or AI, etc., and if we consider more complex cases, it is reasonable to assume that this uncertainty grows with the complexity of the computational task. At some point it should then become rational to make the "irrational" set of choices, once the agent's uncertainty in its own ability to make calculated choices is weighed in!
Decision models usually take into account external factors of uncertainty and risk when dealing with rational choices: expected utility, risk aversion, etc. My question is: shouldn't a rational agent also take into account an internal (introspective) analysis of its own reasoning when making choices? (Humans may well do so, and that would explain the Allais paradox as an effect of rational behavior.)
Basically: could decision models that include this kind of introspective analysis do better at (1) explaining human behavior and (2) creating AIs?
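As a toy illustration of the proposal (my own sketch, not an established theory): let the agent subtract from each gamble's expected value an internal-uncertainty penalty that grows with the gamble's complexity, here crudely measured as the number of outcomes beyond the first. The penalty constant LAMBDA is an arbitrary assumed value chosen only to make the point.

```python
# Toy model of the internal-uncertainty idea above. The complexity measure
# (outcomes beyond the first) and the constant LAMBDA are both assumptions
# made for illustration, not part of any standard decision theory.

LAMBDA = 200_000  # assumed "cost" per extra outcome the agent must reason about

def penalized_value(gamble, lam=LAMBDA):
    ev = sum(payoff * prob for payoff, prob in gamble)
    complexity = len(gamble) - 1  # a sure thing carries no internal uncertainty
    return ev - lam * complexity

g1a = [(1_000_000, 1.00)]
g1b = [(1_000_000, 0.89), (0, 0.01), (5_000_000, 0.10)]
g2a = [(0, 0.89), (1_000_000, 0.11)]
g2b = [(0, 0.90), (5_000_000, 0.10)]

# Experiment 1: 1A now beats 1B (1,000,000 vs 1,390,000 - 2*LAMBDA = 990,000).
print(penalized_value(g1a) > penalized_value(g1b))  # True
# Experiment 2: both gambles have equal complexity, so 2B still wins.
print(penalized_value(g2b) > penalized_value(g2a))  # True
```

With this one added term, the empirically common 1A-plus-2B pattern comes out as the consistent choice of a single agent, which is exactly the claim the post is making.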
Why interfere instead of letting your kid develop his own ways? Answering "How are you?" in detail sounds to me like a fantastic trait of his personality.
When I was 7 years old, I stopped calling my parents mom and dad and switched over to calling them by their names. I just couldn't understand the logic of other people calling them one thing and me calling them something else. Happily, nobody tried to "correct" me according to social rules, and even today it wouldn't cross my mind to call my mother 'mother'!
Downvoted for groundless assumptions and for failing to google the basics. There are more, not fewer physical documents produced, because it is easier to produce them, the number of bank tellers has actually increased, etc.
Name three.
1. Maybe I should clarify: are the tasks previously done by bank tellers becoming automated? Yes. The fact that the number of bank tellers has increased does not invalidate my statement. If there were no internet banking or ATMs, the increase would be much larger, right? So it's trivial to see that the number of bank tellers can increase at the same time as bank teller jobs are lost to automated systems.
2. I'll give you an extreme one. I am a few steps away from earning a degree in theoretical physics, specializing in quantum information theory. Theoretical quantum information theory is nothing but symbol manipulation within a framework of existing theorems of linear algebra. With enough resources, pretty much all of the research could be done by computers alone: algorithms could in principle put mathematical statements together, other algorithms could test the meaningfulness of the output, and so on. But that's a discussion interesting enough to have its own thread. I just mean that theoretical work is not immune to automation.
Organize all the known mathematics and physics of 1915 in a computer running the right algorithms, then ask it: "What is gravity?" Would it output the general theory of relativity? I think so.