I could discuss the large-scale effects of piracy (copyright infringement) for days, from a game-theoretic, utilitarian, ethical, or any other perspective. I have a set of views and suggestions for topics that could be interesting to break down and address, but instead of writing a long post covering many different topics, I'll start with the first one on my mind.
Just a thought:
For a subset of activities, you can map the question of the ethical status of illegally downloading a piece of software p (your preferred choice) onto the question of whether a certain kind of element a exists in a set S, which I'll call the set of alternatives (assuming the risk of getting caught is very small).
Let's say that you for some reason need a graphics editor and your preferred choice is Photoshop CS5. You could either:
- Buy it ($650 on Amazon).
- Download it illegally (free).
If you choose to illegally download a copy of the software, some people would compare that to stealing (certainly the folks at Adobe). Is that really fair to say? In my opinion, it depends on whether or not you would have bought a copy in the absence of the 'download' alternative. Your preferred choice is indeed Photoshop CS5, but that is one choice among many; the rest lie in the set of alternatives S. Most users with illegal copies wouldn't pay $650 when there are free alternatives. Those alternatives may be much less attractive, with fewer features, but many of them would still do the job.
So if there exists an a in S such that you would prefer a over p in the absence of the download option, then in a game between you and Adobe, the choice a would not be Pareto optimal: your utility is maximized by choosing p (downloading Photoshop), while Adobe's utility is left unchanged. In other words, downloading maximizes total utility (ignoring potential side effects, such as effects on the overall attitude towards piracy and so on).
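To make the payoff comparison concrete, here is a toy sketch in Python. All the utility numbers are invented for illustration; the only structural assumptions are the ones in the argument above, namely that you value Photoshop more than the free alternative a and that you would never have paid the $650 price.

```python
# Toy payoffs for the three options, written as (your_utility, adobe_utility).
# All numbers are invented; only their ordering matters for the argument.
outcomes = {
    "buy p":      (35, 65),   # Photoshop's value to you minus the price; Adobe gains the sale
    "use a":      (60, 0),    # a free alternative: does the job, but less well
    "download p": (100, 0),   # full value to you; Adobe sells nothing either way
}

# Relative to "use a", "download p" raises your utility while leaving
# Adobe's unchanged -- so "use a" is not Pareto optimal, as argued above.
you_a, adobe_a = outcomes["use a"]
you_d, adobe_d = outcomes["download p"]
assert you_d > you_a and adobe_d == adobe_a
```

The whole argument hinges on the assumption encoded in the "download p" row: that the purchase would never have happened, so Adobe's payoff is the same whether you pirate p or use a.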
Today there exists an S for almost anything.
What's your opinion on this with regard to utility maximization (the utility of society)? Can we really break it down like this, looking at the individual case?
The trends are clear: more and more work that was previously done by humans is being shifted to automated systems. Factories with thousands of workers have been replaced by highly efficient facilities containing industrial robots and a few human operators; bank tellers by online banking; most parts of any logistics chain by automatic sorting, moving, and shipping mechanisms. Offices are run by fewer and fewer people as we handle and process fewer and fewer physical documents. In every area, fewer people than before are needed to do the same work. The world is becoming automated.
These developments are not only here to stay - they are accelerating. Most of what is done by humans today could easily be done by computers in the near future. I would personally guess that most professions existing today could be replaced by affordable automated equivalents within 30 years. My question is: What jobs will be the last ones to go, and why?
Education is often pointed out as a safe bet for staying needed in the future, and while that is true, it's not the whole story. First, in basically every part of the world, the fraction of the population with an academic degree is growing fast; higher education will probably not be as good a differentiator in the future. Second, while degrees in the fields that turn out to be hot will indeed be valuable then, there is no guarantee that the degrees hot today will be of any use later on. Third, there is a misconception that highly theoretical tasks done by skilled experts will be among the last to go; but precisely because of their theoretical nature, such tasks are fairly easy to represent virtually.
Of course, as we progress technologically, new doors open, and the hottest job of 2030 might not even exist today. Any suggestions?
Here is a thought experiment for you. It involves some bold assumptions, which may be regarded as unrealistic. I am aware of that; the purpose of this query is not to propose truths about society in general, but to isolate certain characteristics of preferences regarding the societal institutions of law enforcement and punishment.
Assume that there existed a highly trustworthy model showing beyond reasonable doubt that crime rates anti-correlated with the harshness of punishments imposed on criminals. So, basically: if policies changed towards shorter sentences, lower fines, and lighter penalties, the number of criminal acts decreased (in every category).
Further assume that this was tested empirically, and each time penalties went down, fewer crimes were committed. The dependence was not linear, though: if we got rid of punishments altogether, there would still be murders, rapes, robberies, etc. But crime rates would be minimized in that case. To summarize: we would know that crime rates were at a minimum when there were no consequences at all.
With no penalties, somebody could simply kill or rape your mother, sister or child, move in next door, and live a nice and happy life in front of your very eyes, without society doing anything about it! Bear in mind that this is the situation in which the probability of your mother, sister or child being abused, robbed or killed is minimized!
Would it be reasonable to go through with this demobilization, which would spare lots of innocent people all the pain of being robbed and abused, given that the criminals still out there could do anything they want and go free?
There are a number of experiments that have, over the years, shown that expected utility theory (EUT) fails to predict the actual observed behavior of people in decision situations. Take, for example, the Allais paradox. Whether or not an average human being can be considered a rational agent has long been under debate, and critics of EUT point out the inconsistency between theory and observation and conclude that the theory is flawed. I will begin from the Allais paradox, but the aim of this discussion is actually to reach something much broader: whether distrust in one's own ability to reason should itself be included in a chain of reasoning.
The Allais paradox arises when comparing participants' choices in two different experiments, each of which consists of a choice between two gambles, A and B. The payoffs for each gamble in each experiment are as follows:
Experiment 1:
- Gamble 1A: $1 million with 100% chance.
- Gamble 1B: $1 million with 89% chance, nothing with 1% chance, $5 million with 10% chance.

Experiment 2:
- Gamble 2A: nothing with 89% chance, $1 million with 11% chance.
- Gamble 2B: nothing with 90% chance, $5 million with 10% chance.
Several studies involving hypothetical and small monetary payoffs, and recently involving health outcomes, have supported the assertion that when presented with a choice between 1A and 1B, most people would choose 1A. Likewise, when presented with a choice between 2A and 2B, most people would choose 2B. Allais further asserted that it was reasonable to choose 1A alone or 2B alone.
However, that the same person (who chose 1A alone or 2B alone) would choose both 1A and 2B together is inconsistent with expected utility theory. According to expected utility theory, the person should choose either 1A and 2A or 1B and 2B.
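To see the structure numerically, here is a small Python sketch that computes the expected value of each gamble, using raw dollar amounts as a stand-in for utility (i.e. a risk-neutral agent; with a different utility function the numbers change, but the consistency requirement does not).

```python
def expected_value(gamble):
    """gamble: list of (payoff, probability) pairs."""
    return sum(payoff * prob for payoff, prob in gamble)

M = 1_000_000
gambles = {
    "1A": [(1 * M, 1.00)],
    "1B": [(1 * M, 0.89), (0, 0.01), (5 * M, 0.10)],
    "2A": [(1 * M, 0.11), (0, 0.89)],
    "2B": [(5 * M, 0.10), (0, 0.90)],
}

for name, g in gambles.items():
    print(f"{name}: {expected_value(g):,.0f}")

# 1B beats 1A by $390,000 and 2B beats 2A by exactly the same margin:
# the two experiments differ only by a common consequence (an 89% chance
# of $1 million vs. an 89% chance of nothing), which is why EUT demands
# the same choice pattern in both.
```

The interesting point is the identical margin: any expected-utility maximizer who prefers 1A over 1B must, by the same calculation, prefer 2A over 2B, yet most people choose 1A and 2B.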
I would say that there is a difference between E1 and E2 that EUT does not take into account: in E1, understanding 1B is a more complex computational task than understanding 1A, while in E2 the two gambles, 2A and 2B, are more equal in complexity. There could therefore exist a bunch of semi-rational people out there who have difficulty understanding the details of 1B and therefore assign a certain level of uncertainty to their own "calculations". 1A involves no calculations; they are sure to receive $1,000,000! This uncertainty then makes it rational to choose the alternative they are more comfortable with. In E2, by contrast, the task is simpler, almost a no-brainer.
Now, if by "rational agent" we mean any information-processing entity capable of making choices (human, AI, etc.), and if we consider more complex cases, it is reasonable to assume that this uncertainty grows with the complexity of the computational task. At some point, then, it should become rational to make the "irrational" set of choices once the agent's uncertainty in its own ability to make calculated choices is weighed in!
Usually, decision models take into account external factors of uncertainty and risk when dealing with rational choices: expected utility, risk aversion, etc. My question is: shouldn't a rational agent also take into account an internal (introspective) analysis of its own reasoning when making choices? (Humans may well do so, and that would explain the Allais paradox as an effect of rational behavior.)
Basically: could decision models that include this kind of introspective analysis do better at 1. explaining human behavior, and 2. creating AIs?
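Here is one toy way such an introspective rule could look, purely as an illustration. The agent penalizes each gamble by an uncertainty cost proportional to its computational complexity, crudely measured here as the number of extra outcomes to reason about; the penalty weight is a made-up number chosen only to show the effect.

```python
def expected_value(gamble):
    """gamble: list of (payoff, probability) pairs."""
    return sum(payoff * prob for payoff, prob in gamble)

def introspective_value(gamble, lam):
    # Subtract an uncertainty cost that grows with the complexity of
    # the gamble, here crudely measured as its number of extra outcomes.
    return expected_value(gamble) - lam * (len(gamble) - 1)

M = 1_000_000
g1A = [(1 * M, 1.00)]
g1B = [(1 * M, 0.89), (0, 0.01), (5 * M, 0.10)]
g2A = [(1 * M, 0.11), (0, 0.89)]
g2B = [(5 * M, 0.10), (0, 0.90)]

LAM = 250_000  # hypothetical per-extra-outcome uncertainty cost

# With this penalty the agent picks 1A over 1B (the sure thing needs
# no calculation), yet still picks 2B over 2A (both gambles are equally
# complex there) -- reproducing the Allais choice pattern.
assert introspective_value(g1A, LAM) > introspective_value(g1B, LAM)
assert introspective_value(g2B, LAM) > introspective_value(g2A, LAM)
```

Plain expected value prefers 1B and 2B; the introspective penalty flips only the first choice, because 1A and 1B differ in complexity while 2A and 2B do not. This is just one crude complexity measure; the qualitative point survives any penalty that grows with the difficulty of evaluating a gamble.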
I formulated a little problem. Care to solve it?
You are given the following information:
Your task is to hide a coin in your house (or any familiar finite environment).
After you've hidden the coin, your memory will be erased and restored to the state it was in just before you received this information.
Then you will be told about the task (i.e., that you have hidden a coin) and asked to try to find the coin.
If you find it, you lose; but you will be convinced that if you find it, you win.
So now you're faced with finding an optimal strategy to minimize the probability of finding the coin within a finite time-frame.
Bear in mind that any chain of reasoning leading up to a decision of location can be generated by you while trying to find the coin.
You might come to the conclusion that there can't exist an optimal strategy other than randomizing. But if you randomize, you risk placing the coin at a location where it can easily be found, like on a table or on the floor. You could eliminate those risky locations by excluding them as alternatives in your randomization process, but that would mean including a chain of reasoning!
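The "randomize, but prune the easy spots" strategy can be sketched in a few lines of Python; the locations and their hardness scores are invented purely for illustration.

```python
import random

# Candidate hiding spots with a made-up "hardness" score: how unlikely
# the spot is to be searched by accident.
locations = {
    "on the table": 1,
    "on the floor": 1,
    "inside a book": 5,
    "taped under a drawer": 7,
    "in the flour jar": 8,
}

THRESHOLD = 4  # prune anything too easy to stumble upon

candidates = [spot for spot, hardness in locations.items()
              if hardness >= THRESHOLD]
hiding_spot = random.choice(candidates)

# The catch from the text: the pruning rule above is itself a chain of
# reasoning, so the searching "you" can regenerate it and restrict the
# search to exactly the surviving candidates.
```

This makes the self-reference concrete: whatever deterministic filter the hider applies, the seeker can rerun it, so the only advantage left is the residual randomness over the filtered set.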