I agree with Nathan, but I think 1 or 2 per week would be ideal. What do people think about moving to a system of laws and social norms focused on rationally minimizing our odds of death or harm, rather than on maintaining certain principles?
To take an example that gets extreme negative reactions: human societies don't force random sets of people involuntarily into medical experiments that could adversely impact their health, even though every one of us might have our odds of good health outcomes improved if we did have such a policy. Does that make us currently irrational for not pursuing such a policy? I think it does. If each individual human would, odds-wise, be better off healthwise with compulsory drafting into medical experiments than without it, then I think it's irrational for human societies not to do this. And I think this general principle applies widely to other areas of rule-making and social policy.
Is any expert in the fields of applied ethics and social policy studying this, or has anyone done so in the past (no cheap throwaway lines about the Nazis or Tuskegee, please)? Directions to links and publications are welcome.
I'm especially interested in responses from Anders Sandberg and TGGP. Contributors are welcome to respond in this thread on this topic anonymously for obvious reasons.
Maintaining certain principles? Currently our legal system is based on maximizing corporate profit with a bit of a front saying "justice", so I'd say what you propose is a much better alternative. We definitely do a shitty job with criminals in the USA, as the system is set up to get people to repeat behaviors rather than change them. Social norms aren't exactly controllable as far as I know; unless you are also suggesting a mass brainwashing campaign armed with modern psychology, I don't see how we could change them much. Then again, it would also be insanely difficult to change the legal system. If we make a lunar or Martian colony, then maybe?
If such experiments could be carried out as effectively and efficiently as possible, then I would agree that they shouldn't be disallowed. After all, if at some point we are going to be uploading minds, we'd better get started replacing bits of people's brains that they've lost.
Where I disagree is with the compulsory drafting of otherwise healthy, happy individuals. Such a draft would spread serious terror throughout the populace. Depending on how extreme the experiments' effects are (read: death), it may produce serious anti-science activism backed by personal tragedy.
There are plenty of people in the world who are living on the streets, or who are otherwise suffering terribly from lost limbs, missing organs, etc., and would be happy to get a hot meal and a place to sleep in exchange for some experimental procedure. There are enough maimed people that we don't need to inflict additional damage.
I admit that there may be some areas where we need 100% healthy individuals, but it is more rational to move a quadriplegic into a robot body than someone with control of their limbs. Then again, by focusing on minorities, even a large number of them, we may skew our findings somewhat.
(edit: haha whoops replying to something from 2007)
That would be a utilitarian legal system, trying to maximize utility/happiness or minimize pain/harm. I'm not an expert in this field, but there is of course a big literature of comments on and criticisms of utilitarianism. Whether that is evidence enough that it is a bad idea is harder to say. Clearly it would not be feasible to implement something like this in Western democratic countries today, both because of the emphasis on human rights and also (and this is probably the stronger reason) because many people have moral intuitions that it is wrong to act like this.
That of course leads into the whole validity-of-moral-intuition issue, which some of my Oxford colleagues are far better placed to explain (mostly because they are real ethicists, unlike me). But basically, consequentialists like Peter Singer suspect that moral intuitions are irrational; moderates like Steve Clarke argue that they can contain relevant information (though we had better rationally untangle what it is); and many conservatives regard them as a primary source of moral sentiment we really shouldn't mess with. I guess this is where we get back to overcoming bias again: many moral intuitions look a lot like biases, but whether they are bad biases we ought to get rid of is tricky to tell.
My personal view is that this kind of drafting is indeed wrong, and one can argue it from a Kantian perspective (not using other people as tools), from a rights perspective (the right to life and liberty trumps the social good; the situation is not comparable with the possibly valid restrictions during epidemics, since my continued good health does not hurt the rights of anybody else), and from a risk perspective (the government might with equal justification extend this drafting to other areas, like moving people to undesirable regions or into more dangerous experiments, and there might be a serious risk of public-choice bureaucracy and corruption). The ease with which I brought up a long list of counterarguments of course shows my individualist biases. But it is probably easier to get people to join this kind of trial voluntarily simply by telling them it is heroic, for science/society/progress/children - or just by paying them well.
I'm personally interested in how we could do the opposite: spontaneous, voluntary and informal epidemiology that uses modern information technology to gather data on a variety of things (habits, eating, drugs taken) and then compiles it into databases that enable datamining. A kind of wikiepidemiology or flickrepidemiology, so to say. Such data would be far messier and harder to interpret than nice clean studies run by proper scientists, but with good enough automatic data acquisition and enough people, valuable information ought to be gathered anyway. However, we need to figure out how to handle the many biases that will creep into this kind of experiment. Another job for Overcoming Bias!
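To make this concrete, here is a minimal sketch in Python of the naive pooled analysis such a system might start from; the habit, outcome and data are all invented for illustration, and no existing system is being described:

```python
from collections import defaultdict

# Hypothetical self-reported records gathered automatically from volunteers:
# (has_habit, has_outcome), e.g. ("drinks coffee daily", "reports poor sleep").
reports = [
    (True, True), (True, False), (True, True), (False, False),
    (False, True), (True, True), (False, False), (False, True),
]

# Tally outcome frequency within each habit group.
tallies = defaultdict(lambda: [0, 0])  # habit -> [outcome count, total count]
for habit, outcome in reports:
    tallies[habit][0] += int(outcome)
    tallies[habit][1] += 1

for habit, (hits, total) in sorted(tallies.items()):
    print(f"habit={habit}: outcome rate {hits / total:.2f} (n={total})")

# Caveat, as noted above: volunteers are self-selected and self-reporting,
# so these raw rates mix any real effect with selection and reporting
# biases that a serious analysis would have to model or correct for.
```

The interesting problems are everything this sketch leaves out: automatic data acquisition, deduplication, and above all correcting for the biases of a self-selected sample.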
Anders, Thanks for the really interesting response. Perhaps I should be pitching this idea to leading utilitarians and finding out the groundwork they've already laid in this area.
I do think many "moral intuitions" fall neatly under already-articulated biases, such as Eww bias.
One thing I'm not sure if you picked up on from my post. I don't think randomly drafting people into medical experiments to benefit human health/medical knowledge would just help society. I think it helps all of us individuals at risk of being so drafted, provided it's structured in such a way that our risk of disease and death ends up net lower than if human medical experimentation wasn't being done in this way.
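To state that condition explicitly (the notation here is mine, purely illustrative): let q be each person's probability of being drafted, h the expected added harm conditional on being drafted, r the baseline lifetime risk of disease and death, and d the reduction in that baseline risk from the accelerated research. The claim is that the policy benefits every individual ex ante exactly when

```latex
(r - d) + q\,h \;<\; r \quad\Longleftrightarrow\quad q\,h \;<\; d
```

that is, when the small expected harm of possibly being drafted is outweighed by the risk reduction everyone enjoys.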
I'd think economists might look at our humoring of various "moral intuitions"/biases as a sort of luxury spending, or waste. There also might be a cost in terms of human life, health, etc. that could legitimately be described as morally horrific.
It goes to the problem of how people often think shooting and killing 3 people is much worse than fraud, corruption, or waste that wipes out hundreds of millions of dollars of wealth, although objectively that reduction in global wealth might mean a much greater negative impact on human life and health.
I am not convinced that a utilitarian legal system is much different than the systems of modern Western societies. Most laws in such societies are passed on grounds of ethics, efficiency, or a combination of both. Many people assume that laws passed on ethical grounds are inefficient by utilitarian standards, but I don't think that's necessarily true.
Consider murder laws. These are typically justified with a moral argument: life is sacred. But when a person is killed by another, the cost is not just some abstract violation of moral principle--since it is people that hold morals, and since those morals (usually) specify that loss of life is a bad thing, murders impose costs on members of society. These costs could conceivably be measured ("How much would you be willing to pay to revive person X?") and should be part of any cost-benefit calculation aimed at judging the value of a "moral" law. Of course, murder laws yield obvious gains in economic efficiency as well (disincentive to kill->more security->more commerce, peace of mind etc.), but I am considering only the moral side of things to make my point.
Why don't we have compulsory clinical trials? I wager they wouldn't pass a proper cost-benefit analysis.
Although in many respects I feel I am of like mind with Hopefully Anonymous, I am not as fully committed to maximizing life (my own or anybody's) as he is. For right now I would rather not die but continue living. However, I do not rule out that I might at some point or in some circumstances prefer death (the current lifespan found in first world countries is unusual and many cultures have glorified death, so I don't think it can be said that such commitment to survival is universal). This does not mean I have any misgivings about Eliezer's work toward life-extension though.
I personally would rather live in a society oriented around goals such as the ones Hopefully Anonymous describes than one based on principles. That being said, I don't think goals can be objectively determined, and I would not necessarily have complete trust in the institutions whose responsibility it is to seek those goals. Following a previous Stirnerite egoist, Benjamin Tucker, I would prefer it if people could voluntarily form contractually arranged societies seeking whichever goals they see fit (or based on principles, as many seem to prefer those, and I do not begrudge them their preference). I doubt this would result in a single model of society; it would instead be what Keith Preston has referred to as "panarchy".
The situation Hopefully Anonymous describes, in which everyone is better off, is usually referred to as a "Pareto improvement". One of the troubles with Pareto improvements (besides being nearly impossible to attain, as the randomly selected individual whose health is adversely affected could be worse off than he might have been otherwise) is that "better" is inherently subjective, which is why I favor voluntarily-agreed-to contracts. Because I do not consider any normative statements/beliefs to have any objective truth value, I can't very well go around calling people "irrational" for them, but I certainly can have a very low opinion of them, just as I do of those who like music I despise. I do think that people often do a poor job of thinking about positive facts and in that sense can be called "irrational", and if they were Bayesian-rational I suspect many of their normative beliefs would also be different (in the direction of consequentialism). I suspect, though, that this is partly because I am imagining the type of person I would expect to be Bayesian-rational, contrasting him with my stereotype of especially irrational people, and confusing correlation with causation to some extent. Counterfactuals of this type can often be misleading and bring to mind the saying that "if my aunt had balls she'd be my uncle".
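For anyone unfamiliar with the jargon, a standard textbook statement (notation is the literature's, not anything from this thread): with u_i the utility of individual i, an outcome x Pareto-dominates an outcome y when

```latex
\forall i:\; u_i(x) \ge u_i(y) \quad \text{and} \quad \exists j:\; u_j(x) > u_j(y)
```

and x is Pareto optimal when no feasible outcome Pareto-dominates it, i.e. when no further Pareto improvements are available.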
If my word was law a lot of things would be different about society, but forced medical experiments rank rather low on the list. I do not take the libertarian non-aggression principle as an "axiom" that is never to be violated, but I think very highly of it with regard to reducing conflict. No matter how much one might think that the people simply ought to act like the New Socialist/[insert here] Man, in reality they often don't and whining about how unenlightened they are won't change the stubborn reality.
Finally, I do not know if I want my own society to be creating public goods when we might be able to free-ride off the discoveries of others.
Just because research shows that human beings are insane, does not mean that turning power over to a government composed of human beings will cause it to fix the problem.
Eliezer, I don't think the approach I'm suggesting needs to be done through government. For example, it could be done extragovernmentally, and then it would require an exercise of government power to prevent extragovernmental agents from carrying it out.
TGGP, it sounds like you're saying that if certain social arrangements become too yucky to optimize your personal odds of persistence (and I understand maximizing general odds is different from maximizing personal odds), then you'd rather die (or at least take increased odds of death)? I can't say I relate to that point of view at all.
Nathan, I think the reason we don't have compulsory medical trials is probably explained more by "functional, not optimal" than by the possibility that they don't pass cost-benefit. Here I'm specifically making randomized compulsory medical trials contingent on the degree to which they pass cost-benefit. It seems to me such a naturally beneficial idea (at least on some levels) that I'm curious whether utilitarians like Singer have at least done the analysis.
I'm sure readers other than me have pet ideas that they'd like to see exposed to community scrutiny so I hope some other readers throw out some bombs, too.
Another interest is a better version of the "Nobel Prize Sperm Bank": a version individualists could support, structured around volunteering and financial incentives, and incorporating donated (or purchased) eggs, sperm, surrogate wombs, and adoptive parents. The genetic material would be selected from those most talented at solving the existential threats humanity faces (not necessarily Nobel Prize winners); the surrogates and adoptive parents would probably be less talented, but still the best at some combination of nurturing and existential-threat-solving; and each offspring would have an endowed trust giving them financial rewards for each stage of education and professional development they choose to complete, geared towards making them an expert at solving existential threats. I think all this could be done within current laws and social norms in the West. If the singularity is coming, this is all probably unnecessary (or, more ominously, useless), but if there are barriers to AI of which we're currently unaware, this could speed up solving the challenges of our current aging/SENS problem in particular, and the various other difficult existential problems of which we're aware or which are as yet unknown.
I think this relates to overcoming bias because I'm not sure of objections to doing something like this other than a social-aesthetic bias that this would be yucky, or an assumption that people smart at solving the difficult challenges humanity faces arise magically.
I hope I'm not misunderstanding Hopefully Anonymous' question here, but it would seem to me that a society whose laws are based solely on rationally minimizing harm would have to be a society with little or no freedom. I guess it depends on what is defined as the maximum risk the society is willing to tolerate to the physical safety of individuals (and, I presume, to the structure of the society as a whole). I think Eliezer certainly hinted at what I feel would be the main problem with a society whose laws are made to counteract individual irrationality: in doing so you would have to take away individual choice for the most part. That can be argued as a good thing up to a point, perhaps. But if you follow the goal of reducing the risk of harm far enough (which it would most certainly be rational to do if that is your goal), you'd end up in a place where no one can do much of anything, because nearly every activity we engage in on a daily basis carries some amount of risk.
Driving a car, for example, would most certainly have to be done away with: the risk of serious injury or death is pretty high, and the overall harm to society in terms of wages lost, damage to property, and productive lives lost is quite high. Then again, the benefits of driving to society as a whole probably outweigh the costs, so in such a society perhaps it would be allowed.
I'm not sure I understand how such a system could come about extragovernmentally. Could you elaborate? While social norms certainly appear outside of government influence, laws are a function of government and really only carry weight because the government has the right (or at least the ability) to enforce them. For your example of compulsory medical trials (or anything similar), I think it would take a very drastic change in social norms to make such a thing work without a government forcing people to take part in medical experiments.
I did not mean to write this much at all. But it's an interesting topic for sure and it's not one I'm well educated on unfortunately. The above was just my initial reaction to the question.
What's the point of freedom? Is it god-given? An illusion? Is it utilitarian (for example, promoting innovation and economic growth through market participation) within certain threshold levels, to the degree that it helps maximize our mutual odds of persistence? Personally, I lean at least towards the latter justification for promoting certain amounts of free agency for people in society. But I think it's a bit arbitrary that freedom can be curtailed to forestall death from a threat in one hour's time, or one day's time, or one week's time, but not in a few decades' time (as would be attempted with the compulsory medical trial participation example).
Hopefully Anonymous, my point is that optimal is functional. If we find that our "optimal" policy is not functional, we need to expand the scope of our cost-benefit analysis.
If enough people are seriously disgusted by the possibility of compulsory trials (and I think they would be), the policy is unlikely to pass a cost-benefit test. When people balk that a particular policy will take their freedom, they are essentially saying "this policy would cause me harm, since I value my freedom." We need to look beyond the most obvious costs and benefits when we evaluate policies.
A related example: by superficial utilitarian standards, compulsory medical trials for only the lowest-income members of society might seem a better policy than randomized trials in which anyone can be chosen regardless of economic status. After all, high-income people are far more likely to be meaningfully contributing to society. But the "poor only" law plainly violates our sense of equity and fairness, which is equivalent to saying it imposes large costs on us.
So I don't think, as savagehenry says, that a society built on minimizing harm would have to be much less free, provided we define "harm" sufficiently broadly. People value their freedom highly, and loss of freedom is quite harmful given those values.
Hopefully Anonymous,
How might you suggest that people be forced to comply with the terms of the forced clinical trials? Many involve a daily dose of meds. Those who do not want to participate in the trial but are forced are not likely to comply with their prescribed dosage.
Many people believe freedom is god-given. Others, whether or not they believe in god, believe freedom is a human right, and are morally opposed to those attempting to curtail our freedoms, even if it has the possibility of benefitting the world. Maybe that makes them selfish, or maybe we are convinced that there is something fundamentally true in the idea that the end does not always justify the means.
I'm interested in responses to these lines:
...
"But I think it's a bit arbitrary that freedom can be curtailed to forestall death from a threat in one hour's time, or one day's time, or one week's time, but not in a few decade's time (as would be attempted with the compulsory medical trial participation example)."
and
"I don't think randomly drafting people into medical experiments to benefit human health/medical knowledge would just help society. I think it helps all of us individuals at risk of being so drafted, provided it's structured in such a way that our risk of disease and death ends up net lower than if human medical experimentation wasn't being done in this way.
I'd think economists might look at our humoring of various "moral intuitions"/biases as a sort of luxury spending, or waste. There also might be a cost in terms of human life, health, etc. that could legitimately be described as morally horrific.
It goes to the problem of how people often think shooting and killing 3 people is much worse than fraud, corruption, or waste that wipes out hundreds of millions of dollars of wealth, although objectively that reduction in global wealth might mean a much greater negative impact on human life and health."
...
I think it's worth looking into whether eww-bias-derived moral intuitions on topics such as freedom actually result in social waste such that the net freedom for all humans is lower. For example, we all may be more likely to die as a result of failing to have randomized compulsory medical trials at this stage of human history. Thus, by not engaging in this temporary fix, are we giving up a lot more freedom 50 years from now?
The valuing of freedom now more than freedom later (if that's what this is) parallels a classic bias: preferring less money now to more money later, beyond what the time value of money would justify.
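For reference, the standard benchmark (textbook notation, not anything from this thread): at discount rate r, an amount F received t years from now has present value

```latex
PV = \frac{F}{(1 + r)^{t}}
```

so taking a smaller sum today is only rational when it exceeds this PV. The bias alluded to above is discounting future goods, money or freedom alike, more steeply than any defensible r would justify.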
I thought Cochran and Harpending's letter was the most interesting. As for Murray, I think he tends to mythologize more than give primacy to empiricism. I find a Murray vs. Patricia Williams type dialectic to be annoying, performative, and mostly about manufacturing American cultural norms (while drowning out more interesting and critical voices). So I'm glad the discussion on the topics related to human intelligence is expanding, and expanding beyond some narrow Left/Right performance.
forgot to include the link:
http://www.commentarymagazine.com/cm/main/viewArticle.html?id=10916&page=all
This is interesting and somewhat relevant to the topic of this blog:
http://www.aft.org/pubs-reports/american_educator/issues/summer07/Crit_Thinking.pdf
I am kicking myself for not doing this earlier, but Hopefully Anonymous, I think you would be interested in the writings of Mencius Moldbug, beginning with his Formalist Manifesto and hopefully including Political Sanity in One Easy Step, The Magic of Symmetric Sovereignty and The Fnargland Grand Challenge.
I have recently been wondering whether atheism is more irrational than belief in a deity. Given that both require a leap of faith, is it the case that atheism requires the greater leap? Assuming that the odds against the creation of my consciousness, given the rigours of natural selection, are almost infinite, is it not more rational to believe that it is the result of a supernatural will?
Following from this, and to meet the requirement of "overcoming bias": given that I am a child of an urban environment, is it perhaps a bias to see the creativity of humanity all around me and thereby extrapolate from this single data (reference) point that there is not much else out there, thereby pushing me towards a more irrational atheism?
Answers much appreciated.
Mark, until I read Kurzweil's interesting argument that we're most likely living in a simulation (within a simulation, etc. - almost all the way down), I thought there was more likely than not no intelligent creator of our apparent reality. Now the stronger argument seems to me that our apparent reality is a simulation of some other intelligence's reality, with some abstractions/reductions of their more complex reality - just as we've already filled the Earth with various (and increasingly better) simulations of the universe and of our own apparent reality.
I now wish I could pass you the link to a paper written by an Oxford professor of philosophy who estimated probabilities for events likely to precipitate the end of humanity, e.g. nuclear armageddon, grey goo, etc. The probability that the more complex society presses end-game on our particular simulation is alarmingly high.
Mark, alarmingly high? I don't see how that probability can be calculated as any higher than the existential threat of quantum flux or other simple, random end to our apparent reality, but I'd be interested in seeing the paper.
I've thought about the Jesus Camp video you presented to me. I am curious: why those particular examples to get your point across regarding the post?
Just Curious, Anna
This makes notions of representative democracy, at least in the USA, seem a bit silly:
http://andrewsullivan.theatlantic.com/the_daily_dish/2007/07/one-problem-wit.html
The link details evidence that most Americans have very low knowledge levels of the basics of American government.
Nick and Eliezer, are you still Singularitarians?
http://en.wikipedia.org/wiki/Singularitarian
The idea that people are actively working to bring about self-improving, smarter-than-humanity intelligences scares me, because I think you're blind to your own ruthless selfishness (not meant pejoratively), and thus think that, in creating something smarter than us (and therefore you), you can make it attempt to be kind to us, as you perceive yourselves to be attempting to be kind to people generally.
In contrast, I don't see either of you as Gandhi-types (here I'm referring to the archetypal elements of Gandhi's self-cultivated image, not his actual life-in-practice). It may be a hubris-derived bias that makes you think otherwise. I don't see any singularitarians attempting to keep their pleasurable resource use to a minimum in order to maximize their ability to save currently existing lives. Instead I see thousands or millions of people dying daily, permanently, while leading singularitarians enjoy a variety of life's simple pleasures.
My prescriptive solution: more selfishness, fear, and paranoia on your end. Be thankful that you're apparently (big caveat) among the smartest entities in apparent reality and that there's apparently nothing of much greater intelligence seeking resources in your shared environment. Rather than consciously trying to bring about a singularity, I think we should race against a naturally occurring singularity to understand the various existential threats to us and to minimize them.
At the same time, I think we should try to realistically assess more mundane existential threats and threats to our personal persistence, and try to minimize these too with what seems to be the best proportionate energy and effort.
But the rationalizations for why people are trying to intentionally create a self-improving intelligence smarter than humanity seem to me very, very weak, and could be unnecessarily catastrophic to our existence.
Maybe the Brazilian Appeals Court was right?
http://apnews.myway.com/article/20070718/D8QEV3703.html
I'd like to lobby for a new open thread to be created weekly.
This statement seems to me to be extraordinarily (relative to the capabilities of the presumed authors) ungrounded in empiricism. All sorts of ideas in it are framed as declarative fact, when I think they would more accurately be presented as conjecture or as aspirations of unknown certainty. I'm very interested in the Singularity Institute people at overcomingbias addressing these concerns directly.
The statement the above post refers to:
http://www.singinst.org/overview/whyworktowardthesingularity
Topic for election time? Should most rational agents bother voting in national elections, given some typical costs and benefits of doing so? Are the voters typically behaving more rationally than the abstainers, or the other way around? Is such voting behaviour better explained as a social signalling mechanism than by its effect on who is in power?
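As a starting point: the standard treatment in the public choice literature is the Riker-Ordeshook "calculus of voting" (the notation below is the literature's), under which a rational agent votes only if

```latex
p\,B + D \;>\; C
```

where p is the probability that one's vote is pivotal, B the value of one's preferred outcome winning, D the expressive or signalling value of the act itself, and C the cost of voting. Since p is minuscule in national elections, the question largely reduces to D, which is exactly the signalling hypothesis raised above.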
You folks tell me if this deserves its own thread.
Science, Engineering, and Uncoolness
Nowadays, it seems that the correlation between sciency stuff, social ineptitude, and uncoolness is cemented in the mind of the public. But this association seems to be very era- and culture-specific.
As a lesswronger, I find this ironic: in Islamic countries, "scientists" are called by the same word used for religious leaders and other teachers, "olama", literally "knowers". Historically there's been a huge overlap between the two, and when one of these folks speaks, you're supposed to shut up and listen. This is still true to this day. There might not be much wealth to be gained from marrying a scientist, but there was status; amusingly enough, it's modern-day materialism that is pushing them into irrelevance, as money becomes, more and more, the sole measure of status.
In the West, in the nineteenth century, Science and Progress were hip and awesome. Being a scientist of some sort was practically a requirement for any pulp hero. In the USA, an era of great works of engineering that had a dramatic impact on quality of life made engineers heroes of popular fiction: men of knowledge and rigor who would not bow down to money and lawyer-cushioned bourgeois, or to corrupt and fickle politicians; men who would stand up against injustice and get the job done no matter what. Everyone wanted to call themselves an engineer, and the word was rampantly abused into meaninglessness; florists called themselves "flower engineers"! That's how cool being an engineer was.
In the Soviet Union, as long as they didn't step on the toes of the Party, scientists were highly acclaimed and respected; they got tons of honor and status. There was a huge emphasis on technological progress, on mankind reaching its full potential - at least on paper.
Nowadays, nearly the entire leadership of China is made up of technicians and engineers - not lawyers, or economists, or literati. These people care about only one thing, getting the job done, and that's what Science does.
So, I've really got to ask: when and how did Science and Engineering become "uncool", and why are their practitioners termed "geeks", a term once used for sideshow circus performers whose specialty was eating chickens alive (or something like that)? How can the process be reversed? Because, from a utilitarian standpoint, Science being cool and appreciated and respectable is kind of important.
If you mean you want to post the actual fanfic here: this is probably not the best place for fanfics; try fanfiction.net, perhaps?
If you mean you want to post the fanfic somewhere else: what do the instructions somewhere-else say?
If you mean the fanfic is already posted but you want to post a link to it here: easiest is probably to put a comment in the current media thread (called "March 2017 Media Thread"). To make a link, write something like this: [what you want displayed](url for link).
By request of the community, an Open Thread for free-form comments, so long as they're still related to the basic project of this blog.
A word on post requests: You're free to ask, but the authors can't commit to posting on requested topics - it's hard enough to do the ones we have in mind already.