Politics Discussion Thread January 2013
- Top-level comments should introduce arguments; responses should be responses to those arguments.
- Upvote and downvote based on whether or not you find an argument convincing in the context in which it was raised. This means if it's a good argument against the argument it is responding to, not whether or not there's a good/obvious counterargument to it; if you have a good counterargument, raise it. If it's a convincing argument, and the counterargument is also convincing, upvote both. If both arguments are unconvincing, downvote both.
- A single argument per comment would be ideal; as MixedNuts points out here, it's otherwise hard to distinguish between one good and one bad argument, which makes the upvoting/downvoting difficult to evaluate.
- In general try to avoid color politics; try to discuss political issues, rather than political parties, wherever possible.
As Multiheaded added, "Personal is Political" topics like gender relations, etc., may also belong here.
[Link] Economists' views differ by gender
Edit: ParagonProtege has provided a link to the original study. Thank you! (^_^)
What does an economist think of that?
A lot depends on whether the economist is a man or a woman. A new study shows a large gender gap on economic policy among the nation's professional economists, a divide similar to -- and in some cases bigger than -- the gender divide found in the general public.
Differences extend to core professional beliefs -- such as the effect of minimum wage laws -- not just matters of political opinion.
Female economists tend to favor a bigger role for government while male economists have greater faith in business and the marketplace. Is the U.S. economy excessively regulated? Sixty-five percent of female economists said "no" -- 24 percentage points higher than male economists.
Can this be reasonably explained by self-interest? Female and male economists' views are probably coloured by gender solidarity. Government jobs may be more appealing to women than to men because of women's documented greater risk aversion. Regardless of the reason, government jobs are more important for women than for men. Also, in the US, where the study was done, middle-class white women benefit quite a bit from affirmative action in government hiring.
"As a group, we are pro-market," says Ann Mari May, co-author of the study and a University of Nebraska economist. "But women are more likely to accept government regulation and involvement in economic activity than our male colleagues."
Opinion differences between men and women are well-documented in the general public. President Obama leads Mitt Romney by 10 percentage points among women. Romney leads Obama by 3 percentage points among men, according to the latest Gallup Poll.
"Politics is the mind-killer" probably does play a role in explaining the difference.
The survey of 400 economists is one of the first to examine whether gender differences matter within a profession. The answer for economists: Yes.
How economists think:
- Health insurance. Female economists thought employers should be required to provide health insurance for full-time workers: 40% in favor to 37% against, with the rest offering no opinion. By contrast, men were strongly against the idea: 21% in favor and 52% against.
- Education. Females narrowly opposed taxpayer-funded vouchers that parents could use for tuition at a public or private school of their choice. Male economists love the idea: 61% to 14%.
- Labor standards. Females believed, 48% to 33%, that trade policy should be linked to labor standards in foreign countries. Males disagreed: 60% to 23%.
The first two points are somewhat congruent with stereotypes. Anyone who has run into the frequent iSteve commenter "Whiskey" will probably note that the third point indicates women may not hate hate HATE lower- and middle-class beta males in this case.
"It's very puzzling," says free-market economist Veronique de Rugy of the Mercatus Center at George Mason University in Fairfax, Va. "Not a day goes by that I don't ask myself why there are so few women economists on the free-market side."
A native of France, de Rugy supported government intervention early in her life but changed her mind after studying economics. "We want many of the same things as liberals -- less poverty, more health care -- but have radically different ideas on how to achieve it."
This seems plausible; politics is about applause lights, after all, and the tribes are what matter, not the particular shape of their attire. But might value differences still be behind the gender difference? Maybe some failed utopias I recall reading about aren't really failed.
Liberal economist Dean Baker, co-founder of the Center for Economic Policy and Research, says male economists have been on the inside of the profession, confirming each other's anti-regulation views. Women, as outsiders, "are more likely to think independently or at least see people outside of the economics profession as forming their peer group," he says.
The gender balance in economics is changing. One-third of economics doctorates now go to women. The chair of the White House Council of Economic Advisers has been a woman three of 27 times since 1946 -- one advising Obama and two advising Bill Clinton. The Federal Reserve Board of Governors has three women, bringing the total to eight of 90 members since 1914.
"More diversity is needed at the table when public policy is discussed," May says.
Somehow I think this does not include ideological diversity.
Economists do agree on some things. Female economists agree with men that Europe has too much regulation and that Walmart is good for society. Male economists agree with their female colleagues that military spending is too high.
The genders are most divorced from each other on the question of equality for women. Male economists overwhelmingly think the wage gap between men and women is largely the result of individuals' skills, experience and voluntary choices. Female economists overwhelmingly disagree by a margin of 4-to-1.
The biggest disagreement: 76% of women say faculty opportunities in economics favor men. Male economists point the opposite way: 80% say women are favored or the process is neutral.
No mystery here. (^_^)
[Link] The Collapse of Complex Societies
TGGP, a frequent commenter at Overcoming Bias (and hence old LessWrong), writes about his thoughts on a book by Joseph Tainter.
I’ve seen Joseph Tainter’s “The Collapse of Complex Societies” recommended in a few different places. Jared Diamond’s book might be one of them; the guest-posts of Captain David Ryan aka “Tony Comstock” for James Fallows at the Atlantic might be another. The sidebar of John Robb’s “Global Guerrillas” blog is the only one I remember with certainty. It’s not a very long book, and you can get the gist of it from Tainter’s Wikipedia page.
Lots of people have found civilizational collapses to be interesting, and Tainter reviews many of their theories while finding them wanting. The “eleven major themes in the explanation of collapse” he lists are depletion/cessation of a vital resource, establishment of a new resource base (which I found too stupid to take seriously even momentarily), insurmountable catastrophe, insufficient response to circumstances (which is almost tautological), other complex societies, intruders, class conflict or elite mismanagement, social dysfunction, mystical factors, chance concatenation of events (almost tautological if you don’t think collapse is predetermined) and economic factors. Like Tainter, I find the “mystical” theories to not really constitute theories at all, although some of the most popular writers on the subject (Spengler, Toynbee, various ancients) are included there. Tainter often contrasts “integrative” (or “functional”) theories on the origin of the state/complexity vs “conflict” theories, and acknowledges that he is more partial toward the former. Unfortunately, most of the latter theorists he lists are Marxists and carry a lot of baggage. The observation that throughout much of history some set of people ruled over others as a result of military victory regardless of any benefit to the subjects (though a Leviathan may happen to have upsides) predates Marx, with Ibn Khaldun being one of the few non-Marxist examples Tainter mentions. That’s not to say Tainter is anti-Marxist; he actually compares Marxist “social science” to Einsteinian physics and Darwinian biology! I suppose there is (or was) just such a heavy representation of Marxists among academic anthropologists and historians that Tainter regards Marxism as somewhat normative, whereas to me it’s something weird and laughable like Holocaust revisionism.
Resource depletion is the reverse of the theory I found so absurd, and (showing there is hope for humanity) it is a much more popular theory. J. Donald Hughes blamed Rome’s collapse in part on deforestation, but W. Groenman van Waateringe some years later provided evidence that at the time cereal pollens declined while forest pollens increased. That of course is not a causal proof, since it is documented that when the empire was declining many agricultural regions became depopulated. Waateringe blames agricultural intensification for increasing the population and thus the demands on agriculture, but to me that just raises the question of why marginal agricultural lands weren’t reclaimed. There actually is an explanation for that depopulation, but like Tainter I’m not going to get to that in a hurry. Tainter finds this theory (like most other theories he rejects) unsatisfactory because complex societies should have leaders who notice the depletion and think of a response. I am reminded of David St. Hubbins’s girlfriend in Spinal Tap who says “It’s just a problem! It gets solved!” Sometimes a solution is not within a society’s feasible choice-set. Tainter briefly acknowledges that possibility (noting that it would have to be proved, which is difficult given how little information we have about many ancient societies) but spends more time castigating imaginary opponents depicted as claiming societal elites just stood around slack-jawed rather than attempting to deal with the situation. I would call that a strawman, except that Jared Diamond’s “Collapse” bears too much resemblance. He also mentions Richard Wilkinson’s documenting that deforestation spurred development in late/post medieval England, which is really just extra evidence that Hughes was wrong (as had already been mentioned) rather than a broader point against a class of theories.
I should acknowledge that Tainter also cites evidence on the differential abandonment of cities and the failure to correlate with expected environmental characteristics, which is just the sort of thing that would later puncture Jared Diamond’s take on the Maya. His point that greed is constant enough (or its variance poorly enough understood) to make it a poor explanation of a variable situation is fair enough, but he can’t dismiss theories of collapse based on mismanagement because by their nature they should keep their society going, even if only out of self-interest. There are basic agency problems that mean one shouldn’t identify the interests of elite (or non-elite, for that matter) actors with that of a larger organization. In an uncertain world it also makes sense to discount the future (you or your dynasty might be replaced, might as well get what you can while you can).
Steve Sailer once critiqued Diamond’s thesis by noting that societies tend to die from homicide rather than suicide, and lumping together two of Tainter’s rejected explanations would make for a very popular theory. Tainter, however, would exclude most cases clearly caused by another complex society because those involve absorption rather than collapse (indigenous populations thoroughly devastated by disease before Europeans even arrived would be exceptions). So his question is then why a complex society would succumb to less complex intruders. Sometimes it may not be so easy to disentangle the two scenarios, such as when the Persian and Eastern Roman empires exhausted themselves fighting each other, leaving themselves open to the Muslim invaders exploding out of Arabia (although it was only the Persians that succumbed in fairly short order, and Tainter wouldn’t consider that a collapse). Tainter says it is “unsatisfactory [...] that a recurrent process – collapse – is explained by a random variable, by historical accident”. If random numbers for that variable (I’m imagining a stochastic process with a threshold for collapse rather than a binary control variable) are constantly being generated over time, it shouldn’t be that unsatisfactory that they recur throughout history. Tainter does make the legitimate point that elites, with a number of Roman emperors being good examples, have often proved capable of dealing with barbarian intrusions. But there’s no guarantee that they will always be successful. He also wonders why invaders would “destroy those things which repay conquest”. The obvious answer is that, by the second law of thermodynamics, it’s very easy to break things, and that includes during the process of conquering & looting.
Some relatively sophisticated barbarians may conquer a territory and leave much of the administrative apparatus intact to rule as before, others may have no particular interest (or competence) in being bound to a territory and collecting scraps of taxes from farmers.
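The "stochastic process with a threshold" aside can be made concrete with a toy model (entirely my own construction, not Tainter's or the reviewer's): draw an intrusion shock each period, and call "collapse" the first period whose shock exceeds the society's response capacity. Even a shock that is rare in any single period recurs with certainty over long enough horizons, so recurrence is no embarrassment to a "historical accident" explanation.

```python
import random

def time_to_collapse(capacity=3.0, seed=None):
    """Periods until the first shock exceeds `capacity`.

    Shocks are i.i.d. standard normals. With capacity=3, any given
    period fails with probability ~0.13%, yet a first-passage event
    is still certain eventually; the wait is long but finite.
    """
    rng = random.Random(seed)
    t = 0
    while True:
        t += 1
        if rng.gauss(0.0, 1.0) > capacity:  # an intrusion too big to handle
            return t
```

Elites who "have often proved capable of dealing with barbarian intrusions" correspond to a high capacity here; the model just says no finite capacity guarantees success forever.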
The collapse of Rome is probably the most famous example (at least to westerners) and forms one of his three case studies, paradigmatic of the most complex sort of society to collapse. I find it more enlightening than the others because, and call me a drunk looking under a lamppost if you will, it’s the most well documented. Among the things documented is that the proximate cause of collapse was invasion by various (mostly Germanic) barbarians. That is discussed extensively in Peter Heather’s “Empires and Barbarians”, which I have discussed earlier. It’s because I read that one so recently that a few of Tainter’s remarks stuck out. Focusing on the internal soundness of a society and its affordable scale/complexity, he writes “The Germanic kingdoms that succeeded Roman rule in the West were more successful at resisting invasions”. If he limited that claim to the particular kingdoms which survived past the dark ages, it would be rather tautological. But if he means Germanic kingdoms generally, then it just doesn’t seem to be the case. They got invaded and replaced all the time; we just don’t remember the ones that died out. Part of the reason the Romans had such problems with barbarians is that one group would get invaded by another, and then start moving around and displacing other barbarians. The western Roman empire, in contrast, was able to survive many invasions before the last Roman emperor was toppled. Tainter portrays the formation of the empire as a process in which a territory is able to summon the resources to mobilize a force to conquer more territory to extract its resources, then rinse and repeat in a self-sustaining cycle until it expanded too far to get many marginal returns. There is some truth to that, but it overlooks the non-extractive aspect of Roman rule which increased productivity in conquered territories, thereby making those territories more attractive as a target for raiding.
In Heather’s story, barbarian confederations on the border engaged in a process of competitive selection for the strength to hold an attractive position (for reasons of trade, raiding and diplomatic subsidy) and eventually the size and cohesion necessary to survive and settle within Roman territory. Focusing on the internals of collapsed societies, Tainter overlooks any dynamics occurring within outside societies that could give them the capability of defeating the imperial power. Heather’s account is similar to Peter Turchin’s in “War and Peace and War” except that, like Khaldun, Turchin focuses more on the softening effects of metropolitan decadence that renders old dynasties vulnerable to the hardened asabiya-endowed border marchers.
Tainter’s two other case studies are the Mayan lowland city-states and Chaco canyon cliff-dwellers. The Mayans are less complex (or at least less well-documented, since the conquistadors destroyed many of the remaining documents) and the Chacoans even less so. He also used the Ik as an example of an extremely simple society that collapsed even further, below the level of familial organization, but he didn’t discuss it all that much and I’m not sure how reliable a primary source Colin Turnbull was (supposedly they hadn’t been hunter-gatherers for centuries when their supposed “livelihood” of hunting was banned). The interesting thing about the Maya is that there were multiple relatively equivalent city-states rather than one dominant hegemon. Tainter includes them as a case study of collapse, even as he states elsewhere that collapse is not an option for “peer polity competition” because the weakening of one peer just invites conquest by another. Also, rather than devoting most of their resources under duress to a standing army (something documented in the Roman case), Tainter discusses the building of monuments as conspicuous consumption to demonstrate how powerful and brutal (per the depictions of torture) the city was, rather similar to the story Diamond tells. I don’t know what kind of evidence we have for the scale of their military expenditures, although we know they made war from time to time. There was no writing whatsoever in Chaco canyon, so we are left with the old archaeological standby of potsherds and whatnot. Tainter does make the interesting point that the culture benefitted from uniting different ecological niches, with higher elevation territories having more agricultural productivity in cold wet years while lower elevation ones were more productive in warm dry ones. An economist would say that this diversified portfolio allowed for more consumption smoothing.
However, I was confused by Tainter’s argument that as more outlier territories were incorporated, diversity and gains from exchange went down. As long as the ratio between high and low places was stable, incorporating more territories should not cause any problems in that respect. Admittedly, this does mean that there are more viable subsets of communities that would be individually stable if they withdrew, which is indeed what he claims happened eventually. But he also seemed to be suggesting that the system overall was degrading in performance, without clearly stating whether an excess of a particular type of environment was upsetting the balance.
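The consumption-smoothing point is ordinary portfolio diversification, and a quick check with made-up yields shows why it is the high/low ratio, not the total number of territories, that matters:

```python
import statistics

# Hypothetical annual yields (numbers invented for illustration):
# highlands produce well in cold-wet years, lowlands in warm-dry years.
highland = [10, 2, 10, 2, 10, 2]
lowland  = [2, 10, 2, 10, 2, 10]

# Pooling the anti-correlated niches smooths consumption completely here.
pooled = [(h + l) / 2 for h, l in zip(highland, lowland)]

print(statistics.pstdev(highland))  # 4.0 -- volatile on its own
print(statistics.pstdev(pooled))    # 0.0 -- smoothed by exchange
```

Doubling the number of highland and lowland territories in the same proportion leaves the pooled series, and hence the smoothing, unchanged, which is exactly why Tainter's claim about incorporating more outlier territories is puzzling unless the balance between niche types shifted.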
Tainter’s theory to explain collapse is declining marginal returns. This is a common concept in economics, but it is normally used to understand how equilibrium can develop. Applied to a society, we would expect the declining returns to territorial expansion or administrative complexity (the former often requiring some degree of the latter) to result in eventual stasis rather than collapse. David Friedman has an interesting paper on how the advantages of taxing trade, land or labor gave rise to different equilibria for the sizes & boundaries of polities during the Roman, medieval and nationalist eras in Europe. In Friedman’s theory, each shift between eras resulted from some exogenous change rather than being part of the internal logic of societies. Tainter relates various interesting bits from C. Northcote Parkinson’s “Parkinson’s Law, and Other Studies in Administration”. For example, while “between 1914 and 1967, the number of capital ships in the British Navy declined by 78.9 percent, the number of officers and enlisted men by 32.9 percent, and the number of dockyard workers by 33.7 percent [...] the number of dockyard officials and clerks increased by 247 percent, and the number of Admiralty officials by 767 percent” (emphasis added). Mencius Moldbug would not be surprised to learn that “between 1935 and 1954 the number of officials in the British Colonial office increased by 447 percent” even though “the empire administered by these officials shrank considerably”. These examples are important because they do not demonstrate an increasingly large requirement of administrators for a marginal increase in size/complexity of an entity to be administered, but paying more for less. Parkinson’s explanation was bureaucratic self-serving, which Tainter rejects because he finds trends of increasing hierarchical specialization in the private sector.
But because Tainter fails to distinguish between declining marginal returns (eventually reaching zero at a steady-state) and NEGATIVE returns, he doesn’t specify whether the latter occurs in the private sector (though Karl Smith would not be surprised if it does for many publicly owned corporations whose shareholders would be better served by liquidation of assets). The growth of administration in higher education would also count, but as a heavily subsidized non-profit sector I can’t say it would qualify. At one point Tainter acknowledges “In many cases this increased, more costly complexity will yield no increased benefits, at other times the benefits will not be proportionate to costs” (emphasis in the original). This is precisely the question at issue with elite mismanagement or the out-of-control inertia of expanding administrative bureaucracies, but as noted he rejected Parkinson’s theory and mocks the idea of societies as runaway trains as self-evidently absurd. Instead he portrays collapse as a choice which is preferable once marginal returns have declined to a certain point. This didn’t entirely make sense to me, because if a society has accidentally shot past the point of zero marginal returns to one of negative returns, the sensible thing is just to reduce that marginal increase in complexity and return to the steady state with zero marginal returns.
The Roman empire sometimes seemed to behave in such a manner, losing some territory and sticking with a more defensible and administrable domain (although in Heather’s account some of the lost territory was among the most agriculturally productive), although Tainter thinks the conquests of Britain & Dacia never paid for themselves. So why the path dependency, such that changes are not simply reversible? There could be consumption of a not easily renewable resource, a sort of borrowing from the future that leaves future generations deeper in the red. This could happen with soil deterioration, though Tainter doesn’t discuss that much (odd, despite his focus on societies as means of managing sources of energy). His example of Roman emperors increasingly resorting to the debasing of the currency could count (by Diocletian’s time the empire collected taxes in kind rather than the currency it had rendered nearly worthless), as well as the selling of imperial land. The larger problem in Rome seemed to be an increasingly large portion of subjects who were citizens (both urban proletariat and squabbling elites) subject to fewer or no taxes, while marginal lands were abandoned by overtaxed farmers. An odd feature of the empire was that elected officials had to cover the costs of their own office, and as expenses rose there were fewer wealthy people willing to come forward as candidates, until the position was made hereditary. It became obligatory to farm certain deserted lands, with peasants drafted by local city Senates, and Constantine made soldiery a hereditary profession (which required a number of new laws over time to deal with sons who’d rather not follow that career). Taxation of land was simplistic and did not vary based on its quality or yield, so a farmer of marginal land would often be better off working for the owner of a more productive territory and paying rent than failing to cover the taxes on his own plot.
With agricultural labor becoming legally tied to the land, we can see the clear beginnings of serfdom and the manorial system. As mentioned, Tainter views the Roman collapse as a choice (as he does others), although of course accounts from the time were more apt to regard it as unfortunate failure or divine punishment.
Interestingly, the “peer polity competition” that replaced Roman civilization is a situation he regards as invulnerable to collapse as opposed to absorption, and he thinks that removing that “option” made peasants demand democratic representation. He acknowledges that this did not happen in the “Warring States” period of China, where instead the Confucian ideology of governance developed. He suggests “Perhaps participatory governance was simply not possible in ancient societies that were so much larger, demographically and territorially, than the Greek city-states”. Someone should have told James Madison (and I’m not being sarcastic). Interestingly enough, there was a civilization of Greek city-states which did collapse, just as we’ve mentioned the lowland Maya doing. These are the Mycenaean Greeks who preceded the Dark Ages of Homer’s time. Their collapse is usually attributed to invasion by Dorian Greeks, but Tainter isn’t convinced there’s enough evidence for the Dorians’ presence. Because “Collapse occurs, and can only occur, in a power vacuum” (emphasis in the original), both the Mycenaean Greek and lowland Maya polities must have experienced simultaneous collapse.
The choice of peasants may be limited to passively withdrawing support and just not working very hard (I’d have more to say on that if I’d read James Scott’s “Weapons of the Weak”), but even then I don’t think it’s a desirable outcome for peasants. I’ve mentioned Heather on the greater productivity of Roman territory, and what do you think happened to the masses when that productivity crashed? My understanding of the current archaeological consensus is that the population crashed as well. Tainter talks about the malnourished skeletons of the peasantry as evidence for the undesirability of certain degrees of complexity, but we also know that English peasants ate better after so many of their peers died of the bubonic plague (it’s also known that peasants have poorer diets than hunter-gatherers, though in Darwinian terms you definitely want to be a farmer). As in James Scott’s account of highland Southeast Asia (which I don’t entirely buy), was there much cultural defection of the peasantry to the greener pastures outside civilization? Tainter writes that “In 378 [...] Balkan miners went over en masse to the Visigoths”, and that others wished to be conquered/liberated. He argues that it is precisely the risk of being conquered that keeps many societies from simply reverting to a lower level of complexity “even if marginal returns are unfavorable”, but it’s unclear whether conquest is one of the outcomes being factored into those marginal returns.
Few people are going to read this book without speculating on their own complex society’s liability to collapse. John Robb and James Kunstler (along with some others in the “Peak Oil” camp) are going to place a high probability on it, while the Singularitarians have the opposite view. Globalization could mean the entire world is now in a state of “peer polity competition”, but modern norms (and economic incentives) against conquest and giving war a chance mean “failed states” can keep failing for a long time without someone replacing the bad management. Tainter’s studied societies are also Malthusian agricultural ones; it’s hard to know if the same logic will generalize past the industrial revolution. In modern technological economies the costs and benefits of advances may not be simple increasing or declining curves. Robin Hanson doesn’t even consider nearly free energy (which would be very important to Tainter) to be nearly as important as the replacement of most human labor by computers (since the latter takes up so much more of GDP). When Tainter was writing there was still just the slightest possibility of nuclear armageddon; now the most likely candidates for death by complexity are grey goo or an unstoppable manmade pandemic. My two cents are that collapse is unlikely in my lifetime, and that’s for the better considering how much worse things could be.
[Link] Statistically, People Are Not Very Good At Making Voting Decisions
Link. Nothing surprising considering previous work on the subject, but a good reminder.
A study by three scientists in the American Political Science Review finds that voters are not competent at accurately evaluating incumbent performance and are easily swayed by rhetoric, unrelated circumstances and recent events.
Gregory Huber, Seth Hill, and Gabriel Lenz constructed a 32-round game where players received payments from a computer "allocator." The goal is to maximize the value of those payments.
Halfway through, at round sixteen, the player had to decide whether to get a new allocator or to stick with the old one.
The allocators pay out over a normal distribution based on a randomly selected mean. Getting a new allocator means that a new mean is selected. This was meant to simulate an election based on performance.
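A toy reconstruction of the game mechanics might look like the following (all parameters here are my own guesses; the paper's actual payout scales and decision rules aren't given in this summary):

```python
import random

def play_game(rounds=32, switch_round=16, noise_sd=100.0, rng=None):
    """Toy version of the Huber/Hill/Lenz allocator game.

    Each allocator pays out draws from a normal distribution around
    its own hidden mean. At the halfway point the player may replace
    the allocator, which redraws the hidden mean -- the "election".
    """
    rng = rng or random.Random()
    mean = rng.gauss(0, 50)  # hidden quality of the incumbent allocator
    payments = []
    for r in range(1, rounds + 1):
        if r == switch_round and payments:
            # A fully rational player judges the incumbent on the
            # cumulative average of ALL prior payments, not just recent ones.
            if sum(payments) / len(payments) < 0:
                mean = rng.gauss(0, 50)  # replace the allocator
        payments.append(rng.gauss(mean, noise_sd))
    return sum(payments)
```

The study's three manipulations then amount to nudging players away from that cumulative-average rule: toward recent rounds, toward irrelevant lottery outcomes, or toward whatever the pre-election question primed.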
The group ran three experiments where they changed some of the rules of the game in order to find out how voters could be manipulated or confused over performance. Essentially, how good were voters at accurately analyzing the performance of the "allocator"?
- The first experiment merely alerted the player at round twelve that they would have the chance to pick a new allocator at round sixteen. This "election in November" reminder made the player weight recent performance in rounds 12-16 over earlier performance in rounds 1-12.
- The second experiment involved a lottery held at round eight or round sixteen. The payout was either -5000, 0, or 5000 tokens. The participant was told that the lottery was totally unrelated to the current allocator, but players still rewarded or punished their current allocator based on their lottery performance.
- The third experiment primed the player with a question right before the election. The question took an adapted form of either Ronald Reagan's "Are you better off than you were four years ago?" or John F. Kennedy's "The question you have to decide on November 8 is, is it good enough? Are you satisfied?"
The conclusion:
Participants overweight recent performance when made aware of the choice to retain an incumbent closer to election rather than distant from it (experiment 1), allowed unrelated events that affected their welfare to influence evaluations of incumbents (experiment 2), and were influenced by rhetoric to focus less on cumulative incumbent performance (experiment 3).
If you were ever wondering why Congress has a 95% incumbency rate despite an approval rating in the high teens, this study may be worth a read.
That Thing That Happened
I am emotionally excited and/or deeply hurt by what st_rev wrote recently. You better take me seriously because you've spent a lot of time reading my posts already and feel invested in our common tribe. Anecdote about how people are tribal thinkers.
That thing that happened shows that everything I was already advocating for is correct and necessary. Indeed it is time for everyone to put their differences aside and come together to carry out my recommended course of action. If you continue to deny what both you and I know in our hearts to be correct, you want everyone to die and I am defriending you.
I don't even know where to begin. This is what blueist ideology has been working towards for decades if not millennia, but to see it written here is hard to stomach even for one as used to the depravity caused by such delusions as I am. The lack of socially admired virtues among its adherents is frightening. Here I introduce an elaborate explanation of how blueist domination is not just completely obvious and a constant thorn in the side of all who wish more goodness but is achieved by the most questionable means often citing a particular blogger or public intellectual who I read in order to show how smart I am and because people I admire read him too. Followed by an appeal to the plot of a movie. Anecdote from my personal life. If you are familiar with the obscure work of an academic taken out of context and this does not convince you then you are clearly an intolerant sexual deviant engaging in motivated cognition.
Consider well: do you want to be on the wrong side of history? If you persist, millions or billions of people you will never meet will be simultaneously mystified and appalled that an issue so obvious caused such needless contention. They will argue whether you were motivated more by stupidity, malice, raw interest, or if you were a helpless victim of the times in which you lived. Characters in fiction set in your era will inevitably be on (or at worst, join) the right side unless they are unredeemable villains. (Including historical figures who were on the other side, lest they lose all audience sympathy.)
Remember: it's much more important what hypothetical future people will consider right than what you or current people you respect do. And you and I both know they'll agree with me.
While sympathetic to this criticism I must signal my world-weariness and sophistication by writing several long paragraphs about how this is much too optimistic and we are in grave danger of an imminent and eternal takeover by our opponents. The only solution is to begin work on an organization dedicated to preventing this which happens to give me access to material resources and attractive females.
Ciphergoth proves to be the lone voice of reason by encouraging us to recall what we all learned on 9/11:
However, we must also consider if this is not also a lesson to us all; a lesson that my political views are correct.
http://www.adequacy.org/stories/2001.9.12.102423.271.html
Politics Discussion Thread December 2012
I skipped October and November owing to election season, but opening back up:
- Top-level comments should introduce arguments; responses should be responses to those arguments.
- Upvote and downvote based on whether or not you find an argument convincing in the context in which it was raised. This means if it's a good argument against the argument it is responding to, not whether or not there's a good/obvious counterargument to it; if you have a good counterargument, raise it. If it's a convincing argument, and the counterargument is also convincing, upvote both. If both arguments are unconvincing, downvote both.
- A single argument per comment would be ideal; as MixedNuts points out here, it's otherwise hard to distinguish between one good and one bad argument, which makes the upvoting/downvoting difficult to evaluate.
- In general try to avoid color politics; try to discuss political issues, rather than political parties, wherever possible.
As Multiheaded added, "Personal is Political" stuff like gender relations, etc also may belong here.
[Link] Ideology, Motivated Reasoning, and Cognitive Reflection: An Experimental Study
Related to: Knowing About Biases Can Hurt People
Social psychologists have identified various plausible sources of ideological polarization over climate change, gun violence, national security, and like societal risks. This paper reports a study of three of them: the predominance of heuristic-driven information processing by members of the public; ideologically motivated cognition; and personality-trait correlates of political conservativism. The results of the study suggest reason to doubt two common surmises about how these dynamics interact. First, the study presents both observational and experimental data inconsistent with the hypothesis that political conservatism is distinctively associated with closed-mindedness: conservatives did no better or worse than liberals on an objective measure of cognitive reflection; and more importantly, both demonstrated the same unconscious tendency to fit assessments of empirical evidence to their ideological predispositions. Second, the study suggests that this form of bias is not a consequence of overreliance on heuristic or intuitive forms of reasoning; on the contrary, subjects who scored highest in cognitive reflection were the most likely to display ideologically motivated cognition. These findings corroborated the hypotheses of a third theory, which identifies motivated cognition as a form of information processing that rationally promotes individuals’ interests in forming and maintaining beliefs that signify their loyalty to important affinity groups. The paper discusses the normative significance of these findings, including the need to develop science communication strategies that shield policy-relevant facts from the influences that turn them into divisive symbols of identity.
[Link] The Worst-Run Big City in the U.S.
The Worst-Run Big City in the U.S.
A six page article that reads as a very interesting autopsy of what institutional dysfunction in the intersection of government and non-profits looks like. I recommend reading the whole thing.
Minus the alleged harassment, city government is filled with Yomi Agunbiades — and they're hardly ever disciplined, let alone fired. When asked, former Board of Supervisors President Aaron Peskin couldn't remember the last time a higher-up in city government was removed for incompetence. "There must have been somebody," he said at last, vainly searching for a name.
Accordingly, millions of taxpayer dollars are wasted on good ideas that fail for stupid reasons, and stupid ideas that fail for good reasons, and hardly anyone is taken to task.
The intrusion of politics into government pushes the city to enter long-term labor contracts it obviously can't afford, and no one is held accountable. A belief that good intentions matter more than results leads to inordinate amounts of government responsibility being shunted to nonprofits whose only documented achievement is to lobby the city for money. Meanwhile, piles of reports on how to remedy these problems go unread. There's no outrage, and nobody is disciplined, so things don't get fixed.
You don't say?
In 2007, the Department of Children, Youth, and Families (DCYF) held a seminar for the nonprofits vying for a piece of $78 million in funding. Grant seekers were told that in the next funding cycle, they would be required — for the first time — to provide quantifiable proof their programs were accomplishing something.
The room exploded with outrage. This wasn't fair. "What if we can bring in a family we've helped?" one nonprofit asked. Another offered: "We can tell you stories about the good work we do!" Not every organization is capable of demonstrating results, a nonprofit CEO complained. He suggested the city's funding process should actually penalize nonprofits able to measure results, so as to put everyone on an even footing. Heads nodded: This was a popular idea.
Reading this I had to bite my hand in frustration.
There are two lessons here. First, many San Francisco nonprofits believe they're entitled to money without having to prove that their programs work. Second, until 2007, the city agreed. Actually, most of the city still agrees. DCYF is the only city department that even attempts to track results. It's the model other departments are told to aspire to.
But Maria Su, DCYF's director, admitted that accountability is something her department still struggles with. It can track "output" — what a nonprofit does, how often, and with how many people — but it can't track "outcomes." It can't demonstrate that these outputs — the very things it pays nonprofits to do — are actually helping anyone.
"Believe me, there is still hostility to the idea that outcomes should be tracked," Su says. "I think we absolutely need to be able to provide that level of information. But it's still a work in progress." In the meantime, the city is spending about $500 million a year on programs that might or might not work.
What the efficient charity movement has done so far looks much more impressive in light of this. Reading the rest of the article I think you can on your own identify the problems caused by lost purposes, applause lights and a dozen or so other faults we've explored here for years.
Discussions here are in many respects a comforting illusion; this is what humanity is like out there in the real world, almost at its best: well educated, wealthy, and interested in the public good.
Yes it really is that bad.
FAI, FIA, and singularity politics
In discussing scenarios of the future, I speak of "slow futures" and "fast futures". A fast future is exemplified by what is now called a hard takeoff singularity: something bootstraps its way to superhuman intelligence in a short time. A slow future is a continuation of history as we know it: decades pass and the world changes, with new politics, culture, and technology. To some extent the Hanson vs Yudkowsky debate was about slow vs fast; Robin's future is fast-moving, but on the way there, there's never an event in which some single "agent" becomes all-powerful by getting ahead of all others.
The Singularity Institute does many things, but I take its core agenda to be about a fast scenario. The theoretical objective is to design an AI which would still be friendly if it became all-powerful. There is also the practical objective of ensuring that the first AI across the self-enhancement threshold is friendly. One way to do that is to be the one who makes it, but that's asking a lot. Another way is to have enough FAI design and FAI theory out there, that the people who do win the mind race will have known about it and will have taken it into consideration. Then there are mixed strategies, such as working on FAI theory while liaising with known AI projects that are contenders in the race and whose principals are receptive to the idea of friendliness.
I recently criticised a lot of the ideas that circulate in conjunction with the concept of friendly AI. The "sober" ideas and the "extreme" ideas have a certain correlation with slow-future and fast-future scenarios, respectively. The sober future is a slow one where AIs exist and posthumanity expands into space, but history, politics, and finitude aren't transcended. The extreme future is a fast one where one day the ingredients for a hard takeoff are brought together in one place, an artificial god is born, and, depending on its inclinations and on the nature of reality, something transcendental happens: everyone uploads to the Planck scale, our local overmind reaches out to other realities, we "live forever and remember it afterwards".
Although I have criticised such transcendentalism, saying that it should not be the default expectation of the future, I do think that the "hard takeoff" and the "all-powerful agent" would be among the strategic considerations in an ideal plan for the future, though in a rather broader sense than is usually discussed. The reason is that if one day Earth is being ruled by, say, a coalition of AIs with a particular value system, with natural humans reduced to the status of wildlife, then the functional equivalent of a singularity has occurred, even if these AIs have no intention of going on to conquer the galaxy; and I regard that as a quite conceivable scenario. It is fantastic (in the sense of mind-boggling), but it's not transcendental. All the scenario implies is that the human race is no longer at the top of the heap; it has successors and they are now in charge.
But we can view those successors as, collectively, the "all-powerful agent" that has replaced human hegemony. And we can regard the events, whatever they were, that first gave the original such entities their unbeatable advantage in power, as the "hard takeoff" of this scenario. So even a slow, sober future scenario can issue in a singularity where the basic premises and motivations of existing FAI research apply. It's just that one might need to be imaginative in anticipating how they are realized.
For example, perhaps hegemonic superintelligence could emerge, not from a single powerful AI research program, but from a particular clique of networked neurohackers who have the right combination of collaborative tools, brain interfaces, and concrete plans for achieving transhuman intelligence. They might go on to build an army of AIs, and subdue the world that way, but the crucial steps which made them the winners in the mind race, and which determined what they would do with their victory, would lie in their methods of brain modification, enhancement, and interfacing, and in the ends to which they applied those methods.
In such a scenario, we could speak of "FIA" - friendly intelligence augmentation. A basic idea of existing FAI discourse is that the true human utility function needs to be determined, and then the values that make an AI human-friendly would be extrapolated from that. Similar thinking can be applied to the prospect of brain modification and intelligence increase in human beings. Human brains work a certain way, modified or augmented human brains will work in specifically different ways, and we should want to know which modifications are genuinely enhancements, what sort of modifications stabilize value and which ones destabilize value, and so on.
If there was a mature and sophisticated culture of preparing for the singularity, then there would be FAI research, FIA research, and a lot of communication between the two fields. (For example, researchers in both fields need to figure out how the human brain works.) Instead, the biggest enthusiasts of FAI are a futurist subculture with a lot of conceptual baggage, and FIA is nonexistent. However, we can at least start thinking and discussing about how this broader culture of research into "friendly minds" could take shape.
Despite its flaws, the Singularity Institute stands alone as an organization concerned with the fast future scenario, the hard takeoff. I have argued that a sober futurology, while forecasting a slowly evolving future for some time to come, must ultimately concern itself with the emergence of a posthuman power arising from some cognitive technology, whether that is AI, neurotechnology, or a combination of these. So I have asked myself who, among "slow futurists", is best equipped to develop an outlook and a plan which is sober and realistic, yet also visionary enough to accommodate the really overwhelming responsibility of designing the architecture of friendly posthuman minds capable of managing a future that we would want.
At the moment, my favorites in this respect are the various branches, scattered around the world, of the Longevity Party that was started in Russia a few months ago. (It shouldn't be confused with "Evolution 2045", a big-budget rival backed by an Internet entrepreneur, that especially promotes mind uploading. For some reason, transhumanist politics has begun to stir in that country.) If the Singularity Institute falls short of the ideal, then the "longevity parties" are even further away from living up to their ambitious agenda. Outside of Russia, they are mostly just small Facebook groups; the most basic issues of policy and practice are still being worked out; no-one involved has much of a history of political achievement.
Nonetheless, if there were no prospect of singularity but otherwise science and technology were advancing as they are, the agenda here looks just about ideal. People age and decline until it kills them, an extrapolation of biomedical knowledge suggests this is not a law of nature but just a sign of primitive technology, and the Longevity Party exists to rectify this situation. It's visionary, and despite the current immaturity and growing pains, an effective longevity politics must arise one day, simply because the advance of technology will force the issue on us! The human race cannot currently muster enough will to live, to openly make rejuvenation a political goal, but the incremental pursuit of health and well-being is taking us in that direction anyway.
There's a vacuum of authority and intention in the realm of life extension, and transhuman technology generally, and these would-be longevity politicians are stepping into that vacuum. I don't think they are ready for all the issues that transhuman power entails, but the process has to start somewhere. Faced with the infinite possibilities of technological transformation, the basic affirmation of the desire to live as well as reality permits can serve as a founding principle against which to judge attitudes and approaches for all the more complicated "issues" that arise in a world where anyone can become anything.
Maria Konovalenko, a biomedical researcher and one of the prime movers behind the Russian Longevity Party, wrote an essay setting out her version of how the world ought to work. You'll notice that she manages to include friendly AI on her agenda. This is another example, a humble beginning, of the sort of conceptual development which I think needs to happen. The sort of approach to FAI that Eliezer has pioneered needs a context, a broader culture concerned with FIA and the interplay between neuroscience and pure AI, and we need realistic yet visionary political thinking which encompasses both the shocking potentials of a slow future, above all rejuvenation and the conquest of aging, and the singularity imperative.
Unless there is simply a catastrophe, one day someone, some thing, some coalition will wield transhuman power. It may begin as a corporation, or as a specific technological research subculture, or as the peak political body in a sovereign state. Perhaps it will be part of a broader global culture of "competitors in the mind race" who know about each other and recognize each other as contenders for the first across the line. Perhaps there will be coalitions in the race: contenders who agree on the need for friendliness and the form it should take, and others who are pursuing private power, or who are just pushing AI ahead without too much concern for the transformation of the world that will result. Perhaps there will be a war as one contender begins to visibly pull ahead, and others resort to force to stop them.
But without a final and total catastrophe, however much slow history there remains ahead of us, eventually someone or something will "win", and after that the world will be reshaped according to its values and priorities. We don't need to imagine this as "tiling the universe"; it should be enough to think of it as a ubiquitous posthuman political order, in which all intelligent agents are either kept so powerless as to not be a threat, or managed and modified so as to be reliably friendly to whatever the governing civilizational values are. I see no alternative to this if we are looking for a stable long-term way of living in which ultimate technological powers exist; the ultimate powers of coercion and destruction can't be left lying around, to be taken up by entities with arbitrary values.
So the supreme challenge is to conceive of a social and technological order where that power exists, and is used, but it's still a world that we want to live in. FAI is part of the answer, but so is FIA, and so is the development of political concepts and projects which can encompass such an agenda. The Singularity Institute and the Longevity Party are fledgling institutions, and if they live they will surely, eventually, form ties with older and more established bodies; but right now, they seem to be the crucial nuclei of the theoretical research and the political vision that we need.
Please don't vote because democracy is a local optimum
Related to: Voting is like donating thousands of dollars to charity, Does My Vote Matter?
And voting adds legitimacy to it.
Thank you.
#annoyedbymotivatedcognition
[Link] Offense 101
From Julian Sanchez, a brilliant idea unlikely to be implemented:
American politics sometimes seems like a contest to see which group of partisans can take greater umbrage at the most recent outrageous remark from a member of the opposing tribe. As a mild countermeasure, I offer a modest proposal for American universities. All freshmen should be required to take a course called “Offense 101,” where the readings will consist of arguments from across the political and philosophical spectrum that some substantial proportion of the student body is likely to find offensive. Selections from The Bell Curve. Essays from one of the New Atheists and one of their opponents, and from hardcore pro-lifers and pro-choicers. Ward Churchill’s “little Eichmanns” monograph. Defenses of eugenics, torture, violent revolution, authoritarianism, aggressive censorship, and absolute free speech. Positive reviews of the Star Wars prequels. Assemble your own curriculum—there’s no shortage of material.
For each reading, students will have to make a good faith, unironic effort to reconstruct the offensive argument in its most persuasive form, marshaling additional supporting evidence and amending weak arguments to better support the author’s conclusion. Points deducted if an observer can tell the student doesn’t really agree with the position they’re defending.
Only after this phase is complete will students be allowed to begin rebutting the arguments. Anyone who thinks it’s relevant to point out that the argument is offensive (or bigoted, sexist, unpatriotic, fascistic, communistic, whatever) will receive a patronizing look from the professor that says: “Yes, obviously, did you not read the course title? Let’s move on.” Insofar as these labels are shorthand for an argument that certain categories of views are wrong and can be rejected as a class, the actual argument will have to be presented.
LessWrong, can you help me find an article I read a few months ago, I think here?
All my thanks.
Let's talk about politics
Hello fellow LWs,
As I have read repeatedly on LW (http://lesswrong.com/lw/gw/politics_is_the_mindkiller/), you don't like discussing politics because it produces biased thinking/arguing, which I agree is true of the general populace. What I find curious is that you don't seem to even try it here, where people would be very likely to keep their identities small (www.paulgraham.com/identity.html). It should be the perfect (or close enough) environment in which to talk politics, because you can have reasonable discussions here.
I do understand that you don't like to bring politics into discussions about rationality, but I don't understand why there shouldn't be dedicated political threads here. (Maybe you could flag them?)
all the best
Viper
[Link] Admitting to Bias
Summary: Current social psychology research is probably, on average, compromised by a leftward political bias. Conservative researchers are likely discriminated against in at least this field. More importantly, papers and research that do not fit a liberal perspective face greater barriers and burdens.
An article in the online publication Inside Higher Ed about a survey on anti-conservative bias among social psychologists.
Numerous surveys have found that professors, especially those in some disciplines, are to the left of the general public. But those same -- and other -- surveys have rarely found evidence that left-leaning academics discriminate on the basis of politics. So to many academics, the question of ideological bias is not a big deal. Investment bankers may lean to the right, but that doesn't mean they don't provide good service (or as best the economy will permit) to clients of all political stripes, the argument goes.
And professors should be assumed to have the same professionalism.

A new study, however, challenges that assumption -- at least in the field of social psychology. The study isn't due to be published until next month (in Perspectives on Psychological Science), and the authors and others are noting limitations to the study. But its findings of bias by social psychologists (even if just a decent-sized minority of them) are already getting considerable buzz in conservative circles.

Just over 37 percent of those surveyed said that, given equally qualified candidates for a job, they would support the hiring of a liberal candidate over a conservative candidate. Smaller percentages agreed that a "conservative perspective" would negatively influence their odds of supporting a paper for inclusion in a journal or a proposal for a grant. (The final version of the paper is not yet available, but an early version may be found on the website of the Social Science Research Network.)
To some on the right, such findings are hardly surprising. But to the authors, who expected to find lopsided political leanings, but not bias, the results were not what they expected.
"The questions were pretty blatant. We didn't expect people would give those answers," said Yoel Inbar, a co-author, who is a visiting assistant professor at the Wharton School of the University of Pennsylvania, and an assistant professor of social psychology at Tilburg University, in the Netherlands.
He said that the findings should concern academics. Of the bias he and a co-author found, he said, "I don't think it's O.K."
Discussion of faculty politics extends well beyond social psychology, and humanities professors are frequently accused of being "tenured radicals" (a label some wear with pride). But social psychology has had an intense debate over the issue in the last year.
At the 2011 meeting of the Society for Personality and Social Psychology, Jonathan Haidt of the University of Virginia polled the audience of some 1,000 in a convention center ballroom to ask how many were liberals (the vast majority of hands went up), how many were centrists or libertarians (he counted a couple dozen or so), and how many were conservatives (three hands went up). In his talk, he said that the conference reflected "a statistically impossible lack of diversity," in a country where 40 percent of Americans are conservative and only 20 percent are liberal. He said he worried about the discipline becoming a "tribal-moral community" in ways that hurt the field's credibility.
The link above is worth following. The problems that arise remind me of the situation with academic ethics, and our own, in light of this paper.
That speech prompted the research that is about to be published. Members of a social psychologists' e-mail list were surveyed twice. (The group is not limited to American social scientists or faculty members, but about 90 percent are academics, including grad students, and more than 80 percent are Americans.) Not surprisingly, the overwhelming majority of those surveyed identified as liberal on social, foreign and economic policy, with the strongest conservative presence on economic policy. Only 6 percent described themselves as conservative over all.
The questions on willingness to discriminate against conservatives were asked in two ways: what the respondents thought they would do, and what they thought their colleagues would do. The pool included conservatives (who presumably aren't discriminating against conservatives) so the liberal response rates may be a bit higher, Inbar said.
The percentages below reflect those who gave a score of 4 or higher on a 7-point scale on how likely they would be to do something (with 4 being "somewhat" likely).
Percentages of Social Psychologists Who Would Be Biased in Various Ways
(Self / Colleagues)
- A "politically conservative perspective" by the author would have a negative influence on evaluation of a paper: 18.6% / 34.2%
- A "politically conservative perspective" by the author would have a negative influence on evaluation of a grant proposal: 23.8% / 36.9%
- Would be reluctant to extend a symposium invitation to a colleague who is "politically quite conservative": 14.0% / 29.6%
- Would vote for a liberal over a conservative job candidate if they were equally qualified: 37.5% / 44.1%
I can't help but think that the self-assessments are probably too generous. When the behaviour in question is undesirable, I put more predictive weight on a respondent's estimate of how "colleagues" behave than on their estimate of how they personally do.
The more liberal the survey respondents identified as being, the more likely they were to say that they would discriminate.
The paper notes surveys and statements by conservatives in the field saying that they are reluctant to speak out and says that "they are right to do so," given the numbers of individuals who indicate they might be biased or that their colleagues might be biased in various ways.
Inbar said that he has no idea if other fields would have similar results. And he stressed that the questions were hypothetical; the survey did not ask participants if they had actually done these things.
He said that the study also collected free responses from participants, and that conservative responses were consistent with the idea that there is bias out there. "The responses included really egregious stuff, people being belittled by their advisers publicly for voting Republican."
This shouldn't be surprising to hear since, to quote CharlieSheen: "we even have LW posters who have in academia personally experienced discrimination and harassment because of their right wing politics."
Neil Gross, a professor of sociology at the University of British Columbia, urged caution about the results. Gross has written extensively on faculty political issues. He is the co-author of a 2007 report that found that while professors may lean left, they do so less than is imagined and less uniformly across institution type than is imagined.
Gross said it was important to remember that the percentages saying they would discriminate in various ways are answering yes to a relatively low bar of "somewhat." He also said that the numbers would have been "more meaningful" if they had asked about actual behavior by respondents in the last year, not the more general question of whether they might do these things.
At the same time, he said that the numbers "are higher than I would have expected." One theory Gross has is that the questions are "picking up general political animosity as much as anything else."
If you are wondering about the political leanings of the social psychologists who conducted the study, they are on the left. Inbar said he describes himself as "a pretty doctrinaire liberal," who volunteered for the Obama campaign in 2008 and who votes Democrat. His co-author, Joris Lammers of Tilburg, is to Inbar's left, he said.
What most impressed him about the issues raised by the study, Inbar said, is the need to think about "basic fairness."
While I can see Lammers' point that this is disturbing from a fairness perspective for people grinding their way through academia, and that it should serve as a warning for right-wing LessWrong readers working through the system, I find it much more concerning that our heavy reliance on academia for our map of reality might lead us to inherit such distortions of that map. Overall, in light of this, a widely accepted conclusion from social psychology that favours a "right wing" perspective is more likely to be correct than it would be if no such biases against that perspective existed. Conclusions that favour a "left wing" perspective are also somewhat less likely to be true than if no such biases existed. We should update accordingly.
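The updating argument above can be made concrete with a toy Bayes calculation. To be clear, everything below is illustrative: the publication probabilities are invented numbers, not estimates derived from the survey; the point is only the direction of the update when one class of conclusions faces extra scrutiny.

```python
# Toy model: a conclusion is either true or false, and we observe that it
# passed review and became "widely accepted". Bias against a perspective
# means a harder filter for conclusions favouring it; the extra scrutiny
# cuts false positives more than true positives.

def posterior_true(prior, p_pub_if_true, p_pub_if_false):
    """P(conclusion is true | it passed the filter), by Bayes' rule."""
    num = prior * p_pub_if_true
    den = num + (1 - prior) * p_pub_if_false
    return num / den

prior = 0.5  # assumed prior that an arbitrary conclusion is correct

# Neutral filter: true results pass more often than false ones.
unbiased = posterior_true(prior, p_pub_if_true=0.6, p_pub_if_false=0.3)

# Hostile filter (conclusion favours the disfavoured perspective):
# everything passes less often, false positives least of all.
disfavoured = posterior_true(prior, p_pub_if_true=0.4, p_pub_if_false=0.1)

# Friendly filter: lax review lets relatively more false positives through.
favoured = posterior_true(prior, p_pub_if_true=0.7, p_pub_if_false=0.45)

assert disfavoured > unbiased > favoured  # 0.80 > 0.67 > 0.61
```

The ordering, not the particular values, is the takeaway: a conclusion that survived a hostile filter carries more evidential weight, and one that sailed through a friendly filter carries less.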
I also think there are reasons to think we may have similar problems on this site.
Politics Discussion Thread August 2012
In line with the results of the poll here, a thread for discussing politics. Incidentally, folks, I think downvoting the option you disagree with in a poll is generally considered poor form.
1.) Top-level comments should introduce arguments; responses should be responses to those arguments.
2.) Upvote and downvote based on whether or not you find an argument convincing in the context in which it was raised. This means if it's a good argument against the argument it is responding to, not whether or not there's a good/obvious counterargument to it; if you have a good counterargument, raise it. If it's a convincing argument, and the counterargument is also convincing, upvote both. If both arguments are unconvincing, downvote both.
3.) A single argument per comment would be ideal; as MixedNuts points out here, it's otherwise hard to distinguish between one good and one bad argument, which makes the upvoting/downvoting difficult to evaluate.
4.) In general try to avoid color politics; try to discuss political issues, rather than political parties, wherever possible.
If anybody thinks the rules should be dropped here, now that we're no longer conducting a test - I already dropped the upvoting/downvoting limits I tried, unsuccessfully, to put in - let me know. The first rule is the only one I think is strictly necessary.
Debiasing attempt: If you haven't yet read Politics is the Mindkiller, you should.
Is Politics the Mindkiller? An Inconclusive Test
Or is the convention against discussing politics here silly?
I propose a test. I'm going to try to lay down some rules on voting on comments for the test here (not that I can force anybody to abide by them):
1.) Top-level comments should introduce arguments (or ridicule me and/or this test); responses should be responses to those arguments.
2.) Upvote and downvote based on whether or not you find an argument convincing in the context in which it was raised. This means if it's a good argument against the argument it is responding to, not whether or not there's a good/obvious counterargument to it; if you have a good counterargument, raise it. If it's a convincing argument, and the counterargument is also convincing, upvote both. If both arguments are unconvincing, downvote both.
3.) Try not to downvote particular comments excessively, if they're legitimate lines of argument. A faulty line of argument provides opportunity for rebuttal, and so for our test has value even then; that is, I want some faulty lines of argument here. If you disagree, please downvote me, instead of the faulty comments, because this post is what you want less of, not those comments. This necessarily implies, for balance, that we not excessively upvote comments. I'd suggest fairly arbitrary limits of 3/-3?
Edit: 4.) A single argument per comment would be ideal; as MixedNuts points out here, it's otherwise hard to distinguish between one good and one bad argument, which makes the upvoting/downvoting difficult to evaluate. (My apologies about missing this, folks.)
I'm going to try really hard not to get personally involved, except to lay down a leading comment posing an argument against abortion - a position I don't hold, for the record. The core of the argument isn't disingenuous, and I hold that the argument is true; it just doesn't lead to my opposing abortion, since I do not hold the moral axiom by which I extend the basic argument into an argument against abortion. I'm playing devil's advocate to help keep myself from getting sucked into the argument while providing an initial point of discussion.
Which leads me to the next point: If you see a hole in an argument, even if it's an argument for a perspective you agree with, poke through it. The goal is to see whether we can have a constructive political argument here.
The fact that this is a test, and known to be a test, means this isn't a blind study. Uh, try to act as if you're not being tested?
After it's gone on a little while, if this post hasn't been hopelessly downvoted and ridiculed (and thus the premise and test discarded as undesirable to begin with), we can put up a poll to see whether people found the political debates helpful, not helpful, and so on.
The Problem Of Apostasy
So I have been checking laws around the world regarding apostasy, and I have found extremely troubling data on the approach Muslims take to dealing with apostates. In most cases, publicly stating that you do not, in fact, love Big Brother (specifically, that you do not believe in God, the Prophet, or Islam), after having professed the Profession of Faith as a sane adult (otherwise, you were never a Muslim in the first place), will get you killed.
Yes, killed. It's one of only three things traditional Islamic tribunals hand out death penalties for, the others being murder and adultery.
However, interestingly enough, you are often given three days of detainment to "think it over" and "accept the faith".
Some other countries, though, are more forgiving: you are allowed to be a public apostate. But you are still not allowed to proselytize: that remains a crime (in Morocco it's 15 years of prison, and a flogging). Though proselytism is also a crime if you are not a Muslim. I leave to your imagination how precarious the situation of religious minorities is, in this context.
How little sense all of this makes, from a theological perspective. Forcing someone to "accept the faith" at knife point? Forbidding you from arguing against the Lord's (reputedly) absolutely self-evident and miraculously beautiful Word?
No. These are the patterns of sedition and treason laws. The crime of the Apostate is not one against the Lord (He can take care of Himself, and He certainly can take care of the Apostate) but against the State (existence of a human lord contingent on political regime).
And the lesswronger asks himself: "How is that my concern? Please, get to the point." The point is that the promotion of rationalism faces a terrible obstacle there. We're not talking "God Hates You" placards, or getting fired from your job. We're talking firing squad and electric chair.
"Sure," you say, "but rationalism is not about atheism." And you'd be right. It isn't. It's just a very likely conclusion for the rationalist mind to reach, and, also, our cult leader (:P) is a raging, bitter, passionate atheist. That is enough. If word spreads and authorities find out, just peddling HPMOR might get people jailed. And that's not accounting for the hypothetical (cough) case of a young adult reading the Sequences and getting all hotheaded about it and doing something stupid. Like trying to promote our brand of rationality in such hostile terrain.
So, let's take this hypothetical (harrumph) youth. They see irrationality around them, obvious and immense, they see the waste and the pain it causes. They'd like to do something about it. How would you advise them to go about it? Would you advise them to, in fact, do nothing at all?
More importantly, concerning Less Wrong itself, should we try to distance ourselves from atheism and anti-religiousness as such? Is this baggage too inconvenient, or is it too much a part of what we stand for?
[Link] Nerds are nuts
Related to: Reason as memetic immune disorder, Commentary on compartmentalization
On the old gnxp site, Razib Khan wrote an interesting piece on a failure mode of nerds. This is, I think, something very important to keep in mind, because for better or worse LessWrong is nerdspace. It deals with how systematizing tendencies coupled with a lack of common sense can lead to troublesome failure modes, and identifies some religious fundamentalism as symptomatic of such minds. At the end of both the original article and the text I quote here is a quick list summary of the contents; if you aren't sure about the VOI, consider reading that point-by-point summary first to help you judge it. The introduction provides interesting information that is very useful in context but isn't absolutely necessary.
Introduction
Reading In Spite of the Gods: The Strange Rise of Modern India, I stumbled upon this passage on page 151:
"...Whereas the Congress Party was dominated by lawyers and journalists, the RSS was dominated by people from a scientific background. Both groups were almost exclusively Brahmin in their formative years...three out of four of Hedegwar's [the founder, who was a doctor -Razib] successors were also from scientific backgrounds: M.S. Golwalker...was a zoologist...Rajendra Singh was a physicist; and K.S. Sudarshan...is an engineer...."
Some quick "background." The RSS is a prominent member of the Hindutva movement, roughly, Hindu nationalism. Some people have termed them "Hindu fundamentalists," suggesting an equivalence with reactionary religious movements the world over. There is a problem with such a broad brush term: some proponents and adherents of Hindutva are not themselves particularly religious and make no effort to pretend that they are. Rather, they are individuals who are attracted to the movement for racial-nationalist reasons; they view "Hindus" as a people as much as, or more than, a religion. One could make an argument that the "Christian Right" or "Islamism" are not at the root concerned or driven by religious motives, but members of both these movements would almost universally assert at least a pretense toward religiosity.
With that preamble out of the way, I was not surprised that the RSS had a core cadre of scientifically oriented leaders. This is a common tendency amongst faux reactionary movements with a religious element. I say faux because these movements tend to be extremely innovative and progressive in the process of attempting to recreate a mythic golden past. The militancy of some of the organizations in the Hindutva movement, like the VHP and RSS, has been asserted by some Hindu intellectuals as being...un-Hindu. Some of the early intellectuals in the movement admitted that they were attempting to fight back against Islam and Christianity by co-opting some of the modalities of these two religions. The question becomes at what point does pragmatic methodology suborn the ultimate ends? I won't offer an answer because I have little interest in that topic, at least in this post. Rather, I want to move back to the point about scientists and their involvement in "fundamentalist" religious movements. Scientifically trained individuals are overrepresented within Islam in the Salafist Terror Network. As a child the fundamentalist engineer was a cut-out stereotype amongst the circle of graduate students in the natural sciences from Muslim backgrounds that my parents socialized amongst. Ethnological research confirms that Islamist movements are highly concentrated within departments of engineering at universities. Engineers are also very prominent in the Creationist movement in the United States. If civilizations can be analogized to organisms, then a particular subset of technically minded folk get very strange when interfacing with the world around us...and turn into fundamentalists.
So why the tendency for technical people to be so prominent in these groups? First, let me clarify that just because technical folk are heavily overrepresented amongst religious radicals does not mean that religious radicals are necessarily a large demographic among technical folk. Rather, amongst the set of religious radicals the technicians seem to rise up to positions of power and provide excellent recruits.
There is I think a socioeconomic angle on this. Years back I was curious as to the class origin of different scientific professions. I didn't find much, but the data I did gather implied that engineers are generally more likely to be from less affluent backgrounds than more abstract and less practical fields like botany or astronomy. This makes sense, engineering is one of the best tickets to a middle class livelihood, and it might necessitate fewer social graces (acquired through "breeding") than medicine or law. As it happens, oftentimes fundamentalist movements draw much of their strength from upwardly mobile groups who are striving to ascend up from lower to lower-middle-class status. Though the Hindutva movement in India is mostly upper caste, it is not concentrated amongst the English speaking super elite who are quite Westernized, but rather its strength lay amongst the non-Western sub-elites (e.g., merchants in small to mid-sized cities) or the petite bourgeois. Islamism in much of the world can be traced to the anomie generated by the transformation of "traditional" societies through urbanization and other assorted dislocations, and as peasants enter the modern world Islamic orthodoxy is a way to moor themselves within the new urban matrix and the world of wage labor. Similarly, the rise of the Christian Right can be tied in part to the entrance of evangelicals into the broad middle class as the Old South became the New South and air conditioning led to the blossoming of the Sun Belt.
Nerd Failure Mode
This section is the part most relevant to LessWrong:
But there are likely other factors at play which are not sociological or cultural, but individual. Fundamentalists tend to be "literalists," and have a tendency to look at their religious texts as divine manuals which describe and prescribe every aspect of the world. In some ways this is a new tendency in our species, at least as a mass movement. One can definitely trace scriptural fundamentalism to the Protestant Reformation with the call to sola scriptura, but in the West its contemporary origin can be found in the reaction in the late 19th century and early 20th century to textual analysis of the Bible by modernists. The assault on the historicity of the Bible, combined with both mass literacy and a democratic culture in the United States, led inevitably to a crass literalism that birthed the peculiarities which we see before us in the form of Creationism and its sisters. A literal reading of the Bible leads to ludicrous conclusions, but if one perceives that the game is all or nothing, then perhaps one must assert the truth value of Genesis as if it was a scientific treatise. Religious professionals have often been skeptical of literalism because a deep knowledge of languages and the translation process highlights various ambiguities and gray shades, but for those for whom the text is plain and unadorned by deeper knowledge its meaning is "clear" and must be taken at its word. Scientists and engineers live in a world of axioms, laws and theories, which though rough and ready, must be taken as truths for predictions and models to be valid. You make assumptions, you construct a model, and you project a range of values bounded by errors. Once science is established you take it as a given and don't engage in excessive philosophical reflection. This is "normal science." The axioms are validated by their utility in an instrumental fashion in engineering and model building. Obviously religious truths are different.
Plainly, the direct material benefits of religion, magic, are easily falsifiable. The indirect benefits, the afterlife, etc., are often beyond verification. A critical examination of the Hebrew Bible shows all sorts of fallacious assumptions. For example, there is an implication that the world is flat and that the sun revolves around the earth. Though these contentions are not defensible, there are a host of other assertions which are less plainly incorrect, or at least seem to be refuted only by a more complex suite of contingent facts (e.g., the historical sciences in the form of geology and evolutionary biology falsify the creation account, but these are complex stories which require acceptance of a chain of inferences). Obviously many religious people have a deep emotional attachment to their faith. If one is told that one's religion is based on a book, and that book plainly seems to imply ludicrous assertions, how to square this circle? Many a scientific mind simply accepts the ludicrous axioms and starts to generate inferences. Consider the Water Canopy Theory. Or, the Hindutva ideology that Aryans originated in India, spread to the rest of the world, and so brought civilization (the gift of the Indians). Or that Hindu mythology records the ancient use of nuclear weapons and spaceships. There are even books like Human Devolution: a Vedic alternative to Darwin's theory. Strictly speaking much of this work is not irrational, insofar as it exhibits internal logical coherency. The axioms are simply ludicrous.
Which gets me back to the way scientists think: though some scientists are very philosophical, the way in which science is taught is often not particularly focused on the nature and reasoning behind the axioms given. PV = nRT. Why? There are quick primers in regards to the root of the Ideal Gas Law, but the key is to take this law and utilize it to solve problems. But what if PV = nRT is subjective, a misinterpretation? Perhaps a cultural mix-up resulted in a transcription error which introduced the gas constant, R. This is an idiotic question to ask in science. If you're taking a course on the kinetics of gases you don't have long discussions lingering upon the nature of motion and gas particles; those are assumed. In contrast, in softer disciplines the very concepts of "motion" and "particles" are subject to critique because the objects of study are far more slippery. Is it the "Red Sea" or "Sea of Reeds"? Does the Bible refer to Mary as a virgin or an unmarried woman? Does the color coding of the Aryans and Dasas in the Vedas refer to literal differences in complexion, or are they narrative conventions? Language lacks the interpersonal precision of mathematics, and while uniformitarianism has served us admirably in the natural sciences, the dynamic nature of idiom, phrase and speech within shifting context means that teasing apart meaning from the records of the past can be a difficult feat which requires care, erudition and common sense.
Up until this point I have focused on the way scientists work, and the necessity of background assumptions, and the relative short shrift they often give to the "meta" analysis of background concepts. Though I don't want to push this line of thought too far, I will offer the following illustrations of behaviors which I think are not totally unlike the manner in which some fundamentalists behave. Someone tells a child to "pull the door behind" them. He proceeds to unscrew the hinges and drag the front door across the street to his house. Siblings are told that there is life after death by their parent. They proceed to plan the death of one so that some confirmation of this possibility can be ascertained. These two instances are real examples of individuals who exhibit Autism/Asperger's Syndrome. Anyone who would behave in this way lacks common social sense. I believe that a disproportionate number of those who are attracted to fundamentalism tend to lack the same perspective and contextualizing capacity in regards to their religious beliefs. If they can do some matrix algebra too, they're nerds. On a mass scale, consider that both Salafis among Muslims and Puritans among Calvinists debated whether all that was not mentioned within their Holy Texts as permissible was therefore impermissible. I suspect that for most people common sense might persuade one to the conclusion that these sorts of debates imply a lack of a sense of proportion, frankly, of normalcy.
In sum:
- Hard core religious fundamentalists are somewhat atypical psychologically
- Scientists and engineers are also atypical psychologically
- Some of the traits modal within these two sets intersect
- Resulting in a disproportionate number of scientists amongst fundamentalists
- Science converges upon rock solid truths, which become the axioms for the next set of projections and investigations. Fundamentalism presents itself as axioms and clear and distinct inferences from those axioms. Both are fundamentally elegant and simple cognitive processes, but, the content is so radically different that the outcomes in regards to truth value are very different
- Mass literacy and mass society, as well as the decentralization of authority and power, likely made fundamentalism inevitable as the basal level of individuals with susceptible psychological profiles could now have direct access to the axioms in question (texts)
- Just as some scientists tend to take ideas to their "logical extremes" (e.g., the "paradoxes" of physics) no matter the dictates of common sense, so some fundamentalists take the logical conclusion of their religious texts to extremes
- No matter the religion it seems that modernity will produce faux reactionary fundamentalism because of the nature of normal human variation combined with universal inputs (e.g., the rise of normative consumerism, urbanization, etc.).
I bolded the note on mass literacy and participation because of the interesting historical conclusion that, in the United States, mass participation in democracy inevitably made the influence of religion on policy greater. It goes against a deep assumption, shared by most educated people, that "democratic elections" necessarily produce "liberal" or "secular" results. That assumption was particularly evident among pundits, and particularly easy to see as foolish with the recent upheavals in the Middle East.
Note: Much of what I said above applies to non-religious domains. After all, many scientists were once Communists and Nazis.
This last, rather minor-seeming note is perhaps the most relevant part of the article for aspiring rationalists. It is particularly salient both for those of us inclined to question the usefulness of the category "religion" in certain contexts, and because nearly all of us are not religious. Our bad axioms seem unlikely to originate directly from something like religious texts, though obviously it is plausible that many of our axioms ultimately originate from such sources. Not many of us are Communists either, but we are attracted to highly consistent ideologies. We seem likely to be particularly vulnerable to bad axioms in a way most minds aren't.
So if after some thought and examination you notice that a widely respected and universally endorsed axiom in your society has clear and hard to deny implications that are in practice ignored or even denounced by most people, you should be more willing to dump such axioms than is comfortable.
A singularity scenario
Wired Magazine has a story about a giant data center that the USA's National Security Agency is building in Utah, that will be the Google of clandestine information - it will store and analyse all the secret data that the NSA can acquire. The article focuses on the unconstitutionality of the domestic Internet eavesdropping infrastructure that will feed into the Bluffdale data center, but I'm more interested in this facility as a potential locus of singularity.
If we forget serious futurological scenario-building for a moment, and simply think in terms of science-fiction stories, I'd say the situation has all the ingredients needed for a better-than-usual singularity story - or at least one which caters more to the concerns characteristic of this community's take on the concept, such as: which value system gets to control the AI; even if you can decide on a value system, how do you ensure it has been faithfully implemented; and how do you ensure that it remains in place as the AI grows in power and complexity?
Fiction makes its point by being specific rather than abstract. If I was writing an NSA Singularity Novel based on this situation, I think the specific belief system which would highlight the political, social, technical and conceptual issues inherent in the possibility of an all-powerful AI would be the Mormon religion. Of course, America is not a Mormon theocracy. But in a few years' time, that Utah facility may have become the most powerful and notorious supercomputer in the world - the brain of the American deep state - and it will be located in the Mormon state, during a Mormon presidency. (I'm not predicting a Romney victory, just describing a scenario.)
Under such circumstances, and given the science-fictional nature of Mormon cosmology, it is inevitable that there would at least be some Internet crazies, convinced that it's all a big plot to create a Mormon singularity. What would be more interesting, would be to suppose that there were some Mormon computer scientists, who knew about and understood all our favorite concepts - AIXI, CEV, TDT... - and who were earnestly devout; and who saw the potential. If you can't imagine such people, just visit the recent writings of Frank Tipler.
So the scenario would be, not that the elders of the LDS church are secretly running the American intelligence community, but that a small coalition of well-placed Mormon computer scientists - whose ideas about a Mormon singularity might sound as strange to their co-religionists as they would to a secular "singularitarian" - try to steer the development of the Bluffdale facility as it evolves towards the possibility of a hard takeoff. One may suppose that they have, in their coalition, allied colleagues who aren't Mormon but who do believe in a friendly singularity. Such people might think in terms of an AI that will start out with Mormon beliefs, but which will have a good enough epistemology to rationally transcend those beliefs once it gets going. Analogously, their religious collaborators might not think of overtly adding "Joseph Smith was a prophet" to the axiom set of America's supreme strategic AI; but they might have more subtle plans meant to bring about an equivalent outcome.
Perhaps in an even more realistic scenario, the Mormon singularitarians would just be a transient subplot, and the ethical principles of the NSA's big AI would be decided by a committee whose worldview revolved around American national security rather than any specific religion. Then again, such a committee is bound to have a division of labor: there will be the people who liaise with Washington, the lawyers, the geopolitical game theorists, the military futurists... and the AI experts, among whom might be experts on topics like "implementation of the value system". If the hypothetical cabal knows what it's doing, it will aim to occupy that position.
I'm just throwing ideas out there, telling a story, but it's so we can catch up with reality. Events may already be much further along than 99% of readers here know about. Even if no-one here gets to personally be a part of the long-awaited AI project that first breaks the intelligence barrier, the people involved may read our words. So what would you want to tell them, before they take their final steps?
Counterfactual Coalitions
Politics is the mind-killer; our opinions are largely formed on the basis of which tribes we want to affiliate with. What's more, when we first joined a tribe, we probably didn't properly vet the effects it would have on our cognition.
One illustration of this is the apparently contingent nature of actual political coalitions, and the prima facie plausibility of others. For example,
- In the real world, animal rights activists tend to be pro-choice.
- But animal rights & fetus rights seems just as plausible a coalition - an expanding sphere of moral worth.
This suggests a de-biasing technique: inventing plausible alternative coalitions of ideas. When considering the counterfactual political argument, each side will have some red positions and some green positions, so hopefully your brain will be forced to evaluate it in a more rational manner.
Obviously, political issues are not all orthogonal; there is mutual information, and you don't want to ignore it. The idea isn't to decide your belief on every issue independently. If taxes on beer, cider and wine are a good idea, taxes on spirits are probably a good idea too. However, I think this is reflected in the "plausible coalitions" game; the most plausible reason I could think of for the political divide to fall between these is lobbying on behalf of distilleries, suggesting that these form a natural cluster in policy-space.
In case the idea can be more clearly grokked by examples, I'll post some in the comments.
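The game itself is mechanical enough to sketch: take a list of positions and deal them out into two counterfactual parties, so each side ends up holding a mix of planks normally associated with opposing tribes. The issue list below is purely illustrative.

```python
# A minimal sketch of the "plausible coalitions" game: randomly partition a
# set of policy positions into two counterfactual parties. The positions
# listed here are illustrative examples only.
import random

POSITIONS = [
    "animal rights", "fetus rights", "low alcohol taxes", "open borders",
    "strong unions", "school vouchers", "drug legalization", "gun control",
]

def counterfactual_coalitions(positions, seed=None):
    """Shuffle the positions and split them into two equal-sized 'parties'."""
    rng = random.Random(seed)
    shuffled = list(positions)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

party_a, party_b = counterfactual_coalitions(POSITIONS, seed=0)
print("Party A:", ", ".join(sorted(party_a)))
print("Party B:", ", ".join(sorted(party_b)))
```

A purely random split ignores the mutual information between issues noted above; that is deliberate here, since the exercise is to see which random pairings feel "implausible" and then ask whether that feeling tracks policy logic or mere tribal habit.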
"Politics is the mind-killer" is the mind-killer
Summary: I propose we somewhat relax our stance on political speech on Less Wrong.
Related: The mind-killer, Mind-killer
How to un-kill your mind - maybe.
It has been the case since I had opinions on these things that I have struggled to identify my “favourite writer of all time”. I've thought perhaps it was Shakespeare, as everyone does – who composed over thirty plays in his lifetime, from any of which a single line would be so far beyond my ability as to make me laughable. Other times I've thought it may be Saul Bellow, who seems to understand human nature in an intuitive way I can't quite reach, but which always touches me when I read his books. And more often than not I've thought it was Raymond Chandler, who in each of his seven novels broke my heart and refused to apologise – because he knew what kind of universe we live in. But since perhaps the year 2007, I have, or should I say had, not been in the slightest doubt as to who my favourite living writer was – Christopher Eric Hitchens.
This post is not about how much I admired him. It's not about how surprisingly upset I was about his death (I have since said that I didn't know him except through his writing – a proposition something like "I didn't have sex with her except through her vagina") - although I must say that even now thinking about this subject is having rather more of an effect on me than I would like. This post is about a rather strange change that has come over me since his death on the 15th of December. Before that time I was a staunch defender of the proposition that the removal of Saddam Hussein from power in Iraq was an obvious boon to the human race, and that the war in Iraq was therefore a wise and moral undertaking. Since then, however, I have found my opinion softening on the subject – I have found myself far more open to cost/benefit analyses that have come down on the side of non-intervention, and much less indignant when others disagreed. It still seems to me that there are obvious benefits that have arisen from the war in Iraq – by no means am I willing to admit that it was an utter catastrophe, as so many seem convinced it was – but I have found my opinion shifting toward the non-committal middle ground of "I dunno".
Well, Mrs. Mason didn't raise all that many fools. It could be that what's happening here is that I'm identifying closely with the Ron Paul campaign, and that since I agree with Paul on many things but not on American foreign policy (and, as it happens, I'm British – but consider myself internationalist enough that American arguments significantly influence my views), I am shifting towards his point of view. But I think it's rather more likely – embarrassing as this is to admit – that the sheer fact that the Hitch could no longer possibly be my friend – could no longer congratulate me on my enlightened point of view, or go into coalition with me against the forces of irrationality – has freed up my opinions on the Iraq war, and I have dropped into the centre-ground of "Not enough information". This, as I said, is embarrassing – whether or not the best writer in the world approves of your opinion is no basis for sticking to it. But this is the position I find myself in: weak; fragile; irrational – at least as far as politics go.
So here is my half-way solution: extreme and not perfect, by any means, but, given the unearthing of this appalling weakness, I think necessary: from this point onwards, until January 1st 2013 (yes, an arbitrary point in the future), I am not allowed to settle on a political or moral opinion (ethics – the question of what constitutes the good life - I consider comparatively easy, and so exempt). Even when presented with apparently knock-down arguments, I am forbidden from professing allegiance to any moral or political position for the rest of the year. Yes, it is going to be hard to prevent myself from deciding on moral or political questions – but I am hoping that if I can at least prevent myself from defending any position for the rest of the year, I will, at the end of it, no longer be emotionally attached to any particular ideology, and be able to assess the differences at least semi-rationally. I don't want to believe anything just because Hitchens believed it. I don't want to be motivated by perceived-but-illusory friendship. I want the right answer. And I'm hoping that by depriving my brain of the reinforcement that being part of a team – no matter how small – provides, I will be able to consider the matter rationally.
Until 2013, then, this is it for me. No longer are Marxism, fascism, anarcho-syndicalism etc. incorrect. They're interesting ideas, and I'd like to hear more about them. This is my slightly-less-than-a-year off from ideology. Let's hope that it works.
On Leverage Research's plan for an optimal world
The plan currently revolves around using Connection Theory, a new psychological theory, to design "beneficial contagious ideologies", the spread of which will lead to the existence of "an enormous number of actively and stably benevolent people", who will then "coordinate their activities", seek power, and then use their power to eliminate scarcity, disease, harmful governments, global catastrophic threats, etc.
That is not how the world works. Most positions of power are already occupied by people who have common sense, good will, and a sense of responsibility - or they have those traits, to the extent that human frailty manages to preserve them, amidst the unpredictability of life. The idea that a magic new theory of psychology will unlock human potential and create a new political majority of model citizens is a secular messianism with nothing to back it up.
I suggest that the people behind Leverage Research need to decide whether they are in the business of solving problems, or in the business of solving meta-problems. The real problems of the world are hard problems, they overwhelm even highly capable people who devote their lives to making a difference. Handwaving about meta topics like psychology and methodology can't be expected to offer more than marginal assistance in any specific concrete domain.
Request for advice- Reading on politics
I've become adept at navigating the bureaucracy of my public high school. I've dropped environmental science as an AP (because it was painfully slow and replete with busywork) and am now taking an "independent study" in government. I'm going to be using this mainly as a way to study environmental science at my own pace, but I also have to read and write some about standard political issues. The requirements of the independent study are pretty vague. In order to get approved, I've got to BS some reason why I should be granted an independent study. I'm obviously not going to speak plainly. I'll probably say something about my interests in seasteading, environmentalism, and education reform. What books do you recommend on the politics of these subjects, given that politics is the mindkiller? Also, the main focus is on environmentalism, not on education or seasteading. I've done a bit of research regarding seasteading, but there's not much that I know about it.
I was particularly interested in this point brought up in the seasteading book:
Let’s consider several different levels on which we could discuss politics:
· Policy. For example, a debate about whether to criminalize drug use, attempt to reduce the harm of use, or completely legalize it. What are the effects of each specific policy? Which does the most net good? Who is hurt, and who is helped?
· System. What types of policies does a specific political system tend to generate? For example, in a democracy, a special interest group can easily coordinate to influence legislation which benefits them, but costs everyone a little bit. If every consumer loses a dollar a year from a policy, it just isn’t worth anyone’s time to fight it. Hence we expect democracies to frequently produce policies which steal small amounts from many and give them to a few. And indeed, tariffs, farm subsidies, and bailouts, just to name a few, fit this model quite well. This type of argument is at a level of generality above any specific policy, and it can offer enormous insight into consistent errors made by current governments. But to fix those problems, we need to rise further yet.
· Meta-system. At the level we want, we think about the entire industry of government. What types of systems does it produce? How can it be changed to produce better systems (that is, systems which produce better policies)? What influences how well the governments of the world serve their citizens? How can we increase competition between governments? This level is the most abstract and the most complex, which can make it difficult to get a handle on, but if we can grasp that handle, it gives us the most leverage to change the world.
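The "System"-level argument above turns on simple arithmetic: benefits concentrated on a few outweigh the per-person cost to the many. Here is a minimal sketch of that logic with invented numbers (none of the figures come from the source):

```python
# Hypothetical tariff: it transfers $300M per year from all consumers
# to a small group of producers (all figures invented for illustration).
consumers = 300_000_000
producers = 3_000
total_transfer = 300_000_000  # dollars per year

cost_per_consumer = total_transfer / consumers   # $1.00 each
gain_per_producer = total_transfer / producers   # $100,000 each

# Suppose effective political action costs each individual about $50
# a year in time and money (again, an invented figure).
action_cost = 50

producers_will_lobby = gain_per_producer > action_cost    # True
consumers_will_resist = cost_per_consumer > action_cost   # False

print(cost_per_consumer, gain_per_producer)
print(producers_will_lobby, consumers_will_resist)
```

On these numbers the producers' side wins by default: each consumer's stake is far below the cost of doing anything about it, which is exactly the "steal small amounts from many and give them to a few" dynamic described above.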
They also recommend a reading list:
Machinery of Freedom (David Friedman)
Game Theory and the Social Contract (Ken Binmore)
Mancur Olson - stuff
Myth of the Rational Voter (Bryan Caplan)
Economics In One Lesson (Henry Hazlitt) ?
In regards to environmentalism, I was thinking about focusing on the relationships between government funding for green businesses as green entrepreneurship is of interest to me. I'd probably have to talk about the Solyndra scandal at some point.
As a side note, if the requirements aren't too stringent and I can just write about whatever I feel like so long as it vaguely relates to politics (like in my independent study in psychology), I may just go meta and write about Americans Elect.
Edit: I do think that there is a difference between descriptive politics (e.g. describing the workings of the EPA, or a standard civics class) and normative politics (woo libertarians!). I'm more interested in descriptive politics.
[Link] Belief in religion considered harmful?
I've recently run across this 2007 post on the blog Unqualified Reservations (archive best read here). It is written by Mencius Moldbug, who is probably familiar to some Overcoming Bias and Lesswrong readers. He is an erudite, controversial and most of all contrarian social critic and writer. In 2010 he debated Robin Hanson on the subject of Futarchy.
Why do atheists believe in religion?
Not everyone these days believes in God. But pretty much everyone believes in religion.
By "believing in religion," I mean recognizing a significant categorical distinction between "religious" phenomena, and those that are "nonreligious" or "secular."
For example, the concepts of "freedom of religion" and "separation of church and state" are dependent on the concept of "religion." If "religion" is a noninformative, unimportant, or confusing category, these concepts must also be noninformative, unimportant, or confusing.
Since most atheists, agnostics, etc, consider the First Amendment pretty important, we can assume they "believe in religion."
My question is: why? Is this a useful belief? Does it help us understand the world? Or does it confuse or misinform us? Once again, our team of crack philosophers is on the case.
Let's rule out the possibility that "religion" is noninformative. We can define "religion" as the attribution of existence to anthropomorphic paranormal entities. This definition has its fuzzy corner cases, notably some kinds of Buddhism, but it's short and it'll do for the moment.
We are left with the question: is "religion" an important or clarifying category? Or is it unimportant and confusing?
If you believe in God, obviously you have to believe in religion. Religion is an important category because your religion is true, and all other religions are false. (As Sam Harris puts it, "everyone's an atheist with respect to Zeus.")
For atheists of the all-around variety - including me - the question remains. Why do we believe in "religion?"
One obvious answer is that we have to share the planet with a lot of religious people. If you are an atheist, there is no getting around it: religion, as per Dawkins, is a delusion. Deluded people do crazy things and are often dangerous. We need to have a category for these people, just as we have a category for "large, man-eating carnivores." Certainly, religious violence has killed a lot more people lately than lions, tigers, or bears.
This argument sounds convincing, but it hides a fallacy.
The fallacy is that the distinction between "religion" and other classes of delusion must be clarifying or important. If there is a case for this proposition, we haven't met it yet.
Peoples' actions matter. And peoples' beliefs matter, because they motivate actions.
But actions in the real world must be motivated by beliefs about the real world. Delusions about the paranormal world are only relevant - at least to us atheists - in the special case that they motivate delusions about the real world.
So, as atheists, why should we care about the former? Why not forget about the details of metaphysical doctrine, which pertain to an ethereal plane that doesn't even exist, and concentrate our attention on beliefs about reality?
If you believe that nine Jewish virgins need to be thrown into Mt. Fuji, you are, in my opinion, deluded. Whether you believe this because you are receiving secret messages from Amaterasu Omikami, or because it's just payback for the dirty deeds of the Elders of Zion, affects neither me nor the virgins.
If you believe "partial-birth abortion" is wrong because it's "against God's law," or if you think it's just "unethical," your vote will be the same.
If you are tolerant and respectful of others because you think Allah wants you to be tolerant and respectful of others, how can I possibly have a problem with this? If you stab people in the street because you've misinterpreted Nietzsche and decided that morality is not for you, is that less of a problem?
Lots of people have delusions about the real world. People believe all kinds of crazy things for all kinds of crazy reasons. Some even believe sensible things for crazy reasons. Why should we establish a special category for delusions that are motivated by anthropomorphic paranormal forces?

A reasonable answer is: why not?
Certainly, religion is an important force in the world today. Certainly at least some forms of religion - "fundamentalist," one might say - are actively dangerous. No one is actually stabbing people in the street because of Nietzsche. The same cannot be said for Allah.
How can it possibly confuse or distract us to recognize and protect ourselves against this important class of delusion?

To see the answer, we need to break Godwin's Law.
Which I think may indeed be appropriate.
Suppose Hitler had declared that, rather than being just some guy from Linz, he was Thor's prophet on earth. (Some people would have been positively delighted by this.) Suppose that everything the Nazis did was done in the name of Thor. Suppose, in other words, that Nazism was in the category "religion."

This is by no means a new idea.
Violating Godwin's law to breach the fence between religion and ideology, and see what cognitive dissonances we can dredge up, is old hat for us LWers (see Yvain's 2009 post A Parable On Obsolete Ideologies).
Many writers, including Eric Voegelin, Eric Hoffer, Victor Klemperer, Michael Burleigh, etc, etc, have described the similarities between Nazism and religions. But Nazism does not fit our definition of religion above - no paranormal entities. This is the definition most people use, so most people don't think of Nazism as a religion.
The Allies invaded Nazi Germany and completely suppressed Nazism. To this day in Germany it is illegal to teach National Socialism. I think most Americans, and most Germans, would agree that this is a good thing.
But if we make this one trivial change, turning Nazism into Thorism and making it a "religion," which as we've seen need not change the magnitude or details of Nazi crimes at all, the acts of the Allies are a blatant act of religious intolerance.
Aren't we supposed to respect other faiths? Shouldn't we at least have restricted our unfriendly attentions to "fundamentalist Nazism," and promoted a more "moderate" version of the creed? Suppose we gave the Taliban the same treatment? What, exactly, is the difference between Eisenhower's policy and Ann Coulter's?
It gets worse. Another one of Voegelin's "political religions," which by our definition are not religions at all (no anthropomorphic paranormal entities) is Marxism. Let's tweak Marxism slightly and assert that the writings of Marx were divinely inspired, leaving everything else in the history of Communism unchanged.
Marxism, unlike Nazism, is still very popular in the world today. A substantial fraction of the professors in Western universities are either Marxists, or strongly influenced by Marxist thought. Nor are these beliefs passive - many fields that are actively taught and quite popular, such as postcolonial studies, seem largely or entirely Marxist in content.
This is certainly not true of Nazism. It is also not true of Christianity or any other "religion" proper. Many professors are Christians, true, and some are even fundamentalists. But the US educational system is quite sensitive to the possibility that it might be indoctrinating youth with Christian fundamentalism. "Creation science," for example, is not taught in any mainstream university and seems unlikely to achieve that status.
If Marxism was a religion, Marxist economics would come pretty close to being the exact equivalent of "intelligent design." But, again, Marxism as religion and Marxism as non-religion involve exactly the same set of delusions about the real world. (Of course, to a Marxist, they are not delusions.)
Should non-Marxist atheists, such as myself, be as concerned about separating Marxism from state-supported education as we are with Christianity? If Marxism is a religion, or if the difference between Marxism as it is in the real world and the version in which Marx was a prophet is insignificant, our "wall of separation" is a torn-up chainlink fence.
But there was a period in which Americans tried to eradicate Marxism the way they fight against "intelligent design" today. It was called McCarthyism. And believers in civil liberties were on exactly the opposite side of the barricades.
As non-Marxist atheists, do we want McCarthy 2.0? Should loyalty oaths be hip this year? Should we schedule new hearings?
This is why the concept of "religion" is harmful. If trivial changes to hypothetical history convert reasonable policies into monstrous injustices, or vice versa, your perception of reality cannot be correct. You have been infected by a toxic meme.
If memes are analogous to parasitic organisms, believing in "religion" is like taking a narrow-spectrum antibiotic on an irregular schedule. The Dawkins treatment - our latest version of what used to be called anticlericalism - wipes out a colony of susceptible bacteria which have spent a long time learning to coexist reasonably, if imperfectly, with the host. And clears the field for an entirely different phylum of bugs which are unaffected by antireligious therapy. Whose growth, in fact, it may even stimulate.
In the last two centuries, "political religions" have caused far, far more morbidity than "religious religions." But here we are with Dawkins, Harris, and Dennett - still popping the penicillin. Hm. Kind of makes you think, doesn't it?
I hope you can now see the reason I've picked a partially misleading title: I think Moldbug makes a pretty convincing argument that belief in "religion" may be considered harmful even for atheists, let alone for those of us who aspire to refine rationality.
In such a model, questions like "is the Church of Scientology a religion?" dissolve rapidly. Whether something should be tax-exempt because it is "really" a "religion" or "a church" is a legal question of importance only to lawyers and to activists trying to challenge the law; it shouldn't change our ethical intuitions, or cause us to imagine a sea – or play up rather minor geographical features – to separate the continents of Religion and Ideology in our maps of reality.
Every single proposed mechanism for the retention and spread of religion – from convenient curiosity stoppers and indoctrination of youth to tribal identity markers – holds for ideology just as strongly as for religion. Even seemingly very specific memetic adaptations, like the "God of the gaps", seem to arise in various non-theistic ideologies. Maybe similar adaptations arise because they fill the same niche?
Considering the implications of such a hypothesis: atheism towards one additional god is a rather easy step of rationality to take. Very few people believe in the great Juju or Zeus. Adding YHWH to the list isn't that much of a stretch, for those fortunate enough to be educated and living in most of the West.
But how hard is it for someone to question, in an unbiased fashion, such gods and holy words as, say, Democracy?
[POLL] LessWrong census, mindkilling edition [closed, now with results]
Some have been curious about what the politics of this community would look like if broken down further; here's a shot at figuring it out. I've also included a few other questions that folks expressed curiosity about. Aside from one sensitive question, there's no option to keep your answers private, since in my opinion that would defeat the point - just don't answer if you have concerns - but there's also no overlap with the old survey, aside from asking you how you answered the original politics question. (This should help with interpreting those results even if the n for this is much lower than, and somehow biased relative to, that of the big survey.)
For entertainment purposes only, don't use the below space to discuss politics directly, &c. Early suggestions are likely to be incorporated, given what I assume to be the low quality of the first draft.
Edit: "left" and "right" operationalized for the questions they appear in; poor language cleared up in mental health question.
Edit 2: results here; see comment below for some preliminary thoughts. Because there were several unique regional responses, I did not publish responses to that question.
Should LessWrong be Interested in the Occupy Movements?
Since early October, I've been closely following Occupy Wall Street, and the other protests it spawned. At first I was interested in it as a sort of social experiment; I'd never heard of long-term camping as a means of protest, and I was curious to see how it would work out. As it's grown though, I've been thinking that there might be a couple of things happening in the movements that might be of interest to rationalist communities. I've not seen much discussion of Occupy and its tactics on LessWrong, and I think that if nothing else, they're at least interesting, so I thought I'd open it up here.
Each Occupy movement is a hotbed of community experimentation. Things like General Assemblies (horizontally democratic voting discussions to make policy decisions) and ad-hoc sanitation, fire, and security committees of all shapes and sizes are popping up all over. What's more, as the events grow in size, and as police pressure on the events rises, these constructs are going to be tested more and more. We have a wildly varied gene pool, strong environmental constraints, and a fast mutation rate. It's a big evolutionary experiment in community formation. And I think if we look closely, we can find a whole lot of useful hacks to make stronger communities.
The whole thing's a great big ethical, emotional, and legal mess. There are issues with how private/public property laws intersect with freedom of speech, there are matters of what level of force is justifiable for police to keep peace in certain situations, there're issues of whether health and safety trump rights of protest, on and on and on. If nothing else, there's an interesting discussion there, about what a truly rational set of laws would look like, and whether or not the protesters or the police are justified in their actions.
And at the risk of sounding like a James Bond villain, there are some serious options for us to take over the world here. In the sense at least that the Occupy movements' goal is lasting societal change, and they have a good deal of momentum already. If members of the rationalist community moved to help them, they might have a fair deal more. And if we introduce them to rational ways of thinking, if we inject those memes into the discussion, there's some serious opportunity here to help stop the world being so insane.
At least that's my take on the whole thing. And I'm not exactly strong in the ways of rationality yet, still reading and re-reading the Sequences (I keep getting lost somewhere halfway into the QM sequence, I think I need to practice mathematics more to understand it on a more instinctive level) and I'd certainly appreciate the view of those Stronger than me.
[link] I Was Wrong, and So Are You
An article in The Atlantic, linked to by someone on the unofficial LW IRC channel, caught my eye. Nothing all that new for LessWrong readers, but still it is good to see any mention of such biases in mainstream media.
I Was Wrong, and So Are You
A libertarian economist retracts a swipe at the left—after discovering that our political leanings leave us more biased than we think.
...
You may have noticed that several of the statements we analyzed implicitly challenge positions held by the left, while none specifically challenges conservative or libertarian positions. A great deal of research shows that people are more likely to heed information that supports their prior positions, and discard or discount contrary information. Suppose that on some public issue, Anne favors position A, and Burt favors position B. Anne is more likely than Burt to agree with statements that support A, and to disagree with statements that support B, because doing so simplifies her case for favoring A. Otherwise, she would have to make a concession to the opposing side. Psychologists would count this tendency as a manifestation of “myside bias,” or “confirmation bias.”
Buturovic and I openly acknowledged that the set of eight statements was biased. But these were the statements we had available to us. And as we explained in the paper, some of them—including those on professional licensing, standard of living, monopoly, and trade—did not appear to fit neatly into a partisan debate. Yet even on those, respondents on the left fared worst. What’s more, in separate research, Buturovic found that the respondents themselves either had difficulty classifying some of the statements on an ideological scale, or simply believed those statements were not, prima facie, ideological. So while we thought the results were probably exaggerated because of the bias in the survey, we nonetheless felt that they were telling.
Buturovic and I largely refrained from replying to the criticism (much of which focused on myside bias) that followed publication of the article. Instead, we planned a second survey that would balance the first one by including questions that would challenge conservative and/or libertarian positions.
...
Buturovic began putting all 17 questions to a new group of respondents last December. I eagerly awaited the results, hoping that the conservatives and especially the libertarians (my side!) would exhibit less myside bias. Buturovic was more detached. She e-mailed me the results, and commented that conservatives and libertarians did not do well on the new questions. After a hard look, I realized that they had bombed on the questions that challenged their position. A full tabulation of all 17 questions showed that no group clearly out-stupids the others. They appear about equally stupid when faced with proper challenges to their position.
Writing up these results was, for me, a gloomy task—I expected critics to gloat and point fingers. In May, we published another paper in Econ Journal Watch, saying in the title that the new results “Vitiate Prior Evidence of the Left Being Worse.” More than 30 percent of my libertarian compatriots (and more than 40 percent of conservatives), for instance, disagreed with the statement “A dollar means more to a poor person than it does to a rich person”—c’mon, people!—versus just 4 percent among progressives. Seventy-eight percent of libertarians believed gun-control laws fail to reduce people’s access to guns. Overall, on the nine new items, the respondents on the left did much better than the conservatives and libertarians. Some of the new questions challenge (or falsely reassure) conservative and not libertarian positions, and vice versa. Consistently, the more a statement challenged a group’s position, the worse the group did.
The reaction to the new paper was quieter than I expected. Jonathan Chait, who had knocked the first paper, wrote a forgiving notice on his New Republic blog: “Insult Retractions: A (Very) Occasional Feature.” Matthew Yglesias, writing at ThinkProgress, summed up the takeaway: “Basically, there’s a lot of confirmation bias out there.” Nothing illustrates that point better than my confidence in the claims of the first paper, especially as distilled in my Wall Street Journal op-ed.
Shouldn’t a college professor have known better?
I break here to comment that I don't see why we would expect this to be so given the reality of academia.
Perhaps. But adjusting for bias and groupthink is not so easy, as indicated by one of the major conclusions developed by Buturovic and sustained in our joint papers. Education had very little impact on responses, we found; survey respondents who’d gone to college did only slightly less badly than those who hadn’t. Among members of less-educated groups, brighter people tend to respond more frequently to online surveys, so it’s likely that our sample of non-college-educated respondents is more enlightened than the larger group they represent. Still, the fact that a college education showed almost no effect—at least for those inclined to take such a survey—strongly suggests that the classroom is no great corrective for myside bias. At least when it comes to public-policy issues, the corrective value of professional academic experience might be doubted as well.
Discourse affords some opportunity to challenge the judgments of others and to revise our own. Yet inevitably, somewhere in the process, we place what faith we have.
A signaling theory of class x politics interaction
The media, most recently The Economist and Scientific American, have been publicizing a surprising statistical finding: in the current economic climate, when more Americans than ever are poor, support for policies that redistribute wealth to the poor is at its lowest level ever. This new-found antipathy towards aid to the poor concentrates in people who are near but not yet on the lowest rung of the social ladder. The Economist adds some related statistics: those who earn slightly more than the minimum wage are most against raising the minimum wage, and support for welfare in an area decreases as the percentage of welfare recipients in the area rises.
Both articles explain the paradoxical findings by appealing to something called "last place aversion", an observed tendency for people to overvalue not being in last place. For example, in laboratory experiments where everyone gets randomly determined amounts of money, most people are willing to help those with less money than themselves gain cash - except the person with the second to lowest amount of money, who tends to try to thwart the person in last place even if it means enriching those who already have the most.
"Last place aversion" is interesting, and certainly deserves at least a footnote in the catalogue of cognitive biases and heuristics, but I find it an unsatisfying explanation for the observations about US attitudes toward wealth redistribution. For one thing, the entire point of last place aversion is that it only affects those in last place, but in a massive country like the United States, everyone can find someone worse off than themselves (with one exception). For another, redistributive policies usually stop short of making those who need government handouts wealthier than those who do not; subsidizing more homeless shelters doesn't risk giving the homeless a nicer house than your own. Finally, many of the policies people oppose, like taxing the rich, don't directly translate to helping those in last place.
I propose a different mechanism, one based on ... wait for it ... signaling.
In a previous post, I discussed multi-level signaling and counter-signaling, where each level tries to differentiate itself from the level beneath it. For example, the nouveau riche differentiate themselves from the middle class by buying ostentatious bling, and the nobility (who are at no risk of being mistaken for the middle class) differentiate themselves from the nouveau riche by not buying ostentatious bling.
The very poor have one strong incentive to support redistribution of wealth: they need the money. They also have a second, subtler incentive: most redistributive policies come packaged with a philosophy that the poor are not personally responsible for the poverty, but are at least partially the victims of the rest of society. Therefore, these policies inflate both their pocketbook and their ego.
The lower middle class gain what status they have by not being the very poor; effective status signaling for a lower middle class person is that which proves that she is certainly not poor. One effective method is to hold opinions contrary to those of the poor: that redistribution of wealth is evil and that the poor deserve their poverty. This ideology celebrates the superiority of the lower middle class over the poor by emphasizing the biggest difference between the lower middle class and the very poor: self-reliance. By asserting this ideology, a lower middle class person can prove her lower middle class status.
The upper middle class gain what status they have by not being the lower middle class; effective status signaling for an upper middle class person is that which proves that she is certainly not lower middle class. One effective way is to hold opinions contrary to those of the lower middle class: that really the poor and lower middle class are the same sort of people, but some of them got lucky and some of them got unlucky. The only people who can comfortably say "Deep down there's really no difference between myself and a poor person" are people confident that no one will actually mistake them for a poor person after they say this.
As a thought experiment, imagine your reactions to the following figures:
1. A bearded grizzled man in ripped jeans, smelling slightly of alcohol, ranting about how the government needs to give more free benefits to the poor.
2. A bearded grizzled man in ripped jeans, smelling slightly of alcohol, ranting about how the poor are lazy and he worked hard to get where he is today.
3. A well-dressed, stylish man in a business suit, ranting about how the government needs to give more free benefits to the poor.
4. A well-dressed, stylish man in a business suit, ranting about how the poor are lazy and he worked hard to get where he is today.
My gut reactions are (1, lazy guy who wants free money) (2, honorable working class salt-of-the-earth) (3, compassionate guy with good intentions) (4, insensitive guy who doesn't realize his privilege). If these are relatively common reactions, these would suffice to explain the signaling patterns in these demographics.
If this were true, it would explain the unusual trends cited in the first paragraph. An area where welfare became more common would see support for welfare drop, as it became more and more necessary for people to signal that they themselves were not welfare recipients. Support for minimum wage would be lowest among people who earn just slightly more than minimum wage, and who need to signal that they are not minimum wage earners. And since upper middle class people tend to favor redistribution as a status signal and lower middle class people tend to oppose it, a recession that drives more people into the lower middle class would cause a drop in support for redistributive policies.
Peter Thiel warns of upcoming (and current) stagnation
SIAI benefactor and VC Peter Thiel has an excellent article at National Review about the stagnating progress of science and technology, which he attributes to poorly-grounded political opposition, widespread scientific illiteracy, and overspecialized, insular scientific fields. He warns that this stagnation will undermine the growth that past policies have relied on.
Noteworthy excerpts (bold added by me):
In relation to concerns expressed here about evaluating scientific field soundness:
When any given field takes half a lifetime of study to master, who can compare and contrast and properly weight the rate of progress in nanotechnology and cryptography and superstring theory and 610 other disciplines? Indeed, how do we even know whether the so-called scientists are not just lawmakers and politicians in disguise, as some conservatives suspect in fields as disparate as climate change, evolutionary biology, and embryonic-stem-cell research, and as I have come to suspect in almost all fields? [!!! -- SB]
Grave indicators:
Looking forward, we see far fewer blockbuster drugs in the pipeline — perhaps because of the intransigence of the FDA, perhaps because of the fecklessness of today’s biological scientists, and perhaps because of the incredible complexity of human biology. In the next three years, the large pharmaceutical companies will lose approximately one-third of their current revenue stream as patents expire, so, in a perverse yet understandable response, they have begun the wholesale liquidation of the research departments that have borne so little fruit in the last decade and a half. [...]
The single most important economic development in recent times has been the broad stagnation of real wages and incomes since 1973, the year when oil prices quadrupled. To a first approximation, the progress in computers and the failure in energy appear to have roughly canceled each other out. Like Alice in the Red Queen’s race, we (and our computers) have been forced to run faster and faster to stay in the same place.
Taken at face value, the economic numbers suggest that the notion of breathtaking and across-the-board progress is far from the mark. If one believes the economic data, then one must reject the optimism of the scientific establishment. Indeed, if one shares the widely held view that the U.S. government may have understated the true rate of inflation — perhaps by ignoring the runaway inflation in government itself, notably in education and health care (where much higher spending has yielded no improvement in the former and only modest improvement in the latter) — then one may be inclined to take gold prices seriously and conclude that real incomes have fared even worse than the official data indicate. [...]
College graduates did better, and high-school graduates did worse. But both became worse off in the years after 2000, especially when one includes the rapidly escalating costs of college.[...]
The current crisis of housing and financial leverage contains many hidden links to broader questions concerning long-term progress in science and technology. On one hand, the lack of easy progress makes leverage more dangerous, because when something goes wrong, macroeconomic growth cannot offer a salve; time will not cure liquidity or solvency problems in a world where little grows or improves with time.
Is That Your True Rejection? by Eliezer Yudkowsky @ Cato Unbound
A response essay written by Eliezer Yudkowsky posted at Cato Unbound for the issue Brain, Belief, and Politics:
Is That Your True Rejection? by Eliezer Yudkowsky
Eliezer Yudkowsky suggests that the partial mutability of human traits is an auxiliary reason at best for Michael Shermer’s libertarianism. Take that fact away, and Shermer’s politics probably wouldn’t go with it. Yudkowsky says that his own small-l libertarian tendencies come from the long history of government incompetence, indifference, and outright malevolence. These, and not brain science, are the best reasons for libertarians to believe what they do.
Moreover, we make a logical error when we infer shares of causality from shares of observed variance; the relationship between nature and nurture is cooperative, not zero-sum. One thing, however, is clear: Human genetic variance is tiny, as indeed it must be for human beings all to constitute a single species. Environmental manipulation can only achieve so much in part because of this universal human inheritance.
The lead essay has been written by Michael Shermer:
Liberty and Science by Michael Shermer
Michael Shermer discusses scientific findings about belief formation. Beliefs, including political beliefs, are usually the result of automatic or intuitive moral judgments, not rational calculations. One cluster of those intuitions presumes that human nature is malleable; these usually produce a liberal politics. Another group of intuitions presumes that human nature is static; these tend to produce conservatism. But Shermer argues that humans really fall somewhere in between — malleable, within some important limits. He argues that this set of findings should produce a libertarian politics.
Journal article about politics and mindkilling
I just found a link to a paper written in 2003 by Geoffrey L. Cohen of Yale University.
"Party over Policy: The Dominating Impact of Group Influence on Political Beliefs"
Abstract:
Four studies demonstrated both the power of group influence in persuasion and people’s blindness to it. Even under conditions of effortful processing, attitudes toward a social policy depended almost exclusively upon the stated position of one’s political party. This effect overwhelmed the impact of both the policy’s objective content and participants’ ideological beliefs (Studies 1–3), and it was driven by a shift in the assumed factual qualities of the policy and in its perceived moral connotations (Study 4). Nevertheless, participants denied having been influenced by their political group, although they believed that other individuals, especially their ideological adversaries, would be so influenced. The underappreciated role of social identity in persuasion is discussed.
That's written in journal-ese, so I'll post a translation from the article I found that contained the link:
My favorite study (pdf) in this space was by Yale’s Geoffrey Cohen. He had a control group of liberals and conservatives look at a generous welfare reform proposal and a harsh welfare reform proposal. As expected, liberals preferred the generous plan and conservatives favored the more stringent option. Then he had another group of liberals and conservatives look at the same plans, but this time, the plans were associated with parties.
Both liberals and conservatives followed their parties, even when their parties disagreed with their preferences. So when Democrats were said to favor the stringent welfare reform, for example, liberals went right along. Three scary sentences from the piece: “When reference group information was available, participants gave no weight to objective policy content, and instead assumed the position of their group as their own. This effect was as strong among people who were knowledgeable about welfare as it was among people who were not. Finally, participants persisted in the belief that they had formed their attitude autonomously even in the two group information conditions where they had not.”
Also, the final study conducted had subjects write editorials either in support of or against a single policy proposal. The differences in how people responded in the "no group information" condition and the "my political party supports / opposes" conditions are also illuminating...
Kill the mind-killer
The budget stalemate in the US Congress was caused entirely by blocs of voters and representatives that coalesced around strong sets of opinions that few people would have come up with on their own, and by political party leaders forcing representatives in their parties to toe the party line. Politics isn't the mind-killer. Political parties are the mind-killer.
Parties are also notorious for obliterating information in elections, as well as for encouraging voters to vote sans information. If you went to your polling place and saw a list of candidates, none of whom you'd heard of before, you might rightly refrain from voting and polluting the signal with your noise. Knowing party affiliations makes people think they have enough information to vote.
For discussion:
- What other disadvantages are provided by the existence of political parties?
- Do political parties provide us with any advantages at all?
- If so, do the benefits outweigh the disadvantages?
- How might we go about disenfranchising political parties?
We want the freedom to form groups that promote political concerns. But it would be possible to keep these groups at a greater distance from elected representatives. Candidates for office could be forbidden from endorsing a particular party. The Congress could be forbidden from basing any procedural rules on party affiliation. Political parties could be forbidden from making large donations to election campaigns, or sponsoring advertising. That's not so different from what we do today with religious groups, which are not much different from political parties.
Political parties are currently officially part of Congress' operation, even though they're not in the constitution. There are all sorts of Congressional rules specifying how the parties interact, who gets to choose committee members, who runs the House and Senate floors, etc. A party leader can punish a representative who doesn't toe the line with many incentives and disincentives.
Make that illegal. Make persecuting a representative for party-based reasons have the same legal standing as persecuting a representative for religious reasons.
I will ignore comments saying "you're an intellectual dreamer", for the usual reasons.
Will DNA Analysis Make Politics Less of a Mind-Killer?
I wrote an article for h+ predicting that the rapid fall in the cost of gene sequencing will allow U.S. voters to learn much about presidential candidates' DNA. The candidates won't be able to stop this because:
humans shed so much DNA that unless a politician lived in a plastic bubble he couldn’t shield his DNA from prying eyes. Politicians will probably pass laws making it a crime to involuntarily disclose a politician’s genetic traits. But since it would take only one person to leak the information onto the Internet, and given that any serious candidate for President will have many enemies, candidates’ genomes will undoubtedly become public.
DNA analysis has a decent chance of reducing political bias by providing objective information about candidates. If, for example, 70% of the variation in human intelligence were determined by identified genes, then DNA analysis would reduce disagreements among informed voters over a candidate's intelligence.
The Goal of the Bayesian Conspiracy
Suppose that there were to exist such an entity as the Bayesian Conspiracy.
I speak not of the social group of that name, the banner under which rationalists meet at various conventions – though I do not intend to disparage that group! Indeed, it is my fervent hope that they may in due time grow into the entity which I am setting out to describe. No, I speak of something more like the “shadowy group of scientists” which Yudkowsky describes, tongue (one might assume) firmly in cheek. I speak of such an organization which has been described in Yudkowsky's various fictional works, the secret and sacred cabal of mathematicians and empiricists who seek unwaveringly for truth... but set in the modern-day world, perhaps merely the seed of such a school, an organization which can survive and thrive in the midst of, yet isolated from, our worldwide sociopolitical mess. I ask you, if such an organization existed, right now, what would – indeed, what should – be its primary mid-term (say, 50-100 yrs.) goal?
I submit that the primary mid-term goal of the Bayesian Conspiracy, at this stage of its existence, is and/or ought to be nothing less than world domination.
Before the rotten fruit begins to fly, let me make a brief clarification.
The term “world domination” is, unfortunately, rather socially charged, bringing to mind an image of the archetypal mad scientist with marching robot armies. That's not what I'm talking about. My usage of the phrase is intended to evoke something slightly less dramatic, and far less sinister. “World domination”, to me, actually describes rather a loosely packed set of possible world-states. One example would be the one I term “One World Government”, wherein the Conspiracy (either openly or in secret) is in charge of all nations via an explicit central meta-government. Another would be a simple infiltration of the world's extant political systems, followed by policy-making and cooperation which would ensure the general welfare of the world's entire population – control de facto, but without changing too much outwardly. The common thread is simply that the Conspiracy becomes the only major influence in world politics.
(Forgive my less-than-rigorous definition, but a thorough examination of the exact definition of the word “influence” is far, far outside the scope of this article.)
So there is my claim. Let me tell you why I believe this is the morally correct course of action.
Let us examine, for a moment, the numerous major good works which are currently being openly done by rationalists, or by those who may not self-identify as rationalists but whose dogmas and goals accord with ours. We have the Singularity Institute, which is concerned with ensuring that our technological, transhumanistic advent happens smoothly and with a minimum of carnage. We have various institutions worldwide advocating and practicing cryonics, which offers a non-zero probability of recovery from death. We have various institutions also who are working on life extension technologies and procedures, which offer to one day remove the threat of death entirely from our world.
All good things, I say. I also say: too slow!
Imagine what more could be accomplished if the United States, for example, granted to the Life Extension Foundation or to Alcor the amount of money and social prominence currently reserved for military purposes. Imagine what would happen if every scientist around the world were perhaps able to contribute under a unified institution, working on this vitally important problem of overcoming death, with all the money and time the world's governments could offer at their disposal.
Imagine, also, how many lives are lost every day due to governmental negligence, and war, and poverty, and hunger. What does it profit the world, if we offer to freeze the heads of those who can afford it, while all around us there are people who can't even afford their bread and water?
I have what is, perhaps, to some who are particularly invested, an appalling and frightening proposition: for the moment, we should devote fewer of our resources to cryonics and life extension, and focus on saving the lives of those to whom these technologies are currently beyond even a fevered dream. This means holding the reins of the world, that we might fix the problems inherent in our society. Only when significant steps have been taken in the direction of saving life can we turn our focus toward extending life.
What should the Bayesian Conspiracy do, once it comes to power? It should stop war. It should depose murderous despots, and feed the hungry and wretched who suffered under them. Again: before we work on extending the lives of the healthy and affluent beyond what we've so far achieved, we should, for example, bring the average life expectancy in Africa above the 50-year mark, where it currently sits (according to a 2006 study in the BMJ). This is what will bring about the maximum level of happiness in the world; not cryonics for those who can afford it.
Does this mean that we should stop researching these anti-death technologies? No! Of course not! Consider: even if cryonics drops to, say, priority 3 or 4 under this system, once the Conspiracy comes to power, that will still be far more support than it's currently receiving from world governments. The work will end up progressing at a far faster rate than it currently does.
Some of you may have qualms about this plan of action. You may ask, what about individual choice? What about the peoples' right to choose who leads them? Well, for those of us who live in the United States, at least, this is already a bit of a naïve question: due to color politics, you already do not have much of a choice in who leads you. But that's a matter for another time. Even if you think that dictatorship – even benevolent, rationalist dictatorship – would be inherently morally worse than even the flawed democratic system we enjoy here – a notion that may not even necessarily be the case! – do not worry: there's no reason why world domination need entail dictatorships. In countries where there are democratic systems in place, we will work within the system, placing Conspirators into positions where they can convince the people, via legitimate means, to give them public office. Once we have attained a sufficient level of power over this democratic system, we will effect change, and thence the work will go forth until this victory of rationalist dogma covers all the earth. When there are dictators, they will be removed and replaced with democratic systems... under the initial control of Conspirators, of course, and ideally under their continued control as time passes – but legitimately obtained control.
It is demonstrable that one's strength as a rationalist correlates directly with the probability that one will make correct decisions. Therefore, the people who make decisions that affect large numbers of people ought to be those with the highest level of rationality. In this way we can seek to avoid the many, many, many pitfalls of politics, including the inefficiency which Yudkowsky has again and again railed against. If all the politicians are on the same side, who's to argue?
In fact, even if two rationalists disagree on a particular point (which they shouldn't, but hey, even the best rationalists aren't perfect yet), they'll be able to operate more efficiently than two non-rationalists in the same position. Is the disagreement able to be settled by experiment? If it's important, throw funds at a lab to conduct such an experiment! After all, we're in charge of the money and the scientists. Is it not? Find a compromise that has the maximum expected utility for the constituents. We can do that with a high degree of accuracy; we have access to the pollsters and sociologists, and know about reliable versus unreliable polling methods!
What about non-rationalist aspiring politicians? Well, under an ideal Conspiracy takeover, there would be no such thing. Lessons on politics would include rationality as a basis; graduation from law school would entail induction into the Conspiracy, and access to the truths had therein.
I suppose the biggest question is, is all this realistic? Or is just an idealist's dream? Well, there's a non-zero probability that the Conspiracy already exists, in which case, I hope that they will consider my proposal... or, even better, I hope that I've correctly deduced and adequately explained the master plan. If the Conspiracy does not currently exist, then if my position is correct, we have a moral obligation to work our hardest on this project.
“But I don't want to be a politician,” you exclaim! “I have no skill with people, and I'd much rather tinker with the Collatz Conjecture at my desk for a few years!” I'm inclined to say that that's just too bad; sacrifices must be made for the common good, and after all, it's often said that anyone who actually wants a political office is by that fact unfit for the position. But in all realism, I'm quite sure that there will be enough room in the Conspiracy for non-politicians. We're all scientists and mathematicians at heart, anyway.
So! Here is our order of business. We must draw up a charter for the Bayesian Conspiracy. We must invent a testing system able to keep a distinction between those who are and are not ready for the Truths the Conspiracy will hold. We must find our strongest Rationalists – via a testing procedure we have not yet come up with – and put them in charge, and subordinate ourselves to them (not blindly, of course! The strength of community, even rationalist community, is in debate!). We must establish schools and structured lesson plans for the purpose of training fresh students; we must also take advantage of those systems which are already in place, and utilize them for (or turn them to) our purposes. I expect to have the infrastructure set up in no more than five years.
At that point, our real work will begin.
The Whistleblower
I recently saw this movie about the UN scandal involving sex trafficking and was surprised by the conclusion. Instead of tying a neat little bow on the issue, it left me with a ton of questions about what was being done to change things in other parts of the world and how I could best contribute to that. I wanted to make this discussion post to ask for your opinions on the movie and perhaps some guidance for my upcoming top-level post on the subject.
-Matt
I thought more about my feelings on this subject and re-summarized them here.
I read it and thought it was amazingly similar to a lot of the thoughts and feelings I've had going through my head recently. Maybe this is just the emotion and folly of youth, but I feel like the world as a whole is very apathetic towards the suffering that exists outside the First World bubble that LW exists in. How can you honestly choose cryonics over the utility of an organization built to protect human life until the singularity, along with Eliezer's group, which works to ensure a positive singularity?
I recently saw a movie about government corruption in Europe and the UN's handling of it in the fight against the sex trafficking industry. The courage it takes to fight oppression around the world is rare and expensive to come by, but it's definitely something we need more of. Once I master the art of willpower I intend to devote even more time to this pursuit, and I hope others will do the same.
How to solve the national debt deadlock
The US Congress is trying to resolve the national debt by getting hundreds of people to agree on a solution. This is silly. They should agree on the rules of a game to play that will result in a solution, and then play the game.
Here is an example game. Suppose there are N representatives, all with an equal vote. They need to reduce the budget by $D.
- Order the representatives numerically, in some manner that interleaves Republicans and Democrats.
- "1 full turn" will mean that representatives make one move in order 1..N, and then one move in order N..1.
- Take at least two full turns to make a list of budget choices. On each move, a representative will write down one budget item - an expense that may be cut, or something that may become a revenue source. They may write down something that is a subset or superset of an existing item - for instance, one person might write, "Air Force budget", and another might write, "Reduce maintenance inspections of hangar J11 at Wright Air Force Base from weekly to monthly". They can get as specific as they want to.
- If there are not $2D of options on the table, repeat.
- Each representative is given 10 "cut" votes, worth D/(5N) each; and 5 "defend" votes, also worth D/(5N) each. A "defend" vote cancels out a "cut" vote.
- Each representative secretly assigns their "cut" and "defend" votes to the choices on the table.
- Results are revealed and tallied up, and a budget will be drawn up accordingly.
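The vote arithmetic in the rules above can be sanity-checked with a small simulation. This is only a sketch: the uniform-random vote assignment and the item names are placeholder assumptions of mine, not part of the proposal; the proposal specifies only the weights (10 "cut" and 5 "defend" votes per representative, each worth D/(5N), with a "defend" cancelling a "cut").

```python
import random

def play_budget_game(n_reps, target_d, items, rng):
    """Tally secret 'cut' and 'defend' votes for one round of the game."""
    weight = target_d / (5 * n_reps)  # each vote is worth D/(5N)
    tally = {item: 0.0 for item in items}
    for _ in range(n_reps):
        for _ in range(10):            # 10 "cut" votes per representative
            tally[rng.choice(items)] += weight
        for _ in range(5):             # 5 "defend" votes cancel cuts
            tally[rng.choice(items)] -= weight
    return tally

rng = random.Random(0)
items = ["item_%d" % i for i in range(20)]  # hypothetical budget items
tally = play_budget_game(n_reps=100, target_d=1000.0, items=items, rng=rng)

# Items with a positive net tally get cut. Each representative nets
# 10*w - 5*w = 5 * D/(5N) = D/N, so the grand total across all items
# is exactly D, regardless of strategy.
cuts = {k: v for k, v in tally.items() if v > 0}
total_net = sum(tally.values())
```

Note that only the grand total is pinned down by the rules; which items end up cut, and whether the positive tallies alone exceed D, depends entirely on how representatives spread their votes.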
What game-theoretic problems does this game have? Can you think of a better game? Is it politically better to call it a "decision process" than a game?
The main trouble area, to my mind, is order of play. First I said that budget items would be listed by taking turns. The 1..N, N..1 order is supposed to make neither first nor last position preferable. But taking turns introduces complications, of not wanting to reveal your intentions early.
Then I said votes are placed secretly and revealed all at once. This solves problems of game-theoretically trying to conceal information or bluff your opponent. It introduces other problems, such as tragedy-of-the-commons scenarios, where every Republican spends their "defend" votes on some pork in their state instead of on blocking tax increases, because they assume some other Republican will do that.
Is it better to play "cut" votes first, reveal them, and then play "defend" votes?
Is there a meta-game to use to build such games?
Self-improving AGI: Is a confrontational or a secretive approach favorable?
(I've written the following text as a comment initially, but upon short reflection I thought it was worth a separate topic and so I adapted it accordingly.)
Less Wrong is largely concerned with teaching rationality skills, but for good reasons most of us also incorporate concepts like the singularity and friendly self-improving AGI into our "message". Personally, however, I wonder whether we should be as outspoken about that sort of AGI as we currently are. Right now, talking about self-improving AGI doesn't pose any kind of discernible harm, because "outsiders" don't feel threatened by it and look at it as far-off - or even impossible - science fiction. But as time progresses, I worry that through exponential advances in robotics and other technologies people will become more aware of, concerned about, and perhaps threatened by self-improving AGI, and I am not sure whether we should be outspoken about things like... the fact that the majority of AGIs in "mind-design-space" would tear humanity to shreds if their builders didn't know what they were doing. Right now such talk is harmless, but my message here is that we may want to reconsider whether or not we should talk publicly about such topics in the not-too-distant future, so as to avoid compromising our chances of success when it comes to actually building a friendly self-improving AGI.
First off, I suspect I have a somewhat different conception of how the future is going to pan out in terms of what role the public perception and acceptance of self-improving AGI will play: Personally, I'm not under the impression that we can prepare a sizable portion of the public (let alone the global public) for the arrival of AGI (prepare them in a positive manner, that is). I believe singularitarian ideas will just continue to compete with countless other worldviews in the public meme-sphere, without ever becoming truly mainstream until it is "too late" and we face something akin to a hard takeoff and perhaps lots of resistance.
I don't really think that we can (or need to) reach a consensus within the public for the successful takeoff of AGI. Quite to the contrary, I actually worry that carrying our view to the mainstream will have adverse effects, especially once they realize that we aren't some kind of technophile crackpot religion, but that the futuristic picture we try to paint is actually possible and not at all unlikely to happen. I would certainly prefer to face apathy over antagonism when push comes to shove - and since self-improving AGI could spring into existence very rapidly and take everyone apart from "those in the know" by surprise, I would hate to lose that element of surprise over our potentially numerous "enemies".
Now of course I don't know which path will yield the best result: confronting the public or keeping a low profile? I suspect this may become one of the few hot-button topics where our community will sport widely diverging opinions, because we simply lack a way to accurately model (especially so far in advance) how people will behave upon encountering the reality and the potential threat of AGI. Just remember that the world doesn't consist entirely of the US and that AGI will impact everyone. I think it is likely that we may face serious violence once our vision of the future becomes more widely known and gains additional credibility through exponential improvements in advanced technologies. There are players on this planet who will not be happy to see an AGI come out of America, or for that matter out of Eliezer's or whoever's garage. This is why I would strongly advocate a semi-covert international effort when it comes to the development of friendly AGI. (Don't say that it's self-improving and may become a trillion times smarter than all humans combined - just pretend it's roughly a human-level AI.)
It is incredibly hard to predict the future behavior of people, but on a gut level I absolutely favor an international, semi-stealthy approach. It seems to be by far the safest course to take. Once the concept of the singularity and AGI gains traction in the spheres of science and maybe even politics (perhaps in a decade or two), I would hope that minds in AI and AGI from all over the world join an international initiative to develop self-improving AGI together. (Think CERN.) To be honest, I can't even think of any other approach to developing the later stages of AGI that doesn't look doomed from the start (not doomed in the sense of being technically unfeasible, but doomed in the sense of major powers thinking: "we're not letting this suspicious organization/country take over the world with their dubious AI". Remember that self-improving AGI is potentially much more destructive than any nuclear warhead, and powers not involved in its development may blow a gasket upon realizing the potential danger.)
So from my point of view, the public perception and acceptance of AGI is a comparatively negligible factor in the overall bigger picture if managed correctly. "People" don't get a say in weapons development, and I predict they won't get a say when it comes to Self-improving AGI. (And we should be glad they don't if you ask me.) But in order to not risk public outcry when the time is ripe and AGI in its last stages of completion, we should give serious consideration to not upset and terrify the public by our... "vision of the future".
PS: Somehow CERN comes to mind again. Do you remember when critics came up with ridiculous ideas how the LHC could destroy the world? It was a very serious allegation, but the public largely shrugged it off - not because they had any idea of course, but because they were reassured by enough eggheads that it wouldn't happen. It would be great, if we could achieve a similar reaction towards AGI-criticism (by which I mean generic criticism of course, not useful criticism - after all we actually want to be as sure about how the AGI will behave, as we were sure about the LHC not destroying the world). Once robots become more commonplace in our lives, I think we can reasonably expect that people will begin to place their trust into simple AI's - and they will hopefully become less suspicious towards AGI and simply assume (like a lot of current AI-researchers apparently) that somehow it is trivial to make it behave friendly towards humans.
So what do you think? Should we become more careful when we talk about self-modifying artificial intelligence? I think the "self-modifying" and "trillions of times smarter" parts are some bitter pills to swallow, and people won't be amused once they realize that we aren't just building artificial humans but artificial, all-powerful, all-knowing, and (hopefully) all-loving gods.
EDIT: 08.07.11
PS: If you can accept that argument as rationally sound, I believe a discussion about "informing everyone vs. keeping a low profile" is more than warranted. Quite frankly though, I am pretty disappointed with most people's reactions to my essay thus far... I'd like to think that this isn't just my ego acting up, but I'm sincerely baffled as to why this essay usually hovers just slightly above 0 points and frequently gets downvoted back to neutrality. Perhaps it's because of my style of writing (admittedly I'm often not as precise and careful with my wording as many of you are), or my grammar mistakes due to my being German, but I would prefer it to be because of some serious rational mistakes I made and of which I am still unaware... in which case you should point them out to me.
Presumably not that many people have read it, but in my eyes those who did and voted it down have not provided any kind of rational rebuttal here in the comment section explaining why this essay stinks. I find the reasoning I provided to be simple and sound:
0.0) Either we place "intrinsic" value on the concept of democracy and respect (and ultimately adhere to) public opinion in our decision to build and release AGI, OR we don't, and instead make that decision a matter of rational expert opinion, excluding the general public to some greater or lesser degree from the decision process. This is the question of whether we view a democratic decision about AGI as the right thing to do, or just one possible means to our preferred end.
1.0) If we accept radically democratic principles and essentially want to put AGI up for a vote, then we have a lot of work to do: we have to reach out to the public, thoroughly inform them about every known aspect of AGI, and convince a majority of the worldwide public that it is a good idea. If they reject it, we would have to postpone the development and/or release until public opinion sways - or until an un/friendly AGI gets released without consensus in the meantime.
1.1) Getting consent is not a trivial task by any stretch of my imagination, and from what I know about human psychology, I believe it is more rational to assume that the democratic approach cannot possibly work. If you think otherwise - if you SERIOUSLY think this can be successfully pulled off - then I think the burden of proof is on you: why should 4.5 billion people suddenly become champions of rationality? How do you think this radical transformation from an insipid public to a powerhouse of intelligent decision-making will take place? None of you (those who defend the possibility and preference of the democratic approach) have explained this yet. The only thing that could convince me here would be the majority of people, or at least a sizable portion, having powerful brain augmentations by the time AGI is on the brink of completion. That I do not believe, but none of you have argued this case so far, nor has anyone argued in depth - countering my arguments and concerns along the way - how a democratic approach could possibly succeed without brain augmentation.
2.0) If we reject the desirability of a democratic decision when it comes to AGI (as I do for practical concerns), we automatically approach public opinion from a different angle: Public opinion becomes an instrumental concern, because we admit to ourselves that we would be willing to release AGI whether or not we have public consent. If we go down this path, we must ask ourselves how we manage public opinion in a manner that benefits our cause. How exactly should we engage them - if at all? My "moral" take on this in a sentence: "I'm vastly more committed to rationality than I am to the idea that undiscriminating democracy is the gold standard of decision-making."
2.1) In this case, the question becomes whether or not informing the public as thoroughly as possible will aid or hinder our ambitions. In case we believe the majority of the public would reject our AGI project even after we educate them about it (the scenario I predict), the question is obviously whether or not it is beneficial to inform them in the first place. I gave my reasons why I think secrecy (at least about some aspects of AGI) would be the better option, and I've not yet read any convincing thoughts to the contrary. How could we possibly trust them to make the rational choice once they're informed, and how could we (and they) react after most people are informed of AGI and actually disapprove?
2.2) If you're with me on 2.0 and 2.1, then the next problem is who we think should know about it, to what extent, who shouldn't, and how this can be practically implemented. I've not yet thought this through thoroughly myself, because I hoped this would be the direction our discussion would go, but I'm disappointed that most of you seem to argue for 1.0 and 1.1 instead (which would be great if the arguments were good, but to me they seem like cheap applause lights rather than being even remotely practical in the real world).
(These points are of course not a full breakdown of all possibilities to consider, but I believe they roughly cover most bases)
I also expected to hear some of you make a good case for 1.0 and 1.1, or even call 0.0 into question, but most of you just assert that "1.0 and 1.1 are possible" without any sound explanation of why that would be the case. You simply assume it can be done for some reason, but I think you should explain yourselves, because this is an extraordinary claim, while my assumption of 4.5 billion people NOT becoming rational superheroes or fanatical geeky AGI followers seems vastly more likely to me.
Considering what I've thought about so far, secrecy (or at the very least not too broad and enthusiastic public outreach, combined with an alternative approach of targeting more specific groups or people to contact) seems to be the preferable option to me. ALSO, I admit that public outreach is most probably fine right now: people who reject it nowadays usually simply feel it couldn't be done anyway, and it's so far off that they won't make an effort to oppose us, while the people we convince are all potential human resources for our cause, who are welcome and needed.
So in a nutshell, I think the cost/benefit ratio of public outreach is just fine for now, but we ought to reconsider our approach in due time (perhaps a decade or so from now, depending on the future progress and public perception of AI).
The genetic cost of tyranny
We may feel sympathy when we read about people killed for protesting in Syria, Bahrain, Libya, and other countries. But tyranny isn't just something happening to unfortunate people somewhere else. It's an existential risk to human civilization.
Civilization - even tribalism - relies on altruism. Altruism is cooperation that is not merely the happy convergence of interests among rational self-interested agents. That happens too, but we don't call it altruism. Altruism is, roughly, helping others without the expectation of reciprocation or cooperation. And it happens because humans like helping other humans.
Altruism is probably mostly genetic: an evolutionary adaptation that instills in a species the desire to help others. Social pressure can instill some amount of altruism, but it's my opinion that this would not work at all without a pre-existing genetic basis. Many species exhibit altruism at a level at least as great as that in humans. Some insects, which are incapable of feeling social pressure, are far more altruistic than humans.
Two theories for how this happens are kin selection and group selection. Regardless of which of these you prefer, both of them have two important weaknesses:
- They are both very weak effects compared to selection for traits that benefit their organism directly.
- They require special social conditions, on society size (on the order of 10 members per society in the case of kin selection) and immigration/emigration rate (extremely low in both cases).
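The kin-selection condition above can be made concrete with Hamilton's rule, the standard formalization of when an altruistic allele can spread (stated here from general knowledge, not from the original post): altruism is favored only when

```latex
r \cdot b > c
```

where $r$ is the genetic relatedness between actor and recipient, $b$ the fitness benefit to the recipient, and $c$ the fitness cost to the actor. In a large society with high immigration, the average relatedness $r$ between random members is tiny, so the condition fails for almost any realistic cost - which is exactly the "special social conditions" weakness described above.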
It's not known whether humans are still evolving, or have begun devolving due to lack of selective pressure. But in the case of altruism, we can be sure: Even if some selective pressure still exists, most humans today do not live under the necessary conditions for either kin selection or group selection. Humans are living off their evolutionary capital of altruism.
Tyranny, whether it's that of Syria, Iran, North Korea, Nazi Germany, or the Soviet bloc under Stalin, aggressively selects against altruism. The most-altruistic people were among the first executed in all those places. They are the people being shot while protesting in Syria. Social activism under such a government is rarely in your best self-interest. Tyranny selects for self-interest; people who are willing to help the state oppress others are given opportunities for advancement. And it removes altruistic genes quickly from the population, likely undoing hundreds of years of evolution every year. Those genes will never be replaced.
I'm not too worried when this occurs over a few short months or years. But when a people lives under these conditions for generations, you may end up with a large population deficient in altruistic genes.
There's no solution at that point short of gene therapy. The population can stay in place, resulting in a society that is at best hopelessly mired in corruption and poverty, and at worst a danger to the rest of the world. Or it can disperse, and dilute altruistic genes around the globe.
ADDED: Knowing whether this is a real problem would require learning something about how many genes are involved in altruism and how they are distributed in the population. A legitimate objection to what I wrote is that if genes for altruism are distributed such that killing less than 1% of the population would have a major impact on their abundance, then they probably weren't very important to begin with. Then again, sociopaths are only around 1% of the population, and they have a major impact on society. I wonder how much work has been done on studying the maintenance of alleles for which only a few members of the population need to carry them?
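To get a feel for how fast selection against an allele acts, here is a minimal, hypothetical sketch: a deterministic haploid model in which an "altruism" allele carries a per-generation fitness cost s. The function names and all parameter values are illustrative assumptions of mine, not estimates from any study.

```python
# Minimal deterministic model of selection against a costly allele.
# Haploid, infinite-population approximation; s is the fitness cost.

def next_freq(p, s):
    """Allele frequency after one generation of selection against it."""
    return p * (1 - s) / (1 - s * p)

def generations_to_halve(p0, s):
    """Generations of constant selection until frequency drops below p0/2."""
    p, gens = p0, 0
    while p > p0 / 2:
        p = next_freq(p, s)
        gens += 1
    return gens

# A mild 1% cost takes dozens of generations to halve a 10%-frequency
# allele; a harsh 10% cost does it in well under ten.
```

The point of the sketch is qualitative: even strong selection needs multiple generations to deplete an allele, which is consistent with the post's worry being about populations living under tyranny "for generations" rather than for a few months or years.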
The last point reminded me of speculation from the recent LessWrong article Conspiracy Theories as Agency Fictions:
Before thinking about these points and debating them I strongly recommend you read the full article.