The Unfriendly Superintelligence next door
Markets are powerful decentralized optimization engines - it is known. Liberals see the free market as a kind of optimizer run amok, a dangerous superintelligence with simple non-human values that must be checked and constrained by the government - the friendly SI. Conservatives just reverse the narrative roles.
In some domains, where the incentive structure aligns with human values, the market works well. In our current framework, the market works best for producing gadgets. It does not work so well for pricing intangible information, and most specifically it is broken when it comes to health.

We treat health as just another gadget problem: something to be solved by pills. Health is really a problem of knowledge; it is a computational prediction problem. Drugs are useful only to the extent that you can package the results of new knowledge into a pill and patent it. If you can't patent it, you can't profit from it.
So the market is constrained to solve human health by coming up with new patentable designs for mass-producible physical objects which go into human bodies. Why did we add that constraint - thou shalt solve health, but thou shalt use only pills? (Ok technically the solutions don't have to be ingestible, but that's a detail.)
The gadget model works for gadgets because we know how gadgets work - we built them, after all. The central problem with health is that we do not completely understand how the human body works - we did not build it. Thus we should be using the market to figure out how the body works - completely - and arguably we should be allocating trillions of dollars towards that problem.
The market optimizer analogy runs deeper when we consider the complexity of instilling values into a market. Lawmakers cannot program the market with goals directly, so instead they attempt to engineer desirable behavior through ever more layers of constraints. Lawmakers are deontologists.
As an example, consider the regulations on drug advertising. Big pharma is unsafe - its profit function does not encode anything like "maximize human health and happiness" (which is of course itself an oversimplification). Left to its own devices, it has strong incentives to sell subtly addictive drugs, to create elaborately hyped false advertising campaigns, and so on. Hence all the deontological injunctions. I take that as a strong indicator of a poor solution - a value alignment failure.
What would healthcare look like in a world where we solved the alignment problem?
To solve the alignment problem, the market's profit function must encode long term human health and happiness. This really is a mechanism design problem - it's not something lawmakers are even remotely trained or qualified for. A full solution is naturally beyond the scope of a little blog post, but I will sketch out the general idea.
To encode health into a market utility function, first we create financial contracts with an expected value which captures long-term health. We can accomplish this with a long-term contract that generates positive cash flow when a human is healthy, and negative when unhealthy - basically an insurance contract. There is naturally much complexity in getting those contracts right, so that they measure what we really want. But assuming that is accomplished, the next step is pretty simple - we allow those contracts to trade freely on an open market.
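To make the contract mechanics concrete, here is a minimal sketch in Python. Every number here - the payment size, the discount rate, the health probabilities, the 20-year horizon - is invented purely for illustration; a real contract design would be vastly more involved.

```python
# Toy version of the proposed health contract: the holder receives a payment
# for each year the person is healthy and pays out for each unhealthy year.

def contract_value(p_healthy_by_year, payment=1000.0, discount=0.97):
    """Expected discounted cash flow of holding the contract.

    p_healthy_by_year: probability the person is healthy in each future year.
    Healthy years pay +payment to the holder; unhealthy years cost -payment.
    """
    value = 0.0
    for year, p in enumerate(p_healthy_by_year):
        expected_flow = payment * p - payment * (1 - p)
        value += expected_flow * discount ** year
    return value

# Market consensus: this person has an 80% chance of health each year.
market_price = contract_value([0.80] * 20)

# A researcher who knows an intervention raises that to 90% values the
# same contract higher, so buying at the market price is profitable.
researcher_value = contract_value([0.90] * 20)
```

The point is simply that anyone with genuinely better health predictions - or an intervention that improves outcomes - can profit by trading these contracts, which is exactly the incentive we want.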
There are some interesting failure modes and considerations that are mostly beyond scope but worth briefly mentioning. This system probably needs to be asymmetric. The transfers on poor health outcomes should partially go to cover medical payments, but it may be best to have a portion of the wealth simply go to nobody/everybody - just destroyed.
In this new framework, designing and patenting new drugs can still be profitable, but it is now put on even footing with preventive medicine. More importantly, the market can now actually allocate the correct resources towards long term research.
To make all this concrete, let's use an example of a trillion dollar health question - one that our current system is especially ill-posed to solve:
What are the long-term health effects of abnormally low levels of solar radiation? What levels of sun exposure are ideal for human health?
This is a big important question, and you've probably read some of the hoopla and debate about vitamin D. Shortly I will summarize a general abstract theory, one that I would bet heavily on if we lived in a more rational world where such bets were possible.
In a sane world where health is solved by a proper computational market, I could make enormous - ridiculous really - amounts of money if I happened to be an early researcher who discovered the full health effects of sunlight. I would bet on my theory simply by buying up contracts for individuals/demographics who had the most health to gain by correcting their sunlight deficiency. I would then publicize the theory and evidence, and perhaps even raise a heap of money to create a strong marketing engine to help ensure that my investments - my patients - were taking the necessary actions to correct their sunlight deficiency. Naturally I would use complex machine learning models to guide the trading strategy.
Now, just as an example, here is the brief 'pitch' for sunlight.

If we go back and look across all of time, there is a mountain of evidence which more or less screams - proper sunlight is important to health. Heliotherapy has a long history.
Humans, like most mammals, and most other earth organisms in general, evolved under the sun. A priori we should expect that organisms will have some 'genetic programs' which take approximate measures of incident sunlight as an input. The serotonin -> melatonin mediated blue-light pathway is an example of one such light detecting circuit which is useful for regulating the 24 hour circadian rhythm.
The vitamin D pathway has existed since the time of algae such as the Coccolithophore. It is a multi-stage pathway that can measure solar radiation over a range of temporal frequencies. It starts with synthesis of fat-soluble cholecalciferol, which has a very long half-life measured in months. [1][2]
- Cholecalciferol (HL ~ months) becomes
- 25(OH)D (HL ~ 15 days) which finally becomes
- 1,25(OH)2 D (HL ~ 15 hours)
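The multi-timescale idea above can be sketched numerically: three pools in series, each produced from the previous one and decaying with its own half-life. The half-lives follow the post (months / ~15 days / ~15 hours); the production rates, sunlight units, and seasonal input are invented for illustration, not physiology.

```python
import math

def decay_rate(half_life_days):
    return math.log(2) / half_life_days

k1 = decay_rate(90)        # cholecalciferol pool, HL ~ 3 months (assumed)
k2 = decay_rate(15)        # 25(OH)D
k3 = decay_rate(15 / 24)   # 1,25(OH)2 D

dt = 0.1                   # integration step, days
chole = d25 = d125 = 0.0
history = []
for step in range(int(2 * 365 / dt)):   # simulate two years
    t = step * dt
    # Seasonal sunlight: the positive half of a yearly sine wave.
    sun = max(0.0, math.sin(2 * math.pi * t / 365))
    chole += dt * (sun - k1 * chole)
    d25   += dt * (k1 * chole - k2 * d25)
    d125  += dt * (k2 * d25 - k3 * d125)
    history.append((t, sun, chole, d25, d125))

# The long-lived precursor pool acts as a months-long running average of
# sun exposure: in this sketch it peaks roughly two months after peak sun,
# so its level encodes recent seasonal history rather than today's light.
```

In other words, a slow pool feeding faster pools is exactly the kind of circuit that could report sun exposure over a range of temporal frequencies.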
The main recognized role for this pathway in regards to human health - at least according to the current Wikipedia entry - is to enhance "the internal absorption of calcium, iron, magnesium, phosphate, and zinc". Ponder that for a moment.
Interestingly, this pathway still works as a general solar clock and radiation detector for carnivores - as they can simply eat the precomputed measurement in their diet.
So, what is a long term sunlight detector useful for? One potential application could be deciding appropriate resource allocation towards DNA repair. Every time an organism is in the sun it is accumulating potentially catastrophic DNA damage that must be repaired when the cell next divides. We should expect that genetic programs would allocate resources to DNA repair and various related activities dependent upon estimates of solar radiation.
I should point out - just in case it isn't obvious - that this general idea does not imply that cranking up the sunlight hormone to insane levels will lead to much better DNA/cellular repair. There are always tradeoffs, etc.
One other obvious use of a long term sunlight detector is to regulate general strategic metabolic decisions that depend on the seasonal clock - especially for organisms living far from the equator. During the summer when food is plentiful, the body can expect easy calories. As winter approaches calories become scarce and frugal strategies are expected.
So first off we'd expect to see a huge range of complex effects showing up as correlations between low vit D levels and various illnesses, and specifically illnesses connected to DNA damage (such as cancer) and/or BMI.
Now it turns out that BMI itself is also strongly correlated with a huge range of health issues. So the first key question to focus on is the relationship between vit D and BMI. And - perhaps not surprisingly - there is pretty good evidence for such a correlation [3][4] , and this has been known for a while.
Now we get into the real debate. Numerous vit D supplement intervention studies have now been run, and the results are controversial. In general the vit D experts (such as my father, who started the vit D council, and publishes some related research[5]) say that the only studies that matter are those that supplement at high doses sufficient to elevate vit D levels into a 'proper' range which substitutes for sunlight, which in general requires 5000 IU/day on average - depending completely on genetics and lifestyle (to the point that any one-size-fits-all recommendation is probably terrible).
The mainstream basically ignores all that and funds studies at tiny RDA doses - say 400 IU or less - and then does meta-analysis over those studies, concluding, unsurprisingly, that the effect is not statistically significant. However, these studies still show small effects. Often the meta-analysis is corrected for BMI, which of course also tends to remove any vit D effect, to the extent that low vit D/sunlight is a cause of both weight gain and a bunch of other stuff.
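The BMI-correction problem can be shown with a small simulation. The causal chain and all coefficients below are invented: low vit D raises BMI, and both vit D and BMI affect the health outcome. "Adjusting for BMI" (done here via Frisch-Waugh residualization, which gives the same coefficient as a multiple regression) strips out exactly the part of the vit D effect that flows through weight.

```python
import random

random.seed(0)

# Invented causal chain: vit D -> BMI -> outcome, plus a direct path.
# All variables are in standardized (unitless) form.
n = 20000
vitd = [random.gauss(0, 1) for _ in range(n)]
bmi = [-0.8 * v + random.gauss(0, 1) for v in vitd]        # low vit D -> higher BMI
outcome = [0.3 * v - 0.5 * b + random.gauss(0, 1)          # both affect health
           for v, b in zip(vitd, bmi)]

def ols_slope(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sum((a - mx) ** 2 for a in x)
    return num / den

def residuals(x, y):
    """Part of y not linearly explained by x."""
    s = ols_slope(x, y)
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return [b - (my + s * (a - mx)) for a, b in zip(x, y)]

# Total effect of vit D on the outcome: 0.3 direct + (-0.8)(-0.5) = 0.7.
total_effect = ols_slope(vitd, outcome)

# "Correcting for BMI" keeps only the direct 0.3 and discards the 0.4
# that operates through weight - even though that path is fully causal.
adjusted_effect = ols_slope(residuals(bmi, vitd), residuals(bmi, outcome))
```

If vit D really does act partly through weight, a BMI-adjusted analysis will understate its total health effect by construction.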
So let's look at two studies for vit D and weight loss.
First, this recent 2015 study of 400 overweight Italians (sorry, the actual paper doesn't appear to be available yet) tested vit D supplementation for weight loss. The 3 groups were (0 IU/day, ~1,000 IU/day, ~3,000 IU/day). The observed average weight loss was (1 kg, 3.8 kg, 5.4 kg). I don't know if the 0 IU group received a placebo. Regardless, it looks promising.
On the other hand, this 2013 meta-analysis of 9 studies with 1651 adults total (mainly women) supposedly found no significant weight loss effect for vit D. However, the studies used between 200 IU/day and 1,100 IU/day, with most between 200 and 400 IU. Five of the studies included calcium, and five showed weight loss (not necessarily the same five - the paper is unclear). This does not show - at all - what the study claims in its abstract.
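As a back-of-the-envelope check on why the low-dose trials look null, we can fit a naive linear dose-response to the three group means reported for the Italian trial above. The group numbers come from the post; the linearity assumption is mine.

```python
# Toy linear dose-response fit to the reported group means
# (doses in IU/day, mean weight loss in kg) from the 2015 Italian trial.
doses = [0, 1000, 3000]
loss_kg = [1.0, 3.8, 5.4]

n = len(doses)
mx = sum(doses) / n
my = sum(loss_kg) / n
slope = sum((x - mx) * (y - my) for x, y in zip(doses, loss_kg)) / \
        sum((x - mx) ** 2 for x in doses)
intercept = my - slope * mx

def predicted_loss(dose_iu):
    return intercept + slope * dose_iu

# Extra weight loss a 400 IU/day arm would show over placebo, under
# this (assumed linear) model:
extra_at_400 = predicted_loss(400) - predicted_loss(0)
```

Under this crude fit, a 400 IU/day arm would separate from placebo by only about half a kilogram - an effect a modestly sized trial can easily fail to detect, which is consistent with the small-but-nonsignificant pattern in the meta-analysis.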
In general, medical researchers should not be doing statistics. That is a job for the tech industry.
Now the vit D and sunlight issue is complex, and it will take much research to really work out all of what is going on. The current medical system does not appear to be handling this well - why? Because there is insufficient financial motivation.
Is Big Pharma interested in the sunlight/vit D question? Well yes - but only to the extent that they can create a patentable analogue! The various vit D analogue drugs developed or in development is evidence that Big Pharma is at least paying attention. But assuming that the sunlight hypothesis is mainly correct, there is very little profit in actually fixing the real problem.
There is probably more to sunlight than just vit D and serotonin/melatonin. Consider the interesting correlation between birth month and a number of disease conditions[6]. Perhaps there is a little grain of truth to astrology after all.
Thus concludes my little vit D pitch.
In a more sane world I would have already bet on the general theory. In a really sane world it would have been solved well before I would expect to make any profitable trade. In that rational world you could actually trust health advertising, because you'd know that health advertisers are strongly financially motivated to convince you of things actually truly important for your health.
Instead of charging by the hour or per treatment, like a mechanic, doctors and healthcare companies should literally invest in their patients long-term health, and profit from improvements to long term outcomes. The sunlight health connection is a trillion dollar question in terms of medical value, but not in terms of exploitable profits in today's reality. In a properly constructed market, there would be enormous resources allocated to answer these questions, flowing into legions of profit motivated startups that could generate billions trading on computational health financial markets, all without selling any gadgets.
So in conclusion: the market could solve health, but only if we allowed it to, and only if we set up appropriate financial mechanisms to encode the correct value function. This is the UFAI problem next door.
The Mr. Hyde of Oxytocin
What comes to mind when you hear the word ‘oxytocin?’ Is it ‘love’, ‘cuddle hormone’, ‘bliss?’ If so, you may be more aware of the Dr. Jekyll of oxytocin rather than the Mr. Hyde. Oxytocin, just like almost every biochemical molecule, is hormetic. It confers positive effects in one context, but negative in another. In the case of oxytocin, a person with a secure attachment style interacting with a familiar group of people that he/she likes, will experience the positive effects of oxytocin. However, someone with an anxious attachment style interacting with a group of people that he/she does not yet fully feel trusting and familiar with will experience the negative effects of oxytocin. Why does the same molecule produce pro-social effects for one person, yet anti-social for another?
Oxytocin redirects more attentional resources towards noticing social stimuli. This increase in the salience of social information enhances the ability to detect expressions, recognize faces, and other social cues. The effect of increased social cognitive abilities is constrained by personality traits and situational context, resulting in either anti-social or pro-social behavior.
Oxytocin also promotes more interest in social cues by increasing affiliative motivation, a desire to get along with others. The increase in affiliative motivation results in pro-social behavior if the person already tends towards having an interest in bonding with people outside their close friend circle. However, an increase in affiliative motivation for those with anxious attachment styles results in a stronger pursuit to feel closer to only the person he/she is attached to.
A couple, Tom and Mary, have just moved to a new town and are attending their first service at a new church. Tom has a secure attachment style and isn’t prone to social anxieties. Tom is optimistic, has a positive bias, is generally content, and sees people as good, trusting, and friendly. Mary has an anxious attachment style, a negative bias, social anxiety, baseline mood neutral, and sees people as potential threats, competitors, untrustworthy, selfish, and egotistical. During the service, Tom and Mary’s oxytocin levels increase by being in a community. As a result of their different dispositions, Tom exhibits the Dr. Jekyll of oxytocin, whereas Mary exhibits the Mr. Hyde.
At the end of the service, Mary determines that she doesn’t like the church, whereas Tom thinks it is perfect. Mary felt that the people were judgmental and that they didn’t like her and Tom. Tom felt that the people were friendly, accepting, and eager for them to join.
Most social cues are ambiguous. A person’s character traits are instrumental in interpreting the cues as negative or positive. Tom is more likely to interpret facial expressions as positive, whereas Mary sees them as negative. Tom interprets neutral expressions to indicate acceptance, kindness, and friendliness. Mary sees neutral expressions as judgmental and unkind. This creates a fear of rejection, feeling threatened, and propagates a negative bias.
The increase in oxytocin leads to quicker detection and interpretation of facial expressions. Interpreting inchoate facial expressions fosters interpretations based on expectations versus what is actually intended. A person is starting to smile, but before the smile is developed, Mary believes that the person is about to laugh and ridicule her. Mary then scowls at her, turning what was going to be a smile into a negative expression. Tom interprets the inchoate expression as a smile, smiles, and turns the inchoate expression into a genuine smile.
Oxytocin amplifies one’s character traits of pro-social or anti-social tendencies. Oxytocin does increase the feelings of bonding for all, but in different ways. People with pro-social tendencies will feel closer to their communities and greater circle of friends. People with anti-social tendencies will just feel closer to their close circle of friends and people they already trust.
Cross-posted from my blog: https://evolvingwithtechnology.wordpress.com.
References:
http://dept.psych.columbia.edu/~kochsner/pdf/Bartz_et_al_2011_Social_oxytocin.pdf
http://www.attachedthebook.com/about-the-book/ by Amir Levine and Rachel Heller.
Why capitalism?
Note: I'm terrible at making up titles, and I think that the one I gave may give the wrong impression. If anyone has a suggestion on what I should change it to, it would be much appreciated.
As I've been reading articles on Less Wrong, it seems to me that there are hints of an underlying belief that not only is capitalism a good economic paradigm, it shall remain so. Now, I don't mean to say anything like 'Capitalism is Evil!' I think that capitalism can, and has, done a lot of good for humanity.
However, I don't think that capitalism will be the best economic paradigm going into the future. I used to view capitalism as an inherent part of the society we currently live in, with no real economic competition.
I recently changed my views as a result of a book someone recommended to me, 'The Zero Marginal Cost Society' by Jeremy Rifkin. In it, the author states that we are in the midst of a third industrial revolution, the result of a new energy/production and communications matrix: renewable energies, 3-D printing, and the internet.
The author claims that these three things will eventually bring their respective sectors' marginal costs to zero. This is significant because of a 'contradiction at the heart of capitalism' (I'm not sure how to phrase this, so excuse me if I butcher it): competition is at the heart of capitalism, with companies constantly undercutting each other as a result of new technologies. These technological improvements allow a company to produce goods/services at a more attractive price whilst retaining a reasonable profit margin. As a result, we get better and better at producing things at ever decreasing costs. But what happens when the costs of producing something hit rock bottom? That is, when they can go no lower?
3D printing presents a situation like this for a huge amount of industries, as all you really need to do is get some designs, plug in some feedstock and have a power source ready. The internet allows people to share their designs for almost zero cost, and renewable energies are on the rise, presenting the avenue of virtually free power. All that's left is the feedstock, and the cost of this is due to the difficulty of producing it. Once we have better robotics, you won't need anyone to mine/cultivate anything, and the whole thing becomes basically free.
And when you can get your goods, energy and communications for basically free, doesn't that undermine the whole capitalist system? Of course, the arguments presented in the book are much more comprehensive, and it details an alternative economic paradigm called the Commons. I'm just paraphrasing here.
Since my knowledge of economics is woefully inadequate, I was wondering if I've made some ridiculous blunder which everyone on this site knows about. Is there some fundamental reason why Jeremy Rifkin is a crackpot and I'm a fool for listening to him? Or is it more subtle than that? I ask because I found the arguments in the book pretty compelling, and I want some opinions from people who are much better suited to critiquing this sort of thing than I am.
Here is a link to the download page for the essay titled 'The Comedy of the Commons', which provides some of the arguments which convinced me:
http://digitalcommons.law.yale.edu/fss_papers/1828/
A lecture about the Commons itself:
http://www.nobelprize.org/nobel_prizes/economic-sciences/laureates/2009/ostrom_lecture.pdf
And a paper (?) about governing the commons:
http://www.kuhlen.name/MATERIALIEN/eDok/governing_the_commons1.pdf
And here is a link to the author's page, along with some links to articles about the book:
http://www.thezeromarginalcostsociety.com/pages/Milestones.cfm
http://www.thezeromarginalcostsociety.com/pages/Press--Articles.cfm
An article displaying some of the sheer potential of 3D printers, and how it has the potential to change society in a major way:
http://singularityhub.com/2012/08/22/3d-printers-may-someday-construct-homes-in-less-than-a-day/
Edit: Drat! I forgot about the stupid questions thread. Should I delete this and repost it there? I mean, I hope to discuss this topic with others, so it seems suitable for the DISCUSSION board, but it may also be very stupid. Advice would be appreciated.
Announcing LessWrong Digest
I've been making rounds on social media with the following message.
Great content on LessWrong isn't as frequent as it used to be, so not as many people read it as frequently. This makes sense. However, I read it at least once every two days for personal interest. So, I'm starting a LessWrong/Rationality Digest, which will be a summary of all posts or comments exceeding 20 upvotes within a week. It will be like a newsletter. Also, it's a good way for those new to LessWrong to learn cool things without having to slog through online cultural baggage. It will never be more than once weekly. If you're curious, here is a sample of what the Digest will be like.
https://docs.google.com/document/d/1e2mHi7W0H2toWPNooSq7QNjEhx_xa0LcLw_NZRfkPPk/edit
Also, major blog posts or articles from related websites, such as Slate Star Codex and Overcoming Bias, or publications from the MIRI, may be included occasionally. If you want on the list send an email to:
lesswrongdigest *at* gmail *dot* com
Users of LessWrong itself have noticed this 'decline' in frequency of quality posts on LessWrong. It's not necessarily a bad thing, as much of the community has migrated to other places, such as Slate Star Codex, or even into meatspace with various organizations, meetups, and the like. In a sense, the rationalist community outgrew LessWrong as a suitable and ultimate nexus. Anyway, I thought you as well would be interested in a LessWrong Digest. If you or your friends:
- find articles in 'Main' are too infrequent, and Discussion only filled with announcements, open threads, and housekeeping posts, to bother checking LessWrong regularly, or,
- are busying themselves with other priorities, and are trying to limit how distracted they are by LessWrong and other media
the LessWrong Digest might work for you, and as a suggestion for your friends. I've fielded suggestions that I transform this into a blog, Tumblr, or other format suitable for an RSS feed. Almost everyone is happy with the email format right now, but if a few people express an interest in a blog or RSS format, I can make that happen too.
Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 104
New chapter!
This is a new thread to discuss Eliezer Yudkowsky’s Harry Potter and the Methods of Rationality and anything related to it. This thread is intended for discussing chapter 104.
There is a site dedicated to the story at hpmor.com, which is now the place to go to find the author's notes and all sorts of other goodies. AdeleneDawner has kept an archive of Author's Notes. (This goes up to the notes for chapter 76, and is no longer updating. The author's notes from chapter 77 onwards are on hpmor.com.)
Spoiler Warning: this thread is full of spoilers. With few exceptions, spoilers for MOR and canon are fair game to post, without warning or rot13. More specifically: You do not need to rot13 anything about HP:MoR or the original Harry Potter series unless you are posting insider information from Eliezer Yudkowsky which is not supposed to be publicly available (which includes public statements by Eliezer that have been retracted).
If there is evidence for X in MOR and/or canon then it’s fine to post about X without rot13, even if you also have heard privately from Eliezer that X is true. But you should not post that “Eliezer said X is true” unless you use rot13.
Research Priorities for Artificial Intelligence: An Open Letter
The Future of Life Institute has published their document Research priorities for robust and beneficial artificial intelligence and written an open letter for people to sign indicating their support.
Success in the quest for artificial intelligence has the potential to bring unprecedented benefits to humanity, and it is therefore worthwhile to research how to maximize these benefits while avoiding potential pitfalls. This document gives numerous examples (which should by no means be construed as an exhaustive list) of such worthwhile research aimed at ensuring that AI remains robust and beneficial.
Who are your favorite "hidden rationalists"?
Quick summary: "Hidden rationalists" are what I call authors who espouse rationalist principles, and probably think of themselves as rational people, but don't always write on "traditional" Less Wrong-ish topics and probably haven't heard of Less Wrong.
I've noticed that a lot of my rationalist friends seem to read the same ten blogs, and while it's great to have a core set of favorite authors, it's also nice to stretch out a bit and see how everyday rationalists are doing cool stuff in their own fields of expertise. I've found many people who push my rationalist buttons in fields of interest to me (journalism, fitness, etc.), and I'm sure other LWers have their own people in their own fields.
So I'm setting up this post as a place to link to/summarize the work of your favorite hidden rationalists. Be liberal with your suggestions!
Another way to phrase this: Who are the people/sources who give you the same feelings you get when you read your favorite LW posts, but who many of us probably haven't heard of?
Here's my list, to kick things off:
- Peter Sandman, professional risk communication consultant. Often writes alongside Jody Lanard. Specialties: Effective communication, dealing with irrational people in a kind and efficient way, carefully weighing risks and benefits. My favorite recent post of his deals with empathy for Ebola victims and is a major, Slate Star Codex-esque tour de force. His "guestbook comments" page is better than his collection of web articles, but both are quite good.
- Doug McGuff, MD, fitness guru and author of the exercise book with the highest citation-to-page ratio of any I've seen. His big thing is "superslow training", where you perform short and extremely intense workouts (video here). I've been moving in this direction for about 18 months now, and I've been able to cut my workout time approximately in half without losing strength. May not work for everyone, but reminds me of Leverage Research's sleep experiments; if it happens to work for you, you gain a heck of a lot of time. I also love the way he emphasizes the utility of strength training for all ages/genders -- very different from what you'd see on a lot of weightlifting sites.
- Philosophers' Mail. A website maintained by applied philosophers at the School of Life, which reminds me of a hippy-dippy European version of CFAR (in a good way). Not much science, but a lot of clever musings on the ways that philosophy can help us live, and some excellent summaries of philosophers who are hard to read in the original. (Their piece on Vermeer is a personal favorite, as is this essay on Simon Cowell.) This recently stopped posting new material, but the School of Life now collects similar work through The Book of Life.
Feedback requested by Intentional Insights on workbook conveying rational thinking about meaning and purpose to a broad audience
We at Intentional Insights would appreciate your help with feedback to optimize a workbook that conveys rational thinking about finding meaning and purpose in life to a broad audience. Last time we asked for your feedback, we changed our content offerings based on comments we received from fellow Less Wrongers, as you can see from the Edit to this post. We would be glad to update our beliefs again and revise the workbook based on your feedback.
For a bit of context, the workbook is part of our efforts to promote rational thinking to a broad audience and thus raise the sanity waterline. It’s based on research on how other societies besides the United States helped their citizens find meaning and purpose, such as research I did on the Soviet Union and Zuckerman did on Sweden and Denmark. It’s also based on research on the contemporary United States by psychologists such as Steger, Duffy and Dik, Seligman, and others.
The target audience is reason-minded youth and young adults, especially secular-oriented ones. The goal is to get such people to engage with academic research on how our minds work, and thus get them interested in exploring rational thinking more broadly, eventually getting them turned on to more advanced rationality, such as found on Less Wrong itself. The workbook is written in a style aimed to create cognitive ease, with narratives, personal stories, graphics, and research-based exercises.
Here is the link to the workbook draft itself. Any and all suggestions are welcomed, and thanks for taking the time to engage with this workbook and give your feedback – much appreciated!
A bit of word-dissolving in political discussion
I found Scott Alexander's steelmanning of the NRx critique to be an interesting, even persuasive critique of modern progressivism, having not been exposed to this movement prior to today. However, I am also equally confused at the jump from "modern liberal democracies are flawed" to "restore the divine right of kings!" I've always hated the quip "democracy is the worst form of government, except for all the others" (that we've yet tried), but I think it applies here.
Of course, with the prompting to state my own thoughts, I simply had to go and start typing them out. The following contains obvious traces of my own political leanings and philosophy (in short summary: if "Cthulhu only swims left", then I AM CTHULHU... at least until someone explains to me what a Great Old One is doing out of R'lyeh and in West Coast-flavored American politics), but those traces should be taken as evidence of what I believe rather than statements about it.
Because what I was actually trying to talk about, is rationality in politics. Because in fact, while it is hard, while it is spiders, all the normal techniques work on it. There is only one real Cardinal Sin of Attempting to be Rational in Politics, and it is the following argument, stated in generic form that I might capture it from the ether and bury it: "You only believe what you believe for political reasons!" It does not matter if those "reasons" are signaling, privilege, hegemony, or having an invisible devil on your shoulder whispering into your bloody ear: to impugn someone else's epistemology entirely at the meta-level without saying a thing against their object-level claims is anti-epistemology.
Now, on to the ranting! The following are more-or-less a semi-random collection of tips I vomited out for trying to deal with politics rationally. I hope they help. This is a Discussion post because Mark said that might be a good idea.
- Dissolve "democracy", and not just in the philosophical sense, but in the sense that there have been many different kinds of actually existing democracies. There are always multiple object-level implementations of any meta-level idea, and most political ideas are sufficiently abstract to count as meta-level. Even if, for purposes of a thought experiment, you find yourself saying, "I WILL ONLY EVER CONSIDER SYSTEMS THAT COUNT AS DEMOCRACY ACCORDING TO MY INTUITIVE DEMOCRACY-P() PREDICATE!", one can easily debate whether a mixed-member proportional Parliament performs better than a district-based bicameral Congress, or whether a pure Westminster system beats them both, or whether a Presidential system works better, or whatever. Particular institutional designs yield particular institutional behaviors, and successfully inducing complex generalizations across large categories of institutional designs requires large amounts of evidence -- just as it does in any other form of hierarchical probabilistic reasoning.
- Dissolve words like "democracy", "capitalism", "socialism", and "government" in the philosophical sense, and ask: what are the terminal goals democracy serves? How much do we support those goals, and how much do current democratic systems suffer approximation error by forcing our terminal goals to fit inside the hypothesis space our actual institutions instantiate? For however much we do support those goals, why do we shape these particular institutions to serve those goals, and not other institutions? For all values of X, mah nishtana ha-X hazeh mikol ha-X-im? ("why is this X different from all other Xs?") is a fundamental question of correct reasoning. (Asking the question of why we instantiate particular institutions in particular places, when one believes in democratic states, is the core issue of democratic socialism, and I would indeed count myself a democratic socialist. But you get different answers and inferences if you ask about schools or churches, don't you?)
- Learn first to explicitly identify yourself with a political "tribe", and next to consider political ideas individually, as questions of fact and value subject to investigation via epistemology and moral epistemology, rather than treating politics as "tribal". Tribalism is the mind-killer: keeping your own explicit tribal identification in mind helps you notice when you're being tribalist, and helps you distinguish your own tribe's customs from universal truths -- both aids to your political rationality. And yes, while politics has always been at least a little tribal, the particular form the tribes take varies through time and space: the division of society into a "blue tribe" and a "red tribe" (as oft-described by Yvain on Slate Star Codex), for example, is peculiar to late-20th-century and early-21st-century USA. Those colors didn't even come into usage until the 2000 Presidential election, and hadn't firmly solidified as describing seemingly separate nationalities until 2004! Other countries, and other times, have significantly different arrangements of tribes, so if you don't learn to distinguish between ideas and tribes, you'll not only fail at political rationality, you'll give yourself severe culture shock the first time you go abroad.
- General rule: you often think things are general rules of the world not because you have the large amount of evidence necessary to reason that they really are, but because you've seen so few alternatives that your subjective distribution over models contains only one or two models, both coarse-grained. Unquestioned assumptions always feel like universal truths from the inside!
- Learn to check political ideas by looking at the actually-existing implementations, including the ones you currently oppose -- think of yourself as bloody Sauron if you have to! This works, since most political ideas are not particularly original. Commons trusts exist, for example, the "movement" supporting them just wants to scale them up to cover all society's important common assets rather than just tracts of land donated by philanthropists. Universal health care exists in many countries. Monarchy and dictatorship exist in many countries. Religious rule exists in many countries. Free tertiary education exists in some countries, and has previously existed in more. Non-free but subsidized tertiary education exists in many countries. Running the state off oil revenue has been tried in many countries. Centrally-planned economies have been tried in many countries. And it's damn well easier to compare "Canadian health-care" to "American health-care" to "Chinese health-care", all sampled in 2014, using fact-based policy studies, than to argue about the Visions of Human Life represented by each (the welfare state, the Company Man, and the Lone Fox, let's say) -- which of course assumes consequentialism. In fact, I should issue a much stronger warning here: argumentation is an utterly unreliable guide to truth compared to data, and all these meta-level political conclusions require vast amounts of object-level data to induce correct causal models of the world that allow for proper planning and policy.
- This means that while the Soviet Union is not evidence for the total failure of "socialism" as I use the word, that's because I define socialism as a larger category of possible economies that strictly contains centralized state planning -- centralized state planning really was, by and large, a total fucking failure. But there's a rationality lesson here: in politics, all opponents of an idea will have their own definition for it, but the supporters will only have one. Learn to identify political terminology with the definitions advanced by supporters: these definitions might contain applause lights, but at least they pick out one single spot in policy-space or society-space (or, hopefully, a reasonably small subset of that space), while opponents don't generally agree on which precise point in policy-space or society-space they're actually attacking (because they're all opposed for their own reasons and thus not coordinating with each other).
- This also means that if someone wants to talk about monarchies that rule by religious right, or even about absolute monarchies in general, they do have to account for the behavior of the Arab monarchies today, for example. Or if they want to talk about religious rule in general (which very few do, to my knowledge, but hey, let's go with it), they actually do have to account for the behavior of Da3esh/ISIS. Of course, they might do so by endorsing such regimes, just as some members of Western Communist Parties endorsed the Soviet Union -- and this can happen by lack of knowledge, by failure of rationality, or by difference of goals.
- And then of course, there are the complications of the real world: in the real world, neither perfect steelman-level central planning nor perfect steelman-level markets have ever been implemented, anywhere, with the result that once upon a time, the Soviet economy was allocatively efficient and prices in capitalist West Germany were just as bad at reflecting relative scarcities as those in centrally-planned East Germany! The real advantage of market systems has ended up being the autonomy of firms, not allocative optimality (and that's being argued, right there, in the single most left-wing magazine I know of!). Which leads us to repeat the warning: correct conclusions are induced from real-world data, not argued from a priori principles that usually turn out to be wildly mis-emphasized if not entirely wrong.
- Learn to notice when otherwise uninformed people are adopting political ideas as attire to gain status by joining a fashionable cause. Keep in mind that what constitutes "fashionable" depends on the joiner's own place in society, not on your opinions about them. For some people, things you and I find low-status (certain clothes or haircuts) are, in fact, high-status. See Yvain's "Republicans are Douchebags" post for an example in a Western context: names that the American Red Tribe considers solid and respectable are viewed by the American Blue Tribe as "douchebag names".
- A heuristic that tends to immunize against certain failures of political rationality: if an argument does not base itself at all in facts external to itself or to the listener, but instead concentrates entirely on reinterpreting evidence, then it is probably either an argument about definitions, or sheer nonsense. This is related to my comments on hierarchical reasoning above, and also to the general sense in which trying to refute an object-level claim by meta-level argumentation is not even wrong, but in fact anti-epistemology.
- A further heuristic, usable on actual electioneering campaigns the world over: whenever someone says "values", he is lying, and you should reach for your gun. The word "values" is the single most overused, drained, meaningless word in politics. It is a normative pronoun: it directs the listener to fill in warm fuzzy things here without concentrating the speaker and the listener on the same point in policy-space at all. All over the world, politicians routinely seek power on phrases like "I have values", or "My opponent has no values", or "our values" or "our $TRIBE values", or "$APPLAUSE_LIGHT values". Just cross those phrases and their entire containing sentences out with a big black marker, and then see what the speaker is actually saying. Sometimes, if you're lucky (i.e., voting for a Democrat), they're saying absolutely nothing. Often, however, the word "values" means, "Good thing I'm here to tell you that you want this brand new oppressive/exploitative power elite, since you didn't even know!"
- As mentioned above, be very, very sure about what ethical framework you're working within before having a political discussion. A consequentialist and a virtue-ethicist will often take completely different policy positions on, say, healthcare, and have absolutely nothing to talk about with each other. The consequentialist can point out the utilitarian gains of universal single-payer care, and the virtue-ethicist can point out the incentive structure of corporate-sponsored group plans for promoting hard work and loyalty to employers, but they are fundamentally talking past each other.
- Often, the core matter of politics is how to trade off between ethical ideals that are otherwise left talking past each other, because society has finite material resources, human morals are very complex, and real policies have unintended consequences. For example, if we enact Victorian-style "poor laws" that penalize poverty for virtue-ethical reasons, the proponents of those laws need to be held accountable for accepting the unintended consequences of those laws, including higher crime rates, a less educated workforce, etc. (This is a broad point in favor of consequentialism: a rational consequentialist always considers consequences, intended and unintended, or he fails at consequentialism. A deontologist or virtue-ethicist, on the other hand, has license from his own ethics algorithm to not care about unintended consequences at all, provided the rules get followed or the rules or rulers are virtuous.)
- Almost all policies can be enacted more effectively with state power, and almost no policies can "take over the world" by sheer superiority of the idea all by themselves. Demanding that a successful policy should "take over the world" by itself, as everyone naturally turns to the One True Path, is intellectually dishonest, and so is demanding that a policy should be maximally effective in miniature (when tried without the state, or in a small state, or in a weak state) before it is justified for the state to experiment with it. Remember: the overwhelming majority of journals and conferences in professional science still employ frequentist statistics rather than Bayesianism, and this is 20 years after the PC revolution and the World Wide Web, and 40 years after computers became widespread in universities. Human beings are utility-satisficing, adaptation-executing creatures with mostly-unknown utility functions: expecting them to adopt more effective policies quickly by mere effectiveness of the policy is downright unrealistic.
- The Appeal to Preconceptions is probably the single Darkest form of Dark Arts, and it's used everywhere in politics. When someone says something to you that "stands to reason" or "sounds right", something that genuinely seems quite plausible but comes without any actual evidence, you need to interrogate your own beliefs and find the Equivalent Sample Size of the informative prior generating that subjective plausibility before you let yourself get talked into anything. This applies triply in philosophy.
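As a footnote to that last heuristic: for a conjugate Beta prior on a proportion, the Equivalent Sample Size has a simple closed form, which makes the exercise concrete. A minimal sketch (the helper names and the specific numbers are illustrative assumptions, not anything from the post):

```python
def beta_equivalent_sample_size(alpha, beta):
    """Equivalent sample size of a Beta(alpha, beta) prior on a proportion:
    the prior behaves as if alpha + beta pseudo-observations were already seen."""
    return alpha + beta

def posterior(alpha, beta, successes, failures):
    """Conjugate Beta-Binomial update: real data just adds to the pseudo-counts."""
    return alpha + successes, beta + failures

# A "strong intuition" that a policy works ~70% of the time, held as firmly
# as if we'd already watched 20 trials, corresponds roughly to Beta(14, 6).
prior = (14.0, 6.0)
print(beta_equivalent_sample_size(*prior))  # 20.0 pseudo-observations

# 10 real observations (3 successes, 7 failures) barely move the posterior mean:
a, b = posterior(*prior, successes=3, failures=7)
print(a / (a + b))  # ~0.567: dragged only partway from 0.70 toward the data's 0.30
```

The point of the exercise is the last line: if a mere "sounds right" is doing the work of 20 observations in your head, you know exactly how much real-world data it should take to talk you out of it.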
The Centre for Effective Altruism is hiring to fill five roles in research, operations and outreach
The Centre for Effective Altruism, the group behind 80,000 Hours, Giving What We Can, the Global Priorities Project, Effective Altruism Outreach, and to a lesser extent The Life You Can Save and Animal Charity Evaluators, is looking to grow its team with a number of new roles:
- Giving What We Can: Director of Research
- Giving What We Can: Communications Manager
- 80,000 Hours: Head of Research
- Central CEA: Chief Operating Officer
- Global Priorities Project: Research Fellow (accepting expressions of interest at this point)
- We are also looking for 'graduate volunteers' for Giving What We Can in 2015, particularly over the summer
We are so keen to find great people that if you introduce us to someone new who we end up hiring, we will pay you $1,000 for the favour! If you know anyone awesome who would be a good fit for us please let me know: robert [dot] wiblin [at] centreforeffectivealtruism [dot] org. They can also book a short meeting with me directly.
We may be able to sponsor outstanding applicants from the USA.
Applications close Friday 5th December 2014.
Why is CEA an excellent place to work?
First and foremost, “making the world a better place” is our bottom line and central aim. We work on the projects we do because we think they’re the best way for us to make a contribution. But there’s more.
The specifics of what we are looking for depend on the role and details can be found in the job descriptions. In general, we're looking for people who have many of the following traits:
- Self-motivated, hard-working, and independent;
- Able to deal with pressure and unfamiliar problems;
- Driven by a strong desire for personal development;
- Able to quickly master complex, abstract ideas, and solve problems;
- Able to communicate clearly and persuasively in writing and in person;
- Comfortable working in a team and quick to get on with new people;
- Able to lead a team and manage a complex project;
- Keen to work with a young team in a startup environment;
- Deeply interested in making the world a better place in an effective way, using evidence and research;
- A good understanding of the aims of the Centre for Effective Altruism and its constituent organisations.
I hope to work at CEA in the future. What should I do now?
Of course this will depend on the role, but generally good ideas include:
- Study hard, including gaining useful knowledge and skills outside of the classroom.
- Degrees we have found provide useful training include: philosophy, statistics, economics, mathematics and physics. However, we are hoping to hire people from a more diverse range of academic and practical backgrounds in the future. In particular, we hope to find new members of the team who have worked in operations or creative industries.
- Write regularly and consider starting a blog.
- Manage student and workplace clubs or societies.
- Work on exciting projects in your spare time.
- Found a start-up business or non-profit, or join someone else early in the life of a new project.
- Gain impressive professional experience in established organisations, such as those working in consulting, government, politics, advocacy, law, think-tanks, movement building, journalism, etc.
- Get experience promoting effective altruist ideas online, or to people you already know.
- Use 80,000 Hours' research to do a detailed analysis of your own future career plans.