Link: Evidence-Based Medicine Has Been Hijacked
John Ioannidis has written a very insightful and entertaining article about the current state of the movement which calls itself "Evidence-Based Medicine". The paper is available ahead of print at http://www.jclinepi.com/article/S0895-4356(16)00147-5/pdf.
As far as I can tell there is currently no paywall, though that may change later; send me an e-mail if you are unable to access it.
Retraction Watch interviews John about the paper here: http://retractionwatch.com/2016/03/16/evidence-based-medicine-has-been-hijacked-a-confession-from-john-ioannidis/
(Full disclosure: John Ioannidis is a co-director of the Meta-Research Innovation Center at Stanford (METRICS), where I am an employee. I am posting this not in an effort to promote METRICS, but because I believe the links will be of interest to the community)
Look for Lone Correct Contrarians
Related to: The Correct Contrarian Cluster, The General Factor of Correctness
(Content note: Explicitly about spreading rationalist memes, increasing the size of the rationalist movement, and proselytizing. I also regularly use the word 'we' to refer to the rationalist community/subculture. You might prefer not to read this if you don't like that sort of thing and/or you don't think I'm qualified to write about that sort of thing and/or you're not interested in providing constructive criticism.)
I've tried to introduce a number of people to this culture and the ideas within it, but it takes some finesse to get a random individual from the world population to keep thinking about these things and apply them. My personal efforts have been very hit-or-miss. Others have told me that they've been more successful. But I think there are many people who share my experience. This is unfortunate: we want people to be more rational and we want more rational people.
At any rate, this is not about the art of raising the sanity waterline, but the more general task of spreading rationalist memes. Some people naturally arrive at these ideas, but they usually have to find them through other people first. This is really about all of the people in the world who are like you probably were before you found this culture; the people who would care about it, and invest in it, as it is right now, if only they knew it existed.
I'm going to be vague for the sake of anonymity, but here it goes:
I was reading a book review on Amazon, and I really liked it. The writer felt like a kindred spirit. I immediately saw that they were capable of coming to non-obvious conclusions, so I kept reading. Then I checked their review history in the hope that I would find other good books and reviews. And it was very strange.
They did a bunch of stuff that very few humans do. They realized that nuclear power has risks but that the benefits heavily outweigh the risks given the appropriate alternative, and they realized that humans overestimate the risks of nuclear power for silly reasons. They noticed when people were getting confused about labels and pointed out the general mistake, as well as pointing out what everyone should really be talking about. They acknowledged individual and average IQ differences and realized the correct policy implications. They really understood evolution, they took evolutionary psychology seriously, and they didn't care if it was labeled as sociobiology. They used the word 'numerate.'
And the reviews ranged over more than a decade of time. These were persistent interests.
I don't know what other people do when they discover that a stranger like this exists, but the first thing that I try to do is talk to them. It's not like I'm going to run into them on the sidewalk.
Amazon had no messaging feature that I could find, so I looked for a website, and I found one. I found even more evidence, and that's certainly what it was. They were interested in altruism, including how it goes wrong; computer science; statistics; psychology; ethics; coordination failures; failures of academic and scientific institutions; educational reform; cryptocurrency, etc. At this point I considered it more likely than not that they already knew everything that I wanted to tell them, and that they already self-identified as a rationalist, or that they had a contrarian reason for not identifying as such.
So I found their email address. I told them that they were a great reviewer, that I was surprised that they had come to so many correct contrarian conclusions, and that, if they didn't already know, there was a whole culture of people like them.
They replied in ten minutes. They were busy, but they liked what I had to say, and as a matter of fact, a friend had already convinced them to buy Rationality: From AI to Zombies. They said they hadn't read much of it yet because it's so large, but they loved it so far and they wanted to keep reading.
(You might postulate that I found a review by a user like this on a different book because I was recommended this book and both of us were interested in Rationality: From AI to Zombies. However, the first review I read by this user was for a book on unusual gardening methods, which I found in a search for books about gardening methods. For the sake of anonymity, however, my unusual gardening methods must remain a secret. It is reasonable to postulate that there would be some sort of sampling bias like the one that I have described, but given what I know, it is likely that this is not that. You certainly could still postulate a correlation by means of books about unusual gardening methods, however.)
Maybe that extra push made the difference. Maybe if there hadn't been a friend, I would've made the difference.
Who knew that's how my morning would turn out?
As I've said in some of my other posts, but not in so many words, maybe we should start doing this accidentally effective thing deliberately!
I know there's probably controversy about whether or not rationalists should proselytize, but I've been in favor of it for a while. And if you're like me, then I don't think this is a very special effort to make. I'm sure sometimes you see a little thread, and you think, "Wow, they're a lot like me; they're a lot like us, in fact; I wonder if there are other things too. I wonder if they would care about this."
Don't just move on! That's Bayesian evidence!
I dare you to follow that path to its destination. I dare you to reach out. It doesn't cost much.
And obviously there are ways to make yourself look creepy or weird or crazy. But I said to reach out, not to reach out badly. If you could figure out how to do it right, it could have a large impact. And these people are likely to be pretty reasonable. You should keep a lookout in the future.
Speaking of the future, it's worth noting that I ended up reading the first review because of an automated Amazon book recommendation and subsequent curiosity. You know we're in the data. We are out there and there are ways to find us. In a sense, we aren't exactly low-hanging fruit. But in another sense, we are.
I've never read a word of the Methods of Rationality, but I have to shoehorn this in: we need to write the program that sends a Hogwarts acceptance letter to witches and wizards on their eleventh birthday.
Genetic "Nature" is cultural too
I'll admit it: I am confused about genetics and heritability. Not about the results of the various twin studies - Scott summarises them as "~50% of the variation is heritable and ~50% is due to non-shared environment", which seems generally correct.
But I am confused about what this means in practice, due to arguments like "contacts are very important for business success, rich people get many more contacts than poor people, yet business success is strongly correlated with genetic parent wealth" and such. Assuming that genetics strongly determines... most stuff... goes against so many things we know or think we know about how the world works. And by "we" I mean lots of different people with lots of different political views - genetic determinism means, for instance, that current variations in regulation and taxes are pretty unimportant for individual outcomes.
Now, there are many caveats about the genetic results, particularly that they measure the variance of a factor rather than its absolute importance (and hence you get results like variation in nutrition being almost invisible as an explanation for variation in height), but it's still hard to figure out what this all means.
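To put that caveat in symbols (a standard textbook formulation, not anything specific to the twin studies): heritability is a share of variance,

$$h^2 = \frac{\operatorname{Var}(G)}{\operatorname{Var}(G) + \operatorname{Var}(E)},$$

so if nearly everyone in the sample is adequately nourished, $\operatorname{Var}(\text{nutrition}) \approx 0$ and nutrition explains almost none of the observed variation in height, even though putting everyone on half rations would make everyone shorter.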
Then we have Scott's latest post, which points out that "non-shared environment" is not the same as "nurture", since it includes, for instance, dumb luck.
However, "heritable" is not the same as as "nature", either. For instance, sexism and racial prejudices, if they are widespread, come under the "heritable" effects rather than the "environment" ones. And then it gets even more confusing.
Widespread prejudice is not "environment". Rarer prejudice is.
For instance, imagine that we lived in a very sexist society where women were not allowed to work at all. Then there would be an extremely high, almost perfect, correlation between "having a Y chromosome" and "having a job". But this would obviously be susceptible to a cultural fix.
Obviously racial prejudice can have the same effect; the point covers anything visible. So a high heritability is compatible with genetics being a cause of competence, and/or with prejudice against visible genetic characteristics being important ("Our results indicate that we either live in a meritocracy or in a hive of prejudice!").
Note that as prejudices get less widespread, they move from showing up on the genetic-variation side to showing up on the environmental-variation side. So widespread prejudices create a "nature" effect, rarer ones create a "nurture" effect. Evenly reducing the magnitude of a prejudice, however, doesn't change the side it will show up on.
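Here's a minimal simulation of that point (my own toy numbers, nothing from the heritability literature): a prohibition keyed to a visible genetic trait shows up as a near-perfect "genetic" correlation when it is universal, and nearly vanishes from that side of the ledger when it is rare.

```python
import numpy as np

# Toy model: employment blocked by prejudiced gatekeepers who key on a
# visible genetic trait (here, lacking a Y chromosome). All numbers are
# made up for illustration.
rng = np.random.default_rng(0)
n = 100_000
has_y = rng.integers(0, 2, n).astype(bool)     # the visible genetic trait

def employed(prejudice_rate):
    wants_job = rng.random(n) < 0.95           # baseline employment, both sexes
    blocked = ~has_y & (rng.random(n) < prejudice_rate)
    return wants_job & ~blocked

for rate in (1.0, 0.05):                       # universal vs. rare prejudice
    r = np.corrcoef(has_y, employed(rate))[0, 1]
    print(f"prejudice rate {rate:.0%}: corr(Y chromosome, has job) = {r:.2f}")
# Prints roughly 0.95 when the prejudice is universal and roughly 0.09
# when it is rare: the same mechanism moves from "nature" to "nurture".
```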
Positional genetic goods: Beauty... and IQ?
Let's zoom in on one of those visible genetic characteristics: beauty. As Robin Hanson is fond of pointing out, beautiful people are more successful, and are judged as more competent and cooperative than they actually are. Therefore if we have a gene that increases both beauty and IQ, we would expect its impact on success to be high. In the presence of such a gene, the correlation between IQ and success would be higher than it should objectively be. This suggests a (small) note of caution on the "mutation load" hypothesis; if reducing mutation load increases factors such as beauty, then we would expect increased success without necessarily increased competence.
But is it possible that IQ itself is in part a positional good? Consider that success doesn't just depend on competence, but on social skills, the ability to present yourself well in an interview, and how managers and peers judge you. If IQ affects or covaries with one or another of those skills, then we would be overestimating the importance of IQ to competence. Thus attempts to genetically boost IQ could have less impact than expected. The person whose genome was changed would benefit, but at the (partial) expense of everyone else.
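Here's a sketch of that inflation effect (again my own toy numbers and arbitrary effect sizes, not anything from Hanson): the causal path from IQ through competence is identical in both runs, but linking beauty to the same gene inflates the measured IQ-success correlation.

```python
import numpy as np

# Toy model: success = competence + a beauty "halo". When one
# pleiotropic gene drives both IQ and beauty, the observed IQ-success
# correlation overstates IQ's contribution via competence.
rng = np.random.default_rng(0)
n = 100_000
gene = rng.normal(size=n)                      # pleiotropic factor
iq = gene + rng.normal(size=n)
competence = iq + rng.normal(size=n)

for label, beauty in [("beauty linked to the gene", gene + rng.normal(size=n)),
                      ("beauty independent", rng.normal(size=n))]:
    success = competence + beauty + rng.normal(size=n)
    r = np.corrcoef(iq, success)[0, 1]
    print(f"{label:<26}: corr(IQ, success) = {r:.2f}")
# Prints ~0.75 with the pleiotropic gene vs. ~0.58 without it, even
# though IQ's causal effect on competence is the same in both runs.
```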
Do people know of experiments (or planned experiments) that disentangle these issues?
On Making Things
(Content note: This is basically just a story about how I accidentally briefly made something that I find very unfun into something very fun, for the sake of illustrating how surprising it was and how cool it would be if everyone could do things like this more often and deliberately. You also might get a kick out of this story in the way that you might get a kick out of How It's Made, or many of Swimmer963's posts on swimming and nursing, or Elo's post on wearing magnetic rings. If none of that interests you, then you might consider backing out now.)
I'm learning math under the tutelage of a friend, and I go through a lot of paper. I write a lot of proofs, so there are plenty of false starts. I could fill a whole sheet of paper, decide that I only need one result to continue on my way, and switch to a blank sheet. Since this is how I go about it, I thought that a whiteboard would be a really good idea: greater surface area and practical erasure.
I checked Amazon; whiteboards are one of those products with polarized reviews. I secretly wondered if ten percent of all whiteboards manufactured don't just immediately permanently stain. Maybe I was being a little risk-averse, but I decided to hold off on buying one.
Then I remembered that I make signs for a living, and I realized that I could probably just make a whiteboard myself.
I had a good rapport with my supervisor. I have breaks and lunch time, and the boundaries are kind of fuzzy, so the time wouldn't be an issue. I didn't have to print anything, so I wouldn't be taking up time on the printers or using ink.
Maybe everyone knows what 'vinyl' is and I don't need to explain this, but the stuff that 'PVC pipes' (PVC stands for polyvinyl chloride) are made out of can be formed into thin elastic sheets. Manufacturers apply adhesive and paper backing to these sheets and sell them to people so they can pull off the paper and stick the vinyl to stuff. You can print on some of it too. It comes on long rolls, typically 54 in. or 60 in., sort of like tape or paper towels. If you ever see a vehicle that belongs to a business with all sorts of art all over it, then it's probably printed on vinyl.
It's kind of hard to print on a really short roll without everything going horribly awry, so we have tons of rolls with like 10 ft. by 54 in. sheets on them that just get thrown away.
If you scratch a vinyl print, the ink will come right off. So we laminate the vinyl before we apply it. Most of our products are laminated with a laminate by the enigmatic name of '8518', but today we happened to be using a very particular and rarely used dry erase laminate. So naturally I ran one of those extra sheets of vinyl through the laminator after I finished the job that I was really supposed to be doing.
And we keep these things called 'drops', which are just sheets of substrate material, stuff that you might apply vinyl to or print on, that were cut off from other things that were made into signs, and then never touched again. Sometimes you can make a sign out of one. People forget about them and don't like to use them because they're usually dirtier and more damaged than stock substrate, so we have a ton of them. It might be corrugated plastic (like cardboard, but plastic), or foamboard (two pieces of paper glued to a sheet of foam), or much thicker, non-elastic PVC.
And this is when I started to think that this was becoming a kind of important experience.
I looked at the drops lined up on the shelf. I definitely didn't want to use foamboard; it's extremely fragile, you can't pull the vinyl off if you mess up, it would dent when I pressed too hard with the marker, and it most generally sucks in every way possible except cost. Corrugated plastic is also quite fragile, and it has linear indentations between the flutes that vinyl would conform to; I wanted the board to be flat. PVC is a better alternative than both, but drops can sit for a long time, and large sheets of PVC warp under their own weight; I wanted a relatively large board and I didn't want it to be warped. So I went for a product that we refer to as 'MaxMetal'; two sheets of aluminum sandwiched around a thicker sheet of plastic. It's much harder to warp, and I could be confident that it would be a solid writing surface. PVC is solid, but it's not metal.
I was looking through the MaxMetal drops, trying to find the right one, realizing that I hadn't decided what dimensions I wanted the board to be, and I felt a little jump in my chest. That was me finally noticing how much fun I was having. And immediately after that, I realized that even though I had implicitly expected to do everything that I had done, I was surprised at how much fun I was having. I had failed to predict how much fun I would have doing those things. It seemed like something worth fixing.
I finally chose a precisely cut piece that was approximately 30 in. wide by 24 in. high. And then I made the board. I separated some of the vinyl from the backing, and I cut off a strip of backing, and I applied part of the vinyl sheet to one edge of the board. I put the end of the sheet with the strip of stuck vinyl between two mechanical rollers, left the substrate flat, flipped the vinyl sheet over the top of the machine and past the top of the substrate sheet, pulled up more of the backing, and rolled it through to press the two sheets together while I pulled the backing off of the vinyl. I put the product on a table, turned it upside down, cut off the excess vinyl with my trusty utility knife, and rounded the corners off by half an inch for safety and aesthetics. I took an orange Expo marker to it, and made a giant signature, and it worked. A microfiber rag erased it just fine even after letting it sit for half an hour. I cut off some super heavy duty, I-promise-this-is-safe double-sided tape, rolled it up, and took it home, so I could mount the board to my bedroom wall. I made a pretty snazzy whiteboard for myself. It was cool.
There probably aren't a lot of signmakers on LessWrong, but there are a lot of programmers. I don't see them talk about this experience a lot, but I figure it's pretty similar; what it feels like to use something that you made, or watch it work. And I'm sure there are other people with other things.
But it seems worth saying explicitly, "Maybe you should make stuff because it's fun."
That was my main explanation for how fun it was, for a while. But there were a lot of other things when I thought about it more.
I technically had to solve problems, but they were relatively simple and rewarding to solve.
It felt a little forbidden, doing something creative for yourself at work when you're really only there to stay alive. Even a lame taboo is usually a nice kick.
And my time was taken up by responsibility: I was doing real work between all of those steps, so I could look forward to the next step in the creation process while doing something that I normally drag myself through. The day flew by when I started making that thing. When could I fit in some time for my whiteboard?
And it was fun because the meta-event was interesting; I never thought that I could do exactly the same work activity, and a small context change would change it from boring, old work to fun. I was laminating vinyl and fetching drops and rounding corners, but it wasn't for a vehicle wrap, or a sign, or a magnet; it was for my whiteboard, and that changed everything. I was glad that I noticed that, and hopeful that I could find a way to deliberately apply it in the future.
And I was using non-universal, in-demand skills that many people could acquire, but not instantly. It was cool to feel like I was being resourceful in a very particular way that most people never would.
And there weren't too many choices, and the choices weren't ambiguous. The dimensions of the board, including thickness, were limited to the dimensions of the drops, and I'd have to make very precise cuts through a hard material if I wanted a board that wasn't the size of an existing one. A whiteboard is mostly a plain white surface, there isn't much design to be done. I only had quarter-inch and half-inch corner rounders; it's one of those or square corners. What if I had more choices, either about the design of the board, or in a different domain with way more choices by default? I might be a human and regret every choice that I actually make because all of those other foregone choices combined are so much more salient.
And it seems helpful that the whiteboard was being made for a noble purpose: so that I could conserve paper and continue to study mathematics at the same time, and do so much more conveniently. I think it would have been less fun if I was making a whiteboard so that I could see what it's like to snap a whiteboard in half with cinder blocks and a bowling ball, or if I was making one because I just thought it would be cool to have one.
And instead of paying $30-$50, I paid nothing. It felt like I won.
I've thought for quite a while, though not on this level, that there should be an applied fun theory; it seems a bit strange not to go further with the idea that you could find deliberate ways to make your world more fun, and to try to make the present more fun, as opposed to just the distant future. And not in the way where you critically examine the suggestions that people usually generate when you ask for a list of activities that are popularly considered fun, but in the way where you predict that things are fun because you understand how fun works, and your predictions come true. Hopefully I've offered up something interesting with respect to that line of inquiry.
But of course, fun seems like just the sort of thing that you could easily overthink. At the very least it's not the sort of domain where you want deep theories that don't generate practical advice for too long. But I still think it seems worth thinking about.
AIFoom Debate - conclusion?
I've been going through the AIFoom debate, and both sides make sense to me. I intend to continue, but I'm wondering if there are already insights in LW culture I can get if I just ask for them.
My understanding is as follows:
The difference between a chimp and a human is only 5 million years of evolution. That's not time enough for many changes.
Eliezer takes this as proof that the difference in brain architecture between the two can't be much. Thus, you can have a chimp-intelligent AI that doesn't do much, and then with some very small changes, suddenly get a human-intelligent AI and FOOM!
Robin takes the 5-million year gap as proof that the significant difference between chimps and humans is only partly in the brain architecture. Evolution simply can't be responsible for most of the relevant difference; the difference must be elsewhere.
So he concludes that when our ancestors got smart enough for language, culture became a thing. Our species stumbled across various little insights into life, and these got passed on. An increasingly massive base of cultural content, made of very many small improvements is largely responsible for the difference between chimps and humans.
Culture assimilated new information into humans much faster than evolution could.
So he concludes that you can get a chimp-level AI, and to get up to human-level will take, not a very few insights, but a very great many, each one slowly improving the computer's intelligence. So no Foom, it'll be a gradual thing.
So I think I've figured out the question. Is there a commonly known answer, or are there insights towards the same?
Use unique, non-obvious terms for nuanced concepts
Naming things! Naming things is hard. It's been claimed that it's one of the hardest parts of computer science. Now, this might sound surprising, but one of my favorite examples of naming done well is Kahneman's System 1 and System 2.
I want you to pause for a few seconds and consider what comes to mind when you read just the bolded phrase above.
Got it?
If you're familiar with the concepts of S1 and S2, then you probably have a pretty rich sense of what I'm talking about. Or perhaps you have a partial notion: "I think it was about..." or something. If you've never been exposed to the concept, then you probably have no idea.
Now, Kahneman could have reasonably named these systems lots of other things, like "emotional cognition" and "rational cognition"... or "fast, automatic thinking" and "slow, deliberate thinking". But now imagine that it had been "emotional and rational cognition" that Kahneman had written about, and the effect on the earlier paragraph.
It would be about the same for those who had studied it in depth, but now those who had heard about it briefly (or maybe at one point knew about the concepts) would be reminded of that one particular contrast between S1 and S2 (emotion/reason) and be primed to think that was the main one, forgetting about all of the other parameters that that distinction seeks to describe. Those who had never heard of Kahneman's research might assume that they basically knew what the terms were about, because they already have a sense of what emotion and reason are.
This is related to a concept known as verbal overshadowing, in which a verbal description of a scene can cause eyewitnesses to misremember the details of the scene. Words can disrupt lots of other things too, including our ability to think clearly about concepts.
An example of this in action is the Ask and Guess Culture model (and later Tell, and Reveal). People who are trying to use the models become hugely distracted by the particular names of the entities in the model, which have only a rough bearing on the nuanced elements of these cultures. Even after thinking about this a ton myself, I still found myself accidentally assuming that questions are an Ask Culture thing.
So "System 1" and "System 2" have several advantages:
- they don't immediately and easily seem like you already understand them if you haven't been exposed to that particular source
- they don't mislead people who do know them into assuming that the names contain the most important features
Another example that I think is decent (though not as clean as S1/S2) is Scott Alexander's use of Red Tribe and Blue Tribe to refer to culture clusters that roughly correspond to right and left political leanings in the USA. (For readers in most other countries: the US has its colors backwards... blue is left wing and red is right wing.) The colors make it reasonably easy to associate and remember, but unless you've read the post (or talked with someone who has) you won't necessarily know the jargon.
Jargon vs in-jokes
All of the examples I've listed above are essentially jargon—terminology that isn't available to the general public. I'm generally in favour of jargon! If you want to precisely and concisely convey a concept that doesn't already have its own word, then you have two options.
"Coining new jargon words (neologisms) is an alternative to formulating unusually precise meanings of commonly-heard words when one needs to convey a specific meaning." — fubarobfusco on a LW thread
Doing the latter is often safe when you're in a technical context. "Energy" is a colloquial term, but it also has a precise technical meaning. Since in technical contexts people will tend to assume that all such terms have technical meanings (or even learn said meanings early on), there is little risk of confusion here. Usually.
I'm going to make a case that it's worth treating nuanced concepts like in-jokes: don't make the meaning feel like it's in the term. Now, I'm not sold that this is a good idea all the time, but it seems to have some merit to it. I'm interested in where it works and where it doesn't; don't take this article to suggest I think it's universally good. Let's jam on where it's good.
Communication is built on shared understanding. Much of this comes from the commons: almost all of the words you're reading in this blog post are not words that you and I had to guarantee we both understood before I could write the post. Sometimes, blog posts (or books, lectures, etc.) will contain definitions, or will try to triangulate a concept with examples. The author hopes that the reader will indeed have a similar handle on the word they're using after reading the definition. (The reader may not, of course. They might also merely think they do. Or be confused.)
When you have the chance to interact with someone in real-time, 1-on-1, you can often gauge their understanding because they'll try to paraphrase the thing, and you can usually tell if the thing that they say is the kind of thing someone who understood would say. This is great, because then you can feel confident that you can use that concept as a building block in explaining further concepts.
One common failure mode of communication is when people assume that they're using the same building blocks as each other, when in fact, they're using importantly different concepts. This is the issue that rationalist taboo is designed to combat: forbid use of a confounding word and force the conversationalists to build the concept up from component parts again.
Another way to reduce the occurrence of this sort of thing is to use jargon and in-jokes, because then the person is going to draw a blank if they don't already have the shared understanding. You had to be there, and if you weren't, something key is obviously missing.
I once had a long conversation with someone, and we ended up using a lot of the objects we had with us as props when explaining certain concepts. This had the curious effect that if we wanted to reference our shared understanding of the earlier concept, we could refer to the object and it became really clear that it was our shared understanding we were referencing, not some more general thing. So I could say "the banana thing" to refer to him having explored the notion that evilness is a property of the map, not the territory, by remarking that a banana can't be evil but that we can think it evil.
The important thing here is that it felt like it was easier to point clearly at that topic by saying "the banana thing", because we both knew what that was and didn't need to accidentally overshadow it, by saying "the objects aren't evil thing" which might eventually get turned into a catchphrase that seems to contain meaning but never actually contained the critical insight.
This prompted me to think that it might be valuable to buy a bunch of toys from a thrift store, and to keep them at hand when hanging out with a particular person or small group. When you have a concept to explore, you'd grab an unused toy that seemed to suit it decently well, and then you'd gesture with it while explaining the concept. Then later you could refer to "the pink sparkly ball thing" or simply "this thing" while gesturing at the ball. Possibly, the other person wouldn't remember, or not immediately. But if they did, you could be much more confident that you were on the same page. It's a kind of shared mnemonic handle.

In some ways, this is already a natural part of human communication: I recall years ago talking to a friend and saying "oh, it's like the thing we talked about on my porch last summer" and she immediately knew what I meant. I'm basically proposing to take it further, by using props or by inventing new words.
Unfortunately, terms often end up losing their nuance, for various reasons. Sometimes this happens because the small concept they were trying to point at happens to be surrounded by a vacuum, so it expands. Other times because of shibboleths and people wanting to use in-group words. Or the words are used playfully and poetically, for humor purposes, which then makes it less clear that they once had a precise meaning.
This suggests there might be a kind of terminological inflation thing going on. And to the extent that signalling by using jargon is anti-inductive, that'll dilute things too.
I think if you're trying to think complex thoughts, it's worth developing specialized language, not just with groups of people, but even in 1-on-1 contexts. Of course, pay attention so you don't use terms with people who totally don't know them.
And this, this developing of shared language beyond what's strictly necessary but still worthwhile... this, perhaps, we might call the pink and purple ball thing.
(This article is crossposted from malcolmocean.com.)
Unofficial Canon on Applied Rationality
I have been thinking for a while that it would be useful if there was something similar to the Less Wrong Canon on Rationality for the CFAR material. Maybe, it could be called the 'CFAR Canon on Applied Rationality'. To start on this I have compiled a collection of descriptions for the CFAR techniques that I could find. I have separated the techniques into a few different sections. The sections and descriptions have mostly been written by me, with a lot of borrowing from other material, which means that they may not accurately reflect what CFAR actually teaches.
Please note that I have not attended any CFAR workshops, nor am I affiliated with CFAR in any way. My understanding of these techniques comes from CFAR videos, blogs and other websites which I have provided links to. If I have missed any important techniques or if my understanding of any of the techniques is incorrect or if you can provide links to the research that these techniques are based on, please let me know and I will update this post.
Warning:
Learning this material based solely on the descriptions written here may be unhelpful, arduous or even harmful. (See Duncan_Sabien's full comment for more information on this.) This is because the material is very hard to learn correctly. Most of the techniques below involve, in one way or another, volitionally overriding your instinctual, intuitive or ingrained behaviours and thoughts. These are thoughts that not only often feel enticing and alluring, but also often feel unmistakably right. If you are anything like me, then you should be very careful if you are trying to learn this material alone, for you will be prone to rationalization, taking shortcuts and making mistakes.
My recommendations for trying to learn this material are:
- learn it deeply and be sure to put what you have learnt into practice. It will often help if you take notes on what works for you and what doesn't. Also take note of the 'Mindsets and perspectives that help you in discovering potential situations that you could end up valuing' section as these are very important.
- get the help of experts or other people who have already expended great amounts of effort in trying to implement this material, like the people at CFAR. This will save you a great deal of stress and effort, as it will allow you to avoid a plethora of potential mistakes and inefficiencies. If you really want to learn this material, then you should seriously consider attending a CFAR workshop.
- get the help of or involve friends. As Duncan_Sabien has said:
"It is better on almost every axis with instructors, mentors, friends, companions—people to help you avoid the biggest pitfalls, help you understand the subtle points, tease apart the interesting implications, shore up your motivation, assist you in seeing your own mistakes and weaknesses. None of that is impossible on your own, but it's somewhere between one and two orders of magnitude more efficient and more efficacious with guidance."
- be dubious of your mental models. Beware thoughts and ideas that feel unequivocally right, especially if they are solely located internally rather than also being expressed or formulated externally.
- You might want to bookmark this page instead of reading it all at once as it is quite long.
Sections:
Is Spirituality Irrational?
[Originally published at Intentional Insights in response to Religious and Rational]
Spirituality and rationality seem completely opposed. But are they really?
To get at this question, let's start with a little thought experiment. Consider the following two questions:
1. If you were given a choice between reading a physical book (or an e-book) or listening to an audiobook, which would you prefer?
2. If you were given a choice between listening to music, or looking at the grooves of a phonograph record through a microscope, which would you prefer?
But I am more interested in the answer to a third question:
3. For which of the first two questions do you have a stronger preference between the two options?
Most people will have a stronger preference in the second case than the first. But why? Both situations are in some sense the same: there is information being fed into your brain, in one case through your ears and in the other through your eyes. So why should people's preference for ears be so much stronger in the case of music than books?
There is something in the essence of music that is lost in the translation between an audio and a visual rendering. The same loss happens for words too, but to a much lesser extent. Subtle shades of emphasis and tone of voice can convey essential information in spoken language. This is one of the reasons that email is so notorious for amplifying misunderstandings. But the loss is much greater in the case of music.
The same is true for other senses. Color is one example. A blind person can abstractly understand what light is, and that color is a byproduct of the wavelength of light, and that light is a form of electromagnetic radiation... yet there is no way for a blind person to experience subjectively the difference between red and blue and green. But just because some people can't see colors doesn't mean that colors aren't real.
The same is true for spiritual experiences.
Now, before I expand that thought, I want to give you my bona fides. I am a committed rationalist, and an atheist (though I don't like to self-identify as an atheist because I'd rather focus on what I *do* believe in rather than what I don't). So I am not trying to convince you that God exists. What I want to say is rather that certain kinds of spiritual experiences *might* be more than mere fantasies made up out of whole cloth. If we ignore this possibility we risk shutting ourselves off from a vital part of the human experience.
I grew up in the deep south (Kentucky and Tennessee) in a secular Jewish family. When I was 12 my parents sent me to a Christian summer camp (there were no other kinds in Kentucky back in those days). After a week of being relentlessly proselytized (read: teased and ostracized), I decided I was tired of being the camp punching bag and so I relented and gave my heart to Jesus. I prayed, confessed my sins, and just like that I was a member of the club.
I experienced a euphoria that I cannot render into words, in exactly the same way that one cannot render into words the subjective experience of listening to music or seeing colors or eating chocolate or having sex. If you have not experienced these things for yourself, no amount of description can fill the gap. Of course, you can come to an *intellectual* understanding that "feeling the presence of the holy spirit" has nothing to do with any holy spirit. You can intellectually grasp that it is an internal mental process resulting from (probably) some kind of neurotransmitter released in response to social and internal mental stimulus. But that won't allow you to understand *what it is like* any more than understanding physics will let you understand what colors look like or what music sounds like.
Happily, there are ways to stimulate the subjective experience that I'm describing other than accepting Jesus as your Lord and Savior. Meditation, for example, can produce similar results. It can be a very powerful experience. It can even become addictive, almost like a drug.
I am not necessarily advocating that you go try to get yourself a hit of religious euphoria (though I wouldn’t discourage you either -- the experience can give you some interesting and useful perspective on life). Instead, I simply want to convince you to entertain the possibility that people might profess to believe in God for reasons other than indoctrination or stupidity. Religious texts and rituals might be attempts to share real subjective experiences that, in the absence of a detailed modern understanding of neuroscience, can appear to originate from mysterious, subtle external sources.
The reason I want to convince you to entertain this notion is that an awful lot of energy gets wasted by arguing against religious beliefs on logical grounds, pointing out contradictions in the Bible and whatnot. Such arguments tend to be ineffective, which can be very frustrating for those who advance them. The antidote for this frustration is to realize that spirituality is not about logic. It's about subjective experiences that not everyone is privy to. Logic is about looking at the grooves. Spirituality is about hearing the music.
The good news is that adopting science and reason doesn’t mean you have to give up on spirituality any more than you have to give up on music. There are myriad paths to spiritual experience, to a sense of awe and wonder at the grand tapestry of creation, to the essential existential mysteries of life and consciousness, to what religious people call “God.” Walking in the woods. Seeing the moons of Jupiter through a telescope. Gathering with friends to listen to music, or to sing, or simply to share the experience of being alive. Meditation. Any of these can be spiritual experiences if you allow them to be. In this sense, God is everywhere.
The Philosophical Implications of Quantum Information Theory
I was asked to write up a pithy summary of the upshot of this paper. This is the best I could manage.
One of the most remarkable features of the world we live in is that we can make measurements that are consistent across space and time. By "consistent across space" I mean that you and I can look at the outcome of a measurement and agree on what that outcome was. By "consistent across time" I mean that you can make a measurement of a system at one time and then make the same measurement of that system at some later time and the results will agree.
It is tempting to think that the reason we can do these things is that there exists an objective reality that is "actually out there" in some metaphysical sense, and that our measurements are faithful reflections of that objective reality. This hypothesis works well (indeed, seems self-evidently true!) until we get to very small systems, where it seems to break down. We can still make measurements that are consistent across space and time, but as soon as we stop making measurements, then things start to behave very differently than they did before. The classical example of this is the two-slit experiment: whenever we look at a particle we only ever find it in one particular place. When we look continuously, we see the particle trace out an unambiguous and continuous trajectory. But when we don't look, the particle behaves as if it is in more than one place at once, a behavior that manifests itself as interference.
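To make the interference claim concrete (this is standard textbook quantum mechanics, not anything specific to the paper): let $\psi_1(x)$ and $\psi_2(x)$ be the amplitudes for reaching point $x$ on the screen via slit 1 and slit 2. When no one is looking, the distribution on the screen is

$$P(x) = \left|\psi_1(x) + \psi_2(x)\right|^2 = |\psi_1|^2 + |\psi_2|^2 + 2\,\mathrm{Re}\left[\psi_1^*(x)\,\psi_2(x)\right],$$

and the cross term is the interference pattern. Watching which slit the particle goes through destroys the cross term, leaving the one-place-at-a-time distribution $|\psi_1|^2 + |\psi_2|^2$.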
The problem of how to reconcile the seemingly incompatible behavior of physical systems depending on whether or not they are under observation has come to be called the measurement problem. The most common explanation of the measurement problem is the Copenhagen interpretation of quantum mechanics, which postulates that the act of measurement changes a system via a process called wave function collapse. In the contemporary popular press you will often read about wave function collapse in conjunction with the phenomenon of quantum entanglement, which is usually referred to as "spooky action at a distance", a phrase coined by Einstein and intended to be pejorative. For example, here's the headline and first sentence of one such piece:
More evidence to support quantum theory’s ‘spooky action at a distance’
"It’s one of the strangest concepts in the already strange field of quantum physics: Measuring the condition or state of a quantum particle like an electron can instantly change the state of another electron—even if it’s light-years away." (emphasis added)
This sort of language is endemic in the popular press as well as many physics textbooks, but it is demonstrably wrong. The truth is that measurement and entanglement are actually the same physical phenomenon. What we call "measurement" is really just entanglement on a large scale. If you want to see the demonstration of the truth of this statement, read the paper or watch the video or read the original paper on which my paper and video are based. Or go back and read about Von Neumann measurements or quantum decoherence or Everett's relative state theory (often mis-labeled "many-worlds") or relational quantum mechanics or the Ithaca interpretation of quantum mechanics, all of which turn out to be saying exactly the same thing.
Which is: the reason that measurements are consistent across space and time is not because these measurements are a faithful reflection of an underlying objective reality. The reason that measurements are consistent across space and time is because this is what quantum mechanics predicts when you consider only parts of the wave function and ignore other parts.
Specifically, it is possible to write down a mathematical description of a particle and two observers as a quantum mechanical system. If you ignore the particle (this is a formal mathematical operation called a partial trace of an operator matrix) what you are left with is a description of the observers. And if you then apply information-theoretic operations to that, what pops out is that the two observers are in classically correlated states. The exact same thing happens for observations made of the same particle at two different times.
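Here is a minimal numerical sketch of that computation (my own toy three-qubit version, not the actual model in the paper): a particle and two observer qubits end up in a GHZ-like entangled state after the "measurement", and tracing out the particle leaves the observers in a classically correlated mixed state.

```python
import numpy as np

# |psi> = (|000> + |111>)/sqrt(2): particle, observer A, observer B,
# after both observers have become entangled with ("measured") the particle.
psi = np.zeros(8)
psi[0b000] = 1 / np.sqrt(2)
psi[0b111] = 1 / np.sqrt(2)
rho = np.outer(psi, psi.conj())          # full 8x8 density matrix

# Partial trace over the particle (first qubit), keeping observers A and B.
rho = rho.reshape(2, 2, 2, 2, 2, 2)      # indices: (p, a, b, p', a', b')
rho_AB = np.einsum('iabicd->abcd', rho).reshape(4, 4)

print(np.round(rho_AB, 3))
# -> diag(0.5, 0, 0, 0.5): an equal classical mixture of "both saw 0"
#    and "both saw 1", with no off-diagonal (coherence) terms left --
#    the observers are classically correlated.
```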
The upshot is that nothing special happens during a measurement. Measurements are not instantaneous (though they are very fast) and they are in principle reversible, though not in practice.
The final consequence of this, the one that grates most heavily on the intuition, is that your existence as a classical entity is an illusion. Because measurements are not a faithful reflection of an underlying objective reality, your own self-perception (which is a kind of measurement) is not a faithful reflection of an underlying objective reality either. You are not, in point of metaphysical fact, made of atoms. Atoms are a very (very!) good approximation to the truth, but they are not the truth. At the deepest level, you are a slice of the quantum wave function that behaves, to a very high degree of approximation, as if it were a classical system but is not in fact a classical system. You are in a very real sense living in the Matrix, except that the Matrix you are living in is running on a quantum computer, and so you -- the very close approximation to a classical entity that is reading these words -- can never "escape" the way Neo did.
As a corollary to this, time travel is impossible, because in point of metaphysical fact there is no time. Your perception of time is caused by the accumulation of entanglements in your slice of the wave function, resulting in the creation of information that you (and the rest of your classically-correlated slice of the wave function) "remember". It is those memories that define the past, you could even say create the past. Going "back to the past" is not merely impossible it is logically incoherent, no different from trying to construct a four-sided triangle. (And if you don't buy that argument, here's a more prosaic one: having a physical entity suddenly vanish from one time and reappear at a different time would violate conservation of energy.)
Anxiety and Rationality
Recently, someone on the Facebook page asked if anyone had used rationality to target anxieties. I have, so I thought I’d share my LessWrong-inspired strategies. This is my first post, so feedback and formatting help are welcome.
First things first: the techniques developed by this community are not a panacea for mental illness. They are way more effective than chance and other tactics at reducing normal bias, and I think many mental illnesses are simply cognitive biases that are extreme enough to get noticed. In other words, getting a probability question about cancer systematically wrong does not disrupt my life enough to make the error obvious. When I believe (irrationally) that I will get fired because I asked for help at work, my life is disrupted. I become non-functional, and the error is clear.
Second: the best way to attack anxiety is to do the things that make your anxieties go away. That might seem too obvious to state, but I’ve definitely been caught in an “analysis loop,” where I stay up all night reading self-help guides only to find myself non-functional in the morning because I didn’t sleep. If you find that attacking an anxiety with Bayesian updating is like chopping down the Washington Monument with a spoon, but getting a full night’s sleep makes the monument disappear completely, consider the sleep. Likewise for techniques that have little to no scientific evidence, but are a good placebo. A placebo effect is still an effect.
Finally, like all advice, this comes with Implicit Step Zero: “Have enough executive function to give this a try.” If you find yourself in an analysis loop, you may not yet have enough executive function to try any of the advice you read. The advice for functioning better is not always identical to the advice for functioning at all. If there’s interest in an “improving your executive function” post, I’ll write one eventually. It will be late, because my executive function is not impeccable.
Simple updating is my personal favorite for attacking specific anxieties. A general sense of impending doom is a very tricky target and does not respond well to reality. If you can narrow it down to a particular belief, however, you can amass evidence against it.
Returning to my example about work: I alieved that I would get fired if I asked for help or missed a day due to illness. The distinction between believe and alieve is an incredibly useful tool that I immediately integrated when I heard of it. Learning to make beliefs pay rent is much easier than making harmful aliefs go away. The tactics are similar: do experiments, make predictions, throw evidence at the situation until you get closer to reality. Update accordingly.
The first thing I do is identify the situation and why it’s dysfunctional. The alief that I’ll get fired for asking for help is not actually articulated when it manifests as an anxiety. Ask me in the middle of a panic attack, and I still won’t articulate that I am afraid of getting fired. So I take the anxiety all the way through to its implication. The algorithm is something like this:
1. Notice sense of doom
2. Notice my avoidance behaviors (not opening my email, walking away from my desk)
3. Ask “What am I afraid of?”
4. Answer (it's probably silly)
5. Ask “What do I think will happen?”
6. Make a prediction about what will happen (usually the prediction is implausible, which is why we want it to go away in the first place)
In the “asking for help” scenario, the answer to “what do I think will happen” is implausible. It’s extremely unlikely that I’ll get fired for it! This helps take the gravitas out of the anxiety, but it does not make it go away.* After (6), it’s usually easy to do an experiment. If I ask my coworkers for help, will I get fired? The only way to know is to try.
…That’s actually not true, of course. A sense of my environment, my coworkers, and my general competence at work should be enough. But if it was, we wouldn’t be here, would we?
So I perform the experiment. And I wait. When I receive a reply of any sort, even if it’s negative, I make a tick mark on a sheet of paper. I label it “didn’t get fired.” Because again, even if it’s negative, I didn’t get fired.
This takes a lot of tick marks. Cutting down the Washington Monument with a spoon, remember?
The tick marks don’t have to be physical, but I prefer physical ones because they make the “updating” process visual. I’ve tried making a mental note and it’s not nearly as effective. Play around with it, though. If you’re anything like me, you have a lot of anxieties to experiment with.
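For the quantitatively inclined, here is a toy model of what the tick marks are doing in Bayesian terms (my own illustrative numbers, not from the post):

```python
# Toy Bayesian-updating model. Hypothesis H: "asking for help will get
# me fired." All probabilities below are made up for illustration.
p_h = 0.5                    # exaggerated anxious prior on H
p_tick_if_h = 0.10           # assumed: even if H were true, a "didn't get
p_tick_if_not_h = 0.99       # fired" tick would still happen 10% of the time

for tick in range(1, 6):     # five "asked for help, didn't get fired" ticks
    numer = p_tick_if_h * p_h
    p_h = numer / (numer + p_tick_if_not_h * (1 - p_h))   # Bayes' rule
    print(f"after tick {tick}: P(getting fired) = {p_h:.5f}")
# Each tick divides the odds on H by roughly 10 -- but notice how many
# ticks it takes before the number feels small, hence "a lot of tick marks".
```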
Usually, the anxiety starts to dissipate after obtaining several tick marks. Ideally, one iteration of experiments should solve the problem. But we aren’t ideal; we’re mentally ill. Depending on the severity of the anxiety, you may need someone to remind you that doom will not occur. I occasionally panic when I have to return to work after taking a sick day. I ask my husband to remind me that I won’t get fired. I ask him to remind me that he’ll still love me if I do get fired. If this sounds childish, it’s because it is. Again: we’re mentally ill. Even if you aren’t, however, assigning value judgements to essentially harmless coping mechanisms does not make sense. Childish-but-helpful is much better than mature-and-harmful, if you have to choose.
I still have tiny ugh fields around my anxiety triggers. They don’t really go away. It’s more like learning not to hit someone you’re angry at. You notice the impulse, accept it, and move on. Hopefully, your harmful alief starves to death.
If you perform your experiment and doom does occur, it might not be you. If you can’t ask your boss for help, it might be your boss. If you disagree with your spouse and they scream at you for an hour, it might be your spouse. This isn’t an excuse to blame your problems on the world, but abusive situations can be sneaky. Ask some trusted friends for a sanity check, if you’re performing experiments and getting doom as a result. This is designed for situations where your alief is obviously silly. Where you know it’s silly, and need to throw evidence at your brain to internalize it. It’s fine to be afraid of genuinely scary things; if you really are in an abusive work environment, maybe you shouldn’t ask for help (and start looking for another job instead).
*Using this technique for several months occasionally stops the anxiety immediately after step 6.