Hundreds of Less Wrong posts summarize or repackage work previously published in professional books and journals, but Less Wrong also hosts lots of original research in philosophy, decision theory, mathematical logic, and other fields. This post serves as a curated index of Less Wrong posts containing significant original research.
Obviously, there is much fuzziness about what counts as "significant" or "original." I'll be making lots of subjective judgment calls about which suggestions to add to this post. One clear rule is: I won't be linking anything that merely summarizes previous work (e.g. Stuart's summary of his earlier work on utility indifference).
Update 02/16/2013: Added Save the princess: A tale of AIXI and utility functions, Naturalism versus unbounded (or unmaximisable) utility options, A brief history of ethically concerned scientists, Beyond Bayesians and Frequentists, Domesticating reduced impact AIs
Update 03/25/2013: Added VNM agents and lotteries involving an infinite number of possible outcomes, Self-assessment in expert AI predictions, Why AI may not foom, A problem with "playing chicken with the universe" as an approach to UDT
Highly Advanced Epistemology 101 for Beginners. Eliezer's bottom-up guide to truth, reference, meaningfulness, and epistemology. Includes practical applications and puzzling meditations.
Counterfactual resiliency test for non-causal models. Stuart Armstrong suggests testing non-causal models for "counterfactual resiliency."
Thoughts and problems with Eliezer's measure of optimization power. Stuart Armstrong examines some potential problems with Eliezer's concept of optimization power.
Free will. Eliezer's particular compatibilist-style solution to the free will problem from a reductionist viewpoint.
The absolute Self-Selection Assumption. A clarification on anthropic reasoning, focused on Wei Dai’s UDASSA framework.
SIA, conditional probability, and Jaan Tallinn’s simulation tree. Stuart Armstrong builds a bridge between Nick Bostrom’s Self-Indication Assumption (SIA) and Jaan Tallinn’s simulation-tree picture of superintelligence reproduction.
Mathematical Measures of Optimization Power. Alex Altair tackles one approach to mathematically formalizing Yudkowsky’s Optimization Power concept.
Several weeks ago, the NYC Rationality Meetup Group began discussing outreach, both for rationality in general and the group in particular. A lot of interesting problems were brought up. Should we be targeting the average person, or sticking to the cluster of personality-types that Less Wrong already attracts? How quickly should we introduce people to our community? What are the most effective ways to spread the idea of rationality, and what are the most effective ways of actually encouraging people to undertake rational actions?
Those are all complex questions with complex answers, which are beyond the scope of this post. I ended up focusing on the question: "Is 'Rationality' the word we want to use when we're pitching ourselves?" I do not think it's worthwhile to try to change the central meme of the Less Wrong community, but it's not obvious that the new realspace communities now forming need to use the same central meme.
This begat a simpler question: "What does the average person think of when they hear the word 'Rationality'? What positive or negative connotations does it have?" Do they think of straw Vulcans and robots? Do they think of effective programmers or businessmen? Armed with this knowledge, we can craft a rationalist pitch that is likely to be effective with the average person, either by challenging their conception of rationality or by bypassing keywords that might set off memetic immune systems.
I've hated group projects since about Grade 3. I grew up assuming that at some point, working in groups would stop being a series of trials and tribulations, and turn into normal, sane people working just like they normally would, except on a shared problem. Either they would change, or I would change, because I am incredibly bad at teamwork, at least the kind that gets doled out in the classroom. I don’t have the requisite people skills to lead a group, but I’m too much of a control freak to meekly follow along when the group wants to do a B+ project and I would like an A+. Drama inevitably ensues.
I would like to not have this problem. An inability to work in teams seems like a serious handicap. There are very few jobs that don’t involve teamwork, and my choice of future career, nursing, involves a lot.
My first experiences in the workplace, as a lifeguard, made me feel a little better about this. There was a lot less drama and a lot more just getting the work done. I think that has a lot to do with a) the fact that we’re paid to do a job that’s generally pretty easy, and b) the fact that the job’s requirements are simple, even when they aren’t easy. There is drama, but it rarely involves guard rotations or who’s going to hose the deck, and I can safely ignore it. Rescues do involve teamwork, but it’s a specific sort of teamwork where the roles are all laid out in advance, and that’s what we spent most of our training learning. Example: in a three-guard scenario, the first guard to notice an unconscious swimmer in the water becomes guard #1: they make a whistle signal to the others and jump in, while guard #2 calls 911 and guard #3 clears the pool and does crowd control. There isn’t a lot of room for drama, and there isn’t much point, because there is one right way to do things, everyone knows the right way to do things, and there isn’t time to fight about it anyway.
I’m hoping that working as a nurse in a hospital will be more like this and less like the school-project variety of group work. The roles are defined and laid out; they’re what we’re learning right now in our theory classes. There’s less of a time crunch, but there’s still, usually, an obviously right way to do things. Maybe it gets more complicated when you have to approach a colleague for, say, not following the hand-hygiene rules, or when the rules the hospital management enforces are obviously not the best way to do things, but those are add-ons to the job, not its backbone.
But that’s for bedside nursing. Research is a different matter, and unfortunately, it’s a lot more like school. I’m taking a class about research right now, and something like 30% or 40% of our mark is on a group project. We have to design a study from beginning to end: problem, hypothesis, type of research, research proposal, population and sample, methods of measurement, methods of analysis, etc. My excuse that “I dislike this because it has absolutely no real-world relevance” is downright wrong, because we’re doing exactly what real researchers would do, only with far fewer resources and much less time, and I do like research and would like to work in that milieu someday.
Conflict with my group members usually arises because I’m more invested in the outcome than the others. I have more motivation to spend time on it, and a higher standard for "good enough". Even if I think the assignment is stupid, I want to do it properly, partly for grades and partly because I hate not doing things properly. I don’t want to lead the group, because I know I’m terrible at it, but no one else wants to either, because they don’t care either way. I end up feeling like a slave driver who isn’t very good at her job.
This time I had a new sort of problem. A group asked me to join them because they thought I was smart and would be a good worker. They set a personal deadline to have the project finished nearly a month before it was due. They had a group meeting, which I couldn’t go to because I was at work, and assigned sections, and sent out an email with an outline. I skimmed the email and put it aside for later, since it seemed less than urgent to me. ...And all of a sudden, at our next meeting, the project was nearly finished. No one had hounded me; they had just gone ahead and done it. Maybe they had a schema in their heads that hounding the non-productive members of the team would lead to drama, but I was offended, because I felt that in my case it wouldn’t have. I would have overridden my policy of doing my work at the last minute, and just gotten it done. It’s not like I didn’t care about our final grade.
My pride was hurt (the way my classmate told me was by looking at my computer screen in the library, where I’d started to do the part assigned to me in the outline, and saying “you might as well not save that, I already did it.”) I didn’t feel like fighting about it, so I emailed the prof and asked if I could do the project on my own instead of with a team. She seemed confused that I wanted to do extra work, but assented.
I didn’t want to do extra work. I wanted to avoid the work of team meetings, team discussions, team drama... But that’s not how real-world research works. Refusing to play their game means I lose an opportunity to improve my teamwork skills, and I’m going to need those someday, and not just the skills acquired through lifeguarding. Either I need to turn off my control-freak need to have things my way, or I need to become charismatic and good at leading groups, and to do either of those things, I need a venue to practice.
Does anyone else here have the same problem I do? Has anyone solved it? Does anyone have tips for ways to improve?
Edit: reply to comment by jwendy, concerning my 'other' kind of problem.
"I probably didn't say enough about it in the article, if you thought it seemed glossed over, but I thought a lot about why this happened at the time, and I was pretty upset (more than I should have been, really, over a school project) and that's why I left the group...because unlike type#2 team members, I actually cared a lotabout making a fair contribution and felt like shit when I hadn't. I never consciously decided to procrastinate, either...I just had a lot of other things on my plate, which is pretty much inevitable during the school year, and all of a sudden, foom!, my part of the project is done because one of the girls was bored on the weekend and had nothing better to do. (Huh? When does this ever happen?)
So I guess I'm like a type #2 member in that I procrastinate when I can get away with it, but like a type #1 member in that I do want to turn in quality work and get an A+. And I want it to be my quality work, not someone else's with my name on it."
I think I was justified in being surprised when this new kind of problem happened to me. If I'm more involved/engaged than all the students I've worked with in the past, that doesn't mean I'm the most engaged, but it does mean I have a schema in my brain for 'no one has their work finished until a week after they say they will'.
I think I’ve always had certain stereotypes in my mind about research. I imagine a cutting-edge workplace, maybe not using the newest gadgets because these things cost money, but at least using the newest ideas. I imagine the staff of research institutions applying the scientific method to boost their own productivity, instead of taking for granted the way that things have always been done. Maybe those were the naive ideas of someone who had never actually worked in a research field.
At the medical research institute where I work one day a week, I recently spent an entire seven-hour day going down a list of patient names, searching them on the hospital database, deciding whether they met the criteria for a study, and typing them into a colour-coded spreadsheet. The process had maybe six discrete steps, and all of them were purely mechanical. In seven hours, I screened about two hundred and fifty patients. I was paid $12.50 an hour to do this. It cost my employer 35 cents for each patient that I screened, and these patients haven't been visited, consented or included in any study. They're still only names on a spreadsheet. I’ve been told that I learn and work quickly, but I know I do this task inefficiently, because I’m not a simple computer program. I get bored. I make mistakes. Heaven forbid, I get distracted and start reading the nurses’ notes for fun because I find them interesting.
In seven hours, I imagine that someone slightly above my skill level could write a simple program to do the same task. They wouldn’t screen any patients in those seven hours, but once the program was finished, they could use it forever, or at least until the task changed and the program had to be modified. I don’t know how much it would cost the organization to employ a programmer; maybe it would cost more than just having me do it. I don’t know whether allowing that program to access the confidential database would be an issue. But it seems inefficient to pay human brains to do work that they’re bad at, that computers would be better at, even if those human brains belong to undergrad students who need the money badly enough not to complain.
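To make that concrete, here is a minimal sketch, in Python, of the sort of screening script I have in mind. Everything in it is hypothetical: the in-memory PATIENT_DB stands in for whatever query interface the real hospital database exposes, and the inclusion criteria are invented for illustration.

```python
import csv

# Hypothetical in-memory stand-in for the hospital database; in practice this
# would be a query against the real records system (assuming access were allowed).
PATIENT_DB = {
    "Patient A": {"on_dialysis": True, "clinic_city": "Hometown", "age": 54},
    "Patient B": {"on_dialysis": True, "clinic_city": "FarawayCity", "age": 61},
    "Patient C": {"on_dialysis": False, "clinic_city": "Hometown", "age": 47},
}

def meets_criteria(record):
    # Invented example criteria; the real study's inclusion criteria would go here.
    return (
        record["on_dialysis"]
        and record["clinic_city"] == "Hometown"
        and record["age"] >= 18
    )

def screen(names, outfile="screened_patients.csv"):
    """Look up each name, apply the criteria, and write the results to a spreadsheet."""
    with open(outfile, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["name", "eligible"])
        for name in names:
            record = PATIENT_DB.get(name)
            writer.writerow([name, bool(record and meets_criteria(record))])

if __name__ == "__main__":
    screen(PATIENT_DB.keys())
```

The real version would need access to the confidential database, an audit trail, and privacy review, and I can't judge those costs from where I sit; the point is only that the task itself is mechanical enough to automate.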
One of the criteria I looked at when screening patients was whether they did their dialysis at a clinic in my hometown. They have to be within driving distance, because my supervisor has to drive around the city and pick up blood samples to bring to our lab. I crossed out 30 names without even looking them up, because I could see at a glance that their clinic was in another city an hour’s drive away. How hard would it be to coordinate with the hospital in that city? Have the bloodwork analyzed there and the results emailed over? Maybe it would be non-trivially hard; I don’t know. I didn’t ask my supervisor, because it isn’t my job to make management decisions. But medical research benefits everyone. A study with more patients produces data that’s statistically more valid, even if those patients live an hour’s drive away.
The office where I work is filled with paper. Floor-to-ceiling shelves hold endless binders full of source documents. Every email has to be printed and filed in a binder. Even the nurses’ notes and patient charts are printed off the database. It’s a legal requirement. The result is that we have two copies of everything, one online and one on paper, consuming trees. Running a computer consumes fossil fuels, of course. I don’t know for sure which is more efficient, paper or digital, but I do know that keeping both is inefficient. I did ask my supervisor about this, and apparently it’s because digital records could be lost or deleted. How much would it take to make them durable enough?

I guess that, more than my supervisor, I see a future where software will do my job, where technology allows a study to be coordinated across the whole world, where digital storage will be reliable enough. But how long will it take for the laws and regulations to change? For people to change? I don’t know how many of my complaints are valid. Maybe this is the optimal way to do research, but it doesn’t feel like it. It feels like a papier-mâché of laws and habits and trial-and-error. It doesn't feel planned.
Eliezer Yudkowsky identifies scholarship as one of the Twelve Virtues of Rationality:
Study many sciences and absorb their power as your own. Each field that you consume makes you larger... It is especially important to eat math and science which impinges upon rationality: Evolutionary psychology, heuristics and biases, social psychology, probability theory, decision theory. But these cannot be the only fields you study...
First, consider the evangelical atheist community to which I belong. There is a tendency for lay atheists to write "refutations" of theism without first doing a modicum of research on the current state of the arguments. This can get atheists into trouble when they go toe-to-toe with a theist who did do his homework. I'll share two examples:
- In a debate with theist Bill Craig, agnostic Bart Ehrman paraphrased David Hume's argument that we can't demonstrate the occurrence of a miracle in the past. Craig responded with a PowerPoint slide showing Bayes' Theorem, and explained that Ehrman was only considering prior probabilities, when of course he needed to consider the relevant conditional probabilities as well. Ehrman failed to respond to this, and looked as though he had never seen Bayes' Theorem before. Had Ehrman practiced the virtue of scholarship on this issue, he might have noticed that much of the scholarly work on Hume's argument in the past two decades has involved Bayes' Theorem. He might also have discovered that the correct response to Craig's use of Bayes' Theorem can be found in pages 298-341 of J.H. Sobel’s Logic and Theism. (A worked form of the theorem appears after these two examples.)
- In another debate with Bill Craig, atheist Christopher Hitchens gave this objection: "Who designed the Designer? Don’t you run the risk… of asking 'Well, where does that come from? And where does that come from?' and running into an infinite regress?" But this is an elementary misunderstanding in the philosophy of science. Why? Because every successful scientific explanation faces the exact same problem. It’s called the “why regress” because no matter what explanation is given of something, you can always still ask “Why?” Craig pointed this out and handily won that part of the debate. Had Hitchens had even a passing understanding of scientific explanation, he could have avoided looking foolish, and also spent more time on substantive objections to theism. (One can give a "Who made God?" objection to theism that has some meat, but that's not the one Hitchens gave. Hitchens' objection concerned an infinite regress of explanations, which is just as much a feature of science as it is of theism.)
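To make the Ehrman example concrete, here is the general shape of the Bayesian treatment of miracle testimony found in that literature (a sketch of the standard formulation, not a transcription of Craig's slide). With M the hypothesis that the miracle occurred and E the testimonial evidence:

$$P(M \mid E) \;=\; \frac{P(E \mid M)\,P(M)}{P(E \mid M)\,P(M) + P(E \mid \neg M)\,P(\neg M)}$$

A vanishingly small prior P(M) does not settle the matter on its own; the posterior also depends on the likelihoods, that is, on how much more probable the testimony would be if the miracle had occurred than if it had not. That is the conditional-probability point Ehrman failed to address.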
The lesson I take from these and a hundred other examples is to employ the rationality virtue of scholarship. Stand on the shoulders of giants. We don't each need to cut our own path into a subject right from the point of near-total ignorance. That's silly. Just catch the bus on the road of knowledge paved by hundreds of diligent workers before you, and get off somewhere near where the road finally fades into fresh jungle. Study enough to have a view of the current state of the debate so you don't waste your time on paths that have already dead-ended, or on arguments that have already been refuted. Catch up before you speak up.
This is why, in more than 1000 posts on my own blog, I've said almost nothing that is original. Most of my posts instead summarize what other experts have said, in an effort to bring myself and my readers up to the level of the current debate on a subject before we try to make new contributions to it.
The Less Wrong community is a particularly smart and well-read bunch, but of course it doesn't always embrace the virtue of scholarship.
Consider the field of formal epistemology, an entire branch of philosophy devoted to (1) mathematically formalizing concepts related to induction, belief, choice, and action, and (2) arguing about the foundations of probability, statistics, game theory, decision theory, and algorithmic learning theory. These are central discussion topics at Less Wrong, and yet my own experience suggests that most Less Wrong readers have never heard of the entire field, let alone read any works by formal epistemologists, such as In Defense of Objective Bayesianism by Jon Williamson or Bayesian Epistemology by Luc Bovens and Stephan Hartmann.
A crucial question towards the beginning of any research project is, why should my group succeed in elucidating an answer to a question where others may have tried and failed?
Here's how I'm going about dividing the possible worlds, but I'm interested to see if anyone has any other strategies. First, the whole question is conditional on nobody having already answered the particular question you're interested in. So you first need an exhaustive lit review, one that should scale in intensity with how much effort you expect to actually expend on the project. Still nothing? These are the remaining possibilities:
1) Nobody else has ever thought of your question, even though all of the pieces of knowledge needed to formulate it have been known for years. If the field has many people involved, the probability of this is vanishingly small and you should systematically disabuse yourself of your fantasies if you think like this often. Still... if true, the prognosis: a good sign.
2) Nobody else has ever thought of your question, because it wouldn't have been ask-able without pieces of knowledge that were discovered just recently. This is common in fast-paced fields and it's why they can be especially exciting. The prognosis: a good sign, but work quickly!
3) Others have thought of your question, but didn't think it was interesting enough to devote serious attention to. We should take this seriously, as how informed others choose to allocate their attention is one of our better approximations to real prediction markets. So, the prognosis: bad sign. Figure out whether you can not only answer your question but validate its usefulness / importance, too.
4) Others have thought of your question, thought it was interesting, but have never tried to answer it because of resource or tech constraints, which you do not face. Prognosis: probably the best-case scenario.
5) Others have thought of your question and run the relevant tests, but failed to get any consistent / reliable results. It'd be nice if there were no publication bias but of course there is--people are much more likely to publish statistically significant, positive results. Due to this bias, it is sometimes hard to tell precisely how many dead skeletons and dismembered brains line your path, and because of this uncertainty you must assign this possibility a non-zero probability. The prognosis: a bad sign, but do you feel lucky?
6) Others have thought of your question, run the relevant tests, and failed to get consistent / reliable results, but used a different method than the one you will use. Your new tech might clear up some of the murkiness, but it's important here to be precise about which specific issues your method solves and which it doesn't. The prognosis: all things equal, a good sign.
These are the considerations we make when we decide whether to pursue a given topic. But even if you do choose to pursue the question, some of these possibilities have policy recommendations for how to proceed. For example, using new tech, even if it's not demonstrably better in all cases, seems like a good idea given the possibility of #6.
Most of the research on cognitive biases and other psychological phenomena that we draw on here is based on samples of students at US universities. To what extent are we uncovering human universals, and to what extent facts about these WEIRD (Western, Educated, Industrialized, Rich, and Democratic) sample sources? A paper in press at Behavioral and Brain Sciences reviews the evidence from studies that reach outside this group and highlights the many instances in which US students are outliers, including in many crucial studies in behavioural economics.
Epiphenom: How normal is WEIRD?
Henrich, J., Heine, S. J., & Norenzayan, A. (in press). The Weirdest people in the world? (PDF) Behavioral and Brain Sciences.
Broad claims about human psychology and behavior based on narrow samples from Western societies are regularly published in leading journals. Are such species-generalizing claims justified? This review suggests not only that substantial variability in experimental results emerges across populations in basic domains, but that standard subjects are in fact rather unusual compared with the rest of the species - frequent outliers. The domains reviewed include visual perception, fairness, categorization, spatial cognition, memory, moral reasoning and self‐concepts. This review (1) indicates caution in addressing questions of human nature based on this thin slice of humanity, and (2) suggests that understanding human psychology will require tapping broader subject pools. We close by proposing ways to address these challenges.