Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

How about testing our ideas?

31 [deleted] 14 September 2012 10:28AM

Related to:  Science: Do It Yourself, How To Fix Science, Rationality and Science posts from this sequence, Cargo Cult Science, "citizen science"

You think you have a good map; what you really have is a working hypothesis

You've done some thinking on human rationality, perhaps spurred by intuition or personal experience. Building on that, you did your homework and stood on the shoulders of other people's work, giving proper weight to expert opinion. You write an article on LessWrong; it gets upvoted, debated, and perhaps accepted and promoted as part of a "sequence". But now you'd like to do the thing that's been nagging you since the start: you don't want to be one of those insight junkies who consume fun, plausible ideas and forget to ever get around to testing them. Let's see how the predictions made by your model hold up! You dive into the literature in search of experiments that have conveniently already tested your idea.

It is possible there simply isn't any such experimental material, or that it is unavailable. Don't get me wrong: if I had to bet on it, I would say it is more likely than not that there is at least something similar to what you need. I would also bet that some things we wish were done haven't been so far, and are unlikely to be for a long time. In the past I've wondered whether we can expect CFAR or LessWrong to eventually do experimental work to test many of the hypotheses we've come up with based on fresh but unreliable insight, anecdotal evidence, and long, fragile chains of reasoning. This will not happen on its own.

With the mention of CFAR, the mind jumps to them running expensive experiments or giving long questionnaires to small samples of students and then publishing papers, like everyone else does. It is the respectable thing to do, and it may or may not be worth their effort. It seems doable. But the idea of LWers getting into the habit of testing their ideas on human rationality beyond the anecdotal seems utterly impractical. Or is it?

That ordinary people can band together to rapidly produce new knowledge is anything but a trifle

How useful would it be if we had a site visited by thousands or tens of thousands of people filling out forms or participating in experiments submitted by LessWrong posters or CFAR researchers? Something like this site. How useful would it be if we made such a data set publicly available? What if, in addition to this, we could mine data on how people use apps or an online rationality class? At this point you might be asking yourself whether building knowledge this way is even possible in fields that take years to study. A fair question, especially for tasks that require technical competence, and the answer is yes.

I'm sure many at this point have started wondering about what kinds of problems biased samples might create for us. It is important to keep in mind what kind of sample of people you get to participate in the experiment or fill out your form, since this determines how confident you are allowed to be about generalizations. Learning things about very specific kinds of people is useful too. Recall that this is hardly a unique problem; you can't really get away from it in the social sciences. WEIRD samples aren't weird in academia. And I didn't say the thousands or tens of thousands of people would need to come from our own little corner of the internet; indeed, they probably couldn't. There are many approaches to recruiting them and making the sample as good as we can. Sites like yourmorals.org tried a variety of approaches we could learn from. Even hiring people from Amazon Mechanical Turk can work out surprisingly well.
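The damage a biased sample does to a generalization is easy to see in a toy simulation. Here is a minimal sketch, with entirely made-up numbers: a hypothetical trait differs between heavy and light internet users, and an online survey over-recruits the heavy users, so its estimate of the population mean drifts while a uniform random sample of the same size stays close.

```python
import random

random.seed(0)

# Hypothetical population of 100,000 people, each with a made-up "score".
# Assume (for illustration only) heavy internet users score higher on it.
population = []
for _ in range(100_000):
    heavy_user = random.random() < 0.3  # 30% are heavy internet users
    score = random.gauss(60 if heavy_user else 50, 10)
    population.append((heavy_user, score))

true_mean = sum(s for _, s in population) / len(population)

# Biased sample: heavy users are five times as likely to respond online.
biased = [s for heavy, s in population
          if random.random() < (0.05 if heavy else 0.01)]
biased_mean = sum(biased) / len(biased)

# Uniform random sample of the same size, for comparison.
uniform = [s for _, s in random.sample(population, len(biased))]
uniform_mean = sum(uniform) / len(uniform)

print(f"true mean:    {true_mean:.1f}")
print(f"biased mean:  {biased_mean:.1f}")   # skews toward heavy users
print(f"uniform mean: {uniform_mean:.1f}")
```

The point is not the particular numbers, which are invented, but that over-recruiting one subgroup shifts the estimate in a predictable direction; knowing who your sample over-represents tells you which generalizations to distrust.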

LessWrong Science: We do what we must because we can

The harder question is whether the resulting data would be used at all. As we currently are? I don't think so. There are many publicly available data sets and plenty of opportunities to mine data online, yet we see little if any original analysis based on them here. Either we don't have norms encouraging this, or we don't have enough people comfortable with statistics doing it. Problems like this aren't immutable. The Neglected Virtue of Scholarship noticeably changed our community in a similarly profound way, with positive results. Feeling that more is possible, I think it is time for us to move in this direction.

Perhaps just creating a way to get the data will attract the right crowd; the quantified-self people are not out of place here. Perhaps LessWrong should become less of a site and more of a blogosphere. I'm not sure how, and I think for now the question is a distraction anyway. What clearly can be useful is to create a list of models and ideas we've already assimilated that haven't really been tested, or that are based on research that still awaits replication. At the very least this will help us be ready to update if relevant future studies show up. But I think that identifying any low-hanging fruit, designing some experiments or attempts at replication, and then going out there and trying to perform them can get us much more. If people have enough pull to get them done inside academia without community help, great; if not, we should seek alternatives.

Neurological reality of human thought and decision making; implications for rationalism.

3 Dmytry 22 January 2012 02:39PM

The human brain is a massively parallel system. The best such a system can do to accomplish anything efficiently and quickly is to have many small portions of the brain compute and submit partial answers, then progressively reduce, combine, and cherry-pick them. This is a process we seem to have almost no direct awareness of, and can only conjecture about indirectly; yet it is the only way thought can possibly work on such slowly clocked (~100-200 Hz), extremely parallel hardware, which consumes a good fraction of the body's nutrient supply.

Yet it is immensely difficult for us to think in terms of parallel processes. We have very little access to how the parallel processing in our heads works, and very limited ability to consider a parallel process in parallel in our heads. We are only aware of some serial-looking self-model within ourselves - a model that we can most easily consider - and we misperceive this model as the self, believing ourselves to be self-aware when we are only aware of the model we have equated with the self.

People aren't, for the most part, discussing how to structure this parallel processing for maximum efficiency or rationality, or applying that to their lives. It is mostly the serial processes that get discussed. The necessary, inescapable reality of how the mind works is sealed off from us: we are not directly aware of it, nor are we discussing and sharing how it works. And what little is available, we are not trained to think in those terms - the culture trains us to think in terms of a serial, semantic process that would utter things like "I think, therefore I am".

This is in a way depressing to realize.

But at the same time this realization brings hope - there may be a lot of low-hanging fruit left if the approach has not been well considered. I personally have been trying to think of myself as a parallel system with some agreement mechanism for a long while now. It does seem to be a more realistic way to think of oneself, in terms of understanding why you make mistakes and how to correct them, but at the same time, as with any complex approach that 'explains' existing phenomena, there is a risk of being able to 'explain' anything while understanding nothing.

I propose that we try to overcome the long-standing philosophical model of the mind as a singular, serial computing entity, and instead approach it from the parallel-computing angle. Literature is rife with references to "a part of me wanted", and perhaps we should take this as much more than allegory. Perhaps the way you work, when you decide to do or not do something, is really best thought of as a disagreement among multiple systems, with some arbitration mechanism forcing a default action. Perhaps training - the drill-response kind of training, not simply informing oneself - could allow us to make much better choices in real time: to arrive at choices rationally, rather than via a tug of war between regions that propose different answers, with the one that sends the strongest signal winning control.
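To make the arbitration rule concrete, here is a deliberately toy sketch - not a brain model, just the "strongest signal wins" decision rule described above, with subsystem names and signal strengths that are entirely made up for illustration:

```python
# Toy sketch of the arbitration mechanism described above: several
# subsystems each propose an action with a signal strength, and the
# arbiter simply lets the strongest signal win. All names and numbers
# here are illustrative, not a claim about actual neural mechanisms.

def arbitrate(proposals):
    """proposals: dict mapping action -> signal strength; strongest wins."""
    return max(proposals, key=proposals.get)

# Three "parts of me" weigh in on whether to keep working or take a break.
proposals = {
    "keep_working": 0.62,   # long-term-goal subsystem
    "take_break":   0.71,   # fatigue subsystem
    "check_phone":  0.55,   # novelty-seeking subsystem
}

print(arbitrate(proposals))  # "take_break" - a tug of war, not deliberation
```

On this picture, "training" would amount to changing the signal strengths the subsystems emit, so that the default winner of the tug of war is more often the rational choice.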

Of course, this needs to be done very cautiously: in complex and hard-to-think-about topics generally, it is easy to slip into fuzzy logic where each step contains a small fallacy, leading to such rapid divergence that you can prove or explain anything. The Freudian-style id/ego/superego - a simple explanation for literally everything that predicts nothing - is not what we want.

2011 Survey Results

94 Yvain 05 December 2011 10:49AM

A big thank you to the 1090 people who took the second Less Wrong Census/Survey.

Does this mean there are 1090 people who post on Less Wrong? Not necessarily. 165 people said they had zero karma, and 406 people skipped the karma question - I assume a good number of the skippers were people with zero karma or without accounts. So we can only prove that 519 people post on Less Wrong. Which is still a lot of people.

I apologize for failing to ask who had or did not have an LW account. Because there are a number of these failures, I'm putting them all in a comment to this post so they don't clutter the survey results. Please talk about changes you want for next year's survey there.

Of our 1090 respondents, 972 (89%) were male, 92 (8.4%) female, 7 (.6%) transsexual, and 19 gave various other answers or objected to the question. As abysmally male-dominated as these results are, the percentage of women has tripled since the last survey in mid-2009.
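For what it's worth, the reported breakdown can be checked directly; the four counts add up to exactly the 1090 respondents, and the percentages match after rounding:

```python
respondents = 1090
counts = {"male": 972, "female": 92, "transsexual": 7, "other": 19}

for label, n in counts.items():
    print(f"{label}: {n / respondents:.1%}")

# The counts should sum to the total number of respondents.
assert sum(counts.values()) == respondents
```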


2011 Less Wrong Census / Survey

77 Yvain 01 November 2011 06:28PM

The final straw was noticing a comment referring to "the most recent survey I know of" and realizing it was from May 2009. I think it is well past time for another survey, so here is one now.

Click here to take the survey

I've tried to keep the structure of the last survey intact so it will be easy to compare results and see changes over time, but there were a few problems with the last survey that required changes, and a few questions from the last survey that just don't apply as much anymore (how many people have strong feelings on Three Worlds Collide these days?).

Please try to give serious answers that are easy to process by computer (see the introduction). And please let me know as soon as possible if there are any security problems (people other than me who can access the data) or any absolutely awful questions.

I will probably run the survey for about a month unless new people stop responding well before that. Like the last survey, I'll try to calculate some results myself and release the raw data (minus the people who want to keep theirs private) for anyone else who wants to examine it.

Like the last survey, if you take it and post that you took it here, I will upvote you, and I hope other people will upvote you too.

Official Less Wrong Redesign: Call for Suggestions

20 Louie 20 April 2011 05:56PM

In the next month, the administrators of Less Wrong are going to sit down with a professional designer to tweak the site design. But before they do, now is your chance to make suggestions that will guide their redesign efforts.

How can we improve the Less Wrong user experience? What features aren’t working? What features don’t exist? What would you change about the layout, templates, images, navigation, comment nesting, post/comment editing, side-bars, RSS feeds, color schemes, etc? Do you have specific CSS or HTML changes you'd make to improve load time, SEO, or other valuable metrics?

The rules for this thread are:

  • One suggestion per comment.
  • Upvote all comments you’d like to see implemented.


BUT DON’T JUMP TO THE COMMENTS JUST YET: Take a few minutes to collect your thoughts and write down your own ideas before reading others’ suggestions. Less contamination = more unique ideas + better feature coverage!

Thanks for your help!

What I've learned from Less Wrong

79 Louie 20 November 2010 12:47PM

Related to: Goals for which Less Wrong does (and doesn’t) help

I've been compiling a list of the top things I’ve learned from Less Wrong in the past few months. If you’re new here or haven’t been here since the beginning of this blog, perhaps my personal experience from reading the back-log of articles known as the sequences can introduce you to some of the more useful insights you might get from reading and using Less Wrong.

1. Things can be correct - Seriously, I forgot. For the past ten years or so, I politely agreed with the “deeply wise” convention that truth could never really be determined or that it might not really exist or that if it existed anywhere at all, it was only in the consensus of human opinion. I think I went this route because being sloppy here helped me “fit in” better with society. It’s much easier to be egalitarian and respect everyone when you can always say “Well, I suppose that might be right -- you never know!”

2. Beliefs are for controlling anticipation (Not for being interesting) - I think in the past, I looked to believe surprising, interesting things whenever I could get away with the results not mattering too much. Also, in a desire to be exceptional, I naïvely reasoned that believing similar things to other smart people would probably get me the same boring life outcomes that many of them seemed to be getting... so I mostly tried to have extra random beliefs in order to give myself a better shot at being the most amazingly successful and awesome person I could be.


Yes, a blog.

88 Academian 19 November 2010 01:53AM

When I recommend LessWrong to people, their gut reaction is usually "What? You think the best existing philosophical treatise on rationality is a blog?"

Well, yes, at the moment I do.

"But why is it not an ancient philosophical manuscript written by a single Very Special Person with no access to the massive knowledge the human race has accumulated over the last 100 years?"

Besides the obvious? Three reasons: idea selection, critical mass, and helpful standards for collaboration and debate.

Idea selection.

Ancient people came up with some amazing ideas, like how to make fire, tools, and languages. Those ideas have stuck around, and become integrated in our daily lives to the point where they barely seem like knowledge anymore. The great thing is that we don't have to read ancient cave writings to be reminded that fire can keep us warm; we simply haven't forgotten. That's why more people agree that fire can heat your home than on how the universe began.

Classical philosophers like Hume came up with some great ideas, too, especially considering that they had no access to modern scientific knowledge. But you don't have to spend thousands of hours reading through their flawed or now-uninteresting writings to find their few truly inspiring ideas, because their best ideas have become modern scientific knowledge. You don't need to read Hume to know about empiricism, because we simply haven't forgotten it... that's what science is now. You don't have to read Kant to think abstractly about Time; thinking about "timelines" is practically built into our language nowadays.

See, society works like a great sieve that remembers good ideas, and forgets some of the bad ones. Plenty of bad ideas stick around because they're viral (self-propagating for reasons other than helpfulness/verifiability), so you can't always trust an idea just because it's old. But that's how any sieve works: it narrows your search. It keeps the stuff you want, and throws away some of the bad stuff so you don't have to look at it.

LessWrong itself is an update patch for philosophy to fix compatibility issues with science and render it more useful. That it would exist now rather than much earlier is no coincidence: right now, it's the gold at the bottom of the pan, because it's taking the idea filtering process to a whole new level. Here's a rough timeline of how LessWrong happened:


Goals for which Less Wrong does (and doesn't) help

57 AnnaSalamon 18 November 2010 10:37PM

Related to: Self-Improvement or Shiny Distraction: Why Less Wrong is anti-Instrumental Rationality

We’ve had a lot of good criticism of Less Wrong lately (including Patri’s post above, which contains a number of useful points). But to prevent those posts from confusing newcomers, this may be a good time to review what Less Wrong is useful for.

In particular: I had a conversation last Sunday with a fellow, I’ll call him Jim, who was trying to choose a career that would let him “help shape the singularity (or simply the future of humanity) in a positive way”.  He was trying to sort out what was efficient, and he aimed to be careful to have goals and not roles.  

So far, excellent news, right?  A thoughtful, capable person is trying to sort out how, exactly, to have the best impact on humanity’s future.  Whatever your views on the existential risks landscape, it’s clear humanity could use more people like that.

The part that concerned me was that Jim had put a site-blocker on LW (as well as all of his blogs) after reading Patri’s post, which, he said, had “hit him like a load of bricks”.  Jim wanted to get his act together and really help the world, not diddle around reading shiny-fun blog comments.  But his discussion of how to “really help the world” seemed to me to contain a number of errors[1] -- errors enough that, if he cannot sort them out somehow, his total impact won’t be nearly what it could be.  And they were the sort of errors LW could have helped with.  And there was no obvious force in his off-line, focused, productive life of a sort that could similarly help.

So, in case it’s useful to others, a review of what LW is useful for.


References & Resources for LessWrong

90 XiXiDu 10 October 2010 02:54PM

A list of references and resources for LW

Updated: 2011-05-24

  • F = Free
  • E = Easy (adequate for a low educational background)
  • M = Memetic Hazard (controversial ideas or works of fiction)


Do not flinch: most of LessWrong can be read and understood by people with less than a secondary-school education. (And Khan Academy followed by BetterExplained, plus the help of Google and Wikipedia, ought to be enough to let anyone read anything directed at the scientifically literate.) Most of these references aren't prerequisites, and only a small fraction are pertinent to any particular post on LessWrong. Do not be intimidated; just go ahead and start reading the Sequences if all this sounds too long. It's much easier to understand than this list makes it look.

Nevertheless, as it says in the Twelve Virtues of Rationality, scholarship is a virtue, and in particular:

It is especially important to eat math and science which impinges upon rationality: Evolutionary psychology, heuristics and biases, social psychology, probability theory, decision theory.


New Discussion section on LessWrong!

17 Emile 28 September 2010 01:08PM

There is a new discussion section on LessWrong.

According to the (updated) About page:

The Less Wrong discussion area is for topics not yet ready or not suitable for normal top level posts. To post a new discussion, select "Post to: Less Wrong Discussion" from the Create new article page. Comment on discussion posts as you would elsewhere on the site.

Votes on posts are worth ±10 points on the main site and ±1 point in the discussion area. [...] anyone can post to the discussion area.

(There is a link at the top right, under the banner)
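The vote-weight rule quoted above can be summarized in a few lines. This is a minimal sketch of that rule only; the function and section names are my own, purely illustrative, and not drawn from the actual site code:

```python
# Sketch of the quoted rule: a vote moves a poster's karma by 10 points
# for a main-site post and by 1 point for a discussion-area post.
VOTE_WEIGHT = {"main": 10, "discussion": 1}

def karma_delta(section, direction):
    """direction is +1 for an upvote, -1 for a downvote."""
    return direction * VOTE_WEIGHT[section]

print(karma_delta("main", +1))        # an upvote on a main post: +10
print(karma_delta("discussion", -1))  # a downvote on a discussion post: -1
```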
