Neurological reality of human thought and decision making; implications for rationalism.
The human brain is a massively parallel system. The best such a system can do to accomplish anything quickly and efficiently is to have many small portions of the brain compute and submit partial answers, then progressively reduce, combine, and cherry-pick among them. We seem to have almost no direct awareness of this process and can only conjecture about it indirectly, yet it is the only way thought can possibly work on such slowly clocked (~100-200 Hz), extremely parallel hardware, which consumes a good fraction of the body's nutrient supply.
Yet it is immensely difficult for us to think in terms of parallel processes. We have very little access to how the parallel processing in our heads works, and very limited ability to consider a parallel process in parallel. We are only aware of a serial-looking self-model within ourselves - a model we can most easily consider - and we misperceive this model as the self, believing ourselves to be self-aware when we are only aware of the model we have equated with the self.
For the most part, people aren't discussing how to structure this parallel processing for maximum efficiency or rationality, or applying that to their lives. It is mostly the serial processes that get discussed. The necessary, inescapable reality of how the mind works is largely sealed off from us: we are not directly aware of it, nor do we discuss and share how it works. What little is available, we are not trained to think in terms of - the culture trains us to think in terms of a serial, semantic process that would utter things like "I think, therefore I am".
This is, in a way, depressing to realize.
But at the same time this realization brings hope: there may be a lot of low-hanging fruit left if the approach has not been well considered. I personally have been trying to think of myself as a parallel system with some agreement mechanism for a long while now. It does seem a more realistic way to think of oneself, in terms of understanding why you make mistakes and how they can be corrected. But, as with any complex approach where you 'explain' existing phenomena, there is a risk of being able to 'explain' anything while understanding nothing.
I propose that we try to move past the long-standing philosophical model of the mind as a singular, serial computing entity, and instead approach it from the parallel-computing angle. Literature is rife with references to "a part of me wanted", and perhaps we should take this as much more than allegory. Perhaps the way you work when you decide to do or not do something is really best thought of as a disagreement among multiple systems, with some arbitration mechanism forcing a default action. Perhaps training - the drill-and-response kind, not simply informing oneself - could let us make much better choices in real time: to arrive at choices rationally, rather than via a tug of war between regions that propose different answers, with the one sending the strongest signal winning control.
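The multiple-systems-plus-arbitration picture can be made concrete with a toy sketch. This is purely illustrative (not a neuroscience model, and every name in it is made up): several "proposer" subsystems each score the candidate actions, and a simple arbiter sums the signals and lets the strongest total win.

```python
# Toy model: parallel proposers submit partial answers; an arbiter
# combines them and picks the action with the strongest total signal.
from concurrent.futures import ThreadPoolExecutor

def proposer(weights):
    """Build a subsystem that scores candidate actions from its own perspective."""
    def score(actions):
        return {a: weights.get(a, 0.0) for a in actions}
    return score

def arbitrate(proposals):
    """Sum the partial answers; the strongest combined signal wins control."""
    totals = {}
    for scores in proposals:
        for action, s in scores.items():
            totals[action] = totals.get(action, 0.0) + s
    return max(totals, key=totals.get)

actions = ["act", "wait"]
proposers = [
    proposer({"act": 0.9, "wait": 0.1}),  # e.g. an impulsive subsystem
    proposer({"act": 0.1, "wait": 0.8}),  # e.g. a cautious subsystem
    proposer({"act": 0.2, "wait": 0.5}),
]

# Run the proposers in parallel and arbitrate over their answers.
with ThreadPoolExecutor() as pool:
    proposals = list(pool.map(lambda p: p(actions), proposers))

print(arbitrate(proposals))  # "wait": 0.1 + 0.8 + 0.5 beats 0.9 + 0.1 + 0.2
```

Note that the "tug of war" the text describes is exactly this `max` over summed signals; a rational arbitration scheme would presumably do something more sophisticated than letting raw signal strength decide.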
Of course this needs to be done very cautiously. In complex, hard-to-reason-about topics it is easy to slip into fuzzy logic, where each step contains a small fallacy and the reasoning rapidly diverges until you can prove or explain anything. The Freudian-style id/ego/superego - a simple explanation for literally everything that predicts nothing - is not what we want.
2011 Survey Results
A big thank you to the 1090 people who took the second Less Wrong Census/Survey.
Does this mean there are 1090 people who post on Less Wrong? Not necessarily. 165 people said they had zero karma, and 406 people skipped the karma question - I assume a good number of the skippers were people with zero karma or without accounts. So we can only prove that 519 people post on Less Wrong. Which is still a lot of people.
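The arithmetic behind that lower bound is worth spelling out; a minimal sketch, using only the figures quoted in the post:

```python
# Of 1090 respondents, 165 reported zero karma and 406 skipped the
# karma question. Treating skippers pessimistically (as possible
# non-posters), only the remainder are provably posters.
respondents = 1090
zero_karma = 165
skipped = 406

known_posters = respondents - zero_karma - skipped
print(known_posters)  # 519
```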
I apologize for failing to ask who had or did not have an LW account. Because there are a number of these failures, I'm putting them all in a comment to this post so they don't clutter the survey results. Please talk about changes you want for next year's survey there.
Of our 1090 respondents, 972 (89%) were male, 92 (8.4%) female, 7 (0.6%) transsexual, and 19 gave various other answers or objected to the question. As abysmally male-dominated as these results are, the percentage of women has tripled since the last survey in mid-2009.
2011 Less Wrong Census / Survey
The final straw was noticing a comment referring to "the most recent survey I know of" and realizing it was from May 2009. I think it is well past time for another survey, so here is one now.
I've tried to keep the structure of the last survey intact so it will be easy to compare results and see changes over time, but there were a few problems with the last survey that required changes, and a few questions from the last survey that just didn't apply as much anymore (how many people have strong feelings on Three Worlds Collide these days?).
Please try to give serious answers that are easy to process by computer (see the introduction). And please let me know as soon as possible if there are any security problems (people other than me who can access the data) or any absolutely awful questions.
I will probably run the survey for about a month unless new people stop responding well before that. Like the last survey, I'll try to calculate some results myself and release the raw data (minus the people who want to keep theirs private) for anyone else who wants to examine it.
Like the last survey, if you take it and post that you took it here, I will upvote you, and I hope other people will upvote you too.
Official Less Wrong Redesign: Call for Suggestions
In the next month, the administrators of Less Wrong are going to sit down with a professional designer to tweak the site design. But before they do, now is your chance to make suggestions that will guide their redesign efforts.
How can we improve the Less Wrong user experience? What features aren’t working? What features don’t exist? What would you change about the layout, templates, images, navigation, comment nesting, post/comment editing, side-bars, RSS feeds, color schemes, etc? Do you have specific CSS or HTML changes you'd make to improve load time, SEO, or other valuable metrics?
The rules for this thread are:
- One suggestion per comment.
- Upvote all comments you’d like to see implemented.
BUT DON’T JUMP TO THE COMMENTS JUST YET: Take a few minutes to collect your thoughts and write down your own ideas before reading others’ suggestions. Less contamination = more unique ideas + better feature coverage!
Thanks for your help!
What I've learned from Less Wrong
Related to: Goals for which Less Wrong does (and doesn’t) help
I've been compiling a list of the top things I’ve learned from Less Wrong in the past few months. If you’re new here or haven’t been here since the beginning of this blog, perhaps my personal experience from reading the back-log of articles known as the sequences can introduce you to some of the more useful insights you might get from reading and using Less Wrong.
1. Things can be correct - Seriously, I forgot. For the past ten years or so, I politely agreed with the “deeply wise” convention that truth could never really be determined or that it might not really exist or that if it existed anywhere at all, it was only in the consensus of human opinion. I think I went this route because being sloppy here helped me “fit in” better with society. It’s much easier to be egalitarian and respect everyone when you can always say “Well, I suppose that might be right -- you never know!”
2. Beliefs are for controlling anticipation (Not for being interesting) - I think in the past, I looked to believe surprising, interesting things whenever I could get away with the results not mattering too much. Also, in a desire to be exceptional, I naïvely reasoned that believing similar things to other smart people would probably get me the same boring life outcomes that many of them seemed to be getting... so I mostly tried to have extra random beliefs in order to give myself a better shot at being the most amazingly successful and awesome person I could be.
Yes, a blog.
When I recommend LessWrong to people, their gut reaction is usually "What? You think the best existing philosophical treatise on rationality is a blog?"
Well, yes, at the moment I do.
"But why is it not an ancient philosophical manuscript written by a single Very Special Person with no access to the massive knowledge the human race has accumulated over the last 100 years?"
Besides the obvious? Three reasons: idea selection, critical mass, and helpful standards for collaboration and debate.
Idea selection.
Ancient people came up with some amazing ideas, like how to make fire, tools, and languages. Those ideas have stuck around, and become integrated into our daily lives to the point where they barely seem like knowledge anymore. The great thing is that we don't have to read ancient cave writings to be reminded that fire can keep us warm; we simply haven't forgotten. That's why more people agree that fire can heat your home than agree on how the universe began.
Classical philosophers like Hume came up with some great ideas, too, especially considering that they had no access to modern scientific knowledge. But you don't have to spend thousands of hours reading through their flawed or now-uninteresting writings to find their few truly inspiring ideas, because their best ideas have become modern scientific knowledge. You don't need to read Hume to know about empiricism, because we simply haven't forgotten it... that's what science is now. You don't have to read Kant to think abstractly about Time; thinking about "timelines" is practically built into our language nowadays.
See, society works like a great sieve that remembers good ideas, and forgets some of the bad ones. Plenty of bad ideas stick around because they're viral (self-propagating for reasons other than helpfulness/verifiability), so you can't always trust an idea just because it's old. But that's how any sieve works: it narrows your search. It keeps the stuff you want, and throws away some of the bad stuff so you don't have to look at it.
LessWrong itself is an update patch for philosophy to fix compatibility issues with science and render it more useful. That it would exist now rather than much earlier is no coincidence: right now, it's the gold at the bottom of the pan, because it's taking the idea filtering process to a whole new level. Here's a rough timeline of how LessWrong happened:
Goals for which Less Wrong does (and doesn't) help
Related to: Self-Improvement or Shiny Distraction: Why Less Wrong is anti-Instrumental Rationality
We’ve had a lot of good criticism of Less Wrong lately (including Patri’s post above, which contains a number of useful points). But to prevent those posts from confusing newcomers, this may be a good time to review what Less Wrong is useful for.
In particular: I had a conversation last Sunday with a fellow, I’ll call him Jim, who was trying to choose a career that would let him “help shape the singularity (or simply the future of humanity) in a positive way”. He was trying to sort out what was efficient, and he aimed to be careful to have goals and not roles.
So far, excellent news, right? A thoughtful, capable person is trying to sort out how, exactly, to have the best impact on humanity’s future. Whatever your views on the existential risks landscape, it’s clear humanity could use more people like that.
The part that concerned me was that Jim had put a site-blocker on LW (as well as all of his blogs) after reading Patri’s post, which, he said, had “hit him like a load of bricks”. Jim wanted to get his act together and really help the world, not diddle around reading shiny-fun blog comments. But his discussion of how to “really help the world” seemed to me to contain a number of errors[1] -- errors enough that, if he cannot sort them out somehow, his total impact won’t be nearly what it could be. And they were the sort of errors LW could have helped with. And there was no obvious force in his off-line, focused, productive life of a sort that could similarly help.
So, in case it’s useful to others, a review of what LW is useful for.
References & Resources for LessWrong
A list of references and resources for LW
Updated: 2011-05-24
- F = Free
- E = Easy (adequate for a low educational background)
- M = Memetic Hazard (controversial ideas or works of fiction)
Summary
Do not flinch: most of LessWrong can be read and understood by people with less than a secondary-school education. (And Khan Academy followed by BetterExplained, plus the help of Google and Wikipedia, ought to be enough to let anyone read anything directed at the scientifically literate.) Most of these references aren't prerequisites, and only a small fraction are pertinent to any particular post on LessWrong. Do not be intimidated; if all this sounds too long, just go ahead and start reading the Sequences. They are much easier to understand than this list makes them look.
Nevertheless, as it says in the Twelve Virtues of Rationality, scholarship is a virtue, and in particular:
It is especially important to eat math and science which impinges upon rationality: Evolutionary psychology, heuristics and biases, social psychology, probability theory, decision theory.
New Discussion section on LessWrong!
There is a new discussion section on LessWrong.
According to the (updated) About page:
The Less Wrong discussion area is for topics not yet ready or not suitable for normal top level posts. To post a new discussion, select "Post to: Less Wrong Discussion" from the Create new article page. Comment on discussion posts as you would elsewhere on the site.
Votes on posts are worth ±10 points on the main site and ±1 point in the discussion area. [...] anyone can post to the discussion area.
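The scoring rule quoted above is simple enough to state as code. A minimal sketch (the section names and weights come from the quoted About page; the function itself is illustrative, not LessWrong's actual implementation):

```python
# Vote weights per the About page: +/-10 on the main site, +/-1 in discussion.
VOTE_WEIGHTS = {"main": 10, "discussion": 1}

def karma_delta(section, direction):
    """Karma change from one vote; direction is +1 (upvote) or -1 (downvote)."""
    return direction * VOTE_WEIGHTS[section]

print(karma_delta("main", +1))        # 10
print(karma_delta("discussion", -1))  # -1
```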
(There is a link at the top right, under the banner)
Less Wrong Should Confront Wrongness Wherever it Appears
In a recent discussion about a controversial topic which I will not name here, Vladimir_M noticed something extremely important.
Because the necessary information is difficult to obtain in a clear and convincing form, and it's drowned in a vast sea of nonsense that's produced on this subject by just about every source of information in the modern society.
I have separated it from its original context, because this issue applies to many important topics. There are many topics where the information that most people receive is confused, wrong, or biased, and where nonsense drowns out truth and clarity. Wherever this occurs, it is very bad and very important to notice.
There are many reasons why it happens, many of which have been explicitly studied and discussed as topics here. The norms and design of the site are engineered to promote clarity and correctness. Strategies for reasoning correctly are frequently recurring topics, and newcomers are encouraged to read a large back-catalog of articles about how to avoid common errors in thinking (the sequences). A high standard of discourse is enforced through voting, which also provides rapid feedback to help everyone improve their writing. Since Well-Kept Gardens Die by Pacifism, when the occasional nutjob stops by, they're downvoted into invisibility and driven away - and while you wouldn't notice from the comment archives, this has happened lots of times.
Less Wrong has the highest accuracy and signal to noise ratio of any blog I've seen, other than those that limit themselves to narrow specialties. In fact, I doubt anyone here knows a better one. The difference is very large. While we are certainly not perfect, errors on Less Wrong are rarer and much more likely to be spotted and corrected than on any similar site, so a community consensus here is a very strong signal of clarity and correctness.
As a result, Less Wrong is well positioned to find and correct errors in the public discourse. Less Wrong should confront wrongness wherever it appears. Wherever large amounts of utility depend on clear and accurate information, that information is not already prevalent, and we have the ability to produce or properly filter it, we ought to do so - even if it's incompatible with status signaling, or off topic, or otherwise incompatible with non-vital social norms.
So I propose the following as a community norm. If a topic is important, the public discourse on it is wrong for any reason, it hasn't appeared on Less Wrong before, and a discussion on Less Wrong would probably bring clarity, then it is automatically considered on-topic. By important, I mean topics where inaccurate or confused beliefs would cost lots of utility for readers or for humanity. Approaching a topic from a new and substantially different angle doesn't count as a duplicate.
EDIT: This thread is producing a lot of discussion about what Less Wrong's norms should be. I have proposed a procedure for gathering and filtering these discussions into a top-level post, which would have the effect of encouraging people to enforce them through voting and comments.