The human brain is a massively parallel system. The only way such a system can do anything efficiently and quickly is to have many small portions of the brain compute and submit partial answers, then progressively reduce, combine, and cherry-pick them - a process of which we seem to have almost no direct awareness, and which we can only infer indirectly, since it is the only way thought could possibly work on such slow-clocked (~100-200 Hz), extremely parallel hardware that consumes a good fraction of the body's nutrient supply.
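To make the compute-then-reduce picture concrete, here is a minimal Python sketch (purely an illustration under invented names and a toy objective, not a claim about how the brain actually implements this): independent workers each handle a slice of a search space, and a final reduction step cherry-picks among their partial answers - only the reduced result is "seen" at the end.

```python
# A minimal sketch, not a brain model: workers score slices of a
# search space in parallel; a cheap reduction combines the partials.
from concurrent.futures import ProcessPoolExecutor

def score(candidate):
    # Stand-in objective function, invented for the example.
    return -abs(candidate - 42)

def partial_answer(chunk):
    # Each "small portion" finds the best candidate in its own slice.
    return max(chunk, key=score)

if __name__ == "__main__":
    chunks = [range(i, i + 100) for i in range(0, 1000, 100)]
    with ProcessPoolExecutor() as pool:
        partials = pool.map(partial_answer, chunks)  # compute in parallel
    best = max(partials, key=score)  # reduce / cherry-pick the partials
    print(best)  # 42
```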

Yet it is immensely difficult for us to think in terms of parallel processes. We have very little access to how the parallel processing in our heads works, and very limited ability to consider a parallel process in parallel. We are only aware of some serial-looking self-model within ourselves - the model we can most easily consider - and we misperceive this model as the self, believing ourselves to be self-aware when we are only aware of a model that we have equated with the self.

People aren't, for the most part, discussing how to structure this parallel processing for maximum efficiency or rationality, or applying that to their lives. It's mostly the serial processes that get discussed. The necessary, inescapable reality of how the mind works is sealed off from us: we are not directly aware of it, nor are we discussing and sharing how it works. And with what little is available, we are not trained to think in those terms - the culture trains us to think in terms of a serial, semantic process that would utter things like "I think, therefore I am".

This is in a way depressing to realize.

But at the same time this realization brings hope - there may be a lot of low-hanging fruit left if the approach has not been well explored. I personally have been trying to think of myself as a parallel system with some agreement mechanism for a long while now. It does seem a more realistic way to think of oneself, in terms of understanding why you make mistakes and how they can be corrected; but at the same time, as with any complex approach that 'explains' existing phenomena, there is a risk of being able to 'explain' anything while understanding nothing.

I propose that we try to move past the long-standing philosophical model of the mind as a singular, serial computing entity, and instead approach it from the parallel-computing angle. Literature is rife with references to "a part of me wanted", and perhaps we should all take this as much more than allegory. Perhaps the way you work, when you decide to do or not do something, really is best thought of as a disagreement between multiple systems, with some arbitration mechanism forcing a default action. And perhaps training - the drill-and-response kind, not merely informing oneself - could allow us to make much better choices in real time: to arrive at choices rationally, rather than via a tug of war in which regions propose different answers and the one sending the strongest signal wins control.
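As a toy contrast between the two arbitration styles just described - brute signal strength versus rational comparison - consider this minimal sketch (all subsystem proposals, signal strengths, and values are invented for illustration):

```python
# A toy "tug of war" arbitration: the loudest subsystem wins,
# regardless of how good its proposal actually is.
from dataclasses import dataclass

@dataclass
class Proposal:
    action: str
    signal_strength: float  # how loudly the subsystem shouts
    expected_value: float   # what a rational evaluation would score

proposals = [
    Proposal("eat the cake now", signal_strength=0.9, expected_value=-2.0),
    Proposal("stick to the diet", signal_strength=0.4, expected_value=+5.0),
]

# Brute-strength arbitration: strongest signal takes control.
winner_by_strength = max(proposals, key=lambda p: p.signal_strength)

# Rational arbitration: compare expected values instead.
winner_by_value = max(proposals, key=lambda p: p.expected_value)

print(winner_by_strength.action)  # eat the cake now
print(winner_by_value.action)     # stick to the diet
```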

Of course, this needs to be done very cautiously: in complex, hard-to-think-about topics it is easy to slip into fuzzy logic, where each logical step contains a small fallacy, leading to rapid divergence until you can prove or explain anything. A Freudian-style id/ego/superego - a simple explanation for literally everything that predicts nothing - is not what we want.

43 comments

the literature is ripe with references to "a part of me wanted", and perhaps we should all take this as much more than allegory.

See, for instance, Minsky's Society of Mind, and Internal Family Systems.

I propose that we should try to overcome the long standing philosophical model of mind as singular serial computing entity, but instead try approaching it from the parallel computing angle; the literature is ripe with references to "a part of me wanted", and perhaps we should all take this as much more than allegory.

Uh, hasn't connectionism been the dominant view in the philosophy of mind since at least the 1980s?

Dennett argues that human consciousness is essentially a single-threaded program laboriously constructed to run on an essentially parallel machine ("Consciousness Explained"; I'm unfortunately still reading it.)

We can, I believe, access the underlying machine. Processing social cues (cf. autists), playing Quake, or folding proteins is too hard for our single-threaded conscious brains, but clearly within human capacity. From personal experience, many high-level mathematicians (professors etc.) also have highly effective intuitions (of course, the ideas still have to be checked by the conscious mind - even highly effective intuitions produce quite a bit of nonsense, and almost always overlook some details).

Not to mention, we can catch baseballs, run, and select targets from a field of distractors, among many other feats of unconscious data processing and system control. This ought not be surprising... while consciousness might (or might not) be single-threaded software running on parallel hardware, there are of course other functions that the hardware performs.

The first thing that should be noted is that any theory of a massively parallel system simply must be abstracted in order for humans to be able to understand it. Take, for example, anything that tries to describe the behavior of a large population of people: economics, sociology, political science, etc. We always create high-level abstract concepts (describing the behavior of groups of people rather than the fine details of every single individual).

Keeping that in mind, psychology is intentionally high-level and uses abstract concepts, which have an as-yet-unclear correspondence to the lower-level descriptions of the brain we have from neuroscience. This relation is analogous to that between high-level descriptions of large populations and the actions of individuals.

The answer then, in my opinion, is to keep working towards bridging the gap between the lowest level which we have a near-deterministic understanding of (we know how individual neurons work and a little about how they are connected in the brain), and the higher level intuitive descriptions of mind which are descriptive but not predictive. The massive parallelism required by the low level theories is NOT ignored, so far as I know, by neuroscientists and neuropsychologists, which makes me a bit confused as to why you think further emphasizing the role of parallelism is necessary.

Unless of course, you are criticizing the intuitive, "folk psychology" understanding of the mind. That, however, is arguably instilled in us evolutionarily (Dennett has argued for this).

The problem is that folk psychology creeps into everything.

For example, consider abstracting a parallel system as a serial one - the problem is that this is not always possible, even though it feels very much like it must be. Consider two people trying to turn a steering wheel in different directions: one wants to turn left, the other wants to turn right, nobody wants the car to crash into the light pole in the middle - and the car crashes into the light pole. It seems to me that our decision making is much too often similar to this.

Yet we model such incorrect decision making as some sort of evaluation and comparison of options using fallacious but nonetheless sensible-ish, logical-ish rules, rather than as a contest of inclination and brute signal strength.

The two people pulling the steering wheel in opposite directions end up crashing into the obstacle in the middle not because they form an entity that, for some fallacious reason, believes the middle path between two extremes combines the best of both, or is a reasonable compromise. Nobody in that car thinks the car is best off driving into a light pole in the middle. Yet we treat it as a middle-ground fallacy and write long explanations of why the middle-ground fallacy is a fallacy. And that doesn't really work, because both sides knew all along that driving down the middle was unsuitable - that is exactly why they were pulling so hard away from the middle, sadly in opposite directions.

Now consider a single individual in that position. Given limited time, and the brain being a distributed system, there will be some disagreement between subsystems, and parts will come up with partial solutions to partial problems that don't work together.
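A trivial sketch of that failure mode (the numbers are invented for illustration): if the arbitration between the two subsystems is a naive average of their outputs, the system "chooses" the one option every subsystem rejected.

```python
# Toy version of the steering-wheel example: two subsystems each
# propose a workable action, but a naive "compromise" arbitration
# produces the one outcome that neither subsystem wanted.
proposals = {
    "swerve_left":  -30.0,  # steering angle proposed by one subsystem
    "swerve_right": +30.0,  # steering angle proposed by the other
}

# Averaging the signals looks like a reasonable serial abstraction...
steering = sum(proposals.values()) / len(proposals)

print(steering)  # 0.0 - straight into the light pole in the middle
```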

The answer then, in my opinion, is to keep working towards bridging the gap between the lowest level which we have a near-deterministic understanding of (we know how individual neurons work and a little about how they are connected in the brain), and the higher level intuitive descriptions of mind which are descriptive but not predictive.

This would be awesome! School would be so much better if psychology could be understood from a neuroscience point of view...and vice versa, I guess.

If that is to happen, the bridge needs to be built from the higher-level, intuitive end downwards. Neuroscience is already building up from the bottom, so the unexplored and key parts are more likely in the upper-middle. If they were in the lower-middle, we'd probably feel closer to a solution by now.

Good point. Although I'm not sure exactly how you'd go about building downwards from intuitions. Has that ever been done before?

The main hitch to that type of progress is that there is too much infighting over which model is right in neuroscience and which model is right in psychology; nobody has a sturdy enough raft to set sail into the unknown between them. How I would go about it: risk being wrong, start off from the most likely track in psychology, and invent the factors that, if followed through, would result in a currently accepted model of psychology - like flavor for quarks. It would necessarily be mostly theoretical until the answers it gives become useful. Then, repeat as necessary.

Shmi:

Do you have any interest in working on something like that?

My automatic answer is YES!!!!!, but I don't exactly have relevant schooling.

Also, from what I've seen I tend to clash slightly with psychology majors...I had a roommate in 4th year psychology and we used to have hours-long debates where she would eventually accuse me of being a reductionist (which to me is a good thing).

Shmi:

Why necessarily psychology? You can go the biology route, then take a grad neuroscience program, though I suppose this is nearly impossible to pull off while working as a nurse full time.

Yeah... I may end up doing it, or something like that. My mother and father are making bets with each other on me ending up back in school for a significant chunk of my life.

EDIT: On rereading, I don't think my critique was appropriate. I still think this post belongs in Discussion rather than Main, because it's a loosely developed idea.

Human brain is a massively parallel system.

I wouldn't normally bother pointing out typos, but this is in the first sentence. You mean "The human brain is a massively parallel system."

the literature is ripe with references to "a part of me wanted", and perhaps we should all take this as much more than allegory.

It's awkward that every comment so far is about this sentence, but don't you mean "literature" not "the literature"? Are you talking about what people say, or what philosophers say? "A part of me wanted" doesn't sound like a philosophical comment to me.

English is obviously not his first language, and the article is perfectly comprehensible despite the grammatical flubs.

Yeah, I know. Those two just particularly jumped out at me.

So you caught the difficulty with definite articles, but missed the "ripe with" for "rife with" in the same sentence?

Doesn't anyone think that it is very rude to comment in someone else's language unless it is not understandable - just plain RUDE? If someone wants help with language they can ask. Language is a tool not a weapon.

Language is a tool not a weapon.

Correcting someone's grammar and diction = sharpening their tool for them.

Or sharpening their weapon ;) Editing the article now.

Doesn't anyone think that it is very rude to comment in someone else's language unless it is not understandable - just plain RUDE?

Sometimes. For example I'd probably consider it slightly rude to reply to this with "s/comment in/comment on/". That said, it is a borderline case, since 'comment in someone else's language' actually means something (unintended), and so I needed to read your comment twice and then look up the context before I could guess what you actually meant to say.

In the case of top-level posts in Main, corrections are entirely appropriate. A certain standard is expected of Main-level posts. If that standard is not met, then the alternative to polite correction is a silent downvote - many people prefer the correction.

If someone wants help with language they can ask.

If someone is particularly sensitive to correction, they probably shouldn't make top-level posts - or, preferably, they can ask someone to proofread for them before they post. This is actually what many people do anyway, even if they have no language difficulties whatsoever. In fact, there are people who have volunteered to proofread drafts for others as their way of contributing.

Language is a tool not a weapon.

Typo and grammar corrections don't hurt as much as having your arm hacked off by a claymore either. I certainly don't consider Solvent's comment an attack.

I don't think this is rude at all. One of the things I like about Less Wrong, and which seems characteristic of it, is that the writing in posts - style and form as well as more basic stuff - is often constructively discussed with a view to improving the author's writing.

I read the post, and didn't have much to say about the content. I felt a little bit bad about just correcting the grammar without having anything of substance to say, but it was in Main so I did so anyway. I tried to be polite.

OK, I overreacted. Several others have said that it is acceptable in Main - so be it. I guess it does not bother others as much as it bothers me, and I won't comment on corrections in future.

I might not go as far as "very rude", but I basically agree. I don't find corrections like these useful, and I doubt I would even if I was the one writing in a second language and being corrected, except when my errors were genuinely obscuring my meaning. One serious comment about what I am saying is worth any number of such trifles.

I guess that Dmytry's native language is Russian, which does not have a definite article, and so it is unsurprising if he sometimes uses "the" inaccurately. But having sussed that immediately on seeing his name and the first three words, it's of no further importance. I'm not here to give or to receive language lessons.

Comments on the language mistakes can be helpful for the author, but probably best sent as private messages rather than public comments, since they don't contribute much to the discussion otherwise.

Though I'm not sure if there's any UI shortcut for sending a private message related to a specific article, so you'll have to go to the user's page and phrase the message specifically to refer to the article, and that's a lot more work than just writing a public comment here...

Also, if significant numbers of people adopt this strategy, the result is I get lots of PMs telling me I used "who" instead of "whom", which seems a waste of energy.

Also, if significant numbers of people adopt this strategy, the result is I get lots of PMs telling me I used "who" instead of "whom", which seems a waste of energy.

I imagine they would stop once you correct the mistake. The inconvenience to you, beyond the work of fixing your mistake, seems to be seeing a few more messages the next time you check your inbox. The benefit is that you improve the reception your posts get (by virtue of a slightly improved reading experience, with no jarring errors to ignore).

I meant by comparison to the strategy of leaving the correction in a comment rather than a PM, which has the same post-improving benefits without the multiple-inbox-entries inconvenience.

Admittedly, it has the added cost of creating a comment that lots of other people expend marginal time reading (which as you say also translates to some cost to me, even supposing I'm indifferent to the inconvenience of others, in terms of the reception my posts get).

OTOH, the PM strategy has the added cost-to-others of having N times as many people take the time to write such a comment, being unaware of their predecessors. Admittedly, the cost of that to me is lower. Actually, it might even be a benefit to me, since once I correct the error they pointed out it's quite likely they'll think better of me than if I hadn't made the error to begin with.

I meant by comparison to the strategy of leaving the correction in a comment rather than a PM, which has the same post-improving benefits without the multiple-inbox-entries inconvenience.

You're right of course. The cost to other people is far higher if they all message you. All for the slight benefit to you of not being publicly criticized.

Which may not even be a benefit. Being criticized publicly, and publicly responding to that criticism in a socially admired fashion, can be a net status gain.

Milking that kind of thing for status takes finesse but it is possible. Also useful for enhancing likability for those who already have high status.

(nods) Of course, for high-status individuals who are good at this particular maneuver it's also an opportunity to reinforce the public-criticism social norm, which increases their comparative advantage within the community.

[anonymous]:

Though I'm not sure if there's any UI shortcut for sending a private message related to a specific article, so you'll have to go to the user's page and phrase the message specifically to refer to the article, and that's a lot more work than just writing a public comment here...

Select, Ctrl-C, click, click, type subject, paste, type correction.

Not much work at all if you are familiar with the interface, somewhat more so the first time you do it and are getting used to the interface.

[This comment is no longer endorsed by its author]
[anonymous]:

I propose that we should try to overcome the long standing philosophical model of mind as singular serial computing entity, but instead try approaching it from the parallel computing angle; the literature is ripe with references to "a part of me wanted", and perhaps we should all take this as much more than allegory.

What philosophers or texts in particular do you have in mind here? I see how the question might be a bit beside the point, but I'm having a hard time coming up with examples of a 'single serial computing entity' as a view of the mind.

IIRC, Dante in the Divine Comedy pointed out inattentional blindness as evidence that the mind/soul is not reducible to smaller parts.

Well, pretty much any description of how anyone thinks. E.g. everywhere on LessWrong itself you can see the idea that an entity compares several options to pick the best (any discussion of decisions here). And you don't often see the view of one part of the brain wanting to do one thing, another part wanting to do something else, and a tug of war where the result is not even picked via some reasoning fallacy - one view simply overpowers the other.

This also bears on the notion that parallel processes must be abstracted. You can't abstract the two people in a car pulling the wheel in different directions into some kind of sensible serial process. It makes zero sense as a serial process.

[anonymous]:

And you don't often see the view of one part of the brain wanting to do one thing, another part wanting to do something else, and a tug of war where the result is not even picked via some reasoning fallacy - one view simply overpowers the other.

It seems to me that LW discussions tend to focus on the model you ascribe to them because it's a model of rational decision making. What you describe - one part of the brain wanting one thing and another part wanting another - doesn't sound like a rational decision. I mean especially the phrase "simply overpowers". No doubt something like that actually happens, but why should it be relevant to the way we make rational decisions?

But minimizing such tugs of war is important for being able to do any rational decision making in real life - where some part of the brain gets priority because it is that part's job to handle situations like this, and it switches off the rational process.

It's not that we don't know how to rationally approach choosing the best option; it's that when electing a president, or drawing conclusions about global warming, or doing anything else that matters, some group-allegiance process kicks in and rationality goes out like a candle in a hurricane-strength wind.

edit: Also, haven't we all at some point done something against our own best interest and society's best interest, despite knowing perfectly well how to reason correctly and avoid inflicting this self-harm? The first task of a rationalist who knows he is running on very glitchy hardware should be to try to understand how his hardware is glitching.

[anonymous]:

That's a fair point, but I guess I don't see how what you're describing is therefore a new model of thinking. If thinking is serial, while irrational, impulsive action is non-serial, then the non-serial model of psychology doesn't come into conflict with the serial model of thinking. They could both be true.

Also, I sometimes feel like we should taboo the whole computer metaphor of thought. Hardware, software, glitching, etc.

I don't believe that rational thinking can be serial; that is my point. Consider an unobvious solution to a problem - a good, effective solution. Arriving at it requires a search through a vast solution space, a search the human brain can only perform in parallel, by breaking apart the space and combining the results. When this search ignores an important part of the solution space, you may end up with a solution that is grossly suboptimal or goes against what you believe your goals are. As for serial, deliberate thought: it is usually impossible to enumerate all possible solutions and compare them in any reasonable time. One can compare some of the solutions deliberately, but those are picked by a parallel process that one can't introspect.
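A minimal sketch of that dependence (the functions, regions, and numbers are all invented for illustration): the deliberate, serial step can only compare candidates that a parallel search has already surfaced, so if the parallel stage skips part of the space, the "rational" comparison is confidently suboptimal.

```python
# A parallel search proposes a shortlist; "deliberate thought" only
# ever compares the shortlist. The slice containing the true optimum
# (x = 500) is never searched, so the final choice is still bad.
from concurrent.futures import ThreadPoolExecutor

def value(x):
    # Stand-in objective; the true optimum is at x = 500.
    return -abs(x - 500)

def search_region(region):
    # One parallel subsystem searches one slice of the solution space.
    return max(region, key=value)

regions = [range(0, 200), range(200, 400), range(600, 1000)]
# Note: the slice 400..600 is missing from the regions above.

with ThreadPoolExecutor() as pool:
    shortlist = list(pool.map(search_region, regions))

# The serial, deliberate step compares only the shortlist.
best = max(shortlist, key=value)
print(best, value(best))  # 600 -100: suboptimal, and we can't see why
```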

I did not mean to use words from CS in a metaphorical sense, by the way. It's just that computing technology is the field that has good words for those concepts.