Describe the ways you can hear/see/feel yourself think.

11 Dmytry 27 January 2012 02:32PM

To avoid constantly generalizing from one example when it comes to human thought, I think we need a survey of the ways people can reflect on their thought process, as a subset of the ways people can think.

Before hearing of Francis Galton's study on imagination, I assumed that everyone thought in a way similar to mine (just better or worse at it), and I would be puzzled why some people would believe in e.g. the strong version of the Sapir-Whorf hypothesis. (I couldn't even understand how they could fail to realize that, to make an even remotely coherent argument in support of this hypothesis, they would have to be thinking outside language - or so I thought.)

There may be very significant variation in how the human thought process works, and in how much of the process is accessible to reflection. In Richard Feynman's What Do You Care What Other People Think? he explores a technique of tying up part of the thought process by mentally counting, and exploring what sorts of thinking interfere with the counting. (I can't right now find a good online quote from those chapters, and I do not have the book at hand to search directly; perhaps someone can help?)

I propose we describe the ways we believe we think, along with relevant self-observations supporting those beliefs - just as a first step, though, to get a very rough idea. Note: lack of ability to reflect on something does not imply lack of function.

Then, based on the responses, we can form some hypotheses and make a proper survey, perhaps combined with some set of cognitive tests. You should perhaps stop reading right now if you don't want to be primed with my own self-description; but given that we have all probably been exposed to a great deal of descriptions of thoughts, the priming is perhaps not a big problem here.

So, for me, these are the distinct modes of thought (ALL coexisting in parallel at any time, except for mental visualization, which I don't use when I am busy using my eyes). The order is unrelated to importance:

1: Auditory based 'internal monologue'. This is like talking to oneself - e.g. I use it for counting. Observations: I can have an internal monologue counting the seconds (as described by Feynman), and while I'm doing the counting I can't put thoughts into words or talk. That's the same in both English and Russian. Observation: right now I am internally 'hearing' the letters I am typing. When I stop typing I can imagine hearing "The Ride of the Valkyries", but I have difficulty doing that while I am typing. I can play back music while I am reading, though. For some time shortly before falling asleep I can sometimes play orchestral music in my head or make myself 'hear' a song with utterly unreal clarity, but not normally.

I can also use mode 1 to serially check something for validity.

2: Mental visualization. I use it a great deal for engineering-related tasks and to a large extent for math (e.g. if I need a function that does something, I'll be thinking in terms of images; if I am thinking of an algorithm, I usually employ imagery; etc.). Observation: I am right now imagining a beach with waves crashing onto a sandy shore. As I stopped typing I could imagine the sounds of the scene. This mode of thought is somewhat less subservient; I may be unable to get some images out of my mind at times. For me, mental imagery interferes to some extent with visual processing of real-world stimuli. I pretty much always have mental imagery when I am reading books (the ones evoking any imagery, at least). The mental imagery is not very stable. I can't visualize a full Rubik's cube in an arbitrary position well enough to solve it in my head. I can visualize a chessboard, but I have never actually tried playing chess blindfolded and I don't think I'd do well at it.

3: Some weird logical-inference type of thought that is neither in a language nor visual; it works by referencing what things are rather than their word labels, and it has no problem attaching whatever properties (like a list of logical dependencies or a feeling of 'likelihood') to statements. That's what I try to think rationally with.

Observations: As I am composing this message my verbal thought is tied up, and I can easily tie up visualization by visualizing the beach scene right now; at the same time I am thinking freely about what points I want to discuss, and this seems not to be done in any particular language or to conform to the structure of language. I regard it as 'what I am really thinking', and I can only reflect on it with about a one-second delay. Literally. I don't know what I am really thinking right now - I know what I was really thinking one second ago. I don't normally even reflect on mode 3. Mode 3 very easily gets distracted into thinking about unrelated things when I try to do some work.

Mode 3 seems to work at high speed. That's the kind of thought I recall running through my head as I stumble, or accidentally toss a cup off a table and catch it (or fail to catch it).

If I invent or reinvent something, I think of it without having a word for it, and it's a chore to choose good descriptions (such as naming the variables and functions when I am working as a programmer). Sometimes I am stuck, unable to recall a word; the reference to what the word means is in my head, but either the word does not pop up, or the word that pops up is in the wrong language, or it is not a good enough fit and I feel a better-fitting word is available. When programming I tend to think in terms of a reference to what a function does, but I often have trouble recalling how I named that function (I guess at what I could have named it).

4: Insights, when the solution to a problem just pops into my head with zero data about the process that arrived at it (even though the solution comes complete with the data to tell that it's a good solution, I can't see what alternatives were considered and rejected). This seems, however, not substantially different from how thought gets from one step to the next; it just takes longer. Memories can pop up in a similar fashion.

5: Well-trained kinetic stuff. Observation: I can juggle while reading text off a page.

For me, mode 3 seems to be the weird one. Modes 1, 2, and 4 are often described in literature, and 5 has got to be quite universal. I can barely reflect on mode 3 at all, and only with about a one-second delay.

Raising awareness of existential risks - perhaps explaining at the "personally stocking canned food" level?

13 Dmytry 24 January 2012 04:17PM

Many articles have been written on the topic of existential risks and the need for greater public awareness. Here's my take: existential risks are perhaps easier to explain with a simple example that does not immediately trigger the 'too scary to be true' reflex - an example one can easily explain to the people one knows and meets regularly, and have them explain to others; an example not involving any controversial predictions such as strong AI.

So, let's suppose there is a 1-in-500-years risk of a lethal flu-like pandemic that kills 1/5 of the population. That's likely to be a gross underestimate, especially in light of recent news. You can talk about that risk with almost anyone for a while without any protest, building up some tolerance to the scariness of the scenario and letting them become accustomed to the notion - perhaps letting them suggest that the real risk may be higher, perhaps 1 in 100 years.

Then you can compare it to personal everyday risks, specific to the audience: homicide rates, car accidents, whatever everyday risks we take countermeasures against.

That's where this risk gets scary, and if you do the comparisons right away, some people just withdraw into a fantasy world where such a pandemic is impossible or much less probable.

A lot of people, the majority perhaps, are very sensitive to risks at this level. It is higher than the risk of death by car wreck, by homicide, and by many types of accident, in just about any developed country. We have seat belts and airbags; cars are engineered for safety at significant expense; and so on. We vaccinate against rare but severe diseases.
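To make the comparison concrete, here is the arithmetic as a minimal sketch. The pandemic figures are the ones supposed above; the car-fatality rate is an illustrative assumption, roughly the right order of magnitude for a developed country.

```python
# Rough arithmetic for the risk comparison above.

p_pandemic_per_year = 1 / 500    # a lethal pandemic once in ~500 years
p_death_if_pandemic = 1 / 5      # it kills 1/5 of the population
p_pandemic_death = p_pandemic_per_year * p_death_if_pandemic

p_car_death_per_year = 1.2e-4    # assumed annual traffic-fatality risk

print(f"annual pandemic death risk: {p_pandemic_death:.1e}")   # 4.0e-04
print(f"annual car death risk:      {p_car_death_per_year:.1e}")
# The pandemic risk comes out a few times higher than an everyday
# risk we already spend real money (seat belts, airbags) to reduce.
```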

So here's the thing. Stocking up on food for two months could conceivably be as effective in the event of an outbreak as seat belts, airbags, and road-safety measures are for the prevention of car-crash fatalities; and there are many more scenarios than a virus outbreak where those cans can improve your safety. Note: viruses normally don't survive for long in the environment, highly lethal viruses burn out the susceptible population, and after a month or two a vaccine could become available. Non-interaction for months is doubly important, as the sick can stay at home and avoid infecting others.

Thus it follows that one should a: stock up on preserved food and other necessities if possible (assuming Western income levels and a fairly low cost of preserved food, which can be further reduced by simply eating the food as it nears its expiration date - other places would need a detailed cost-benefit analysis), and b: try to explain this argument to others, as this, too, would enhance one's survival.
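On the cost-benefit side, here is a rough sketch with openly made-up numbers - the stockpile cost, the assumed risk reduction, and the cost-per-micromort yardstick are all illustrative assumptions, not figures from any study:

```python
# Back-of-envelope cost-effectiveness of a rotated food stockpile.
# Every number below is an assumption for illustration only.

annual_cost = 50.0     # assumed net rotation/waste cost, $/year
baseline_risk = 4e-4   # annual pandemic death risk from the sketch above
risk_reduction = 0.5   # assume the stockpile halves that risk

micromorts_averted = baseline_risk * risk_reduction * 1e6   # per year
cost_per_micromort = annual_cost / micromorts_averted

print(f"micromorts averted per year: {micromorts_averted:.0f}")   # 200
print(f"cost per micromort averted:  ${cost_per_micromort:.2f}")  # $0.25
# Cheap compared to typical safety spending, if (a big if) the
# assumed risk reduction is anywhere near right.
```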

Meanwhile, this argument should raise awareness of global risks, which are not insignificant, and make people more accepting of the notion that something globally bad may happen. The biggest problem with awareness of global risks is that none has happened in the modern world, whereas for individual risks of this magnitude everyone knows someone who died of one not so long ago.

So, what do you think? Are there any other global risks that could serve as a good, easy-to-understand example to help people accept the notion that something globally bad may happen?

edit: typos

Neurological reality of human thought and decision making; implications for rationalism.

3 Dmytry 22 January 2012 02:39PM

The human brain is a massively parallel system. The best such a system can do to accomplish anything efficiently and quickly is to have many small portions of the brain compute and submit their partial answers, then progressively reduce, combine, and cherry-pick them - a process of which we seem to have almost no direct awareness, and which we can only conjecture about indirectly; yet it is the only way thought can possibly work on such slowly clocked (~100-200 Hz), extremely parallel hardware, which uses up a good fraction of the body's nutrient supply.

Yet it is immensely difficult for us to think in terms of parallel processes. We have very little access to how the parallel processing works in our heads, and we have a very limited ability to consider a parallel process in parallel in our heads. We are only aware of some serial-looking self-model within ourselves - a model that we can most easily consider - and we misperceive this model as self, believing ourselves to be self-aware when we are only aware of that model which we have equated with self.

People aren't, for the most part, discussing how to structure the parallel processing for maximum efficiency or rationality and applying that to their lives; it's mostly the serial processes that are being discussed. The necessary, inescapable reality of how the mind works is entirely sealed off from us: we are not directly aware of it, nor are we discussing and sharing how it works. What little access is available, we are not trained to use - the culture trains us to think in terms of a serial, semantic process that would utter things like "I think, therefore I am".

This is in a way depressing to realize.

But at the same time this realization brings hope: there may be a lot of low-hanging fruit left if the approach has not been well considered. I personally have been trying to think of myself as a parallel system with some agreement mechanism for a long while now. It does seem to be a more realistic way to think of oneself, in terms of understanding why you make mistakes and how they can be reduced; but at the same time, as with any complex approach where you 'explain' existing phenomena, there's a risk of being able to 'explain' anything while understanding nothing.

I propose that we should try to overcome the long-standing philosophical model of the mind as a singular serial computing entity and instead approach it from the parallel computing angle. Literature is rife with references to "a part of me wanted", and perhaps we should take this as much more than allegory. Perhaps the way you work, when you decide to do or not do something, is really best thought of as a disagreement of multiple systems with some arbitration mechanism forcing a default action; perhaps training - the drill-response kind of training, not simply informing oneself - could allow us to make much better choices in real time, to arrive at choices rationally rather than via some sort of tug of war between regions that propose different answers, with the one that sends the strongest signal winning control.
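As a toy illustration only - nothing below is claimed neurology, and the subsystems and signal strengths are invented - the 'multiple proposals, strongest signal wins' arbitration just described might be sketched like this:

```python
# Toy 'parallel proposals + arbitration' decision model.
# Subsystems and signal strengths are invented for illustration.

def arbitrate(proposals, default_action):
    """Each proposal is (action, signal_strength). The strongest
    signal wins control; with no proposals, fall back to default."""
    if not proposals:
        return default_action
    return max(proposals, key=lambda p: p[1])[0]

# 'A part of me wanted' to stay in bed:
proposals = [("get up", 0.6),       # habit/training subsystem
             ("stay in bed", 0.8),  # comfort subsystem
             ("check phone", 0.3)]  # novelty subsystem
print(arbitrate(proposals, "do nothing"))  # -> stay in bed
```

On this picture, drill-style training would amount to raising the signal strength of the proposals you endorse on reflection, so that arbitration picks them by default.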

Of course, this needs to be done very cautiously: in complex, hard-to-think-about topics in general, it is easy to slip into fuzzy logic where each logical step contains a small fallacy, leading to rapid divergence, to the point that you can prove or explain anything. A Freudian-style id/ego/superego - a simple explanation for literally everything that predicts nothing - is not what we want.

On accepting an argument if you have limited computational power.

22 Dmytry 11 January 2012 05:07PM

It would seem rational to accept any argument in which one finds no fallacy; but this leads to problems such as Pascal's mugging and other exploits.

I've had a realization of a subconscious triviality: for me to accept an argument as true, it is not enough that I find no error in it. The argument must also be so structured that I would expect to have found an error if it were invalid (or I must first restructure it into such a form myself). That's how mathematical proofs work: they are structured so that finding an error requires little computational power (only knowledge of the rules, applied reliably); in the extreme case, an entirely unintelligent machine can check a proof.
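To illustrate that extreme case, here is a minimal sketch of a mechanical proof checker - a made-up toy system with a single inference rule (modus ponens), not any established proof format. Checking is pure rule-following; no search or insight is needed:

```python
# Toy proof checker: every line must be a stated axiom or follow
# from two earlier lines by modus ponens. Implications are encoded
# as tuples ('->', p, q); other formulas are plain strings.

def check_proof(axioms, proof):
    """proof: list of (formula, justification) pairs, where the
    justification is 'axiom' or ('mp', i, j): line i is a premise p,
    line j is ('->', p, q), and the formula must be q."""
    derived = []
    for formula, just in proof:
        if just == 'axiom':
            if formula not in axioms:
                return False
        else:
            _, i, j = just
            if i >= len(derived) or j >= len(derived):
                return False          # must cite earlier lines
            impl = derived[j]
            if not (isinstance(impl, tuple) and impl[0] == '->'
                    and impl[1] == derived[i] and impl[2] == formula):
                return False          # modus ponens misapplied
        derived.append(formula)
    return True

# From axioms A and A->B, derive B:
axioms = {'A', ('->', 'A', 'B')}
proof = [('A', 'axiom'),
         (('->', 'A', 'B'), 'axiom'),
         ('B', ('mp', 0, 1))]
print(check_proof(axioms, proof))  # True
```

The checker never has to be clever: an invalid step fails a local, cheap test, which is exactly the property asked above of persuasive arguments.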

In light of this, I propose that those who want to make a persuasive argument should try to structure the argument so it would be easy to find flaws in it. This also goes for thought experiments and hypothetical situations. Those seem rather often to be constructed with the entirely opposite goal in mind: to obstruct the verification process, or to prevent the reader from even trying to find flaws.

Something else, tangentially related to arguments: faulty models are the prime cause of decision errors, yet faulty models are the staple of thought experiments; nobody raises an eyebrow, since all models are ultimately imperfect.

However, to accept an argument based on an imperfect model, one must be able to correctly propagate the error and estimate the error in the final conclusion, since a faulty model may be constructed so that it differs only insubstantially from reality, yet in such a way that the difference diverges massively along the chain of reasoning. My example of this is the trolley problems. The faults of the original model are nothing out of the ordinary: simplified assumptions about the real world, perfect information, etc. Normally you can have such faults in a model and still arrive at a reasonably close outcome. Here the end result is the throwing of fat men onto tracks, the cutting up of travellers for organs, and similar behaviours which we intuitively know we could live a fair lot better without. How does that happen? In the real world, strongly asymmetrical relations of the form 'the death of 1 person saves 10 people' are very rare (an emergent property of the complexity of the real world that is lacking in the imaginary worlds of trolley problems), while decision errors are not nearly so rare, so most of the people killed to save others would end up killed in vain.
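A minimal sketch of that divergence, with openly invented numbers: if genuinely asymmetrical 'kill 1 to save 10' situations are rare while the error of wrongly believing you are in one is not, the fraction of sacrifices made in vain follows directly from the base rates.

```python
# Toy base-rate calculation; all the rates are made up for
# illustration, not measured from anything.

p_genuine = 1e-4       # assumed rate of truly asymmetrical situations
p_hit = 0.99           # assume genuine cases are almost always spotted
p_false_alarm = 1e-2   # assumed rate of misjudging an ordinary case

p_judged_genuine = p_genuine * p_hit + (1 - p_genuine) * p_false_alarm
p_in_vain = 1 - (p_genuine * p_hit) / p_judged_genuine

print(f"fraction of 'sacrifices' made in vain: {p_in_vain:.1%}")  # ~99%
# A model error that looks small (a 1% misjudgement rate) dominates
# the conclusion once propagated through the decision.
```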

I don't know how models can be structured so as to facilitate the propagation of the model's error, but it seems necessary for arguments based on models to be convincing.

Newcomb's problem - a one-boxer's introspection.

1 Dmytry 01 January 2012 03:16PM

So, just a small observation about Newcomb's problem:

It does matter to me who the predictor is.

If it is a substantially magical Omega that predicts without fail, I will one-box - gambling that my decision might in fact cause a million to be in that box somehow (via simulation, via time travel, via some handwavy science-fictional quantum-mechanical stuff where the box contents are entangled with me, even via quantum murder (like quantum suicide) - it does not matter). I don't need to change anything about myself: I will win, unless I was wrong about how the predictions are done and Omega failed.

If it is a human psychologist, or equivalent - well, in that case I should make up some rationalization for one-boxing that looks like I truly believe it. I'm not going to do that, because I see the utility of writing a better post here as larger than the utility of winning a future Newcomb's game show that is exceedingly unlikely to happen.

The situation with a fairly accurate human psychologist is drastically different.

The psychologist may put nothing into box B because you did well on a particular subset of a test you took decades ago, or nothing because you did poorly. He can do it based on your relative grades on particular problems back in elementary school. One thing he isn't doing is replicating the non-trivial, complicated computation that you do in your head (assuming it isn't a mere rationalization fitted to arrive at an otherwise preset conclusion). He may have been correct with the previous 100 subjects via a combination of sheer luck and the unwillingness of those 100 participants to actually think about it on the spot, rather than solving it via cached thoughts and memes, requiring a mere lookup of their personal history (they might have complex after-the-fact rationalizations of their decision, but those are irrelevant). You can't make yourself 'win' this in advance by adjusting your Newcomb-paradox-specific strategy; you would have to adjust your normal life. E.g. I may have to change the content of this post to win a future Newcomb's paradox. Even that may not work, if the prediction is based on events that happened to you and which shaped the way you think.
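For concreteness, here is a minimal expected-value sketch of why the predictor's accuracy matters. It assumes the usual $1,000/$1,000,000 payoffs (not stated in this post) and a predictor that is right with probability p regardless of your choice:

```python
# Expected value of each choice against a predictor of accuracy p.
# The payoffs are the conventional ones, assumed for illustration.

def ev_one_box(p):
    return p * 1_000_000                 # box B is full iff prediction was right

def ev_two_box(p):
    return 1_000 + (1 - p) * 1_000_000   # box B is full only if predictor erred

for p in (1.0, 0.9, 0.55, 0.5):
    print(f"p={p:.2f}: one-box ${ev_one_box(p):>9,.0f},"
          f" two-box ${ev_two_box(p):>9,.0f}")
# One-boxing wins only while p > ~0.5005: a flawless Omega and a
# merely 'fairly accurate' psychologist are very different games.
```

Note that this calculation already assumes the prediction is correlated with your actual choice; the point above is that a human psychologist's prediction may instead be correlated with your past, which you can no longer adjust.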

Rationality of sometimes missing the point of the stated question, and of a certain type of defensive reasoning

19 Dmytry 29 December 2011 01:09PM

Imagine that you are being asked a question - a moral question involving an imaginary world. From prior experience, you have learnt that people behave in a certain way: people are, for the most part, applied thinkers, and whatever your answer is, it will become a cached thought that will be applied in the real world should the situation arise. The whole rationale behind thinking about imaginary worlds may be to create cached thoughts.

Your answer probably won't stay segregated in the well-defined imaginary world for any longer than it takes the person who asked the question to change the topic; it is the real-world consequences you should be most concerned about.

Given this, would it not be rational to perhaps miss the point, but answer that sort of question in a real-world way?

To give a specific example, consider this question from The Least Convenient Possible World:

You are a doctor in a small rural hospital. You have ten patients, each of whom is dying for the lack of a separate organ; that is, one person needs a heart transplant, another needs a lung transplant, another needs a kidney transplant, and so on. A traveller walks into the hospital, mentioning how he has no family and no one knows that he's there. All of his organs seem healthy. You realize that by killing this traveller and distributing his organs among your patients, you could save ten lives. Would this be moral or not?

First of all, note that the question is not the abstract "If [you are absolutely certain that] the only way to save 10 innocent people is to kill 1 innocent person, is it moral to kill?". There are a lot of details. We are even told that this one person is a traveller; I am not exactly sure why, but I would think it appeals to kin-selection-related instincts: the traveller has lower utility to the village than a resident.

In light of how people process answers to such detailed questions, and how the answers are incorporated into thought patterns - which might end up used in the real world - is it not in fact most rational not to address that kind of question exactly as specified, but to point out that one of the patients could be taken apart for the good of the other 9? And to point out the poor quality of life and low life expectancy of the surviving patients?

Indeed, as a solution one could gather all the patients and let them discuss how to solve the problem: perhaps one will decide to be terminated, perhaps they will decide to draw straws, perhaps only those with the worst prognosis will draw straws. If they're comatose, one could have a panel of 12 peers make the decision. There could easily be trillions of possible solutions to this not-so-abstract problem, and 'trillions' is not a figure of speech here. Privileging one solution is similar to privileging a hypothesis.

In this example, the utility of any villager can be higher to the doctor than that of the traveller, who will never return; hence the doctor would opt to take the traveller apart for spare parts instead of picking one of the patients based on some cost-benefit metric and sacrificing that patient for the good of the others. The choice we're asked about turns out to be just one of the options, chosen selfishly; it is the deep selfishness of the doctor that makes him realize that killing the traveller may be justified, but not realize the same about one of the patients, for selfishness biased his thought towards exploring one line of reasoning but not the other.

Of course, one can say that I missed the point, and one can employ backward reasoning and tweak the example by stating that those people are aliens, and the traveller is totally histocompatible with each patient while none of the patients are compatible with each other (that's how alien immune systems work: there are some rare mutant aliens whose tissues are not rejected at all by any other).

But to do so would be to completely lose the point of why we should expend mental effort searching for alternative solutions. Yes, it is defensive thinking - but what does it defend us from? In this case, it defends us from making a decision based on incomplete reasoning or a faulty model. All real-world decisions are, too, made in imaginary worlds - in what we imagine the world to be.

Morality requires a sort of 'due process': the good-faith reasoning effort to find the best solution rather than the first solution that the selfish subroutines conveniently present for consideration; to probe the model for faults; to try to think outside the highly abbreviated version of the real world one might initially construct when considering the circumstances.

The imaginary-world situation here is just an example, and so is the answer: an example of the reasoning that should be applied to such situations - reasoning that strives to explore the solution space and to test the model for accuracy.

Something else, tangential to the main point of this article: if I had 10 differently broken cars and 1 working one, I wouldn't even think of taking apart the working one for spare parts; I'd take apart one of the broken ones. The same would apply to, e.g., having 11 children: 1 healthy, 10 in need of replacements of different organs. The option one would think of is to take the one least likely to survive and sacrifice it for the other 9; no one in their right mind would even think of taking apart the healthy one unless there were very compelling prior reasons. This seems to be something we would only consider for a stranger. There may be hidden kin-selection-based cognitive biases affecting our moral reasoning.

edit: I don't know if it is OK to be editing published articles, but I'm a bit of an obsessive-compulsive perfectionist and I plan on improving this for publication on lesswrong (edit: I mean, not lesswrong discussion), so I am going to take the liberty of improving some of the points, but perhaps also removing duplicate argumentation and cutting down the verbosity.
