
Comment author: MugaSofer 10 January 2013 11:55:45AM -2 points

Do you reason similarly for humans?

Comment author: Khaled 05 February 2013 12:19:14AM 0 points

I can't think of another way to reason - does our brain dictate our goals, or does it receive a goal from somewhere and make an effort to execute it accurately? I'd go with the first option, which to me means that whatever our brain (code) is built to do is our goal.

The complication in the case of humans might be that we have more than one competing goal. It is as if this robot has a multi-tasking operating system, with one process trying to kill blue objects and another trying to build a pyramid out of plastic bottles. Normally they can co-exist, with some switching between processes, or with one process simply "not caring" about the activity of the moment.

It gets ugly when the robot finds a few blue bottles. Then the robot becomes "irrational", with one process destroying what the other is trying to build. This is simply what happens when you are on a healthy diet and see a slice of chocolate cake - your processes are doing their jobs, but they are competing for resources: who gets to move your arms?

Let's then imagine that we have in our brains a controlling (operating) system that gets to decide which process to kill when they are in conflict. Will this operating system face right and wrong decisions? Or will whatever it does be the right thing according to its code - or else it wouldn't have done it?

Comment author: NickiH 13 February 2012 10:30:24PM 0 points

This is interesting. But I'm not sure I followed it properly. Is there a post about Type 1/Type 2 mental processes? It might be good to link to it for those of us who need a refresher.

Comment author: Khaled 13 February 2012 11:18:36PM *  1 point

I like Kahneman's lecture here http://www.youtube.com/watch?v=dddFfRaBPqg as it sums up the distinction nicely (though it's a bit long). Edit: not sure if a post on LW exists, though.

Type 2 as an aggregation of Type 1 processes

6 Khaled 12 February 2012 03:07PM

This post assumes basic knowledge of Type 1/Type 2 (System 1/System 2) categorization of mental processes.

Background (safe to skip)

After my initial surprise at the topic of heuristics and biases (a reaction that lasted perhaps a few months), and after a few more readings in neuropsychology, I started revisiting that first reaction in more detail. Should it really be surprising to learn that humans are not rational? Anyone with a basic familiarity with humans can easily see that we act irrationally in many situations – snap decisions, impulses, etc. – so what was the source of my surprise?

My best guess (knowing the limits of introspection) was that my surprise was not a result of discovering that we're irrational, but of discovering that a scientific approach existed that aimed to find out more about those irrationalities, and that results demonstrating predictable irrationality were appearing; such results might eventually unify different biases under the same theory or source.

The notion of Type 1 and Type 2 thinking (or System 1 and System 2) is for me a theory that has the power to unify most of the biases and perhaps predict others. Kahneman's Thinking, Fast and Slow adopts such an approach, attempting to explain many biases in terms of the interplay between the two types of thought.

Now, this connected with a question I had back in college when I first learned about Artificial Neural Networks (I was lucky to choose this as a topic to research and lecture on to my colleagues): "if this is how the brain works, how does logical/rational thought emerge?"

To my understanding, Connectionism and the self-organizing patterning system that is the brain would naturally produce Type 1 thought as a direct consequence. The question that persisted for me is how Type 2 thought can emerge from this hardware. Jonah Lehrer's The Decisive Moment suggests that different brain areas are (more) associated with each type of thought, but essentially (until proven otherwise) I assume that they all rely in essence on a patterning process, a connectionist model.

Migration of Skills

We know that many skills start in Type 2 and migrate to Type 1 as we get more "experienced" in them. When we first learn to drive, we need to consciously think of every move and the sequence of steps to perform. We consciously engage in executing a known sequence before changing lanes (for example): look at the side mirror, look to the side to cover the blind spot, decrease speed, etc.

As we get more driving experience, we stop consciously processing those steps; they become automatic, and we can even engage in other conscious processes while driving (e.g. having a conversation, thinking about a meeting we have later, etc.).

I believe this is key to understanding the relation between the two types of thought, since it provides a kind of interface between them: a way to compare the same process executed by both systems.

Simple Type 2 operations

So, having no experimental apparatus at hand, I had only the weak instrument of personal introspection plus childhood memory. Starting with a simple operation, I decided to attempt to compare its execution by both systems. The operation: single-digit addition.

As a child, 3+2 could have multiple interpretations depending on previous education. Two examples might be: (1) visualize 3 apples, visualize 2 apples, count how many apples "appear" in working memory, and that gives you the answer. (2) Hold your fist in front of you and stretch out one finger at a time, counting incrementally until you reach 3; then start a new "thread" at 0, stretching more fingers and counting until you reach 2, while also incrementing the first thread that stopped at 3. The result is then the number reached by the first thread.
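
To make interpretation (2) concrete, here is a minimal sketch in Python (my own toy illustration, not anything from the literature; the two counters stand in for the two "threads"):

```python
def add_by_counting(a, b):
    """Interpretation (2): count up to a on one 'thread', then count b
    more steps on a second 'thread' while the first keeps incrementing."""
    first_thread = 0
    for _ in range(a):            # stretch fingers until the count reaches a
        first_thread += 1
    second_thread = 0
    while second_thread < b:      # the new thread counts toward b...
        second_thread += 1
        first_thread += 1         # ...while the first thread keeps going
    return first_thread           # the number the first thread reached

print(add_by_counting(3, 2))      # 5
```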

The above is an attempt at analyzing how a child, using Type 2 processes, would find the answer to 3+2, while a grown-up simply looks at "3+2" and "5" "magically" pops up in her brain.

Now, the question is: can we interpret the child's processes as a sequence of Type 1 operations? The key operation here is counting; everything else can easily be understood as Type 1 operations (for example, a connection between the written number "3" and a picture of three apples can be understood as Type 1). What happens in the child's brain as he counts? As children we had to learn to count, probably by just repeating the numbers in order over and over again to form connections between them. After some practice, the number 1 forms a connection to 2, which is connected to 3, etc., in a linked list that extends as we learn more numbers. Combining this connection with a connection between a written number and its location in this list (3 is one element higher than 2), a child can use Type 1 to count.
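
A toy sketch of that linked list (again my own illustration; the dict below merely stands in for associations learned by rote):

```python
# Each number "fires" its successor, as learned by repeating numbers in order.
NEXT = {1: 2, 2: 3, 3: 4, 4: 5, 5: 6, 6: 7, 7: 8, 8: 9, 9: 10}

def count_past(start, steps):
    """Counting as pure association: follow successor links 'steps' times."""
    n = start
    for _ in range(steps):
        n = NEXT[n]
    return n

print(count_past(3, 2))  # 5: two links past 3 in the list
```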

So, roughly and abstractly, a child's brain adding 3+2 might go through a sequence like this: the sight of "3" fires a picture of 3 apples (a younger child might need to perform a counting pattern to reach that step, a pattern which would also later migrate to Type 1), "2" fires two apples, and the child then starts counting (each number connected to the next, with the context of counting reinforcing this connection), crossing out one apple with each fired number, until all apples are crossed out.

Now this introduces the following mental operation: visualizing apples and performing operations on this visual image while counting (like crossing out or marking each counted apple). My wild guess here is that this, again, is reducible to Type 1 operations resulting from the teacher's basic instructions on addition, including visual demonstrations.

Levels of Type 2 to 1 Migration

Now, as pointed out above, a younger child might need to apply counting to convert "3" to an image of 3 apples. As the child grows, she might have formed (by practice) the direct grown-up pattern that translates the image of "3+2" directly to "5". She will then use this to add numbers like 13+12, utilizing the "3+2" and "1+1" visual patterns (plus a carry pattern for sums that need one). So the child applies Type 2 addition utilizing several skills recently migrated to Type 1. As the child grows up, more layers of processes migrate to Type 1, and the current Type 2 operations become more efficient as they rely on those migrated skills.
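
A rough sketch of this layering (hypothetical; the lookup table stands in for single-digit patterns already migrated to Type 1):

```python
# Cached single-digit sums: patterns already migrated to Type 1.
TYPE1_SUMS = {(a, b): a + b for a in range(10) for b in range(10)}

def add_multidigit(x, y):
    """Type 2 routine: walk the digits right to left, letting the cached
    Type 1 table answer each single-digit sum, and track the carry."""
    result, carry, place = 0, 0, 1
    while x or y or carry:
        d = TYPE1_SUMS[(x % 10, y % 10)] + carry
        result += (d % 10) * place
        carry = d // 10
        x, y, place = x // 10, y // 10, place * 10
    return result

print(add_multidigit(13, 12))  # 25, via the "3+2" and "1+1" patterns
```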

So what I am saying here, my guess, is that there is no clear distinction between the two Types: Type 2 operations are simply those that use a large number of Type 1 steps, and hence are slower, non-automatic (as they are slow, there is more time for other processes to stop them from completing, and hence they seem controlled), and effortful.

Which connectionist pattern will be used

A grown-up probably still has all those accumulated skills in place. Seeing "3+2", I still have the ability to apply the apple technique, as well as the direct connection between "3+2" and "5". Which one I use, I suggest, is based on one of two probable algorithms:

  1. Size: I use what I call the "Largest Available Recognizable Pattern" (LARP) - that is, the pattern that minimizes how many patterns I need to invoke to come to a result. The brain keeps invoking patterns from largest (fewest total patterns) to smallest, until a reasonable result is reached (see the sketch after this list).
  2. Time: this is based on the quickest pattern, which would usually be equivalent to the largest.
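
A toy sketch of the LARP rule (both the pattern store and the counting fallback are hypothetical stand-ins):

```python
# Directly recognized chunks (largest patterns) and successor links (smallest).
PATTERNS = {"3+2": 5}                          # the grown-up's one-shot pattern
NEXT = {0: 1, 1: 2, 2: 3, 3: 4, 4: 5, 5: 6}    # rote counting links

def solve(expr):
    """Try the Largest Available Recognizable Pattern first; if the whole
    expression isn't recognized, fall back to slower step-by-step counting."""
    if expr in PATTERNS:               # one big pattern: fast and "automatic"
        return PATTERNS[expr]
    a, b = (int(s) for s in expr.split("+"))
    n = a
    for _ in range(b):                 # many small patterns: slow and effortful
        n = NEXT[n]
    return n

print(solve("3+2"))  # 5, via the single largest pattern
print(solve("4+2"))  # 6, via the counting fallback
```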

And?

I totally confess that this is a wild guess, and an idea that is not at all fully developed. I am not aware whether this idea has been suggested in a more mature form, so this is mainly an attempt to get feedback and resources from you, and perhaps to build it into a better structure.

The value of developing such a theory is that at some point it could become testable, and perhaps yield a better understanding of how we learn new skills, along with more efficient ways to acquire and develop them.

Comment author: Oscar_Cunningham 10 August 2011 06:41:09PM 16 points

I always get confused by experiments involving how generous people are with money, because if I took 5/5 instead of 6/1 I'd be taking $3 from the experimenters! Who am I to say that they are less deserving than my co-experimentee?

Comment author: Khaled 13 August 2011 02:11:03AM *  4 points

You'd be taking $3 from the experimenters, but in return giving them data that represents your decision in the situation they are trying to simulate (a situation where only the two experimentees exist), though your point shows they didn't manage to set it up very accurately.

I realize it will be difficult to ignore the fact you mentioned once you notice it; I'm just pointing out that not noticing it can be more advantageous for the experimenter and yourself (though not for the other experimentee) - maybe another case of strategic ignorance.

Comment author: Khaled 11 August 2011 09:27:02AM 2 points

It might help to include elements of rationality within each course, in addition to a ToK course on its own. For example, in physics it might be useful to teach theories that turned out to be incorrect, and to analyze how and why each seemed correct at one point in time, and how, as more evidence was collected, it turned out to be incorrect.

Perhaps this is too difficult to include in current curricula, so it could be included in the ToK course as additional discussions: a kind of application or case study of Bayes' theorem. (This could be prone to hindsight bias, so care must be taken not to make the errors in the theory seem obvious.)
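
For example, such a case study might walk students through a numeric update like this one (a minimal sketch; the probabilities are invented purely for illustration):

```python
# How one surprising experimental result updates belief in a theory.
prior = 0.9               # P(theory) before the experiment
p_e_given_h = 0.2         # P(result | theory true): the result was unexpected
p_e_given_not_h = 0.8     # P(result | theory false)

evidence = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
posterior = p_e_given_h * prior / evidence
print(round(posterior, 3))  # 0.692: the theory survives, but weakened
```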

Comment author: Khaled 22 July 2011 10:23:15AM 2 points

In relation to connectionism, wouldn't that be the expected behavior? Taking the example of Tide, wouldn't we expect "ocean" and "moon" to give a head start to "Tide" when "favorite detergent" fires up all detergent names in the brain, while not expecting "Tide", "favorite", and "why" to give a head start to "ocean" and "moon"?

Perhaps the time between eliciting "Tide" and asking for the reason for choosing it is also relevant (since asking for the reason while "ocean" and "moon" are still active in the brain gives them a better chance of being chosen as the reason)?

Comment author: whpearson 19 July 2011 10:35:52AM *  4 points

Connectionism may be the best we've got. But it is not very good.

Take the recent example of improving performance on a task by reading a manual. If we were to try to implement something similar in a connectionist/reinforcement model, we would have problems. We need positive and negative reinforcement to change the neural connection strengths, but we wouldn't get those while reading a book, so how do we assimilate the non-inductive information stored there? It is possible with feedback loops, which can be used to store information quickly in a connectionist system; however, I haven't seen any systems use or learn them on the sort of scale that would be needed for the civilization problem.

There are also more complex processes which seem out of its reach, such as learning a language using a language, e.g. "En français, le mot pour 'cat' est 'chat'".

Comment author: Khaled 19 July 2011 01:17:19PM *  1 point

The idea of "virtual machines" mentioned in [Your Brain is (almost) Perfect](http://www.amazon.com/Your-Brain-Almost-Perfect-Decisions/dp/0452288843) tempts me to think in the direction of "reading a manual will trigger the neurons involved in running the task, and the reinforcements will be applied to those 'virtual' runs".

How reading a manual triggers this virtual run can be answered in the same way as how hearing "get me a glass of water" triggers the neurons to do so; if I then get a "thank you", the behavior is reinforced. In the same way, reading "to turn on the TV, press the red button on the remote" might trigger the neurons for turning on a TV and reinforce the behavior in accordance with the manual.

I know this is quite a wild guess, but perhaps someone can elaborate on it in a more accurate manner.

Comment author: sixes_and_sevens 18 July 2011 01:05:31PM 2 points

By "victory condition", I mean a condition which, when met, determines the winning, losing and drawing status of all players in the game. A stopping rule is necessary for a victory condition (it's the point at which it is finally appraised), but it doesn't create a victory condition, any more than imposing a fixed stopping time on any activity creates winners and losers in that activity.

Comment author: Khaled 19 July 2011 10:02:33AM 1 point

Can we know the victory condition from just watching the game?

In response to comment by [deleted] on Secrets of the eliminati
Comment author: sixes_and_sevens 18 July 2011 10:30:00AM 9 points

Human games (of the explicit recreational kind) tend to have stopping rules isomorphic with the game's victory conditions. We would typically refer to those victory conditions as the objective of the game, and the goal of the participants. Given a complete decision tree for a game, even a messy stochastic one like Canasta, it seems possible to deduce the conditions necessary for the game to end.

An algorithm that doesn't stop (such as the blue-minimising robot) can't have anything analogous to the victory condition of a game. In that sense, its goals can't be analysed in the same way as those of a Connect Four-playing agent.

Comment author: Khaled 18 July 2011 11:51:49AM 2 points

So if the blue-minimising robot were to stop after 3 months (the stopping condition measured by a timer), could we say that the robot's goal is to stay "alive" for 3 months? I cannot see a necessary link between deducing goals and stopping conditions.

A "victory condition" is another thing, but from a decision tree, can you deduce who loses (for Connect Four, perhaps it is the one who reaches the first four that loses).

Comment author: Khaled 18 July 2011 09:04:53AM 8 points

But if, whenever I eat dinner at 6, I sleep better than when I eat dinner at 8, can I not say that I prefer dinner at 6 over dinner at 8? That would be one step beyond saying I prefer sleeping well to not sleeping well.

I think we get a better view if we consider many preferences in action. Taking your cryonics example, maybe I prefer to live (to a certain degree), prefer to conform, and prefer to procrastinate. In the burning-building situation, the living preference acts more or less alone, while in the cryonics situation, preferences interact somewhat like opposing forces, and motion happens on the winning side. Maybe this is what makes preferences seem to vary?
