loqi comments on Extreme Rationality: It's Not That Great - Less Wrong

140 Post author: Yvain 09 April 2009 02:44AM




Comment author: pjeby 09 April 2009 08:20:47PM 1 point [-]

Influencing your subconscious in rational ways is not easy or simple.

How about influencing your subconscious in irrational ways? I find that much easier, myself. The subconscious isn't logical, and it doesn't "think", it's just a giant lookup table. If you store the right entries under the right keys, it does useful things. The hardest part of hacking it is that there's no "view source" button or way to get a listing of what's already in there: you have to follow associative links or try keys that have worked for other people.

Well, I say hardest, but it's not so much hard as being sometimes tedious or time-consuming. The actually changing things part is usually quite quick. If it's not, you're almost certainly doing something wrong.

Comment author: loqi 10 April 2009 04:21:54PM 2 points [-]

The subconscious isn't logical, and it doesn't "think", it's just a giant lookup table.

I'm suspicious of this characterization. I've made a couple of surprising subconscious deductions in the past, and they forcefully reminded me that there's a very complex human brain down there doing very complex brain things on the sly all the time. You may have learned some tricks to manipulate it, but I'd be surprised if you've done more than scratch the surface if you really consider it to be just a simple lookup table.

Comment author: pjeby 10 April 2009 04:36:17PM 0 points [-]

I didn't say it was a simple lookup table. It's indexed in lots of non-trivial ways; see e.g. my post here about "Spock's Dirty Little Secret". I just said that fundamentally, it's a lookup table.

I also didn't say it's not capable of complex behavior. A state machine is "just a lookup table", and that in no way diminishes its potential complexity of behavior.

When I say the subconscious doesn't "think", I specifically mean that if you point your built-in "mind projection" at your subconscious, you will misunderstand it, in the same way that people end up believing in gods and ghosts: projecting intention where none exists.

This is a major misunderstanding -- if not THE major misunderstanding -- of the other-than-conscious mind. It's not really a mind, it's a "Chinese room".

That doesn't mean we don't have complex behavior or can't do things like self-sabotage. The mistake is in projecting personhood onto our self-sabotaging behaviors, rather than seeing the state machine that drives them: condition A triggers appetite B leading to action C. There's no "agency" there, no "mind". So if you use an agency model (including Ainslie's "interests" to some extent), you'll take incorrect approaches to change.
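The condition-appetite-action chain described above can be sketched as a literal lookup table. Everything here (the keys, the responses, the `respond` helper) is an illustrative invention, not part of pjeby's actual model:

```python
# Each entry maps a trigger to a response; chaining entries produces
# behavior that can look purposeful without any "mind" behind it.
table = {
    "deadline_near": "anxiety",    # condition A -> appetite B
    "anxiety": "check_email",      # appetite B -> action C
}

def respond(stimulus):
    """Follow lookups until no entry matches, recording each step."""
    trace = [stimulus]
    while stimulus in table:
        stimulus = table[stimulus]
        trace.append(stimulus)
    return trace

print(respond("deadline_near"))  # ['deadline_near', 'anxiety', 'check_email']
```

The point of the sketch is that the "self-sabotage" trace falls out of plain key-value chaining, with no agent anywhere in the loop.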

But if you realize it's a state machine, stored in a lookup table, then you can change it directly. And for that matter, you can use it more effectively as well. I've been far more creative and better at strategy since I learned to engage my creative imagination in a mechanical way, rather than waiting for the muse to strike.

Meanwhile, it'd also be a mistake to think of it as a single lookup table; it includes many things that seem to me like specialized lookup tables. However, they are accessible through the same basic "API" of the senses, so I don't worry about drawing too fine a distinction between the tables, except insofar as how they appear relates to specific techniques.

Comment author: MendelSchmiedekamp 10 April 2009 06:30:12PM 0 points [-]

I look forward to seeing where your model goes as it becomes more nuanced. Among other things, I'm very curious about how your model takes into account actual computations (for example finding answers to combinatorial puzzles) that are performed by the subconscious.

Comment author: pjeby 10 April 2009 06:38:52PM 0 points [-]

I'm very curious about how your model takes into account actual computations (for example finding answers to combinatorial puzzles) that are performed by the subconscious.

What, you mean like Sudoku or something?

Comment author: MendelSchmiedekamp 10 April 2009 06:53:52PM 0 points [-]

Sudoku would be one example. I meant generally puzzles or problems involving search spaces of combinations.

Comment author: pjeby 10 April 2009 07:23:39PM 0 points [-]

Well, I'll use sudoku since I've experienced both conscious and unconscious success at it. It used to drive me nuts how my wife could just look at a puzzle and start writing numbers, on puzzles that were difficult enough that I needed to explicitly track possibilities.

Then, I tried playing some easy puzzles on our Tivo, and found that the "ding" reward sound when you completed a box or line made it much easier to learn, once I focused on speed. I found that I was training myself to recognize patterns and missing numbers, combined with efficient eye movement.

I'm still a little slower than my wife, but it's fascinating to observe that I can now tell the available possibilities for larger and larger numbers of spaces without consciously thinking about it. I just look at the numbers and the missing ones pop into my head. Over time, this happens less and less consciously, such that I can just glance at five or six numbers and know what the missing ones are without a conscious step.

This doesn't require a complex subconscious; it's sufficient to have a state machine that generates candidate numbers based on seen numbers and drops candidates as they're seen. It might be more efficient in some sense to cross off candidates from a master list, except that the visualization would be more costly. One thing about how visualization works is that it takes roughly the same time to visualize something in detail as it does to look at it... which means that visualizing nine numbers would take about the same amount of time as scanning the boxes.

Also, I can sometimes tell my brain is generating candidates while I scan: I hear them verbalized as the scan goes, although the point in the scan where they pop up varies; sometimes it's early, and my eyes scan forward or back to double-check.
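That candidate-elimination process fits in a few lines: start with all nine digits and discard each one as it is seen, with no search or backtracking. The function name and example row are mine, not from the comment:

```python
def missing_digits(seen):
    """Return the digits 1-9 not present in the scanned cells."""
    candidates = set(range(1, 10))   # start with every possibility
    for n in seen:
        candidates.discard(n)        # drop each candidate as it's seen
    return sorted(candidates)

# Scanning a row with three blanks:
print(missing_digits([5, 3, 9, 1, 7, 2]))  # [4, 6, 8]
```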

Is this the sort of thing you're asking about?

Comment author: MendelSchmiedekamp 11 April 2009 02:10:24PM 1 point [-]

It seems that our models are computationally equivalent. After all, a state machine with arbitrarily extensible memory is Turing complete, and with adaptive response to the environment it is a complex adaptive system, whatever model you have of it.

I have spent a great deal of time and reasoning on developing models of people in such a way. So, with my cognitive infrastructure, it makes more sense to model appropriately complex adaptive systems as people-like systems. Obviously you are more comfortable with computer science models.

But the danger with models is that they are always limiting in what they can reveal.

In the case of this example, I find it unsurprising that while you have extended the lookup table to include the potential to reincorporate previously seen solutions, you avoid the subject of novel solutions being generated, even by a standard combinatorial rule. I suspect this is one particular shortcoming of the lookup-table basis for modeling the subconscious.

I suspect my models have similar problems, but it's always hardest to see them from within.

Comment author: pjeby 11 April 2009 02:50:20PM 1 point [-]

After all, a state machine with arbitrary extensible memory is Turing complete, and with adaptive response to the environment it is a complex adaptive system, what-ever model you have of it.

Of course. But mine is a model specifically oriented towards being able to change and re-program it -- as well as understanding more precisely how certain responses are generated.

One of the really important parts of thinking in terms of a lookup table is that it simplifies debugging. That is, one can be taught to "single-step" the brain, and identify the specific lookup that is causing a problem in a sequence of thought-and-action.

How do you do that with a mind-projection model?

So, with my cognitive infrastructure, it makes more sense to model appropriately complex adaptive systems as people-like systems. Obviously you are more comfortable with computer science models.

The problem with modeling oneself as a "person" is that it gives you wrong ideas about how to change, and creates maladaptive responses to unwanted behavior.

Whereas, with my more "primitive" model:

  1. I can solve significant problems of myself or others by changing a conceptually-single "entry" in that table, and

  2. The lookup-table metaphor depersonalizes undesired responses in my clients, allowing them to view themselves in a non-reactive way.

Personalizing one's unconscious responses leads to all kinds of unhelpful carry-over from "adversarial" concepts: fighting, deception, negotiation, revenge, etc. This is very counterproductive, compared to simply changing the contents of the table.

Interestingly, this is one of the metaphors that I hear back from my clients the most, referencing personal actions to change. That is, AFAICT, people find it tremendously empowering to realize that they can develop any skill or change any behavior if they can simply load or remove the right data from the table.
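A minimal sketch of what "changing a single entry" looks like in the lookup-table metaphor; the trigger, the responses, and the `respond` helper are invented purely for illustration:

```python
# One trigger, one stored response.
table = {"criticism": "defensiveness"}

def respond(stimulus):
    return table.get(stimulus, "no_entry")

before = respond("criticism")      # 'defensiveness'
table["criticism"] = "curiosity"   # overwrite the single entry
after = respond("criticism")       # 'curiosity'
print(before, "->", after)
```

No fighting or negotiating with the old response: the same key simply looks up different contents afterward, which is the depersonalized framing described above.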

In the case of this example, I find it unsurprising that while you have extended the look up table to include the potential to reincorporate previously seen solutions, you avoid the subject of novel solutions being generated, even by a standard combinatorial rule.

Of course novel solutions can be generated -- I do it all the time. You can pull data out of the system in all sorts of ways, and then feed it back in. For talking about that, I use search-engine or database metaphors.

Comment author: MendelSchmiedekamp 11 April 2009 03:49:15PM 0 points [-]

I'm not talking about a mind-projection model. I'm talking about using information models, constructed and vetted to effectively model people, as a foundation for a different model of a part of a person.

I've modeled my subconscious in a similar manner before, and I've gained benefits from it not unlike some you describe. I've even gone so far as to model up to sub-processor levels of capabilities and multi-threading. At the same time I was developing the Other models I mentioned, but they were incomplete.

Then during adolescence I refined my Other models well enough for them to start working. I can go more into that later, but as time went on it became clear that computation models simply didn't let me pack enough information in my interactions with my subconscious, so I needed a more information rich model. That is what I'm talking about.

So bluntly, but honestly, I feel what you're describing is, at best, what an eight year old should be doing to train their subconscious. But mostly I'm hoping you'll be moving forward.

Of course novel solutions can be generated -- I do it all the time. You can pull data out of the system in all sorts of ways, and then feed it back in. For talking about that, I use search-engine or database metaphors.

Search engines and databases don't produce novel solutions on their own, even in the sense of a combinatorial algorithm, and certainly not in the sense of more creative innovation. There are many anecdotes claiming the subconscious can incorporate more dimensions in problem solving than the conscious, some more poetic than others (answers coming in dreams or in showers), and it seems dangerous to simply disregard them.

Comment author: Annoyance 11 April 2009 02:17:21PM 0 points [-]

Any computational process can be emulated by a sufficiently complicated lookup table. We could, if we wished, consider the "conscious mind" to be such a table.
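Over a finite input domain, that emulation claim is easy to demonstrate: precompute a function into a table and the table behaves identically. The function `f` here is arbitrary, chosen only for the example:

```python
def f(x):
    """Some computational process to be emulated."""
    return x * x + 1

domain = range(10)
table = {x: f(x) for x in domain}  # the "sufficiently complicated" lookup table

# The table reproduces the computation exactly over its domain.
assert all(table[x] == f(x) for x in domain)
print(table[7])  # 50
```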

Dismissing the unconscious because it's supposedly a lookup table is thus wrong in two ways: firstly, it's not implemented as such a table, and secondly, even if it were, that puts no limitations, restrictions, or reductions on what it's capable of doing.

The original statement in question is not just factually incorrect but conceptually misguided, and the likely harm to the resulting model's usefulness is incalculable.