pjeby comments on Extreme Rationality: It's Not That Great - Less Wrong
Influencing your subconscious in rational ways is not easy or simple. But at the same time, simply because something is hard doesn't mean it should be discarded out of hand as a viable route to achieving your goals, especially if those goals are important.
How about influencing your subconscious in irrational ways? I find that much easier, myself. The subconscious isn't logical, and it doesn't "think", it's just a giant lookup table. If you store the right entries under the right keys, it does useful things. The hardest part of hacking it is that there's no "view source" button or way to get a listing of what's already in there: you have to follow associative links or try keys that have worked for other people.
Well, I say "hardest", but it's not so much hard as sometimes tedious or time-consuming. The actually-changing-things part is usually quite quick. If it's not, you're almost certainly doing something wrong.
I'm suspicious of this characterization. I've made a couple of surprising subconscious deductions in the past, and they forcefully reminded me that there's a very complex human brain down there doing very complex brain things on the sly all the time. You may have learned some tricks to manipulate it, but I'd be surprised if you've done more than scratch the surface if you really just consider it to be a simple lookup table.
I didn't say it was a simple lookup table. It's indexed in lots of non-trivial ways; see e.g. my post here about "Spock's Dirty Little Secret". I just said that fundamentally, it's a lookup table.
I also didn't say it's not capable of complex behavior. A state machine is "just a lookup table", and that in no way diminishes its potential complexity of behavior.
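The "state machine is just a lookup table" point is easy to make concrete. Here's a minimal sketch; the states and events are made-up illustrations, not anything pjeby has actually specified:

```python
# A finite state machine is literally a lookup table:
# (current state, input event) -> next state.
# Complex-looking behavior falls out of nothing but repeated lookups.
TRANSITIONS = {
    ("idle", "cue"): "craving",
    ("craving", "availability"): "acting",
    ("craving", "distraction"): "idle",
    ("acting", "done"): "idle",
}

def step(state, event):
    # One table lookup per event; stay put on unrecognized input.
    return TRANSITIONS.get((state, event), state)

state = "idle"
for event in ["cue", "distraction", "cue", "availability", "done"]:
    state = step(state, event)

print(state)  # -> idle
```

Nothing in the table "intends" anything, yet the sequence of lookups produces conditional, history-dependent behavior.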
When I say the subconscious doesn't "think", I specifically mean that if you point your built-in "mind projection" at your subconscious, you will misunderstand it, in the same way that people end up believing in gods and ghosts: projecting intention where none exists.
This is a major misunderstanding -- if not THE major misunderstanding -- of the other-than-conscious mind. It's not really a mind, it's a "Chinese room".
That doesn't mean we don't have complex behavior or can't do things like self-sabotage. The mistake is in projecting personhood onto our self-sabotaging behaviors, rather than seeing the state machine that drives them: condition A triggers appetite B leading to action C. There's no "agency" there, no "mind". So if you use an agency model (including Ainslie's "interests" to some extent), you'll take incorrect approaches to change.
But if you realize it's a state machine, stored in a lookup table, then you can change it directly. And for that matter, you can use it more effectively as well. I've been far more creative and better at strategy since I learned to engage my creative imagination in a mechanical way, rather than waiting for the muse to strike.
Meanwhile, it'd also be a mistake to think of it as a single lookup table; it includes many things that seem to me like specialized lookup tables. However, they are accessible through the same basic "API" of the senses, so I don't worry about drawing too fine a distinction between the tables, except insofar as their apparent structure relates to specific techniques.
I look forward to seeing where your model goes as it becomes more nuanced. Among other things, I'm very curious about how your model takes into account actual computations (for example finding answers to combinatorial puzzles) that are performed by the subconscious.
What, you mean like Sudoku or something?
Sudoku would be one example. I meant generally puzzles or problems involving search spaces of combinations.
Well, I'll use Sudoku, since I've experienced both conscious and unconscious success at it. It used to drive me nuts how my wife could just look at a puzzle and start writing numbers, on puzzles difficult enough that I needed to explicitly track possibilities.
Then, I tried playing some easy puzzles on our Tivo, and found that the "ding" reward sound when you completed a box or line made it much easier to learn, once I focused on speed. I found that I was training myself to recognize patterns and missing numbers, combined with efficient eye movement.
I'm still a little slower than my wife, but it's fascinating to observe that I can now tell the available possibilities for larger and larger numbers of spaces without consciously thinking about it. I just look at the numbers and the missing ones pop into my head. Over time, this happens less and less consciously, such that I can just glance at five or six numbers and know what the missing ones are without a conscious step.
This doesn't require a complex subconscious; it's sufficient to have a state machine that generates candidate numbers based on the numbers seen, and drops candidates as they're seen. It might be more efficient in some sense to cross off candidates from a master list, except that the visualization would be more costly. One thing about how visualization works is that it takes roughly the same time to visualize something in detail as it does to look at it... which means that visualizing nine numbers would take about the same amount of time as scanning the boxes. Also, I can sometimes tell my brain is generating candidates while I scan: I hear them verbalized auditorily as the scan proceeds, although the point in the scan at which they pop up varies; sometimes it's early, and my eyes scan forward or back to double-check.
Is this the sort of thing you're asking about?
It seems that our models are computationally equivalent. After all, a state machine with arbitrary extensible memory is Turing complete, and with adaptive response to the environment it is a complex adaptive system, whatever model you have of it.
I have spent a great deal of time and reasoning developing models of people in this way. So, with my cognitive infrastructure, it makes more sense to model appropriately complex adaptive systems as person-like systems. Obviously you are more comfortable with computer science models.
But the danger with models is that they are always limiting in what they can reveal.
In the case of this example, I find it unsurprising that while you have extended the lookup table to include the potential to reincorporate previously seen solutions, you avoid the subject of novel solutions being generated, even by a standard combinatorial rule. I suspect this is one particular shortcoming of the lookup-table basis for modeling the subconscious.
I suspect my models have similar problems, but it's always hardest to see them from within.
Of course. But mine is a model specifically oriented towards being able to change and re-program it -- as well as understanding more precisely how certain responses are generated.
One of the really important parts of thinking in terms of a lookup table is that it simplifies debugging. That is, one can be taught to "single-step" the brain, and identify the specific lookup that is causing a problem in a sequence of thought-and-action.
How do you do that with a mind-projection model?
The problem with modeling one's self as a "person", is that it gives you wrong ideas about how to change, and creates maladaptive responses to unwanted behavior.
Whereas, with my more "primitive" model:
I can solve significant problems of myself or others by changing a conceptually-single "entry" in that table, and
The lookup-table metaphor depersonalizes undesired responses in my clients, allowing them to view themselves in a non-reactive way.
Personalizing one's unconscious responses leads to all kinds of unhelpful carry-over from "adversarial" concepts: fighting, deception, negotiation, revenge, etc. This is very counterproductive, compared to simply changing the contents of the table.
Interestingly, this is one of the metaphors I hear back from my clients the most when they reference their own actions to change. That is, AFAICT, people find it tremendously empowering to realize that they can develop any skill or change any behavior if they can simply load or remove the right data from the table.
Of course novel solutions can be generated -- I do it all the time. You can pull data out of the system in all sorts of ways, and then feed it back in. For talking about that, I use search-engine or database metaphors.
Any computational process can be emulated by a sufficiently complicated lookup table. We could, if we wished, consider the "conscious mind" to be such a table.
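The emulation point is easy to demonstrate for any function over a finite input domain. A toy sketch (the function choice is arbitrary, just something with nontrivial control flow):

```python
# Tabulate an arbitrary computation over a finite domain, then replace
# the computation with pure lookup. Behavior is identical by construction.
def collatz_steps(n):
    """Count the steps in the Collatz sequence from n down to 1."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

# Precompute the answers for every input we'll ever be asked about.
TABLE = {n: collatz_steps(n) for n in range(1, 101)}

def collatz_steps_by_lookup(n):
    return TABLE[n]  # no "thinking", just retrieval

print(collatz_steps_by_lookup(27) == collatz_steps(27))  # -> True
```

Over its domain, the lookup version is observationally indistinguishable from the original computation, which is the point: "it's a lookup table" places no ceiling on capability.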
Dismissing the unconscious because it's supposedly a lookup table is thus wrong in two ways: firstly, it's not implemented as such a table, and secondly, even if it were, that puts no limitations, restrictions, or reductions on what it's capable of doing.
The original statement in question is not just factually incorrect but conceptually misguided, and the likely harm to the resulting model's usefulness is incalculable.
"The subconscious isn't logical, and it doesn't "think", it's just a giant lookup table."
Of all your errors thus far, those two are your most damaging.
I agree that the subconscious isn't just a giant lookup table, and that many people who make this error use it to justify practices which destroy other people's minds. But there are some important techniques of making the subconscious work better that are hard to invent unless you imagine that the subconscious is mostly a giant lookup table. pjeby uses these techniques in his practice. Do you deny pjeby's data that these techniques work? Do you even know which data made pjeby want to write "it's just a giant lookup table"? If you do know which data made pjeby want to write that, do you mean that it was wrong for him to write "the subconscious is just a giant lookup table" and not "the subconscious is mostly like just a giant lookup table"?
I feel like you don't think through the real details of what other people are thinking and how those details would have to actually interact with the high standards you have for the thoughts of those people. All you do is tell them that you think something they did means they broke a rule.
pjeby has provided very little data. He's claimed that his techniques work. He's described them in terms that (1) are supremely vague about what he actually does, and (2) seem to imply that he has gained the ability to change all sorts of things about the behaviour of the unconscious bits of his brain more or less at will.
There have been other people and groups that have made similar claims about their techniques. For instance, the Scientologists (though their claims about what they can do are more outlandish than pjeby's).
None of this means that pjeby is wrong, still less that he's not being honest with us: but it means that an appeal to "pjeby's data" is a bit naive. All we have so far -- unless there are gems hidden in threads I haven't read, which of course there might be -- are his claims.
Annoyance has a point here. A lookup table is a very limiting model for a subconscious.
What is the benefit you gain by assuming that there is no organizing structure, whether or not it is known to you, within your subconscious?
Personally, I prefer a continually evolving model, updating with experience and observations. With periodic sanity checks of varying scales of severity. Not unlike how I model people.
Of course, this lends a resulting bias: I treat my subconscious a bit like a person, with encouragement, care, and deals. This can also yield positive outcomes, like running subconscious mental operations for long-term problem solving (a more active and volitional version of waiting for inspiration to strike) and encouraging those operations to leave appropriate tracebacks, making it easier for me to consciously verify them.
Not sure if that would work for other folks though, cognitive infrastructure may vary.