
Comment author: lifelonglearner 24 March 2017 01:34:36AM 1 point

Thanks for sharing! This looks like it'll be useful for my studies!

Comment author: lifelonglearner 22 March 2017 04:18:49AM 3 points

Brief simplified summary (others feel free to correct):

gworley looks at the philosophical discourse surrounding the idea of interpreting the work of others to get at their meaning (I think this is what hermeneutics is?), as well as how we still fundamentally only derive information from our senses. He notes the sort of clash that occurs today where the general "scientific" worldview has shifted far into empiricism and doesn't do a good job of exploring the other underpinnings of our knowledge.

Comment author: MrMind 21 March 2017 08:07:05AM 1 point

Yes, potential as used in physics: a quantity spread out in space whose gradient determines force (electric potential is just one of the three potentials in nature).
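To make the physics usage concrete, here is a minimal numerical sketch (my illustration, not from the thread) of force as the negative gradient of a scalar potential, using a simple one-dimensional well V(x) = x²:

```python
import numpy as np

# Illustrative only: a 1-D potential well V(x) = x^2, arbitrary units.
# In physics, force is the negative gradient of the potential: F = -dV/dx.
x = np.linspace(-2.0, 2.0, 401)
V = x**2

F = -np.gradient(V, x)  # numerical negative gradient

# The force everywhere points back toward the bottom of the well (x = 0):
assert F[0] > 0            # left of the well, force pushes right
assert F[-1] < 0           # right of the well, force pushes left
assert abs(F[200]) < 1e-9  # at the bottom (x = 0), no net force
```

This is the sense in which a "well of potential" already encodes attraction: anything placed on the slope gets pushed toward the minimum.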

Comment author: lifelonglearner 21 March 2017 04:54:37PM 0 points

Ah, got it. Thanks!

Comment author: MrMind 20 March 2017 04:39:39PM 0 points

I don't see how Attractor Theory explains global modifications of preference. For instance:

the set of things that feel desirable to me after running a marathon may differ greatly from the set of things after I read a book

might be true but it's not warranted by a simple model of an attractor, which modifies only your local preferences.

Also, why use the term 'attractor', which has a very specific connotation? Does the concept convey differences that aren't already covered by a potential well?

Comment author: lifelonglearner 20 March 2017 09:25:39PM 0 points

Yes, if I gave that impression, I apologize. I don't think that this does a good job of modeling global preference modifications.

I'm not well versed with the idea of potential in this context, so from my mindset, "attractor" seemed like the best term. Do you mean potential in the electrical sense? (The short answer here is that I'm not well-versed in domain knowledge.)

Comment author: RomeoStevens 18 March 2017 09:19:29PM 1 point

Common limiting beliefs can be seen as particularly strong attractors along certain dimensions of mindspace that coping mechanisms use to winnow the affordance space down to something manageable without too much cognitive overhead. Regularities in behaviors that don't serve a direct purpose could also be seen as spandrels from our training data clustering things that don't necessarily map directly to causality. I.e., you can get animals to do all sorts of wacky things with clicker training, which then persist even if you start rewarding only a subset of the actions, if the animal has no obvious way of unbundling the actions.

Comment author: lifelonglearner 18 March 2017 10:22:44PM 1 point

That's interesting. I was pegging Attractors as physical actions, but I think the analogy can be loosely applied to mental concepts too (as I think you're doing here.)

I think that regularities can be strategically used as you suggest to create additional "anchors" to helpful habits in the real world. (EX: Having a running timer probably shouldn't actually affect my running habits in the normative sense, yet having a timer when running really makes it feel more official / formal.)

Attractor Theory: A Model of Minds and Motivation

4 lifelonglearner 18 March 2017 03:12AM

[Epistemic status: Moderately strong. Attractor Theory is a model based on the well-researched concept of time-inconsistent preferences combined with anecdotal evidence that extends the theory to how actions affect our preferences in general. See the Caveats at the end for a longer discussion on what this model is and isn’t.]

<Cross-posted from mindlevelup>


Introduction:

I’ve been thinking about minds and motivation somewhat on/off for about a year now, and I think I now have a model that merges some related ideas together into something useful. The model is called Attractor Theory, and it brings together ideas from Optimizing Your Mindstate, behavioral economics, and flow.

Attractor Theory is my attempt to provide a way of looking at the world that hybridizes ideas from the Resolve paradigm (where humans Actually Try and exert their will) and the “click-whirr” paradigm (where humans are driven by “if-then” loops and proceduralized habits).

As a brief summary, Attractor Theory basically states that you should consider any action you take as being easier to continue than to start, as well as having meta-level effects on changing your perception of which actions feel desirable.


The Metaphor:

Here’s a metaphor that provides most of the intuitions behind Attractor Theory:

Imagine that you are in a hamster ball:


As a human inside this ball, you can kinda roll around by exerting energy. But it’s hard to do so all of the time — you’d likely get tired. Still, if you really wanted to, you could push the ball and move.


These are Utilons. They represent productivity hours, lives saved, HPMOR fanfictions written, or anything else you care about maximizing. You are trying to roll around and collect as many Utilons as possible.


But the terrain isn’t actually smooth. Instead, there are all these Attractors that pull you towards them. Attractors are like valleys, or magnets, or point charges. Or maybe electrically charged magnetic valleys. (I’m probably going to Physics Hell for that.)


The point is that they draw you towards them, and it’s hard to resist their pull.

Also, Attractors have an interesting property: Once you’re being pulled in by one, this actually modifies other Attractors. This usually manifests by changing how strongly other ones are pulling you in. Sometimes, though, this even means that some Attractors will disappear, and new ones may appear.


As a human, your goal is to navigate this tangle of Utilons and Attractors from your hamster ball, trying to collect Utilons.


Now you could just try to take a direct path to all the nearest Utilons, but that would mean exerting a lot of energy to fight the pull of Attractors that pull you in Utilon-sparse directions.

Instead, given that you can’t avoid Attractors (they’re everywhere!) and that you want to get as many Utilons as possible, the best thing to do seems to be to strategically choose which Attractors you’re drawn to and selectively choose when to exert energy to move from one to another to maximize your overall trajectory.
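The metaphor above can be loosely simulated. Here is a toy sketch (my own illustration, not part of the post): a ball doing gradient descent on a double-well landscape gets captured by the nearest Attractor unless the agent spends extra "effort" to climb out of its pull.

```python
# Toy landscape with two basins (Attractors) near x = -1 and x = +1.
# V(x) = (x^2 - 1)^2 is a standard double-well potential.
def dV(x):
    """Slope of the landscape; the pull on the ball points downhill."""
    return 4 * x * (x**2 - 1)

def settle(x, effort=0.0, steps=2000, lr=0.01):
    """Follow the pull of the landscape (gradient descent), with an
    optional constant rightward 'effort' the agent chooses to exert."""
    for _ in range(steps):
        x -= lr * (dV(x) - effort)
    return x

# Starting slightly left of center, the ball is captured by the left basin:
left = settle(-0.1)
assert left < -0.9

# With enough deliberate effort, it escapes into the right basin instead:
right = settle(-0.1, effort=1.0)
assert right > 0.9
```

The point of the sketch is the trade-off in the text: you cannot abolish the pull, but with limited energy you can pick which basin you end up circling.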



The Model:

Global Optimization:

In the above metaphor, actions and situations serve as Attractors, which are like slippery slopes that pull you in. Your agency is represented by the “meta-human” that inhabits the ball, which has some limited control when it comes to choosing which Attractor-loops to dive into and which ones to pop out of.

So the default view of humans and decisions seems to be something like viewing actions as time-chunks that we can just slot into our schedule. Attractor Theory attempts to present a model that moves away from that and shifts our intuitions to:

a) think less about our actions in a vacuum / individually

b) consider starting / stopping costs more

c) see our preferences in a more mutable light

It’s my hope that thinking about actions as “things that draw you in” can better improve our intuitions about global optimization:

My point here is that, phenomenologically, it feels like our actions change the sorts of things we might want. Every time we take an action, this will, in turn, prime how we view other actions, often in somewhat predictable ways. I might not know exactly how they’ll change, but we can get good, rough ideas from past experience and our imaginations.

For example, the set of things that feel desirable to me after running a marathon may differ greatly from the set of things after I read a book on governmental corruption.

(I may still have core values, like wanting everyone to be happy, which I place higher up in my sense of self, which aren’t affected by these, but I’m mainly focusing on how object-level actions feel for this discussion. There’s a longer decision-theoretic discussion here that I’ll save for a later post.)

When you start seeing your actions in terms of not just their direct effects, but also their effects on what further actions you can take, I think this is useful. It changes your decision algorithm to be something like:

“Choose actions such that the meta-level effects of my taking them allow me to take more actions of this type in the future and maximize the number of Utilons I can earn in the long run.”

By phrasing it this way, it becomes clearer that most things in life are a longer-term endeavor that involves trying to optimize globally, rather than locally. It also provides a model for evaluating actions on a new axis — the extent to which an action influences your future, which seems like an important thing to consider.

(While it’s arguable that a naive view of maximization should by default take this into account from a consequentialist lens, I think making it explicitly clear, as the above formulation does, is a useful distinction.)

This allows us to better evaluate actions which, by themselves, might not be too useful, but do a good job of reorienting ourselves into a better state of mind. For example, spending a few minutes outside to get some air might not be directly useful, but it’ll likely help clear my mind, which has good benefits down the line.

Along the same lines, you want to view actions not as one-time deals, but as a sort of process that actively changes how you perceive other actions. In fact, these effects should sometimes be as important a consideration as time or effort when looking at a task.
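The decision rule quoted above can be sketched as a toy lookahead planner. Everything here is made up for illustration (the states, action names, and Utilon payoffs are hypothetical): each action yields some Utilons now and changes which state you land in next, i.e. which actions will feel easy afterwards.

```python
# Hypothetical states and payoffs: (utilons_now, next_state) per action.
actions = {
    "focused":    {"deep_work": (5, "tired"), "browse": (1, "distracted")},
    "tired":      {"break_outside": (0, "focused"), "browse": (1, "distracted")},
    "distracted": {"break_outside": (0, "focused"), "browse": (1, "distracted")},
}

def best_plan(state, horizon):
    """Maximize total Utilons over `horizon` steps, accounting for how
    each action changes the next state (the 'meta-level effects')."""
    if horizon == 0:
        return 0, []
    best_total, best_seq = float("-inf"), []
    for name, (utilons, nxt) in actions[state].items():
        future, seq = best_plan(nxt, horizon - 1)
        if utilons + future > best_total:
            best_total, best_seq = utilons + future, [name] + seq
    return best_total, best_seq

def myopic_plan(state, horizon):
    """Greedily grab the most Utilons each step, ignoring meta-level effects."""
    total, seq = 0, []
    for _ in range(horizon):
        name, (utilons, state) = max(actions[state].items(),
                                     key=lambda kv: kv[1][0])
        total, seq = total + utilons, seq + [name]
    return total, seq

lookahead_total, lookahead_seq = best_plan("tired", 4)
greedy_total, greedy_seq = myopic_plan("tired", 4)

# The lookahead planner takes the "worthless" break first because of where
# it leads; the greedy planner just browses and collects less overall.
assert lookahead_seq[0] == "break_outside"
assert lookahead_total > greedy_total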

Going Upstream:

Attractor Theory also conceptually models the idea of precommitment:

Humans often face situations where we fall prey to “in the moment” urges, which soon turn to regret. These are known as time-inconsistent preferences, where what we want quickly shifts, often because we are in the presence of something that really tempts us.

An example of this is the dieter who proclaims “I’ll just give in a little today” when seeing a delicious cake on the restaurant menu, and then feels “I wish I hadn’t done that” right after gorging themselves.

Precommitment is the idea that you can often “lock-in” your choices beforehand, such that you will literally be unable to give into temptation when the actual choice comes before you, or entirely avoid the opportunity to even face the choice.

An example from the above would be something like having a trustworthy friend bring food over instead of eating out, so you can’t stuff yourself on cake because you weren’t even the one who ordered food.

There seems to be a general principle here of going “upstream”: targeting the places where you have the most control, so that you can improve your experiences later down the line. This seems to be a useful idea, whether the question is about finding leverage or exercising self-control.

Attractor Theory views all actions and situations as self-reinforcing slippery slopes. As such, it more realistically models the act of taking certain actions as leading you to other Attractors, so you’re not just looking at things in isolation.

In this model, we can reasonably predict, for example, that any video on YouTube will likely lead to more videos because the “sucked-in-craving-more-videos Future You” will have different preferences than “needing-some-sort-of-break Present You”.

This view allows you to better see certain “traps”, where an action will lead you deeper and deeper down an addiction/reward cycle, like a huge bag of chips or a webcomic. These are situations where, after the initial buy-in, it becomes incredibly attractive to continue down the same path, as these actions reinforce themselves, making it easy to continue on and on…

Under the Attractor metaphor, your goal, then, is to focus on finding ways of being drawn to certain actions and avoiding others. You want to find ways to avoid specific actions which could lead you down bad spirals, even if the initial actions themselves may not be that distracting.

The result is chaining together actions and their effects on how you perceive things in an upstream way, like precommitment.

Exploring, Starting, and Stopping:

Local optima are also visually represented by this model: we can get caught in certain chains of actions that do a good job of netting Utilons. Similar to the traps above, it can be hard to try new things once we’ve already found an effective route.

Chances are, though, that there’s probably even more Utilons to be had elsewhere. In which case, being able to break out to explore new areas could be useful.

Attractor Theory also does a good job of modeling how actions seem much harder to start than to stop. Moving from one Attractor to a disparate one can be costly in terms of energy, as you need to move against the pull of the current Attractor.

Once you’re pulled in, though, it’s usually easier to keep going with the flow. So this model ascribes costs to starting actions and places less of a cost on continuing them.

By “pulled in”, I mean making it feel effortless or desirable to continue with the action. I’m thinking of the feeling you get when you have a decent album playing, and you feel sort of tempted to switch it to a better album, except that, given that this good song is already playing, you don’t really feel like switching.

Given the costs of switching, you want to invest your efforts and agency, perhaps not into always choosing the immediate Utilon-maximizing action moment-by-moment, but into choosing the actions / situations whose Attractors pull you in desirable directions, or make other desirable paths easier to take.


Summary and Usefulness:

Attractor Theory attempts to retain willpower as a coherent idea, while also hopefully more realistically modeling how actions can affect our preferences with regards to other actions.

It can serve as an additional intuition pump behind using willpower in certain situations. Thinking about “activation energy” in terms of putting in some energy to slide into positive Attractors removes the mental block I’ve recently had on using willpower. (I’d been stuck in the “motivation should come from internal cooperation” mindset.)

Looking at how Attractors modify the way other Attractors affect us provides a clearer mental image of why you might want to precommit to avoiding certain actions.

For example, when thinking about taking breaks, I now think about which actions can help me relax without strongly modifying my preferences. This makes things like going outside, eating a snack, or drawing far better break-time activities than playing an MMO or watching Netflix.

This is because the latter are powerful self-reinforcing Attractors that also pull me in more reward-seeking directions, which might distract me from the task at hand. The former activities can also serve as breaks, but they don’t do much to alter your preferences, and thus help keep you focused.

I see Attractor Theory as being useful when it comes to thinking upstream and providing an alternative view of motivation that isn’t exactly internally based.

Hopefully, this model can be useful when you look at your schedule, to identify where potential choke-points / bottlenecks can arise as a result of factors you hadn’t previously considered when evaluating actions.


Caveats:

Attractor Theory assumes that different things can feel desirable depending on the situation. It relinquishes some agency by assuming that you can’t always choose what you “want” because of external changes to how you perceive actions. It also doesn’t try to explain internal disagreements, so it’s still largely at odds with the Internal Double Crux model.

I think this is fine. The goal here isn’t exactly to create a wholly complete prescriptive or descriptive model. Rather, it’s an attempt to distill a simplified model of humans, behavior, and motivation into a concise, appealing form your intuitions can crystallize around, similar to the System 1 and System 2 distinction.

I admit that if you tend to use an alternate ontology when it comes to viewing how your actions relate to the concept of “you”, this model might be less useful. I think that’s also fine.

This is not an attempt to capture all of the nuances / considerations in decision-making. It’s simply an attempt to partially take a few pieces and put them together in a more coherent framework. Attractor Theory merely takes a few pieces that I’d previously had as disparate nodes and chunks them together into a more unified model of how we think about doing things.

Comment author: Benquo 16 March 2017 02:22:11AM 4 points

I think I mean the same thing you mean by "real beliefs, rather than, say, belief-in-belief". So, I'm saying, it's not confirmation bias that causes the good thing, it's sincerity that makes the confirmation bias comparatively harmless.

Comment author: lifelonglearner 16 March 2017 03:57:03AM 0 points

Gotcha, thanks.

Comment author: WalterL 15 March 2017 07:52:05PM 1 point

Real belief is actually moderately rare. People don't generally believe stuff anymore that they might get laughed at about. Find one person who believes something they didn't read on wikipedia and it's a weird week.

Comment author: lifelonglearner 16 March 2017 12:59:38AM 0 points

I grant that most people may not hold too many real beliefs, in the normal sense of the word, but is this also generally true of scientists who are conducting studies? It feels like you'd need to believe that X was true in order to run a study in the first place?

Or are we assuming that most scientists are just running off weakly held beliefs and just "doing things"?

(I really don't know much about what the field might be like.)

Comment author: Benquo 15 March 2017 08:14:36PM 1 point

It's sincerity that causes this sort of behavior.

Comment author: lifelonglearner 16 March 2017 12:57:39AM 1 point

I'm unsure I have a good internal picture of what sincerity is pointing at. Does being sincere differ much from "truly, actually, super-duper, very much so" believing in something?

Comment author: lifelonglearner 15 March 2017 02:24:13PM 0 points

The title says that sufficiently sincere confirmation bias is indistinguishable from real science. But I don't see how this differs too much from real science (the attitude of the NYU people versus scientists).

You say:

What made this work? I think what happened is that they took their own beliefs literally. They actually believed that people hated Hillary because she was a woman, and so their idea of something that they were confident would show this clearly was a fair test.

I'm a little confused. Isn't this just saying that these people held real beliefs, rather than, say, belief-in-belief? So when contrary evidence appeared, they were able to change their mind?

I dunno; I feel not super convinced that it's confirmation bias which causes this sort of good epistemic behavior? (As in, I wouldn't expect this sort of thing to happen much in this sort of situation, and this is maybe unique?)
