
LW UI issue

14 gworley 24 March 2017 06:08PM

Not really sure where else I might post this, but there seems to be a UI issue on the site. When I hit the homepage of lesswrong.com while logged in I no longer see the user sidebar or the header links for Main and Discussion. This is kind of annoying because I have to click into an article first to get to a page where I can access those things. Would be nice to have them back on the front page.

Act into Uncertainty

4 lifelonglearner 24 March 2017 09:28PM

It’s only been recently that I’ve been thinking about epistemics in the context of figuring out my behavior and debiasing. Aside from trying to figure out how I actually behave (as opposed to what I merely profess I believe), I’ve been thinking about how to confront uncertainty—and what it feels like.

 

For many areas of life, I think we shy away from confronting uncertainty and instead flee into the comforting non-falsifiability of vagueness.


Consider these examples:


1) You want to get things done today. You know that writing things down can help you finish more things. However, it feels aversive to write down what you specifically want to do. So you don't write anything down, and instead just keep a hazy notion of "I will do things today".


2) You try to make a confidence interval for a prediction where money is on the line. You notice yourself feeling uncomfortable no matter what your bounds are; it feels bad to set down any number at all, and doing so comes with a dreadful feeling of finality.


3) You’re trying to find solutions to a complex, entangled problem. Coming up with specific solutions feels bad because none of them seem to completely solve the problem. So instead you decide to create a meta-framework that produces solutions, or argue in favor of some abstract process like a “democratized system that focuses on holistic workarounds”.


In each of the above examples, it feels like we move away from making specific claims because that opens us up to specific criticism. But instead of working to strengthen specific claims, we retreat to fuzzily-defined notions that allow us to absorb any criticism without having to really update.


I think there's a sense in which, in some areas of life, we embrace shoddy epistemology (e.g. not wanting to validate or falsify our beliefs) because we fear failing, or fear having to put in the effort to update. I think this fear is what fuels the feeling of aversion.


It seems useful to face this feeling of badness or aversion with the understanding that this is what confronting uncertainty feels like. The best action doesn’t always feel comfortable and easy; it can just as easily feel aversive and final.


Look for situations where you might be flinching away from making specific claims and replacing them with vacuous claims that are compatible with any evidence you might see.


If you never put your beliefs to the test with specific claims, then you can never verify them in the real world. And if your beliefs don’t map well onto the real world, they don’t seem very useful to even have in the first place.

Musical setting of the Litany of Tarski

11 komponisto 23 March 2017 11:18AM

About a year ago, I made a setting of the Litany of Tarski for four-part a cappella (i.e. unaccompanied) chorus.

More recently, in the process of experimenting with MuseScore for potential use in explaining musical matters on the internet (it makes online sharing of playback-able scores very easy), the thought occurred to me that perhaps the Tarski piece might be of interest to some LW readers (if no one else!), so I went ahead and re-typeset it in MuseScore for your delectation. 

Here it is (properly notated :-)).

Here it is (alternate version designed to avoid freaking out those who aren't quite the fanatical enthusiasts of musical notation that I am).

Making equilibrium CDT into FDT in one+ easy step

6 Stuart_Armstrong 21 March 2017 02:42PM

In this post, I'll argue that Joyce's equilibrium CDT (eCDT) can be made into FDT (functional decision theory) with the addition of an intermediate step - a step that should have no causal consequences. This would show that eCDT is unstable under causally irrelevant changes, and is in fact a partial version of FDT.

Joyce's principle is:

Full Information. You should act on your time-t utility assessments only if those assessments are based on beliefs that incorporate all the evidence that is both freely available to you at t and relevant to the question about what your acts are likely to cause.

When confronted by a problem with a predictor (such as Death in Damascus or the Newcomb problem), this allows the eCDT agent to recursively update its probabilities for the predictor's behaviour, based on its estimates of its own actions, until the process reaches equilibrium. This allows it to behave like FDT/UDT/TDT on some (but not all) problems. I'll argue that you can modify the setup to make eCDT into a full FDT.

 

Death in Damascus

In this problem, Death has predicted whether the agent will stay in Damascus (S) tomorrow, or flee to Aleppo (F). And Death has promised to be in the same city as the agent (D or A), to kill them. Having made its prediction, Death then travels to that city to wait for the agent. Death is known to be a perfect predictor, and the agent values survival at $1000, while fleeing costs $1.

Then eCDT recommends fleeing to Aleppo with probability 999/2000. To check this, let x be the probability of fleeing to Aleppo (F), and y the probability of Death being there (A). The expected utility is then

  • 1000(x(1-y)+(1-x)y)-x                                                    (1)

Differentiating this with respect to x gives 999-2000y, which is zero for y=999/2000. Since Death is a perfect predictor, y=x and eCDT's expected utility is 499.5.

The true expected utility, however, is -999/2000, since Death will get the agent anyway, and the only cost is the trip to Aleppo.
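To make the numbers concrete, here is a minimal sympy sketch (my own illustration, not part of the original post) that reproduces eCDT's reasoning on equation (1), differentiating in x with y held fixed and then imposing y = x, and compares the utility eCDT computes with the true expected utility against a perfect predictor:

    from sympy import symbols, diff, solve

    x, y = symbols('x y')

    # Equation (1): expected utility of fleeing with probability x
    # when Death is in Aleppo with probability y.
    eu = 1000 * (x * (1 - y) + (1 - x) * y) - x

    # eCDT: differentiate in x with y held fixed, then set y = x at equilibrium.
    y_eq = solve(diff(eu, x), y)[0]              # 999/2000
    ecdt_utility = eu.subs({x: y_eq, y: y_eq})   # 999/2 = 499.5

    # True expected utility against a perfect predictor: Death always catches
    # the agent, so the only cost is the expected $1 trip to Aleppo.
    true_utility = -y_eq                         # -999/2000

    print(y_eq, ecdt_utility, true_utility)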

 

Delegating randomness

The eCDT decision process seems rather peculiar. It seems to allow updating the value of y depending on the value of x - hence allowing acausal factors to be considered - but only in a narrow way. Specifically, it requires that the probability of F and A be equal, but that those two events remain independent. And it then differentiates utility according to the probability of F only, leaving that of A fixed. So, in a sense, x correlates with y, but small changes in x don't correlate with small changes in y.

That's somewhat unsatisfactory, so consider the problem now with an extra step. The eCDT agent no longer considers whether to stay or flee; instead, it outputs X, a value between 0 and 1. There is a uniform random process Z, also valued between 0 and 1. If Z<X, then the agent flees to Aleppo; if not, it stays in Damascus.

This seems identical to the original setup, for the agent. Instead of outputting a decision as to whether to flee or stay, it outputs the probability of fleeing. This has moved the randomness in the agent's decision from inside the agent to outside it, but this shouldn't make any causal difference, because the agent knows the distribution of Z.

Death remains a perfect predictor, which means that it can predict X and Z, and will move to Aleppo if and only if Z<X.

Now let the eCDT agent consider outputting X=x for some x. In that case, it updates its opinion of Death's behaviour, expecting that Death will be in Aleppo if and only if Z<x. Then it can calculate the expected utility of setting X=x, which is simply 0 (Death will always find the agent) minus x (the expected cost of fleeing to Aleppo), hence -x. Among the "pure" strategies, X=0 is clearly the best.

Now let's consider mixed strategies, where the eCDT agent can consider a distribution PX over values of X (this is a sort of second order randomness, since X and Z already give randomness over the decision to move to Aleppo). If we wanted the agent to remain consistent with the previous version, the agent then models Death as sampling from PX, independently of the agent. The probability of fleeing is just the expectation of PX; but the higher the variance of PX, the harder it is for Death to predict where the agent will go. The best option is as before: PX will set X=0 with probability 1001/2000, and X=1 with probability 999/2000.

But is this a fair way of estimating mixed strategies?

 

Average Death in Aleppo

Consider a weaker form of Death, Average Death. Average Death cannot predict X, but can predict PX, and will use that to determine its location, sampling independently from it. Then, from eCDT's perspective, the mixed-strategy behaviour described above is the correct way of dealing with Average Death.

But that means that the agent above is incapable of distinguishing between Death and Average Death. Joyce argues strongly for considering all the relevant information, and the distinction between Death and Average Death is relevant. Thus it seems that, when considering mixed strategies, the eCDT agent must instead look at the pure strategies, compute their value (-x in this case), and then look at the distribution over them.

One might object that this is no longer causal, but the whole equilibrium approach undermines the strictly causal aspect anyway. It feels daft to be allowed to update on Average Death predicting PX, but not on Death predicting X. Especially since moving from PX to X is simply some random process Z' that samples from the distribution PX. So Death is allowed to predict PX (which depends on the agent's reasoning) but not Z'. It's worse than that, in fact: Death can predict PX and Z', and the agent can know this, but the agent isn't allowed to make use of this knowledge.

Given all that, it seems that in this situation, the eCDT agent must be able to compute the mixed strategies correctly and realise (like FDT) that staying in Damascus (X=0 with certainty) is the right decision.

 

Let's recurse again, like we did last summer

This deals with Death, but not with Average Death. Ironically, the "X=0 with probability 1001/2000..." solution is not the correct solution for Average Death. To get that, we need to take equation (1), set x=y first, and then differentiate with respect to x. This gives x=1999/4000, so setting "X=0 with probability 2001/4000 and X=1 with probability 1999/4000" is actually the FDT solution for Average Death.
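For concreteness, here is a small sympy sketch (my own, not the post's) contrasting the two orders of operations on equation (1): differentiate-then-equate, which is eCDT's procedure, versus equate-then-differentiate, which gives the FDT answer for Average Death:

    from sympy import symbols, diff, solve

    x, y = symbols('x y')
    eu = 1000 * (x * (1 - y) + (1 - x) * y) - x   # equation (1)

    # Differentiate in x with y held fixed, then impose y = x:
    # eCDT's procedure, giving 999/2000.
    ecdt_x = solve(diff(eu, x), y)[0]

    # Impose y = x first, then differentiate in x:
    # the FDT answer for Average Death, giving 1999/4000.
    fdt_x = solve(diff(eu.subs(y, x), x), x)[0]

    print(ecdt_x, fdt_x)   # 999/2000  1999/4000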

And we can make the eCDT agent reach that. Simply recurse to the next level, and have the agent choose PX directly, via a distribution PPX over possible PX.

But these towers of recursion are clunky and unnecessary. It's simpler to state that eCDT is unstable under recursion, and that it's a partial version of FDT.

[Stub] Newcomb problem as a prisoners' dilemma/anti-coordination game

2 Stuart_Armstrong 21 March 2017 10:34AM

You should always cooperate with an identical copy of yourself in the prisoner's dilemma. This is obvious, because you and the copy will reach the same decision.

That justification implicitly assumes that you and your copy are (somewhat) antagonistic: that you have opposite aims. But the conclusion doesn't require that at all. Suppose that you and your copy were instead trying to ensure that one of you got maximal reward (it doesn't matter which). Then you should still jointly cooperate, because (C,C) is possible, while (C,D) and (D,C) are not (I'm ignoring randomising strategies for the moment).

Now look at the Newcomb problem. Your decision enters twice: once when you decide how many boxes to take, and once when Omega is simulating or estimating you to decide how much money to put in box B. You would dearly like your two "copies" (one of which may just be an estimate) to be out of sync - for the estimate to 1-box while the real you two-boxes. But without any way of distinguishing between the two, you're stuck with taking the same action - (1-box,1-box). Or, seeing it another way, (C,C).

This also makes the Newcomb problem into an anti-coordination game, where you and your copy/estimate try to pick different options. But, since this is not possible, you have to stick to the diagonal. This is why the Newcomb problem can be seen both as an anti-coordination game and a prisoners' dilemma - the differences only occur in the off-diagonal terms that can't be reached.
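As a toy illustration (my own sketch, assuming the standard $1,000,000 / $1,000 Newcomb payoffs, which the post doesn't specify), writing out the payoff matrix makes it clear why only the diagonal matters:

    # Rows: what the simulated/estimated "you" does (this fixes box B's contents).
    # Columns: what the real "you" does at the boxes.
    # Entries: the real you's payoff, using the standard $1,000,000 / $1,000
    # Newcomb values (assumed here purely for illustration).
    payoff = {
        ("1-box", "1-box"): 1_000_000,
        ("1-box", "2-box"): 1_001_000,   # off-diagonal: unreachable
        ("2-box", "1-box"): 0,           # off-diagonal: unreachable
        ("2-box", "2-box"): 1_000,
    }

    # With no way to distinguish the two copies, only the diagonal is available,
    # and on the diagonal 1-boxing wins.
    reachable = {k: v for k, v in payoff.items() if k[0] == k[1]}
    best = max(reachable, key=reachable.get)
    print(best, reachable[best])   # ('1-box', '1-box') 1000000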

[Error]: Statistical Death in Damascus

3 Stuart_Armstrong 20 March 2017 07:17PM

Note: This post is in error, I've put up a corrected version of it here. I'm leaving the text in place, as historical record. The source of the error is that I set Pa(S)=Pe(D) and then differentiated with respect to Pa(S), while I should have differentiated first and then set the two values to be the same.

Nate Soares and Ben Levinstein have a new paper out on "Functional Decision Theory", the most recent development of UDT and TDT.

It's good. Go read it.

This post further analyses the "Death in Damascus" problem, and shows that Joyce's "equilibrium" version of CDT (causal decision theory) is in a certain sense intermediate between CDT and FDT. If eCDT is this equilibrium theory, then it can deal with a certain class of predictors, which I'll call distribution predictors.

 

Death in Damascus

In the original Death in Damascus problem, Death is a perfect predictor. It finds you in Damascus, and says that it has already planned its trip for tomorrow - and it'll be in the same place you will be.

You value surviving at $1000, and can flee to Aleppo for $1.

Classical CDT will put some prior P over Death being in Damascus (D) or Aleppo (A) tomorrow. And then, if P(A)>999/2000, you should stay (S) in Damascus, while if P(A)<999/2000, you should flee (F) to Aleppo.

FDT estimates that Death will be wherever you will, and thus there's no point in F, as that will just cost you $1 for no reason.

But it's interesting what eCDT produces. This decision theory requires that Pe (the equilibrium probability of A and D) be consistent with the action distribution that eCDT computes. Let Pa(S) be the action probability of S. Since Death knows what you will do, Pa(S)=Pe(D).

The expected utility is 1000·Pa(S)Pe(A) + 1000·Pa(F)Pe(D) - Pa(F). At equilibrium, this is 2000·Pe(A)(1-Pe(A)) - Pe(A). And that quantity is maximised when Pe(A)=1999/4000 (and thus the probability of you fleeing is also 1999/4000).
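Bearing in mind the correction flagged at the top of the post, here is a quick sympy check (my own, not the author's) of the arithmetic within the Pa(S)=Pe(D) assumption used above:

    from sympy import symbols, diff, solve

    a = symbols('a')   # a = Pe(A), the equilibrium probability of Death being in Aleppo

    # With Pa(S) = Pe(D) substituted in, the expected utility above reduces to
    # 2000*Pe(A)*(1 - Pe(A)) - Pe(A).
    eu = 2000 * a * (1 - a) - a

    print(solve(diff(eu, a), a))   # [1999/4000]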

This is still the wrong decision, as paying the extra $1 is pointless, even if the agent doesn't pay it with certainty.

So far, nothing interesting: both CDT and eCDT fail. But consider the next example, on which eCDT does not fail.

 

Statistical Death in Damascus

Let's assume now that Death has an assistant, Statistical Death, that is not a perfect predictor, but is a perfect distribution predictor. It can predict the distribution of your actions, but not your actual decision. Essentially, you have access to a source of true randomness that it cannot predict.

It informs you that its probability over whether to be in Damascus or Aleppo will follow exactly the same distribution as yours.

Classical CDT follows the same reasoning as before. So does eCDT: Pa(S)=Pe(D) still holds, since Statistical Death follows the same distribution as you do.

But what about FDT? Well, note that FDT will reach the same conclusion as eCDT. This is because 1000·Pa(S)Pe(A) + 1000·Pa(F)Pe(D) - Pa(F) is the correct expected utility, the Pa(S)=Pe(D) assumption is correct for Statistical Death, and (S,F) is independent of (A,D) once the action probabilities have been fixed.

So on the Statistical Death problem, eCDT and FDT say the same thing.

 

Factored joint distributions versus full joint distributions

What's happening is that there is a joint distribution over (S,F) (your actions) and (D,A) (Death's actions). FDT is capable of reasoning over all types of joint distributions, and fully assessing how its choice of Pa acausally affects Death's choice of Pe.

But eCDT is only capable of reasoning over ones where the joint distribution factors into a distribution over (S,F) times a distribution over (D,A). Within the confines of that limitation, it is capable of (acausally) changing Pe via its choice of Pa.

Death in Damascus does not factor into two distributions, so eCDT fails on it. Statistical Death in Damascus does so factor, so eCDT succeeds on it. Thus eCDT seems to be best conceived of as a version of FDT that is strangely limited in terms of which joint distributions it's allowed to consider.
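To illustrate the distinction (a sketch of my own, not from the post), compare the expected utility computed under the full joint distribution for perfect Death, where the agent's action and Death's location are perfectly correlated, with the factored product distribution that eCDT is restricted to, using the same fleeing probability of 999/2000:

    from itertools import product

    def expected_utility(joint):
        """joint maps (agent, death) pairs to probabilities; S/F = stay/flee, D/A = Damascus/Aleppo."""
        eu = 0.0
        for (agent, death), p in joint.items():
            survive = 1000 if (agent, death) in {("S", "A"), ("F", "D")} else 0
            cost = 1 if agent == "F" else 0
            eu += p * (survive - cost)
        return eu

    p_flee = 999 / 2000

    # Full joint for perfect Death: its location is perfectly correlated with the action.
    full_joint = {("S", "D"): 1 - p_flee, ("F", "A"): p_flee,
                  ("S", "A"): 0.0, ("F", "D"): 0.0}

    # Factored joint (all that eCDT can reason over): action and location independent.
    factored = {(a, d): (p_flee if a == "F" else 1 - p_flee) *
                        (p_flee if d == "A" else 1 - p_flee)
                for a, d in product("SF", "DA")}

    print(expected_utility(full_joint))   # -0.4995: the true value against Death
    print(expected_utility(factored))     # ~499.5: eCDT's inflated estimate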

Announcing: The great bridge

8 Elo 17 March 2017 01:11AM

Original post: http://bearlamp.com.au/announcing-the-great-bridge-between-communities/


In the deep dark lurks of the internet, several proactive lesswrong and diaspora leaders have been meeting each day.  If we could have cloaks and silly hats, we would.

We have been discussing the great diversification, and noticed some major hubs starting to pop up.  The ones that have been working together include:

  • Lesswrong slack
  • SlateStarCodex Discord
  • Reddit/Rational Discord
  • Lesswrong Discord
  • Exegesis (unofficial rationalist tumblr)

The ones that we hope to bring together in the future include (on the willingness of those servers):

  • Lesswrong IRC (led by Gwern)
  • Slate Star Codex IRC
  • AGI slack
  • Transhumanism Discord
  • Artificial Intelligence Discord

How will this work?

About a year ago, the lesswrong slack tried to bridge across to the lesswrong IRC.  That was bad.  From that experience we learnt a lot about what can go wrong, and have worked out how to avoid those mistakes.  So here is the general setup.

Each server currently has its own set of channels, each with their own style of talking and addressing problems, and sharing details and engaging with each other.  We definitely don't want to do anything that will harm those existing cultures.  In light of this, taking the main channel from one server and mashing it into the main channel of another server is going to reincarnate into HELL ON EARTH, and generally leave both sides with the sentiment that "<the other side> is wrecking up <our> beautiful paradise".  Some servers may have a low volume buzz at all times, other servers may become active for bursts; it's not good to try to marry those things.

Logistics:

Room: Lesswrong-Slack-Open

Bridged to:

  • exegesis#lwslack_bridge
  • Discord-Lesswrong#lw_slack_main
  • R/rational#lw_slack_open
  • SSC#bridge_slack

I am in <exegesis, D/LW, R/R, SSC> what does this mean?

If you want to peek into the lesswrong slack and see what happens in their #open channel, you can join or unmute your respective channel and listen in, or contribute (two-way relay) to their chat.  Obviously if everyone does this at once we end up spamming the other chat, and probably after a week we cut the bridge off because it didn't work.  So while it's favourable to increase the community, be mindful of what goes on across the divide and try not to anger our friends.

I am in Lesswrong-Slack, what does this mean?

We have new friends!  Posts in #open will be relayed to all 4 child rooms, where others can contribute if they choose.  Mostly they have their own servers to chat on, and if they are not on an info-diet already, then maybe they should be.  We don't anticipate invasion or noise.

Why do they get to see our server and we don't get to see them?

So glad you asked - we do.  There is an identical setup for their server into our bridge channels.  In fact the whole diagram looks something like this:

Server              Main channel  Slack-Lesswrong         Discord-Exegesis   Discord-Lesswrong  Discord-r/rational     Discord-SSC
Slack-Lesswrong     Open          -                       lwslack_bridge     lw_slack_main      lw_slack_open          bridge_slack
Discord-Exegesis    Main          #bridge_rat_tumblr      -                  exegesis_main      exegesis_rattumb_main  bridge_exegesis
Discord-Lesswrong   Main          #Bridge_discord_lw      lwdiscord_bridge   -                  lw_discord_main        bridge_lw_disc
Discord-r/rational  General       #bridge_r-rational_dis  rrdiscord_bridge   reddirati_main     -                      bridge_r_rational
Discord-SSC         General       #bridge_ssc_discord     sscdiscord_bridge  ssc_main           ssc_discord_gen        -

Pretty, right?  No, it's not.  But that's all in the backend.

For extra clarification, the rows are the channels that are linked.  Which is to say that Discord-SSC is linked to a child channel in each of the other servers.  The last thing we want to do is impact these existing channels in a negative way.

But what if we don't want to share our open and we just want to see the other side's open?  (/our talk is private, what about confidentiality and security?)

Oh, you mean like the prisoner's dilemma?  Where you can defect (not share) and still be rewarded (get to see other servers).  Yes, it's a problem.  It tends to be that when one group defects, others also defect.  There is a chance that the bridge doesn't work.  That this all slides, and we do spam each other, and we end up giving up on the whole project.  If it weren't worth taking the risk we wouldn't have tried.

We have not rushed into this bridge thing, we have been talking about it calmly and slowly and patiently for what seems like forever.  We are all excited to be taking a leap, and keen to see it take off.

Yes, security is a valid concern, walled gardens being bridged into is a valid concern, we are trying our best.  We are just as hesitant as you, and being very careful about the process.  We want to get it right.

So if I am in <server1> and I want to talk to <server3> I can just post in the <bridge-to-server2> room and have the message relayed around to server 3 right?

Whilst that is correct, please don't do that.  You wouldn't like people relaying through your main to talk to other people.  Also, it's pretty silly: you can just post in your <server1> main and let other people see it if they want to.

This seems complicated, why not just have one room where everyone can go and hang out?

  1. How do you think we ended up with so many separate rooms?
  2. Why don't we all just leave <your-favourite server> and go to <that other server>?  It's not going to happen.

Why don't all you kids get off my lawn and stay in your own damn servers?

Thanks, grandpa.  No one is coming to invade; we all have our own servers and stuff to do.  We don't NEED to be on your lawn, but sometimes it's nice to know we have friends.

<server2> shitposted our server, what do we do now?

This is why we have mods, why we have mute, and why we have ban.  It might happen, but here's a deal: don't shit on other people and they won't shit on you.  Also, if asked nicely to leave people alone, please leave people alone.  Remember anyone can tap out of any discussion at any time.

I need a picture to understand all this.

Great!  Friends on exegesis made one for us.


Who are our new friends:

Lesswrong Slack

The lesswrong slack has been active since 2015, and has a core community. The slack has 50 channels for various conversations on specific topics; the #open channel is for general topics, and all kinds of interesting discoveries are shared there.

Discord-Exegesis (private, entry via tumblr)

Exegesis is a discord set up by a tumblr rationalist for all his friends (not just rats). It took off so well and became such a hive in such a short time that it's now a regular hub.

Discord-Lesswrong

Following Exegesis's growth, a discord was set up for lesswrong, it's not as active yet, but has the advantage of a low barrier to entry and it's filled with lesswrongers.

Discord-SSC

Scott posted a link to the SSC discord on an open thread, and it now holds activity from users who hail from the SSC comment section. It probably has more conversation about politics than the other servers, but also covers every topic relevant to his subscribers.

Discord-r/rational

The reddit r/rational discord grew from the rationality and rational fiction subreddit; it's quite busy and covers all topics.


As of the publishing of this post, the bridge is not live, but will go live when we flip the switch.


Meta: this took 1 hour to write (actual time writing), and halfway through I had to stop and have a voice conference about it with the channels we were bridging.

Cross posted to lesswrong: http://lesswrong.com/lw/oqz

 

Attractor Theory: A Model of Minds and Motivation

4 lifelonglearner 18 March 2017 03:12AM


[Epistemic status: Moderately strong. Attractor Theory is a model based on the well-researched concept of time-inconsistent preferences combined with anecdotal evidence that extends the theory to how actions affect our preferences in general. See the Caveats at the end for a longer discussion on what this model is and isn’t.]

<Cross-posted from mindlevelup>


Introduction:

I've been thinking about minds and motivation somewhat on/off for about a year now, and I think I now have a model that merges some related ideas together into something useful. The model is called Attractor Theory, and it brings together ideas from Optimizing Your Mindstate, behavioral economics, and flow.

Attractor Theory is my attempt to provide a way of looking at the world that hybridizes ideas from the Resolve paradigm (where humans Actually Try and exert their will) and the “click-whirr” paradigm (where humans are driven by “if-then” loops and proceduralized habits).

As a brief summary, Attractor Theory basically states that you should consider any action you take as being easier to continue than to start, as well as having meta-level effects on changing your perception of which actions feel desirable.


The Metaphor:

Here’s a metaphor that provides most of the intuitions behind Attractor Theory:

Imagine that you are in a hamster ball:


As a human inside this ball, you can kinda roll around by exerting energy. But it’s hard to do so all of the time — you’d likely get tired. Still, if you really wanted to, you could push the ball and move.


These are Utilons. They represent productivity hours, lives saved, HPMOR fanfictions written, or anything else you care about maximizing. You are trying to roll around and collect as many Utilons as possible.


But the terrain isn’t actually smooth. Instead, there are all these Attractors that pull you towards them. Attractors are like valleys, or magnets, or point charges. Or maybe electrically charged magnetic valleys. (I’m probably going to Physics Hell for that.)


The point is that they draw you towards them, and it’s hard to resist their pull.

Also, Attractors have an interesting property: Once you’re being pulled in by one, this actually modifies other Attractors. This usually manifests by changing how strongly other ones are pulling you in. Sometimes, though, this even means that some Attractors will disappear, and new ones may appear.


As a human, your goal is to navigate this tangle of Utilons and Attractors from your hamster ball, trying to collect Utilons.


Now you could just try to take a direct path to all the nearest Utilons, but that would mean exerting a lot of energy to fight the pull of Attractors that pull you in Utilon-sparse directions.

Instead, given that you can’t avoid Attractors (they’re everywhere!) and that you want to get as many Utilons as possible, the best thing to do seems to be to strategically choose which Attractors you’re drawn to and selectively choose when to exert energy to move from one to another to maximize your overall trajectory.



The Model:

Global Optimization:

In the above metaphor, actions and situations serve as Attractors, which are like slippery slopes that pull you in. Your agency is represented by the “meta-human” that inhabits the ball, which has some limited control when it comes to choosing which Attractor-loops to dive into and which ones to pop out of.

So the default view of humans and decisions seems to be something like viewing actions as time-chunks that we can just slot into our schedule. Attractor Theory attempts to present a model that moves away from that and shifts our intuitions to:

a) think less about our actions in a vacuum / individually

b) consider starting / stopping costs more

c) see our preferences in a more mutable light

It's my hope that thinking about actions as "things that draw you in" can improve our intuitions about global optimization:

My point here is that, phenomenologically, it feels like our actions change the sorts of things we might want. Every time we take an action, this will, in turn, prime how we view other actions, often in somewhat predictable ways. I might not know exactly how they’ll change, but we can get good, rough ideas from past experience and our imaginations.

For example, the set of things that feel desirable to me after running a marathon may differ greatly from the set of things after I read a book on governmental corruption.

(I may still have core values, like wanting everyone to be happy, which I place higher up in my sense of self, which aren’t affected by these, but I’m mainly focusing on how object-level actions feel for this discussion. There’s a longer decision-theoretic discussion here that I’ll save for a later post.)

When you start seeing your actions in terms of, not just their direct effects, but also their effects on how you can take further actions, I think this is useful. It changes your decision algorithm to be something like:

“Choose actions such that their meta-level effects on me by my taking them allow me to take more actions of this type in the future and maximize the number of Utilons I can earn in the long run.”

By phrasing it this way, it becomes clearer that most things in life are a longer-term endeavor that involves trying to optimize globally, rather than locally. It also provides a model for evaluating actions on a new axis — the extent to which they influence your future, which seems like an important thing to consider.

(While it’s arguable that a naive view of maximization should by default take this into account from a consequentialist lens, I think making it explicitly clear, as the above formulation does, is a useful distinction.)

This allows us to better evaluate actions which, by themselves, might not be too useful, but do a good job of reorienting ourselves into a better state of mind. For example, spending a few minutes outside to get some air might not be directly useful, but it’ll likely help clear my mind, which has good benefits down the line.

Along the same lines, you want to view actions not as one-time deals, but as a sort of process that actively changes how you perceive other actions. In fact, these effects should sometimes perhaps be as important a consideration as time or effort when looking at a task.

Going Upstream:

Attractor Theory also conceptually models the idea of precommitment:

Humans often face situations where we fall prey to “in the moment” urges, which soon turn to regret. These are known as time-inconsistent preferences, where what we want quickly shifts, often because we are in the presence of something that really tempts us.

An example of this is the dieter who proclaims "I'll just give in a little today" when seeing a delicious cake on the restaurant menu, and then feels "I wish I hadn't done that" right after gorging themselves.

Precommitment is the idea that you can often “lock-in” your choices beforehand, such that you will literally be unable to give into temptation when the actual choice comes before you, or entirely avoid the opportunity to even face the choice.

An example from the above would be something like having a trustworthy friend bring food over instead of eating out, so you can’t stuff yourself on cake because you weren’t even the one who ordered food.

There seems to be a general principle here of going "upstream", targeting the places where you have the most control so that you can improve your experiences later down the line. This seems to be a useful idea, whether the question is about finding leverage or self-control.

Attractor Theory views all actions and situations as self-reinforcing slippery slopes. As such, it more realistically models the act of taking certain actions as leading you to other Attractors, so you’re not just looking at things in isolation.

In this model, we can reasonably predict, for example, that any video on YouTube will likely lead to more videos because the “sucked-in-craving-more-videos Future You” will have different preferences than “needing-some-sort-of-break Present You”.

This view allows you to better see certain "traps", where an action will lead you deeper and deeper down an addiction/reward cycle, like a huge bag of chips or a webcomic. These are situations where, after the initial buy-in, it becomes incredibly attractive to continue down the same path, as these actions reinforce themselves, making it easy to continue on and on…

Under the Attractor metaphor, your goal, then, is to focus on finding ways of being drawn to certain actions and avoiding others. You want to find ways to avoid specific actions which could lead you down bad spirals, even if the initial actions themselves may not be that distracting.

The result is chaining together actions and their effects on how you perceive things in an upstream way, like precommitment.

Exploring, Starting, and Stopping:

Local optima are also visually represented by this model: we can get caught in certain chains of actions that do a good job of netting Utilons. Similar to the above traps, it can be hard to try new things once we've found an effective route already.

Chances are, though, that there’s probably even more Utilons to be had elsewhere. In which case, being able to break out to explore new areas could be useful.

Attractor Theory also does a good job of modeling how actions seem much harder to start than to continue. Moving from one Attractor to a disparate one can be costly in terms of energy, as you need to move against the pull of the current Attractor.

Once you’re pulled in, though, it’s usually easier to keep going with the flow. So using this model ascribes costs to starting and places less of a cost on continuing actions.

By “pulled in”, I mean making it feel effortless or desirable to continue with the action. I’m thinking of the feeling you get when you have a decent album playing music, and you feel sort of tempted to switch it to a better album, except that, given that this good song is already playing, you don’t really feel like switching.

Given the costs of switching, you want to invest your efforts and agency into choosing, perhaps not the immediately Utilon-maximizing action moment-by-moment, but the actions / situations whose Attractors pull you in desirable directions, or make it such that other desirable paths are now easier to take.


Summary and Usefulness:

Attractor Theory attempts to retain willpower as a coherent idea, while also hopefully more realistically modeling how actions can affect our preferences with regards to other actions.

It can serve as an additional intuition pump behind using willpower in certain situations. Thinking about “activation energy” in terms of putting in some energy to slide into positive Attractors removes the mental block I’ve recently had on using willpower. (I’d been stuck in the “motivation should come from internal cooperation” mindset.)

The meta-level consideration of how Attractors affect how other Attractors affect us provides a clearer mental image of why you might want to precommit to avoid certain actions.

For example, when thinking about taking breaks, I now think about which actions can help me relax without strongly modifying my preferences. This means things like going outside, eating a snack, and drawing as far better break-time activities than playing an MMO or watching Netflix.

This is because the latter are powerful self-reinforcing Attractors that also pull me towards more reward-seeking directions, which might distract me from my task at hand. The former activities can also serve as breaks, but they don’t do much to alter your preferences, and thus, help keep you focused.

I see Attractor Theory as being useful when it comes to thinking upstream and providing an alternative view of motivation that isn’t exactly internally based.

Hopefully, this model can be useful when you look at your schedule to identify potential choke-points / bottlenecks that can arise from factors you hadn't previously considered when evaluating actions.


Caveats:

Attractor Theory assumes that different things can feel desirable depending on the situation. It relinquishes some agency by assuming that you can’t always choose what you “want” because of external changes to how you perceive actions. It also doesn’t try to explain internal disagreements, so it’s still largely at odds with the Internal Double Crux model.

I think this is fine. The goal here isn't exactly to create a wholly complete prescriptive model or a descriptive one. Rather, it's an attempt to distill a simplified model of humans, behavior, and motivation into a concise, appealing form that your intuitions can crystallize around, similar to the System 1 and System 2 distinction.

I admit that if you tend to use an alternate ontology when it comes to viewing how your actions relate to the concept of “you”, this model might be less useful. I think that’s also fine.

This is not an attempt to capture all of the nuances / considerations in decision-making. It’s simply an attempt to partially take a few pieces and put them together in a more coherent framework. Attractor Theory merely takes a few pieces that I’d previously had as disparate nodes and chunks them together into a more unified model of how we think about doing things.

I Want To Live In A Baugruppe

43 Alicorn 17 March 2017 01:36AM

Rationalists like to live in group houses.  We are also as a subculture moving more and more into a child-having phase of our lives.  These things don't cooperate super well - I live in a four bedroom house because we like having roommates and guests, but if we have three kids and don't make them share we will in a few years have no spare rooms at all.  This is frustrating in part because amenable roommates are incredibly useful as alloparents if you value things like "going to the bathroom unaccompanied" and "eating food without being screamed at", neither of which are reasonable "get a friend to drive for ten minutes to spell me" situations.  Meanwhile there are also people we like living around who don't want to cohabit with a small child, which is completely reasonable, small children are not for everyone.

For this and other complaints ("househunting sucks", "I can't drive and need private space but want friends accessible", whatever) the ideal solution seems to be somewhere along the spectrum between "a street with a lot of rationalists living on it" (no rationalist-friendly entity controls all those houses and it's easy for minor fluctuations to wreck the intentional community thing) and "a dorm" (sorta hard to get access to those once you're out of college, usually not enough kitchens or space for adult life).  There's a name for a thing halfway between those, at least in German - "baugruppe" - buuuuut this would require community or sympathetic-individual control of a space and the money to convert it if it's not already baugruppe-shaped.

Maybe if I complain about this in public a millionaire will step forward or we'll be able to come up with a coherent enough vision to crowdfund it or something.  I think there is easily enough demand for a couple of ten-to-twenty-adult baugruppen (one in the east bay and one in the south bay) or even more/larger, if the structures materialized.  Here are some bulleted lists.

Desiderata:

  • Units that it is really easy for people to communicate across and flow between during the day - to my mind this would be ideally to the point where a family who had more kids than fit in their unit could move the older ones into a kid unit with some friends for permanent sleepover, but still easily supervise them.  The units can be smaller and more modular the more this desideratum is accomplished.
  • A pricing structure such that the gamut of rationalist financial situations (including but not limited to rent-payment-constraining things like "impoverished app academy student", "frugal Google engineer effective altruist", "NEET with a Patreon", "CfAR staffperson", "not-even-ramen-profitable entrepreneur", etc.) could live there.  One thing I really like about my house is that Spouse can pay for it himself and would by default anyway, and we can evaluate roommates solely on their charming company (or contribution to childcare) even if their financial situation is "no".  However, this does require some serious participation from people whose financial situation is "yes" and a way to balance the two so arbitrary numbers of charity cases don't bankrupt the project.
  • Variance in amenities suited to a mix of Soylent-eating restaurant-going takeout-ordering folks who only need a fridge and a microwave and maybe a dishwasher, and neighbors who are not that, ideally such that it's easy for the latter to feed neighbors as convenient.
  • Some arrangement to get repairs done, ideally some compromise between "you can't do anything to your living space, even paint your bedroom, because you don't own the place and the landlord doesn't trust you" and "you have to personally know how to fix a toilet".
  • I bet if this were pulled off at all it would be pretty easy to have car-sharing bundled in, like in Benton House That Was which had several people's personal cars more or less borrowable at will.  (Benton House That Was may be considered a sort of proof of concept of "20 rationalists living together" but I am imagining fewer bunk beds in the baugruppe.)  Other things that could be shared include longish-term storage and irregularly used appliances.
  • Dispute resolution plans and resident- and guest-vetting plans which thread the needle between "have to ask a dozen people before you let your brother crash on the couch, let alone a guest unit" and "cannot expel missing stairs".  I think there are some rationalist community Facebook groups that have medium-trust networks of the right caution level and experiment with ways to maintain them.

Obstacles:

  • Bikeshedding.  Not that it isn't reasonable to bikeshed a little about a would-be permanent community edifice that you can't benefit from or won't benefit from much unless it has X trait - I sympathize with this entirely - but too much from too many corners means no baugruppen go up at all even if everything goes well, and that's already dicey enough, so please think hard on how necessary it is for the place to be blue or whatever.
  • Location.  The only really viable place to do this for rationalist population critical mass is the Bay Area, which has, uh, problems, with new construction.  Existing structures are likely to be unsuited to the project both architecturally and zoningwise, although I would not be wholly pessimistic about one of those little two-story hotels with rooms that open to the outdoors or something like that.
  • Principal-agent problems.  I do not know how to build a dormpartment building and probably neither do you.
  • Community norm development with buy-in and a good match for typical conscientiousness levels even though we are rules-lawyery contrarians.

Please share this wherever rationalists may be looking; it's definitely the sort of thing better done with more eyes on it.

[Link] Automoderation System used by Columbus Rationality Community

7 J_Thomas_Moros 15 March 2017 01:18PM

View more: Next