Transcript since I find the above basically impossible to read (I have to go and do something else for a bit; will transcribe more when I'm done):
[note: I have not tried to e.g. turn underlining into italics etc.; this was enough effort as it was; nor does my spacing exactly match the original.]
----
Abram's Machine-Learning model of the benefits of meditation
(a synthesis of Zizian "fusion" and Shinzen's explanation of meditation, also inspired by some of the ideas in Kaj Sotala's "My attempt to explain Looking and enlightenment in non-mysterious terms" ... but this model is no substitute for those sources and does not summarize what they have to say)
note that I am not an experienced meditator; let that influence your judgement of the validity of what I have to say as it may.
(also heavily influenced by my CFAR level 2 workshop experience)
My immediate inspiration for postulating this model was noticing that after just a little meditation, tolerating cold or hot shower temperatures was much easier.
[picture: person in hot/cold shower]
I had previously been paying attention to what happens in my mind when I flinch away from too-hot or too-cold temperatures in the shower, as a way to pay attention to "thoughts which lead to action".
There are several reasons why it might be interesting to pay attention to thoughts which lead to action.
1. "Where's the steering wheel on this thing, anyway?" [picture: confusing car dashboard] If you're experiencing "motivational issues", then it stands to reason that it might be useful to keep an eye on which thoughts are leading to actions and which are not.
2. "Who [or what] is steering this thing?" [picture: car with various people in it] Far from being alone in a mysterious spacecraft, it is more like we are on a big road trip with lots of backseat driving and fighting for the wheel, if you buy the multiagent mind picture.
We often think as if we were unitary, and blame any failings of this picture on a somewhat mysterious limited resource called "willpower". I'm not implying willpower models are wrong exactly; I'm unsure of what is going on. But bear with me on the multiagent picture...
I think there is a tendency to gravitate toward narratives where an overarching self with coherent goals drives everything -- missing the extent to which we are driven by a variety of urges such as immediate comfort. So, I think it is interesting to watch oneself and look for what really drives actions. You don't often eat because eating is necessary for continuing proper function of body & brain in order to use them for broader goals; you eat because food tastes good / you're hungry / etc.
Well, maybe. You have to look for yourself. But, it seems easy to mistakenly rationalize goals as belonging to a coherent whole moreso than is the case.
Why would we be biased to think we are alone in an alien spaceship which we only partly know how to steer, when in fact we are fighting for the wheel in a crowded road-trip?
[picture: same car as before, loudmouth backseat driver circled]
Well, maybe it is because the only way the loudmouth (that is to say, consciousness) gets any respect around here is by maintaining the illusion of control. More on that later.
3. A third reason to be interested in "thoughts which lead to action" is that it is an agentless notion of decision.
Normally we think of a decision as made by an atomic agent which could have done one of several things: it chooses one, and does it. [picture: person labelled "agent" with "input" and "output" arrows, and "environment" outside] In reality, there is no solid boundary between an agent and its environment; no fixed interface with a well-defined set of actions which act across the interface.
[picture: brain, spinal cord, muscles, eyeballs, bones, arrows, with circles sketched in various places]
Instead, there are concentric rings where we might draw such a boundary. The brain? The nerves? The muscles? The skin?
With a more agentless notion of agency, you can easily look further out.
Does this person's thought of political protest cause such a protest to happen? Does the protest lead to the change which it demands?
Anyway. That is quite enough on what I was thinking in the shower. [picture: recap of some of the pictures from above, in a thought bubble] The point is, after meditation, the thoughts leading to action were quite different, in a way which (temporarily) eliminated any resistance which I had to going under a hot or cold shower which I knew would not harm me but which would ordinarily be difficult to get myself to stand under.
(I normally can take cold-water showers by applying willpower; I'm talking about a shift in what I can do "easily", without a feeling of effort.)
So. My model of this:
I'm going to be a little bit vague here, and say that we are doing something like some kind of reinforcement learning, and the algorithm we use includes a value table:
[picture: table, actions on x-axis, states on y-axis, cells of table are estimated values of taking actions in states]
A value isn't just the learned estimate of the immediate reward which you get by taking an action in a state, but rather, the estimate of the eventual rewards, in total, from that action.
This makes the values difficult to estimate.
An estimate is improved by value iteration: passing current estimates of values back along state transitions to make values better-informed.
[picture: table like above, with arrows, saying "if (s1,a1)->(s2,a2) is a common transition, propagate backward along the link (s1,a1)<-(s2,a2)"]
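(To make the propagation step concrete, here is a minimal sketch in code -- my own illustration, not part of the original drawings. It uses a standard Q-learning-style update, which is one way to cash out "passing value back along a transition":)

```python
import numpy as np

n_states, n_actions = 5, 3
gamma, alpha = 0.9, 0.1              # discount factor, learning rate
Q = np.zeros((n_states, n_actions))  # the value table: Q[s, a] estimates eventual total reward

def backup(s1, a1, reward, s2):
    """Propagate value backward along an observed transition (s1, a1) -> s2."""
    Q[s1, a1] += alpha * (reward + gamma * Q[s2].max() - Q[s1, a1])
```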
For large state & action sets, this can be too expensive: we don't have time to propagate along all the possible (state,action) transitions.
So, we can use attention algorithms to focus selectively on what is highest-priority to propagate.
The goal of attention is to converge to good value estimates in the most important (state, action) pairs as efficiently as possible.
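(One existing algorithm with roughly this shape is prioritized sweeping, which queues up whichever backups are expected to change values the most. A hedged sketch, continuing the code above -- the post doesn't commit to this particular algorithm, and `predecessors` is an assumed model lookup:)

```python
import heapq

pq = []  # priority queue of cells awaiting backup (priorities negated for max-first order)

def attend(s, a, priority):
    heapq.heappush(pq, (-priority, s, a))

def sweep(budget, predecessors):
    """Spend a limited compute budget on the highest-priority backups.

    predecessors(s, a) is an assumed model lookup yielding (s0, a0, reward)
    triples for transitions known to lead into (s, a).
    """
    for _ in range(budget):
        if not pq:
            break
        _, s, a = heapq.heappop(pq)
        for s0, a0, reward in predecessors(s, a):
            old = Q[s0, a0]
            backup(s0, a0, reward, s)             # reuses backup/Q from the sketch above
            attend(s0, a0, abs(Q[s0, a0] - old))  # big value changes earn more attention
```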
Now, something one might conceivably try is to train the attention algorithm based on reinforcement learning as well. One might even try to run it from the very same value table:
[picture: value table as before, actions partitioned into "thinking" actions that propagate values and "standard external actions"]
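(A hedged sketch of what that partition might look like -- again my construction, not the author's. Thinking actions are extra indices in the same table, and choosing one performs a backup on the cell it points at, so the table that scores attention is the very table attention rewrites:)

```python
n_thinking = n_states * n_actions                  # one "attend to cell (s', a')" action per cell
Q2 = np.zeros((n_states, n_actions + n_thinking))  # shared table over external and thinking actions

def act(s, env_step, model_transition):
    """Pick greedily from the shared table; thinking actions perform backups.

    env_step and model_transition are assumed interfaces: the real environment
    (returns the next state), and the agent's model of a cell's likely outcome.
    """
    a = int(Q2[s].argmax())
    if a < n_actions:
        return env_step(s, a)                      # an ordinary external action
    s1, a1 = divmod(a - n_actions, n_actions)      # decode which cell to attend to
    reward, s2 = model_transition(s1, a1)
    # Internal computation only. The pathology the post warns about arises when
    # such backups systematically keep the thinking actions' own values inflated.
    Q2[s1, a1] += alpha * (reward + gamma * Q2[s2, :n_actions].max() - Q2[s1, a1])
    return s                                       # thinking doesn't move the world
```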
"The problem with this design is that it can allow for pathological self-reinforcing patterns of attention to emerge. I will provocatively call such self-reinforcing patterns "ego structures". An ego structure doesn't so much feed on real control as on the illusion of control.
[picture: loudmouth-representing-consciousness from before, saying "I told you so!"]
The ego structure gets its supply of value by directing attention to its apparent successes and away from its apparent failures, including focusing on interpretations of events which make it look like the ego had more control than it did during times of success, and less than it did in cases of failure.
[picture: car with loudmouth saying "I never said to go left!"]
Some of this will sound quite familiar to students of cognitive bias. One might normally explain these biases (confirmation bias, optimism bias, attribution bias) as arising from interpersonal incentives (like signalling games).
I would not discount the importance of those much, but the model here suggests that internal dynamics are also to blame. In my model, biases arise from wireheading effects. In the societal analogies mentioned earlier, we're looking at regulatory capture and rent-seeking.
This is rather fuzzy as a concrete mathematical model because I haven't specified any structure like an "interpretation" -- but I suspect details could be filled in appropriately to make it work. (Specifically, model-based reinforcement needs to somehow be tied in.)
Anyway, where does meditation come in?
My model is that meditation entices an ego structure with the promise of increased focus (i.e., increased attentional control), which is actually delivered, but at the same time dissolves ego structures by training away any contortions of attention which prevent value iteration from spreading value through the table freely and converging to good estimates efficiently.
[picture: happy meditating person with value table including "updating" actions in his/her head]
How does it provide increased control while dissolving control structures?
Well, what are you training when you meditate? Overtly, you are training the ability to keep attention fixed on one thing. This is kind of a weird thing to try to get attention to do. The whole point of attention is to help propagate updates as efficiently as possible. Holding attention on one thing is like asking a computer to load the same data repeatedly. It doesn't accomplish any computation. Why do it?
[picture: same meditation with an empty-set symbol instead of face + value table]
Well, it isn't quite a no-operation. Often, the meditative focus is on something which you try to observe in clarity and detail, like the sensations of the body. This can be useful for other reasons.
For the sake of my model, though, think of it as the ego structure trying to keep the attention algorithm in what amounts to a no-op, repeatedly requesting attention in a place where the information has already propagated.
[picture: value table with lots of arrows between the same pair of cells]
The reason this accomplishes anything is that the ego is not in complete control. Shadows dance beneath the surface.
[picture: same value table with a bunch of other arrows sketched in too]
The ego is a set of patterns of attention. It has "attachments" -- obligatory mental gymnastics which it has to keep up as part of the power struggle. Indeed, you could say it is only a set of attachments.
In CFAR terms, an attachment is a trigger-action pattern.
Examples:
"Oh, I mustn't think that way" (rehearsing a negative association to make sure a specific attention pattern stays squished)
Getting up & getting food to distract yourself when you feel sad
Rehearsing all the reasons you'll definitely succeed whenever a failure thought comes up
Meditation forces you to do nothing whenever these thoughts come up, because the only way to maintain attention at a high level is to develop what is called equanimity: any distracting thought is greeted and set aside in the same way, neither holding it up nor squishing it down. No rehearsal of why you must not think that way. No getting up to go to the fridge. No rehearsal of all the reasons why you will definitely succeed.
Constantly greeting distractions with equanimity and setting them aside fills the value table with zeros where attachments previously lived.
[picture: value table with a bunch of zeros in the left portion labelled "attentional acts"]
Note that these are not fake zeros. You are not rewriting your true values out of existence (though it may feel that way to the ego). You are merely experimenting with not responding to thoughts, and observing that nothing terrible happens.
Another way of thinking about this is un-training the halo effect (though I have not seen any experimental evidence supporting this interpretation). Normally, all entities of conscious experience are imbued with some degree of positive or negative feeling (according to cognitive bias research & experienced meditators alike), which we have flinch-like responses to (trigger-action patterns). Practicing non-response weakens the flinch, allowing more appropriate responses.
Putting zeros in the table can actually give the ego more control by eliminating some competition. However, in the long term, it destabilizes the power base.
You might think this makes sustained meditative practice impossible by removing the very motivational structures trying to meditate; and perhaps it sometimes works that way. Another possibility is that the ego is sublimated into a form which serves to sustain the meditative practice, the skills of mental focus / mindfulness which have been gained, and the practice of equanimity. This structure serves to ensure that propagation of value through the table remains unclogged by attachments in the future. Such a structure doesn't need to play games to get credit for what it does, since it is actually useful.
Regardless, my advice is that you should absolutely not take this model as an invitation to try to dissolve your ego.
Perhaps take it as an invitation to develop better focus, and to practice equanimity in order to debias halo-effect related problems & make "ugh fields" slowly dissolve.
I have no particular indication that directly trying to "dissolve ego" is a safe or fruitful goal, however, and some reason to think that it is not. The indirect route to un-wireheading our cognitive strategies through a gently rising tide of sanity seems safest.
Speaking of the safety of the approach...
Why doesn't "zeroing out" the value table destroy our values, again??
At sinceriously.fyi, Ziz talks about core vs structure.
Core is where your true values come from. However, core is not complex enough to interface with the world. Core must create structure to think and act on its behalf.
[picture: central golden circle with complex stuff radiating out from it]
"Structure" means habits of thinking and doing; models, procedures. Any structure is an approximation of how the values represented by the core play out in some arena of life.
So, in this model, all the various sub-agents in your mind arise from the core, as parts of the unfolding calculation of the policy maximizing the core's values.
These can come into conflict only because they are approximations.
[picture: that car again, with core+radiating-stuff superimposed on it]
The model may sound strange at first, but it is a good description of what's going on in the value-table model I described. (Or rather, the value-table model gives a concrete mechanism for the core/structure idea.)
The values in the table are approximations which drive an agent's policy; a "structure" is a subset of the value table which acts as a coherent strategy in a subdomain.
Just removing this structure would be bad; but, it would not remove the core values which get propagated around the value table. Structure would re-emerge.
However, meditation does not truly remove any structure. It only weakens structure by practicing temporary disengagement from it. As I said before, meditation does not introduce any false training data; the normal learning mechanisms are updating on the simple observation of what happens when most of the usual structure is suppressed. This update creates an opportunity to do some "garbage collection" if certain structures prove unnecessary.
According to this model, all irrationality is coming from the approximation of value which is inherent in structure, and much of the irrationality there is coming from structures trying to grab credit via regulatory capture.
("Regulatory capture" refers to getting undue favor from the government, often in the form of spending money lobbying in order to get legislation which is favorably to you; it is like wireheading the government.)
The reflective value-table model predicts that it is easy to get this kind of irrationality; maybe too easy. For example, addictions can be modeled as a mistaken (but self-reinforcing) attention structure like "But if I think about the hangover I'll have tomorrow, I won't want to drink!"
So long as the pattern successfully blocks value propagation, it can stick.
(This should be compared with more well-studied models of such irrationality such as hyperbolic discounting.)
Control of attention is a computationally difficult task, but the premise of Buddhist meditation (particularly Zen) is that you have more to unlearn than to learn. In the model I'm presenting here, that's because of wireheading by attentional structure.
However, there is some skill which must be learned. I said earlier that one must learn equanimity. Let's go into what that means.
The goal is to form a solid place on which to stand for the purpose of self-evaluation: an attentional structure from which you can judge your other attentional structures impartially.
[picture: wisdom-seeker on mountaintop nonplussed at being told by lotus-sitting master that all that's in the way of seeing himself is himself and he should simply stand aside]
If you react to your own thoughts too judgementally, you will learn to hide them from yourself. Better to simply try to see them clearly, and trust the learning algorithms of the brain to react appropriately. Value iteration will propagate everything appropriately if attention remains unblocked.
According to some Buddhist teachings, suffering is pain which is not experienced fully; pain with full mindfulness contains no suffering. This is claimed from experience. Why might this be true? What experience might make someone claim this?
Another idea about suffering is that it results from dwelling on a way that reality differs from how you want it to be, which you can't do anything about.
Remember, I'm speaking from within my machine-learning model here, which I don't think captures everything. In particular, I don't think the two statements above capture everything important about suffering.
Within the model, though, both statements make sense. We could say that suffering results from a bad attention structure which claims it is still necessary to focus on a thing even though no value-of-information is being derived from it. The only way this can persist is if the attention structure is refusing to look at some aspects of the situation (perhaps because they are too painful), creating a block to value iteration properly scoring the attentional structure's worth.
For example, it could be refusal to face the ways in which your brilliant plan to end world hunger will succeed or fail due to things beyond your control. You operate under a model which says that you can solve every potential problem by thinking about it, so you suffer when this is not the case.
From a rationalist perspective, this may at first sound like a good thing, like the attitude you want. But it ruins the value-of-information calculations, ignores opportunity costs, and stops you from knowing when to give up.
To act with equanimity is to be able to see a plan as having a 1% chance of success and see it as your best bet anyway, if best bet it is -- and in that frame of mind, to be able to devote your whole being toward that plan; and yet, to be able to drop it in a moment if sufficient evidence accumulates in favor of another way.
So, equanimity is closely tied to your ability to keep your judgements of value and your judgements of probability straight.
Adopting more Buddhist terminology (perhaps somewhat abusively), we can call the opposite of equanimity "attachment" -- to cling to certain value estimates (or certain beliefs) as if they were good in themselves.
To judge certain states of affairs unacceptable rather than make only relative judgements of better or worse: attachment! You rob yourself of the ability to make tradeoffs in difficult choices!
To cling to sunk costs: attachment! You rob your future for the sake of maintaining your past image of success!
To be unable to look at the possibility of failure and leave yourself a line of retreat: attachment! Attachment! Attachment!
To hunt down and destroy every shred of attachment in oneself -- this, too, would be attachment. Unless our full self is already devoted to the task, this will teach some structure to hide itself.
Instead, equanimity must be learned gently, through nonjudgemental observation of one's own mind, and trust that our native learning algorithm can find the right structure if we are just able to pay full attention.
(I say this not because no sect of Buddhism recommends the ruthless route -- far from it -- nor because I can derive the recommendation from my model; rather, this route seems least likely to lead to ill effects.)
So, at the five-second level, equanimity is just devoted attention to what is, free from immediate need to judge as positive or negative or to interpret within a pre-conceived story.
"Between stimulus and response there is a space. In that space is our power to choose our response. In our response lies our growth and our freedom." -- Viktor E. Frankl
There's definitely a lot that is missing in this model, and incorrect. However, it does seem to get at something useful. Apply with care.
-- End --
I too had difficulties reading this, but I agreed with the general gist of much of what I read. I've actually been intending to write a post that discusses meditation more from a reinforcement-learning angle, but haven't gotten around to it. The closest that I've gotten was this post from about a year ago:
Some time back, Juha lent me his copy of The Mind Illuminated, a book on meditation. This is the best book on meditation that I have ever read. Among other practical instructions, it was the first time that a text really properly explained what the concrete goal of mindfulness practice is.
The goal (or at least a goal) of mindfulness is to train the mental processes responsible for maintaining your peripheral awareness – your background sense of everything that is going on around you, but which is not in the focus of your active attention – to observe not only your physical surroundings, but also the processes going on in your mind. By doing so, the mental processes responsible for habit formation start to get more information about what kinds of thought patterns produce pleasure and which kinds of thought patterns produce suffering. Over time this will start reshaping your mind, as patterns which only produce suffering will get dropped.
And part of the reason why this happens, is that you will start seeing thoughts with false promises of pleasure as what they are; rather than chasing promises of short-term pleasure, you will shift to sustainable thought patterns that produce long-term pleasure.
Suppose that you are meditating, and trying to maintain a focus on your breath. Over time this may start to feel boring. A pleasant-feeling thought will arise, tempting you to get distracted with its promise of relief from the boredom. But if you do get distracted sufficiently many times, and pay attention to how you feel afterwards, you will notice that this didn’t actually make you feel very good. Your concentration is in shambles and chasing random thoughts has just made you feel scatter-brained.
So the next time when that particular distraction arises, it may be slightly less tempting. And you begin to notice that it does feel good when you succeed at maintaining your concentration and ignoring the distractions. You had been suffering because your mind had been offering promises of pleasure which you felt you had to reject, but eventually you begin to internalize that it’s not a choice of pleasure versus concentration at all. Concentration is only boring, or otherwise unpleasant, if you buy into the illusion of needing to chase the pleasant thought in order to feel good. If the false promise of pleasure stops tempting you, then the suffering of not having that pleasure goes away.
The tempting, pleasant thought is kind of like a marketer who first makes you feel inadequate about something, and then offers to sell you a product that will make you feel better. Your problem was never the lack of product; your problem was the person who made you think you can only feel good once you have his product.
Over time you learn to transfer this to your everyday life, paying attention to tempting thought-patterns that cause you suffering there. You experience different kinds of suffering, and feel that this could be fixed, if only you had X. Maybe you are procrastinating on something, and you get distracted by the idea of playing video games instead. Your mind tells you that if you just played video games, they would feel so good, and that pleasure would take away the pain of procrastination.
But if you do start to play the game, you may eventually notice that the promised pleasure never really manifested. Procrastination didn’t make you feel good, it just made you feel more miserable. And it’s one thing to know this on an intellectual level, in the way that most of us know intellectually that we’re going to regret procrastinating later; it’s quite another to actually internalize that belief in such a way that you recognize the temptation itself as harmful, and your mind begins learning to just ignore the temptation, until it never arises in the first place.
And the same principle applies more widely. Social anxiety, frustration over having to participate in an event you wouldn’t actually want to participate in, regrets over past mistakes: all are fundamentally about clinging to a thought which promises to offer pleasure, if only you (weren’t around these people/could skip the event/could change what had happened in the past). It is when you internalize that thinking about this isn’t actually going to deliver the pleasure, and is actually causing you suffering, that the thought becomes easy to just automatically let go of, with no need to struggle or expend willpower.
(Also, this is no longer related to meditation, but people who are interested in the connection between human values and values-in-the-reinforcement-learning-sense may be interested in my paper about that topic.)
I enjoyed reading the hand-written text from images (although I found it a bit surprising that I did). I feel that the resulting slower reading pace fit the content well and that it allowed me to engage with it better. It was also aesthetically pleasant.
Content-wise I found that it more or less agrees with my experience (I have been meditating every day for ~1 hour for a bit over a month and after that non-regularly). It also gave me some insight in terms of every-day mindfulness and some motivation for resuming regular practice or at least making it more regular.
My favorite quote was this:
To act with equanimity is to be able to see a plan as having a 1% chance of success and see it as your best bet anyway, if best bet it is -- and in that frame of mind, to be able to devote your whole being toward that plan; and yet, to be able to drop it in a moment if sufficient evidence accumulates in favor of another way.
(thanks to @gjm for transcribing it, so I didn't have to :)
A related, perhaps useful model: one interpretation of some Buddhist claims is that by default/habit, we/evolution hit upon using affective circuitry as a representation/processing aid in propagating probabilities in a belief network. It is incredibly common for people to assume that if they short-circuit this, their values/beliefs (or what have you), and thus their ability to act, will disappear. The surprising thing is that they don't. It appears that some other way of processing things is possible. People who reach certain fruitions often report surprise that they go in to work the next day and everything seems normal.
| "I have no indication that directly trying to dissolve ego is a safe or fruitful goal"
Does Dzogchen practice (described in Sam Harris' book "Waking Up") contradict this? The sense of self is presented as a primary cause of suffering, and directly dissolving it (or noticing that it is already an illusion) as the antidote.
I have listened to that in audiobook form. I don't consider it to be strong evidence about my concerns. I don't find its view to be especially implausible, though.
Could you (or anyone interested) elaborate on why practices like Self Inquiry might be maladaptive?
Is it a Chesterton Fence around the fragility of values in general, or some specific value, as indicated here?
If so, it could be useful in moderation, or to some agents in specific situations. Examples: 1) Someone serving a life sentence in prison, or in solitary confinement, such that their ability to create value both for themselves and for others is limited, could benefit from weakening the DMN.
2) A Google Design Ethicist might want to hold off on this kind of mental training at least until s/he has a strong moral framework already in place.
My inferential distance from this post is high, I think. So excuse if this question doesn't even make sense.
If you're experiencing "motivational issues", then it stands to reason that it might be useful to keep an eye on which thoughts are leading to actions and which are not.
I think I don't understand what you mean by 'thoughts'?
I view 'thoughts' as not having very much to do with action in general. They're just like... incidental post-hoc things. Why is it useful to track which thoughts lead to actions? /blink
My inferential distance from yours is also high.
I view 'thoughts' as not having very much to do with action in general. They're just like... incidental post-hoc things.
I spend the whole of every working day thinking, and all this thought drives the things that I do at work. For example, the task currently (or before I took a break to read LW) in front of me is to make the API to a piece of functionality in the software I'm developing as simple as it can possibly be, while not making the implementation go through contortions to make it that simple. The actions this has given rise to so far have been to write a page of notes on possible APIs and a mockup of a procedure implementing one of them.
A lot of what I do when I'm not "at work" is the same sort of thing. What I have just written was produced by thinking. So thoughts as "incidental post-hoc things" does not describe anything that I call thoughts.
Do you define thoughts as something relatively specific - that is, does the post make any more sense if you substitute "mental contents" for "thoughts"?
Hmmm. No.
Basically, as far as I know, System 1 is more or less directly responsible for all actions.
You can predict what actions a person will take BEFORE they are mentally conscious of it at all. You can do this by measuring galvanic skin response or watching their brain activity.
The mentally conscious part happens second.
But like, I'm guessing for some reason, what I'm saying here is already obvious, and Abram just means something else, and I'm trying to figure out what.
I am confused, like obviously my thoughts cause some changes in behavior. Maybe not immediately (though I am highly dubious of the whole "you can predict my actions before they are mentally conscious bit"), but definitely in the future (by causing some kind of back-propagation of updates that change my future actions).
The opposite would make no sense from an evolutionary adaptiveness perspective (having a whole System-2-like thingy would be a giant waste of energy if it never caused any change in actions), doesn't at all correspond to high-level planning actions, isn't what the whole literature on S1 and S2 says (which does indeed make the case that S2 determines many actions), and doesn't correspond well to my internal experience.
Yeah I'm not implying that System 2 is useless or irrelevant for actions. Just that it seems more indirect or secondary.
Also please note that overall I'm probably confused about something, as I mentioned. And my comments are not meant to open up conflict, but rather I'm requesting a clarification on this particular sentence and what frame / ontology it's using:
If you're experiencing "motivational issues", then it stands to reason that it might be useful to keep an eye on which thoughts are leading to actions and which are not.
I would like to expand the words 'thoughts' and 'useful' here.
People seem to be responding to me as though I'm trying to start an argument, and this is really not what I'm going for. Sharing my POV is just to try to help close inferential gap in the right direction.
(fwiw I agree something about the conversation felt a bit off/overly-argumentative to me, although it's hard to place)
I acknowledge that it's likely somehow because of how I worded things in my original comment. I wish I knew how to fix it.
I don't know whether this is the true cause, but re-reading your original comment, the word "just" in this sentence gives me a very slight sense of triggeredness:
They're just like... incidental post-hoc things.
I have a feeling which has the rough shape of something like... "people here [me included] are likely to value thoughts a lot and think of them as important in shaping behavior, and may be put on the defensive by wording which seems to be dismissive towards the importance of thoughts".
Your second comment ("Basically, as far as I know, System 1 is more or less directly responsible for all actions") feels to me like it might trigger a bit of the same, as it can be read to imply something like... "all of the stuff in the Sequences about figuring out biases and correcting for them on a System 2 level is basically useless, since System 1 drives people's actions".
I also feel that neither of these explanations is exactly right, and it's actually something more subtle than that. Maybe something like "thoughts being the cause of actions is related to a central strategy of many people around here".
It's also weird that, like Raemon says, the feeling-of-the-conversation-being-off is subtle; it doesn't feel like anybody is being explicitly aggressive and one could in principle interpret the conversation as everyone just sharing their models and confusion. Yet it feels like there is that argumentative vibe.
thoughts being the cause of actions is related to a central strategy of many people around here
(It's a good reason to at least welcome arguments against this being the case. If your central strategy is built on a false premise, you should want to know. It might be pointless to expect any useful info in this direction, but I think it's healthier to still want to see it emotionally even when you decide that it's not worth your time to seek it out.)
Thanks! This was helpful analysis.
I suspect my slight trigger (1/10) set off other people's triggers. And I'm more triggered now as a result (but still only like 3/10.)
I'd like to save this thread as an example of a broader pattern I think I see on LW, which makes having conversations here more unpleasant than is probably necessary? Not sure though.
If you observe your thoughts a lot you might discover that certain thoughts you have are often followed by you taking actions afterwards.
Thoughts might mean certain internal dialog. For some people thoughts are also very visual.
This is really hard to read. Is there some reason why all this text isn’t just… text? I don’t know about anyone else, but I’d very much prefer that…
I have somewhat mixed feelings about this. I really liked the post the way it was, because the fact that it was all drawn on an iPad clearly drastically reduced the trivial inconveniences for Abram to add small diagrams and illustrations, which is where a lot of the value of this post comes from. There did turn out to be long sections without illustrations, but I both think that Abram didn't know how much text there would be before he started, and that I just really want to err on the side of people using whatever tools allow them to best get their ideas across.
I do think that searchability and referenceability are really important. My model is that if we curate something like this, which does include a large amount of text, we should just pay $20 or so to have someone transcribe the images to text and add it to the bottom of the post, or comment on the post (related to that, happy to send anyone $20 via Venmo who wants to transcribe this thing and post it in a comment).
One of the things I really like about LessWrong is that we've historically had an openness to non-standard ways of explaining things. A lot of Eliezer's writing included weird fictional dialogues, some weird bouts of poetry, personal stories, napkin diagrams and standard popular science explanations, and I feel having Abram's comics on here continues that legacy quite well. I am excited about people experimenting with new ways of explaining things, and am very very hesitant to discourage that.
I agree with more or less everything people have said about the advantages of the text being actual text.
But also it's fun and nice to have it handwritten, and I think the benefits are non-trivial.
So. Both? Just have both versions so everyone can enjoy the version that's best for them?
I'll go ahead and transcribe this one. (I'm currently learning two different alternative methods of typing and I'm at a stage where transcription is better practice than normal writing, and doing this will give me a nice opportunity to reflect on the post.) I'll have it done this weekend. Let me know where I should post the text.
Upvoted for putting in the work to do the thing!
(I basically agree that having the transcript would be good, but don't think Abram should be under any particular obligation to do so – exploring whimsical formats seems fine to me, and if other people find it valuable enough to write a transcript that sounds good too)
One of the things I really like about LessWrong is that we’ve historically had an openness to non-standard ways of explaining things. A lot of Eliezer’s writing included weird fictional dialogues, some weird bouts of poetry, personal stories, napkin diagrams and standard popular science explanations, and I feel having Abram’s comics on here continues that legacy quite well. I am excited about people experimenting with new ways of explaining things, and am very very hesitant to discourage that.
Please see my comment elsethread about usability/accessibility/etc. Note that none[1] of the concerns I listed apply to Eliezer’s writing.
I think experimenting with non-standard ways of explaining things is great.
But a total disregard for usability and accessibility is not so great.
[1] Well, almost none; one may quibble that people on text-based browsers and screen readers won’t get the benefit of Eliezer’s diagrams—but at least they’ll have the rest of his posts, the overwhelming majority of the content of which is text, and which provide context for the diagrams. In contrast, with the OP, such users get absolutely nothing at all.
I am in favor of accessibility, but I would be highly surprised if more than 3% of users to LessWrong have a limitation of not being able to see images, so text-based browsers and screen readers do not strike me as a major concern. I am however in favor of reducing the size of the images to make it easier to read on mobile, since that is a much larger share of our users. I don't think this is something the author should have to worry about, though; instead, LessWrong should make it as easy as possible by providing our own tools for image upload and associated conversion into sensible formats. I do think that's something we should improve relatively soon (and/or we would appreciate a PR on).
… I would be highly surprised if more than 3% of users to LessWrong have a limitation of not being able to see images, so text-based browsers and screen readers do not strike me as a major concern.
That may well be true, but please note that this was only one of eight issues I listed.
I generally like the hand-written style and would like to see more of it. I'm guessing that style was net-positive for me here (and made me a lot more likely to read the whole thing), though I did experience some reading fatigue 2/3 through this post.
As with podcasts, the obvious solution (and the one I’d recommend) is to provide both formats—the “fancy” one, and also a pure-text transcript.
I have a similar preference. I don't have any special accessibility needs, but text is generally easier to read for me because I can make adjustments to it to better fit where I'm reading. Text also makes translation easier, which I think of as extremely important since machine translation is good enough that you can read foreign language materials in your own language (for example, I often read German and Russian language things thanks to Google translate).
But of course if Abram would not post if he felt he had to provide text rather than images of text, I wouldn't want that. Just all else equal I have a similar preference for text.
In contrast, I really liked it written out (which makes picture integration natural) and I was surprised to find others having problems reading it. My vision is 20/50 the last time I checked if that's relevant.
I wondered the same thing. However, after thinking about it, I noticed that having the text be handwritten in different colors and sizes gave it a different feel, in a good way, in that the color and size in a way stood in for speech modulations like tone/volume/etc. One could change the font size and color in normal text, but I feel like that probably wouldn't have had the same effect, though I could be wrong.
having the text be handwritten in different colors and sizes gave it a different feel, in a good way, in that the color and size in a way stood in for speech modulations like tone/volume/etc.
I have absolutely no idea what you mean, here—which means that any such effect, even if it’s intended, will simply not be perceived by some (many?) readers.
Other disadvantages of this “text as images” format:
Makes the post a way, way bigger download (problematic for people on slow/metered connections)
Let me emphasize this part by pointing out that this post contains over twenty megabytes of images.
Edit: Note that converting the images from PNGs to GIFs would cut the file size down by about 75 percent, with zero loss of quality (so it would only be about 5 MB of images—still totally unnecessary, IMO, but not quite as egregiously so).
Since when did GIFs have notably better compression than PNGs? (Perhaps the issue is that these are badly-generated PNGs, and simply loading them into something that knows how to write PNGs and saving them again would produce similar gains?)
It’s the color palette; you can indeed save them as 256-color PNGs and get the same file size reduction. I suggested converting to GIF because it’s more likely that the OP knows or can figure out how to do that, than that he has & knows how to use a tool which can save palette-reduced PNGs.
Ah, I see. Yes, that would be an improvement, maybe 10% as good as just making the stuff be text in the first place.
I have seen the question of the safety of meditation raised before, but I notice there isn't much context surrounding people's concerns and it makes me wonder if I am missing something. What are the thoughts on safety?
For myself all I have to go on are occasional articles like this one, which seem in the same vein as the warnings in Mastering the Core Teachings of the Buddha.
An analogy I've heard is to compare mental training to physical training. It is generally useful, but if you have injuries or limitations of some sort (say, a busted knee), you should find ways to work around those.
What are the thoughts on safety?
I'm hoping someone who is experienced in both rationality and meditation can weigh in here, and also resolve any possible contradictions (especially around Insights gained).
You mentioned that this metaphor should also include world models. I can help there.
Many world models try to predict the next state of the world given the agent's action. With curiosity-driven exploration, the agent tries to explore in a way that maximizes its reduction of surprise, allowing it to learn about its effect on the world (see for example https://arxiv.org/abs/1705.05363). Why not just maximize surprise? Because we want surprise we can learn to decrease, not just the constant surprise of a TV showing static.
This means they focus an exploration reward on finding novel states. Specifically, novel states that are due to the agent's actions, since those are the most salient. We could rephrase this as "novel changes the agent has control over". But what is defined as an action, and what can it control?
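(To make the exploration bonus concrete, here is a toy sketch in the style of the linked paper -- a linear forward model standing in for their learned network; the intrinsic reward is the model's prediction error, which decays as the agent learns its effect on the world:)

```python
import numpy as np

feat_dim, act_dim = 8, 4
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(feat_dim + act_dim, feat_dim))  # toy linear forward model

def curiosity_reward(phi_s, a_onehot, phi_s_next, lr=0.01):
    """Intrinsic reward = forward-model prediction error on state features."""
    global W
    x = np.concatenate([phi_s, a_onehot])
    err = phi_s_next - x @ W       # how surprising was this transition?
    W += lr * np.outer(x, err)     # the model improves, so the bonus decays
    return 0.5 * float(err @ err)  # novel-but-learnable transitions pay best
```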
Meditation changes where we draw the boundary between the agent and the environment. The no-self insight lets you view thoughts as external things arising outside of your control. The impermanence insight lets you view more things as outside your control.
These two changes in perspective mean that an agent no longer experiences negative reward for states it now thinks it has no control over. It can also do reward hacking on its own thoughts, since they are now "external" and targets of exploration rewards. Previously it could only learn patterns of thought with reference to some external goal; now it can learn a pattern of thought directly.
Disclaimer: world models and curiosity-driven exploration are at an early stage, and probably have a poor correspondence to how our brains work. There are quite a few unsolved problems like the noisy TV problem.
Here's an illustrated rendition of a semiformal explanation of certain effects of meditation. It was inspired by, but differs significantly from, Kaj's post on meditation. Some people appreciated gjm's transcription for readability.