Occasionally, I get asked for feedback on someone’s resume. I’m not really a resume-editing coach, but I can ask them what they accomplished in that role where they’re just listing their duties. Over time, I’ve found I’m completely replaceable with this rock.
You did X for Y company? Great! Why is it impressive? Did it accomplish impressive outcomes? Was it at an impressive scale? Did it involve impressive other people or companies?
You wrote an application using Z software? Great! Why is it impressive? Did the code speed up run time by an impres...
Fabricated goals
I’m really good at taking an abstract goal and breaking it down into concrete tasks. Most of the time, this is super useful.
But if I’m not sure what would accomplish the high-level goal, sometimes the concrete goals turn out to be wrong: they don’t actually accomplish the vaguer high-level goal. If I don’t notice that, I’ll at best be confused. At worst, I’ll accomplish the concrete goals, fail at the high-level goal, and then not notice that all my effort isn’t accomplishing the outcome I actually care about.
I’m calling the misguided c...
I actually have a heart condition that severely limits my ability to exercise. Walking three miles is beyond what I'm capable of on an average day, let alone jogging anywhere.
This is a surprisingly harsh critique of a minor detail. In the future, I would strongly recommend a more polite, truth-seeking inquiry.
Hmm, it would probably work well to write a longer daily FB post, like if I set a goal to publish at least 500 words each day.
Part of the goal is ‘become comfortable submitting things I'm not fully happy with’ and part is 'actually produce words faster'. The second part feels like it needs the length requirement. I've done daily short FB posts before and found it useful, but I noticed that I tended to write mostly short posts that didn't require me to hammer out words.
Hmm, I'm not certain where you're getting that. I interpreted this as saying that the amount of deliberate practice contributed much more to success in some fields than in others. (Which could be explained by some fields not having developed techniques and training methods that enable good DP, or by everyone maxing out practice, or by practice not mattering in those fields.) DP still makes a difference among top performers in music and chess, indicating that not all top performers are maxing out deliberate practice in those areas.
I considered that early on during my exploration, but didn't go deep into it after seeing Scott's comment on his post saying:
These comparisons held positions (specialist vs. generalist) constant. Aside from whether someone is a specialist or not, I don't think there's any tendency for older doctors to get harder cases.
Now, after seeing that the other fields also match the same pattern of decline, I'd be somewhat surprised by evidence that taking on harder cases explained the majority of skill plateaus in middle age for doctors.
Note: I was treating the 2009 study as a pseudo-replication. It's not a replication, but it's a later study on the same topic that found the same conclusion, which allayed some of my concerns about old psychology research. However, I have since looked deeper into Dan Ariely's work, and the number of accusations of fraud or academic misconduct makes me less confident in the study. https://en.m.wikipedia.org/wiki/Dan_Ariely#Accusations_of_data_fraud_and_academic_misconduct
I wasn't thinking of shards as reward prediction errors, but I can see how the language was confusing. What I meant is that when multiple shards are activated, they affect behavior according to how strongly and reliably they were reinforced in the past. Practically, this looks like competing predictions of reward (because past experience is strongly correlated with predictions of future experience), although technically it's not a prediction - the shard is just based on the past experience and will influence behavior similarly even if you rationally know t...
Time inconsistency example: You’ve described shards as context-based predictions of getting reward. One way to model the example would be to imagine there is one shard predicting the chance of being rewarded in the situation where someone is offering you something right now, and another shard predicting the chance you will be rewarded if someone is promising they will give you something tomorrow.
For example, I place a substantially higher probability on getting to eat cake if someone is offering me the slice right now, compared to someone promising that they will bring a slightly better cake to the office party tomorrow. (In the second case, they might get sick, or forget, or I might not make it to the party.)
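To make that concrete, here's a minimal sketch (in Python, with made-up reliabilities and reward values, not numbers from anything above) of how two shards trained on different situations could end up pulling in different directions without any explicit time-discounting:

```python
# A toy sketch of the two-shard cake example, with invented numbers.
# Each "shard" is summarized by how often that kind of situation paid off
# in the past; its pull on behavior is that learned reliability times the
# value of the reward on offer.

def shard_pull(past_successes: int, past_attempts: int, reward_value: float) -> float:
    """Influence of a shard, approximated as learned reliability * reward value."""
    reliability = past_successes / past_attempts
    return reliability * reward_value

# "Cake offered right now" almost always resulted in cake.
now_pull = shard_pull(past_successes=19, past_attempts=20, reward_value=1.0)

# "Cake promised for tomorrow" fell through more often, even though the
# promised cake is slightly better.
later_pull = shard_pull(past_successes=12, past_attempts=20, reward_value=1.2)

print(f"now:   {now_pull:.2f}")    # 0.95
print(f"later: {later_pull:.2f}")  # 0.72 -> the 'take it now' shard wins
```

Nothing in the sketch is a prediction in the technical sense; the "reliability" is just a summary of past reinforcement, which is the distinction I was trying to draw above.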
yeah, a key principle is something like "start light, stay sustainable". or maybe "start with space, make more space".
there's a large range of naturalism infrastructure it's possible to lay. some people want to dive all the way in immediately: evening journal, pocket field notes, a weekly time block for focused investigation, a weekly time block for analysis, a big "catching the spark" exercise to get things started, and a full predict-observe-update loop practice. but most people are better off choosing one single TAP: "I'll snap my fingers when I think I...
>Logan, how do you make space for practicing naturalism?
I don't have a ready-made answer to this, so I'm going to start rambling whatever maybe-nonsense comes to mind, and see what happens. This will probably not resemble "a good answer" very closely.
I think I mostly "make space for naturalism" by having different intellectual priorities than most adults. When I want to learn something, or to solve a problem, or when I'm in some unfamiliar kind of situation, naturalism-type thoughts are way higher on my priority list than non-naturalism-type thoughts. I...
Speculating here, I'm guessing Logan is pointing at a subcategory of what I would call mindfulness - a data point-centered version of mindfulness. One of my theories of how experts build their deep models is that they start with thousands of data points. I had been lumping frameworks along with individual observations, but maybe it's worth separating those out. If this is the case, frameworks help make connections more quickly, but the individual data points are how you notice discrepancies, uncover novel insights, and check that your frameworks are working in practice.
(Copying over FB reactions from while reading) Hmm, I'm confused about the Observation post. Logan seems to be using direct observation vs map like I would talk about mindfulness vs mindlessness. Except for a few crucial differences: I would expect mindfulness to mean paying attention fully to one thing, which could include lots of analysis/thinking/etc. that Logan would put in the map category. It feels like we're cutting reality up slightly differently.
Trying to sit with the thought of the territory as the thing that exists as it actually is regardless of whatever I expect. This feels easy to believe for textbook physics, somewhat harder for whatever I'm trying to paint (I have to repeatedly remind myself to actually look at the cloud I'm painting), and really hard for psychology. (Like, I recently told someone that, in theory, if their daily planning is calibrated they should have days where they get more done than planned, but in practice this gets complicated because of interactions between their plan...
(Copying over FB reactions from while reading) There’s something that feels familiar so far. For myself and when I’m working with clients, I often encourage experiments and journaling about the experience as they go. Part of the reason is uncertainty about the result, but another part is taking the time to check your expectations against your actual experience as it’s happening.
Like, I recently felt really drained. Noticing that feeling, I could immediately say several things that had been draining, but I wouldn’t have said they were hard while I’d been do...
Coming back after finishing the series, I notice the "scary!" reaction is gone. Based on some of Logan's comments on the FB thread, I think I updated toward 1. worry less about having to explicitly remember every detail -> instead just learn to pay attention and let those observations filter into your consciousness, and 2. it's still fine to use/learn from frames, just make sure you also have direct observations to let you know if your frames/assumptions are off.
I want to understand how to actually get humans to do the right things, and that task feels gargantuan without building on the foundation of simplified handles other people have discovered.
Yet, I value something like naturalism because I don't trust many handles as they are now, especially coming from psychology. "Confusion-spotting" seems pretty important if I'm going to improve on the status quo.
(Copying FB reactions I made as I read) 1. I care a lot about deep mastery. 2. It doesn’t feel immediately obvious that direct observation is the fastest route to deep mastery. Like, say I wanted to understand a new field - bio for example. I would start with some textbooks, not staring at my dog and hoping I’d understand how he worked. I’d get to examples and direct experience, but my initial instinct is to start with pre-existing frameworks. But maybe I’m just misunderstanding what “direct observation” means?
Probably worth noticing that my mind spent the last half of the post trying to skip ahead. Like, there’s a storyteller setting the scene and my brain wants to skip over “Once upon a time…” …but I’m guessing that skipping over something because it seems familiar is antithetical to the whole point of this series.
Initial reaction: “Ah, scary.” Their move from frames to unfiltered, direct observations feels scary. Like I’m going to lose something important. I rely on frames a lot to organize and remember stuff, because memory is hard and I forget so many important data points. I can chunk lots of individual stuff under a frame.
I don't think this post added anything new to the conversation, both because Elizabeth Van Nostrand's epistemic spot check found essentially the same result previously and because, as I said in the post, it's "the blog equivalent of a null finding."
I still think it's slightly valuable - it's useful to occasionally replicate reviews.
(For me personally, writing this post was quite valuable - it was a good opportunity to examine the evidence for myself, try to appropriately incorporate the different types of evidence into my prior, and form my own opinions for when clients ask me related questions.)
Pro: The piece aimed to bring a set of key ideas to a broad audience in an easily understood, actionable way, and I think it does a fair job of that. I would be very excited to see similar example-filled posts actionably communicating important ideas. (The goal here feels related to this post https://distill.pub/2017/research-debt/)
Con: I don't think it adds new ideas to the conversation. Some people commented on the sales-y style of the intro, and I think that's a fair criticism. The piece prioritizes being engaging and readable over nuance.
Hmm, that framing doesn't feel at odds with mine. Finding what's rewarding can definitely include whatever it is that's reinforcing the current behavior. I emphasized the gut-level experience because I expect those emotions contain the necessary information that's missing from rational explanations for what they "should" do.
But Ericsson's research found that one group of expert violinists averaged 10,000 hours. Another group of "expert" violinists averaged 5,000 hours, and other numbers he cites for expertise range from 500 to 25,000. So really, it's generalizing from "you should have 10,000 hours of practice by the time you're 20 if you want an international career as a violinist" to "you should get 10,000 hours of practice if you want to be an expert in anything"....
So I put that example in because one of the things that felt like a breakthrough in cooking ability for me was seeing a post listing a bunch of world cuisines by their spices (I think it was a post by Jeff Kaufman, but I can't find it now). Having a sense of which spices usually contribute to the flavor profile I want made me a better cook than my previous arbitrary "sniff the spice and guess whether it would be good" method.
That seems likely. I'm not calling Gladwell out - I also haven't read the book, and there's probably a pretty defensible motte there. However, it seems likely that he laid the foundation for the popular internet version by overstating the evidence for it, e.g. this quote from the book: “The idea that excellence at performing a complex task requires a critical minimum level of practice surfaces again and again in studies of expertise. In fact, researchers have settled on what they believe is the magic number for true expertise: ten thousand hours.”
Interesting. None of the sleep doctors I spoke to recommended data sources. However, they seemed to regard even at-home professional sleep tests with skepticism, so this might say more about the level of accuracy they want than about the potential usefulness of personal devices.
As for age, I tried to focus this post on actionable advice. The non-actionable factors that influence sleep are simply too numerous for me to cover properly, and, unfortunately, however big an impact aging has on sleep, reversing aging isn't (yet!) in my repertoire of recommendations.
Sounds like you're describing autonomy, mastery, and meaning - some of the big factors that are supposed to influence job satisfaction. 80,000 Hours has an old but nice summary here https://80000hours.org/articles/job-satisfaction-research. I expect job satisfaction and the resulting motivation make a huge difference in how many hours you can work productively.
For retired and homemaking folks, I think that's really up to them. I don't have a good model for external evaluation. For a student who wants to do impactful things later, I think the calculations are similar.
Since I can't link to it easily, I'm reposting a FB post by Rob Wiblin on a similar point:
"There's something kinda funny about how we don't place much value on the time of high school and undergraduate students.
To see that, imagine that person X will very likely be able to do highly valuable work for society and earn a high peak income of...
Maximizing neglectedness gives different results from maximizing impact.
I don't disagree, but my point is that you can't directly maximize impact without already knowing a lot. Other people will usually do the work that's very straightforward to do, so the most counterfactually valuable work requires specialized knowledge or insights.
Obviously there are many paths that are low-impact. Since it's hard to know which are valuable before you learn about them, you should make a theory-of-change hypothesis and start testing that best guess. That way you're more likely to get information that causes you to make a better plan if you're on a bad track.
As I understand it, your objection is that "being the best" means traditional career success (probably high prestige and money), and this isn't a good path for maximizing impact. That makes sense, but I'm not talking about prestige or money (unless you're trying to earn to give). When I say "best," I mean being able to make judgement calls and contributions that the other people working on the issue can't. The knowledge and skills that make you irreplaceable increase your chances of making a difference.
Honestly, the main thing was to start treating my life as an experiment. Before that, I was just doing what the doctors told me without checking to see if their recommendations actually produced good results. For me, experimenting mainly meant that I 1. tried tracking a bunch of things on my own and analyzing the results, and 2. was willing to try a lot more things, like caffeine pills and antidepressants, because the information value was high. (I first did my research and, when relevant, checked with a doctor, of course.) I think there was a mindset shif...
I used a Lights sheet ( https://www.ultraworking.com/lights ) to track the variables alongside my daily habits, to reduce overhead.
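For what it's worth, the analysis side was nothing fancy. Here's a rough sketch of the kind of thing I mean, assuming the tracking data gets exported as a CSV with one row per day (the column names are hypothetical placeholders, not the actual Lights sheet columns):

```python
# A rough sketch of "analyzing the results" from daily tracking, assuming
# a CSV export with one row per day. Column names ("caffeine_mg",
# "sleep_hours", "energy_1to10") are invented placeholders.
import pandas as pd

df = pd.read_csv("daily_tracking.csv")

# How does each tracked variable correlate with the outcome I care about?
outcome = "energy_1to10"
correlations = df.corr(numeric_only=True)[outcome].drop(outcome).sort_values()
print(correlations)

# Compare the outcome on days with vs. without a given habit.
print(df.groupby(df["caffeine_mg"] > 0)[outcome].agg(["mean", "count"]))
```

Correlations like these obviously don't establish causation; for me the point was just to notice which variables were worth running actual experiments on.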
There's a group for Effective Altruists on Focusmate:
(Dony Christie) "The Basic plan creates the group, and any member who has not subscribed to the Focusmate service for unlimited Focusmate sessions ($5/month) will be able to do 3 free sessions a week with either members of the EA group or the general public. Basic costs $50/month total, which rounds out to $2.50/month per person we currently have interested, and we will get even more people once the group exists and is popularized.
If you wish for unlimited sessions with other EAs (imagine a hi...
Note, I consider this post to be “Lynette speculates based on one possible model”, rather than “scientific evidence shows”, based on my default skepticism for psych research.
A recent Astral Codex Ten post argued that advice is written by people who struggle because they put tons of time into understanding the issue. People who succeeded effortlessly don’t have explicit models of how they perform (section II). It’s not the first time I’ve seen this argument, e.g. this Putanumonit post arguing that explicit rules help poor performers, who then abandon ...