For sure! It's a devilishly hard problem. Despite dipping in and out of the topic, I don't feel confident in even forming a problem statement about it. I feel more like one of the blind men touching different parts of an elephant.
But it seems like having many projects like the Verified Voting Foundation should hedge the risk--if each such project focuses on a small part, then the blast radius of unfortunate mistakes should be limited. I would just hope that, on average, we would be trending in the right direction.
When I last looked a couple of months back, I found very little discussion of this topic in the rationalist communities. The most interesting post was probably this one from 2021: https://forum.effectivealtruism.org/posts/8cr7godn8qN9wjQYj/decreasing-populism-and-improving-democracy-evidence-based
I suppose it's not a popular topic because it rubs up against politics. But I do think that liberal democracy is the operating system for running things like LW, EA, and other communities we all love. It's worth defending--though what that means exactly is vague to me.
Defending liberal democracy is complex, because everyone wants to say that they are on the side of liberal democracy.
If you take the Verified Voting Foundation as one of the examples of highly recommended projects in the link, mainstream opinion these days is probably that their talking points are problematic: people might trust elections less when the foundation speaks about the need for a more trustworthy election process.
While I personally believe that pushing for a more secure voting system is good, it's a complex situation and many other projects in the space are similar. It's easy for a project that's funded for the purpose of strengthening liberal democracy to do the opposite.
Did you find anything interesting in 2018? Did you use it, and, if yes, how'd it go?
How would you label Metz's approach in the dialogue with Zack? To me it's clear that Zack is engaging in truth-seeking--questioning maps, seeking where they differ, trying to make sense of noisy data.
But Metz is definitely not doing that. Plenty of Dark Arts techniques there, and his immediate goal is pretty clear (defend position & extract information), but I can't quite put a finger on his larger goal.
If Zack is doing truth-seeking, then Metz is doing...?
Seconding "Style: Ten Lessons in Clarity and Grace". Amazing explanation of effective written communication.
I would only add this, for the original poster: when you read what the book suggests, reflect on why it's doing so.
When I read "Style" the second time around, it occurred to me how hard reading really is, and that all this advice is really for building a sturdy boat to launch your ideas at the distant shores of other minds.
I felt a jolt of excitement when I overheard a non-Rat (at least in appearance) person casually drop "Slack" during a conversation.
I work at a mid-sized software company based in the SF Bay area. The person talking was a director in my organization. The context was about setting aside time for chewing over problems--not trying to solve them, but just looking at them to see the broader context in which the problem exists.
I experienced it firsthand not too long ago at the NYC Megameetup: dialogues where both (or more) parties actively tried to explore each others' maps, seeking points where there was overlap and where there were gaps. More concretely, everyone was asking a lot more questions than usual. These questions were relevant and clarifying. They helped make the discussion feel speedy, as in, like we were running from room to room, trying to find interesting bits of knowledge, especially where views diverged.
The best way I can describe it is that it felt like thinkin...
I’m still hopeful that there’s some way to make progress if we get enough good minds churning out ideas on how to enroll people into their own personal development.
Me too! I hope my comment didn't come across as cynical or thought-stopping. I think this is one of the highest goods people can produce. It just seems like this is one of those problems where even defining the problem is a wicked problem in the first place--but falling into analysis-paralysis is bad too.
Please do write more on this topic. I'll try to make a post around the same themes this weekend :)
The ending hints at the true problem: how do we go about implementing such change?
We already have more than enough tools: de Bono's thinking hats, 5-whys, CBT & a dozen other forms of therapy (+ drugs!), CFAR, gratitude journaling, meditation, anger management, post-mortems, pre-mortems, legitimate self-help, etc.
But the problems in deploying them are legion:
My own version of this is over-trying to introduce a topic. I'll zoom out until I hit a generally relatable idea like, "one day I was at a bookstore and...", then I'll retrace my steps until I finally introduce what I originally wanted to talk about. That makes for a lot of confusing filler.
The opposite of this, and what I use to correct myself, is how Scott Alexander starts his posts with the specific question or statement he wants to talk about.
Very much! Apart from enjoying it myself, I usually pick some things out and share them with friends and family as a way to offset some of the unagentic doom and gloom present in mainstream media :)
Thanks! It seems I've been practicing most of these, but:
I came here to say exactly this! It's one thing to build an app, but quite another to build the institution that makes it work.
So apart from having members exchange physical goods, and someone to take care of the technical machine, someone has to invest time to tackle all the moderation work to limit bad actors' effects on the series of trades.
There was a flourishing of apps like this around the '10s--couch surfing, tool trading, etc.--but most have died off, leaving bigger players like Facebook Marketplace or Craigslist. I believe that's precisely because they didn't have a plan to tackle the institutional work and instead just assumed strangers would sort things out between themselves.
Some really good stuff here.
I've been reading these digests for about a year now. I look forward to each one. Thanks Jason!
Excellent post. I wholeheartedly agree that progress should be driven by humanistic values as that appears to be the only viable way of steering it toward a future in which humanity can flourish.
I'm somewhat confused though. The techno-optimist space seems to be largely and strongly already permeated with humanist values. For example, Jason Crawford's Roots of Progress regularly posts/links things like a startup using technology to decrease the costs of beautiful sculpture, a venture to use bacteria to eradicate cavities, or a newsletter about producing hi...
What's different there compared to the first book?
I read the first one and it resonated strongly, but I also found that my mental models didn't fit well with its general thrust. Since then I've been studying stats and thinking more about measurement, with the intent to reread the first book. Curious if the cybersecurity one adds something more, though.
Thank you for sharing that. Parts of it resonate very strongly, like being unable to know how much fuel I have left, or practicing making choices, or the need for strategy (which is just dawning on me). It's helpful to know that someone else has walked this path, at least the common part of it, and made it farther along.
Funny/uncanny to read this. This is something I started working on (+ improving sleep) maybe two weeks ago.
How does this work for you if you don't mind me asking?
The pots theory reminded me of this bit about creativity:
In the mid-1960s, researchers Jacob Getzels and Mihaly Csikszentmihalyi studied students at the School of the Art Institute of Chicago to discover what led to successful creative careers. When the students were given a variety of objects and asked to compose a still life drawing, two distinct groups emerged: those who hastily chose an object and proceeded straight to drawing, and those who took much more time, carefully considering different arrangements.
...In their view, the first group was trying to solv
I can see how this can look like procrastination from the outside. But I think in my case, it really is some weird Jedi trickery where the meta level replaced the object level (at much less energy cost--so why would I ever do the object level?)
I've written more this week than in a long time just by clearly asking myself whether I'm doing something meta (fun, leisure) or object-level (building stuff) and there's no ugh-field at all!
Thank you for writing this out. It resonates with what's happening in my head on a deeper, emotional level ("if I study enough meta, I will become a fearsome champion in my first match, ha ha").
The meta element of anything is a necessary evil
This is going on a post-it on my desk.
Thanks! Your comment made me realize I built a sort of trap for myself: I would go for meta when I was tired, telling myself, "hey, maybe I'm not pushing on The Thing, but at least I'm pushing on it indirectly." But that slowly moved me farther and farther away from The Thing, because if I can keep pushing on it with less energy by going meta, why would I ever push on the object level, which costs more energy for the same effect?
But the effect is not the same of course. I just tricked myself.
Also, from your other comment, the pots theory really resonated because it sounds so much like play--making 50 pots creates so much space for experimentation and silliness!
I'm a big fan of the Replacing Guilt series. But I've always found the "guilt" part troubling because it always felt like there was something more behind it, something even more primitive.
Perhaps it's just me or people like me, but now I believe that thing is fear. Completely subjectively: I had an experience recently while watching my thoughts (inspired by https://www.lesswrong.com/posts/bbB4pvAQdpGrgGvXH/tuning-your-cognitive-strategies) and noticed that certain chains of thought terminated as if at a wall made up of this panicky feeling, the one where you feel ...
Thanks, this is incredibly useful.
I think I understand enough to put together a curriculum to delve into this topic, starting with the Harvard course you recommended.
I'm reminded of a post from not too long ago: https://www.lesswrong.com/posts/bbB4pvAQdpGrgGvXH/tuning-your-cognitive-strategies
I haven't run through the exercise that it suggests, but I've borrowed an idea that seems in line with the framework in this post. Internally, I call it a brain debugger.
Basically, from time to time I ask myself what I'm thinking and what I was thinking before. To better illustrate what I mean in the context of this post, here's an example:
This is good advice that I've seen work very well, both for myself and others.
There is, however, a related problem, or rather a metaproblem: how do you choose what to whitebox?
Going with the programming example, the field is huge. Do you invest time into ML? Linux? Rust? Data engineering? SRE?
Then, within each of those categories you can find vast subfields: as an SRE do you focus on observability or CI/CD or orchestration or...? Each is a 1-3+ year subfield in itself.
You can use a heuristic like "what's useful for my job" but even then, unless you're alr...
Thanks! I'll look this over.
Out of curiosity,
Most people with a strong intuition for statistics have taken courses in probability. It is foundational material for the discipline.
Do some people learn statistics without learning probability? Or, what's different for someone who learns only stats and not probability?
(I'm trying to grasp what shape/boundaries are at play between these two bodies of knowledge)
Thanks! This is really helpful--I think this is exactly what I'm trying to do.
Are these texts part of a specific academic track/degree or field of study? It sounds like something someone in engineering would spend a semester on. But also like something someone could spend a career on studying.
You raise a good question, but it still relies on following the (historical) authority of the Academy. Perhaps the Academy has changed? Perhaps the environment the Academy is operating in has changed, forcing the Academy to adjust?
Of course, this would apply to the non-Academy, i.e. broader society, as well--but at different rates, and also in different directions.
A stab at answering your question: you should only apply an update based on the Academy if the Academy is an important entity for you. This isn't binary. Awards factor into my perception of movies, but only play a minor role.
As someone who's experienced this, I've found that Slack is a helpful idea to bring to bear.
Sometimes, trying to utilize the small segments of free time leads to scheduling so much work that one small interruption snowballs into a huge failure. So I've often asked myself, "What can I do to create more slack so that I do have the required bigger chunks of time to truly focus on work that matters?"
Many thanks for letting me know!
Wish I had heard this sooner. Coming from a place where every purchase had to be planned out weeks in advance, and after finding financial stability, it took me some years to realize I shouldn't be trying to optimize the purchase of chopsticks or a closet hanger.
Thanks for sharing your perspective. I remember you describing your experience in a little more depth some time ago and it makes me doubt my experience. Perhaps I've been in less healthy orgs. But more likely there are knobs/patterns I can't see, so org change work like this feels out of reach for me. I've got some thinking to do.
I've been thinking about AllAmericanBreakfast's recent shortform posts about mentition. It's because I've been teaching myself three new things and I noticed that one practice I engage in regularly is playing with problems in my head. But this practice seems to largely depend on how good I am at something.
Anecdotal examples:
I've seen this happen too, along with the same end result.
It appears that a common failure mode here is that the middle management layer fails to translate the values into system updates. No one updates performance reviews, no one updates quarterly/half-year goals, etc. So things just continue as they were before.
Ultimately, it's the responsibility of leadership to fix this. Whether it's by direct intervention or a huddle with middle management, they must do something.
(My experience as an individual contributor that attempted to change how performance reviews are d...
Thanks for posting this. I'll add it to my collection of "thinking tools."
These techniques feel like they have the same spirit as some of de Bono's work, for example, his idea of PO:
PO implies, 'That may be the best way of looking at things or putting the information together. That may even turn out to be the only way. But let us look around for other ways.'
...The two main functions of PO are first to protect an arrangement of information from judgement and to indicate that it is being used provocatively and second to challenge a particular arrangement of
In my experience with doing something similar, this practice also helps memorize adjacent concepts.
For example, I was recently explaining to myself Hubbard's technique that uses Student's t-statistic to figure out a 90% CI for a range of possible population values from a small sample.
Having little memory of statistics from school, I had to refresh my understanding of variance and the standard deviation, which are required to use the t-stat table. So now, whenever I need to "access" variance or stdev, I mentally "reach" for the t-stat table and pick up variance and stdev.
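For anyone curious, here's a minimal sketch of that small-sample CI calculation as I understand it from Hubbard (Python, with made-up sample values, and assuming scipy is available): compute the sample mean and standard deviation, then widen the interval using the t critical value for n - 1 degrees of freedom rather than looking it up in the table.

```python
import math
from scipy.stats import t

# Hypothetical small sample of measurements (made-up numbers for illustration)
sample = [12.1, 14.3, 11.8, 13.5, 12.9]

n = len(sample)
mean = sum(sample) / n
# Sample variance and standard deviation (Bessel's correction: divide by n - 1)
variance = sum((x - mean) ** 2 for x in sample) / (n - 1)
stdev = math.sqrt(variance)

# Two-sided 90% CI -> 5% in each tail, so take the 95th percentile
# of the t distribution with n - 1 degrees of freedom
t_crit = t.ppf(0.95, df=n - 1)
margin = t_crit * stdev / math.sqrt(n)

print(f"90% CI for the population mean: ({mean - margin:.2f}, {mean + margin:.2f})")
```

With only 5 data points the t critical value (~2.13) is noticeably wider than the normal-approximation 1.645, which is exactly the small-sample correction the technique is about.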
Thirding this. Would love more detail or threads to pull on. Going into the constructivism rabbit hole now.
That was my idealism/naivete: that the league of liberal democracies was so mature and strong that they could flip a switch and the war would cease. Maybe they would just tell Putin to stop and he would have to. Because for me, democracy was always a guarantee of peace. But the war made me realize my map was way off from the territory, and Popper's book, in turn, helped replace my fantasy with something closer to the territory.
This echoes an excellent post by Dan Luu that touches on the problems you face when you build larger, less legible systems that force you to deal with normalization of deviance: https://danluu.com/wat/
The action items he recommends are:
Pay attention to weak signals
Resist the urge to be unreasonably optimistic
Teach employees how to conduct emotionally uncomfortable conversations
System operators need to feel safe in speaking up
Realize that oversight and monitoring are never-ending
Most of these go against what is considered normal or comfortable though:
Enjoy Portland! Btw, if you want to hang out with some cool people, there's a rationalist space in Seattle called The Territory.
Some people seem to do this automatically. They notice which things make code harder to work with and avoid them. Occasionally, they notice things that make working with code easier and make sure to include those bits in. I guess that's how you get beautiful code like redis or Django.
But I've never seen any formal approach to this. I've gone down the software craftsmanship rabbit hole for a few years and learned a lot thanks to it, but none of it was based on any research--just people like Beck, Uncle Bob, Fowler, etc. distilling their experience into blog...
Disclaimer: I spent about 2 months diving into the crypto space this past December-January. Read a bunch of stuff (here's a shortened list), got an ENS domain, and wrote some Solidity code (though w/o deploying any of it, even to a testchain).
It seems to me that blockchain tech has a lot of potential for building newer, better coordination tools that integrate with an increasingly online lifestyle and culture.
Currently, most of the community's energy seems to be going into financial solutions, which also produces many, many highly suspicious projects--eithe...
Another technique is to compare yourself to your past self.
I'm often dissatisfied with my writing. But when I look back at stuff that I wrote six months ago, I can't help but notice how much better I've become.
The caveat here is that comparing myself to people like Scott Alexander gives me some direction. Comparing myself to an earlier version of myself doesn't give me that direction. Instead, it gives me a sort of energy/courage to keep on going.
I analyze essays: I'll find an essay I really like and then go through it paragraph by paragraph, trying to figure out what makes it so good. I have no formal training in composition above English 101 and 102, so it's been a long journey of finding out things that people have been talking about for centuries. Above all, it has been slowly changing the way I see written texts because now I'm able to discern parts that I couldn't see before.
For some time, I also analyzed my daily work ([wrote about it here](https://www.lesswrong.com/posts/MAM3pdncCWBrkhnxq/watching-myself-program)), but I've put that on hold because things became too hectic at work (and I'm also changing jobs).
Thanks for sharing this. I often catch myself thinking this way, about how, for example, the outlet in the very room you're sitting in is connected to another strand of wire, and another, and another, all the way to the generator in some power plant somewhere. And since other outlets in other buildings and cities are connected to the same grid, you could say that there is almost a complete circuit between your room and every other room on that grid.
Or go up a level: consider that all this infrastructure is being operated by humans, and that conne...
My impression is that this movie is to older teenagers/early 20-somethings what The Matrix/The Big Lebowski/Fight Club were to me and my age group: grappling with nihilism by taking a long, hard look at all the completely arbitrary social hierarchies that our society is composed of. All of these movies highlight how flimsy social customs are. All of them also give voice to a certain kind of deep anger with the status quo through the violence they portray.
It was a solid movie, though I wouldn't place it in the top 100. I enjoyed it for giving me a window into the thoughts of a group of people I don't normally get to talk with.
This is a trap that software engineers appear to be especially susceptible to. If you hang out in places where they congregate, like Silicon Valley, you'll eventually hear about solutions to all sorts of problems, from homelessness to the opioid crisis.
Being one myself, I attribute this to solving problems every day and getting rewarded for that, then falling under the curse of thinking that since you just solved Very Complicated Problem X, you can probably solve Very Complicated Problem Y, except that Y is in a domain you have so little context in that it see...
Strong upvote too. Thank you for sharing this.
There are innumerable resources with techniques like "avoid passive voice", but very few about the writing process itself, which I think becomes a crucial point to focus on if a person wants to advance beyond a "good enough" level of writing ability. Reading about Duncan's process gave me a few ideas on how to improve my own.
I suspect that this why writers' workshops help people improve--they allow writers to debug each others' writing processes.
This is very interesting work in the Rationalist "I want to be stronger!" sense. Thank you for putting it together and sharing--I'll be on the lookout for workshop dates to sign up for!
Also, I just bumped into this game: https://talktomehuman.com/. It's a simple role playing game that puts you into awkward situations at home or work and has you talk your way out of them. The NPCs are LLM-powered. You have to use your voice and there's a time limit. I haven't tried it yet. But it seems similar in spirit to what you want to build, albeit focused on a different problem.