This reminds me of Justin Skycak's thoughts on Deliberate Practice with Math Academy. I think his ~400-page document about skill building and pedagogy would be useful to you, if you haven't seen it yet.
I think this post was important, pointing out a very real dynamic. It also seems to have sparked some conversations about moderation on the site, and so feels important as a historical artifact. I don't know if it should be in the Best Of, but I think something in this reference class should be.
I like this! Especially the Past, Present, Future framing. I usually split along epistemic and instrumental lines. So my fundamental questions were:
1. Epistemic: What do you think you know and how do you think you know it?
2. Instrumental: What are you trying to protect, and how are you trying to protect it?
I've had some notion of a third thing, but now I've got a better handle on it, thanks!
I'm fond of saying, "your ethics are only opinions until it costs you to uphold them"
The reason I think this is important is because "[t]o argue against an idea honestly, you should argue against the best arguments of the strongest advocates": if you write 3000 words inveighing against people who think comparative advantage means that horses can't get sent to glue factories, that doesn't license the conclusion that superintelligence Will Definitely Kill You if there are other reasons why superintelligence Might Not Kill You that don't stop being real just because very few people have the expertise to formulate them carefully.
There's ...
I believe DaystarEld was talking about this in various places at LessOnline. They've got a sequence going in more depth here: Procedural Executive Function, Part 1
What do you mean by "necessary truth" and "epistemic truth"? I'm sorta confused about what you are asking.
I can be uncertain about the 1000th digit of pi. That doesn't make the digit being 9 any less valid. (Perhaps what you mean by necessary?) Put another way, the 1000th digit of pi is "necessarily" 9, but my knowledge of this fact is "epistemic". Does this help?
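For concreteness, here's a minimal sketch of how I could check that digit myself (assuming the mpmath library; the exact calls are just one way to do it). The fact that I'd have to go compute it is exactly the sense in which my knowledge is "epistemic", even though the answer is "necessary":

```python
from mpmath import mp

mp.dps = 1010                   # work at a bit more precision than we need
pi_str = mp.nstr(+mp.pi, 1005)  # "3.14159..." as a string of 1005 significant digits
print(pi_str[1 + 1000])         # index 0 is '3', index 1 is '.', so this is the 1000th decimal digit
```

This should print 9.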
For what it's worth, I find the Dath Ilan song to be one of my favorites. Upon listening I immediately wanted this song to be played at my funeral.
There's something powerful there, which can be dangerous, but it's a kind of feeling that I draw strength and comfort from. I specifically like the phrasing around sins and forgiveness, and expect it to be difficult to engender the same comfort or strength in me without it. Among my friends I'm considered a bit weird in how much I think about grief and death and loss. So maybe it's a weird psychology thing.
If you can code, build a small AI with the fast.ai course. This will (hopefully) be fun while also showing you specific holes in your knowledge to fill, rather than leaving you with a vague feeling of "learn more".
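For instance, the first lesson has you train an image classifier in a handful of lines. This is a rough sketch from memory of the usual pets quickstart (treat the exact names and arguments as approximate; the course walks you through the real thing):

```python
from fastai.vision.all import *

def is_cat(filename):
    # In the Oxford-IIIT Pets dataset, cat images have capitalized filenames
    return filename[0].isupper()

path = untar_data(URLs.PETS) / 'images'
dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42,
    label_func=is_cat, item_tfms=Resize(224))

learn = vision_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(1)   # fine-tune a pretrained resnet to tell cats from dogs
```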
If you want to follow along with more technical papers, you need to know the math of machine learning: linear algebra, multivariable calculus, and probability theory. For Agent Foundations work, you'll need more logic and set theory type stuff.
MIRI has some recommendations for textbooks here. There's also the Study Guide and this sequence on leve...
Feature Suggestion: add a number to the hidden author names.
I enjoy keeping the author names hidden when reading the site, but find it difficult to follow comment threads when there isn't a persistent id for each poster. I think a number would suffice while keeping the hiddenness.
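Roughly, I'm imagining something like this toy sketch (purely illustrative, not how the site actually does anything):

```python
def anon_labels(comment_authors: list[str]) -> list[str]:
    """Give each distinct author a persistent number within one thread,
    in order of first appearance, without revealing who they are."""
    numbers: dict[str, int] = {}
    labels = []
    for author in comment_authors:
        if author not in numbers:
            numbers[author] = len(numbers) + 1
        labels.append(f"Hidden author #{numbers[author]}")
    return labels

# ["alice", "bob", "alice"] -> ["Hidden author #1", "Hidden author #2", "Hidden author #1"]
```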
This has unironically increased the levels of fun in my life
If you already have the concept, you only need a pointer. If you don't have the concept, you need the whole construction. [1]
Related: Sazen and Wisdom Cannot Be Unzipped
Yay! I've always been a big fan of the art you guys did on the books. The Least Wrong page has a sort of official magazine feel I like due to the extra design.
Completed the survey. I liked the additional questions you added, and the overall work put into this. Thanks!
Oh, got it.
I mean, that still sounds fine to me? I'd rather know about a cool article because it's highly upvoted (with the submitter getting money for that) than not know about the article at all.
If the money starts being significant I can imagine authors migrating to the sites where they can get money for their writing. (I imagine this has already happened a bit with things like substack)
You get money for writing posts that people like. Upvoting posts doesn't get you money. I imagine that creates an incentive to write posts. Maybe I'm misunderstanding you?
non.io is a reddit clone that costs $1 to subscribe, and it then splits the money among the users you upvote, giving more to those you upvote more. I think it's an interesting idea worth watching.
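My (possibly wrong) understanding of the mechanism, as a toy sketch: each subscriber's dollar gets divided among the creators they upvoted, in proportion to how many of that subscriber's upvotes each creator received.

```python
def split_subscription(upvotes_by_creator: dict[str, int],
                       subscription: float = 1.00) -> dict[str, float]:
    """Toy model: divide one subscriber's fee among the creators they upvoted,
    proportional to how many of that subscriber's upvotes each one received."""
    total = sum(upvotes_by_creator.values())
    if total == 0:
        return {creator: 0.0 for creator in upvotes_by_creator}
    return {creator: subscription * count / total
            for creator, count in upvotes_by_creator.items()}

# split_subscription({"alice": 3, "bob": 1}) -> {"alice": 0.75, "bob": 0.25}
```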
Maybe? I've not played it all that much, honestly. I was simply struck by the neat way it interacted with multiple players.
I think it could be easily tweaked or houseruled to be a peacewager game by just revealing all the hidden information. Next time I play I'll probably try it out this way.
War of Whispers is a semi-cooperative game where you play as cults directing nations in their wars. The reason it's cooperative is because each player's cult can change the nation they are supporting. So you can end up negotiating and cooperating with other players to boost a particular nation, because you both get points for it.
Both times I've played people started on opposite sides, then ended up on the same or nearly the same side. In one of the games two players tied.
There is still the counting of points, so it doesn't quite fit what you are going for here, but it is the closest game I know of where multiple players can start negotiating for mutual aid and all come out winning.
I think this is pointing at something real. Have you looked at any of the research with the MDA Framework used in video game development?
There are lots of reasons a group (or individual) goes to play a game. This framework found the reasons clustering into these 8 categories: Sensation, Fantasy, Narrative, Challenge, Fellowship, Discovery, Expression, and Submission.
Oh, that's right. I keep forgetting that LessWrong karma does the weighting thing.
Has anyone tried experimenting with EigenKarma? It seems like it or something like it could be a good answer for some of this.
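My rough mental model of it (a personalized-PageRank-style sketch, not the actual EigenKarma implementation): trust starts at a seed set of users you already trust and flows along upvote edges, so karma from accounts that no trusted user upvotes counts for little.

```python
import numpy as np

def eigen_karma(upvotes: np.ndarray, seed: np.ndarray,
                damping: float = 0.85, iters: int = 100) -> np.ndarray:
    """Sketch of an EigenKarma-style score. upvotes[i, j] is how many times
    user i upvoted user j; seed marks the users trusted a priori."""
    row_sums = upvotes.sum(axis=1, keepdims=True)
    # Each user spreads their trust across everyone they have upvoted
    transition = np.divide(upvotes, row_sums,
                           out=np.zeros(upvotes.shape), where=row_sums > 0)
    base = seed / seed.sum()
    score = base.copy()
    for _ in range(iters):
        score = damping * (score @ transition) + (1 - damping) * base
    return score

# Example: 3 users, user 0 is the trusted seed
# eigen_karma(np.array([[0, 2, 1], [0, 0, 3], [1, 0, 0]], dtype=float),
#             seed=np.array([1.0, 0.0, 0.0]))
```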
I think this elucidates the "everyone has motives" issue nicely. Regarding the responses, I feel uneasy about the second one. Sticking to the object level makes sense to me. I'm confused how psychoanalysis is supposed to work without devolving.
For example, let's say someone thinks my motivation for writing this comment is [negative-valence trait or behavior]. How exactly am I supposed to verify my intentions?
In the simple case, I know what my intentions are and they either trust me when I tell them or they don't.
It's the cases when people can't...
I think what you are looking for is prediction markets. The ones I know of are:
I don't see all comments as criticism. Many comments are of the building up variety! It's that prune-comments and babble-comments have different risk-benefit profiles, and verifying whether a comment is building up or breaking down a post is difficult at times.
Send all the building-comments you like! I would find it surprising if you needed more than 3 comments per day to share examples, personal experiences, intuitions and relations.
The benefits of building-comments are easy to get within 3 comments per day per post. The risks of prune-comments (spawning demon threads) are easy to mitigate by capping them at 3 comments per day per post.
I think we have very different models of things, so I will try to clarify mine. My best bubble-site example is not in English, so I will give another one: the Emotional Labor thread on MetaFilter, and MetaFilter as a whole. Just look at the sheer LENGTH of this page!
https://www.metafilter.com/151267/Wheres-My-Cut-On-Unpaid-Emotional-Labor
There are many more than 3 comments per person there.
From my point of view, this rule creates a hard ceiling that forbids the best discussions from happening, because the best discussions are creative back-and-forth. My best discussi...
Are we entertaining technical solutions at this point? If so, I have some ideas. This feels to me like a problem of balancing the two kinds of content on the site. Balancing babble to prune, artist to critic, builder to breaker. I think Duncan wants an environment that encourages more Babbling/Building. Whereas it seems to me like Said wants an environment that encourages more Pruning/Breaking.
Both types of content are needed. Writing posts pattern matches with Babbling/Building, whereas writing comments matches closer to Pruning/Breaking. In my mind...
If you feel like it should be written differently, then write it differently! Nobody is stopping you. Write a thousand roads to Rome.
Could Eliezer have written it differently? Maybe, maybe not. I don't have access to his internal writing cognition any more than you do. Maybe this is the only way Eliezer could write it. Maybe he prefers it this way, I certainly do.
Light a candle, don't curse the darkness. Build, don't burn.
This sequence has been a favorite of mine for finding little drills or exercises to practice overcoming biases.
https://www.lesswrong.com/posts/gBma88LH3CLQsqyfS/cultish-countercultishness
Cult or Not-Cult aren't two separate categories. They are a spectrum that all human groups live on.
I agree wholeheartedly that the intent of the guidelines isn't enough. Do you have examples in mind where following a given guideline leads to worse outcomes than not following the guideline?
If so, we can talk about that particular guideline itself, without throwing away the whole concept of guidelines to try to do better.
An analogy I keep thinking of is the typescript vs javascript tradeoffs when programming with a team. Unless you have a weird special-case, it's just straight up more useful to work with other people's code where the type signatures...
Whether you are building an engine for a tractor or a race car, there are certain principles and guidelines that will help you get there. Things like:
The point of the guidelines isn't to enforce a norm of making a particular type of engine. They exist to help groups of engineers make any kind of engine at all. People building engines make consistent, predictable m...
As always, the hard part is not saying "Boo! conspiracy theory!" and "Yay! scientific theory!"
The hard part is deciding which is which
Wow, this hit home in a way I wasn't expecting. I ... don't know what else to say. Thanks for writing this up, seriously.
"...see the disconnect—the reason I think X is better than Y is because as far as I can tell X causes more suffering than Y, and I think that suffering is bad."
I think the X's and Y's got mixed up here.
Otherwise, this is one of my favorite posts. Some of the guidelines are things I had already figured out and try to follow but most of them were things I could only vaguely grasp at. I've been thinking about a post regarding robust communication and internet protocols. But this covers most of what I wanted to say, better than I could say it. So thanks!
The Georgism series was my first interaction with a piece of economic theory that tried to make sense by building a different model than anything I had seen before. It was clear and engaging. It has been a primary motivator in my learning more about economics.
I'm not sure how the whole series would work in the books, but the review of Progress and Poverty was a great introduction to all the main ideas.
Related: Wisdom cannot be unzipped
Reading Worth the Candle with a friend gave us a few weird words that are sazen in and of themselves. Being able to put a word to something lets you get a handle on it so much better. Thanks for writing this up.
If the Highlights are too long, then print off a single post from each section. If that's too long, print off your top three. If that's too long, print off one post.
Summarizing the post usually doesn't help, as you've discovered. So I'm not really sure what else to tell you. You have a lot of curated options to choose from to start. The Highlights, the Best of LessWrong, the Curated Sequences, Codex. Find stuff you like, and print it off for your friend.
Or, alternatively, tell them about HPMOR. That's how I introduced myself to the concepts, in a story where the protagonist had need of them. So the techniques stuck with me.
If you have some of the LessWrong books, I would recommend those. They are small little books that you can easily lend out. That's what I've thought of doing before.
Really, starting is the hard part. Once I saw the value I was getting out of the sequences and other essays, I wanted to read more. So share a single essay, or lend a small book. Start small, and then if you are getting value out of it, continue.
You don't have to commit to reading the whole Sequences before you start. Just start with one essay from the highlights, when you feel like...
I wrote a bunch of reviews before I realized I wasn't eligible. Oops. Maybe the review button could be disabled for folks like me?
(I don't care whether my reviews are kept or discarded, either way is fine with me)
Writing up your thoughts is useful. Both for communication and for clarification to oneself. Not writing for fear of poor epistemics is an easy failure mode to fall into, and this post clearly lays out how to write anyway. More writing equals more learning, sharing, and opportunities for coordination and cooperation. This directly addresses a key point of failure when it comes to groups of people being more rational.
This post felt like a great counterpoint to the drowning child thought experiment, and as such I found it a useful insight. A reminder that it's okay to take care of yourself is important, especially in these times and in a community of people dedicated to things like EA and the Alignment Problem.
A great example of taking the initiative and actually trying something that looks useful, even when it would be weird or frowned upon in normal society. I would like to see a post-review, but I'm not even sure if that matters. Going ahead and trying something that seems obviously useful, but that is weird and that no one else is doing, is already hard enough. This post was inspiring.
This was a useful and concrete example of a social technique I plan on using as soon as possible. Being able to explain why is super useful to me, and this post helped me do that. Explaining explicitly the intuitions behind communication cultures is useful for cooperation. This post feels like a step in the right direction in that regard.
This is my favorite guide to confronting doom yet.