This post is even-handed and well-reasoned, and explains the issues involved well. The strategy-stealing assumption seems important, as a lot of predictions inherently rely on it being either essentially true or effectively false, and I think the assumption will often effectively be a crux in those disagreements, for reasons the post illustrates well.
The weird thing is that Paul ends the post saying he thinks the assumption is mostly true, whereas I thought the post was persuasive that the assumption is mostly false. The post illustrates that the u...
I've stepped back from thinking about ML and alignment the last few years, so I don't know how this fits into the discourse about it, but I felt like I got important insight here and I'd be excited to include this. The key concept that bigger models can be simpler seems very important.
In my words, I'd say that when you don't have enough knobs, you're forced to find ways for each knob to serve multiple purposes slash combine multiple things, which is messy and complex and can be highly arbitrary, whereas with lots of knobs you can do 'the thing you na...
This came out in April 2019, and bore a lot of fruit especially in 2020. Without it, I wouldn't have thought about the simulacra concept and developed the ideas, and without those ideas, I don't think I would have made anything like as much progress understanding 2020 and its events, or how things work in general.
I don't think this was an ideal introduction to the topic, but it was highly motivating regarding the topic, and also it's a very hard topic to introduce or grok, and this was the first attempt that allowed later attempts. I think we should reward all of that.
One factor no one mentions here is the changing nature of our ability to coordinate at all. If our ability to coordinate in general is breaking down rapidly, which seems at least highly plausible, then that will likely carry over to AGI, and until that reverses, it will continuously make coordination on AGI harder, same as everything else.
In general, this post and the answers felt strangely non-"messy" in that sense, although there's also something to be said for the abstract view.
In terms of inclusion, I think it's a question that deserves more thought, but I didn't feel like the answers here (in OP and below) were enlightening enough to merit inclusion.
Self-review: Looking back, this post is one of the first sightings of a simple, very useful concrete suggestion to have chargers ready to go literally everywhere you might want them, and that is a remarkably large life improvement that got through to many people and that I'm very happy I realized.
However, that could easily be more than all of this post's value, because essentially no one embraced the central concept of Dual Wielding the phones themselves. And after a few months, I stopped doing so as well, in favor of not getting confused about which p...
This is a very important point to have intuitively integrated into one's model, and I charge a huge premium to activities that require this kind of reliability. I hope it makes the cut.
I also note that someone needs to write The Costs of Unreliability and I authorize reminding me in 3 months that I need to do this.
After reading this, I went back and also re-read Gears in Understanding (https://www.lesswrong.com/posts/B7P97C27rvHPz3s9B/gears-in-understanding) which this is clearly working from. The key question to me was, is this a better explanation for some class of people? If so, it's quite valuable, since gears are a vital concept. If not, then it has to introduce something new in a way that I don't see here, or it's not worth including.
It's not easy to put myself in the mind of someone who doesn't know about gears.
I think the original Gears in Understandin...
The central point here seems strong and important. One can, as Scott notes, take it too far, but mostly yes one should look where there are very interesting things even if the hit rate is not high, and it's important to note that. Given the karma numbers involved and some comments sometimes being included I'd want assurance that we wouldn't include any of that with regard to particular individuals.
That comment section, though, I believe has done major harm and could keep doing more even in its current state, so I still worry about bringing more focus...
This post is hard enough to get through that the original person who nominated it didn't make it, and I also tried and gave up, in favor of looking at other things instead. I agree that it's possible there is something here, but we didn't build upon it, and if we put it in the book people are going to be confused as to what the hell is going on. I don't think we should include it.
Echoing previous reviews (it's weird to me the site still suggested this to review anyway, seems like it was covered already?) I would strongly advise against including this. While it has a useful central point - that specificity is important and you should look for and request it - I agree with other reviewers that the style here is very much the set of things LW shouldn't be about, and LWers shouldn't be about, but that others think LW-style people are about, and it's structuring all these discussions as if arguments are soldiers and the goal is to win w...
Echoing Raemon that this has become one of my standard reference points and I anticipate linking to this periodically for a long time. I think it's important.
I'm also tagging this as something I should build upon explicitly some time soon, when I have the bandwidth for that, and I'm tagging Ben/Raemon to remind me of this in 6 months if I haven't done so yet, whether or not it makes the collection.
These issues are key ones to get right, involve difficult trade-offs, and didn't have a good descriptor that I know about until this post.
Consider this as two posts.
The first post is Basketballism. That post is awesome. Loved it.
The second post is the rest of the post. That post tries to answer the question in the title, but doesn't feel like it makes much progress to me. There's some good discussion that goes back and forth, but mostly everyone agrees on what should be clear to all: No, rationalism doesn't let you work miracles at will, and we're not obviously transforming the world or getting key questions reliably right. Yes, it seems to be helpful, and generally the people who do i...
The problem with evaluating a post like this is that long post is long and slow and methodical, and making points that I (and I'm guessing most others who are doing the review process) already knew even at the time it was written in 2017. So it's hard to know whether the post 'works' at doing the thing it is trying to do, and also hard to know whether it is an efficient means of transmitting that information.
Why can't the post be much shorter and still get its point across? Would it perhaps even get the point across better if it was much shorter, bec...
So I reread this post, found I hadn't commented... and got a strong desire to write a response post until I realized I'd already written it, and it was even nominated. I'd be fine with including this if my response also gets included, but very worried about including this without the response.
In particular, I felt the need to emphasize the idea that Stag Hunts frame coordination problems as going against incentive gradients and as being maximally fragile and punishing, by default.
If even one person doesn't get with the program, for any reason, ...
At first when I read this, I strongly agreed with Zack's self-review that this doesn't make sense to include in context, but on reflection and upon re-reading the nominations, I think he's wrong and it would add a lot of value per page to do so, and it should probably be included.
The false dichotomy this dissolves, where either you have to own all implications, so it's bad to say true things that imply other true things that would have unpleasant consequences if focused upon, or it has to be fine to ignore all the extra communication that's involved in ...
So first off... I'd forgotten this existed. That's obviously a negative indication in terms of how much it guided my thinking over the past two years! It also meant I got to see it with fresh eyes two years later.
I think the central point the post thinks it is making is that, extending on the original econ paper, search effectiveness can rapidly become impossible to improve by expanding the size of one's search, if those you are searching understand they are in competition. To improve results further, one must instead improve average quality in the searc...
As someone who was involved in the conversations, and who cares about and focuses on such things frequently, this continues to feel important to me, and seems like one of the best examples of an actual attempt to do the thing being done, which is itself (at least partly) an example of the thing everyone is trying to figure out how to do.
What I can't tell is whether anyone who wasn't involved is able to extract the value. So in a sense, I "trust the vote" on this so long as people read it first, or at least give it a chance, because if that doesn't convince them it's worthwhile, then it didn't work. Whereas if it does convince them, it's great and we should include it.
Building off Raemon's review, this feels like it is an attempt to make a 101-style point that everyone needs to understand if they don't already (not as rationalists, but as people in general) but that seems to me like it fails because those reading it will fall into the categories of (1) those who already got it and (2) those who need to get it but won't.
This was important to the discussions around timelines at the time, back when the talk about timelines felt central. This felt like it helped give me permission to no longer consider them as central, and to fully consider a wide range of models of what could be going on. It helped make me more sane, and that's pretty important.
It was also important for the discussion about the use of words and the creation of clarity. There's been a long issue of exactly when and where to use words like "scam" and "lie" to describe things - when is it accurate, when is it ...
This points out something true and important that is often not noticed, and definitely is under-considered. That seems very good. The question I ask is, did this cause other people to realize this effect exists, and to remember to notice and think about it more? I don't know either way.
If so, it's an important post, and I'd be moderately excited to include it.
If not, it's not worth the space.
I'm guessing this post could be improved/sharpened relatively easily, if it did get included - it's good, and there's nothing wrong exactly, but feels l...
I've known about S-curves for a long time, and I don't think I read this the first time. If you don't know S-curves exist, this has good info, and it seems to be well explained. There are also a few useful nuggets otherwise. As someone who has long known of S-curves, it's hard for me to say how big an insight this is to others, but my instinct is that while I have nothing against this post and I'm very glad it exists, it isn't sufficiently essential to justify including.
This idea seems obviously correct, all the responses to objections seem correct, and the chance of this happening any time soon is about epsilon.
In some sense I wish the reasons it will never happen were less obvious than they are, so it would be a better example of our inability to do things that are obviously correct.
The question is, how much does this add to the collection. Do we want to use a slot on practical good ideas that we could totally do if we could do things, and used to do? I'm not sure.
These are good lists of open problems, although, as Ben notes, they are bad lists if taken to be all the open problems. I don't think that is the fault of the post, and it's easy enough to make clear the lists are not meant to be complete.
This seems like a spot where a good list of open problems is a good idea, but here we're mostly going to be taking a few comments. I think that's still a reasonable use of space, but not exciting enough to think of this as important.
I'm all for such things existing and a book entirely composed of such things seems like it should exist, but I don't know what it would be doing in this particular book.
The combination of the two previous reviews, by hamnox and fiddler, seems to summarize it: it's a pure happy infodump that doesn't add much, that gets you a lot of upvotes, and that says more about the voting system than about what is valuable.
I use this concept often, including explicitly thinking about what (about) five words I want to be the takeaway or that would deliver the payload, or that I expect to be the takeaway from something. I also think I've linked to it quite a few times.
I've also used it to remind people that what they are doing won't work because they're trying to communicate too much content through a medium that does not allow it.
A central problem is how to create building blocks that have a lot more than five words, but where the five words in each block can do a reasonable substitute job when needed.
This falls under the category of "things it is good to have a marker to point at," and it does a much better job than any previous marker. It also made me more aware of this principle, and caused me to get more explicit about ensuring the possibility of no in some interactions even though it was socially slightly awkward.
This post actively helped me improve how I interacted with my children by helping motivate what otherwise seemed like boring/stupid actions, and gave me a label to put on things.
I endorse this perspective and have since well before this post, and it was great to have it said explicitly and cleanly by someone else. This is especially true because I believe most people disagree with it. I've linked back to this a few times.
The only way to get information from a query is to be willing to (actually) accept different answers. Otherwise, conservation of expected evidence kicks in. This is the best encapsulation of this point, by far, that I know about, in terms of helping me/others quickly/deeply grok it. Seems essential.
Reading this again, the thing I notice most is that I generally think of this point as being mostly about situations like the third one, but most of the post's examples are instead about internal epistemic situations, where someone can't confidently conclude or ...