Is this feature likely to be released in the near future?
I personally have a post sitting in drafts that is far too long to rely on footnotes, as it would break the reading experience. Sidenotes would be ideal for these kinds of posts, but it would be almost as good if you had an option to put them at the end of a paragraph in a little box or something, as that would translate well to the mobile reading experience.
Agree, I am also confused about this as a bystander.
It wasn't announced on the list this year, so I'm not sure what the situation is. If I were to guess, I'd say the person who set things up last year didn't respond to email requests for information about this year's plans.
If you want to take the initiative to organize the Manchester meetup, you are welcome to do so.
Update: At 11:30 we are moving to Foundation Coffee on Whitworth Street
(Disclaimer: This is a brief low effort reply because I've spent a large amount of time on this topic with very little to show for it, but I also don't want to ignore questions which people have a reasonable expectation of getting a response to.)
If you don't write a separate post about it, you could reply to this comment with the results. (I have nothing further to add at this time.)
I think the question becomes much more social than technical. It's not about how to design the UI, it's about evolving cultural norms. I would say it's both, it's getting users to want to do something and having the UI make it easy for them to do it.
(As a side note, for some reason people have become more reluctant in the past decade to rebel against interfaces and the implicit messages sent by its design choices. Like, until about last year you could not get people to use Discord as a tool for serious work, even though it was better than Slack, simply ...
I think those things would be a step in the right direction, but I'd be surprised if they turned out to be sufficient. Remember, LessWrong already notifies the subset of the userbase most likely to reply (i.e. users who have already replied) when there are new comments, but those users choose to ignore them after ~2 weeks.
For things to actually change, I predict that we'd first need a widespread perception that this behaviour is a problem, then have various UI nudges put in place. The only way you'd get the desired behaviour change without that consensus is if the UI went beyond nudging and aggressively pushed it as the default.
I think points 2 and 3 are correct, but the thing I wanted to convey was that without strong explicit preferences for things to be different, it's unlikely that the necessary changes would be made.
I think that while 1 is often true in general, it is not true in this specific case. We already have the positive sum solution (notifications) which allows anyone to continue discussions for as long as they like without having to manually check for new replies, and this clearly isn't enough to unstick the norm of avoiding comment sections once a post is a few wee...
I don't think it's completely to blame, but I suspect that the way the LessWrong homepage is set up encourages this cultural norm. LessWrong 2.0 has paid some attention to the need to revisit content, but the homepage is still much closer to Reddit (where discussions die out quickly) than a forum (where they don't).
My reason for thinking the website is not completely to blame is that it seems to reflect the revealed preferences of the users. If there was a strong (and conscious) preference for long running discussions, people would work around it via the n...
I don't think there are any important caveats, and I also wouldn't expect there to be. The reason I wouldn't is that if the best things in life weren't cheap, it would mean that the best things in life are the things that require lots of highly skilled labour that can't be amortised across a large number of people.
When things are expensive and highly desired, market forces incentivise people to put a lot of effort into making those things less expensive, so the only things that tend to remain that way are:
A) Things where there are hard constraints on how m...
To clarify, I was thinking more about the overall effect of the weather on people. You are not indoors all the time, nor can you cover every square inch of your body with warm clothing. At least from my point of view, being outdoors in 20F wind in a winter coat is worse than 85F in shorts + t-shirt. I'm not disputing that air conditioning is more technologically complex than a fireplace, I just don't think it's a major factor.
Ah, that makes more sense. I think if you'd posted this last year I would have assumed you were making an individual case, but the recent interest in moving the hub away from Berkeley made me think otherwise.
From what I understand, the case for Boston is as follows:
A. Similar good things
Big improvements (for me -- YMMV):
1. Boston has two of the world's best few universities very close together. (It's hard to live close to Stanford without studying there, and it's a huge trek from Stanford to Berkeley).
2. There's an obvious Schelling point in Boston for where to live (Camberville), while interesting people/companies/organizations in the Bay are in SF, Oakland, Berkeley, and South Bay/Peninsula.
3. Boston is closer to NYC (and the other big East Coast cities) and Europe.
I'd guess Camberville is significantly cheaper in term...
Fixed, thanks.
What about homeschooling? Many people within the community plan to homeschool their children, yet a quick Google search indicates that homeschooling is illegal in all of Germany and you will be arrested if you attempt to do so.
Here's my incredibly detailed pitch for Manchester, UK.
If anyone has feedback, you can reply here or comment on the article itself.
I've given this a strong downvote, but I'm writing a comment so the OP and passersby aren't confused why a long comment that provides relevant answers is (currently) sitting at -3 karma:
I agree, but I also think there's a bit of a chicken and egg problem there too. Leaders fear that enforcing order will result in a mutiny, but if that fear is based on an accurate perception of what will happen, telling leadership to grow a pair is not going to fix it.
Thinking about my own experiences of seeing these bottlenecks in action, I don't think either is a subset of the other. It seems more like there's a ton of situations where the only way forward is for a few people to grow a spine and have the tough conversations, and an adjacent set of problems that need centralised competent leadership to solve, but it's in short supply for the usual economic reasons plus things like "rationalists won't defer to anyone they don't personally worship unless bribed with a salary".
As food for thought on the last line, here's my comment from a previous post on moral mazes:
It was meant to include Canada (because I suspect it still applies to them and I was unsure if they were included in Moral Mazes) but not Mexico or any countries south of Mexico which are technically in North America. This was not clear in retrospect and I have edited my comment in light of that.
Fortunately or unfortunately, this problem seems much worse in America compared to other western countries. Unfortunately, because most of the audience lives and works there. Fortunately, because it means large organisations aren't destined to become hellholes. By no means are they absent, but when I researched this they seemed far less intense.
Have you looked into the workings of large organisations outside of the US or Canada?
As George Carlin says, some people need practical advice. I didn't know how to go about providing what such a person would need, on that level. How would you go about doing that?
The solution is probably not a book. Many books have been written on escaping the rat race that could be downloaded for free in the next 5 minutes, yet people don't, and if some do in reaction to this comment they probably won't get very far.
Problems that are this big and resistant to being solved are not waiting for some lone genius to find the 100,000 word combina...
I've been following your whole series on moral mazes. I felt the rest of them were important because they explained why "working for the man" was bad in explicit terms, but this one was a pleasant surprise. Until about halfway through this post, I was under the impression you were articulating the dangers of moral mazes in the abstract while carefully ignoring any implications it would have for your own career on Wall Street. The point I realised you'd actually quit was a jaw-dropping moment, given that I already knew you weren't s...
Thank you for all that. I worry about the same thing - that this will not feel/be sufficiently actionable for people, and they won't be that likely to change their situations based on it. As George Carlin says, some people need practical advice. I didn't know how to go about providing what such a person would need, on that level. How would you go about doing that? It feels like a book-length or longer problem, the same way one can't write a post on how to prepare for a street fight that would actually be that good, beyond giving basic pointers (like run away).
Often what needs reviewing is less like "author made an unsubstantiated claim or logical error" and more like "is the entire worldview that generated the post, and the connections the post made to the rest of the world, reasonable?"
I agree with this, but given that these posts were popular because lots of people thought they were true and important, deeming the entire worldview of the author flawed would also imply the worldview of the community was flawed. It's certainly possible that the community's entire worldview is flawed, but even if you believe that to be true, it would be very difficult to explain in a way that people would find believable.
Those numbers look pretty good in percentage terms. I hadn't thought about it from that angle and I'm surprised they're that high.
FWIW, my original perception that there was a shortage was based on the ratio between the quantity of reviews and the quantity of new posts that have been written since the start of the review period. In theory, the latter takes a lot more effort than the former, so it would be unexpected if more people do the higher effort thing automatically and fewer people do the lower effort thing despite explicit calls to action and $2000 in prize money.
I'm not surprised to learn that is the case.
This is my understanding of how karma maps to social prestige:
The shortage of reviews is both puzzling and concerning, but one explanation for it is that the expected financial return of writing reviews for the prize money is not high enough to motivate the average LessWrong user, and the expected social prestige for commenting on old things is lower per unit of effort than writing new things. (It's certainly true for me, I find commenting way easier than posting but I've never got any social recognition from it, whereas my single LW post introduced me to about 50 people.)
Another potential reason is that it...
Raw numbers to go with Bendini's comment:
As of the time of writing this comment, there've been 82 reviews on the 75 qualified (i.e., twice-nominated) posts by 32 different reviewers. 24 reviews were by 18 different authors on their own posts.
Whether this counts as a shortage, is puzzling, or is concerning is a harder question to answer.
My quick thoughts:
I find this theory intuitively plausible, and I expect it will be very important if it's true. Having said that, you didn't provide any evidence for this theory, and I can't think of a good way to validate it using what I currently know.
Do you have any evidence that people could use to check this independently?
One possibility is that
1. The DMV is especially bad, because people don't have to tolerate using it on a weekly basis.
2. The USPS isn't especially good, but it's hard to notice because American delivery companies aren't much better by comparison.
I've already given this an upvote, but I'm also leaving a comment because I think LessWrong has a shortage of this kind of content. I think broad personal overviews are particularly important because a lot of useful information you can get from "comparing notes" is hard to turn into standalone essays.
Yesterday I noticed that some of what I'd attributed to cultural differences in communication strength between myself and the LessWrong audience was actually due to differences in when I would choose to verbalise something. I originally thought this was me opting to state my positions clearly instead of couching them in false uncertainty so they would sound less abrasive, but yesterday I left some comments where I found myself wanting to use vocabulary that was significantly more "nuanced" than it used to be (example) and yet I didn't...
I like this post a lot, but the example debates that seem like intractable aesthetic disagreements seem to be missing 2 key ideas that are preventing resolution:
1. Shared verbal acknowledgement that regardless of the aesthetic considerations, the status quo is not working. If you're debating the merits of "everyone pitch in" vs "specialise and outsource" and you've failed to recognise that people are generally not clearing up after themselves or funnelling money towards the problem, your first order of business shouldn'...
For what it's worth, I think that post made the right tradeoff. There will probably be some people who glossed over it due to the lack of examples, but I think that was an acceptable price to pay.
What I'm referring to is when the community does this by default, not when the author has explicitly weighed up the pros and cons. Not wanting to get into an issue is okay in isolation, but when everyone does this it impedes the flow of information in ways that make it even more difficult to avoid talking past each other.
I don't disagree with that, but I do think one reason we find it difficult to form good models and coordinate is that there's an insane norm of only ever talking about issues in abstract terms like X and Y. Maybe the issue in question here is super sensitive, since I have no idea what you are talking about, but "raising awareness of general patterns" often seems to be used as a (mostly subconscious) justification for avoiding the object level because it might make someone important look bad.
Ah.
My first reaction was thinking of a few scenarios that were analogous to the original framing, one example being "if it takes you years to coordinate the local removal of [obvious abuser], why do you think you will be able to coordinate safe AI development on a global scale?"
This isn't a pet issue of mine, but I suspect it is important to be able to say things like this. I guess my overall view is that crystallising this pattern might be putting duct tape over a more structural problem.
I have no trouble believing that this is a common thing to hear if you're in a position of power, but what about situations where this is correct? After all, if it was never correct, people would never find it persuasive.
Are there any heuristics you use to figure out when this is likely to be true?
I'm reading this again now because I remember liking it and wanted to link it in something I'm writing, however:
Yes, some countries printed too much money and very bad things happened, but no countries printed too much money because they wanted more inflation. That’s not a thing.
That is absolutely a thing that some governments do. Even if we disregard hyperinflation, when a government's tax brackets, spending commitments and sovereign debt are denominated in nominal currency and it needs more money for stuff, the polit...
(Site meta: it would be useful if there was a way to get a notification for this kind of mention)
Some thoughts about specific points:
the whole point of this sequence is to go "Yo, guys, it seems like we should actually be able to be good at this?"
This is true for the sequence overall, but this post and some others you've written elsewhere follow the pattern of "we don't seem to be able to do the thing, therefore this thing is really hard and we shouldn't beat ourselves up about not being able to do it" that seems to come ...
I strongly support this suggestion.
a) that you don't think disagreements take a long time for the reasons discussed in the post
Disagreements aren't always trivial to resolve, but if you've been actively debating an issue for a month and zero progress has been made, then either the resolution process is broken or someone is doing something besides putting maximum effort into resolving the disagreement.
b) that rationalists should easily be able to avoid the traps of disagreements being lengthy and difficult if only they "did it right".
Maybe people who call themselves rationalists "should" be a
...I'm glad this post was written, but I don't think it's true in the sense that things have to be this way, even without new software to augment our abilities.
It's true that 99% of people cannot resolve disagreements in any real sense, but it's a mistake to assume that disagreements are inherently intractable just because Yudkowsky couldn't resolve a months-long debate with Hanson and the LessWrong team can't resolve their disagreements.
If the Yud vs Hanson debate was basically Eliezer making solid arguments and Hanson responding with interesting contrarian points bec
...I deliberately avoided giving a citation because I don't remember which paper I read that confirmed it, so searching for one that backs up a cached memory to appear more rigorous would be bad epistemic practice.
Instead, my confidence that this is true rests on several pieces of circumstantial evidence:
Yes. When it comes to tolerance of stimulant drugs, there is such a thing as a free lunch.
While you will get some tolerance, and ceasing use will give you some withdrawal effects, tolerance will eventually plateau unless you are taking far more than you should be. After tolerance is accounted for, using caffeine will still give you a higher baseline of productivity than taking nothing at all.
How do you know?
I don't get any value out of content-free comments, but a sentence or two explaining what someone liked about my post gives me better feedback than an anonymous upvote. And even if it's just a phatic "Good post!", just knowing who said it can be quite useful.
I'd like to second this and say my experience has also been completely different.
There are some conversations that make sense to have 1v1, and most of the value I've gained from writing things has been when someone contacts me in private.
It does seem that while LessWrong doesn't actively discourage it, the site's UX makes it quite inconvenient to have those interactions.
Squeezing everyone into college-dorm-style housing would certainly reduce living costs, but people who want that can already do it. Most don't.
You're right that dorm-style housing is an existing option, and most people don't want to live in them for obvious reasons. However:
(Thoughts translated from private message)
As I've said before, if political solutions were viable then this would have been solved 5+ years ago.
Addressing the problem will require an approach that doesn't assume you can build more housing in the expensive metro areas with good jobs. While that doesn't leave many options, I can think of at least 3 that are somewhat practical:
1. Find ways to increase the quality of the average grouphouse so more people want to live in them.
2. Coordinate groups of people to move from NIMBY cities with 10/10 job...
Your solution is... a bunk bed with cabinets built in?
Plausible theory.
In the scenario where a breakthrough leads to a coordination takeoff, what implications do you think that would have for alignment/AI safety research?