On terminal vs instrumental truth, it's worth considering the hypothesis that domains where it's better to be overconfident are best modeled as confidence games, enterprises that claim to be about real things but are mainly about building extractive coalitions to exploit people who take the pretext literally.
Startups, the example you gave, often seem to work this way in practice. Sometimes the suckers are employees who believe inflated valuations (which are often offset by hidden concessions to investors). Other times they're the customers, who buy something little better than vaporware on the basis of a momentum narrative. (This seems to have been true with Theranos.)
Instead of remaining in denial about the extent to which we're participating in fraudulent enterprises, and consequently doing poorly at them, we should decide in each case whether we want to participate in lies for profit (and if so, be honest with ourselves about what we're doing), or whether we want to play a different game.
I think the confidence game framework is pointing at a useful thing - an important red flag to pay attention to. But I think your examples tend to blur it together with an orthogonal axis of "is the thing exploitative?".
I think basically all things worth doing that involve 2 or more people are something of a confidence game (and quickly scale up how confidence-game-like they are as they become more ambitious).
Silicon Valley Startup Culture has a lot of exploitation going on, and maybe this isn't coincidence, but some things that seem confidence-game-like (or maybe, "seem like the thing Ray cares about right now that inspired him to write this post, which may or may not be what Ben is pointing at") include:
1. Honest businesses starting from their own funding, trying to build a good product, pay fair wages and treat their neighbors well. Their chance of success is still probably less than 50%, and succeeding usually requires at least the founders to all believe (or alieve) that this is worth the effort to give it their all. I currently believe maintaining that alief on most human-hardware requires some degree of overconfidence. (link to Julia Galef's piece on this debate)
2. Secular Solstice was almost literally this - a holiday is only real if people believe it's real. The first 2-3 years were "on credit" - I got people to believe in it enough to give it a chance. Now it's become sufficiently real that people starting to celebrate in a new location are earnestly buying into an "existing real holiday" that has grown beyond any one person's control.
3. Quakers, Less Wrong, or Academia - Even when your goal is just thinking about things together, you still need to convince people to actually do it, and to put effort into it. When the "product" is thinking, certain forms of selling-yourself or maintaining-the-vision that'd work for a business no longer work (i.e. you can't lie), but it's still possible to accidentally kill the thing if a critical mass of participants say too many true (or plausible) but pessimistic things at too early a stage and create an atmosphere where putting in the work doesn't feel worthwhile.
I have been persuaded that "have a community that never compromises on the pursuit of truth" is really valuable, and trumps the sorts of concerns that instrumental truthseekers tend to bring up. But I am an instrumental truthseeker, and I don't think those concerns actually go away even if we're all committed to the truth, and I think navigating what "committing to truth" even means is still a confusing question.
(I know this is all something you're still in the process of mulling over, and last I checked you were trying to think these things through on your own before getting tangled up in social consensus pressure stuff. At some point, when you've had time to sort through those things, I'm very interested in talking through it in more detail)
On honest businesses, I'd expect successful ones to involve overconfidence on average because of winner's curse. But I'd also expect people who found successful businesses to correctly reject most ideas they consider, and overconfidence to cause them to select the wrong business plan. It's possible that this is true but that founders switch to overconfidence once they're committed, perhaps as part of a red queen's race where founders who don't exaggerate their business's prospects don't attract employees, investors, customers, etc.
As far as the motivation of the founders themselves goes, though, if success requires overconfidence, it seems to me that either these businesses actually weren't worth it in expectation for their founders (in which case founders are harmed, not helped, by overconfidence), or there's some failure of truthseeking where people with nominally calibrated beliefs irrationally avoid positive-EV ventures.
Eliezer seems like something of a counterexample here - if I recall correctly, his prior estimate that HPMoR would do what it did (recollected after the fact) was something like 10%; he just tries lots of things like that when they're EV-positive, and some of them work.
Secular Solstice seems basically honest. Speculative bubbles cause harm in part because they tend to be based on distorted narratives about underlying facts, such that participating in the bubble implies actually having the facts wrong (e.g. that some type of business is very profitable). With a process that plays out entirely at a single level of social reality (the holiday's real iff enough people declare it real), there's not some underlying thing being distorted.
Something else that may be confusing here is that in our language, predictions and performative statements of intent often sound identical. Strunk and White's discussion of the difference between "will" and "shall" is a clear example of an attempt to promote this distinction. Again, there could be incentives pointing towards exaggeration (and towards recasting statements of intent as predictions).
EA and LessWrong seem like examples of things that succeeded at becoming movements despite a lot of skepticism expressed early on. On the other hand, a substantial amount of the promotion for these has been promotion of still-untested vaporware, and I'm not sure I uniformly regret this.
Definitely agree that a good business person needs to be epistemically sound enough to pick good plans. I think the idealized honest business person is something like Nate Soares, separating their ability to feel conviction from their ability to think honestly. But I think that's beyond most people (in practice, if not in theory). And I think most business ventures, even good ones you have reason for confidence in, would still have a less than 50% success rate. (I think the "switch to overconfidence once you commit" strategy is probably good for most people)
(I do think you can select business ventures that would generate value even if the business ends up folding within a few years, and that an ecosystem that pushed more towards that than random-pie-in-the-sky-with-VC-exploitation-thrown-in is probably better. But because of the competitive nature of business, it's hard for something to end up with a greater than 50% success rate)
[mental flag: I notice that I've made a prediction that's worth me spending another hour or two thinking about, fleshing out the underlying model of and/or researching]
> Eliezer...
Oddly enough I actually have a memory of Eliezer saying both the thing you just referenced and also the opposite (i.e., that he tries a lot of things and you don't see the ones that don't succeed, but also that he had a pretty good idea of what he was doing with HPMOR, and while it succeeded better than he expected... he did have a pretty good model of how fanfiction and memespreading worked, and he did expect it to work "at all" or something)
Your recollection about Eliezer seems right to me.
Also I guess you're right on idealized businessperson vs nearly every actual one. But it's worth noting that "X is very rare" is more than enough reason for X to be rare among successes.
[Note: This comment seems pretty pedantic in retrospect. Posting anyway to gauge reception, and because I'd still prefer clarity.]
On honest businesses, I'd expect successful ones to involve overconfidence on average because of winner's curse.
I'm having trouble understanding this application of winner's curse.
Are you saying something like the following:
1. People put in more resources and generally try harder when they estimate a higher chance of success. (Analogous to people bidding more in an auction when they estimate a higher value.)
2. These actions increase the chance of success, so overconfident people are overrepresented among successes.
3. This overrepresentation holds even if the "true chance of success" is the main factor. Overconfidence of founders just needs to shift the distribution of successes a bit, for "successful ones to involve overconfidence on average".
First, this seems weird to me because I got the impression that you were arguing against overconfidence being useful.
Second, are you implying that successful businesses have on average "overpaid" for their successes in effort/resources? That is central to my understanding of winner's curse, but maybe not yours.
Sorry if I'm totally missing your point.
You get winner's curse even if results are totally random. If you have an unbiased estimator with a random error term, and select only the highest estimate in your sample, the expected error is positive (i.e. you probably overestimated it).
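To make the claim concrete, here's a minimal simulation sketch (mine, not from the original exchange; all names and parameters are illustrative). Each venture gets a true value, each estimate is that value plus zero-mean noise (so the estimator is unbiased), and we look at the error of whichever venture had the highest estimate:

```python
import random

random.seed(0)
trials = 10_000      # number of simulated "markets"
sample_size = 20     # ventures per market
selected_errors = []

for _ in range(trials):
    # True values and unbiased noisy estimates of them.
    true_values = [random.gauss(0, 1) for _ in range(sample_size)]
    estimates = [v + random.gauss(0, 1) for v in true_values]
    # Select the venture with the highest *estimate* (the "winner").
    best = max(range(sample_size), key=lambda i: estimates[i])
    selected_errors.append(estimates[best] - true_values[best])

# Positive on average: the selected estimate tends to be an overestimate,
# even though each individual estimate was unbiased.
print(sum(selected_errors) / trials)
```

The average printed error comes out clearly positive, which is the winner's-curse point: conditioning on having the highest estimate in the sample is enough to produce systematic overestimation, with no help from extra effort or motivated reasoning.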
Oh, an important issue that maybe should have been part of the post, but which I'm still fleshing out:
Truth and Weirdness
I notice the tension in "should we be weird or not?", with some people saying "I want to invite academics over to Less Wrong who are really knowledgeable and care about truth and will further the discussion, and it'd be great if we didn't see Harry Potter fanfiction on the front page that'd turn them off."
Other people respond with something like "But. The manner in which people generate ideas here is tightly coupled with feeling free, feeling excited. There is a spark at the center of all this that drives the pursuit of truth and if you try to make things more presentable, you're more likely to kill that spark than to attract clear-thinking academics that further the discussion."
This is sort of like the "blunt honesty vs wording-things-so-they-don't-offend-people" thing. Different people respond to different things. If you naively try to kill the blunt-honesty without making sure that it's still possible for people to say uncomfortable things that social pressures push against saying, you end up not just silencing people but killing the generator for an important source of insights.
But whereas my intuition for the Blunt Honesty thing is that it's worth cultivating the skill of saying uncomfortable things in a less-blunt way, my intuition for the "let's try to be less weird" thing is more of a creeping sense of dread at losing something precious.
Which is what I expect the pro-blunt-honesty people to feel when anyone talks about "let's make blunt honesty less acceptable here."
And this is a competing access need that isn't actually better or worse for truthseeking, but which matters a lot to how individuals truthseek. And one way or another the mods and site designers are going to need to make a call about it.
My feeling about weirdness is (I think) that it would be a mistake to try to be less weird or to pretend to be less weird, but it might be appropriate to make the weirdness less immediately prominent. For instance, to take one of your examples, maybe not have HPMOR take up 1/4 of the "recommended reading" space at the top of the front page, but still feel free to call things "Project Hufflepuff" and refer to HJPEV in conversation with the expectation that most readers will understand and so forth.
It seems kinda unlikely that this level of weirdness-hiding is going to Kill The Spark, given e.g. that old-LW doesn't have HPMOR on its front page at all.
(I'm trying to think of other notable varieties of weirdness and what my suggestion translates to, but actually I'm not sure there's anything else that comes close in visible weirdness : useful weirdness ratio.)
Blunt honesty incoming:
But whereas my intuition for the Blunt Honesty thing is that it's worth cultivating the skill of saying uncomfortable things in a less-blunt way, my intuition for the "let's try to be less weird" thing is more of a creeping sense of dread at losing something precious.
Which is what I expect the pro-blunt-honesty people to feel when anyone talks about "let's make blunt honesty less acceptable here."
That may be true, when you make this “less blunt honesty!” change to an existing, active community. But when you build a new place, and say “less blunt honesty here!”, the Blunt Honesty crowd feels no creeping dread, but simply shrugs and leaves.
FWIW, this didn't feel at all like blunt honesty, just regular honesty (although I suppose if you're carving things along "did I bother to put a smooth-it-out-nicely filter" vs "did I say something people will be offended by", I could see an argument for thinking more about the former)
I think, having written this post over several months with evolving views in the meantime... my strongest remaining opinion is:
If you're going to do the blunt honesty thing, and it seems nontrivial to reword things so people will be less offended, a strong* signal of collaborative building is really important. A community where people criticize and hold each other to a high standard can produce a lot of useful, interesting things. A community where people mostly complain at each other probably will not.
(I also just realized I forgot to include "actually build a thing" along with "brainstorm solutions" and "active listening" in the "how to collaboratively build" section. This was triggered by remembering that when you criticized LW2.0 for font choice etc., you accompanied that with actually making a stylesheet that addressed the issues you mentioned while sticking to the general aesthetic LW2.0 was going for. Which I think was pretty cool)
* by "strong" I mean "strong enough that people feel that you're fundamentally out to help", and how much effort it takes to send that signal varies by situation.
if you're carving things along "did I bother to put a smooth-it-out-nicely filter"
This.
a strong* signal of collaborative building is really important
You answered that yourself in the next paragraph, in exactly the way I was going to, so I think we’re on the same page here.
My first instinct when I see something being done wrong is to say “this is bad and wrong”. My second instinct is to do it myself. This has worked out well enough for me, all things considered, but it doesn’t scale very well, and a community where the pattern is that the first instinct never gets any results and I have to fall back to the second one all the time, is a community that I quickly come to question the value of participating in. (This is similar to the thing some managers do where if you bring up a problem, you’re assigned to fix it; well, guess what—that disincentivizes bringing up problems. But a web forum is not a job, so instead of keeping quiet, people can just leave.)
(I’ve now alluded to taking my toys and going home two or three times, which puts me in danger of sounding like a histrionic whiner, so I’ll leave this topic of conversation alone now. I just want to mention, before I do, that this being Less Wrong and not some random nowhere place, I don’t actually want to take my toys and go home, which is, in fact, the reason I bother to complain or to make custom stylesheets or whatever.)
*nods*
I very much believe "if you can't actually trust people to take things seriously, then the system is missing a crucial layer of trust."
The central thesis I've been thinking about this year is something like: "there's a lot of kinds of trust that have to happen at once for a valuable community to work". These include:
1. trust that people are actually trying to work together to make a good thing
2. trust that when people say they will do a thing, they will do it
3. trust that people will say (and do) things that matter, and if they're incapable of doing so, level themselves up until they can. (i.e. if you will resolve #2 by never committing to anything... well, maybe that's a step up but it's not sufficient).
4. trust people to be aware/introspective enough to not fall into a lot of failure modes
5. trust people at all (i.e., that they're not lying or being manipulative)
And all of that requires both that people be willing to trust, and that they have the skills and dedication to actually be worthy of that trust.
And, along a number of axes, we are not there yet.
Re: UX/UI stuff
I'm not sure if the "take your ball and go home" thing was mostly relating to UX stuff or other less obvious-to-me things. I do want to:
a) acknowledge that the core dev team hasn't been as communicative about that as we'd ideally have been, nor made obvious efforts to address it, so from your perspective it'd make sense if we seemed untrustworthy in that regard
b) from our perspective... there's just a lot of stuff to do. A lot of bugs and core-site-streamlining that seem higher priority, and a lot of people bringing up additional bugs all the time, and time/effort spent on any given thing means not doing other things (this includes responding quickly/adequately to everyone bringing stuff up).
I don't actually know what advice I'd give a manager of a team that is already strapped for time and working on the most important thing. Or rather: the advice is "put the thing in the backlog and assign it later", but in a more volunteerish situation where a) people disagree on what the most important thing is, and b) people have different skills and interests and tend to notice things that are more relevant to them...
...I just don't think there's much room for improvement beyond "if you want this thing to be a higher priority, you'll need to contribute work towards getting it done, or at least make a clear case that it is more important than the other things we're working on instead."
Re: Bad/Wrong
(phrasing this more bluntly than I normally would, since it seems like that's what you prefer)
I think there's very much an intermediate step between "this is bad/wrong" and "do all the work yourself", and that's "communicate that wrongness in a way that feels helpful and collaborative."
If it's legitimately effortful to translate your thoughts into that form, and you don't trust people to actually respond usefully (because they historically haven't), I think that's fair. But to actually get from there to "a good, functioning system", I think it's necessary both for the people-ignoring-you to put in that effort and for you to put more effort into translating your criticisms into ones that pattern-match more to "trying to help" than "complaining/snarking/sniping"
(I personally have to remind myself that you have a track record of putting in effort and changing your mind about things when relevant, because while those things have happened over four years in a significant way, they are less frequent and salient than your average critical comment)
I think overriding the impulse to say "Bad/Wrong" and replacing it with constructive substance is one of the core skills necessary for group-rationality to thrive (esp. when running on human hardware, and [possibly] even in more idealized circumstances).
or at least make a clear case that it is more important than the other things we're working on instead
https://www.nngroup.com/articles/aesthetic-usability-effect/
https://www.nngroup.com/articles/first-impressions-human-automaticity/
"communicate that wrongness in a way that feels helpful and collaborative."
Yeah - I just realized I was conflating a few different clusters of things you said at different times in different contexts, in a way which was both factually inaccurate and uncharitable. I apologize for that.
I'm not actually sure to what extent we disagree on the object level things here - I definitely believe UX and typography are important, just lower priority than "site loads in a reasonable amount of time" and "get an integration/unit-testing framework in place to avoid mounting tech-debt". Do you disagree with that?
It felt from the previous couple comments that you felt frustrated about some pattern of interaction but I'm not actually sure which thing you're particularly worried about. (Also not sure if this is in the scope of things you still wanted to talk about)
I apologize for that.
Thank you, and apology accepted.
I definitely believe UX and typography are important, just lower priority than "site loads in a reasonable amount of time" and "get an integration/unit-testing framework in place to avoid mounting tech-debt". Do you disagree with that?
It’s hard to know what to say here; of course the site should load in a reasonable amount of time (this other NN Group article is particularly relevant here)… but…
One thing I could say is—of these scenarios, which is best, and which is worst:
1. The site loads quickly, and it’s good.
2. The site loads quickly, but it sucks.
3. The site loads slowly, but once loaded it’s good.
4. The site loads slowly, and once loaded, it sucks.
Clearly, #4 is worst. We’d all prefer #1. But suppose we find ourselves at #4, and further find that (by the expenditure of some unit of time/effort) we can move to #2 or to #3—we can’t jump directly to #1. (Then, perhaps, we can take another step, with more time and effort, and make it all the way to #1.)
What I am suggesting is that #3 is the superior first step compared to #2, because (in no particular order):
The aesthetic usability effect (as described in the link above) ensures that poor performance is more readily excused if the site is perceived to be appealing; the reverse is not the case
Poor performance is more readily excused as a transitional-stage effect of a beta development stage than either poor aesthetics or poor usability
A visitor who perceives using the site as having high value but also high cost may retain the mental impression of “value here, but also high cost; questionable for now whether worthwhile” and come back to it (and keep coming back to it) later; a visitor who makes the snap judgment of “little to no value here” leaves and does not return
As per this comment thread, aesthetics and usability can often be (but are not always) easier to improve than infrastructure (i.e., separately from #3 being a better place to be, it’s also easier to get to than #2)
Of course aesthetics and usability and other aspects of UX (like efficiency and effectiveness of task completion) aren’t everything that goes into making a site “good” or “sucky”, so what I said above is not the whole story; content, for instance, is largely (though not nearly entirely) orthogonal to both that and to performance per se…
As for “get an integration/unit-testing framework in place to avoid mounting tech-debt”, well, there is much to be said for doing that before starting a public beta; of course there is nothing to be done about this now. (Neither is there necessarily anything to be done about various technical decisions which quite complicate certain sorts of improvement; the fact remains that they do indeed have that effect…)
The problem, as I’m sure you realize, is that I, the user of the site, cannot see your tech-debt. Oh, I may quite clearly see the effects of your tech-debt—or, at least, you say they’re effects of your tech debt; how can I know? Maybe some unrelated cause is to blame; I have only your word, here, and anyway it hardly matters why problems X, Y, and Z exist—they do, and that’s that.
So then you do your thing, to avoid mounting tech-debt, and—and what? Is the site now more usable, as a result? More aesthetically pleasing? Can I navigate it more easily? Does it load quicker? Well, no, those things will come, now that you’ve solved your tech-debt problem… but they have not come yet; so from my, user’s, perspective, you’ve spent time and effort and benefited me not at all.
So perhaps you’ve made a rational decision, following a cost-benefit analysis, to delay improving the user experience now, so you can more easily improve the user experience later. Very well; but if you ask me, as a user, what priority solving your tech-debt problem has, I can only say “zero”, because that accomplishment in itself does nothing for me. Count it as a cost of doing the things that actually matter, not as itself a thing that actually matters.
Viewed that way, the choice is actually not between the options you list, but rather between “improve usability and the user experience by improving aesthetics / layout / design” and “improve usability and the user experience by improving performance”. I spoke about this choice in the first part of the post; in any case, pick one and do it. If one of those things (but not the other) requires you to fix your tech-debt first, take this cost difference into account in your decision. If both of those things require you to fix your tech-debt first, then go ahead and fix your tech-debt. If neither of those things actually require you to fix your tech-debt first, well, that too is something to factor in…
Gotcha.
At this point we've already made most of the progress we wanted on the performance and testing framework, and the remaining work to be done is roughly on par with fixing the easier-to-fix issues, so hashing out the remainder here feels less valuable than just doing the work. I do think a case could be made that the easier stuff was worth doing first, but we made the decision to switch when it seemed like we'd make more rapid progress on the "easier" stuff after we'd fixed some underlying issues.
I do probably agree with the implied "public beta should have waited another month or something" (though with a caveat that many of the problems didn't become apparent until the site was actually under a real load)
P.S.
It felt from the previous couple comments that you felt frustrated about some pattern of interaction but I'm not actually sure which thing you're particularly worried about. (Also not sure if this is in the scope of things you still wanted to talk about)
Nah, I’m tired of talking about meta / social stuff, let’s leave that alone for a while.
if you mention the disconnect between what people are saying and what they are doing, they get upset and angry at you challenging their self-conception
I wonder if the reason science has been somewhat successful at finding things out about the world is that it is divorced from trying to do things.
The idealised scientist never promises to make a certain thing or find the most important thing; they promise to investigate a thing. And if that investigation shows that the thing can't be made, or that the hoped-for thing isn't there, that still counts as progress and a worthwhile thing has occurred.
An individual scientist's identity is not wrapped up in having an impact on the world over and above searching that little bit of hypothesis space.
This idealised scientist does not occur much in practice, but I think what science has achieved has been due to this separation of the concerns of thinking and doing.
We can't search everywhere or science everything; we need a way for questions to be prioritized. But then if people champion a specific question that turns out not to be useful, they become invested in that question and we are back in the same situation.
Perhaps we need a collective anonymous way to vote for the questions we want answering.
When we have a good, accurate model of the problem, then perhaps the choice of courses of action could be unanimous?
Yeah. This feels like the useful-division-of-labor between the Rationality community and the Effective Altruism community.
Just a small note: in a place where you said "instrumental truthseekers tend to," I represent the countertrend. I'm an instrumental truthseeker, but I nevertheless still think it comes first and highest because I have a strong sense that all the other goods depend on it tremendously. This is in no way counter to your point but it seemed worth elevating to the level of explicitness—there are instrumental truthseekers who are ~as rabidly committed to truthseeking as the terminal ones.
Yeah. I think the most interesting thing I was trying to point at is that even if we're all agreed on "rabid commitment to truthseeking", figuring out what that means is still a bit hairy and context-dependent. (i.e. do you say the literal truth, or the words that'll lead people to believe true things?)
A related issue (that I'm less sure of your take on) is that even if you have a clear set of injunctions against truth violations (i.e. never tell even a white lie, or [summarizing what I think of as your point from Models of Models], never tell a white lie without looking yourself in the mirror and clearly thinking through the ramifications of what you're doing)...
...there's still a long range of "how much you could go out of your way to be _extra_ full of integrity". Do you list all your flaws up front when applying for a job? (Do you truthfully reveal flaws but sandwich them between a good first impression and a peak-end? Is the former more truthful than the latter, or is the choice random?)
That said, writing that last paragraph felt more like descending into a pointless rabbit hole than talking about something important.
I think my main thesis here was that if you point out things in a way that make people feel criticized or otherwise disengage, you're not necessarily upholding the Truth side of things, in addition to potentially having some cost on instrumental things like people's motivation to get shit done.
Yeah. I think the most interesting thing I was trying to point at is that even if we're all agreed on "rabid commitment to truthseeking", figuring out what that means is still a bit hairy and context-dependent. (i.e. do you say the literal truth, or the words that'll lead people to believe true things?)
What we do will become community norms. I think each thing favors a certain demographic.
If saying the literal truth is a social norm it becomes easier for new people with new ideas to come in and challenge old ideas.
If they have to model what the community believes, they will have an uphill struggle: an immense amount of effort is required to understand the interests of everyone in the community and how to speak in a way that doesn't raise any hackles. If they spend most of their time in a laboratory looking at results or hacking away at spreadsheets, they will not be optimised for this task. They may even be non-neurotypical and find it very hard to model other people sufficiently to convince them.
On the other hand, saying words that'll lead people to believe true things will make people more effective in the real world; they can use more traditional structures and use social recognition as motivation. It favors the old guard, those with entrenched positions. Someone would need to remove things from the sequences if the rationalists were to go down this route, else new people would read things like the article on lost purposes and point out the hypocrisy.
Optimising for one or the other seems like a bad idea. Every successful young rabble-rousing contrarian becomes the old guard at some point (unless they take the Wittgenstein way out and become a teacher).
We need some way to break the cycle, I think. Something that makes the rationalist community different to the normal world.
In the case of communication on Less Wrong, I think it's a (relatively) achievable goal to have people communicate their ideas clearly without having to filter them through "what will people understand?" (and if people don't understand your ideas, you can resolve the issue by taking a step back and explaining things in more detail)
The sort of issue that prompted this post was things more like "People in the EA community posting infographics or advertisements that are [probably true / true-ish but depend on certain assumptions / not strictly true but fleshing out all the details would dramatically reduce the effectiveness of the advertisements, not because of falsehoods but because then you've turned a short, punchy phrase into a meandering paragraph]"
i.e. Less Wrong is a place to talk about things in exhaustive depth, but as soon as your ideas start interfacing with the rest of the world you need to start thinking about how to handle that sort of thing
I think I'm worried that people who have to interface with the outside world cannot interact with a weird/blunt Less Wrong, as they need to keep up their normal/polite promoting behaviour. It is not like the rest of the world can't see in here, or that they will respect this as a place for weirdness/bluntness. There is a reputational price of "Guilt by association" for those interfacing with the outside world.
When you talk about “truthseekers”, do you mean someone who is interested in discovering as many truths as possible for themselves, or someone who seeks to add knowledge to the collective knowledge base?
If it’s the latter, then the rationalsphere might not be so easy to differentiate from say, academia, but if it’s the former, that actually seems to better match with what I typically observe about people’s goals within the rationalsphere.
But usually, when someone is motivated to seek truths for themselves, the “truth” isn’t really an end in and of itself; it’s really the pleasure associated with gaining insight that they’re after. This is highly dependent on how those insights are acquired. Did someone point them out to you, or did you figure them out by yourself?
I often feel that the majority of the pleasure of truthseeking is gained from self-discovery, i.e., “the joy of figuring things out.” This can be a great motivating force for scientists and intellectuals alike.
But the main issue with this is that it’s not necessarily aligned with other instrumental goals. Within academia, there are at least incentive structures that point truthseekers in the direction of things that have yet to be discovered. In the rationalsphere, I have no idea if those incentive structures even exist, or if there are many incentive structures at all (besides gaining respect from your peers). This seems to leave us open to treading old ground quite often, or not making progress quickly enough in the things that are important.
I definitely sympathize with the terminal-truthseekers (I spend a great deal of time learning stuff that doesn’t necessarily help me towards my other goals), but I also think that in regard to this effort of community building we should keep this in mind. Do we want to build a community for the individual or to build the individual for the community?
Edit: this post was originally intended as an explicit call to action. The new shape of LessWrong is (I think rightly) disincentivizing calls to action. I've modified this post a bit in light of that. It's still a bit call-to-action-y. I'm erring on the side of posting it to main, but am interested in feedback (both from Sunshine Folk and others) on the degree of call-to-action-ness it implies.
Thanks Raemon! I think the post is pretty good on this spectrum. While this post is definitely about social norms, it's largely arguments about social mechanisms and their consequences (letting the reader decide their actions for themselves), rather than telling the reader they're doing something bad / trying to use social force to change the reader's actions. So as a 'Sunshine Folk' I appreciate the effort; this post easily could be more on the other end of the spectrum.
Meta-comment: I'm noticing some artefacts that appear to be the remnants of editing, and the post seems to cut off mid-sentence. Is that intentional? Is it a bug?
(Note: I downvoted quanticle's post since it's less relevant now and I wanted it sorted lower, but have upvoted a random other post of his to compensate)
I find the topic of learning how to be a better commenter particularly interesting. If you have any further thoughts on that, I’d like to hear about them.
I think that a common reason that people who might have commented on something end up not doing so is that they aren’t sure if what they had to say is actually worthwhile. Well, just saying ‘I agree!’ probably isn’t, but this does raise the question of how high that threshold should be.
The first paragraph of this comment is near that borderline, in my opinion - it could pretty much be formulaic: "I find [subtopic] particularly interesting. If you have any further thoughts on that, I’d like to hear about them."
On the other hand, it’s true, and conveys information that an upvote wouldn’t, so I do consider it worthwhile.
I think it varies from author to author. I know I appreciated getting this comment (and I think periodically having a new comment briefly bump an old post into the public consciousness again is fine).
Epistemic Effort: I've thought about this for several weeks and discussed it with several people who have different viewpoints. Still only moderately confident though.
So, I notice that people involved with the rationalsphere have three major classes of motivations:
Truthseeking (how to think clearly and understand the world)
Human/Personal (how to improve your life and that of your friends/family)
Impact (how to change/improve the world at large)
All three motivations can involve rationality. Many people who end up involved care about all three areas to some degree, and have at least some interest in both epistemic and instrumental rationality. And at least within the rationalsphere, the Personal and the Impact motivations are generally rooted in Truth.
But people vary in whether these motivations are terminal, or instrumental. They also have different intuitions about which are most important - or about how to pursue a given goal. This sometimes results in confusion, annoyance, distrust, and exasperated people working at cross purposes.
Terminal vs Instrumental Truth
For some, truthseeking is important because the world is confusing. Whether you're focused on your personal life or on changing the world, there's a lot of ways you might screw up because something seems right but doesn't work or has a lot of negative externalities. It's necessary to do research, to think clearly, and to be constantly on the lookout for new facts that might weigh on your decisions.
For others, truth-seeking seems more like a fundamental part of who they are. Even if it didn't seem necessary, they'd do it anyway because it just seems like the right thing to do.
I think there's a couple layers of conflict here. The first is that instrumental-truthseekers tend to have an intuition that lots of other things matter as much or more than truth.
Then, there are people (who tend to be terminal-truthseekers, although not always), who counter:
I find this argument fairly compelling (at least for a deeper delve into the concept). But what's interesting is that even if it's an overriding concern, it doesn't really clarify what to do next.
The Trouble With Truthseeking While Human
On the one hand, social reality is a thing.
Most cultures involve social pressure to cheer for your ingroup's ideas, to refrain from criticizing your authority figures. They often involve social pressure to say "no, that outfit doesn't make you look fat" whether or not that's true. They often involve having overt, stated goals for an organization (lofty and moral sounding) that seem at odds with what the organization ends up doing - and if you try to mention the disconnect between what people are saying and what they are doing, they get upset and angry at you challenging their self-conception.
The pressure to conform to social reality is both powerful and subtle. Even if you're trying to just think clearly, privately for yourself, you may find your eyes, ears and brain conforming to social reality anyway - an instinctive impulse to earnestly believe the things that are in your best interest, so your peers never notice that you are doubting the tribe. I have noticed myself doing this, and it is scary.
In the face of that pressure, many people in the rationality community (and similar groups of contrarians), have come to prize criticism, and willingness to be rude. And beyond that - the ability to see through social reality, to actively distance themselves from it to reduce its power over them (or simply due to aesthetic disgust).
I earnestly believe those are important things to be able to do, especially in the context of a truthseeking community. But I see many people's attempts as akin to Stage 2 of Sarah Constantin's "Hierarchy of Requests":
It's better to be able to rudely criticize than not at all. And for some people, a culture of biting, witty criticism is fun and maybe an important end in-and-of-itself. (Or: a culture of being able to talk about things normally considered taboo can be freeing and be valuable both for the insight and for the human need for freedom/agency). I've gotten value out of both of those sorts of cultures.
But if you're unable to challenge social reality without brusquely confronting it - or if that is the manner in which you usually do so - I think there's a lot of net-truth you're leaving on the table.
There are people who don't feel safe sharing things when they fear brusque criticism. I think Robby Bensinger summarized the issue compactly: "My own experience is that 'sharp culture' makes it more OK to be open about certain things (e.g., anger, disgust, power disparities, disagreements), but less OK to be open about other things (e.g., weakness, pain, fear, loneliness, things that are true but not funny or provocative or badass)."
Brusque confrontation leads to people buckling down to defend their initial positions because they feel under attack. This can mean less truth gets uncovered and shared.
Collaboration vs Criticism For The Sake Of It
The job of the critic is much easier than the job of the builder.
I think that there's a deeper level of productive discussion to be had when people have a shared sense that they are collaboratively building something, as opposed to a dynamic where "one person posts an idea, and then other people post criticisms that tear it down and hopefully the idea is strong enough to survive." Criticism is an important part of the building process, but I (personally) feel a palpable difference when criticized by someone who shows a clear interest in making sure that something good happens as a result of the conversation.
Help Brainstorm Solutions - If you think someone's goals are good but their approach is wrong, you can put some effort into coming up with alternate approaches that you think are more likely to work. If you can't think of any ways to make it work (and it seems like it's better to do nothing than to try something that'll make a situation worse), maybe you can at least talk about some other approaches you considered but which still feel inadequate.
Active Listening / Ideological Turing Tests - If you disagree with a person's goals, you can try to understand why they have those goals, and show them that you at least get where they're coming from. In my experience people are more willing to listen when they feel they're being listened to.
Accompanying criticism with brainstorming and active listening acts as a costly signal that helps create an atmosphere where it's a) worth putting in the effort to develop new ideas, and b) easier to realize (and admit) that you're wrong.
Truth As Impact
If you constantly water down your truth to make it palatable for the masses, you'll lose the spark that made that truth valuable. There are downsides to being constantly guarded, worried that a misstep could ruin you. Jeff Kaufman writes:
Daniel in the comments notes:
I'm not sure how to handle that paradox (Less Wrong is hardly the first group of people to note that PR-speak turns dull and lifeless as organizations grow larger and more established - it seems like an unsolved problem).
But there's a difference between watering things down for the masses and speaking guardedly... and learning to communicate in a way that uses other people's language, that starts from their starting point.
If you want your clear insights to matter anywhere outside a narrow cluster of contrarians, then at some point you need to figure out how to communicate them so that the rest of the world will listen. Friends who are less contrarian. Customers. Political bodies. The Board of Directors at the company you've taken public.
How to approach this depends on the situation. In some cases, there's a specific bit of information you want people to have, and if you can successfully communicate that bit then you've won. In other cases, the one bit doesn't do anything in isolation - it only matters if you successfully get people to think clearly about a complex set of ideas.
Consider Reversing All Advice You Hear
One problem with writing this is that there are a lot of people here, with different goals, methods, and styles of communication. Some of them could probably use advice more like:
And some could probably use advice more like:
I started writing this post four months ago, as part of the Hufflepuff Sequence. Since then, I've become much less certain about which elements here are most important to emphasize, and what the risks are of communicating half-baked versions of each of those ideas to different sorts of people.
But I do still believe that the end-goal for a "true" truth-oriented conversation will need to bear all these elements in mind, one way or another.