Proof-of-work is a radical and relatively recent idea that does not yet have a direct counterpart in philosophy. Here, cryptographic proofs witness the expenditure of resources (such as physical energy) to commit to particular beliefs. In this way, the true scale of the system that agrees on certain beliefs can be judged, with the largest system being the winner.
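As a minimal sketch of the mechanism (hashcash-style, with hypothetical function names; real systems like Bitcoin add far more machinery): committing to a claim means finding a nonce whose hash, combined with the claim, falls below a difficulty target, so anyone can verify that real compute was burned on that exact claim.

```python
import hashlib

def mine(claim: str, difficulty_bits: int = 20) -> int:
    """Search for a nonce proving that compute was spent on this exact claim."""
    target = 2 ** (256 - difficulty_bits)  # smaller target = more expected work
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{claim}|{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # expected ~2**difficulty_bits attempts to reach this
        nonce += 1

def verify(claim: str, nonce: int, difficulty_bits: int = 20) -> bool:
    """Checking a proof costs one hash, however expensive it was to find."""
    digest = hashlib.sha256(f"{claim}|{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < 2 ** (256 - difficulty_bits)
```

The asymmetry is the point: proofs are expensive to produce but trivial to check, which is what lets observers gauge how much energy has been staked on a given set of beliefs.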
I think this relates to the notion that constructing convincing falsehoods is more difficult and costly than discovering truths, because (a) the more elaborate a falsehood is, the more likely it is to contradict itself or observed reality, and (b) unlike a truth, a falsehood confers no instrumental benefit on the person producing it, so there's little reason to invest work in one. Therefore, the amount of "work" that has been put into a claim provides some evidence of its truth, quite apart from the credibility of the claimant.
Example: If you knew nothing about geography and were given, on the one hand, Tolkien's maps of Middle-earth, and on the other, a USGS survey of North America, you'd immediately conclude that the latter is more likely to be real, based solely on the level of detail and the amount of work that must have gone into it. Tolkien could, in principle, have set out to draw a fantasy map even more detailed than the USGS maps, but the work that project would require would vastly outweigh any benefit he might get from it.
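To put the intuition in a loose Bayesian form (my gloss, not part of the original argument): let $W$ be the observed level of detail and workmanship. Then

$$
\frac{P(\text{real} \mid W)}{P(\text{fake} \mid W)} \;=\; \frac{P(W \mid \text{real})}{P(W \mid \text{fake})} \cdot \frac{P(\text{real})}{P(\text{fake})},
$$

and the claim is that $P(W \mid \text{fake})$ falls off rapidly as $W$ grows, since each added detail is another chance for a fabrication to contradict itself and another cost with no offsetting benefit, so highly detailed artifacts increasingly favor "real."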
- Reward yourself after each session.
What kinds of rewards do you use for this?
Consider the following charts:
Chart 1 shows the encephalization quotient (EQ) of various lineages over time, while Chart 2 shows the maximum EQ of all known fossils from any given time. (Source 1, Source 2. Admittedly this research is pretty old, so if anyone knows of more recent data, that'd be good to know.)
Both of these charts show a surprising fact: that the intelligence of life on Earth stagnated (or even decreased) throughout the entire Mesozoic Era, and did not start increasing until immediately after the K/T event. From this it appears that life had gotten stuck in a local equilibrium that did not favor intelligence; i.e. the existence of dinosaurs (or other Mesozoic species) made it impossible for any more intelligent creatures to emerge. Thus the K/T event was a Great Filter: we needed a shock severe enough to dislodge this equilibrium, but not so severe as to wipe out all the lineages from which intelligence could evolve.
If this is true, then the existence of ravens and elephants today is not much evidence that evolving intelligence is easy, because they exist for the same reason that humans do.
None of this considers octopuses. It would be interesting to see whether their brain-size history follows curves similar to those of the vertebrates illustrated above (though since they're made of soft tissue, we may never know). If so, that would support the view that evolving intelligence is difficult. On the other hand, it's hard to imagine that the marine ecosystem was affected by the K/T event in the same way the terrestrial one was. Or maybe octopuses are themselves what is suppressing the evolution of greater intelligence among marine invertebrates.
Such a category is called paraphyletic. It can be informationally useful if the excluded subgroup has diverged far from the overarching group, gaining characteristics not shared by the others and losing characteristics the others share. But the less divergence has taken place, the harder it is to justify a paraphyletic category. The category "reptile" (excluding birds) makes sense today, but it wouldn't have made sense in the Jurassic period. The mammal/cetacean distinction is somewhere in the middle.
Animal/human is different because the evolutionary divergence is so recent that it's difficult to justify the paraphyletic usage on biological grounds. Rather, this is more of an ingroup/outgroup distinction, along the lines of βάρβαρος ("anybody who isn't Greek"). If humans learned to communicate with, e.g., crows, the shared language probably wouldn't have a compact word for "non-human animal," although it might have one for "non-human non-crow animal."
I'm also not sure to what extent non-core and core-identity rationalism are mutually exclusive. (Just as a lot of people are vaguely Christian without belonging to a church, maybe a lot of people would be vaguely interested in rationalism without wanting to join their local temple.)
Agreed; finding a way for multiple levels of involvement to coexist would be helpful. Anecdotally, when I first tried attending LW meetups around 2010, I was turned off and didn't try again for many years, because the conversation was so advanced I couldn't follow it. When I did try again, I enjoyed it much more, because the community had expanded to include a "casual meetup attendee and occasional commenter" tier, into which I fit comfortably. Now we could imagine adding a third tier: "people who come and listen to a talk, then make small talk and go for a picnic afterward" (or whatever).
Could this be considered a "temple"? Maybe, but I'd guess that most prospective members wouldn't think of it that way and would be embarrassed to hear such talk. "Philosophical society" might be closer to the mark. It's fun to imagine a Freemason-like society where people are formally allocated into "tiers" and then promoted to the next inner tier by a secret vote, perhaps involving black and white marbles. But at this point, such a level of ritual would probably be a waste of weirdness points.
If you believe as I do that rationalism makes people better human beings, is morally right and leads to more open, free, just and advanced societies, then creating and spreading it is good pretty much irrespective of social circumstances.
I'm uncertain about this, but there is something I suspect and fear may be true, which is that rationalism (as exemplified by current LW members) is not actually helpful for most people on an individual level (see e.g.). There are some people, like me, who are born in the Uncanny Valley and must study rationalism as part of a lifelong effort to climb up out of it. But for others, I would not want to pull them down into the Valley just so I can have company.
For example, I enjoy going to rationalist meetups and spending hours talking about philosophical esoterica, because it fills an intellectual void that I can't fill elsewhere. But most people wouldn't enjoy this, and it wouldn't be a good use of their time.
That's not to say that rationalism is totally inert in society. The ideas developed by rationalists can percolate into the wider population, even to those who are more passive consumers than active participants.
- Rationalist content is mostly in English. Most people don't speak or read English, and even those who do as a second language don't consume primarily English-language sources.
You're probably right, although as a monolingual English speaker I wouldn't know firsthand. I have heard of efforts to translate some of the Sequences into Russian and Spanish. But for less widely spoken languages, it may be difficult to assemble enough people who both speak the language and are interested in rationalism. Rationalism also differs from Christianity here in that there is no definitive text you can point to and say, "If you read and understand this, then you understand rationality." Rationality must be cultivated through active engagement in dialogue, which requires a critical mass of people.
- Rationalism is niche and hard to stumble upon. It's not like Christianity or left/right ideology in the West. Whereas those ideologies are broadcast at you constantly, so that you know about them and roughly what they represent, rationalism is something you only find if you happen to luck out and stumble on this weird internet trail of breadcrumbs.
This is a challenge I've faced whenever friends ask me what, exactly, rationalism is all about. I struggle to answer, because there is no single creed that rationalists believe. One could try to put together a soundbite-tier explanation, but doing so would risk distorting the very essence of rationality, which at its core is a process, not a conclusion. At best, we might draw up a list of 40 statements and say, "Rationalists all agree that at least 30 of these are true, but there is vehement disagreement as to which."
A few thoughts on this.
First, I probably have a higher appetite for religion-ifying rationalism than others in the community, but I wouldn't want to push my preferences too hard lest it scare people off. This may stem from my personal background as a cradle atheist. Religious people don't want rationality to become rivalrous with their religion, and ex-religionists don't want it to become the very thing they escaped. To the extent that it's good for rationality to become more religion-like, I think it'll happen on its own over the next few decades or centuries without any concerted effort. I'm not in a hurry.
Second, we should avoid treating "religion" as a fixed concept already optimized for a particular social niche, as if to say that if rationality has some attributes of a religion, then it would necessarily gain by taking on the rest as well. Some of the functions that a religion might manage are:
Different societies will have different ways of allocating these responsibilities among their various institutions and philosophies. In Western cultures we use the word "religion" because it's common for most or all of these domains to be handled by the same thing, so we need a word for whatever category of thing that is. But the Western bias is revealed whenever we try to apply the concept to non-Western societies. E.g., a Chinese person may be a Confucian with respect to (1), (3), and (4), a Taoist for (2), (6), and (8), and a Buddhist for (5) and (7). Which of these is a "religion"? Does it matter?
Even within the West, these boundaries have shifted over time. (3) was forcibly purged from Christianity in the European Wars of Religion, leading ultimately to the 1st Amendment in the US. And (8) is common in the Middle East and Eastern Europe, while mainline Protestantism is indifferent or outright hostile towards it. We can expect that the boundaries will continue to shift in the future, which leads into the third point.
Third, we should ask ourselves (and I'd be curious to hear your answer) what kind of future we're planning for in which the religion-ification of rationalism becomes relevant. I can think of three scenarios: (A) transformative AI remakes or replaces humanity; (B) civilization collapses and knowledge must be carried through a dark age; (C) neither happens, and the world continues to develop along something like its current trajectory.
As for (A), I'm not qualified to weigh in on how likely that is; but if it does happen, then this whole question is pretty much irrelevant anyway, because there won't be any humans (as we know them) to practice any religion. The only possible relevance is that it would be bad for people to expend too much effort now in creating a rationalist religion if they could otherwise have been working on AI safety. But that probably doesn't apply to most people.
I don't think (B) is likely, but there's a compelling cultural narrative in its favor that we need to actively counterbalance in our estimates. We all like to imagine an apocalypse where we can wipe the slate clean and remake a "perfect" society. And everyone likes to look back to the Fall of Rome as an easy-to-apply historical template. If you imagine a rationalist religion in that context, you end up with something like "D&D magic + medieval Catholicism," where monks copy manuscripts to preserve knowledge that would otherwise be lost. But, again, I don't think loss of knowledge is a major concern for the future, so efforts to create such an order of monks will probably be wasted.
(C) is where the question becomes most relevant, but since this scenario has no historical precedent, we can't simply look to existing or past religions, change a few incidentals, and slot the result into the future world. Whatever rationality ends up becoming in this world, it won't be what we'd call a "religion" (though perhaps a word for it will be devised eventually).
For example, in the future, scientific knowledge may never again be lost, but people will nevertheless feel adrift in a flood of false information so vast and confusing that they can't figure out what to believe. What sort of institution could remedy this situation? Not monks copying manuscripts, to be sure.
Lastly, some disjointed thoughts on outreach. There's a certain personality type that feels drawn to rationalist ideas, for reasons that are probably innate or at least very difficult to change. You know you're one of these people if your reaction upon finding LessWrong was "All my life people have been talking nonsense, but finally I've found something that makes sense!" Even if you don't agree with most of it.
At some point (perhaps already past), all of those people who can be persuaded will be. This will only comprise a small fraction of the population, but they will cling to the "rationalist community" with a near-religious zeal. (I have friends who absolutely loathe "rationalists" but still participate in the community online because, in their view, literally no one else even tries to make convincing arguments.) This zeal is a valuable quality, but most normal people will not sympathize. The question then becomes: For that majority of people who are not rationalists-by-disposition, is there some way they can benefit by associating with the community?
I think the answer will involve addressing this:
We don't have rituals. Hence meetups are awkward to organize, often stilted, and revolve around the discussion of readings or rationality problems, or even just lack any structure at all. Contrast this with a church, where you show up every Sunday, listen to a service, and then make small talk or go to a picnic.
Maybe rationalists should give talks that are open to the public and geared towards a general audience, and encourage listeners to talk about it amongst themselves. That way there'd be less pressure to follow along with extremely esoteric conversations. But you don't have to think of it as a "religion" or a "ritual" - it's just a public lecture, which is a perfectly normal thing for someone of any religious views to attend. Putting it forward as a religion-substitute would probably turn people off.
1-3 months doesn't seem so bad as a timeline. While it's important not to let the perfect be the enemy of the good (since projects like this can easily turn into a boondoggle where everyone quibbles endlessly about what the end-product should look like), I think it's also worth a little bit of up-front effort to create something that we can improve upon later, rather than getting stuck with a mediocre solution permanently. (I imagine it's difficult to migrate a social network to a new platform once it's already gotten off the ground, the more so the more people have joined.)
I would also like to register my opposition to using Facebook. While it might seem convenient in the short term, it makes the community more fragile by adding a centralized point of failure that's unaccountable to any of its members. Communicating on LessWrong.com has the virtue that the platform is owned by the same community it serves.
It seems to me that there's a tension at the heart of defining the "purpose" of meetups. On the one hand, the community aspect is one of the most valuable things one can get out of them: I love that I can visit dozens of cities across the US, go to a Less Wrong meetup, and instantly have stuff to talk about. On the other hand, a community cannot exist solely for its own sake. Any individual's interest in participating will naturally fluctuate over time, and if everyone quits the moment their interest touches zero, then nobody will ever feel it's worth investing in the community's long-term health.
Personally, I do have a sense that going to meetups matters, in that it helps (however marginally) to raise the sanity waterline in one's local community, and to move important conversations about x-risk and the future of humanity into the mainstream. I myself was motivated to dive into Less Wrong again, after a hiatus of many years, by finding a lively meetup group that was discussing these ideas regularly.
In any case I think that the question of "why meetups matter" is something that we're all collectively trying to figure out over time. I don't claim to know the answer right now.
I do, however, have some concern about creating a "monoculture" among the various sub-groups. It's good that we have a wide variety of intellectual interests, ways-of-running-meetups, etc., because this allows mistakes to be corrected and innovations to be discovered. If we are all given a directive from on high[1] saying "We are going to mobilize all the resources of the Rationality Community towards goal X, which we will achieve by strategy Y," then it might at first seem like a lot of stuff is getting done. But what if strategy Y is ineffective, or goal X is a bad goal? Then we would not discover our mistake until it was too late. This is especially important while the goals of the community remain as ill-defined as they are now.
Of course, in order to reap these benefits of having a diverse community, a prerequisite is that there be any communication at all between groups. So, the suggestion of having meetups write up blog posts for public consumption seems like a good one[2]. But I don't think the groups should be told which topics they must discuss, because they might be interested in something else that nobody else would've thought of. Perhaps it's enough to provide a list of topics that any meetup group can draw from if they can't think of something. And maybe, after one group publishes a writeup, another group might be inspired to discuss the same topic later and submit their own writeup in response.
[1] Or, more realistically, a persuasive message to the effect of "All the cool kids are doing Z and you're going to feel left out if you don't," which can feel like a compulsory directive because of Schelling points, etc.
[2] Caveat: The mood of a conversation is likely to change dramatically if it's known that someone is taking notes that will be posted later, since then one is not speaking merely to those in attendance, but effectively to an indefinitely large audience of all LessWrong readers. So, I would recommend that meetups have a mixture of on- and off-the-record conversations, with a clear signal of which norm is in effect at any given time.
Curb Your Enthusiasm - I didn't know you could be anonymous and tell people! I would've taken that option!
This is a good chance for me to interrogate my priors, because I share (though not very strongly) the same intuitions that you criticize in this post. There's a tension between the intuitions below and my desire not to live in a bland, tall-poppy-syndrome dystopia where nobody ever wants to accomplish great things; I don't really know how I'd resolve it.
Intuition 1: Social praise is a superstimulus which titillates the senses and disturbs mental tranquility. When I tell a joke that lands well, or get a lot of upvotes on a post, or someone tells me that something I did years ago affected them in a good way and they still remember it, I feel a big boost to my ego and I'm often tempted to mentally replay those moments over and over. However, too much of this is a distraction from what's really important. If I were a talented stock trader I'd be spending my time doing that rather than lying in bed obsessively refreshing my portfolio valuation; analogously, if I did actually possess the traits for which I received praise, I wouldn't be so preoccupied with others' affirmations.
More generally, we don't want people to get addicted to social status, because then they'll start chasing highs to the point where their motivation diverges from actual altruism. It's better to nip this tendency in the bud.
Intuition 2: Social status is zero-sum, which means that if I spend money to gain status, I am necessarily making it more costly for others to do so. Therefore, telling people about your altruism is a "public bad" which we try to discourage through teasing/shaming. Now, some altruistic acts inherently cannot be done in a status-indifferent way (e.g. working full-time for a charity), but for something like donating money, which can easily be kept private, the reaction against doing it publicly is proportionally harsh.