Overview

The LessWrong Community Census is an entertaining site tradition that doubles as a useful way to answer various questions about what the userbase looks like. This is a request for comments, constructive criticism, careful consideration, and silly jokes on the census.

Here's the draft.

I'm posting this request for comments on November 1st. I'm planning to incorporate feedback throughout November; then on December 1st I'll update the census to remove the "DO NOT TAKE" warning at the top and make a new post asking people to take the census. I plan to let it run throughout December, close it in the first few days of January, and then get the public data and analysis out sometime in late January.

How Was The Draft Composed?

I copied the question set from 2023. Last year there were a lot of questions, and the overall sentiment was that there should be fewer, so I removed things that either didn't seem interesting to me or didn't have much history with the census, and collapsed some of the answer options that don't get used much. This included gutting all but one question in the detailed politics section, though I mean to put a few new ones there. Then I changed the things that change every year, like the Calibration question, and swapped around the questions in the Indulging My Curiosity section.

Changes I'm Interested In

No seriously, I want to keep the question count down this year. Right now I think we're a little below 100 (down from ~150 last year), and I plan to keep things under 100. Here's the current arrangement, at 95:

Number | Section                  | Question Budget 2024
0      | Population               | 3
1      | Demographics             | 5
2      | Sex and gender           | 10
3      | Work and education       | 3
4      | Politics                 | 7
5      | Intellect                | 5
6      | LessWrong Basics         | 7
7      | LessWrong Community      | 7
8      | Probability              | 15
9      | Traditional              | 5
10     | LW Team                  | 5
11     | Adjacent Communities     | 5
12     | My Curiosity             | 5
13     | Detailed past questions  | 5
14     | Bonus Politics           | 5
15     | Wrapup                   | 3
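
For anyone who wants to sanity-check the total, here's a minimal sketch in Python; the section names and budgets are copied straight from the table above, and nothing else is assumed:

```python
# Sanity check: the per-section budgets from the table above should sum to 95.
budgets = {
    "Population": 3, "Demographics": 5, "Sex and gender": 10,
    "Work and education": 3, "Politics": 7, "Intellect": 5,
    "LessWrong Basics": 7, "LessWrong Community": 7, "Probability": 15,
    "Traditional": 5, "LW Team": 5, "Adjacent Communities": 5,
    "My Curiosity": 5, "Detailed past questions": 5, "Bonus Politics": 5,
    "Wrapup": 3,
}
total = sum(budgets.values())
assert total == 95
print(total)  # 95
```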

I currently have zero actual questions in the Questions From Adjacent Communities section. Ideally I'd like to get, say, a question from the Forecasting community, a question from the Glowfic community, a question from EA, and so on, adding up to 5 questions there. I'll be actively reaching out to organizers and managers of those groups, but if anyone wants to step forward proactively in the comments, please do!

I currently have only one question in Bonus Politics. I don't find politics interesting, but lots of people do, so here's an open invitation to make some suggestions. Last year Tailcalled had an array of questions they wanted to use; I don't think it's worth repeating that whole set every year, but I'm happy to have run it once.

I think there are probably another ten questions I can cut that either aren't getting us useful information or aren't very interesting. Right now the first politics section and the Intellect section look like good targets for some trimming, but it might also turn out that we don't use the Adjacent Communities section. The Probability section is the biggest, but most of those questions have appeared in almost every incarnation of the survey, and putting probabilities on odd events seems like a core skill for rationalists, so I'm reluctant to cut them.

This year, the thing I want most is to figure out a way to evaluate foundational rationalist skills on the census. Last year I tried checking the conjunction fallacy, but I did it in a kind of clumsy way and don't think I got a good signal. If you have ideas on how to do that I'd be delighted, and (other than trimming) that's where I'm planning to focus. Speaking of which: does anyone have a better list of foundational lessons to check than the one I'm using in Internalized Lessons?

My best compilation of previous versions is in this Google Sheet.

32 comments

Number of Current Partners
(for example, 0 if you are single, 1 if you are in a monogamous relationship, higher numbers for polyamorous relationships)


This phrasing is confusing. Having one partner doesn't mean your relationship is monogamous. A monogamous relationship is one with a mutually agreed understanding that romantic or sexual interaction with other people is forbidden. Without that understanding, your relationship is not monogamous. For example:

  • You have only one partner, but your partner has other partners.
  • You have only one partner, but you occasionally have one-night stands with other people.
  • You have only one partner, but both you and your partner are open to you having more partners in the future.

None of the above are monogamous relationships!

Hrm. I parse this as part of an example: if you are partnered and monogamous (and faithful!) then you should put down 1. If you're polyamorous, but happen to have one partner, you would also put 1 for this question. There's a Relationship Styles question that gets at what people prefer.

Do you think this example will confuse people?

You say "higher numbers for polyamorous relationships" which is contrary to "If you're polyamorous, but happen to have one partner, you would also put 1 for this question."

Are you planning on having more children? Answer yes if you don't have children but want some, or if you do have children but want more.

Whether I want to have children and whether I plan to have children are different questions. There are lots of things I want but don't have plans to get, and one sometimes finds oneself with plans to achieve things that one doesn't actually want.

If you've been waiting for an excuse to be done, this is probably the point where twenty percent of the effort has gotten eighty percent of the effect.

Should be "eighty percent of the benefit" or similar.

I have no opinion on the difference and ChatGPT agrees with you, so sure, changed to "eighty percent of the benefit."

Oh I misread it as "eighty percent of the effort" oops.

I'd be interested in a Q about whether people voted in the last national election for their country (maybe with an option for "my country does not hold national elections") and if so how they voted (if you can find a schema that works for most countries, which I guess is hard).

Yeah, this would either need options for many countries or one schema for many countries.

Asking whether or not they voted in a national election is straightforward enough, and there have been past questions like that.

"Voting Did you vote in your country's last major national election?"

I adapted the version from 2022 and added it to Bonus Politics.

"Voting
Did you vote in your country's last major national election? If you were ineligible to vote for some reason, the answer is No. [Yes, No, My country does not have elections]"

In the highest degree question, one option is "Ph D.". This should be "PhD", no spaces, no periods.

Should be fixed now. Thanks!

. . . This is going to mess up comparisons to previous years, I can already tell.

P(GPT-5 Release)

What is the probability that OpenAI will release GPT-5 before the end of 2025? "Release" means that a random member of the public can use it, possibly paid.


Does this require a product called specifically "GPT-5"? What if they release, e.g., "OpenAI o2" instead, and there is never anything called GPT-5?

I'm being a little bit sneaky here, and trying to compare the LessWrong community to Manifold. Here's the Manifold Market I'm trying to track.

I don't want to add multiple paragraphs to the question text, but there's probably a way to make this a little clearer.

Wait. No. That market is for a release before 2025, not by the end of 2025. 

. . . I was pretty sure there was a market for end of 2025 and now I can't find it. Hrm.

Some of the probability questions (many worlds, simulation) are like... ontologically weird enough that I'm not entirely certain it makes sense to assign probabilities to them? It doesn't really feel like they pay rent in anticipated experience?

I'm not sure "speaking the truth even when it's uncomfortable" is the kind of skill it makes sense to describe yourself as "comfortable" with.

Many Worlds and the Simulation question are probably not going to change our anticipated experiences. I do think we can put probabilities on things we don't expect to change our experiences; for instance, if you flip a coin, look at it, and commit to never telling me whether it came up heads, I still think the coin has a 50% chance of having come up heads. That's less ontologically weird, though.

Those two are longstanding census standard questions, and I'm probably going to keep them because I like being able to do comparisons over time. Many Worlds in particular is interesting to me as an artifact of the Sequences.

Yeah, the skills section is very much a draft that I'm hoping people will have good ideas for.

I've changed the wording to "speaking the truth even against social pressure" but I don't think this is good, just a little better.

Expanding a little:

I think something like speaking the truth even when you're afraid to is a skill. I've noticed apprehension holds me back sometimes, both consciously and in a sneaky quiet voice in the back of my head asking if I'm sure, why not check again, surely this isn't the fight I want to pick. When I imagine an idealized rationalist, they don't keep quiet because of nagging anxiety about what might happen and that feels important.

I don't know if it's like, one of the top ten core rationalist skills I want to ask about, and I'm not at all sure this is the right phrasing. 

Per request, I just added "LLM Frequency" and "LLM Use case" to the survey, under LessWrong Team Questions. I'll probably tweak the options and might move it to Bonus Questions later when I can sit down and take some time to think. Suggestions on the wording are welcome!

So, I think Fight 1 is funny, but it is kind of high context, involving reading two somewhat long stories. (Planecrash in particular is past a million words long!) I'd considered "Who would win in a fight, Eliezer Yudkowsky or Scott Alexander? ["Eliezer", "Scott", "Wait, what's this? It's Aella with a steel chair!"]" and "Who is the rightful caliph? ["Eliezer Yudkowsky","Scott Alexander", "Wait, what's this? It's Robin Hanson with a steel chair!"]" but feel a bit weird about including real people. 

I think they're just as funny though, and far more people will understand them, so maybe I should switch. Anyone have convincing thoughts here?

P(Bitcoin)
What is the probability that the price of one bitcoin will be higher on July 1st, 2025, than it was on December 1st, 2024? ($???)

Probably best to also include "what price of one bitcoin do you expect on July 1st, 2025, given that it was $??? on December 1st, 2024?"
You could also include P(weak EMH), perhaps instead of P(GPT-5 Release) if there's not enough space.

Overall, the questions seemed to me to do too little to check social skills, preferring instead to test large, "impactful" beliefs.

Thanks for the year catch.

I could ask for their expected price of bitcoin, but that feels like more weight than I want to put on bitcoin; it's already overlapping a little with the S&P question. What I'd like to replace it with is something that 1. will have a definitive answer by next summer, 2. people have enough context to understand the question, and 3. isn't at all obvious.

The questions are not checking for social skills. I'm not sure how I'd do that on an online survey that's going to be self-reported, and if you have thoughts about that I'm kind of curious. What percentage of the survey being about social skills would be sufficient? (I'm heavily into meetups and in-person gatherings for LessWrong events, so I might be one of the more receptive audiences for this line of argument!)

I guess I'm looking for questions of this family:

  1. Do you sometimes say things that are not literally true but that help the person you're talking to understand?
  2. On average, do you believe statements by members of the rationalist community significantly more (+1.0 bit of evidence or more) than the exact same words from non-rationalists?
  3. What is the biggest barrier you face when trying to communicate rational ideas to others? [a) Emotional resistance b) Lack of shared vocabulary c) Time constraints d) Preexisting strong beliefs e) Complexity of ideas f) People disengaging randomly]

Also,

  • Have you ever intervened on someone's behalf where the person was failing and would prefer to succeed?
  • How many people can a [brainstorming] conversation hold, on average, such that everyone stays active?

It would be nice to see at least three questions that would demonstrate how a person extracts evidence from others' words, how much time and emotion they could spend if they needed to communicate a point precisely, etc.

I'll have to sleep on that, actually. Will return tomorrow, presumably with more concrete ideas.

By what year do you think the Singularity will occur? Answer such that you think, conditional on the Singularity occurring, there is an even chance of the Singularity falling before or after this year. If you think a Singularity is so unlikely you don't even want to condition on it, leave this question blank.

You won't be able to distinguish people who think the Singularity is super unlikely from people who just didn't want to answer this question for whatever reason (maybe they forgot, or thought it was boring, or whatever).

Hrm.

So, if I want that information I think I could get close by looking at everyone who answered the question before and the question after, but didn't answer Singularity.

I'll change the text to say they should enter something that's not a number, like "N/A", and then filter out anything that isn't a number when I'm doing math to it.
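
Both halves of that are cheap at analysis time. Here's a minimal sketch in pandas, assuming hypothetical column names ("prev_question", "singularity_year", "next_question") rather than the census's actual schema:

```python
import pandas as pd

# Toy responses; the column names here are hypothetical, not the census's real schema.
df = pd.DataFrame({
    "prev_question":    ["0.3", "0.5", None, "0.1"],
    "singularity_year": ["2045", "N/A", None, "1000000"],
    "next_question":    ["0.7", "0.2", None, "0.9"],
})

# Coerce to numbers: "N/A" (and anything else non-numeric) becomes NaN
# and silently drops out of any math done on the column.
years = pd.to_numeric(df["singularity_year"], errors="coerce")
print(years.median())  # ignores the NaN rows

# Approximate "deliberately skipped": answered the neighboring questions
# but left the Singularity question blank or non-numeric.
deliberate_skip = df["prev_question"].notna() & df["next_question"].notna() & years.isna()
print(deliberate_skip.sum())  # 1 in this toy data
```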

I think you can also ask them to put some arbitrarily large number (3000 or 10000 or so) and then just filter out all the numbers above some threshold.

I could, but what if someone genuinely thinks the number is that high? Someone put 1,000,000 on the 2022 version of that question.

I don't think there's a lot of value in distinguishing 3000 from 1,000,000; for any aggregate you'll want to show, this will probably just be "later than 2200" or something like that. But yes, this way they can't state that it will be 1,000,000, which is some downside.
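
If it helps, here's a minimal sketch of that bucketing in pandas; the 2200 cutoff is just the example above, not a settled choice:

```python
import pandas as pd

# Toy coerced answers, including one absurdly large year.
years = pd.Series([2045.0, 2070.0, 3000.0, 1_000_000.0])

# Collapse everything past the cutoff into a single "later than 2200" bucket,
# so a 1,000,000 answer can't skew the aggregate.
buckets = years.apply(lambda y: str(int(y)) if y <= 2200 else "later than 2200")
print(buckets.value_counts())
```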

I'm not a big fan of looking at the neighbors to decide whether this is a missing answer or a high estimate (it's OK to not want to answer this one question). So some N/A or -1 should be fine.

(Just to make it clear, I'm not saying this is an important problem)