Learning from experts is very useful. On the other hand, experts' time is often scarce. If I have a random basic-level question about quantum physics, I would expect that rationalists who are experts in quantum physics and who don't know me would have little interest in getting a cold email from me asking them to answer a basic question about the field.
My Habryka (and thus Lightcone Infrastructure) model would worry about how to do the gatekeeping to protect experts' valuable time. That's likely a more important problem to solve than figuring out how to validate whether the expertise people claim for themselves is genuine.
Expertise is by its nature also complex. A surgeon and a massage therapist might both be experts in anatomy, but each understands different parts of it well. Expertise is acquired by studying a topic within a given paradigm, and when multiple paradigms gather knowledge about the same topic, it can be hard to know which of them will give the best answer to a given question.
Hmm, good point. Maybe money can solve it? Put up prices based on the value of your time, and let the incentives sort out the rest.
I imagine something like Stack Exchange, except that people could get certified for rationality and for domain knowledge, and then would have corresponding symbols next to their user names.
Well, the rationality certification would be a problem. An ideal test could provide results like "rational in general, except when the question is related to a certain political topic", because it would be difficult to find enough perfectly rational people, and the second-best choice would be to know their weaknesses.
I'm not sure we'd need anything that elaborate. The rationalist community isn't that big. I was thinking more that rationalists could self-nominate their expertise, or that a couple of people could come together and nominate someone if they notice that that person has gone in depth on the topic.
I've previously played with the idea of more elaborate schemes, including tests, track records and in-depth arguments. But of course the more elaborate the scheme, the more overhead there is, and I'm not sure that much overhead is affordable or worthwhile if one just wants to figure stuff out.
I agree. We could afford more overhead if we had thousands of rationalists active on the Q&A site. Realistically, we will be lucky if we get twenty.
But some kind of verification would be nice, to prevent the failure mode of "anyone who creates an account is automatically considered a rationalist". Similarly, if people simply declare their own expertise, it gives more exposure to overconfident people.
How to achieve this as simply as possible?
One idea is to have a network of trust. Some people (e.g. all employees of MIRI and CFAR) would automatically be considered "rationalists"; other people become "rationalists" only if three existing rationalists vouch for them. (A vouch can be revoked or added at any moment, and trust is evaluated recursively, so if you lose the flag, the people you vouched for might lose their flags too, unless they already have three other people vouching for them.) There is a list of skills, but you can only upvote or downvote other people as having a skill; if you get three votes, the skill is displayed next to your name (a tooltip shows the people who upvoted it, so if you say something stupid, they can be called out).
This would be the entire mechanism. The meta debate could happen in special LW threads, or perhaps in shortform: you could post there e.g. "I am an expert on X, could someone please confirm this? You can interview me over Zoom", or call out other people's misleading answers, etc.
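To make the vouching rules above concrete, here is a minimal sketch of how the trust flag could be evaluated, assuming a threshold of three vouches and a fixed seed set of automatically trusted members (e.g. MIRI/CFAR employees). All names here (`TrustNetwork`, `add_vouch`, `trusted`) are hypothetical, not an existing system:

```python
# Hypothetical sketch of the vouching mechanism described above.
# Assumes a threshold of three vouches and a seed set of members
# who are trusted unconditionally.

THRESHOLD = 3

class TrustNetwork:
    def __init__(self, seed):
        self.seed = set(seed)   # e.g. MIRI/CFAR employees
        self.vouches = {}       # target -> set of members vouching for them

    def add_vouch(self, voucher, target):
        self.vouches.setdefault(target, set()).add(voucher)

    def revoke_vouch(self, voucher, target):
        self.vouches.get(target, set()).discard(voucher)

    def trusted(self):
        # Evaluate recursively: start from the seed and keep adding anyone
        # with at least THRESHOLD vouches from already-trusted members until
        # nothing changes. Because trust is recomputed from scratch, revoking
        # a vouch can cascade: people vouched for by someone who loses the
        # flag may lose it too, unless three other trusted members vouch for them.
        trusted = set(self.seed)
        changed = True
        while changed:
            changed = False
            for target, vouchers in self.vouches.items():
                if target not in trusted and len(vouchers & trusted) >= THRESHOLD:
                    trusted.add(target)
                    changed = True
        return trusted


# Example: "dave" becomes trusted once three trusted members vouch for him.
net = TrustNetwork(seed={"alice", "bob", "carol"})
for voucher in ("alice", "bob", "carol"):
    net.add_vouch(voucher, "dave")
print(net.trusted())  # {'alice', 'bob', 'carol', 'dave'}
```

The same vote-counting logic would apply to skill endorsements: a skill is displayed next to your name once three trusted members have upvoted it.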
It's not quite what you want, but there's this: https://forum.effectivealtruism.org/community#individuals and this: https://eahub.org/
Somewhat related: Rationalists should have mandatory secret identities (or rather sufficiently impressive identities).
As far as I can tell:
So in my view there's basically a lot of utility in finding the rationalists most specialized in each topic to make it easy to learn stuff from them. Has anyone worked on this in the past? Is this something someone (Lightcone Infrastructure, perhaps?) would be interested in setting up?
And I guess if you know of any underrated rationalists who specialize in some topic, feel encouraged to share them in the comments on this post?
This also seems to apply to non-rationalists, but that's not as important for this purpose.