LW is one of the few informal places that take existential risk seriously. Researchers can post here to describe proposed or ongoing research projects, seeking consultation on possible X-risk consequences of their work. Commenters should write their comments with the understanding that many researchers prioritize interest first and the existential risk/social benefit of their work second, but that discussions of X-risk may steer researchers toward projects with less X-risk and more social benefit.
Much of my current research (in philosophy, at LSE) concerns the general themes of "objectivity" and (strategies for strengthening) "co-operation", especially in politics. I didn't start doing research on these themes because of any concern with existential risk. However, it could be argued that in order to reduce X-risk, the political system needs to be improved: people need to become less tribal, more co-operative, and more inclined to accept rational arguments, both between and within nation states (though I mostly do research on the latter). In any case, here is what I'm working on, or considering working on, in more precise terms:
1) Strategies for detecting tribalism. People's beliefs on independent but politically controversial questions, such as to what extent stricter gun laws would reduce the number of homicides and to what extent climate change is man-made, tend to be "suspiciously coherent" (i.e. either you take the pro-Republican position on all of these questions, or the pro-Democrat position on all of them). The best explanation of this is that most people acquire whatever empirical beliefs the majority of their fellow tribe members hold instead of considering the actual evidence. I'm developing statistical techniques intended to detect this sort of tribalism or bias. For instance, people could take tests of their degree of political bias. Alternatively, you could try to read off their degree of bias from existing data. Making these inferences sufficiently precise and reliable promises to be a tough task, however.
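To make the "suspicious coherence" idea concrete, here is a minimal sketch (all data invented): each respondent answers several logically independent empirical questions, and we measure how strongly the answers line up with one another. If the questions really are independent, inter-question correlations should be near zero; a high mean absolute correlation suggests answers are being driven by a single (tribal) axis rather than by the evidence on each question.

```python
import numpy as np

# Hypothetical survey: rows are respondents, columns are stances (+1 / -1)
# on logically independent but politically charged empirical questions
# (gun laws, climate change, ...).
responses = np.array([
    [+1, +1, +1, +1],   # straight "party A" line
    [-1, -1, -1, -1],   # straight "party B" line
    [+1, -1, +1, -1],   # mixed answers
    [+1, +1, +1, -1],
    [-1, -1, +1, -1],
])

def coherence_score(stances: np.ndarray) -> float:
    """Mean absolute pairwise correlation between questions.

    Near 0 if answers to independent questions are independent;
    near 1 if everyone answers along a single party line.
    """
    corr = np.corrcoef(stances, rowvar=False)  # question-by-question matrix
    off_diag = corr[~np.eye(corr.shape[0], dtype=bool)]
    return float(np.abs(off_diag).mean())

score = coherence_score(responses)
print(round(score, 2))
```

This is of course far cruder than a real statistical test would need to be: actual work would have to control for the possibility that positions correlate for legitimate reasons (shared underlying values, correlated evidence), which is exactly the hard part mentioned above.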
2) Strategies for detecting "degrees of selfishness". This strategy is quite similar, but rather than testing the correlation between your empirical beliefs on controversial questions and those of the party mainstream, what is tested is the correlation between your opinions on policy and the policies that suit your interests. For instance, if you are male, have a high income, drive a lot and don't smoke, and at the same time take an anti-feminist stance, are against progressive taxes, are against petrol taxes, and want to outlaw smoking, you would be given a high "selfishness score" (probably this score should be given another, less toxic, name). This would serve to highlight selfish behaviour among voters and politicians and promote objective and altruistic behaviour.
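A crude sketch of how such a score could be computed (all codings and the example profile are invented for illustration): for each policy, code whether it serves the respondent's material interests and whether they support it, then take the fraction of stances that coincide with self-interest.

```python
def selfishness_score(interests: list[int], stances: list[int]) -> float:
    """Fraction of policy stances that coincide with self-interest.

    interests[i] = +1 if policy i benefits this person, -1 if it costs them.
    stances[i]   = +1 if they support policy i, -1 if they oppose it.
    A score near 0.5 is roughly what chance (or pure principle) predicts;
    a score near 1.0 suggests opinions track interests.
    """
    if len(interests) != len(stances):
        raise ValueError("vectors must have equal length")
    matches = sum(i == s for i, s in zip(interests, stances))
    return matches / len(interests)

# The example from the text: a high earner who drives a lot and doesn't
# smoke. Progressive taxes and petrol taxes cost him; a smoking ban doesn't.
interests = [-1, -1, +1]   # progressive tax, petrol tax, smoking ban
stances   = [-1, -1, +1]   # opposes both taxes, supports the ban
print(selfishness_score(interests, stances))  # -> 1.0
```

The coding step (deciding which policies benefit whom) is obviously the contentious part, and a real instrument would also need to correct for the base rate at which principled positions happen to align with interests.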
3) Voting Advice Applications (VAAs) - i.e. tests of which party is closest to your own views - are already being used to try to increase interest in politics and make people vote more on the basis of policy issues, and less on emotional factors such as which politician they find most attractive or which party enjoys success at the moment (the bandwagon effect). However, most voting advice applications are still appallingly bad, since many important questions are typically left out. Hence it's quite rational, in my opinion, for voters to disregard their advice. I'd like to try to construct a radically improved VAA, which would be more than just a toy. Instead, the goal would be to construct a test that would be better at identifying which party best satisfies the voters' considered preferences than the voters' own intuitive judgments. If people then actually used these VAAs, this would, hopefully, lead to the politicians whose policies correspond most closely to those of the voters getting elected, as is intended in a democracy, and to politics getting more rational in general. The downside is that this is very hard to do in practice and that the market for VAAs is big.
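The core matching step of a VAA can be sketched very simply (parties, questions, and positions below are all invented): the voter and each party answer the same policy questions on a scale, and the recommendation is the party at the smallest average distance from the voter. The hard part this sketch ignores is precisely what the text identifies: choosing the right questions and weighting them so the result tracks the voter's *considered* preferences.

```python
# Positions on a -2 (strongly against) .. +2 (strongly for) scale,
# one entry per policy question. All values are hypothetical.
PARTY_POSITIONS = {
    "Party A": [+2, -1, +1, -2],
    "Party B": [-2, +1, -1, +2],
    "Party C": [ 0,  0, +2, -1],
}

def closest_party(voter: list[int], parties: dict[str, list[int]]) -> str:
    """Recommend the party with the smallest mean absolute distance
    from the voter's answers."""
    def distance(positions: list[int]) -> float:
        return sum(abs(v - p) for v, p in zip(voter, positions)) / len(voter)
    return min(parties, key=lambda name: distance(parties[name]))

print(closest_party([+2, -1, 0, -2], PARTY_POSITIONS))  # -> Party A
```

A less toy-like version would weight each question by how important it is to the voter, and would have to defend both the question list and the coding of party positions against charges of bias.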
4) Systematically criticizing politicians' and other influential people's arguments. This could be done either by professionals (e.g. philosophers) or on a wiki-like webpage, something that is described here. It would be great if you could somehow gamify this; e.g., if, in election debates, referees gave and deducted points in real time, and viewers could see this (e.g. through an app) instantaneously while watching the debate.
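The real-time scoring mechanism could be as simple as a running tally per speaker, updated after each refereed event and pushed to viewers. A toy sketch (speakers, events, and point values are all invented):

```python
from collections import defaultdict

def running_tally(events):
    """events: (speaker, points) pairs in broadcast order.

    Yields a snapshot of the cumulative scoreboard after each event,
    as a viewer's app would display it.
    """
    scores = defaultdict(int)
    for speaker, points in events:
        scores[speaker] += points
        yield dict(scores)

debate = [
    ("Candidate X", +2),   # cites a relevant statistic correctly
    ("Candidate Y", -1),   # ad hominem
    ("Candidate Y", +3),   # exposes a real inconsistency
    ("Candidate X", -2),   # dodges the question
]

for board in running_tally(debate):
    print(board)
# final board: {'Candidate X': 0, 'Candidate Y': 2}
```

The interesting design questions are all upstream of this, of course: who the referees are, what the point schedule rewards, and how to keep the scoring itself from becoming another tribal battleground.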
Any input regarding how tenable and important these ideas are in general (especially in relation to each other), and how important they are for addressing X-risk, is welcome.
How about measuring an "altruism score" instead?
I think a huge issue with most of these is that politicians get asked to take stances on questions before an election. These tools usually don't evaluate at all what politicians actually do when in office. For a healthy democracy it's much more important to have feedback mechanisms that punish politicians for doing the wrong things while in office instead of punishing them f... (read more)