What does the community here think of climate change as a potential existential risk? While strategies for combating climate change are fairly straightforward, the lack of political capital behind meaningful climate reform and legislation suggests the problem will get substantially worse before it gets better, and the consequences of ignoring it could be severe indeed.
Should the rationality/x-risks community be spending more effort on evaluating this idea and exploring potential solutions? It certainly seems like a big problem, and the current trajectory is quite worrisome. On the other hand, the issue is a political minefield and could risk entangling the community in political squabbling, potentially jeopardizing its ability to act on other threats. What do you guys think?
Any ideas about how capable computer programs would need to be to give significant help to researchers - with hypothesis generation, with evaluating whether research programs make sense, or with checking whether abstracts match experimental results?
It seems to me that some of this could be done without even having full natural language.
As long as we're talking about, as you say, significant help rather than solving the whole problem - and about what can be done without full natural language understanding - I think this is one of the more promising areas of AI research for the next couple of decades.
I talked a few months ago to somebody doing biomedical research - one of the smartest guys I know - and asked what AI might be able to do to make his job easier. His answer was that the one thing likely to be feasible in the near future that would really help would be better text mining: something that could do better than plain keyword matching at, e.g., flagging papers likely to be relevant to a particular problem.
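To make "better than keyword matching" concrete: even a very simple step up is to rank abstracts by TF-IDF cosine similarity to a description of the problem, so that papers sharing several rarer, more informative terms with the query beat papers that merely contain one common keyword. This is a minimal stdlib-only sketch, not anything resembling the state of the art, and the abstracts and query in it are invented examples:

```python
# Minimal sketch of relevance ranking beyond exact keyword matching:
# score paper abstracts against a query by TF-IDF cosine similarity.
# The corpus and query below are hypothetical illustrations.
import math
from collections import Counter

def tokenize(text):
    """Lowercase and keep purely alphabetic tokens (crude but sufficient here)."""
    return [w for w in text.lower().split() if w.isalpha()]

def tfidf_vectors(docs):
    """Return one sparse {word: weight} TF-IDF vector per document."""
    tokenized = [tokenize(d) for d in docs]
    n = len(tokenized)
    df = Counter()                       # document frequency of each word
    for toks in tokenized:
        df.update(set(toks))
    vectors = []
    for toks in tokenized:
        tf = Counter(toks)
        # term frequency * inverse document frequency; words appearing
        # in every document get weight 0 and stop mattering
        vec = {w: (c / len(toks)) * math.log(n / df[w]) for w, c in tf.items()}
        vectors.append(vec)
    return vectors

def cosine(a, b):
    """Cosine similarity between two sparse vectors."""
    dot = sum(wt * b.get(w, 0.0) for w, wt in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank(abstracts, query):
    """Rank abstracts by similarity to the query, most relevant first."""
    vecs = tfidf_vectors(abstracts + [query])  # query shares the corpus IDF
    qv = vecs[-1]
    scored = [(cosine(qv, v), i) for i, v in enumerate(vecs[:-1])]
    return sorted(scored, reverse=True)

abstracts = [
    "protein folding kinetics in yeast cells",
    "neural network training dynamics",
    "kinase signaling pathways in yeast",
]
query = "yeast protein kinase"
ranking = rank(abstracts, query)  # list of (score, abstract_index), best first
```

A raw keyword match on "yeast" alone would tie the first and third abstracts; here the third wins because it shares two comparatively rare terms with the query, while the unrelated second abstract scores zero. Real systems would add stemming, stopword removal, and nowadays learned embeddings, but the ranking-by-similarity idea is the same.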