I deliberately didn't say that the majority of LessWrongers would give that answer, partly because LessWrong is only about one-third computer scientists/programmers.
Fortunately we have the census, and the census does ask about profession. Among those who listed their profession as Computers (AI), Computers (practical: IT, programming, etc.), or Computers (other academic, computer science), 14.4% think that unfriendly AI is the biggest threat.
LessWrong isn't a community that focuses much on bioengineered pandemics, yet among those same respondents 23.7% still think bioengineered pandemics are the greatest threat.
We are a community that actually cares about data.
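For anyone who wants to check such numbers themselves, here is a minimal sketch of how the percentages could be computed from a survey export. The file name and the column labels (Profession, GlobalCatastrophicRisk) are my assumptions for illustration; the actual census data may be organized differently.

```python
import pandas as pd

# Hypothetical export of the census responses (file name is an assumption)
df = pd.read_csv("lesswrong_census.csv")

# The three computer-related profession categories mentioned above
computer_professions = {
    "Computers (AI)",
    "Computers (practical: IT, programming, etc.)",
    "Computers (other academic, computer science)",
}
programmers = df[df["Profession"].isin(computer_professions)]

# Share of computer-professional respondents naming each risk as the biggest threat
risk_shares = programmers["GlobalCatastrophicRisk"].value_counts(normalize=True) * 100
print(risk_shares.round(1))
```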
If I were to ask the question "What threat poses the greatest risk to society/humanity?" in several communities, I would expect the answers to follow a predictable pattern:
If I asked the question on an HBD blog, I'd probably get demographic disaster/dysgenics/immigration as the answer.
If I asked a group of environmentalists, they'd probably say global warming or pollution.
If I asked the question on a leftist blog, I might get the answer growing inequality/exploitation of workers.
If I asked Catholic bishops, they might say abortion/sexual immorality.
And if I were to ask the question on LessWrong (which is heavily populated by computer scientists and programmers), many would respond with unfriendly AI.
One of these groups might be right; I don't know. However, I would treat all of their claims with caution.
Edit: This may not be a bad thing from an instrumental rationality perspective. If you think that the problem you're working on is really important, then you're more likely to put serious effort into solving it.