I think this is a bad thing, and that we need to try to avoid it.
I also think we are biased not just by wanting the things we are good at to be the most important things, but also by wanting to agree with the community.
Here is a small step we can take in the right direction.
If you are one of the 15 people who downvoted the following post, stop for a minute and ask yourself why. I am not saying whether the post deserved to be downvoted. However, I would not be surprised if several people downvoted it for a bad reason.
http://lesswrong.com/r/discussion/lw/ji0/the_first_ai_probably_wont_be_very_smart/
And if you come up with a reason that isn't already in the comments, please add it. It's a lot less lonely being in the minority if the hate you receive isn't just a faceless wall of downvotes.
If I were to ask several communities the question "What threat poses the greatest risk to society/humanity?", I would expect the answers to follow a predictable trend:
If I asked on an HBD blog, I'd probably get demographic disaster, dysgenics, or immigration as the answer.
If I asked a group of environmentalists, they'd probably say global warming or pollution.
If I asked on a leftist blog, I might get growing inequality or the exploitation of workers.
If I asked Catholic bishops, they might say abortion or sexual immorality.
And if I asked on LessWrong (which is heavily populated by computer scientists and programmers), many would respond with unfriendly AI.
One of these groups might be right; I don't know. However, I would treat all of their claims with caution.
Edit: This may not be a bad thing from an instrumental rationality perspective. If you think the problem you're working on is really important, you're more likely to put serious effort into solving it.