Personally, I've been open with my friends and family about my lack of religion and the disdain I have for organized religions in general. I've been that way my whole life, and some people still give me flak for it. I look at such occasions as "teaching moments," when I can help bring a little light into someone else's muddy thinking.
But that's just me. If the question is, what advice would I give to a person who is surrounded by religious influences, and for whom religion makes up a major component of their social support system, then my answer is:
Think what you like, but behave like others.
You glean benefit from your social networks, from your family, from your friends. There is no reason to cast yourself into the role of the "lost sheep" that needs to be helped back onto the path. Continue living your normal life as you always have. Let your new rationality cast your associations in a new light. Learn what you can, for as long as you can keep your mouth shut.
I would expect that over time you will naturally start interacting with people who share your new outlook, and that the religious folks will gradually fall away as they pursue their own ends.
There's no need to antagonize believers who mean you no harm.
I think the reason I find Brooks' ideas interesting is that they seem to mirror the way natural intelligences came about.
Biological evolution seems to amount to nothing more than local systems adapting to survive in an environment, and then aggregating into more complex systems. We know that this strategy has produced intelligence at least once in the history of the universe, and thus it seems to me a productive example to follow in attempting to create artificial intelligence as well.
Now, I don't know what the state of the art is for the emergent AI school of thought at the moment, but isn't it possible that the challenge isn't solving each of the little problems that feedback loops can help overcome, but rather enfolding the lessons learned by these simple systems into more complex aggregate systems?
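To make concrete what I mean by "simple systems aggregating," here's a toy Python sketch. This is entirely my own illustration, not Brooks' code: the behavior names and sensor fields are made up, and the fixed-priority arbitration is a crude simplification of his subsumption wiring. Each layer is a dumb feedback loop from sensor readings to an action, and stacking them produces behavior that none of the loops has on its own:

```python
# Toy illustration of layered feedback loops (not Brooks' actual design).
# Each behavior maps sensor readings to an action, or returns None to
# defer to the layers below it.

def avoid(sensors):
    """Reflex layer: back away from anything too close."""
    if sensors["obstacle_distance"] < 0.5:
        return "reverse"
    return None  # no opinion; let another layer decide

def seek_light(sensors):
    """Higher layer: steer toward a strong light reading."""
    if sensors["light_level"] > 0.8:
        return "turn_toward_light"
    return None

def wander(sensors):
    """Default layer: keep moving when nothing more urgent applies."""
    return "forward"

# Listed from highest priority to lowest; the first layer with an
# opinion wins, so the layers "subsume" one another without any
# central planner coordinating them.
LAYERS = [avoid, seek_light, wander]

def control_step(sensors):
    for behavior in LAYERS:
        action = behavior(sensors)
        if action is not None:
            return action
    return "idle"

print(control_step({"obstacle_distance": 0.3, "light_level": 0.9}))  # reverse
print(control_step({"obstacle_distance": 2.0, "light_level": 0.9}))  # turn_toward_light
print(control_step({"obstacle_distance": 2.0, "light_level": 0.1}))  # forward
```

The point is that no layer knows about the others; whatever "intelligence" shows up lives in how the loops are stacked, which is roughly the aggregation step I'm asking about.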
That being said, you may be right: it may be easier (at this point) to program AI systems to narrow their search field with information about probability distributions and so forth. But could it not be that this strategy is fundamentally limited in the same way that expert systems are limited? That is, the system is only as "smart" as its knowledge base (or probability distributions) allows it to become, and it fails as "general" AI?