I am really not sure why neuro-symbolic systems aren't considered as alternatives to the current black-box ones.
A concrete example I have found (and am currently studying) is HOUDINI (https://arxiv.org/pdf/1804.00218). Essentially, it builds neural networks out of higher-order combinators (map, fold, etc.) that are found via enumeration/genetic programming searches. Once a program is found, the higher-order combinators are "transformed" into trainable networks and added to an ever-growing library of "neural functions". The safety provided by such systems comes in the form of understanding the composition of functions that forms the solution to a problem. Perhaps mechanistic interpretability could be further used to dissect the inner workings of the individual trained networks.
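To make the idea concrete, here is a minimal toy sketch (not HOUDINI's actual implementation) of what "neural networks composed via higher-order combinators" looks like: a small trainable module is lifted over a sequence with a `map` combinator, and the results are aggregated with a `fold` combinator. All names here (`Linear`, `map_c`, `fold_c`) are my own illustrative inventions.

```python
import numpy as np

rng = np.random.default_rng(0)

class Linear:
    """A minimal trainable module: y = Wx + b (parameters W, b)."""
    def __init__(self, d_in, d_out):
        self.W = rng.normal(0, 0.1, (d_out, d_in))
        self.b = np.zeros(d_out)
    def __call__(self, x):
        return self.W @ x + self.b

def map_c(f):
    """map combinator: lift a module pointwise over a sequence of inputs."""
    return lambda xs: [f(x) for x in xs]

def fold_c(g, init):
    """fold combinator: thread an accumulator through a sequence."""
    def run(xs):
        acc = init
        for x in xs:
            acc = g(acc, x)
        return acc
    return run

# A "program": embed every item, then sum the embeddings.
# The symbolic structure (map, then fold) is transparent; only the
# Linear module's parameters are learned.
embed = Linear(4, 3)
program = lambda xs: fold_c(lambda acc, v: acc + v, np.zeros(3))(map_c(embed)(xs))

xs = [rng.normal(size=4) for _ in range(5)]
out = program(xs)
print(out.shape)  # (3,)
```

The point of the example is that the overall dataflow (a map followed by a fold) is human-readable by construction, so interpretability effort only needs to target the small learned modules inside it.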
Please describe why this is not a viable course for AI safety. For that matter, why are alternative technologies not considered at all (or, if they are, please mention them)? My initial guess would be that such systems are either not competitive enough or amount to "starting from scratch". However, these points might not apply to neuro-symbolic systems.
https://www.lesswrong.com/posts/gebzzEwn2TaA6rGkc/deep-learning-systems-are-not-less-interpretable-than-logic