(95+% accuracy on training data sets)
I have only very limited knowledge in this area, so I could be misreading you. But doesn't "in training data sets" mean that the process had been developed using that specific data? That could mean that you have a program really good at reconstructing that piece of mouse brain, but not at reconstructing mouse brain in general. We had this problem in the last research project I worked on, where we'd use a gene expression data set to predict mood in bipolar subjects. We had to test the predictions on a separate data set from the one used in development to make sure it wasn't overfit to the training data. Is the same thing the case for your work, or am I misunderstanding your use of "training data"?
It is a good insight to notice that this is a potential problem, which is generally referred to as generalization error. If you train a classifier or fit a regression on some data, there is always a chance that it will perform poorly on new data because of larger-scale patterns that were poorly represented in the training data.
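To make the failure mode concrete, here's a minimal toy sketch (stdlib only, with made-up data, not the mouse-brain pipeline): a 1-nearest-neighbour "classifier" that memorises its training set scores perfectly on that set, yet does much worse on a held-out set drawn from the same noisy distribution.

```python
# Toy illustration of why high training-set accuracy can mislead:
# a memorising classifier looks perfect on its own training data.
import random

random.seed(0)

def make_data(n):
    # Noisy labels: the feature x predicts the label only 70% of the time.
    data = []
    for _ in range(n):
        x = random.uniform(0, 1)
        label = (x > 0.5) if random.random() < 0.7 else (x <= 0.5)
        data.append((x, label))
    return data

train = make_data(200)
test = make_data(200)

def predict(x, memory):
    # 1-NN: return the label of the closest memorised point.
    return min(memory, key=lambda p: abs(p[0] - x))[1]

def accuracy(data, memory):
    return sum(predict(x, memory) == y for x, y in data) / len(data)

print(f"training accuracy: {accuracy(train, train):.2f}")  # 1.00 (memorisation)
print(f"held-out accuracy: {accuracy(test, train):.2f}")   # noticeably lower
```

Evaluated on its own training data, each point's nearest neighbour is itself, so training accuracy is 100% regardless of how noisy the labels are; only the held-out score reveals the problem.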
However, the scientists performing this work are also aware of this. This is where algorithmic learning theory, which underpins machine learning methods, is so useful: you can derive tight bounds on generalization error.
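As one standard example of such a bound (a textbook result, not something specific to the work discussed above): for a finite hypothesis class $\mathcal{H}$ and $m$ i.i.d. training examples with 0-1 loss, with probability at least $1 - \delta$ every $h \in \mathcal{H}$ satisfies

```latex
R(h) \;\le\; \hat{R}(h) + \sqrt{\frac{\ln|\mathcal{H}| + \ln(1/\delta)}{2m}}
```

where $R(h)$ is the true error and $\hat{R}(h)$ the training error. The gap shrinks as you add data and grows with the richness of the model class, which is exactly the overfitting trade-off you're describing.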
Here's a great article by Paul Allen about why the singularity won't happen anytime soon. Basically, a lot of the things we do are just not amenable to awesome-looking exponential graphs.