10k vectors is pretty small. You should be able to get away with packing all of your vectors into a matrix in e.g. PyTorch and doing a simple matrix-vector product; it should take ~tens of milliseconds.
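Something like this, as a minimal sketch; the corpus and dimensions here are placeholder values, not anyone's real setup:

```python
import torch

num_vectors, dim = 10_000, 768          # hypothetical corpus size and embedding dim
corpus = torch.randn(num_vectors, dim)  # stand-in for your real embeddings
corpus = torch.nn.functional.normalize(corpus, dim=1)  # unit-norm -> dot product = cosine sim

def search(query: torch.Tensor, k: int = 5):
    query = torch.nn.functional.normalize(query, dim=0)
    scores = corpus @ query             # one matrix-vector product: (N, d) @ (d,) -> (N,)
    top = torch.topk(scores, k)         # k best matches by cosine similarity
    return top.indices, top.values

idx, sim = search(torch.randn(dim))
```

At 10k rows this is exact search over the whole corpus, and it's still basically instantaneous on a CPU.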
That won't help if his embeddings are bad:
> if the data is similar enough.
He's not complaining about slowness; otherwise he'd say something like 'we tried FAISS and it was still too many milliseconds per lookup'. If his embeddings are bad, then even an exact nearest-neighbor lookup that brute-forces every possible match (which, as you say, is more feasible than people usually think) won't help. You'll get the same bad answer.
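For reference, 'exact' here means something like FAISS's flat index, which scans every vector rather than approximating. A minimal sketch with made-up placeholder data:

```python
import numpy as np
import faiss

dim = 768
corpus = np.random.rand(10_000, dim).astype("float32")  # placeholder embeddings
index = faiss.IndexFlatL2(dim)  # flat index = brute-force exact search, no approximation
index.add(corpus)

query = np.random.rand(1, dim).astype("float32")
distances, indices = index.search(query, 5)  # exact top-5 by L2 distance
# If the embeddings don't separate your documents, the top-5 here will be
# the same unhelpful neighbors an approximate index would return.
```

The point being: if this exact search already returns the wrong documents, no amount of indexing cleverness downstream will fix it.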
Does anyone have advice, or could you recommend books, on how to accomplish vector search with large datasets?
What I've found so far is that vector DBs are very bad at larger datasets, even on the order of tens of thousands of vectors, if the data is similar enough. Some ideas we've gone through so far:
Any help, ideas, or recommendations on where I can read would be very much appreciated!