This is a linkpost for https://www.oreilly.com/learning/introduction-to-local-interpretable-model-agnostic-explanations-lime
It's an interesting idea, though I wouldn't call it "model-agnostic".
Basically they're perturbing (jiggling) the inputs and figuring out which ones you can't change without the prediction (classification) changing as well. In effect they are answering the question "given this model, which input values are essential to producing this particular output?" A rough sketch of the idea is below.
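
Here's a minimal sketch of that perturb-and-fit idea, not the official `lime` package API. It assumes a scikit-learn random forest as the stand-in black box, the iris dataset, and a ridge regression as the local interpretable surrogate; the noise scale and proximity kernel are arbitrary illustrative choices.

```python
# Sketch of the LIME idea: perturb an input, query the black-box model on the
# perturbations, and fit a small locally-weighted linear model whose
# coefficients show which inputs the prediction depends on near that point.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

X, y = load_iris(return_X_y=True)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

x0 = X[0]                         # the instance we want to explain
rng = np.random.default_rng(0)

# 1. Perturb the instance with noise scaled to each feature's spread
samples = x0 + rng.normal(scale=X.std(axis=0) * 0.5, size=(500, X.shape[1]))

# 2. Ask the black box for its probability of x0's predicted class
target_class = black_box.predict(x0.reshape(1, -1))[0]
probs = black_box.predict_proba(samples)[:, target_class]

# 3. Weight perturbed points by proximity to the original instance
dists = np.linalg.norm(samples - x0, axis=1)
weights = np.exp(-(dists ** 2) / (2 * dists.std() ** 2))

# 4. Fit a simple interpretable surrogate on the weighted perturbations
surrogate = Ridge(alpha=1.0).fit(samples, probs, sample_weight=weights)

# Coefficients near zero: the prediction barely reacts to that input here.
# Large coefficients: changing that input locally moves the prediction.
print(dict(zip(load_iris().feature_names, surrogate.coef_.round(3))))
```

The surrogate is only trusted near x0, which is why the fit is proximity-weighted; that's the "local" in LIME, and it's also why the explanation can differ from instance to instance even for the same model.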