Suffering is an emotional state triggered by desire. Desire is the attachment of value to imagined experiences.
So there's a minimal level of consciousness required to experience suffering, and a neuron farm probably doesn't meet it; that's why it's not morally significant. What sorts of organisms do meet it is another matter.
In the recent discussions here about the value of animals, several people have argued that what matters is "sentience", or the ability to feel. This goes back at least to Bentham: "The question is not, Can they reason? nor, Can they talk? but, Can they suffer?"
Is "can they feel pain" or "can they feel pleasure" really the right question, though? Say we research the biological correlates of pleasure until we understand how to make a compact and efficient network of neurons that constantly experiences maximum pleasure. Because we've thrown out nearly everything else a brain does, this has the potential for orders of magnitude more sentience per gram of neurons than anything currently existing. A group of altruists intends to create a "happy neuron farm" of these: is this valuable? How valuable?
(Or say a supervillain is creating a "sad neuron farm". How important is it that we stop them? Does it matter at all?)