Buh? We can write systems with negative reinforcement. Say, a robot that performs various movements then releases a ball, detects how far from a target the ball landed, and executes this movement sequence less often if it was too far. I think "the robot avoids missing the target" is a fair description, and "the robot feels pain when it misses the target" is a completely bogus one. Do you disagree?
First off, I believe that consciousness is not discrete. That is, I can be more conscious than a dog. Given that consciousness isn't necessarily zero or one, it seems unlikely to ever be exactly zero. As such, all systems have consciousness. A simple robot has a simple mind, with little consciousness. You can cause it pain, but it won't be much pain.
Perhaps the robot simply isn't truly aware of anything. In that case, it's not aware that it should avoid missing the target, and it feels no pain. I just don't see why adding an extra level would make it aware...
I ended up reading an article about animal suffering by the Christian apologist William Craig. Forgive the source, please.
He continues the argument here.
How decent do you think this argument is? I don't know where to look to evaluate the core claim, as I know very little neuroscience myself. I'm quite concerned about animal suffering, and choose to be vegetarian largely on the basis of that concern. How much should my decision on that be affected by this argument?
EDIT: David_Gerard wins by doing the basic Google search that I neglected. It seems that the argument is flawed. In particular, animals other than primates do have prefrontal cortices.