Minor quibble: the conscious reasons for someone's actions may not be signaling, but that may be little more than a rationalization for an unconsciously motivated attempt to signal some quality.
If you read the rest of the comment to which you are replying, I pointed out that it's effectively best to assume that nobody knows why they're doing anything, and that we're simply doing what's been rewarded.
That some of the rewarded behaviors can be classed as "signaling" may actually have less to do (evolutionarily) with the person exhibiting the behavior, and more to do with the person(s) rewarding or demonstrating those behaviors.
IOW, we may not have an instinct to "signal", but only an instinct to imitate what we see others responding to, and to do more of whatever gets a favorable response. That would allow our motivation to be far less conscious, for one thing.
(Somewhat-unrelated point: the most annoying thing about trying to study human motivation is the implicit assumption we have that people should know why they do things. But viewed from an ev. psych perspective, it makes more sense to ask why there would be any reason for us to know anything about our own motivations at all. We don't expect other animals to have insight into their own motivation, so why would we expect that, at 5% difference from a chimpanzee, we should automatically know everything about our own motivations? It's absurd.)
I'm not sure that the class of all actions that are motivated by signaling is the same as (or a subset of) the class of all actions that are rewarded. At least, if by rewarded, you mean something other than the rewards of pleasure and pain that the brain gives.
Response to Man-with-a-hammer syndrome.
It's been claimed that there is no way to spot Affective Death Spirals, or cultish obsession with the One Big Idea of Everything. I'd like to posit a simple way to spot such error, with the caveat that it may not work for every case.
There's an old game called Two Truths and a Lie. I'd bet almost everyone's heard of it, but I'll summarize it just in case. A person makes three statements, and the other players must guess which of those statements is false. The statement-maker gets points for fooling people, people get points for not being fooled. That's it. I'd like to propose a rationalist's version of this game that should serve as a nifty check on certain Affective Death Spirals, runaway Theory-Of-Everythings, and Perfectly General Explanations. It's almost as simple.
Say you have a theory about human behaviour. Get a friend to do a little research and assert three factual claims about how people behave that your theory would realistically apply to. At least one of these claims must be false. See if you can explain every claim using your theory before learning which one's false.
If you can come up with a convincing explanation for all three statements, you must be very cautious when using your One Theory. If it can explain falsehoods, there's a very high risk you're going to use it to justify whatever prior beliefs you have. Even worse, you may use it to infer facts about the world, even though it is clearly not consistent enough to do so reliably. You must exercise the utmost caution in applying your One Theory, if not abandon reliance on it altogether. If, on the other hand, you can't come up with a convincing way to explain some of the statements, and those turn out to be the false ones, then there's at least a chance you're on to something.
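The pass/fail rule above can be sketched as a tiny check. Everything here is a hypothetical illustration I'm supplying (the function name `ttal_test`, the example claims and their truth values are made up, not real research findings): the key point is that the human judgment of "can I convincingly explain this?" happens before the truth values are revealed.

```python
def ttal_test(explains, claims):
    """Two-Truths-and-a-Lie check for a theory.

    claims: list of (statement, is_true) pairs; at least one must be false.
    explains(statement) -> bool: whether the theory yields a convincing
    explanation, judged BEFORE the truth values are revealed.

    Returns True if the theory explained no falsehoods (it discriminated),
    False if it explained at least one false claim (Perfectly General
    Explanation: don't rely on it for inference).
    """
    return not any(explains(s) for s, is_true in claims if not is_true)

# A theory that can explain anything fails the test by construction.
explains_everything = lambda claim: True
claims = [
    ("people gossip more about higher-status rivals", True),   # hypothetical
    ("people give more to charity when watched", True),        # hypothetical
    ("people prefer asymmetric faces in partners", False),     # hypothetical
]
ttal_test(explains_everything, claims)  # -> False: explained the falsehood
```

A theory passes only when the claims it can't explain line up with the claims that are actually false, which is exactly the discrimination the game is probing for.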
Come to think of it, this is an excellent challenge to any proponent of a Big Idea. Give them three facts, some of which are false, and see if their Idea can discriminate. Just remember to be ruthless when they get it wrong; it doesn't prove their idea is totally wrong, only that reliance upon it would be.
Edited to clarify: My argument is not that one should simply abandon a theory altogether. In some cases that may be justified: if all the theory has going for it is its predictive power, and you show it lacks that, toss it. But in the case of broad, complex theories that genuinely can explain many divergent outcomes, this exercise should teach you not to rely on the theory as a means of inference. Yes, you should believe in evolution. No, you shouldn't make broad inferences about human behaviour without any data merely because they are consistent with evolution, unless your application of the theory of evolution is so precise and well-informed that you can consistently pass the Two-Truths-and-a-Lie Test.