...while for B, "2" doesn't really mean anything; it's just a symbol that it blindly manipulates.
I think I understand what concepts you were gesturing towards with this example, but for me the argument doesn't go through. The communication failure suggests to me that you might need to dissolve some questions around syntax, semantics, and human psychology. Absent a clear understanding here, I would fear a fallacy of equivocation on the term "meaning" in other contexts as well.
The problem is that B seems to output a "3" every single time it sees a "2" in the input. By "3" it functionally communicates "there was a '2' in the corresponding input" and presumably a "2" in the output functionally communicates some stable fact of the input such as that there was a "Q" in it.
This is a different functional meaning than the one A communicates, but the distance between algorithms A and B isn't very far. One involves a little bit of code and the other involves a little bit more, but these are both relatively small scripts that can be computed using little more memory than is needed to store the input itself.
I could understand someone using the term "meaning" to capture a sense where A and B are both capable of meaning things because they functionally communicate something to a human observer by virtue of their stably predictable input/output relations. Equally, however, I would accept a sense where neither algorithm was capable of meaning something because (to take one trivial example of the way humans are able to "mean" or "not mean" something) neither algorithm is capable of internally determining the correct response, examining the speech context they find themselves in, and emitting either the correct response or a false response that better achieves their goals within that speech context (such as to elicit laughter or to deceive the listener).
You can't properly ask algorithm A "Did you really mean that output?" and get back a sensible answer, because algorithm A has no English parsing abilities, nor a time varying internal state, nor social modeling processes capable of internally representing your understanding (or lack of understanding) of its own output, nor a compressed internal representation of goal outcomes (where fiddling with bits of the goal representation would leave algorithm A continuing to produce complex goal directed behavior except re-targeted at some other goal than the outcome it was aiming for before its goal representation was fiddled with).
I'd be willing to accept an argument that used "meaning" in a very mechanistic sense of "reliable indication" or a social sense of "honest communication where dishonest communication was possible" or even some other sense that you wanted to spell out and then re-use in a careful way in other arguments... But if you want to use a primitive sense of "meaning" that applies to a calculator and then claim that that is what I do when I think or speak, then I don't think I'll find it very convincing.
My understanding of words like "meaning" and conjugations of "to be" starts from the assumption that they are levers for referring to the surface layer of enormously complex cognitive modeling tools for handling many radically different kinds of phenomena, where it is convenient to paper over the complexity in order to get some job done, like dissecting precisely what a love interest "meant" when they said you "were being" coy. What "means" means, or what "were being" means in that sort of context, is patently obvious to your average 13 year old... except that it's really hard to spell that sort of thing out precisely enough to re-implement it in code or express the veridicality conditions in formal logic over primitive observables.
We are built to model minds, just like we are built to detect visual edges. We do these tasks wonderfully and we introspect on them terribly, which means re-using a concept like "meaning" in your foundational moral philosophy is asking for trouble :-P
I think my previous argument was at least partly wrong or confused, because I don't really understand what it means for a computation to mean something by a symbol. Here I'll back up and try to figure out what I mean by "mean" first.
Consider a couple of programs. The first one (A) is an arithmetic calculator. It takes a string as input, interprets it as a formula written in decimal notation, and outputs the result of computing that formula. For example, A("9+12") produces "21" as output. The second (B) is a substitution cipher calculator. It "encrypts" its input by substituting each character using a fixed mapping. It so happens that B("9+12") outputs "c6b3".
What do A and B mean by "2"? Intuitively it seems that by "2", A means the integer (i.e., abstract mathematical object) 2, while for B, "2" doesn't really mean anything; it's just a symbol that it blindly manipulates. But A also just produces its output by manipulating symbols, so why does it seem like it means something by "2"? I think it's because the way A manipulates the symbol "2" corresponds to how the integer 2 "works", whereas the way B manipulates "2" doesn't correspond to anything, except how it manipulates that symbol. We could perhaps say that by "2" B means "the way B manipulates the symbol '2'", but that doesn't seem to buy us anything.
(Similarly, by "+" A means the mathematical operation of addition, whereas B doesn't really mean anything by it. Note that this discussion assumes some version of mathematical platonism. A formalist would probably say that A also doesn't mean anything by "2" and "+" except how it manipulates those symbols, but that seems implausible to me.)
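The two programs can be sketched concretely. This is a minimal illustration, not a definitive implementation: A here handles only "+" for brevity, and B's substitution table is hypothetical apart from the four mappings implied by B("9+12") == "c6b3".

```python
def A(formula: str) -> str:
    """Arithmetic calculator: interprets the input as a sum of decimal
    integers and outputs the result in decimal notation.
    (Sketch: supports only '+'.)"""
    return str(sum(int(term) for term in formula.split("+")))

# Fixed character substitution for B. Only the four entries implied by
# the example B("9+12") == "c6b3" are specified; the rest of the table
# is left open (unmapped characters pass through unchanged here).
SUBSTITUTION = {"9": "c", "+": "6", "1": "b", "2": "3"}

def B(text: str) -> str:
    """Substitution cipher: replaces each character via a fixed mapping,
    with no regard to any arithmetic reading of the symbols."""
    return "".join(SUBSTITUTION.get(ch, ch) for ch in text)
```

The contrast in the text shows up directly: A's manipulation of "2" tracks integer arithmetic (A("2+3") yields "5", agreeing with 2 + 3 = 5), while B's output tracks nothing beyond the substitution table itself.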
Going back to meta-ethics, I think a central mystery is what we mean by "right" when we're considering moral arguments (by which I don't mean Nesov's technical term "moral arguments", but arguments such as "total utilitarianism is wrong (i.e., not right) because it leads to the following conclusions ..., which are obviously wrong"). If human minds are computations (which I think they almost certainly are), then the way that a human mind processes such arguments can be viewed as an algorithm (which may differ from individual to individual). Suppose we could somehow abstract this algorithm away from the rest of the human, and consider it as, say, a program that when given an input string consisting of a list of moral arguments, thinks them over, comes to some conclusions, and outputs those conclusions in the form of a utility function.
If my understanding is correct, what this algorithm means by "right" depends on the details of how it works. Is it more like calculator A or B? It may be that the way we respond to moral arguments doesn't correspond to anything except how we respond to moral arguments. For example, it might be totally random, or depend in a chaotic fashion on trivial details of the wording or ordering of its input. This would be case B, where "right" can't really be said to mean anything, at least as far as the part of our minds that considers moral arguments is concerned. Or it may be case A, where the way we process "right" corresponds to some abstract mathematical object or some other kind of external object, in which case I think "right" can be said to mean that external object.
Since we don't know which is the case yet, I think we're forced to say that we don't currently know what "right" means.