Adding some structure to this hypothetical: at time t=0, Bob and Daisy have certain priors for their beliefs on sorcery, not yet adjusted for this argument. Bob holds Position 1, weakly, and Daisy holds Position 3, strongly.
I'll call your argument A0.
At time t=1, Bob and Daisy are both made aware of A0 and its implications for adjusting their beliefs. They update; Bob's belief in 1 increases, and Daisy's belief in 3 decreases.
More arguments:
A1: If Position 1 is true, then Bright is likely to cause you to increase your belief in him, therefore increasing your belief in Bright is evidence for Position 1.
A1': Corollary: Decreasing your belief in Bright is evidence against Position 1.
A2: If Position 3 is true, then Dark is likely to cause you to decrease your belief in him, therefore decreasing your belief in Dark is evidence for Position 3.
A2': Corollary: Increasing your belief in Dark is evidence against Position 3.
At time t=2, Bob and Daisy are exposed to A1 and A2, along with their corollaries A1' and A2'. If they believe these, they should both increase credence in Positions 1 and 3 (following A1 and A2), then increase credence in Position 1 and decrease it in Position 3 (following A1 and A2'), then follow A1 and A2 again, and so on. This might be difficult to resolve, as you mention in your first question.
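To see why this loop is pathological, here is a toy simulation (the starting credence and the likelihood ratio are both invented): if Bob keeps treating each of his own A1-driven belief increases as fresh evidence, applying the same fixed likelihood ratio every round, his credence ratchets monotonically toward certainty.

```python
# Toy model of the A1 feedback loop; all numbers are illustrative.
p1 = 0.3      # credence in Position 1 after updating on A0 (assumed)
ratio = 2.0   # likelihood ratio A1 is assumed to contribute per round

history = [p1]
for _ in range(10):
    odds = p1 / (1 - p1) * ratio  # multiply the odds by the ratio again
    p1 = odds / (1 + odds)
    history.append(p1)

# The credence only ever increases, approaching 1: the loop manufactures
# certainty out of nothing, a hint that something has gone wrong.
```

That the sequence never settles anywhere informative is one way to see why the loop's evidence must be rejected.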
However, there is a simple reason to reject A1 and A2: their influence is entirely screened off. Bob and Daisy know why they revised their beliefs: because of the valid argument A0. Unless Bright and Dark can affect the apparent validity of logical arguments (in which case your thoughts can't be trusted anyway), A0 is valid independently of which position is true. So while the update tempts them to begin a feedback loop, it stops after a single iteration.
There is a valid reason they might want to continue a weaker loop.
A3: That you have encountered A0 is evidence for the sorcerers whose goals are served by having you be influenced by A0.
A3': That you have encountered A3 is evidence for the sorcerers whose goals are served by having you be influenced by A3.
A3'': That you have encountered A3' is evidence for the sorcerers whose goals are served by having you be influenced by A3'.
etc.
But this is only true if they didn't reason out A0 or A3 for themselves, and even then A3', A3'', etc. should be considered obvious implications of A3 for a well-reasoned thinker. (In fact, A3 is properly more like "That you have encountered a valid argument is evidence for the sorcerers whose goals are served by having you be influenced by that argument.") So that adds at most one more layer, barring silly Tortoise-and-Achilles arguments.
Given all that, for your second question, you still should take their beliefs into account, but possibly to a slightly lesser degree.
A point I'm still confused on: when you, following A0, update on their A0-adjusted beliefs, are you double-counting A0? If so, you should update to a lesser degree. But is that so?
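The double-counting worry can at least be made numerically concrete (all numbers invented): applying the same likelihood ratio twice overshoots relative to the single correct update, so if Bob's reported belief already has A0 folded in, re-applying A0 yourself would inflate your posterior.

```python
def update(p, ratio):
    """One Bayesian update of probability p by a likelihood ratio."""
    odds = p / (1 - p) * ratio
    return odds / (1 + odds)

p0 = 0.2                    # invented prior
once = update(p0, 3.0)      # counting A0's evidence once
twice = update(once, 3.0)   # counting the same evidence a second time

# `twice` overshoots `once`: same evidence, double the update.
```

Whether updating on their stated beliefs actually re-imports A0, rather than just their other evidence, is exactly the open question.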
I don't think I completely follow everything you say, but let's take a concrete case. Suppose I believe that Dark is extremely powerful and clever and wishes to convince me he doesn't exist. I think you can conclude from this that if I believe he exists, he can't possibly exist (because he'd find a way to convince me otherwise), so I conclude he can't exist (or at least the probability is very low). Now I've convinced myself he doesn't exist. But maybe that's how he operates! So I have new evidence that he does in fact exist. I think there's some sort of p...
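One way to tame the oscillation described here is to treat it as a fixed-point problem. A toy sketch (the response curve `f` and its `strength` parameter are entirely invented): if my credence in Dark is p, the "he would have hidden himself" argument maps it to a new credence f(p) that is high when p is low and low when p is high; iterating, the belief settles at the fixed point rather than flip-flopping forever.

```python
def f(p, strength=0.9):
    # Invented response curve: confident belief gets pushed down
    # ("he'd have convinced me otherwise"), confident disbelief gets
    # pushed up ("maybe that's how he operates").
    return strength * (1 - p) + (1 - strength) * p

p = 0.1
for _ in range(200):
    p = f(p)

# With this (contractive) curve the iteration converges to the fixed
# point p = 0.5 instead of oscillating indefinitely.
```

This only shows that a stable reflective credence can exist under such a model, not what the right model is.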
This article is going to be in the form of a story, since I want to lay out all the premises in a clear way. There's a related question about religious belief.
Let's suppose that there's a country called Faerie. I have a book about this country which describes all people living there as rational individuals (in a traditional sense). Furthermore, it states that some people in Faerie believe that there may be some individuals there known as sorcerers. No one has ever seen one, but they may or may not interfere in people's lives in subtle ways. Sorcerers are believed to be such that there can't be more than one of them around and they can't act outside of Faerie. There are 4 common belief systems present in Faerie:
This is completely exhaustive, because everyone believes there can be at most one sorcerer. Of course, some individuals within each group have different ideas about what their sorcerer is like, but within each group they all absolutely agree with their dogma as stated above.
Since I don't believe in sorcery, a priori I assign very high probability for case 4, and very low (and equal) probability for the other 3.
I can't visit Faerie, but I am permitted to do a scientific phone poll. I call some random person, named Bob. It turns out he believes in Bright. Since P(Bob believes in Bright | case 1 is true) is higher than the unconditional probability of that belief, I believe I should adjust the probability of case 1 up, by Bayes' rule. Does everyone agree? Likewise, the probability of case 3 should go up, since disbelief in Dark is evidence for the existence of Dark in exactly the same way, although perhaps to a smaller degree. Cases 2 and 4 then have to lose some probability, since the total must add up to 1. If I then call a second person, Daisy, who turns out to believe in Dark, I should adjust all the probabilities in the opposite direction. Note that I am not asking either of them about the actual evidence they have, just what they believe.
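The poll update can be sketched with Bayes' rule directly. All numbers below are invented: the priors just encode "very high for case 4, low and equal for the rest", and the likelihoods assume belief in Bright is most probable if Bright exists (case 1) and somewhat elevated if a hiding Dark exists (case 3).

```python
# Priors over the four cases (assumed for illustration).
priors = {1: 0.01, 2: 0.01, 3: 0.01, 4: 0.97}

# Assumed likelihoods P(Bob believes in Bright | case i).
likelihood_bright = {1: 0.8, 2: 0.1, 3: 0.3, 4: 0.1}

def bayes_update(priors, likelihood):
    """Return the posterior P(case | evidence) by Bayes' rule."""
    unnorm = {c: priors[c] * likelihood[c] for c in priors}
    z = sum(unnorm.values())
    return {c: p / z for c, p in unnorm.items()}

posterior = bayes_update(priors, likelihood_bright)
# Cases 1 and 3 gain probability; cases 2 and 4 lose it.
```

With any likelihoods of this qualitative shape, the direction of the shift matches the argument in the text, whatever the exact numbers.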
I think this is straightforward so far. Here's the confusing part: it turns out that both Bob and Daisy are themselves aware of this argument. So, Bob says, one of the reasons he believes in Bright is that his belief is itself positive evidence for Bright's existence. And Daisy believes in Dark despite that belief being evidence against his existence (presumably because some other evidence she has is overwhelming).
Here are my questions:
I am looking forward to your thoughts.