we are not justified in assigning probability 1 to the belief that 'A=A' or to the belief that 'p -> p'? Why not?
Those are only beliefs that are justified given certain prior assumptions and conventions. In another system, such statements might not hold. So, from a meta-logical standpoint, it is improper to assign probabilities of 1 or 0 to personally held beliefs. However, the functional nature of the beliefs does not itself figure in how the logical operators function, particularly in the case of necessary reasoning. Necessary reasoning is a brick wall that cannot be overcome by alternative belief, especially when one is working under specific assumptions. If one denies the assumptions and conventions one has set for oneself, one is no longer working within the space of those assumptions or conventions. Thus, within those specific conventions, those beliefs would indeed hold to the nature of deduction (be either absolutely true or absolutely false), but beyond that they may not.
"T is true; therefore, evidence that it is false is false. This constitutes invalid reasoning, because it rules out new knowledge that may in fact render it truly false."
Actually, I think if "I know T is true" means you assign probability 1 to T being true, and if you were ever justified in doing that, then you are justified in assigning probability 1 to the evidence being misleading and not even worth taking into account. The problem is, for all we know, one is never justified in assigning probability 1 to any belief. So I'd say the problem is a wrong question.
Edited: I meant probability 1 of misleading evidence, not 0.
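On the formal point about probability 1: under Bayes' rule, a prior of exactly 1 is unrevisable, since no likelihood ratio can move it. A minimal sketch (the particular numbers are illustrative assumptions, not anything from the puzzle):

```python
def posterior(prior_t, p_e_given_t, p_e_given_not_t):
    """Bayes' rule: P(T|E) = P(E|T)P(T) / (P(E|T)P(T) + P(E|~T)P(~T))."""
    num = p_e_given_t * prior_t
    return num / (num + p_e_given_not_t * (1.0 - prior_t))

# A prior of 1 ignores even strongly contrary evidence:
print(posterior(1.0, 0.001, 0.999))    # -> 1.0
# Any prior short of 1 is moved substantially by that same evidence:
print(posterior(0.999, 0.001, 0.999))  # -> 0.5
```

This is why assigning probability 1 licenses dismissing all contrary evidence as misleading: the update is a no-op by construction.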
The presumption of the claim "I know T is true" (and that evidence that it is false is false) is false precisely in the case that the reasoning used to show that T (in this case a theorem) is true is invalid. Were T not a theorem, then probabilistic reasoning would in fact apply, but it does not. (And since it doesn't, it is irrelevant to pursue that path. In short, the fact that T is a theorem should lead us to understand that the truth of the premisses is not the issue at hand, so probabilistic reasoning need not apply, and there is no issue of T's being probably true or false.) Furthermore, it is completely wide of the mark to suggest that one should apply this or that probability to the claims in question, precisely because the problem concerns deductive reasoning. All the non-deductive aspects of the puzzles are puzzling distractions at best. In essence, if a counterargument comes along demonstrating that T is false, then it necessarily involves demonstrating that invalid reasoning was somewhere committed in someone's having arrived at the (fallacious) truth of T. (Valid reasoning necessarily leads to a true conclusion given true premisses.) Hence, one need not be concerned with the epistemic standing of the truth of T, since it would have been clearly demonstrated to be false. And to be committed to false statements as being not-false would be absurd, so it would alone be proper to aver that one has been defeated in having previously been committed to the truth of T, despite the fact that that commitment was fundamentally invalid. Valid reasoning is always valid, no matter what one may think of the reasoning; and one may invalidly believe in the validity of an invalid conclusion. Such is human fallibility.
So I'd say the problem is a wrong question.
No, I think it is a good question, and it is easy to be led astray by not recognizing where precisely the problem fits in logical space, if one isn't being careful. Amusingly (if not disturbingly), some of the most up-voted posts are precisely those that get this wrong and thus fail to see the nature of the problem correctly. However, the way the problem is framed does lend itself to misinterpretation, because a demonstration of the falsity of T (namely, that it is invalid that T is true) should not be treated as a premiss in another apodosis; a valid demonstration of the falsity of T is itself a deductive conclusion, not a protasis proper. (In fact, the way it is framed, the claim ~T is equivalent to F, such that the claim that [F, P1, P2, and P3] imply ~T is really a circular argument, but I was being charitable in my approach to the puzzles.) But oh well.
Puzzle 1
- RM is irrelevant.
The concept of "defeat", in any case, is not necessarily silly or inapplicable to a particular (game-based) understanding of reasoning, which has always been known to be discursive; so I do not think it is inadequate as an autobiographical account. But it is not how one characterizes what is ultimately a false conclusion that was previously held true. One need not commit oneself to a particular choice either in the case of "victory" or "defeat", which are not themselves choices to be made.
Puzzle 2
- Statements ME and AME are both false generalizations. One cannot know evidence for (or against) a given theorem (or apodosis from known protases) in advance based on the supposition that the apodosis is true, for that would constitute a circular argument. I.e.:
T is true; therefore, evidence that it is false is false. This constitutes invalid reasoning, because it rules out new knowledge that may in fact render T truly false. It is also false to suppose that a human being is always capable of reasoning correctly under every state of knowledge, or even that they grasp a particular body of information perfectly enough to reason validly from it.
- MF is also false as a generalization.
In general, one should not be concerned with how "misleading" a given amount of evidence is. To reason on those grounds, one could suppose a given bit of evidence is always "misleading" because one "knows" that the contrary of what that evidence suggests is always true. (The fact that there are people who do in fact "reason" this way, as the superabundance of historical examples attests, continuing to believe in a false conclusion because they "know" that the evidence against it is false or "misleading", does not at all validate this mode of reasoning; rather, it points up certain psychological proclivities that suggest how fallacious their reasoning may be. But this would not itself show that the course of necessary reasoning is incorrect, only that those who attempt to exercise it do so very poorly.) In the case that one is dealing with a theorem, it must be true, provided that the reasoning is in fact valid, for theorematic reasoning is based on any axioms of one's choice (even though it is not corollarial). !! However, if the apodosis concerns a statement of evidence, there is room for falsehood, even if the reasoning is valid, because the premisses themselves are not guaranteed to be always true.
The proper attitude is to understand that one's reasoning, prior to exposure to evidence or reasoning from another subject (or one's own further inquiry), may in fact be wrong, however necessary the reasoning may seem. No amount of evidence is sufficient for a conclusion's absolute truth, no matter how valid the reasoning is. Note that evidence here is indeed characteristic of observational criteria, but the reasoning based thereon is not properly deductive, even if it is essentially necessary in character. Note also that deductive logic is concerned with reasoning to true conclusions under the assumption that the relevant premisses are true; if one is taking into account the possibility of premisses which may not always be true, then such reasoning is probabilistic (and necessary) reasoning.
!! This, in effect, resolves puzzle 1. Namely, if the theorem is derived by valid necessary reasoning, then it is true. If the reasoning isn't valid, then it is false. If "defeat" consists in being shown that one's initial stance was incorrect, then yes, it is essential that one take the stance of having been defeated. Note that puzzle 2 is solved in fundamentally the same manner, despite the distracting statements ME, AME, and MF, on account of the nature of theorems. Probabilities nowhere come into account, and the employment of Bayesian reasoning is an unnecessary complication. If one does not take the stance of having been defeated, then there is no hope for that person to be convinced of anything of a logical (necessary) character.
I wonder whether or not there might be a prime example of the game of general expertise par excellence out there, one that touches on many domains simultaneously...
Probably not. While in video game design there are general competencies you can rely on, there are two obstacles. First, mutually exclusive challenges: fast-paced FPS games like Quake 3 cannot be played like slower-paced FPS games like Call of Duty, and players who attempt to transfer their skills without understanding this don't succeed. Second, balance problems, where some game elements overshadow others, as in Alien Swarm, where there are only five effective weapons even though there are fifteen other options, some of which are dismissed unfairly because they are introduced to players who haven't yet seen a need for the skills they demand. Both of these factors mean that challenges and tradeoffs go hand in hand in your game's design.
That all said, people do try. Spore is the readiest example of this to me: the mishmash of different games doesn't really work, and the way they tried to address the challenge-balancing issues means that four fifths of the game design is effectively useless, but it's an instructive game nonetheless.
Excuse me for waxing over-philosophical in my last message, since I said "might be" rather than "currently is". To be clear, I'm referring to the practical possibility (if not the straightforward logical possibility) of such a game existing.
I suppose, in any case, that the form in which such a game has the greatest chance of meeting that (rather vague) designation would involve exhibiting the most generality within its gameplay, such that the cognitive demands put upon users would not involve specific skills or skill acquisition per se, but rather a kind of mystifying push-without-training-wheels that permits the mind to shape itself however it sees fit to accomplish the task - which then creates problems for users by forcing them to constantly modify their adopted strategy or preferred tactics.
One such game that comes to mind as a (tentative) example is Dual N-Back (or related variants), which does not directly demand any specific strategy or conceptual framework for a user to take it on. One has no specific guidance on how to tackle it, but when the user gets the hang of it, the game naturally changes the rule(s) or framework, forcing the user to adapt once more. Such a game most certainly involves expertise (a lot of time spent playing it and getting better).
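For concreteness, the core mechanic is simple to state: a stimulus stream is presented one item at a time, and the player must flag whenever the current item matches the one n steps back; in the dual variant, two independent streams (position and sound) run simultaneously and are judged separately. A minimal sketch of the match rule (the sample streams here are made up for illustration):

```python
def nback_targets(stream, n):
    """Indices at which the stimulus matches the one n steps back."""
    return [i for i in range(n, len(stream)) if stream[i] == stream[i - n]]

# Dual n-back runs two such streams at once, scored independently:
positions = [1, 3, 1, 3, 2, 3]
sounds = ['a', 'b', 'b', 'b', 'a', 'c']
print(nback_targets(positions, 2))  # -> [2, 3, 5]
print(nback_targets(sounds, 2))     # -> [3]
```

Raising n whenever the player's hit rate climbs is what produces the constant rule-shifting and re-adaptation described above.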
But, yeah, with most, if not all, generally recognized games, it is pretty clear that the kinds of skills demanded of a user make it quite difficult to accommodate other kinds of skills at the same time and still keep such a game feasible.
I think the main issue here is that expertise must be conceptualized with respect to a particular activity or set of activities in order for it to maintain its essential meaning. The nature of expertise is also restricted to the specific range of tools the brain embodies (as in "embodied cognition"); in other words, it is not the hand that knows what to type, but rather the keyboard that knows what to type. To be clear, my cognitive capacity is effectively extended and reshaped by the interaction with the keyboard, so in effect the nature of the expertise will be limited specifically to the final cause (in the philosophical sense) of the activity itself. I like to think of it as the mind further approximating the function of the game, or activity, over time, serving as a kind of analogy to the ever-accumulating expertise therein.
Taking the example of chess versus a modern-day computer-enhanced strategy game, the modes of embodiment are vastly different, and so the kinds of expertise to be expected should naturally diverge. However, I would not be so pollyannaish as to assert that playing StarCraft 2 (or chess) would be "really useful", unless you're playing for money to further some specific goal outside of the game itself. That is going a bit too far, in my opinion. We already know that the nature of expertise is such that it only operates at the level of the activity one is engaged in, and will not generalize (or transfer) far from that domain of activity. For instance, the expertise in knowing the layout of a keyboard and being able to type commands without a second thought (being constantly honed by a game that demands it) will transfer to the tasks (of other games) that require the same input on a keyboard (and will differentially benefit from those quick reflexes), but the specific tactics and techniques learned in-game will generally not find much use beyond that game, and I do believe that is what we're getting at with a game like SC2 insofar as "expertise" is a concern here. Similarly with chess: one might very well have excellent reflexes, honed in certain other tasks, and know many strategies and techniques for other things, but they won't apply to the space of chess, and vice versa from chess to other activities. (And we already know that typical memorization techniques used in chess really don't help with memorizing anything else.)
Having said all that, I wonder whether or not there might be a prime example of the game of general expertise par excellence out there, one that touches on many domains simultaneously... Perhaps the Glass Bead Game? Ah, never mind. But, in all seriousness, the way of the game is probably the only way we'll ever find out if such a thing exists and will permit the mind to approximate the function of life all the more perfectly.
By the way, I don't know why the researchers in the article think there hasn't been such a "satellite view" of expertise before, particularly in the case of chess. Hasn't anyone told them of the Chess Tactics Server? ( http://chess.emrald.net/ ) Chumps to champs aplenty there.
I think I see your point, but if you allow for the possibility that the original deductive reasoning is wrong, i.e. deny logical omniscience, don't you need some way to quantify that possibility? And in the end, wouldn't that mean treating the deductive reasoning itself as Bayesian evidence for the truth of T?
Unless you assume that you can't make a mistake in the deductive reasoning, T being a theorem of the premises is a hypothesis to be established within the Bayesian framework, with Bayesian evidence, not anything special.
And if you do assume that you can't make a mistake in the deductive reasoning, I think there's no sense in paying attention to any contrary evidence.
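One way to make that quantification concrete: model "a proof of T passed my checking" as evidence E, with some small assumed probability that an unsound argument would nonetheless slip past the checker. A hedged sketch (the error rate and prior are illustrative assumptions, and the model simplifies by supposing a sound proof always passes):

```python
def credence_after_proof(prior_t, p_pass_if_unsound):
    """P(T | proof passes checking), assuming a sound proof always passes
    and an unsound one slips past with probability p_pass_if_unsound."""
    num = 1.0 * prior_t
    return num / (num + p_pass_if_unsound * (1.0 - prior_t))

# Fallible checking leaves credence high but strictly short of 1:
print(credence_after_proof(0.5, 0.01))  # -> ~0.990
# Only an assumed zero error rate recovers certainty:
print(credence_after_proof(0.5, 0.0))   # -> 1.0
```

On this reading, the proof is strong but defeasible evidence for T, which is exactly what would make later contrary evidence worth attending to.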
I want to be very clear here: a valid deductive reasoning can never be wrong (i.e., invalid); only those who exercise such reasoning are liable to error. This does not pertain to logical omniscience per se, because we are not here concerned with the logical coherence of the total collection of beliefs a given person (like the one in the example) might possess; we are only concerned with T. And humans, in any case, do not always engage in deduction properly, due to many psychological, physical, etc. limitations.
No, the possibility that someone will commit an error in deductive reasoning is in no need of quantification. That is only to increase the complexity of the puzzle. And by the razor, what is done with less is in vain done with more.
To reiterate, an invalid deductive reasoning is not a deduction with which we should concern ourselves. The prior case of T, having been shown false, is in fact false, such that we should no longer elevate it to the status of a logical deduction. By the measure of its invalidity, we know full well the validity of the deduction ~T. In other words, to make a mistake in deductive reasoning is not to reason deductively!
This is where the puzzle introduced needless confusion. There was no real evidence. There was only the brute fact of the validity of ~T as introduced by a person who showed the falsity/invalidity of T. That is how the puzzles' solution comes to a head – via a clear understanding of the nature of deductive reasoning.