
ec429 comments on Syntacticism - Less Wrong Discussion

-3 Post author: ec429 23 September 2011 06:49AM


Comment author: ec429 23 September 2011 07:12:36PM 0 points [-]

I agree with the main tenets of your view - rejecting the platonic realm of mathematical truths, instead founding mathematics rigorously and finitely on formal systems - but you do not hold fast to it

This is because you have misconstrued my view. My philosophy of syntacticism is not formalism, nor is it a straight rejection of Platonism. Instead it is a Platonic formalism, in which the Platonic ideals are of the form "thus-and-such is derivable from these axioms with these deduction rules", except that any attempt to talk directly and formally about those Platonic ideals fails (because that involves application of a formal system to a formal system).

It is not possible to consider Godel's theorem with only a direct understanding of PA, you need an implemented one.

The point of my post is that 'implemented PA' in fact implements "PA", and there can be no rigorous proof that the referent of "PA" is PA (because such a rigorous proof in fact ends with ""PA" is PA", and how do we know that this new system's "PA" is PA?)

Inside our implemented PA we construct a predicate and then prove in ZFC that it satisfies a particular property defined in terms of PA.

But how do we prove that (ZFC proves statement about PA) ⇔ (statement about PA)? Only by appealing to a higher-order theory. After all, I can define a theory T by the axioms "1. PA is consistent. 2. PA proves PA is consistent.", where (if necessary) PA is defined by listing PA's axioms, and yet if I were to conclude from this that PA proves PA's consistency without thereby becoming inconsistent, you would scream blue Gödelian murder, because you have a proof in ZFC that if PA proves PA is consistent then PA is not consistent. But I have a proof in T that PA is consistent and proves PA consistent (a trivial one, in fact). So how can you justify privileging ZFC's referent of "PA" over T's referent of "PA"? Only by appeal to some meta-theory - or by informal justification.

I have no idea what real means here.

It is a fact that agents implementing the same deduction rules and starting from the same axioms tend to converge on the same set of theorems. Therefore there is some sense in which the theorems are inherent in the (axioms + deduction rules): there is a truth about what those (axioms + deduction rules) lead to, and that truth exists outside of any implementation. But that truth is not like a rock, it is not a physical object - so it must be a metaphysical one. Thus, Platonism.

But then why privilege any particular set of rules as having Platonic existence? Thus, all possible formal systems have Platonic existence.

The "magical spirit realm" does not say that "2+2=4". But it does say that "If you apply the rules of PA to evaluate the expression 2+2, a failure to converge on 4 implies a failure to follow the rules of PA" (which failure might, for instance, be a cosmic ray hitting a neuron).
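That deterministic character of the rules can be sketched concretely. Here is a minimal Peano-style evaluation of 2+2 in Python; the tuple encoding of numerals is an arbitrary illustrative choice, not anything in PA itself:

```python
# A minimal sketch of applying PA-style rules to evaluate 2+2.
# The encoding (zero as the empty tuple, successor as one more
# layer of nesting) is chosen purely for illustration.
ZERO = ()

def succ(n):
    """S(n): wrap one more layer around n."""
    return (n,)

def add(a, b):
    """Peano addition: a + 0 = a;  a + S(b) = S(a + b)."""
    if b == ZERO:
        return a
    return succ(add(a, b[0]))

TWO = succ(succ(ZERO))
FOUR = succ(succ(succ(succ(ZERO))))

# Any faithful application of the rules converges on the same numeral:
assert add(TWO, TWO) == FOUR
```

If the assertion ever failed, that would indicate a failure to follow the rules (a bug, a cosmic-ray bitflip), not a different arithmetic.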

Do I make sense now?

Comment author: David_Allen 23 September 2011 08:13:21PM 1 point [-]

Therefore there is some sense in which the theorems are inherent in the (axioms + deduction rules): there is a truth about what those (axioms + deduction rules) lead to, and that truth exists outside of any implementation.

You are experiencing a mind projection fallacy.

The theorems don't exist unless an implementation produces them and once produced they only exist within a context that can represent them.

In the same way, the truth you refer to is generated by and exists within your mind. It has no existence outside of that implementation.

Comment author: ec429 23 September 2011 08:22:39PM 1 point [-]

If that is so, then how come others tend to reach the same truth? In the same way that there is something outside me that produces my experimental results (The Simple Truth), so there is something outside me that causes it to be the case that, when I (or any other cognitive agent) implements this particular algorithm, this particular result results.

It is not a mind-projection fallacy, any more than "the sheep control my pebbles" is a mind-projection fallacy. It's just that it's operating one meta-level higher.

Comment author: Zetetic 24 September 2011 06:43:44AM *  0 points [-]

If that is so, then how come others tend to reach the same truth? In the same way that there is something outside me that produces my experimental results (The Simple Truth), so there is something outside me that causes it to be the case that, when I (or any other cognitive agent) implements this particular algorithm, this particular result results.

People have very similar brains, and I'd bet that all of the people whose ideas are cognitively available to you shared a similar cultural experience (at least in terms of what intellectual capital was/is available to them).

Viewing mathematics as something that is at least partially a reflection of the way that humans tend to compress information, it seems like you could argue that there is an awful lot of stuff to unpack when you say "2+2 = 4 is true outside of implementation", as well as in the term "cognitive agent".

What is clear to me is that when we set up a physical system (such as a Von Neumann machine, or a human who has been 'set up' by being educated and then asked a certain question) in a certain way, some part of the future state of that system is (say with 99.999% likelihood) recognizable to us as output (perhaps certain patterns of light resonate with us as "the correct answer", perhaps some phonemes register in our cochlea and we store them in our working memory and compare them with the 'expected' phonemes). There appears to be an underlying regularity, but it isn't clear to me what the true reduction looks like! Is the computation the 'bottom level'? Do we aim to rephrase mathematics in terms of some algorithms that are capable of producing it? Are we then to take computation as "more fundamental" than physics?

Does this make sense?

Comment author: ec429 24 September 2011 07:12:55AM 0 points [-]

What is clear to me is that when we set up a physical system (such as a Von Neumann machine, or a human who has been 'set up' by being educated and then asked a certain question) in a certain way, some part of the future state of that system is (say with 99.999% likelihood) recognizable to us as output (perhaps certain patterns of light resonate with us as "the correct answer")

But note that there are also patterns of light which we would interpret as "the wrong answer". If arithmetic is implementation-dependent, isn't it a bit odd that whenever we build a calculator that outputs "5" for 2+2, it turns out to have something we would consider to be a wiring fault (so that it is not implementing arithmetic)? Can you point to a machine (or an idealised abstract algorithm, for that matter) which a reasonable human would agree implements arithmetic, but which disagrees with us on whether 2+2 equals 4? Because, if arithmetic is implementation-dependent, you should be able to do so.

Are we then to take computation as "more fundamental" than physics?

Yes! (So long as we define computation as "abstract manipulation-rules on syntactic tokens", and don't make any condition about the computation's having been implemented on any substrate.)

Comment author: Zetetic 24 September 2011 03:52:47PM 0 points [-]

But note that there are also patterns of light which we would interpret as "the wrong answer".

I did note that, maybe not explicitly but it isn't really something that anyone would expect another person not to consider.

isn't it a bit odd that whenever we build a calculator that outputs "5" for 2+2, it turns out to have something we would consider to be a wiring fault (so that it is not implementing arithmetic)?

It doesn't seem odd at all: we have an expectation of the calculator, and if it fails to fulfill that expectation then we start to doubt that it is, in fact, what we thought it was (a working calculator). This refocuses the issue on us and the mechanics of how we compress information: we expected information 'X' at time t but instead received 'Y', and we decide that something is wrong with our model (and then aim to fix it by figuring out whether it is indeed a wiring problem, a bit-flip, a bug in the programming of the calculator, or some electromagnetic interference).

Can you point to a machine (or an idealised abstract algorithm, for that matter) which a reasonable human would agree implements arithmetic, but which disagrees with us on whether 2+2 equals 4?

No. But why is this? Because if (a) [a reasonable human would agree implements arithmetic] and (b) [which disagrees with us on whether 2+2 equals 4] both hold, then (c) [the human decides she was mistaken and needs to fix the machine]. If the human can alter the machine so as to make it agree that 2+2 = 4, then and only then will the human feel justified in asserting that it implements arithmetic.

The implementation is decidedly correct only if it demonstrates itself to be correct; only if it fulfills our expectations of it. With a calculator, we are looking for something that allows us to extend our ability to infer things about the world. If I know that a car has a mass of 1000 kilograms and a speed of 200 kilometers per hour, then I can determine whether it will be able to topple a wall, given some number that encodes the amount of force the wall can withstand. I compute the output and compare it to the data for the wall.

Because, if arithmetic is implementation-dependent, you should be able to do so.

I tend to think it depends on a human-like brain that has been trained to interpret '2', '+' and '4' in a certain way, so I don't readily agree with your claim here.

Yes! (So long as we define computation as "abstract manipulation-rules on syntactic tokens", and don't make any condition about the computation's having been implemented on any substrate.)

I'll look over it, but given what you say here I'm not confident that it won't be an attempt at a resurrection of Platonism.

Comment author: ec429 24 September 2011 07:25:32PM 0 points [-]

It doesn't seem odd at all, we have an expectation of the calculator, and if it fails to fulfill that expectation then we start to doubt that it is, in fact, what we thought it was (a working calculator).

Except that if you examine the workings of a calculator that does agree with us, you're much, much less likely to find a wiring fault (that is, to find that it's implementing a different algorithm).

if (a) [a reasonable human would agree implements arithmetic] and (b) [which disagrees with us on whether 2+2 equals 4] both hold, then (c) [the human decides she was mistaken and needs to fix the machine]. If the human can alter the machine so as to make it agree that 2+2 = 4, then and only then will the human feel justified in asserting that it implements arithmetic.

If the only value for which the machine disagrees with us is 2+2, and the human adds a trap to detect the case "Has been asked 2+2", which overrides the usual algorithm and just outputs 4... would the human then claim they'd "made it implement arithmetic"? I don't think so.

I'll try a different tack: an implementation of arithmetic can be created which is general and compact (in a Solomonoff sense) - we are able to make calculators rather than Artificial Arithmeticians. Clearly not all concepts can be compressed in this manner, by a counting argument. So there is a fact-of-the-matter that "these {foo} are the concepts which can be compressed by thus-and-such algorithm" (For instance, arithmetic on integers up to N can be formalised in O(log N) bits, which grows strictly slower than O(N); thus integer arithmetic is compressed by positional numeral systems). That fact-of-the-matter would still be true if there were no humans around to implement arithmetic, and it would still be true in Ancient Rome where they haven't heard of positional numeral systems (though their system still beats the Artificial Arithmetician).
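The O(log N) versus O(N) comparison can be made concrete with a toy sketch (Python; both encodings are chosen here purely for illustration, a positional decimal encoding standing in for the calculator and a unary one for the Artificial Arithmetician):

```python
# Toy comparison of encoding sizes: positional (O(log N) symbols)
# versus unary (O(N) symbols). Encodings chosen for illustration only.
def positional_len(n):
    # number of decimal digits: grows like log10(N)
    return len(str(n))

def unary_len(n):
    # one token per unit: grows like N
    return n

for n in (10, 10**3, 10**6):
    assert positional_len(n) < unary_len(n)

# At a million, the positional system needs 7 symbols where
# unary needs a million:
assert positional_len(10**6) == 7
assert unary_len(10**6) == 10**6
```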

I'll look over it, but given what you say here I'm not confident that it won't be an attempt at a resurrection of Platonism.

What's wrong with resurrecting (or rather, reformulating) Platonism? Although, it's more a Platonic Formalism than straight Platonism.

Comment author: Zetetic 24 September 2011 09:34:17PM *  0 points [-]

If the only value for which the machine disagrees with us is 2+2, and the human adds a trap to detect the case "Has been asked 2+2", which overrides the usual algorithm and just outputs 4... would the human then claim they'd "made it implement arithmetic"? I don't think so.

Well, this seems a bit unclear. We are operating under the assumption that the setup looks very similar to a correct setup, close enough to fool a reasonable expert. So while the previous fault would cause some consternation and force the expert to lower his priors for "this is a working calculator", it doesn't follow that he wouldn't make the appropriate adjustment and then (upon seeing nothing else wrong with it) decide that it is likely to resume working correctly.

That fact-of-the-matter would still be true if there were no humans around to implement arithmetic, and it would still be true in Ancient Rome where they haven't heard of positional numeral systems (though their system still beats the Artificial Arithmetician).

Yes, it would be true, but what exactly is it that 'is true'? The human brain is a tangle of probabilistic algorithms playing various functional roles. It is "intuitively obvious" that, given sufficient background knowledge of all of the components involved, there should be a Solomonoff-irreducible (up to some constant) program that can be implemented: Boolean circuits arranged on some substrate in such and such a way that they "compute" "arithmetic operations on integers" (really, the substrate performs some fancy electrical acrobatics, later interpreted into a form we can perceive as output, such as a sequence of pixels on some manner of screen arranged to resemble the numerical output we want). And it is a physical fact about the universe that things arranged in such a way lead to such an outcome.

It is not obvious that we should then reverse the matter and claim that we ought to project a computational Platonism on to reality any more than logical positivist philosophers should have felt justified in doing that with mathematics and predicate logic a hundred years ago.

It is clear to me that we can perceive 'computational' patterns in top level phenomena such as the output of calculators or mental computations and that we can and have devised a framework for organizing the functional role of these processes (in terms of algorithmic information theory/computational complexity/computability theory) in a way that allows us to reason generally about them. It is not clear to me that we are further justified in taking the epistemological step that you seem to want to take.

I'm inclined to think that there is a fundamental problem with how you are approaching epistemology, and you should strongly consider looking into Bayesian epistemology (or statistical inference generally). I am also inclined to suggest that you look into the work of C.S. Peirce, and E.T. Jaynes' book (as was mentioned previously and is a bit of a favorite around here; it really is quite profound). You might also consider Judea Pearl's book "Causality"; I think some of the material is quite relevant and it seems likely to me that you would be very interested in it.

ETA: To clarify, I'm not attacking the computable universe hypothesis; I think it is likely right (though I think that the term 'computable', in the broad sense in which it is often used, needs some unpacking).

Comment author: David_Allen 24 September 2011 12:56:19AM *  0 points [-]

I am arguing against your concept "that truth exists outside of any implementation".

My claim is that "truth" can only be determined and represented within some kind of truth evaluating physical context; there is nothing about the resulting physical state that implies or requires non-physical truth.

As stated here

Our minds are not transparent windows unto veridical reality; when you look at a rock, you experience not the rock itself, but your mind's representation of the rock, reconstructed from photons bouncing off its surface.

To your question:

If that is so, then how come others tend to reach the same truth?

These others are producing physical artifacts such as writing or speech, which through some chain of physical interactions eventually trigger state changes in your brain. At a higher meta-level, you are taking multiple forms of observations, transforming them within your brain/mind and then comparing them... eventually concluding that "others tend to reach the same truth". Another mind with its own unique perspective may come to a different conclusion, such as "Fred is wearing a funny hat."

Your conclusion on truth is a physical state in your mind, generated by physical processes. The existence of a metaphysical truth is not required for you to come to that conclusion.

Comment author: ec429 24 September 2011 02:10:20AM 2 points [-]

Your conclusion on truth is a physical state in your mind, generated by physical processes. The existence of a metaphysical truth is not required for you to come to that conclusion.

I think a meta- has gone missing here: I can't be certain that others tend to reach the same truth (rather than funny hats), and I can't be certain that 2+2=4. I can't even be certain that there is a fact-of-the-matter about whether 2+2=4. But it seems damned likely, given Occamian priors, that there is a fact-of-the-matter about whether 2+2=4 (and, inasmuch as a reflective mind can have evidence for anything, which has to be justified through a strange loop on the bedrock, I have strong evidence that 2+2 does indeed equal 4).

That "truth" in the map doesn't imply truth in the territory, I accept. That there is no truth in the territory, I vehemently reject. If two minds implement the same computation, and reach different answers, then I simply do not believe that they were really implementing the same computation. If you compute 2+2 but get struck by a cosmic ray that flips a bit and makes you conclude "5!", then you actually implemented the computation "2+2 with such-and-such a cosmic ray bitflip".

I am not able to comprehend the workings of a mind which believes arithmetic truth to be a property only of minds, any more than I am able to comprehend a mind which believes sheep to be a property only of buckets. Your conclusion on sheep is a physical state in your mind, generated by physical processes. But the sheep still exist outside of your mind.

Comment author: David_Allen 24 September 2011 05:31:44AM 0 points [-]

Your conclusion on sheep is a physical state in your mind, generated by physical processes. But the sheep still exist outside of your mind.

Restating my claim in terms of sheep: The identification of a sheep is a state change within a context of evaluation that implements sheep recognition. So a sheep exists in that context.

Physical reality however does not recognize sheep; it recognizes and responds to physical reality stuff. Sheep don't exist within physical reality.

"Sheep" is at a different meta-level than the chain of physical inference that led to that classification.

That "truth" in the map doesn't imply truth in the territory, I accept. That there is no truth in the territory, I vehemently reject.

"Truth" is at a different meta-level than the chain of physical inference that led to that classification. There is no requirement that "truth" is in the set of stuff that has meaning within the territory.

When you look at the statement 2+2=4 you think some form of "hey, that's true". When I look at the statement, I also think some form of "hey, that's true". We can then talk and both come to our own unique conclusion that the other person agrees with us. This process does not require a metaphysical arithmetic; it only requires a common context.

For example, we both have a proximal existence within the physical universe, we have a communication channel, we both understand English, and we both understand basic arithmetic. These types of common contexts allow us to make some very practical and reasonable assumptions about what the other person means.

Common contexts allow us to agree on the consequences of arithmetic.

The short summary is that meaning/existence is formed by contexts of evaluation, and common contexts allow us to communicate. These processes explain your observations and operate entirely within the physical universe. The concept of metaphysical existence is not needed.

Comment author: ec429 24 September 2011 07:05:39AM 0 points [-]

When you look at the statement 2+2=4 you think some form of "hey, that's true". When I look at the statement, I also think some form of "hey, that's true". We can then talk and both come to our own unique conclusion that the other person agrees with us.

I think your argument involves reflection somewhere. The desk calculator agrees that 2+2=4, and it's not reflective. Putting two pebbles next to two pebbles also agrees.

Look at the discussion under this comment; I maintain that cognitive agents converge, even if their only common context is modus ponens - and that this implies there is something to be converged upon. At the least, it is 'true' that that-which-cognitive-agents-converge-on takes the value that it does (rather than any other value, like "1=0").

These processes explain your observations and operate entirely within the physical universe. The concept of metaphysical existence is not needed.

Mathematical realism also explains my observations and operates entirely within the mathematical universe; the concept of physical existence is not needed. The 'physical existence hypothesis' has the burdensome detail that extant physical reality follows mathematical laws; I do not see a corresponding burdensome detail on the 'mathematical realism hypothesis'. Thus by Occam, I conclude mathematical realism and no physical existence.

I am not sure I have answered your objections because I am not sure I understand them; if I do not, then I plead merely that it's 8AM, I've been up all night, and I need some sleep :(

Comment author: David_Allen 25 September 2011 05:34:06PM *  0 points [-]

I think your argument involves reflection somewhere. The desk calculator agrees that 2+2=4, and it's not reflective. Putting two pebbles next to two pebbles also agrees.

Agreement with statements such as 2+2=4 is not a function that desk calculators perform. It is not the function performed when you place two pebbles next to two pebbles.

Agreement is an evaluation performed by your mind from its unique position in the universe.

... this implies there is something to be converged upon.

The conclusion that convergence has occurred must be made from a context of evaluation. You make observations and derive a conclusion of convergence from them. Convergence is a state of your map, not a state of the territory.

Mathematical realism also explains my observations and operates entirely within the mathematical universe; ...

Mathematical realism appears to confuse the map for the territory -- as does scientific realism, as does physical realism.

When I refer to physical reality or existence I am only referring to a convenient level of abstraction. Space, time, electrons, arithmetic, these all are interpretations formed from different contexts of evaluation. We form networks of maps to describe our universe, but these maps are not the territory.

Gottlob Frege coined the term context principle in his Foundations of Arithmetic, 1884 (translated). He stated it as "We must never try to define the meaning of a word in isolation, but only as it is used in the context of a proposition."

I am saying that we must never try to identify meaning or existence in isolation, but only as they are formed by a context of evaluation.

When you state:

Putting two pebbles next to two pebbles also agrees.

I look for the context of evaluation that produces this result -- and I recognize that the pebbles and agreement are states formed within your mind as you interact with the universe. To believe that these states exist in the universe you are interacting with is a mind projection fallacy.

Comment author: [deleted] 23 September 2011 11:13:05PM 0 points [-]

It is a fact that agents implementing the same deduction rules and starting from the same axioms tend to converge on the same set of theorems.

Not so! From most interesting formal systems there are exponentially many possible deductions that can be made. All known agents can only make a bounded number of possible deductions. A widely believed conjecture implies that longer-lived agents could only make a polynomial number of possible deductions.

Comment author: ec429 23 September 2011 11:25:48PM 0 points [-]

Um, that's not quite what I meant... if I prove a theorem you're unlikely to prove its negation, and if I tell you I have a proof of a theorem you're likely to be able to find one. Moreover, in the latter case, if you're not able to find a proof, then either I am mistaken in my belief that I have a proof, or you haven't looked hard enough (and in principle could find a proof if you did). That doesn't mean that every agent with "I think therefore I am" will deduce the existence of rice pudding and income tax, it only means that if one agent can deduce it, the others in principle can do the same. For any formal system T, the set {X : T proves X} is recursively enumerable, no? (assuming 'proves' is defined by 'a proof of finite length exists', as happens in Gödel-numbering / PA "box")
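The recursive-enumerability claim can be sketched with a toy string-rewriting system (Python; the axiom and rewrite rules are invented for illustration and merely stand in for a real proof calculus):

```python
from collections import deque

# A toy formal system T: one axiom and two rewrite rules, invented
# purely for illustration. Breadth-first search over derivations shows
# that {X : T proves X} is recursively enumerable: every theorem
# appears after finitely many steps (though nothing bounds how many).
AXIOM = "a"

def successors(s):
    yield s + "b"                   # rule 1: append a 'b'
    yield s.replace("a", "aa", 1)   # rule 2: double the first 'a'

def enumerate_theorems(limit):
    seen, queue, theorems = {AXIOM}, deque([AXIOM]), []
    while queue and len(theorems) < limit:
        s = queue.popleft()
        theorems.append(s)
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return theorems

thms = enumerate_theorems(5)
assert thms[0] == "a"   # the axiom is the first theorem enumerated
assert "aa" in thms     # derived strings duly show up in finite time
```

The enumeration lists every theorem eventually, but it gives no feasible bound on when any particular theorem appears, which is exactly the gap the feasibility objection exploits.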

Comment author: [deleted] 23 September 2011 11:30:21PM 0 points [-]

Um, that's not quite what I meant... if I prove a theorem you're unlikely to prove its negation,

As you know, I am outside of mainstream opinion on this. But this:

and if I tell you I have a proof of a theorem you're likely to be able to find one.

is flatly wrong if P ≠ NP, and not controversially so. If I reach a point in a large search space there is no guarantee that you can reach the same point in a feasible amount of time.

For any formal system T, the set {X : T proves X} is recursively enumerable, no? (assuming 'proves' is defined by 'a proof of finite length exists', as happens in Gödel-numbering / PA "box")

Recursively enumerable sure. Not feasibly enumerable.

Comment author: ec429 24 September 2011 12:26:35AM 1 point [-]

But why should feasibility matter? Sure, the more steps it takes to prove a proposition, the less likely you are to be able to find a proof. But saying that things are true only by virtue of their proof being feasible... is disturbing, to say the least. If we build a faster computer, do some propositions suddenly become true, because we now have the computing power to prove them?

Me saying I have a proof of a theorem should cause you to update P(you can find a proof) upwards. (If it doesn't, I'd be very surprised.) Consequently, there is something common.

Similarly, no matter how low your prior probability for "PA is consistent", so long as that probability is not 0, learning that I have proved a theorem should cause you to decrease your estimate of the probability that you will prove its negation.
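The update being described is just Bayes' theorem; a toy calculation (Python, with made-up illustrative numbers) shows the direction of the shift:

```python
# Toy Bayesian update: how the report "I have proved X" shifts belief
# that X is a theorem. All the probabilities here are made-up
# illustrations, not anything claimed in the discussion.
p_theorem = 0.5            # prior: X is a theorem of the system
p_report_if_theorem = 0.3  # P(report | X is a theorem)
p_report_if_not = 0.05     # P(report | it isn't), e.g. a mistaken proof

p_report = (p_report_if_theorem * p_theorem
            + p_report_if_not * (1 - p_theorem))
posterior = p_report_if_theorem * p_theorem / p_report

assert posterior > p_theorem  # the report raises P(X is a theorem)
# In a consistent system, raising P(X is a theorem) correspondingly
# lowers the probability of ever finding a proof of not-X.
```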

Comment author: [deleted] 24 September 2011 02:42:41AM *  2 points [-]

But why should feasibility matter? Sure, the more steps it takes to prove a proposition, the less likely you are to be able to find a proof

Incidentally but importantly, lengthiness is not expected to be the only obstacle to finding a proof. Cryptography depends on this.

As to why feasibility matters: it's because we have limited resources. You are trying to reason about reality from the point of view of a hypothetical entity that has infinite resources. If you wish to convince people to be less skeptical of infinity (your stated intention), you will have to take feasibility into account or else make a circular argument.

But saying that things are true only by virtue of their proof being feasible... is disturbing, to say the least. If we build a faster computer, do some propositions suddenly become true, because we now have the computing power to prove them?

I am certainly not saying that feasible proofs cause things to be true. Our previous slow computer and our new fast computer cause exactly the same number of important things to be true: none at all. That is the formalist position, anyway.

Similarly, no matter how low your prior probability for "PA is consistent", so long as that probability is not 0, learning that I have proved a theorem should cause you to decrease your estimate of the probability that you will prove its negation.

Not so. If I have P(PA will be shown inconsistent in fewer than m minutes) = p, then I also have P(I will prove the negation of your theorem in fewer than m+1 minutes) = p. Your ability to prove things doesn't enter into it.

Comment author: ec429 24 September 2011 03:57:54AM 0 points [-]

lengthiness is not expected to be the only obstacle to finding a proof

True; stick a ceteris paribus in there somewhere.

You are trying to reason about reality from the point of view of a hypothetical entity that has infinite resources.

Not so; I am reasoning about reality in terms of what it is theoretically possible we might conclude with finite resources. It is just that enumerating the collection of things it is theoretically possible we might conclude with finite resources requires infinite resources (and may not be possible even then). Fortunately I do not require an enumeration of this collection.

I am certainly not saying that feasible proofs cause things to be true. Our previous slow computer and our new fast computer cause exactly the same number of important things to be true: none at all. That is the formalist position, anyway.

So either things that are unfeasible to prove can nonetheless be true, or nothing is true. So why does feasibility matter again?

P(I will prove the negation of your theorem in fewer than m+1 minutes) = p

No, it is > p. P(I will prove 1=0 in fewer than m+1 minutes) = p + epsilon. P(I will prove 1+1=2 in fewer than m+1 minutes) = nearly 1. This is because you don't know whether my proof was correct.

Comment author: [deleted] 24 September 2011 01:04:48AM 1 point [-]

Me saying I have a proof of a theorem should cause you to update P(you can find a proof) upwards.

A positive but minuscule amount. This is how cryptography works! In less than a minute (aided by my very old laptop), I gave a proof of the following theorem: the second digit of each of the prime factors of n is 6, where

n = 44289087508518650246893852937476857335929624072788480361

It would take you much longer to find a proof (even though the proof is very short!).
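The asymmetry being invoked can be sketched with toy primes (Python; these primes are small stand-ins chosen for illustration, unrelated to the 56-digit n above): verifying a claimed factorisation is cheap, while finding it by search is the expensive part.

```python
# Toy illustration of the verify/search asymmetry that cryptography
# relies on. The primes are small stand-ins, not the factors of the
# n quoted above.
p, q = 999983, 1000003   # two known primes on either side of a million
n = p * q

def verify(n, p, q):
    # checking the claimed proof "n = p * q" is a single multiplication
    return 1 < p < n and 1 < q < n and p * q == n

def factor_by_trial_division(n):
    # the search the verifier gets to skip: exponential in the number
    # of digits of n
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1

assert verify(n, p, q)                        # instant
assert factor_by_trial_division(n) == (p, q)  # ~a million divisions
```

At 56 digits the same search blows up past any feasible bound, even though the verification stays a single multiplication.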

(If it doesn't, I'd be very surprised.)

Update!

About feasibility, I might say more later.

Comment author: ec429 24 September 2011 01:55:24AM 0 points [-]

A positive but minuscule amount.

Right - but if there were no 'fact-of-the-matter' as to whether a proof exists, why should it be non-zero at all?

Comment author: [deleted] 24 September 2011 01:57:38AM *  0 points [-]

But that isn't what either of us said. You mentioned P(you can find a proof). I am telling you (telling you, modulo standard open problems) that this can be very small even after another agent has found a proof. This is a standard family of topics in computer science.

Comment author: ec429 24 September 2011 02:31:31AM -1 points [-]

I am aware it can be very small. The only sense in which I claimed otherwise was by a poor choice of wording. The use I made of the claim that "Agents implementing the same deduction rules and starting from the same axioms tend to converge on the same set of theorems" was to argue for the proposition that there is a fact-of-the-matter about which theorems are provable in a given system. You accept that my finding a proof causes you to update P(you can find a proof) upwards by a strictly positive amount - from which I infer that you accept that there is a fact-of-the-matter as to whether a proof exists. In which case, you are not arguing with my conclusion, merely with a step I used in deriving it - a step I have replaced - so does that not screen off my conclusion from that step - so why are you still arguing with me?

Comment author: [deleted] 24 September 2011 02:53:49AM 0 points [-]

I am still arguing with you because I think your misstep poisons more than you have yet realized, not to get on your nerves.

You accept that my finding a proof causes you to update P(you can find a proof) upwards by a strictly positive amount - from which I infer that you accept that there is a fact-of-the-matter as to whether a proof exists.

No. "I can find a proof in time t" is a definite event whose probability maybe can be measured (with difficulty!). "A proof exists" is a much murkier statement and it is much more difficult to discuss its probability. (For instance it is not possible to have a consistent probability distribution over assertions like this without assigning P(proof exists) = 0 or P(proof exists) = 1. Such a consistent prior is an oracle!)