But it really should be from Carnap.
Actually, the 'proof' you gave that no true list of theories like this exists made the assumption (not stated in this paper) that the sequence of indices for the computable theories is definable over arithmetic. In general there is no reason this must be true, but of course for the purposes of an AI it must be.
("This paper" being Eliezer's writeup of the procrastination paradox.) That's true, thanks.
Ultimately, you can always collapse any computable sequence of computable theories (computability being necessary for the AI to even manipulate the sequence) into a single computable theory, so there was never any hope that this kind of sequence could be useful.
First of all (always assuming the theories are at least as strong as PA), note that in any such sequence, T_0 is the union of all the theories in the sequence: if T_{n+1} |- phi, then PA |- Box_{T_{n+1}} "phi", so T_n |- Box_{T_{n+1}} "phi", and so by the trust schema, T_n |- phi; going up the chain like this, T_0 |- phi. So T_0 is in fact the "collapse" of the sequence into a single theory.
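Spelled out as a chain of implications (writing Box_T for T's provability predicate, with the trust schema Box_{T_{n+1}} "phi" -> phi available in T_n), the collapse argument is:

```latex
T_{n+1} \vdash \varphi
  \;\Longrightarrow\; \mathrm{PA} \vdash \Box_{T_{n+1}}\ulcorner\varphi\urcorner
  \;\Longrightarrow\; T_n \vdash \Box_{T_{n+1}}\ulcorner\varphi\urcorner
  \;\Longrightarrow\; T_n \vdash \varphi
  \;\Longrightarrow\; \cdots
  \;\Longrightarrow\; T_0 \vdash \varphi
```

The second implication uses Sigma_1-completeness (a finite proof in T_{n+1} is verifiable inside PA), and the fourth uses the trust schema instance for phi.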
That said, I disagree that there is no hope that this kind of sequence could be useful. (I don't literally want to use an unsound theory, but see my writeup about an infinite sequence of sound theories each proving the next consistent, linked from the main post; the same remarks apply there.) Yes, T_0 is stronger than T_1, so why would you ever want to use T_1? Well, T_0 + Con(T_0) is stronger than T_0, so why would you ever want to use T_0? But by this argument, you can't use any sound theory including PA, so this doesn't seem like a remotely reasonable argument against using T_1. Moreover, the fact that an agent using T_0 can construct an agent using T_1, but it can't construct an agent using T_0, seems like a sufficient argument against the claim that the sequence as a whole must be useless because you could always use T_0 for everything.
I meant useful in the context of AI, since any such sequence would obviously have to be non-computable and thus not something the AI (or a person) could make pragmatic use of.
Also, it is far from clear that T_0 is the union of all the theories (and this is the problem with the proof in the other writeup). It may well be that there is a sequence of theories like this, all true in the standard model of arithmetic, whose construction requires that T_n add extra statements beyond the trust schema for the proof predicate of T_{n+1}.
Also, the claim that T_n must be stronger than T_{n+1} (prove a superset of it; to be computable we can't take all these theories to be complete) is far from obvious if you don't require that T_n be true in the standard model. If T_n is true in the standard model, then, as it proves Pf(T_{n+1}, phi) -> phi, this implication is actually true; so if T_{n+1} |- phi then (as this is witnessed by a finite proof) T_n proves that the proof exists, and thus proves phi. Without this soundness assumption, however, I don't even see how to prove the containment claim.
It seems to me there are two separate issues.
1) Do you act like other people actually SAID the better argument (or interpretation of that argument) that you can put in their mouth?
2) Do you suggest the better alternative in debates and discussions of the idea before arguing against it?
Item 2 is certainly a good idea, while all the problems come from item 1. Indeed, I would suggest that both parties do best when everyone ACTS LIKE OTHER PEOPLE SAID WHATEVER YOU JUDGE IT MOST LIKELY THEY ACTUALLY INTENDED TO SAY. That way you don't fault them for misspeaking, but neither do you pretend they argued for some straw-man position. However, everyone benefits the most when they learn why what they actually argued wasn't right (especially if you offer a patched version when one is available).
This way people actually learn when they make erroneous arguments but the best arguments on each side are still addressed.
Indeed, I think a huge reason for the lack of useful progress in philosophy is too much charity.
People charitably assume that if they don't fully understand something (and aren't themselves experts in the area), the person advancing the notion is likely contributing something of value that they just don't understand yet.
This is much of the reason for the continued existence of continental-philosophy drivel like claims that set theory entails morality, or the deeply confused erudite crap in Being and Time. Anyone who isn't actually an expert in this kind of philosophy feels it would be uncharitable (or at least seem uncharitable) to get up and denounce it as the pseudo-philosophical mumbo-jumbo it is. It may seem harmless, but the existence of this kind of stuff within the boundaries of philosophy means that less extreme but also wrong views are not weeded out either.
Charity is more directly harmful within analytic philosophy (logic/math-based philosophy, as opposed to continental nonsense), where people frequently make the naive assumption that various theories, e.g., the definite-description theory of reference and the baptismal naming theory of reference, are somehow either right or wrong, and argue for these positions just as they would argue for claims about the fundamental theory of physics. Yet more sophisticated philosophers have frequently realized that this entire naive-realist viewpoint is flawed. There is no real thing called meaning, just speech and writing; these theories can only be taken as theoretical tools that provide a useful framework for organizing patterns observed in speech acts, and despite their incompatible assumptions both can be useful as approximations.
Unfortunately, I have observed time and time again that in situations like this the insight isn't passed on, since it would be uncharitable to assume that the philosophers who publish in this manner aren't really just debating which theory is the better approximation for organizing patterns in speech and writing.
Similarly, charity stops people from being called out when they continue to wrestle in print with problems (the surprise quiz, etc.) that have a clear correct solution given decades ago, since it would be uncharitable to assume (as is true) that they simply don't have a good grip on the ways mathematics can be applied, or fails to apply, to real-world situations.
The fact that you can't think of a way to compute the behavior of such a universe is no reason to conclude that it can't be done.
In particular, it's easy enough to come up with simplistic billiard ball models where you can compute events without 'backtracking'. Now such models are certainly weird in the sense that in order to compute what happens in the future one naturally relies on counterfactual claims about what one might have done.
However, Quantum Mechanics looks a great deal like this. The existence of objects like time turners creates the opportunity for multiple solutions to otherwise deterministic mechanics, and if microscopic time turners were common, one might develop a model of reality that used something like wave functions to represent the space of possible future paths, paths that can interfere constructively/destructively via time-turner-type effects.
Also, on the issue of insisting that all facts be somehow reducible to facts about atoms (or whatever physical features of the world you insist on): consider the claim that you have experiences.
As Chalmers and others have long argued, it's logically coherent to believe in a world that is identical to ours in every 'physical' respect (positions of atoms, chairs, neuron firings, etc.) but whose inhabitants simply lack any experiences. Thus, the belief that one does in fact have experiences is a claim that can't be reduced to facts about atoms or whatever.
Worse, insisting on any such reduction causes huge epistemic problems. Presumably, you learned that the universe is made of atoms, quarks, and waves rather than magical forces, spirit stuff, or whatever by interacting with the world. Yet ruling out any claims that can't be spelled out in completely physical terms forces you to assert that you didn't learn anything when you found out that the world wasn't made of spirit stuff, because such talk, by its very nature, can't be reduced to a claim about the properties of quantum fields (or whatever).
First a little clarification.
The contribution of Tarski was to define the idea of truth in a model of a theory and to show that truth in a model can be finitely defined. Separately, he also showed that no consistent theory (of sufficient strength) can include a truth predicate for itself.
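For reference, a rough statement of the undefinability result (writing Tr for a candidate truth predicate and corner quotes for Gödel codes):

```latex
% T-schema that any truth predicate would have to satisfy:
\mathrm{Tr}(\ulcorner\varphi\urcorner) \;\leftrightarrow\; \varphi
  \qquad \text{for every sentence } \varphi.
% Tarski: no consistent theory extending PA proves every instance of
% this schema for a predicate Tr definable in its own language.
```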
As for the issue of truth-conditions, this is really a matter of philosophy of language. The mere insistence that there is some objective fact out there that my words hook onto doesn't seem enough. If I insist that "There are blahblahblah in my room" but "There are no blahblahblah in your room," and when asked to clarify I explain only that blahblahblah are something that can't ever be experimentally measured or defined, but that I know when they are present and no one else does, then my insistence that my words reflect some external reality really shouldn't be enough to convince you that they do. Less extreme examples are the many philosophies of life people adopt that seem to have no observable implications.
One might react by insisting that only testable statements are coherent, but this leads down the rabbit hole of positivism. Testable by whom, and when? Do they actually have to be tested? If not, then in what sense are they testable, especially in a deterministic universe in which untested claims are automatically physically impossible to have tested (the initial conditions plus the laws determine that they will not be tested)? Taken to any kind of coherent end, you find yourself denying everyday statements like "There wasn't a leprechaun in my fridge yesterday" as nonsense, since no one actually performed any measurement that would determine the truth of the statement.
Ultimately, I take a somewhat deflationary view of truth and philosophy of language. IMO all one can do is simply choose (like your priors) which assertions you take to be meaningful and which you don't. There is no logical flaw in the person who insists on the existence of extra facts but agrees with all your conclusions about shared facts. All you can do is tell them you don't understand these extra facts they claim to believe in.
This gunk about postmodernism is nothing but fanciful angst. You do in fact use language and make choices. If someone is going to say there are extra facts about whether 'truth' is meaningful, facts that amount to more than the fact that I might be a brain in a vat and that the disquotational biconditional holds, then they are just another person insisting on extra facts I simply fail to understand. (To the extent they are attacking the existence of shared interpersonal experience/history, this is simply a disagreement over priors and no argument will settle it; but since that concern exhausts the sense in which I understand the notion of truth, any further worry is talking about something I'm not.)
From a really strict Bayesian point of view, more information can certainly make decisions worse. Only perfect information (or, perhaps, arbitrarily close-to-perfect information?) necessarily makes decisions better.
Not true. A perfect Bayesian updater will never make worse decisions in the light of new information. If new information causes worse decisions that is a reflection that the new information was not appropriately weighted according to the trustworthiness of the information source.
In other words, false information can only make for worse decisions if it is treated as true. The only reason you would treat false information as true is that you placed too much trust in the source of the information. The problem is not the receipt of the new information, it is incorrect updating due to incorrect priors regarding the reliability of the information source. That may be a common problem for actual imperfect humans but it is not an indication that acquiring new information can ever lead to worse decisions for a theoretical perfect Bayesian updater.
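To illustrate the point about weighting by source trustworthiness, here is a minimal sketch (the function name and all numbers are my own, purely for illustration): a binary source that reports the truth with probability `reliability`, updated by Bayes' theorem.

```python
def posterior(prior, reliability):
    """P(H | source asserts H), for a source that reports the truth
    with probability `reliability` and the opposite otherwise."""
    hit = prior * reliability          # H true and source reports truly
    miss = (1 - prior) * (1 - reliability)  # H false and source lies
    return hit / (hit + miss)

prior = 0.5
# Treating a mediocre source as near-infallible swings the posterior
# to near-certainty -- disastrous whenever the report happens to be false:
print(posterior(prior, 0.999))  # ~0.999
# Weighting the report by its actual reliability keeps the update modest:
print(posterior(prior, 0.6))    # 0.6
```

On this picture, the harm comes entirely from plugging in 0.999 when the true reliability is 0.6, not from receiving the report itself.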
That's not quite right. The provision of all true but biased information (e.g. only those facts that are consistent with guilt) without complete awareness of the exact nature of the bias applied can increase the chances of an error.
Even unbiased info can't be said to always help. A good example is someone with crazy priors. Suppose someone has the crazy prior that creationism is true with probability .99999. If they have somehow acquired evidence that overcomes this prior, but further information about problems with evolutionary theories would leave them with still strong but no longer convincing evidence that evolution is true, then providing them with that further (true) information increases their chance of error.
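To put hypothetical numbers on this (all figures invented for illustration): give the person prior probability 1e-5 on evolution, a body of evidence with combined likelihood ratio 1e7 in evolution's favor, and one further true piece of information with likelihood ratio 1e-3 (i.e., favoring creationism 1000:1).

```python
def update_odds(prior_p, *likelihood_ratios):
    """Posterior probability of a hypothesis after multiplying its prior
    odds by each likelihood ratio (ratios > 1 favor the hypothesis)."""
    odds = prior_p / (1 - prior_p)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

print(update_odds(1e-5, 1e7))        # ~0.99: the evidence overcomes the crazy prior
print(update_odds(1e-5, 1e7, 1e-3))  # ~0.09: the extra true fact flips them back into error
print(update_odds(0.5, 1e7, 1e-3))   # ~0.9999: with a sane prior the same fact is harmless
```

The updating itself is flawless throughout; it is the crazy prior that turns a true piece of information into a source of error.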
More generally, disagreement in priors forces one to believe that others will make worse decisions if evidence that exacerbates the errors in their priors is provided.
This is a debate about nothing. Turing completeness tells us that no matter how much it appears that a given Turing-complete representation can only usefully process data about certain kinds of things, it can process data about anything any other language can.
Well, duh; but this (and the halting problem) have been taught and yet systematically ignored in programming language design, and this is exactly the same argument.
We are sitting around in the armchair trying to come up with a better means of logic/data representation (be it a programming language or the underlying AI structure) as if the debate were about mathematical elegance or some such objective notion. Until you prove to me that AIXI cannot duplicate the behavior (modulo semantic changes as to what we call a punishment) of the other system, or vice versa, mutual duplicability is the likely scenario.
So what would make one model for AI better than another? These vague theoretical issues? No, no more than how fancy your type system is determines the productiveness of your programming language. Ultimately, the hurdle to overcome is that HUMANS need to build and reason about these systems, and we are more inclined to certain kinds of mistakes than others. For instance, I might write a great language using the full calculus of inductive constructions as a type system, with type inference almost everywhere, but if my language looks like line noise rather than human words, all that math is irrelevant.
I mean, ask yourself why human programming and genetic programming are so different. Because the model you use to build up your system has a far greater impact on your ability to understand what is going on than any other effect. Sure, if you write in pure assembly, JMPs everywhere with crazy code-packing tricks, it goes faster, but you still lose.
If I'm right about this case as well, it can only be decided by practical experiments where you have people try to reason in (simplified) versions of the systems and see what can and can't be easily fixed.