Comment author: TruePath 19 March 2014 11:42:02AM -1 points [-]

This is a debate about nothing. Turing completeness tells us that no matter how much it appears that a given Turing-complete representation can only usefully process data about certain kinds of things in reality, it can process data about anything any other language can.

Well, duh, but this (and the halting problem) have been taught and yet systematically ignored in programming language design, and this is exactly the same argument.

We are sitting around in the armchair trying to come up with a better means of logic/data representation (be it a programming language or the underlying AI structure) as if the debate were about mathematical elegance or some such objective notion. Until you prove to me otherwise, the likely scenario is that anything one system (AIXI, say) can do, the other system can duplicate (modulo semantic changes as to what we call a punishment), and vice versa.

So what would make one model for AI better than another? These vague theoretical issues? No, no more than how fancy your type system is determines the productiveness of your programming language. Ultimately, the hurdle to overcome is that HUMANS need to build and reason about these systems, and we are more inclined to certain kinds of mistakes than others. For instance, I might write a great language using the full calculus of inductive constructions as a type system, and still do type inference almost everywhere, but if my language looks like line noise rather than human words, all that math is irrelevant.

I mean, ask yourself why human programming and genetic programming are so different. Because the model you use to build up your system has a far greater impact on your ability to understand what is going on than any of its other effects. Sure, if you write in pure assembly, JMPs everywhere, with crazy code-packing tricks, it goes faster, but you still lose.

If I'm right, this case too can only be decided by practical experiments where you have people try to reason in (simplified) versions of the systems and see what can and can't easily be fixed.

Comment author: fortyeridania 01 February 2014 07:13:43AM 0 points [-]

The one from Carnap ("Anything you can do, I can do meta") might not really be from Carnap. Can anyone find a source besides this one, which only gets it back to 1991?

Comment author: TruePath 01 February 2014 10:12:19AM 0 points [-]

But it really should be from Carnap.

Comment author: Benja 29 January 2014 04:58:42PM *  0 points [-]

Actually, the 'proof' you gave that no true list of theories like this exists made the assumption (not stated in the paper) that the sequence of indexes for the computable theories is definable over arithmetic. In general there is no reason this must be true, but of course for the purposes of an AI it must be.

("This paper" being Eliezer's writeup of the procrastination paradox.) That's true, thanks.

Ultimately, you can always collapse any computable sequence of computable theories (computability being necessary for the AI to even manipulate the sequence) into a single computable theory, so there was never any hope that this kind of sequence could be useful.

First of all (always assuming the theories are at least as strong as PA), note that in any such sequence, T_0 is the union of all the theories in the sequence: if T_{n+1} |- phi, then PA |- Box_{T_{n+1}} "phi", so T_n |- Box_{T_{n+1}} "phi", so by the trust schema, T_n |- phi; going up the chain like this, T_0 |- phi. So T_0 is in fact the "collapse" of the sequence into a single theory.
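The chain of implications above can be laid out step by step. A sketch in standard notation, assuming every T_n extends PA and satisfies the trust schema Box_{T_{n+1}} "phi" -> phi:

```latex
\begin{align*}
T_{n+1} &\vdash \varphi
  && \text{(hypothesis)}\\
\mathrm{PA} &\vdash \Box_{T_{n+1}}\ulcorner\varphi\urcorner
  && \text{(provability is $\Sigma_1$; PA verifies the finite proof)}\\
T_n &\vdash \Box_{T_{n+1}}\ulcorner\varphi\urcorner
  && \text{($T_n$ extends PA)}\\
T_n &\vdash \varphi
  && \text{(trust schema $\Box_{T_{n+1}}\ulcorner\varphi\urcorner \rightarrow \varphi$)}
\end{align*}
```

Iterating the last three steps down the chain from n to 0 gives T_0 |- phi.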

That said, I disagree that there is no hope that this kind of sequence could be useful. (I don't literally want to use an unsound theory, but see my writeup about an infinite sequence of sound theories each proving the next consistent, linked from the main post; the same remarks apply there.) Yes, T_0 is stronger than T_1, so why would you ever want to use T_1? Well, T_0 + Con(T_0) is stronger than T_0, so why would you ever want to use T_0? But by this argument, you can't use any sound theory including PA, so this doesn't seem like a remotely reasonable argument against using T_1. Moreover, the fact that an agent using T_0 can construct an agent using T_1, but it can't construct an agent using T_0, seems like a sufficient argument against the claim that the sequence as a whole must be useless because you could always use T_0 for everything.

Comment author: TruePath 30 January 2014 04:44:46AM 0 points [-]

I meant useful in the context of AI, since any such sequence would obviously have to be non-computable and thus not something the AI (or a person) could make pragmatic use of.

Also, it is far from clear that T_0 is the union of all the theories (and this is the problem in the proof in the other writeup). It may well be that there is a sequence of theories like this, all true in the standard model of arithmetic, whose construction requires that T_n add extra statements beyond the schema for the proof predicate of T_{n+1}.

Also, the claim that T_n must be stronger than T_{n+1} (prove a superset of it; to be computable, we can't take all these theories to be complete) is far from obvious if you don't require that T_n be true in the standard model. If T_n is true in the standard model, then, as it proves Pf(T_{n+1}, phi) -> phi, this schema is true; so if T_{n+1} |- phi, then (as this is witnessed by a finite proof) there is a proof from T_n that this holds, and thus a proof of phi. However, without this assumption I don't even see how to prove the containment claim.

Comment author: TruePath 29 January 2014 03:56:05PM 2 points [-]

Actually, the 'proof' you gave that no true list of theories like this exists made the assumption (not stated in the paper) that the sequence of indexes for the computable theories is definable over arithmetic. In general there is no reason this must be true, but of course for the purposes of an AI it must be.

Ultimately, you can always collapse any computable sequence of computable theories (computability being necessary for the AI to even manipulate the sequence) into a single computable theory, so there was never any hope that this kind of sequence could be useful.

Comment author: TruePath 29 January 2014 03:32:52PM 0 points [-]

It seems to me there are two separate issues.

1) Do you act as if other people actually SAID the better argument (or interpretation of their argument) that you can put in their mouth?

2) Do you suggest the better alternative in debates and discussions of the idea before arguing against it?


(2) is certainly a good idea, while all the problems come from item (1). Indeed, I would suggest that both parties do best when everyone ACTS AS IF OTHER PEOPLE SAID WHATEVER YOU JUDGE THEY MOST LIKELY ACTUALLY INTENDED TO SAY. So you don't jump on them for misspeaking, nor do you pretend they argued for some straw-man position. However, everyone benefits the most when they learn why what they actually argued wasn't right (especially if you offer a patched version when available).

This way people actually learn when they make erroneous arguments but the best arguments on each side are still addressed.

Comment author: TruePath 29 January 2014 03:26:32PM 0 points [-]

Indeed, I think a huge reason for the lack of useful progress in philosophy is too much charity.

People charitably assume that if they don't fully understand something (and aren't themselves experts in the area), the person advancing the notion is likely contributing something of value that they just don't understand yet.

This is much of the reason for the continued existence of continental-philosophy drivel like claims that set theory entails morality, or the deeply confused erudite crap in Being and Time. Anyone who isn't actually an expert in this kind of philosophy feels it would be uncharitable (or at least seem uncharitable) to get up and denounce it as the pseudo-philosophical mumbo-jumbo it is. It may seem harmless, but the existence of this kind of stuff within the boundaries of philosophy means that less extreme but still wrong views also aren't weeded out.

Charity is more directly harmful within analytic philosophy (logic/math-based philosophy, as opposed to continental nonsense), where people frequently make the naive assumption that various theories, e.g., the definite description theory of reference and the baptismal naming theory of reference, are somehow either right or wrong, and argue for these positions just as they would argue for claims about the fundamental theory of physics. Yet more sophisticated philosophers have frequently realized that this entire naive-realist viewpoint is flawed. There is no real thing, meaning; there are just speech and writing, and thus these theories can only be taken as theoretical tools that help provide a useful framework for organizing patterns observed in speech acts, and despite their incompatible assumptions both can be useful as approximations.

Unfortunately, I have observed time and time again that in situations like this the insight isn't passed on since it would be uncharitable to assume the philosophers who publish in this manner aren't really just debating which is a better approximation to help organize patterns in speech/writing.

Similarly, charity stops people from being called out when they continue to wrestle in print with problems (the surprise quiz, etc.) that have a clear, correct solution given decades ago, since it would be uncharitable to assume (as is true) that they simply don't have a good grip on the ways mathematics can be applied, or fails to apply, to real-world situations.

Comment author: TruePath 29 January 2014 03:04:38PM -3 points [-]

This highlights all the difficulties in even making sense of a notion of rationality (I believe 'believing truths' is well defined, but that there is no relation one can define between OBSERVATIONS and BELIEFS that corresponds to our intuitive notion of rationality).

In particular, your definition of rational seems to be something about satisfying the most goals, or some other act-based notion of rationality (not merely the attempt to believe the most truths). However, this creates several natural questions. First, if you would change your goals given sufficient time and clearer thinking, does it still count as rational in your sense to achieve them? If so, then you end up with the strange result that you probably SHOULDN'T spend too much time thinking about your goals or otherwise trying to improve your rationality. After all, it is reasonably likely that your goals would change if subjected to sufficient consideration, and if you do manage to change those goals, you now almost certainly won't achieve the original goals (which are what it is rational to achieve), while contemplation and attempts to improve the clarity of your thinking probably don't offer enough practical benefit to make it, on net, more likely that you will achieve your original goals. This result seems ridiculous and in deep conflict with the idea of rationality.

Alternatively, suppose the mere fact that with enough clear-eyed reflection you would change your goals means that rational action is the action most likely to achieve the goals you would adopt with enough reflection, rather than the goals you hold without it. This too leads to absurdities.

Suppose (as I was until recently) I'm a mathematician and I'm committed to solving some rather minor but interesting problem in my field. I don't consciously realise that I adopted that goal because it is the most impressive thing in my field that I haven't rejected as infeasible, but that correctly describes my actual dispositions, i.e., if I discover that some other, far more impressive result is something I can prove, then I will switch over to wanting to do that. Now, almost certainly there is at least one open problem in my field that is considered quite hard but actually has some short, clever proof, but since I currently don't know what it is, every problem considered quite hard is something I am inclined to think impractical.

However, since rationality is defined as those acts which increase the likelihood that I will achieve the goal I would have had IF I had spent arbitrarily long clearheadedly contemplating my goals, and given enough time I could consider every short proof, it follows that I ACT RATIONALLY WHENEVER MY ACTIONS MAKE ME MORE LIKELY TO SOLVE THE HARD MATH PROBLEM IN MY FIELD THAT HAPPENS TO HAVE AN OVERLOOKED SHORT PROOF, EVEN THOUGH I HAVE NO REASON TO PURSUE THAT PROBLEM CURRENTLY. In other words, I end up being rational just when I do something that is intuitively deeply irrational, i.e., when for no discernible reason I happen to ignore all the evidence that suggests the problem is hard and happen to switch to working on it.


This isn't merely an issue for act rationality as discussed here, but also for belief rationality. Intuitively, belief rationality is something that should help me believe true things. Now ask whether it is more rational to believe, as all the current evidence suggests, that the one apparently hard but actually fairly easy math problem is hard, or to believe it is easy. If rationality is really about coming to more true beliefs, then it is ALWAYS MORE RATIONAL TO RANDOMLY BELIEVE THAT AN EASY MATH PROBLEM IS EASY (OR THAT A PROVABLE MATHEMATICAL STATEMENT IS TRUE) THAN TO BELIEVE WHATEVER THE EVIDENCE SAYS ABOUT THE PROBLEM. Yet this is in deep conflict with our intuition that rationality should be about behaving in some principled way with respect to the evidence and not making blind leaps of faith.


Ultimately, the problem comes down to the lack of any principled notion of what counts as a rule for decision making. There is no principled way to distinguish between the rule 'Believe what the experts in the field and other evidence tell you about the truth of unresolved mathematical statements' and the rule 'Believe what the experts in the field and other evidence tell you about the truth of unresolved mathematical statements, except for statement p, which you should believe is true with complete confidence.' Since the second rule always yields more true beliefs than the first, it should be more rational to accept it.

This result is clearly incompatible with our intuitive notion of rationality so we are forced to admit the notion itself is flawed.

Note that you can't avoid this problem by insisting that we have reasons for adopting one belief over another, or anything like that. After all, consider someone whose basic belief-formation mechanism didn't cause them to accept A when A & B was asserted. They are worse off than us in the same way that we were worse off than the person who randomly accepted ZFC -> FLT (Fermat's Last Theorem is true if set theory is true) before Wiles provided any proof. There is a brute mathematical fact that each is inclined to infer without further evidence, and it always serves to help them reach true beliefs.

In response to Causal Universes
Comment author: TruePath 14 December 2012 09:35:12PM 2 points [-]

The fact that you can't think of a way to compute the behavior of such a universe is no reason to conclude that it can't be done.

In particular, it's easy enough to come up with simplistic billiard-ball models where you can compute events without 'backtracking'. Such models are certainly weird in the sense that, in order to compute what happens in the future, one naturally relies on counterfactual claims about what one might have done.

However, QUANTUM MECHANICS LOOKS A GREAT DEAL LIKE THIS. The existence of objects like time turners creates the opportunity for multiple solutions to otherwise deterministic mechanics, and if microscopic time turners were common, one might develop a model of reality that used something like wave functions to represent the space of possible future paths, which can interfere constructively/destructively via interaction through time-turner-type effects.

Comment author: TruePath 25 October 2012 08:48:52AM -3 points [-]

The concern of the philosophers is the idea of 'true causation' as independent from merely apparent causation. In particular, they have in mind the idea that even if the laws of the universe were deterministic, there would be a sense in which certain events could be said to be causes of others, even though, mathematically, the configuration of the universe at any time completely entails it at all others. Frankly, this question arises out of half-baked arguments about whether events cause later events or whether god has predetermined and causes all events individually, and I don't take it seriously.

My take is that there is no such thing as causation. Correlation is all there is, and the fact that many correlations are usefully and compactly described by Bayesian causal models is actually support for the idea that the ascription of causation reflects nothing more than how the arrows happen to point in those causal models we find most compelling. In other words, I don't think it makes sense to look under your model to ask what is truly causation, but we should be clear that that is what the philosophers mean.

Despite my great respect for Bayesian causal models, they don't let us deduce causality from correlation, and I can prove it.

Given results about k events (assume for simplicity they are binary true/false events) E_1...E_k (so E_1 might be burglary, E_2 earthquake, E_3 recession, with a trial each year) and any ordering <* on 1..k, there is a causal model such that E_i is a causal antecedent of E_j iff i <* j, and it perfectly agrees with the given probabilities. In other words, at the expense of potentially having every E_i with i <* j affect the probability of E_j, I can have any causal order I want on the events and get the same results.

To see this is true, start with whatever event we want to occur first, say E_{i_1}. Now we compute the probabilities that the next event, E_{i_2}, occurs conditional on E_{i_1} and on its negation. For E_{i_3} we compute the probabilities that this event occurs conditional on all 4 outcomes for the pair E_{i_1}, E_{i_2}, and so on. This gives the correct probability to each set of outcomes and thus matches all observations. Alternatively, we can always make the E_i all dependent on some invisible common causes that match the appropriate priors.
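The construction can be checked numerically. A minimal sketch (the joint distribution below is an arbitrary made-up example, and `chain_factorisation` is a hypothetical helper name): factor the same joint distribution over three binary events via the chain rule under several different "causal" orderings, and confirm that every ordering reproduces the observed probabilities exactly.

```python
from itertools import product

# An arbitrary assumed joint distribution over three binary events
# E_1, E_2, E_3 (values are illustrative and sum to 1).
joint = {
    (0, 0, 0): 0.20, (0, 0, 1): 0.10,
    (0, 1, 0): 0.15, (0, 1, 1): 0.05,
    (1, 0, 0): 0.10, (1, 0, 1): 0.15,
    (1, 1, 0): 0.05, (1, 1, 1): 0.20,
}

def marginal(joint, idxs, vals):
    """P(E_i = v for each (i, v) in zip(idxs, vals))."""
    return sum(p for outcome, p in joint.items()
               if all(outcome[i] == v for i, v in zip(idxs, vals)))

def chain_factorisation(joint, order):
    """Rebuild the joint distribution via the chain rule, treating
    earlier variables in `order` as causal antecedents of later ones."""
    rebuilt = {}
    for outcome in product((0, 1), repeat=3):
        p = 1.0
        for k, i in enumerate(order):
            parents = order[:k]  # every earlier event may affect E_i
            num = marginal(joint, parents + [i],
                           [outcome[j] for j in parents] + [outcome[i]])
            den = marginal(joint, parents, [outcome[j] for j in parents])
            p *= num / den if den > 0 else 0.0
        rebuilt[outcome] = p
    return rebuilt

# Any causal ordering of the events reproduces the same observations.
for order in ([0, 1, 2], [2, 1, 0], [1, 2, 0]):
    rebuilt = chain_factorisation(joint, list(order))
    assert all(abs(rebuilt[o] - joint[o]) < 1e-12 for o in joint)
```

The point of the sketch is exactly the one in the text: the data alone never forces one arrow direction over another, since every ordering yields a model that matches all observed frequencies.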

True, these diagrams might be less simple in some sense than other diagrams we might draw, but that doesn't mean they are false. Indeed, we might have very good general reasons for preferring some more complicated theory: e.g., even if a simpler causal model could explain the data, if it requires causal dependence on effects later in time, we reject it in favor of some more complicated model. This is a useful generalization we have about the world, and following it helps us reach better predictions when we have limited data. Thus the mere number of arrows can't simply be minimized.

In other words, all you've got is the same old crap about preferring the simpler theory, where 'simpler' has no principled mathematical definition and more or less means 'prefer whatever your priors say the causal model really looks like.' In other words, we haven't gotten any closer to inferring causation.

Just the opposite. The use of Bayesian causal models explains extremely well why, even if events are truly all effects caused by the choices of some unseen mover, the notion of causation would be likely to evolve.

Comment author: TruePath 25 October 2012 07:51:27AM 0 points [-]

Also, on the issue of insisting that all facts be somehow reducible to facts about atoms (or whatever physical features of the world you insist on), consider the claim that you have experiences.

As Chalmers and others have long argued, it is logically coherent to believe in a world that is identical to ours in every 'physical' respect (positions of atoms, chairs, neuron firings, etc.) but whose inhabitants simply lack any experiences. Thus, the belief that one does in fact have experiences is a claim that can't be reduced to facts about atoms or whatever.

Worse, insisting on any such reduction causes huge epistemic problems. Presumably, you learned that the universe is made of atoms, quarks, and waves, rather than magical forces, spirit stuff, or whatever, by interacting with the world. Yet ruling out any claims that can't be spelled out in completely physical terms forces you to assert that you didn't learn anything when you found out that the world wasn't made of spirit stuff, because such talk, by its very nature, can't be reduced to a claim about the properties of quantum fields (or whatever).
