Comment author: Eugine_Nier 14 October 2012 06:03:06PM 0 points [-]

What if the universe permits hyper-computation?

Comment author: potato 14 October 2012 11:04:26PM *  0 points [-]

Hmm, it depends on whether you can give finite complete descriptions of those algorithms. If you can, I don't see the problem with just tacking them on: a finitely describable algorithm has finite Kolmogorov complexity, so the prior 2^-K(h) will still assign nonzero probability to hyper environments.
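To illustrate the point about finite descriptions, here's a toy sketch (not real Kolmogorov complexity — `prior_weight` and the 120-bit description are stand-ins of my own): any hypothesis with a finite description gets a small but strictly positive weight under a description-length prior.

```python
# Toy illustration (not real Kolmogorov complexity): if a hypothesis h has
# any finite description, the weight 2^-len(h) is small but strictly positive,
# so a description-length prior never rules it out a priori.

def prior_weight(description: str) -> float:
    """Weight 2^-n for an n-bit description; a crude stand-in for 2^-K(h)."""
    return 2.0 ** -len(description)

# A hypothetical finite description of a hypercomputing environment --
# the point is only that finiteness suffices for nonzero prior mass.
halting_oracle_env = "0" * 120  # pretend this 120-bit string encodes it

assert prior_weight(halting_oracle_env) > 0.0
```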

If there are no such finite complete descriptions, then I gotta go back to the drawing board, 'cause the universe could totally allow hyper-computation.

On a side note, where should I go to read more about hyper-computation?

Comment author: Eliezer_Yudkowsky 10 October 2012 05:56:43AM 5 points [-]

Koan 3:

Does the idea that everything is made of causes and effects meaningfully constrain experience? Can you coherently say how reality might look, if our universe did not have the kind of structure that appears in a causal model?

Comment author: potato 14 October 2012 09:08:06AM *  2 points [-]

At first thought, it seems that if it could be falsified, then it would fail its own criterion of containing all and only those hypotheses which could in principle be falsified. It's a kind of meta-reference problem: if the claim constrains experience, then there are falsifiable hypotheses which are not interpretable as causal graphs (no matter how unlikely). This is so because the sentence says "all and only those hypotheses that can be interpreted as causal graphs are falsifiable", and falsifying it means verifying that there is at least one falsifiable hypothesis which cannot be interpreted as a causal graph. Short answer: not if we got it right this time.

(term clarification) All and only hypotheses that constrain experience are falsifiable and verifiable, since there exists a portion of experience space which, if observed, falsifies them, while the rest verifies them (probabilistically).

Comment author: potato 14 October 2012 08:54:09AM *  0 points [-]

I have to ask, how does this metaphysics (cause that's what it is) account for mathematical truths? What causal models do those represent?

My bad:

Someone already asked this more cleverly than I did.

Comment author: potato 14 October 2012 08:46:56AM *  3 points [-]

I have a plausibly equivalent candidate (or at least one that implies Ey's) for the fabric of real things, i.e., the space of hypotheses which could in principle be true, i.e., the space of beliefs which have sense:

A hypothesis has nonzero probability iff it is computable or semi-computable.

It's rather obviously inspired by Solomonoff induction, and it's a sound principle for any being attempting to approximate the universal prior.
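One technical point worth flagging: for 2^-K(h) to behave like a (semi)measure, hypothesis descriptions are taken from a prefix-free code, so the weights sum to at most 1 (Kraft's inequality). A minimal sketch with toy codes of my own choosing:

```python
# A minimal sketch (my toy codes, not from the thread): if hypothesis
# descriptions form a prefix-free set of bitstrings, the weights 2^-len
# sum to at most 1 (Kraft's inequality), which is what lets 2^-K(h)
# act as a (semi)measure over computable hypotheses.

def is_prefix_free(codes):
    """True iff no code is a proper prefix of another."""
    return not any(a != b and b.startswith(a) for a in codes for b in codes)

codes = ["0", "10", "110", "111"]      # toy hypothesis descriptions
assert is_prefix_free(codes)

total_mass = sum(2.0 ** -len(c) for c in codes)
assert total_mass <= 1.0               # here it is exactly 1.0
```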

Comment author: Bundle_Gerbe 12 October 2012 09:32:44PM 21 points [-]

I am confused by these posts. On the one hand, Eliezer argues for an account of causality in terms of probabilities, which as we know are subjective degrees of belief. So we should be able to read off whether X thinks A causes B by looking at the conditional probabilities in X's map.

But on the other hand, he suggests (though I'm not completely sure from the article that this is his view) that the universe is actually made of cause and effect. I would think that the former argument instead suggests causality is "subjectively objective". Just as with probability, causality is fundamentally an epistemic relation between me and the universe, despite the fact that there can be widespread agreement on whether A causes B. Of course, I can't avoid cancer by deciding "smoking doesn't cause cancer", just as I can't win the lottery by deciding that my probability of winning it is .9.

For instance, how would an omniscient agent decide whether A causes B according to Eliezer's account of Pearl? I don't think they would be able to, except maybe in cases where they could count frequencies as a substitute for probabilities.

Comment author: potato 13 October 2012 07:10:30PM 3 points [-]

It seems to me that this is the primary thing that we should be working on. If probability is subjective, and causality reduces to probability, then isn't causality subjective, i.e., a function of background knowledge?

Comment author: IlyaShpitser 04 October 2012 10:40:58PM *  1 point [-]

I don't understand your question, or your notation.

d-separation is just a way of talking about separating sets of vertices in a graph by "blocking" paths. It can't be implied by anything because it is not a statement in a logical language. For "certain" graph/joint distribution pairs, if a d-separation statement holds in the graph, then a corresponding conditional independence statement holds in the joint distribution. This is a statement, and it is proven in Verma and Pearl 1988, as paper-machine below says. Is that the statement you mean? There are lots of interesting true and hard to prove statements one could make involving d-separation.

I guess from a model theorist point of view, it's a proof in ZF, but it's high level and "elementary" by model theory standards.

Comment author: potato 11 October 2012 11:02:43AM *  0 points [-]

Looking it over, I could have been much clearer (sorry). Specifically, I want to know: given a DAG of the form

A -> C <- B

is it true that, in all prior joint distributions where A is independent of B, but A is evidence for C and B is evidence for C, A is not independent of B once C is held constant?

I proved that this is so when A & B together are evidence against C, and also when A & B together are independent of C; the only case I am missing is when A & B together are evidence for C.

It's clear enough to me that when there is a single non-colliding path between two variables, they cannot be independent; and that if we hold any of the variables along that path constant, the two become independent. This can all be shown with standard probability theory alone. It can also be shown that if there are only colliding paths between two variables, the two are independent. If I have understood the theory of d-separation correctly, then if we hold the collision variable (assuming there is only one) on one of these paths constant, the two variables should become dependent (either evidence for or against one another). I have proven that this is so in two of the (at least) three cases that fit the given DAG, using standard probability theory.

Those are the proofs I gave above.
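The missing case (A and B each evidence *for* C) can at least be checked on a toy example. A minimal sketch, with a joint of my own choosing: A and B are independent fair coins and C = A or B; conditioning on C makes A and B dependent, exactly as d-separation predicts for a collider.

```python
from itertools import product

# Exact check of "explaining away" on the collider A -> C <- B, using a toy
# joint where A and B are independent fair coins and C = A or B.

p = {}
for a, b in product([0, 1], repeat=2):
    c = a | b
    p[(a, b, c)] = 0.25
for a, b, c in product([0, 1], repeat=3):   # fill in zero-probability rows
    p.setdefault((a, b, c), 0.0)

def P(pred):
    """Probability of the event defined by pred(a, b, c)."""
    return sum(q for k, q in p.items() if pred(*k))

# Marginally, A and B are independent:
assert P(lambda a, b, c: a == 1 and b == 1) == \
       P(lambda a, b, c: a == 1) * P(lambda a, b, c: b == 1)

# Conditional on C = 1, they are not:
pc = P(lambda a, b, c: c == 1)                                 # 0.75
pab_c = P(lambda a, b, c: a == 1 and b == 1 and c == 1) / pc   # 1/3
pa_c = P(lambda a, b, c: a == 1 and c == 1) / pc               # 2/3
pb_c = P(lambda a, b, c: b == 1 and c == 1) / pc               # 2/3
assert abs(pab_c - pa_c * pb_c) > 1e-9   # dependence induced by conditioning
```

Here P(A=1, B=1 | C=1) = 1/3 falls below P(A=1 | C=1) · P(B=1 | C=1) = 4/9, so conditioning on the collider makes A and B evidence against each other.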

Comment author: potato 02 October 2012 04:43:50AM *  3 points [-]

I have a question: is d-separation implied by the Kolmogorov axioms?

I've proven that it is in some cases:

Premises (writing P(A|BC) for the probability of A given both B and C):

1) P(A|B) = P(A)
2) P(C) < P(C|A)
3) P(C) < P(C|B)
4) P(C|AB) < P(C)

To show: P(A|BC) < P(A|C)

Proof:

1) P(B|C) > P(B) {via premise 3}
2) P(A|BC) = P(A) * P(B) * P(C|AB) / (P(C) * P(B|C)) {via premise 1}
3) P(A|BC) * P(C) = P(A) * P(B) * P(C|AB) / P(B|C)
4) P(A|BC) * P(C) / P(A) = P(B) * P(C|AB) / P(B|C)
5) P(B) * P(C|AB) / P(B|C) < P(C|AB) {via line 1}
6) P(B) * P(C|AB) / P(B|C) < P(C) {via line 5 and premise 4}
7) P(A|BC) * P(C) / P(A) < P(C) {via lines 6 and 4}
8) P(A|C) = P(A) * P(C|A) / P(C)
9) P(A|C) * P(C) = P(A) * P(C|A)
10) P(A|C) * P(C) / P(A) = P(C|A)
11) P(C) < P(A|C) * P(C) / P(A) {via line 10 and premise 2}
12) P(A|BC) * P(C) / P(A) < P(A|C) * P(C) / P(A) {via lines 7 and 11}
13) P(A|BC) < P(A|C)
Q.E.D.

Premises:

1) P(A|B) = P(A)
2) P(C) < P(C|A)
3) P(C) < P(C|B)
4) P(C|AB) = P(C)

To show: P(A|BC) < P(A|C)

Proof:

1) P(A|C) = P(A) * P(C|A) / P(C)
2) P(A|BC) = P(A) * P(B) * P(C) / (P(B) * P(C|B)) {via premises 1 and 4}
3) P(A|BC) = P(A) * P(C) / P(C|B)
4) P(A) * P(C) < P(A) * P(C|A) {via premise 2}
5) P(A) * P(C) / P(C|B) < P(A) * P(C|A) / P(C) {via line 4 and premise 3}
6) P(A|BC) < P(A|C) {via lines 1, 3, and 5}
Q.E.D.
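As a sanity check on both derivations, here's a small numerical sketch (the joints below are hand-picked examples of my own, chosen only to satisfy each set of premises): A and B are independent fair coins, and f(a, b) gives P(C = 1 | A = a, B = b).

```python
from itertools import product

# Numerical spot-checks of the two proved conclusions on hand-picked joints.
# A and B are independent fair coins; f maps (a, b) to P(C=1 | A=a, B=b).

def check(f):
    p = {(a, b, c): 0.25 * (f[(a, b)] if c else 1 - f[(a, b)])
         for a, b, c in product([0, 1], repeat=3)}
    P = lambda pred: sum(q for k, q in p.items() if pred(*k))
    pC = P(lambda a, b, c: c == 1)
    pC_A = P(lambda a, b, c: c == 1 and a == 1) / 0.5     # P(A=1) = 0.5
    pC_B = P(lambda a, b, c: c == 1 and b == 1) / 0.5     # P(B=1) = 0.5
    pC_AB = P(lambda a, b, c: c == 1 and a == 1 and b == 1) / 0.25
    pA_BC = (P(lambda a, b, c: a == 1 and b == 1 and c == 1)
             / P(lambda a, b, c: b == 1 and c == 1))
    pA_C = P(lambda a, b, c: a == 1 and c == 1) / pC
    return pC, pC_A, pC_B, pC_AB, pA_BC, pA_C

# Case 1: A & B jointly evidence *against* C  (premise 4: P(C|AB) < P(C))
pC, pC_A, pC_B, pC_AB, pA_BC, pA_C = check(
    {(0, 0): 0.1, (1, 0): 0.9, (0, 1): 0.9, (1, 1): 0.2})
assert pC_A > pC and pC_B > pC and pC_AB < pC   # premises hold
assert pA_BC < pA_C                             # proved conclusion holds

# Case 2: A & B jointly independent of C  (premise 4: P(C|AB) = P(C))
pC, pC_A, pC_B, pC_AB, pA_BC, pA_C = check(
    {(0, 0): 0.0, (1, 0): 0.6, (0, 1): 0.6, (1, 1): 0.4})
assert pC_A > pC and pC_B > pC and abs(pC_AB - pC) < 1e-12
assert pA_BC < pA_C
```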

If it is implied by classical probability theory, could someone please refer me to a proof?

Comment author: potato 16 September 2012 07:50:30AM *  0 points [-]

A real deadlock I have with using your algorithmic meta-ethics to think about object-level ethics is that I don't know whose volition, or "should" label, I should extrapolate from. It allows me to figure out what's right for me, and what's right for any group given certain shared extrapolated terminal values, but it doesn't tell me what to do when I am dealing with a population with non-converging extrapolations, or with someone who has different extrapolated values from me (hypothetically).

These individuals are rare, but they likely exist.

Comment author: Spinning_Sandwich 11 September 2012 09:45:12PM -1 points [-]

Why not call the set of all sets of actual objects with cardinality 3, "three", the set of all sets of physical objects with cardinality 2, "two", and the set of all sets of physical objects with cardinality 5, "five"?

Because that's how naive class theory works, not how consistent formal mathematics works.

The closest thing to a canonical approach these days is to start from what you have, nothing, and call that the first set. Then you make sets from those sets in a very restrictive, axiomatic way. Variants get as exotic as the surreal numbers, but the running theme is to avoid defining sets by intension unless you're quantifying over a known domain.

For the record, I don't think any of these things "exist" in any meaningful sense. We can do mathematics with inconsistent systems just as well, if less usefully. The law of non-contradiction is something I don't see how to get past (i.e., I can't comprehend such a thing), and there is nothing much else distinguishing the consistent systems as being anything other than collections of statements to the effect that this and that follow if we grant these or those axioms. (Fortunately, it's more interesting than that at the higher levels.)

Comment author: potato 16 September 2012 07:27:11AM *  0 points [-]

You've misunderstood me. It's really not at all conspicuous to allow a non-empty "set" into your ontology, but if you'd prefer, we can talk about heaps; they serve my purposes here (by "heap" I mean any random pile of stuff). Every heap has parts: you're a heap of cells, decks are heaps of cards, masses are heaps of atoms, etc. Now if you apply a level filter to the parts of a heap, you can count them. For instance, I can count the organs in your body, or count the cells in your body, and end up with two different values, though I counted the same object. The same object can constitute many heaps, as long as there are several ways of dividing it into parts. So what we can do is talk about the laws of heap combination rather than the laws of numbers. We don't require any further generality in our mathematics to do all our counting, and yet the only objects I've had to adopt into my ontology are heaps (rather inconspicuous material fellows IMHO).
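A toy model of this level-filtered counting (nested lists standing in for heaps; a sketch of my own, not a proposal for a real foundation): the same heap yields different counts depending on which level you filter at.

```python
# A toy model of the "heap" picture: a heap is just a nested list, and
# counting depends on which nesting level you filter at -- the same
# object yields different counts at different levels.

body = [  # a "body" heap of 2 organs, each a heap of cells
    ["cell"] * 3,   # organ 1
    ["cell"] * 4,   # organ 2
]

def count_at_level(heap, level):
    """Count the parts of `heap` found at the given nesting depth."""
    if level == 1:
        return len(heap)
    return sum(count_at_level(part, level - 1) for part in heap)

assert count_at_level(body, 1) == 2   # counting organs
assert count_at_level(body, 2) == 7   # counting cells: same object, new count
```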

I should mention that this is not my real suggestion for a foundation of mathematics, but when it comes to the challenge of interpreting the theory of natural numbers without adopting any ghostly quantities, heaps work just fine.

(edit): I should mention that heaps, requiring only that you accept wholes with parts and a level test on any given part, are much more ontologically inconspicuous than pure sets. Where exactly is the null set? Where is any pure set? I've never seen any of them. Of course, I see heaps all over the place.

Comment author: potato 11 September 2012 03:57:41AM *  0 points [-]

"'You have brain damage' is also a theory with perfect explanatory adequacy. If one were to explain the Capgras delusion to Capgras patients, it would provide just as good an explanation for their odd reactions as the imposter hypothesis. Although the patient might not be able to appreciate its decreased complexity, they should at least remain indifferent between the two hypotheses. I've never read of any formal study of this, but given that someone must have tried explaining the Capgras delusion to Capgras patients I'm going to assume it doesn't work. Why not?"

IMHO all human psychologies have a hard time updating toward believing they're poorly built. We are by nature arrogant. Do not forget that common folk often "choose" what to believe after they think about how it feels to believe it.

(Brilliant article btw)

(edit): "Likewise, how come delusions are so specific? It's impossible to convince someone who thinks he is Napoleon that he's really just a random non-famous mental patient, but it's also impossible to convince him he's Alexander the Great (at least I think so; I don't know if it's ever been tried). But him being Alexander the Great is also consistent with his observed data and his deranged inference abilities. Why decide it's the CIA who's after you, and not the KGB or Bavarian Illuminati?"

IMHO there are plenty of cognitive biases that can explain that sort of behavior in healthy patients; confirmation bias and the affect heuristic are the first to come to mind.
