Less Wrong is a community blog devoted to refining the art of human rationality.

Comment author: Kutta 24 April 2015 11:23:09AM *  4 points [-]

I'd like to ask a not-too-closely related question, if you don't mind.

A Curry-Howard analogue of Gödel's Second Incompleteness Theorem is the statement "no total language can embed its own interpreter"; see the classic post by Conor McBride. But Conor also says (and it's quite obviously true) that total languages can still embed their coinductive interpreters, for example one that returns in the partiality monad.

So, my question is: what is the logical interpretation of a coinductive self-interpreter? I feel I'm not well-versed enough in mathematical logic for this. I imagine it would have a type like "forall (A : 'Type) -> 'Term A -> Partiality (Interp A)", where 'Type and 'Term denote types and terms in the object language and "Interp" returns a type in the meta-language. But what does "Partiality" mean logically, and is it in any way related to the Löbstacle?
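For concreteness, here is a minimal sketch of the partiality ("Delay") monad in Haskell syntax. In a total language such as Agda this would be a coinductive type; Haskell's laziness gives a similar effect. The comment's 'Type, 'Term, and Interp are left abstract, since they depend on the object language:

```haskell
-- The partiality (Delay) monad: a value now, or one more step of delay.
data Delay a = Now a | Later (Delay a)

instance Functor Delay where
  fmap f (Now a)   = Now (f a)
  fmap f (Later d) = Later (fmap f d)

instance Applicative Delay where
  pure = Now
  Now f   <*> d = fmap f d
  Later f <*> d = Later (f <*> d)

instance Monad Delay where
  Now a   >>= k = k a
  Later d >>= k = Later (d >>= k)

-- An observer can only run a Delay computation for finitely many steps,
-- so non-termination is never decided, only ever postponed.
runFor :: Int -> Delay a -> Maybe a
runFor _ (Now a)   = Just a
runFor 0 (Later _) = Nothing
runFor n (Later d) = runFor (n - 1) d
```

A coinductive self-interpreter then produces a `Delay (Interp a)` rather than an `Interp a` outright, which is why it does not contradict the totality of the host language.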

Comment author: Quill_McGee 24 March 2015 05:11:04PM 5 points [-]

Wasn't Löb's theorem ∀ A (Provable(Provable(A) → A) → Provable(A))? So you get Provable(⊥) directly, rather than passing through ⊥ first. This is good, as, of course, ⊥ is always false, even if it is provable.

Comment author: Kutta 24 March 2015 05:14:48PM *  2 points [-]

You're right, I mixed it up. Edited the comment.

Comment author: efim 24 March 2015 10:32:20AM *  5 points [-]

I am stuck at part 2.2: "So in particular, if we could prove that mathematics would never prove a contradiction, then in fact mathematics would prove that contradiction"

I've spent 15 minutes on this, but still cannot see the relation to Löb's theorem, even though it seems like it should be obvious to an attentive reader.

Could anyone please explain or link me to an explanation?

Comment author: Kutta 24 March 2015 05:02:07PM *  7 points [-]

Suppose "mathematics would never prove a contradiction". We can write it out as ¬Provable(⊥). This is logically equivalent to Provable(⊥) → ⊥, and it also implies Provable(Provable(⊥) → ⊥) by the rules of provability. But Löb's theorem expresses ∀ A (Provable(Provable(A) → A) → Provable(A)), which we can instantiate to Provable(Provable(⊥) → ⊥) → Provable(⊥), and now we can apply modus ponens and our assumption to get Provable(⊥).
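The steps above can be written out as a derivation (Prov abbreviates Provable; the first line is the assumption that the system proves its own consistency):

```latex
\begin{aligned}
&\mathrm{Prov}(\bot) \to \bot
  &&\text{(assumed provable consistency, unfolding } \neg\mathrm{Prov}(\bot)\text{)}\\
&\mathrm{Prov}\bigl(\mathrm{Prov}(\bot) \to \bot\bigr)
  &&\text{(necessitation: the system proves its own consistency)}\\
&\mathrm{Prov}\bigl(\mathrm{Prov}(\bot) \to \bot\bigr) \to \mathrm{Prov}(\bot)
  &&\text{(L\"ob's theorem instantiated at } A := \bot\text{)}\\
&\mathrm{Prov}(\bot)
  &&\text{(modus ponens on the previous two lines)}
\end{aligned}
```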

Comment author: Wei_Dai 09 February 2014 09:58:18PM 2 points [-]

If I assume Solomonoff induction, then it is in a way reasonable to care less about people running on convoluted physics, because then I would have to assign less "measure" to them. But you rejected this kind of reasoning in your post, and I can't exactly come to grips with the "physics racism" that seems to logically follow from that.

Suppose I wanted to be fair to all, i.e., avoid "physics racism" and care about everyone equally, how would I go about that? It seems that I can only care about dynamical processes, since I can't influence static objects, and to a first approximation dynamical processes are equivalent to computations (i.e., ignoring uncomputable things for now). But how do I care about all computations equally, if there's an infinite number of them? The most obvious answer is to use the uniform distribution: take an appropriate universal Turing machine, and divide my "care" in half between programs (input tapes) that start with 0 and those that start with 1, then divide my "care" in half again based on the second bit, and so on. With some further filling in the details (how does one translate this idea into a formal utility function?), it seems plausible it could "add up to normality" (i.e., be roughly equivalent to the continuous version of Solomonoff Induction).
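The "divide my care in half per bit" scheme can be sketched in a few lines; `careWeight` is a hypothetical name, and the real construction would be over infinite tapes rather than finite prefixes:

```haskell
-- Care assigned to the set of all programs (input tapes) extending a
-- given bit-prefix: halve the total once per bit, i.e. 2^(-length).
careWeight :: [Bool] -> Double
careWeight prefix = 0.5 ^ length prefix

-- By construction the split is exact at every node of the binary tree:
-- careWeight p == careWeight (p ++ [False]) + careWeight (p ++ [True])
```

This is the same 2^(-length) weighting that the continuous version of Solomonoff induction assigns to program prefixes, which is why it seems plausible the scheme could "add up to normality".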

Comment author: Kutta 10 February 2014 12:46:24AM 0 points [-]

To clarify my point, I meant that Solomonoff induction can justify caring less about some agents (and I'm largely aware of the scheme you described), but simultaneously rejecting Solomonoff and caring less about agents running on more complex physics is not justified.

Comment author: Kutta 09 February 2014 04:44:21PM *  0 points [-]

The unsimulated life is not worth living.

-- Socrates.

Comment author: Coscott 08 February 2014 06:27:24PM 1 point [-]

Evolution has equipped the minds in worlds where thinking in terms of simplicity works to think according to simplicity. (As well as in worlds where thinking in terms of simplicity works up to a certain point in time, after which the rules become complex.)

In some sense, even bacteria are equipped with that, they work under the assumption that chemistry does not change over time.

I do not see the point of your question yet.

Comment author: Kutta 09 February 2014 04:34:50PM *  3 points [-]

You conflate two very different things here, as I see.

First, there are the preferences for simpler physical laws or simpler mathematical constructions. I don't doubt that they are real amongst humans; after all, there is an evolutionary advantage to using simpler models ceteris paribus, since they are easier to memorize and easier to reason about. Such evolved preferences probably contribute to a mathematician's sense of elegance.

Second, there are preferences about the concrete evolutionarily relevant environment and the relevant agents in it. Naturally, this includes our fellow humans. Note here that we might also care about animals, uploads, AIs or aliens because of our evolved preferences and intuitions regarding humans. Of course, we don't care about aliens because of a direct evolutionary reason. Rather, we simply execute the adaptations that underlie our intuitions. For instance, we might disprefer animal suffering because it is similar enough to human suffering.

This second level has very little to do with the complexity of the underlying physics. Monkeys have no conception of cellular automata; you could run them on cellular automata of vastly differing complexity and they wouldn't care. They care about the kind of simplicity that is relevant to their day-to-day environment. Humans also care about this kind of simplicity; it's just that they can generalize this preference to more abstract domains.

(On a somewhat unrelated note, you mentioned bacteria. I think your point is a red herring; you can build agents with assumptions about the underlying physics baked in, but that doesn't mean that the agent itself necessarily has any conception of the underlying physics, or even that the agent is consequentialist in any sense.)

So, what I'm trying to get at: you might prefer simple physics and you might care about people, but it makes little sense to care less about people because they run on uglier physics. People are not physics; they are really high-level constructs, and a vast range of different universes could contain (more or less identical) instances of people whom I care about, or even simulations of those people.

If I assume Solomonoff induction, then it is in a way reasonable to care less about people running on convoluted physics, because then I would have to assign less "measure" to them. But you rejected this kind of reasoning in your post, and I can't exactly come to grips with the "physics racism" that seems to logically follow from that.

In response to Why CFAR?
Comment author: Kutta 29 December 2013 11:17:49AM 31 points [-]

Donated $500.

Comment author: Kutta 22 November 2013 10:26:44AM 36 points [-]

Survey taken.

Comment author: ArisKatsaris 04 March 2013 10:36:14PM 0 points [-]

Television and Movies Thread

Comment author: Kutta 05 March 2013 09:12:48AM 1 point [-]

Very recently decent fansubs have surfaced for Mamoru Hosoda's new movie The Wolf Children Ame and Yuki. I wholeheartedly recommend it.

Comment author: lavalamp 09 December 2012 04:38:09PM 0 points [-]

I vaguely recall doing the math on problem #2 and figuring "$20 in 60 days = $0.33 per day = not worth the time it takes to think about it." It looks like most people did some different math; what does that math look like?

On the anchoring question, I recommend putting a "click here to automatically generate a random number" button instead of a link to an external site. I'm pretty sure I read ahead and realized what the number would be used for, and I bet many others did, also.

Comment author: Kutta 09 December 2012 05:51:37PM 0 points [-]

I figured that a ~36% discount over two months would be way too high.
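Back-solving from the two comments (the $20 difference and the 36% figure imply an immediate amount of roughly $55, so presumably the question offered about $55 now versus $75 in 60 days; these amounts are reconstructed, not quoted from the survey), the arithmetic looks like this:

```haskell
-- Hypothetical reconstruction: ~$55 now vs. $75 in 60 days.
twoMonthDiscount :: Double
twoMonthDiscount = 20 / 55          -- ~0.36, the implied two-month discount

-- Compounding six two-month periods shows why that rate is "way too high":
impliedAnnualRate :: Double
impliedAnnualRate = (1 + twoMonthDiscount) ** 6 - 1   -- roughly 5.4x per year
```

An implied annual discount rate of several hundred percent is far above any plausible return on the money, which is the sense in which the ~36% figure is way too high.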
