Comment author: Johnicholas 07 November 2013 12:38:28PM -2 points [-]

If humans are bad at mental arithmetic, but good at, say, not dying - doesn't that suggest that, as a practical matter, humans should try to rephrase mathematical questions into questions about danger?

E.g. imagine stepping into a field crisscrossed by dangerous laser beams in a prime-numbers pattern, in order to get something valuable. I think someone who had a realistic fear of the laser beams, and a realistic understanding of the benefit of that valuable thing, would slow down and/or stop stepping into suspicious spots.

Quantifying is ONE technique, and it's been used very effectively in recent centuries - but those successes were inside a laboratory / factory / automation structure, not in an individual-rationality context.

Comment author: Johnicholas 24 July 2013 03:13:05PM 2 points [-]

Steve Keen's Debunking Economics blames debt, not automation.

Essentially, many people currently feel that they are deep in debt, and work to get out of debt. Keen has an ODE model of the macroeconomy that exhibits various behaviors, including debt-driven crashes.

Felix Martin's Money goes further and argues that strong anti-inflation stances by central bank regulators strengthen the hold of creditors over debtors, which has made these recent crashes bigger and more painful.

Comment author: [deleted] 18 May 2013 10:46:50AM 0 points [-]

I am very late to the discussion. I have not read Kelly's papers in detail, so pardon me if my question betrays a fundamental misunderstanding of what you wrote: how can "(Kelly's version of) Occam's razor says the first [world] is simpler" give us "the intuitively correct answer" if an infinite number of particles will be emitted in the first world, even though Kelly has already specified that the device will only emit a finite number of particles?

Comment author: Johnicholas 19 May 2013 05:38:02PM 0 points [-]

The statements, though contradictory, refer to two different thought experiments.

Comment author: [deleted] 18 May 2013 10:46:50AM 0 points [-]

I am very late to the discussion. I have not read Kelly's papers in detail, so pardon me if my question betrays a fundamental misunderstanding of what you wrote: how can "(Kelly's version of) Occam's razor says the first [world] is simpler" give us "the intuitively correct answer" if an infinite number of particles will be emitted in the first world, even though Kelly has already specified that the device will only emit a finite number of particles?

Comment author: Johnicholas 18 May 2013 01:55:13PM -1 points [-]

The two comments, though contradictory, refer to two different thought experiments.

Comment author: Johnicholas 17 May 2013 11:26:33PM -1 points [-]

Is it reasonable to take this as evidence that we shouldn't use expected utility computations (or shouldn't use only expected utility computations) to guide our decisions?

If I understand the context, the reason we believed an entity, either a human or an AI, ought to use expected utility as a practical decision-making strategy is that it would yield good results (a simple, general architecture for decision making). If there are fully general attacks (muggings) on all entities that use expected utility as a practical decision-making strategy, then perhaps we should revise the original hypothesis.
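
A minimal sketch of the kind of calculation at issue, with made-up numbers (not from the thread): a tiny claimed probability attached to an astronomically large payoff swamps everything else in a naive expected-utility comparison.

    -- Expected utility of a list of (probability, utility) outcomes.
    expectedUtility :: [(Double, Double)] -> Double
    expectedUtility = sum . map (uncurry (*))

    payTheMugger, refuse :: Double
    payTheMugger = expectedUtility [(1e-30, 1e40), (1 - 1e-30, -5)]  -- roughly 1e10
    refuse       = expectedUtility [(1, 0)]                          -- 0

    -- payTheMugger > refuse, so the naive calculation says to pay: the sort of
    -- fully general attack on expected-utility users described above.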

Utility as a theoretical construct is charming, but it does have to pay its way, just like anything else.

P.S. I think the reasoning from "bounded rationality exists" to "non-Bayesian mind changes exist" is good stuff. Perhaps we could call this "on seeing this, I become willing to revise my model" phenomenon something like "surprise", and distinguish it from merely receiving new information.

Comment author: abramdemski 12 January 2013 07:12:40AM -1 points [-]

You cannot produce an output of type B in a generic way, even if you are given access to an element of type A.

...except in Haskell, where we can make tools that, given a desired type, generate an instance of it (such as Magic Haskeller). I still feel like it's accurate to say that general-purpose languages correspond to trivialism.

Thanks, though! I didn't think of the generic version. So, in general, inhabited types all act like "true" in the isomorphism. (We have A->B whenever B is a specific inhabited type.)
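
A one-line illustration of the parenthetical (the name constSix is mine): if B is a specific inhabited type, here Int with witness 6, then a -> B is inhabited for every a, mirroring "anything implies a true proposition".

    constSix :: a -> Int
    constSix _ = 6   -- ignores its argument; works for any argument type a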

What I'm trying to say is that the types that type theorists use look a lot more like propositions than the types that mundane programmers use.

I'm fully aware of that, and I enjoy the correspondence. What bothers me is that people seem to go from a specific isomorphism (which has been extended greatly, granted) to a somewhat ill-defined general principle.

My frustration comes largely from hearing mistakes. One extreme example is thinking that (untyped!) lambda calculus is the universal logic (capturing first-order logic and any higher-order logic we might invent; after all, it's Turing complete). Lesser cases are usually just ill-defined invocations (i.e., guessing that the isomorphism might be relevant in cases where it's not obvious how it would be).

Comment author: Johnicholas 12 January 2013 04:28:22PM *  0 points [-]

Magic Haskeller and Augustsson's Djinn are provers (or, to say it another way, are comprehensible as provers, or isomorphic to provers). They attempt to prove the proposition, and if they succeed they output the term corresponding (via the Curry-Howard Isomorphism) to the proof.

I believe they cannot output a term t :: a->b because there is no such term, because 'anything implies anything else' is false.
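
A hedged illustration of "no such term" in Haskell terms: the closest one can get to a term of type a -> b is a definition that never actually delivers a b, which under the Curry-Howard reading corresponds to no valid proof at all.

    bogus :: a -> b
    bogus x = bogus x   -- typechecks, but loops forever instead of producing a b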

Comment author: abramdemski 11 January 2013 11:12:15PM 0 points [-]

I don't know what you mean. Suppose you want a function A -> Int, where A is some type. \x:A. 6, the function which takes x (of type A) and outputs 6, seems to do fine. To put it in C-like form: int six(A x) { return 6; }.

If programs are proofs, then general programming languages correspond to trivialism.

Am I missing something?

Comment author: Johnicholas 12 January 2013 12:19:48AM *  0 points [-]

The type constructors that you're thinking of are Arrow and Int. Forall is another type constructor, for constructing generic polymorphic types. Some types such as "Forall A, Forall B, A -> B" are uninhabited. You cannot produce an output of type B in a generic way, even if you are given access to an element of type A.
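
A small sketch of the contrast (plain Haskell, name mine): "Forall A, A -> A" is inhabited, while "Forall A, Forall B, A -> B" is not.

    -- Inhabited: the identity function proves "A implies A" for every A.
    identity :: a -> a
    identity x = x

    -- Uninhabited: no total definition can fill in a generic a -> b, because
    -- given only an arbitrary a there is no way to manufacture an arbitrary b.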

The type corresponding to a proposition like "all computable functions from the reals to the reals are continuous" looks like a function type consuming some representation of "a computable function" and producing some representation of "that function is continuous". To represent that, you probably need dependent types - this would be a type constructor that takes an element, not a type, as a parameter. Because not all functions are continuous, the codomain representing "that function is continuous" isn't in general inhabited. So building an element of that codomain is not necessarily trivial - and the process of doing so amounts to proving the original proposition.
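
Haskell can only approximate dependent types, but a toy stand-in for the idea (evenness instead of continuity; all names mine, assuming GHC's DataKinds, GADTs and KindSignatures extensions) looks like this:

    {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}
    import Data.Kind (Type)

    data Nat = Z | S Nat

    -- Even n is a type indexed by the (promoted) value n; it is inhabited
    -- exactly when n is even, so a term of this type is a proof of "n is even".
    data Even :: Nat -> Type where
      EZ  :: Even 'Z
      ESS :: Even n -> Even ('S ('S n))

    twoIsEven :: Even ('S ('S 'Z))
    twoIsEven = ESS EZ
    -- There is no total term of type Even ('S 'Z), because 1 is not even.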

What I'm trying to say is that the types that type theorists use look a lot more like propositions than the types that mundane programmers use.

Comment author: Leoeer 08 January 2013 06:03:14PM 0 points [-]

Isn't the argument in one false? If one applies Bayes' theorem with an initial probability of 50% and a new likelihood ratio of a billion to one, don't you get odds of 500,000,000 to one?

Comment author: Johnicholas 08 January 2013 08:45:11PM -1 points [-]

I think you may be sincerely confused. Would you please reword your question?

If your question is whether someone (either me or the OP) has committed a multiplication error - yes, it's entirely possible, but multiplication is not the point. The point is anthropic reasoning and whether "I am a Boltzmann brain" is a simple hypothesis.
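
For reference, the odds form of Bayes' theorem with those numbers (this is just the standard identity, not a claim about what the original post intended):

\[
\text{posterior odds} = \text{prior odds} \times \text{likelihood ratio}
= \frac{0.5}{0.5} \times \frac{10^9}{1} = 10^9 : 1 .
\]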

Comment author: cousin_it 07 January 2013 02:29:28PM *  1 point [-]

Ultrafinitism doesn't seem relevant here. The "type theory" you're thinking of seems to be just the arithmetical hierarchy. A "concrete prediction" is a statement with only bounded quantifiers, "this Turing machine eventually halts" is a statement with one existential quantifier, "this Turing machine halts for every input" is a statement with one universal and one existential quantifier, and so on. Or am I missing something?
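
Spelled out in the standard notation (the predicate name is mine): writing \(\mathrm{HaltsBy}(M, x, t)\) for the bounded, directly checkable statement "M, run on input \(x\), halts within \(t\) steps",

\[
\text{“$M$ eventually halts on input $x_0$”} = \exists t.\ \mathrm{HaltsBy}(M, x_0, t) \quad (\Sigma_1),
\qquad
\text{“$M$ halts on every input”} = \forall x\, \exists t.\ \mathrm{HaltsBy}(M, x, t) \quad (\Pi_2).
\]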

Comment author: Johnicholas 07 January 2013 04:02:28PM -1 points [-]

The arithmetical hierarchy presumes a background of predicate logic; I was not presuming that. Yes, the type theory that I was gesturing towards would have some similarity to the arithmetical hierarchy.

I was trying to suggest that the answer to "what is a prediction?" might look like a type theory of different variants of prediction. Perhaps it would be a linear hierarchy like the arithmetical hierarchy; perhaps something more complicated. There could be a single starting type, "concrete prediction", and a type constructor that, given a source type and a destination type, gives the type of statements defining functions that take arguments of the source type and return results of the destination type.
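
A hypothetical sketch of that hierarchy (all names mine), with a base type of concrete predictions plus an arrow constructor for statements that act as functions between kinds of prediction:

    data PredKind
      = Concrete                    -- e.g. "the bus will come at 3 o'clock"
      | Arrow PredKind PredKind     -- source kind of statement to destination kind
      deriving (Eq, Show)

    -- A schedule-style statement would sit at Arrow Concrete Concrete:
    -- feed it a concrete time, get back a concrete prediction.
    scheduleKind :: PredKind
    scheduleKind = Arrow Concrete Concrete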

The intuitive bridge for me from ultrafinitism to the question "what counts as a prediction?" is that ultrafinitists do happily work with entities like 2^1000 considered as a structure like ("^" "2" "1000"), even if they deny that those structures refer to things that are definitely numbers (of course, they can agree that they are definitely "numbers", given an appropriately finitist definition of "number"). Maybe extreme teetotalers regarding what counts as a prediction would happily work with things such as computable functions returning predictions, even if they don't consider them predictions.

Comment author: cousin_it 05 January 2013 11:01:54AM *  11 points [-]

Then the statement "this Turing machine halts for every input" doesn't count as a prediction either, because you can never have observed it for every input, even if the machine is just "halt". And the statement "this Turing machine eventually halts" is borderline, because its negation doesn't count as a prediction. What does this give us?

Comment author: Johnicholas 07 January 2013 02:01:00PM *  0 points [-]

Perhaps there is a type theory for predictions, with concrete predictions like "The bus will come at 3 o'clock", and functions that output concrete predictions, like "Every Monday, Wednesday and Friday, the bus will come at 3 o'clock" (consider the statement as a function taking a time and returning a concrete yes-or-no prediction).
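
A toy version of that reading (names mine): the schedule statement as a function from a day to a concrete yes/no prediction about the 3 o'clock bus.

    data Day = Mon | Tue | Wed | Thu | Fri | Sat | Sun deriving (Eq, Show)

    busAtThree :: Day -> Bool
    busAtThree d = d `elem` [Mon, Wed, Fri]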

An ultrafinitist would probably not argue with the existence of such a function, even though, to someone other than an ultrafinitist, the function looks like it is quantifying over all time. From the ultrafinitist's point of view, you're going to apply it to some concrete time, and at that point it's going to output some concrete prediction.
