Comment author: potato 15 June 2011 09:42:42PM *  2 points [-]

What I'm really asking is: if some statement turns out to be undecidable for all of our Tarskian truth-translation maps to models, does that make the conjecture meaningless, or is "undecidable" somehow distinct from "unverifiable"? What is the difference between believing "that conjecture is unverifiable" and believing "that conjecture is undecidable"? Are the expectations/restrictions on experience that those two beliefs offer identical? If so, does that mean that the difference between those two beliefs is a syntactic issue?

See Making Beliefs Pay Rent:

http://lesswrong.com/lw/i3/making_beliefs_pay_rent_in_anticipated_experiences/

Comment author: AlephNeil 15 June 2011 11:17:48PM 1 point [-]

What I'm really asking is, if some statement turns out to be undecidable for all of our models,

Nitpick: you don't mean "models" here, you mean "theories".

does that make that conjecture meaningless

Why should it?

or is undecidable somehow distinct from unverifiable.

Oh... you're implicitly assuming a 1920s-style verificationism whereby "meaningfulness" = "verifiability". That's a very bad idea, because most/all statements turn out to be 'unverifiable' - certainly all laws of physics.

As for mathematics, the word 'verifiable' applied to a mathematical statement simply means 'provable' - either that or you're using the word in a way guaranteed to cause confusion.

Or perhaps by "statement S is verifiable" what you really mean is "there exists an observation statement T such that P(T|S) is not equal to P(T|¬S)"?

Comment author: TimFreeman 12 June 2011 03:34:07AM *  0 points [-]

See Alan Hajek's classic article "Waging War on Pascal's Wager."

That article is paywalled. It was published in 2003. Hajek's entry about Pascal's Wager in the Stanford Encyclopedia of Philosophy is free and was substantively revised (hopefully by Hajek) in 2008, so there's a good chance the latter contains all the good ideas in the former and is easier to get to. The latter does mention the idea that utilities should be bounded, and many other things potentially wrong with Pascal's wager. There's no neat list of four items that looks like an obvious match to the title of the paywalled article.

Comment author: AlephNeil 12 June 2011 03:58:59AM 0 points [-]

You can find it here though.

Comment author: CarlShulman 07 June 2011 05:58:50PM 0 points [-]

However, if the subject takes them into account, it's very likely that the subject will lose, in the sense that the subject's estimated utility from all of these infinite expected utility bets is going to swamp utility from ordinary things.

The point of mixed strategies is that without distinctions between lotteries with infinite expected utility all actions have the same infinite (or undefined) expected utility, so on that framework there is no reason to prefer one action over another. Hyperreals or some other modification to the standard framework (see discussion of "infinity shades" in Bostrom) are necessary in order to say that a 50% chance of infinite utility is better than a 1/3^^^3 chance of infinite utility. Read the Hajek paper for the full details.

It's probably worth looking at anyway, but if you can say specifically how it's relevant and cite a specific page it would help.

"Empirical stabilizing assumptions" (naturalistic), page 34.

Comment author: AlephNeil 08 June 2011 04:23:22PM 0 points [-]

Hyperreals or some other modification to the standard framework (see discussion of "infinity shades" in Bostrom) are necessary in order to say that a 50% chance of infinite utility is better than a 1/3^^^3 chance of infinite utility.

No it isn't, unless like Hajek you think there's something 'not blindingly obvious' about the 'modification to the standard framework' that consists of stipulating that probability p of infinite utility is better than probability q of infinite utility whenever p > q.

This sort of 'move' doesn't need a name. (What does he call it? "Vector valued utilities" or something like that?) It doesn't need to have a paper written about it. It certainly shouldn't be pretended that we're somehow 'improving on' or 'fixing the flaws in' Pascal's original argument by explicitly writing this move down.

Comment author: CarlShulman 07 June 2011 04:21:48PM *  11 points [-]

This doesn't work with an unbounded utility function, for standard reasons:

1) The mixed strategy. If there is at least one lottery with infinite expected utility, then any combination of taking that lottery and other actions also has infinite expected utility. For example, in the traditional Pascal's Wager involving taking steps to believe in God, you could instead go around committing Christian sins: since there would be nonzero probability that this would lead to your 'wagering for God' anyway, it would also have infinite expected utility. See Alan Hajek's classic article "Waging War on Pascal's Wager."

Given the mixed strategy, taking and not taking your bet both have infinite expected utility, even if there are no other infinite expected utility lotteries.

2) To get a decision theory that actually would take infinite expected utility lotteries with high probability we would need to use something like the hyperreals, which would allow for differences in the expected utility of different probabilities of infinite payoff. But once we do that, the fact that your offer is so implausible penalizes it. We can instead keep our money and look for better opportunities, e.g. by acquiring info, developing our technology, etc. Conditional on there being any sources of infinite utility, it is far more likely that they will be better obtained by other routes than by succumbing to this trick. If nothing else, I could hold the money in case I encounter a more plausible Mugger (and your version is not the most plausible I have seen). Now if you demonstrated the ability to write your name on the Moon in asteroid craters, turn the Sun into cheese, etc, etc, taking your bet might win for an agent with an unbounded utility function.
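The comparison rule being gestured at can be sketched in a few lines (this is a toy illustration, not Hajek's or Bostrom's actual formalism): score each lottery by the pair (probability of the infinite payoff, expected utility of the finite part), and compare lexicographically, so that any edge in the first coordinate dominates no matter how large the second is.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Lottery:
    p_infinite: float  # probability mass on the infinite-utility outcome
    finite_eu: float   # expected utility contributed by the finite outcomes

def better(a: Lottery, b: Lottery) -> bool:
    # Lexicographic comparison: any edge in the probability of the infinite
    # payoff dominates; finite expected utility only breaks ties.
    if a.p_infinite != b.p_infinite:
        return a.p_infinite > b.p_infinite
    return a.finite_eu > b.finite_eu

# A 50% shot at infinity beats a tiny shot at infinity, regardless of any
# finite sweetener attached to the latter.
print(better(Lottery(0.5, 0.0), Lottery(1 / 3**3, 10**9)))  # True
```

Under this rule an implausible mugging loses to any action with a higher probability of an infinite payoff, which is exactly the "keep the money and look for better opportunities" point above.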

Also see Nick Bostrom's infinitarian ethics paper.

As it happens I agree that human behavior and intuitions (as I weight them) in these situations are usually better summed up with a bounded utility function, which may include terms like the probability of attaining infinite welfare, or attaining a large portion of hyperreal expected welfare that one could, etc, than an unbounded utility function. I also agree that St Petersburg lotteries and the like do indicate our bounded preferences. The problem here is technical, in the construction of your example.

Comment author: AlephNeil 08 June 2011 04:13:05PM 0 points [-]

Alan Hajek's article is one of the stupidest things I've ever read, and a depressing indictment on the current state of academic philosophy. Bunch of pointless mathematical gimmicks which he only thinks are impressive because he himself barely understands them.

Comment author: Tyrrell_McAllister 05 June 2011 11:20:02PM *  0 points [-]

I'm not so sure. The familiar saying isn't "Nature, red in teeth and claws." It seems like there is a poetic convention of "mass-noun-ifying" nouns (if that's the right way to describe what's going on grammatically).

ETA: This remark was based on an error on my part.

Comment author: AlephNeil 06 June 2011 12:53:53AM *  0 points [-]

That may be right, but I don't see how it conflicts with my (throwaway) remark.

"Quale" works better than "qualia" because (i) it sounds more like the word "claw" and (ii) it's singular whereas 'qualia' is plural.

Comment author: [deleted] 03 June 2011 09:46:53PM *  1 point [-]

That's fine, but it's not at all the same thing.

In response to comment by [deleted] on About addition and truth
Comment author: AlephNeil 03 June 2011 09:59:17PM 0 points [-]

Why is the difference relevant? I honestly can't imagine how someone could be in the position of 'feeling as though 2+2=4 is either necessarily true or necessarily false' but not 'feeling as though it's necessarily true'.

(FWIW I didn't downvote you.)

Comment author: [deleted] 03 June 2011 09:28:59PM 1 point [-]

When someone says "2+2=4", it feels as though they are asserting a necessary truth, something that cannot possibly be otherwise.

This is an illusion. If I say "37460225182244100253734521345623457115604427833 + 52328763514530238412154321543225430143254061105 = 8978898869677433866588884288884888725858488938" it should not immediately strike you as though I'm asserting a necessary truth that cannot possibly be otherwise.
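(For what it's worth, arbitrary-precision integer arithmetic settles the question instantly even though inspection doesn't - the variable names below are just for illustration:)

```python
a = 37460225182244100253734521345623457115604427833
b = 52328763514530238412154321543225430143254061105
claimed = 8978898869677433866588884288884888725858488938

# Python ints are arbitrary-precision, so this check is exact.
print(a + b == claimed)  # False: the claimed sum is even a digit short
```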

My question is, ought the thought experiment of a universe whose galaxies and stars are counted by arithmetic mod 3^^^^3 cause us to abandon this intuition?

Counting is an algorithm, or really a sketch of an algorithm. In order to make this a coherent question, i.e. to imagine running an algorithm on that many galaxies and stars and coming up with a certain answer, and then thinking about the consequences, we would need at least

  1. An airtight definition of "galaxies and stars"
  2. A ledger big enough to fit 3^^^3 tickmarks
  3. A method of writing down tick marks when we see stars that is reliable enough that, if we did it twice and got two different answers, it would not be overwhelmingly likely that we had made a mistake someplace.

Each of these is preposterous!
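To put a rough number on item 3: if each tick mark independently goes wrong with some per-item error rate eps (a made-up parameter for the sketch), the chance that a tally of n items is error-free is (1 - eps)^n, roughly exp(-eps * n), which collapses to zero long before n gets anywhere near 3^^^3.

```python
import math

def p_no_error(n_items: float, per_item_error: float) -> float:
    # Probability that an n_items-long tally contains zero mistakes,
    # assuming independent per-item errors: (1 - eps)^n ~= exp(-eps * n).
    return math.exp(-per_item_error * n_items)

# Even at a fantastically reliable eps of 1e-15 per tick mark, counting a
# mere 10^20 objects is error-free with probability exp(-100000).
print(p_no_error(1e20, 1e-15))
```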

In response to comment by [deleted] on About addition and truth
Comment author: AlephNeil 03 June 2011 09:44:31PM 2 points [-]

If I say "37460225182244100253734521345623457115604427833 + 52328763514530238412154321543225430143254061105 = 8978898869677433866588884288884888725858488938" it should not immediately strike you as though I'm asserting a necessary truth that cannot possibly be otherwise.

It immediately strikes me that what you're asserting is either necessarily true or necessarily false, and whichever it is it could not be otherwise.

Comment author: AlephNeil 01 June 2011 08:05:45PM 2 points [-]

Nitpick 1:

It seems likely to be the optimal way to build an AI that has to communicate with other AIs.

This seems a very contentious claim. For instance, to store the relative heights of people, wouldn't it make more sense to have the virtual equivalent of a ruler with markings on it rather than the virtual equivalent of a table of sentences of the form "X is taller than Y"?
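A toy sketch of the contrast (names and numbers invented for illustration): with the numeric "ruler", a comparison is one lookup; with a store of "taller-than" sentences, the same query may require chasing a transitive closure, since most comparisons are only derivable, not stored.

```python
# Numeric "ruler" representation: one number per person.
heights = {"alice": 170, "bob": 180, "carol": 165}

def taller_numeric(x: str, y: str) -> bool:
    return heights[x] > heights[y]

# Sentence-style representation: only explicit "X is taller than Y" facts.
taller_facts = {("bob", "alice"), ("alice", "carol")}

def taller_sentences(x: str, y: str) -> bool:
    # bob > carol is not stored; we must search the transitive closure.
    frontier, seen = {x}, set()
    while frontier:
        cur = frontier.pop()
        seen.add(cur)
        for a, b in taller_facts:
            if a == cur and b == y:
                return True
            if a == cur and b not in seen:
                frontier.add(b)
    return False

print(taller_numeric("bob", "carol"))    # True, one lookup
print(taller_sentences("bob", "carol"))  # True, but only after a search
```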

I think the best approach here is just to explicitly declare it as an assumption: 'for argument's sake' your robot uses this method. End of story.

Nitpick 2:

Because of General Relativity, when applied to the real world, it is, in fact, wrong.

This is false. General Relativity doesn't contradict the fact that space is "locally Euclidean".

Comment author: Larks 01 June 2011 06:29:29PM 0 points [-]

Good article.

from the Achilles and the Tortoise dialog.

Which? There are many in GEB.

Comment author: AlephNeil 01 June 2011 07:43:36PM 3 points [-]

He's talking about the Lewis Carroll dialog that inspired the ones in GEB. "What the tortoise said to Achilles."

The point of the dialog is that there's something irreducibly 'dynamic' about the process of logical inference. Believing "A" and "A implies B" does not compel you to believe "B". Even if you also believe "A and (A implies B) together imply B". A static 'picture' of an inference is not itself an inference.
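A crude way to see the static-versus-dynamic point in code (the string matching here is a deliberately naive stand-in for real inference): a set of believed sentences, no matter how many conditionals you pile into it, never contains "B" until something actually executes the modus ponens step.

```python
# A "static" belief store: sentences just sit there. Adding the tortoise's
# extra conditional changes nothing - no step is ever taken.
beliefs = {"A", "A -> B", "(A and (A -> B)) -> B"}

def modus_ponens_step(beliefs: set) -> set:
    # The dynamic act the tortoise keeps deferring: APPLYING the rule,
    # as opposed to merely believing yet another conditional about it.
    derived = set()
    for sentence in beliefs:
        if " -> " in sentence:
            antecedent, consequent = sentence.split(" -> ", 1)
            if antecedent in beliefs:
                derived.add(consequent)
    return beliefs | derived

print("B" in beliefs)                     # False: the picture alone is inert
print("B" in modus_ponens_step(beliefs))  # True: the step had to be taken
```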

Comment author: PhilGoetz 30 May 2011 04:40:32PM *  0 points [-]

You're right! But I may still be right that the set of functions in R is enumerable. (Not that it matters to my post.)

There is a Turing function that can take a Goedel number, and produce the corresponding Goedel function. If you can define a programming language that is Turing-complete, and for which all possible strings are valid programs, then you just turn this function loose on the integers, and it enumerates the set of all possible Turing functions. Can this be done?
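The enumeration half is straightforward (a sketch, using Brainfuck's 8-symbol alphabet purely as an example; making *every* string a valid program additionally requires stipulating something like "unmatched brackets are no-ops" in the language's semantics): bijective base-8 maps each natural number to a distinct program string, and hits every string.

```python
ALPHABET = "+-<>[],."  # Brainfuck's 8 symbols, purely as an example

def nth_program(n: int) -> str:
    # Bijective base-8: 0 -> "", 1 -> "+", ..., 8 -> ".", 9 -> "++", ...
    # Every natural number names exactly one string, and vice versa.
    chars = []
    while n > 0:
        n -= 1
        chars.append(ALPHABET[n % 8])
        n //= 8
    return "".join(reversed(chars))

print(nth_program(9))  # "++"
```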

Comment author: AlephNeil 31 May 2011 11:56:06AM 1 point [-]

Sure, R is recursively enumerable, but S and S_I are not.
