
Why the time factor? I don't find that it particularly matches my intuitions, and as pointed out it makes having children arbitrarily bad (which also doesn't match my intuitions). Say we give each person's death a particular negative utility - histories in which they die get that single penalty regardless of time (though other independent time factors might apply, such as the sadness of their loved ones). Does that fit any better or worse with your conception of death morality?
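To make the contrast concrete, here's a toy sketch of the two conventions - all numbers and function names are hypothetical, and this is only meant to show the shapes of the two penalties, not anyone's actual utility function:

```python
# Toy penalties for a death at time t_death, in a history evaluated up to `horizon`.
FLAT_PENALTY = -1.0
PER_YEAR_PENALTY = -0.01

def time_factored_penalty(t_death, horizon):
    # Penalty keeps accruing for as long as the history continues past the death,
    # so longer futures make any death (and hence any birth) arbitrarily bad.
    return PER_YEAR_PENALTY * max(0, horizon - t_death)

def flat_penalty(t_death, horizon):
    # A single penalty for any history in which the person dies, regardless of when.
    return FLAT_PENALTY if t_death <= horizon else 0.0

for horizon in (100, 10_000, 1_000_000):
    print(horizon, time_factored_penalty(30, horizon), flat_penalty(30, horizon))
```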

(Incidentally, I was thinking about this just a few hours ago. Interesting how reading the same comment can trigger similar lines of thought.)

I think this is mistaken in that eliminating the HT and TTT possibilities isn't the only update SB can make on seeing heads. Conditioning on a particular sequence of flips, an observation of heads is certain under the HH or THH sequences, but only 50% likely under the THT or TTH sequences, so SB should adjust probabilities accordingly and consequently end up with no new information about the initial flip.
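For concreteness, here's a quick Bayesian enumeration of that claim - a minimal sketch, assuming the protocol implied by the listed sequences (one further flip after an initial heads, two further flips after an initial tails, with the flip SB observes drawn uniformly from the post-initial flips):

```python
from fractions import Fraction

half = Fraction(1, 2)

# Each entry: (initial flip, post-initial flips, prior probability of the full sequence)
sequences = [
    ("H", "H",  half**2),   # HH
    ("H", "T",  half**2),   # HT
    ("T", "HH", half**3),   # THH
    ("T", "HT", half**3),   # THT
    ("T", "TH", half**3),   # TTH
    ("T", "TT", half**3),   # TTT
]

# Joint probability that the initial flip was x AND the observed flip is heads.
joint = {"H": Fraction(0), "T": Fraction(0)}
for initial, later, prior in sequences:
    joint[initial] += prior * Fraction(later.count("H"), len(later))

total = sum(joint.values())
print({k: v / total for k, v in joint.items()})
# -> {'H': Fraction(1, 2), 'T': Fraction(1, 2)}: seeing heads says nothing about the initial flip
```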

HOWEVER. The above logic relies on the assumption that this is a coherent and useful way to consider probabilities in this kind of anthropic problem, and that's not an assumption I accept. So take with a grain of salt.

Looks interesting - I've signed up. Definitely interested in a study group too, both as an external motivator and hopefully to get more value from the course.

Well, yeah, any run-of-the-mill category theory textbook will of course load you down with examples. That doesn't mean they'll give you the background instruction necessary to understand those examples. It's all very well being told that the classic example of a non-concretizable category is the category of topological spaces and homotopy classes of continuous maps between them - if you've never taken a topology course, you won't have any idea what that means, and the book isn't going to include a beginner's topology textbook as a footnote.

I'm pretty sure Eliezer's point holds even if you only consider the immediate purchasing power of each individual.

Let us define thefts A and B:

A: Steal 1 cent from each of 1e9 individuals.

B: Steal 1e7 cents from 1 individual.

The claim here is that A has negligible disutility compared to B. However, we can define a new theft C as follows:

C: Steal 1e7 cents from each of 1e9 individuals.

Now, I don't discount the possibility that there are arguments to the contrary, but naively it seems that a C theft is 1e9 times as bad as a B theft. But a C theft is equivalent to 1e7 A thefts. So, necessarily, at least one of those A thefts must have been worse than a B theft - substantially worse, since on average each of them has to carry 1e9/1e7 = 100 B thefts' worth of disutility. Eliezer's question is: if the first one is negligible, at what point do they become so much worse?
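For concreteness, the arithmetic behind that composition - a minimal sketch using only the numbers defined above:

```python
# (amount stolen per victim in cents, number of victims) from the definitions above
A = (1, 10**9)
B = (10**7, 1)
C = (10**7, 10**9)

a_thefts_per_c = C[0] // A[0]      # 1e7 A thefts stacked together make one C
b_equivalents_of_c = C[1] // B[1]  # C does to 1e9 people what B does to one: ~1e9 B's worth of harm
print(b_equivalents_of_c / a_thefts_per_c)  # 100.0 -> the average A in the stack is 100x a B
```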

The benefit is doubled in the second case, but the investment is much larger (obviously), so RoI is not doubled. In fact, the investment is more than doubled (you have to pay for two transplants instead of one, as well as killing someone), so the RoI plummets.
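A quick numerical illustration, with made-up costs (the exact figures are hypothetical; only the shape of the comparison matters):

```python
# Hypothetical units: benefit in lives-saved equivalents, investment in arbitrary cost units.
benefit_one, cost_one = 1.0, 1.0          # one transplant
benefit_two, cost_two = 2.0, 2.0 + 0.5    # two transplants plus whatever killing someone costs
print(benefit_one / cost_one)   # 1.0
print(benefit_two / cost_two)   # 0.8 -- benefit doubled, investment more than doubled, RoI falls
```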

Wait, what? We may not be born knowing what cars and electricity are, but I would be surprised if we weren't born with an ability (or the capacity to develop an ability) to partition our model of a car-containing section of universe into discrete "car" objects, while not being able to do the same for "electric current" objects.

I'll attempt to clarify a little, if that's alright. Given a particular well-behaved theory T, Gödel's (first!) incompleteness theorem exhibits a statement G that is neither provable nor disprovable in T - that is, neither G nor ¬G is syntactically entailed by T. It follows by the completeness theorem that there are models of T in which G is true, and models of T in which ¬G is true.

Now G is often interpreted as meaning "G is not provable in T", which is obviously true. However, this interpretation is an artifact of the way we've constructed G, using a system of relations on Gödel numbers carefully designed to reflect the provability relations on statements in the language of T.

But these Gödel numbers are elements of whatever model of T we're using, and the assumption that the relations used in the construction of G have anything to do with provability in T only holds if the Gödel numbers come from the usual, non-crazy model N of the natural numbers. There are unusual, very-much-crazy models of the natural numbers which are not N, however, and if we're using one of those then our relations are unintuitive and wacky and have nothing at all to do with provability in T, and in these models G can be true or false as it pleases.

So when we say "G is true", we actually mean "G is true if we're using the standard model of the natural numbers, which may or may not even be a valid model of T in the first place".

Well, strictly speaking we don't directly assume that 2+2=4. We have some basic assumptions about counting and addition, and it follows from these that 2+2=4. But that doesn't really avoid the objection, it just moves it down the chain.

Can I change these assumptions? Well, firstly it bears saying that if I do, I'm not really talking about counting or addition any more, in the same way that if I define "beaver" to mean "300 ton sub-Saharan lizard", I'm not really talking about beavers.

So suppose I change my assumptions about counting and addition such that it comes out that 2+2=5. Would that mean that two apples added to two apples made five apples? Obviously not. It would mean that two* apples added* to two* apples made five* apples, where the starred words refer to altered concepts.
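A toy way to see the distinction, treating the altered concept as a relabelled operation (the function name and its rule are purely illustrative):

```python
# "addition*": a hypothetical altered rule under which 2 +* 2 comes out as 5.
def add_star(a, b):
    return a + b + 1

apples = ["apple", "apple"] + ["apple", "apple"]
print(add_star(2, 2))  # 5 -- a fact about the altered symbols, not about apples
print(len(apples))     # 4 -- physically combining two apples with two apples still gives four
```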
