I'm not entirely convinced by the rest of your argument, but
The idea that multiplying suffering by the number of sufferers yields a correct and valid total-suffering value is not a fundamental truth; it is just a naive extrapolation of our intuitions, which should help guide our decisions.
Is, far and away, the most intelligent thing I have ever seen anyone write on this damn paradox.
Come on, people. The fact that naive preference utilitarianism gives us torture rather than dust specks is not some result we have to live with; it's an indication that the decisi...
Issues with the survey:
Oops
Gah! A single data point tells you very little about over-/underconfidence! Please, please stop acting like getting a 60%-certain thing wrong (or a 20%-certain thing right) is a mistake.
A die has about a 17% chance of rolling a 6, and that still happens. A die has about a 67% chance of rolling a number larger than 2, and that sometimes doesn't happen. There is nothing unusual about either of these things, and the same applies to these estimation-with-confidence exercises!
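To put a number on it, here's a quick simulation sketch (the 60% figure and the trial count are just illustrative): even a perfectly calibrated forecaster gets their 60%-confident calls wrong about 40% of the time, so any single miss tells you almost nothing.

```python
import random

random.seed(0)

# A perfectly calibrated forecaster assigns 60% confidence to 10,000
# independent events, each of which really does occur with p = 0.6.
trials = 10_000
wrong = sum(random.random() >= 0.6 for _ in range(trials))

print(wrong / trials)  # roughly 0.4: perfect calibration, yet 40% of calls miss
```

Calibration only shows up in the aggregate; judging it from one event is like calling a die unfair because it rolled a 6.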
(This isn't directed at you specifically, but there have been a few instances of this in this thread, and your comment was the "final straw")
I wonder how common it is for the opposite to be true. I think visible logos on clothing are phenomenally tacky and have a strong, immediate negative reaction to the people wearing them when I see them. This isn't really a reaction to certain brands, but to the idea of advertising them.
On the other hand, I might assume that these people are wealthier.
I'm suspicious of this. My understanding is that true Solomonoff induction is uncomputable (because it requires ordering hypotheses by Kolmogorov complexity). Thus, you can't just create an algorithm to predict its next move.
edit: That is, just because it is "defined mathematically" doesn't mean we can predict its next move.
I've been thinking about this on and off for half a year or so, and I have come to the conclusion that I cannot agree with any proposed moral system that answers "torture" to dust specks and torture. If this means my morality is scope-insensitive, then so be it.
(I don't think it is; I just don't think utilitarianism with an aggregation function of summation over all individuals is correct; I think the correct aggregation function should probably be different. I am not sure what the correct aggregation function is, but maximizing the minimum ind...
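For concreteness, here is a toy sketch of how the choice of aggregation function flips the answer. All the utility numbers below are made-up assumptions for illustration, not anything from the original problem:

```python
# Map each utility level to the number of people at that level.
# Numbers are purely illustrative assumptions.
torture = {-1_000_000: 1, 0: 10}  # one person tortured, a few bystanders
specks = {-1: 10**9}              # a billion people each get a dust speck

def total(dist):
    # Sum-over-individuals aggregation (classical total utilitarianism).
    return sum(u * n for u, n in dist.items())

def maximin(dist):
    # Maximize-the-minimum aggregation: only the worst-off person counts.
    return min(dist)

print(total(torture), total(specks))      # -1000000 -1000000000: sum prefers torture
print(maximin(torture), maximin(specks))  # -1000000 -1: maximin prefers specks
```

Summation says the specks are a billion times worse; maximin looks only at the worst-off individual and answers "specks." Which aggregation is right is exactly the open question.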
I think I am at least a standard deviation out on this, but my college experience included a lot of very good theoretical and practical training, which served me extremely well as a grad student and is continuing to do so in my current job. While I could imagine having done it in less than four years, the idea of learning all that I did, and getting the practice applying it that I did, in less than two or three years is insane. While college does have a very high signaling value, it can also be very good at what it is nominally for: teaching students. Alth...
I've been pretty consistent about rock climbing and martial arts for multiple short periods in my life, and it is always glorious. Currently I am climbing (bouldering, which has a simplicity top-roping does not) multiple times a week, and for a few months I have also been weightlifting and getting cardio exercise. I am probably in the best cardio shape of my life (which is pretty mediocre!) and it is pretty great. I've got a group I go with, which is good for motivation.
But that's not true? I already exist. There's nothing acausal going on here. I can pick whatever I want, and it just makes Prometheus wrong.
(Similarly, if Omega presented me with the same problem, but said that he (omniscient) had only created me if I would one-box this problem, I would still two-box (assuming nothing from outside the problem, like a meteor, affects my brain). It would just make Omega wrong. If that contradicts the problem, well, then the problem was paradoxical to begin with.)
You said pretty much exactly everything I would have said and more.
One question--I only read the first third or so and skimmed the rest. The bits I read seemed to give a false dichotomy for the dates of composition of the gospels. The authors discussed atheistic schools that believed the gospels were all composed post-100 and contrasted these with the pre-70 dates of Christian belief. Do they ever discuss the modern scholarly near-consensus of 70-90?
Relatedly, do you know of any good arguments for post-70 composition dates, especially for Matthew and Luk...
Or ideally you would launch it into space, with a cloak against detection, and a randomly fluctuating acceleration factor that would take it out of the Solar System.
Is this a MoR explanation for the Pioneer anomaly? Because that would be awesome.
Also, I assumed Voldemort was talking about the classical elements, too, and was amused that Harry, a scientist, had come up with those at random.
But I see no reason for assigning high probability to the notion that a runaway superhuman intelligence will be developed within such a short timescale. In the bloggingheads diavlog, Scott Aaronson challenges Eliezer on this point and Eliezer offers some throwaway remarks which I do not find compelling. As far as I know, neither Eliezer nor anybody else at SIAI has provided a detailed explanation for why we should expect runaway superhuman intelligence on such a short timescale.
I think this is a key point. While I think unFriendly AI could be a problem in ...
What kind of math do you know where things can be "true, and that's the end of that"? In math, things should be provable from a known set of axioms, not chosen to be true because they feel right. Change the axioms, and you get different results.
Intuition is a good guide for finding a proof, and in picking axioms, but not much more than that. And intuitively true axioms can easily result in inconsistent systems.
The questions, "what axioms do I need to accept to prove Bayes' Theorem?", "Why should I believe these axioms reflect...
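On the Bayes point: once conditional probability is defined as the usual ratio, the theorem falls out in one step. A quick check on a finite sample space (the two-coin-flip space below is just an illustrative example):

```python
from fractions import Fraction

# Uniform distribution over two fair coin flips; conditional probability
# is taken by definition as the ratio P(A|B) = P(A and B) / P(B).
space = {w: Fraction(1, 4) for w in ("HH", "HT", "TH", "TT")}

def p(event):
    return sum(space[w] for w in event)

A = {"HH", "HT"}  # first flip is heads
B = {"HH", "TH"}  # second flip is heads

lhs = p(A & B) / p(B)                  # P(A|B), by definition
rhs = (p(A & B) / p(A)) * p(A) / p(B)  # P(B|A) * P(A) / P(B), Bayes' form

print(lhs == rhs)  # True: Bayes' theorem, straight from the ratio definition
```

The interesting axioms here are the ones behind the probability measure itself (Kolmogorov's, or Cox's desiderata); given those, Bayes' theorem is just algebra.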
I was intrigued when I first read this when you last posted it, and I thought about it for a while. The problem with it, it seems to me, is that it is a good explanation for why qualia are ineffable, but it doesn't seem to come any closer to explaining what they are or how they arise.
So, I could imagine a world (it may even be this one!) where people's brains happen to be organized similarly enough that two people really could transfer qualia between them, but this still doesn't explain anything about them.
This isn't really true--clock performance is a really good metric for computing power. If your clock speed doubles, you get a 2x speedup in the amount of computation you can do without any algorithmic changes. If you instead increase chip complexity, e.g., with parallelism, you need to write new code to take advantage of it.
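A back-of-the-envelope sketch of the asymmetry, using the Amdahl's-law framing (the 50%-parallelizable figure is just an illustrative assumption):

```python
def clock_speedup(factor):
    # A faster clock speeds up serial and parallel code alike,
    # with no changes to the program.
    return factor

def amdahl_speedup(cores, p):
    # Amdahl's law: extra cores only help the parallelizable
    # fraction p of the program; the serial part is untouched.
    return 1 / ((1 - p) + p / cores)

print(clock_speedup(2.0))                # 2.0: doubling the clock doubles throughput
print(round(amdahl_speedup(2, 0.5), 2))  # 1.33: two cores, half the code parallel
```

Even with infinitely many cores, a program that is 50% parallelizable tops out at a 2x speedup, and reaching even that requires rewriting the code; the clock-speed doubling is free.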