So, the truth value of "rationalists don't win" depends on your definition of "win"
Or the definition of rationalism. Maybe epistemic rationalism never had much to do with winning.
> So, the truth value of "rationalists don't win" depends on your definition of "win"
> Or the definition of rationalism. Maybe epistemic rationalism never had much to do with winning.
Upvoted, but I want to throw in the caveat that some baseline level of epistemic rationalism is very useful for winning. Schizophrenics tend to have a harder time of things than non-schizophrenics.
LW vocabulary relabels a lot of traditional rationality terms.
Has anyone put together a translation dictionary? It seems to me that most of the terms are the same, yet people commonly claim that relabeling is widespread without any sort of quantitative comparison.
Information diet?
I did a quick search on LW but didn't find any important article about information diet. Did I miss something?
Questions worth considering:
So I'm aiming for the soft spot of eliminating all the unnecessary news while still getting those pieces that are relevant for me.
Any ideas?
> I did a quick search on LW but didn't find any important article about information diet. Did I miss something?
I found a post that might be discussing the capital-I capital-D Information Diet you have in mind.
There've been some other threads and a post about cutting down on news or eliminating news from one's life, too.
> Should we eliminate all news sources like some advocate?
It's actually very plausible to me that a little news is the optimal amount for most people to deliberately consume, but I do mean a little — maybe 5 minutes a day as an order-of-magnitude guess — and one is probably not going to miss out on that much by cutting down to literally zero (though in a given time & place it might be a bad idea).
> I'm aiming for the soft spot of eliminating all the unnecessary news while still getting those pieces that are relevant for me.
The first idea which pops into my mind is specialization: pick news/commentary sources where a specialist talks about a narrow topic they know well. In your tax code example, you might be able to find some interesting tax bloggers(!) who'd be likely to mention important changes to the tax code in your jurisdiction.
> Admittedly, if I ask R to run a Lilliefors test, the test rejects the hypothesis of normality (p = 0.0007), and it remains the case that the donations are neither log-normal nor power-law distributed because some of the values are zero.
As I understand it, tests of normality are not all that useful because: they are underpowered & won't reject normality at the small samples where you need to know about non-normality because it'll badly affect your conclusions; and at larger samples like the LW survey, because real-world data is rarely exactly normal, they will always reject normality even when it makes not the slightest difference to your results (because the sample is now large enough to benefit from the asymptotics and various robustnesses).
When I was looking at donations vs EA status earlier this year, I just added +1 to remove the zero-inflation, and then logged donation amount. Seemed to work well. A zero-inflated log-normal might have worked even better.
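That add-one-then-log transform is what `log1p` computes. A minimal sketch in Python, with made-up donation figures purely for illustration:

```python
import math

# Hypothetical donation amounts in dollars; note the zeros,
# which a plain log() could not handle (log(0) is undefined).
donations = [0, 0, 50, 100, 250, 1000, 5000]

# Add 1 to every value, then take logs: zeros map cleanly to 0,
# and large donations are pulled onto a roughly normal-looking scale.
logged = [math.log1p(d) for d in donations]

print(logged[0])            # zero donations become exactly 0.0
print(round(logged[3], 3))  # log(101), about 4.615
```

The appeal of the +1 trick is that it keeps the zero-donors in the sample instead of dropping them, at the cost of slightly distorting the small donations; a zero-inflated model avoids that distortion by handling the zeros with a separate component.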
Also, you don't have to look at only one year's data; you can look at 3 or 4 by making sure to filter out responses based on whether they report answering a previous survey.
> As I understand it, tests of normality are not all that useful because: they are underpowered & won't reject normality at the small samples where you need to know about non-normality because it'll badly affect your conclusions; and at larger samples [...], because real-world data is rarely exactly normal, they will always reject normality even when it makes not the slightest difference to your results
I agree that normality tests are too insensitive for most small samples, and too sensitive for pretty much any big sample, but I'd presumed there was a sweet spot (when the sample size is a few hundred) where normality tests have decent sensitivity without giving everything a negligible p-value, and that the LW survey is near that sweet spot. If I'd been lazy and used R's out-of-the-box normality test (Shapiro-Wilk) instead of following goocy's recommendation (Lilliefors, which R hides in its nortest library) I'd have got an insignificant p of 0.11, so the sample [edit: of non-zero donations] evidently isn't large enough to guarantee rejection by normality tests in general.
> Also, you don't have to look at only one year's data; you can look at 3 or 4 by making sure to filter out responses based on whether they report answering a previous survey.
Certainly. It might be interesting to investigate whether the log-normal-with-zeroes distribution holds up in earlier years, and if so, whether the distribution's parameters drift over time. Still, goocy's complaint was about 2014's data, so I stuck with that.
I'd expect a Pareto distribution for charitable donations, not log-normal, and that's exactly what the histogram looks like:

Looks like alpha < 2, so the variance is infinite.
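For reference, a Pareto distribution with minimum x_m and shape alpha has finite variance only when alpha > 2. A quick sketch of the standard formula (my own illustration, not from the comment above):

```python
import math

def pareto_variance(alpha, x_m=1.0):
    """Variance of a Pareto(x_m, alpha) distribution.

    Finite only for alpha > 2: for alpha <= 2 the integral
    defining the second moment diverges, so we return inf.
    """
    if alpha <= 2:
        return math.inf
    return (x_m ** 2) * alpha / ((alpha - 1) ** 2 * (alpha - 2))

print(pareto_variance(3.0))  # 3 / (2**2 * 1) = 0.75
print(pareto_variance(1.5))  # inf: the heavy tail has infinite variance
```

So an infinite-variance reading of the histogram amounts to a claim that the fitted tail exponent is at or below 2.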
Thanks for prompting me to take a closer look at this.
The distribution is certainly very positively skewed, but for that reason that histogram is a blunt diagnostic. Almost all of the probability mass is lumped into the first bar, so it's impossible to see how the probability distribution looks for small donations. There could be a power law there, but it's not obvious that the distribution isn't just log-normal with enough dispersion to produce lots of small values.
Looking at the actual numbers from the survey data file, I see it's impossible for the distribution to be strictly log-normal or a power law, because neither distribution includes zero in its support, while zero is actually the most common donation reported.
I can of course still ask which distribution best fits the rest of the donation data. A quick & dirty way to eyeball this is to take logs of the non-zero donations and plot their distribution. If the non-zero donations are log-normal, I'll see a bell curve; if the non-zero donations are Pareto, I'll see a monotonically downward curve. I plot the kernel density estimate (instead of a histogram 'cause binning throws away information) and I see

which is definitely closer to a bell curve. So the donations seem closer to a log-normal distribution than a Pareto distribution. Still, the log-donation distribution probably isn't exactly normal (it looks a bit too much like a cone to me). Let's slap a normal distribution on top and see how that looks. Looks like the mean is about 6 and the standard deviation about 2?

Wow, that's a far closer match than it has any right to be! Admittedly, if I ask R to run a Lilliefors test, the test rejects the hypothesis of normality (p = 0.0007), and it remains the case that the donations are neither log-normal nor power-law distributed because some of the values are zero. But the non-zero donations look impressively close to a log-normal distribution, and I really doubt a Pareto distribution would fit them better. (And in general it's easy to see Pareto distributions where they don't really exist.)
Scott, if you read this, how about a wager?
Despite his frequent comments that he's "betting" on Trump and that Silver is "betting" against Trump, when pressed to actually bet, Adams's position is that gambling is illegal. This means one of the big feedback mechanisms preventing outlandish probabilities is absent, so don't take his stated probabilities at face value.
(In general, remember how terrible people are at calibration: a 98% chance probably corresponds to about a 70% chance in actuality, if Adams is an expert in the relevant field.)
And Adams himself says the "smart money" is on Silver's prediction! I think Adams's prediction is more performative than prognostic, even allowing for ordinary unconsciously bad calibration.
I haven't gone back and checked, but I seem to remember hearing that Eugene_Nier, when contacted by a moderator the first time, said he was trying to drive away people he considered unproductive. So if it's the same person, it's likely he's still trying to drive people away.
I arrive late but with a link to the Kaj_Sotala post you're probably thinking of:
> I sent two messages to Eugine, requesting an explanation. I received a response today. Eugine admitted his guilt, expressing the opinion that LW's karma system was failing to carry out its purpose of keeping out weak material and that he was engaged in a "weeding" of users who he did not think displayed sufficient rationality.
But again: now you are equating irrationality with deliberate suicide. You're not really drawing a very strong connection here.
> But again: now you are equating irrationality with deliberate suicide.
Whether PradyumnGanesh is or isn't (though I don't think they are), that doesn't change their observation that self-inflicted violence is a relatively common form of violence, at least going by fatal violence.
Dilbert creator Scott Adams, who has a fantastic rationalist-compatible blog, is giving Donald Trump a 98% chance of becoming president because Trump is using advanced persuasion techniques. We probably shouldn't get into whether Trump should be president, but do you think Adams is correct, especially about what he writes here? See also this, this, and this.
Forgetting what I know (or think I know) about Scott Adams, Donald Trump, Nate Silver, Jeb Bush, whoever, and going straight to the generic reference class forecast — I'm very sceptical someone could predict US presidential elections with 98% accuracy 14 months in advance.
How to change Anki timezone? Where to find free online textbooks?
My timezone is now 12 hours different, and Anki won't count a "new day" (to let me review my cards) until late in the day. Anyone know how to change the Anki timezone? The manual doesn't seem to say.
And I've heard there are online sources of great books (EY once joked that everything worth reading was online); where can I find those?
I don't know how up to date these suggestions are, but maybe they're still useful.