Comment author: 28 March 2017 05:51:33PM *  1 point [-]

I said:

For example, "because CO2 is a greenhouse gas, and because there's a lot more of it around than there used to be, that CO2 cascades into a warming event" is not a quantified claim.

The claim doesn't mention any measurement uncertainties. Moreover, the actual claim is "CO2 cascades into a warming event" and, y'know, it's just an event. Maybe it's an event with a tiny magnitude, maybe another event happens which counterbalances the CO2 effect, maybe the event ends, who knows...

Comment author: 29 March 2017 08:12:43AM 0 points [-]

The claim doesn't mention any measurement uncertainties.

That's why I said "much more". If I claimed "X is greater than Y" and it turned out that X = 15±1 and Y = 47±1, would my claim not be falsified because it didn't mention measurement uncertainties?

Comment author: 28 March 2017 04:38:17PM *  1 point [-]

Are you asking me to write out the interpretation of the evidence I see as a mathematical model

Not evidence. I want you to make a precise claim.

For example, "because CO2 is a greenhouse gas, and because there's a lot more of it around than there used to be, that CO2 cascades into a warming event" is not a quantified claim. It's not precise enough to be falsifiable (which is how a lot of people like it, but that's a tangent).

A quantified equivalent would be something along the lines of "We expect the increase in atmospheric CO2 from 300 to 400 ppmv to lead to the increase of the average global temperature by X degrees spread over the period of Z years so that we forecast the average temperature in the year YYYY as measured by a particular method M to be T with the standard error of E".

Note that this is all claim, no evidence (and not a model, either).

Comment author: 28 March 2017 05:25:15PM 0 points [-]

It's not precise enough to be falsifiable

Yes it is. For example, if CO2 concentrations and/or global temperatures went down by much more than the measurement uncertainties, the claim would be falsified.

Comment author: 27 March 2017 04:04:15PM *  0 points [-]

AFAIK (and Wikipedia confirms), this is not how IQ works. Measuring intelligence gives us an "ordinal scale", i.e. a ranking between test subjects. An honest report would be "you are in the top such-and-so percent". For example, testing someone as "one-in-a-billion performant" is not even wrong; it is meaningless, since we have not administered one billion IQ tests over the course of human history, and have no idea what one-in-a-billion performance on an IQ test would look like.

Because IQ is designed by people who would try to parse HTML with a regex (I cannot think of a worse insult here), it is normalized to a normal distribution. This means one maps the percentile data through the inverse error function, scaled to a mean of 100 and an SD of 15 points. Hence, IQ is Gaussian by definition. To compare, use e.g. Python as a handy pocket calculator:

from math import *

iqtopercentile = lambda x: erfc((x-100)/15)/2

iqtopercentile(165)

4.442300208692339e-10

So we see that a claim of any human having an IQ of 165+ is statistically meaningless. Even extrapolating to all of human history, an IQ of 180+ is meaningless:

iqtopercentile(180)

2.3057198811629745e-14

Yep, by the current definition you would need to test 10^14 humans to get one that manages an IQ of 180. If you test 10^12 humans and one god-like super-intelligence, then the super-intelligence gets an IQ of maybe 175 -- because you should not apply the inverse error function to an ordinal scale: ordinal scales cannot capture bimodal distributions. Trying to do so invites eldritch horrors onto our plane who will parse HTML with a regex.

Comment author: 28 March 2017 03:47:01PM *  0 points [-]

iqtopercentile = lambda x: erfc((x-100)/15)/2

Actually, the 15 should be (15.*sqrt(2)), resulting in iqtopercentile(115) = 0.16 as it should be, rather than the 0.079 your expression gives; iqtopercentile(165) = 7.3e-6 (i.e. about 7 such people, on average, in a city of 1 million inhabitants); and iqtopercentile(180) = 4.8e-8 (i.e. a few hundred such people in the world).

(Note also that in Python 2, (x-100)/15 returns an integer whenever x is an integer.)
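Putting both corrections together, a minimal sketch (assuming Python 3, where / is float division):

```python
from math import erfc, sqrt

# IQ-to-upper-tail-fraction. The sqrt(2) factor relates erfc to the
# Gaussian tail: P(X > x) = erfc((x - mu) / (sigma * sqrt(2))) / 2,
# with mu = 100 and sigma = 15 for IQ.
iqtopercentile = lambda x: erfc((x - 100) / (15.0 * sqrt(2))) / 2

print(iqtopercentile(115))  # ~0.1587, the familiar one-sigma upper tail
print(iqtopercentile(165))  # ~7.3e-06
print(iqtopercentile(180))  # ~4.8e-08
```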

Comment author: 27 March 2017 07:59:55PM *  1 point [-]

Comment author: 27 March 2017 08:21:43PM 1 point [-]

let's say theta is modeled by a Gaussian

The conjugate prior of the binomial distribution is the beta distribution, so if you use a beta distribution for theta, the posterior is also a beta distribution, and the expected value of the posterior predictive is just (u0 + u)/(u0 + u + d0 + d) where u and d are the number of up- and downvotes and u0 and d0 are the parameters of the prior distribution, or pseudocounts.
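A minimal sketch of that update, with hypothetical vote counts and a uniform prior:

```python
# Beta-binomial update: with a Beta(u0, d0) prior on the upvote
# probability and u observed upvotes / d downvotes, the posterior is
# Beta(u0 + u, d0 + d); the posterior predictive probability that the
# next vote is an upvote is the posterior mean.
u0, d0 = 1, 1   # uniform Beta(1, 1) prior (hypothetical pseudocounts)
u, d = 8, 2     # hypothetical observed votes

p_next_up = (u0 + u) / (u0 + u + d0 + d)
print(p_next_up)  # 0.75
```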

In response to a comment on Am I Really an X?
Comment author: 23 March 2017 10:59:53PM 0 points [-]

Those people are called "cis": traditionally, when an opposite of "trans" is needed, "cis" is it.

You know, we don't have a word for people who aren't schizophrenic, or who, say, don't believe they are avatars of a god, either.

In response to a comment on Am I Really an X?
Comment author: 23 March 2017 11:08:40PM 0 points [-]

we don't have a word for people who ... don't believe they are avatars of a god either

https://en.wikipedia.org/wiki/Laity

In response to a comment on March 2017 Media Thread
Comment author: 22 March 2017 05:32:04PM 0 points [-]

Does this site have a report button?

In response to a comment on March 2017 Media Thread
Comment author: 23 March 2017 07:48:28AM 0 points [-]

Only in your inbox as far as I can tell.

Comment author: 19 March 2017 08:47:14PM 0 points [-]

2% of LWers called themselves neoreactionary,

That's compatible with a lot of neoreactionaries being LWers.

To the best of my knowledge, Moldbug didn't post on LW.

I believe he posted on OB when EY was posting there.

Comment author: 19 March 2017 09:16:26PM *  0 points [-]

I believe he posted on OB when EY was posting there.

Yes but it's not like there was a lot of love lost between MM and EY (or RH).

Comment author: 16 March 2017 05:50:35PM *  1 point [-]

But in fact all the probabilities are equally real, depending on your selection process.

This is not so. You are confusing two kinds of uncertainty (and so, of probability): uncertainty about the actual outcome in the real, physical world, and the uncertainty of an agent who does not know the outcome.

For a random person, the total probability of getting cancer will be 45.5%.

Let's unroll this. The actual probability for a random person to get cancer is either 90% or 1%. You just don't know which one of these two numbers applies, so you produce an estimate by combining them. Your estimate doesn't change anything in the real world and someone else -- e.g. someone who has access to the lesion-scanning results for this random person -- would have a different estimate.

Note, by the way, the difference between speaking about a "random person" and about the whole population. For the population as a whole, the 45.5% value is correct: out of 1000 people, about 455 will get cancer. But for a single person it is not correct: a single person has either a 90% actual probability or a 1% actual probability.

For simplicity consider an urn containing an equal number of white and black balls. You would say that a "random ball" has a 50% chance of being black -- but each ball is either black or white, it's not 50% of anything. 50% of the entire set of balls is black, true, but each ball's state is not uncertain and is not subject to ("actual") probability.
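The arithmetic behind the two levels can be sketched as follows (assuming, as the 45.5% figure implies, that half the population has the lesion):

```python
# Mixture of two groups: lesion carriers (90% cancer risk) and
# non-carriers (1% risk), assumed equally common.
p_lesion = 0.5
p_cancer_given_lesion = 0.90
p_cancer_given_no_lesion = 0.01

# Population-level (marginal) probability -- the 45.5% figure:
p_cancer = (p_lesion * p_cancer_given_lesion
            + (1 - p_lesion) * p_cancer_given_no_lesion)
print(p_cancer)  # 0.455

# Any single person, though, sits in exactly one group: their
# conditional probability is either 0.90 or 0.01, never 0.455.
```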

Comment author: 16 March 2017 11:33:04PM 1 point [-]

The actual probability for a random person to get cancer is either 90% or 1%. You just don't know which

"You just don't know which" is what probability is. The "actual" probability is the probability conditional on all the information we actually have, namely 45.5%; 90% or 1% would be the probability if, contrary to the fact, we also knew whether the person has the lesion.

In response to a comment on Am I Really an X?
Comment author: 16 March 2017 07:40:00AM 0 points [-]
• If someone cares strongly about whether they're regarded as male, female, or something else, then in the absence of strong special reasons for doing otherwise we should go along with that preference.

• If, for instance, they take on the considerable social cost of telling everyone that they want to be known by a new name, addressed with non-standard pronouns, etc., that is good evidence that they care strongly.

Except we don't, and can't, apply that logic in any other situation, otherwise we'd find ourselves going out of our way to accommodate every nutcase and everyone who finds it convenient to pretend to be a nutcase.

• If someone who appears (say) male by all other usual criteria says they're "really" a woman, I don't think "believe" is the right word for what I do in response, although "disbelieve" would be much worse. Rather, I don't think this is the sort of thing there's some kind of objective fact of the matter about; we get to choose how we classify people, and I'm happy to do that classifying -- for most purposes -- in ways that are strongly influenced by people's expressed gender identity.

What about someone who insists that Jesus talked to him? Or the classic reductio ad absurdum of someone who insists he (or it?) is an attack helicopter?

In response to a comment on Am I Really an X?
Comment author: 16 March 2017 09:20:06AM 0 points [-]

What about someone who insists that Jesus talked to him?

GWB did that all the time and we never institutionalized him.

Comment author: 15 March 2017 02:41:02PM *  0 points [-]

What shape could that possibly be?

Comment author: 15 March 2017 06:19:17PM *  0 points [-]

I was about to say "Since you never specified that the shape must be a measurable set ..." and link to here, but since you mention the area of the shape, you do (implicitly) require it to have one.
