Comment author: Vaniver 06 January 2015 12:38:00AM 8 points [-]

1319 people supplied both a probability of God that was not blank or "idk" (or the equivalent thereof) and a non-blank religion. I was going to do results for both religious views and religious background, but religious background was a write-in field, so no thanks.

Literally every group had at least one member who supplied a P(God) of 0 and a P(God) of 100.
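The "literally every group" check is a one-liner once the responses are grouped; a minimal sketch with invented data (the group names and probabilities below are hypothetical stand-ins, not actual survey rows):

```python
# Hypothetical survey rows: (religion, P(God) as a percentage).
rows = [
    ("Atheist", 0), ("Atheist", 100),
    ("Agnostic", 0), ("Agnostic", 100),
    ("Committed theist", 0), ("Committed theist", 100),
]

# Group the supplied probabilities by religion.
by_group = {}
for religion, p_god in rows:
    by_group.setdefault(religion, []).append(p_god)

# The claim: each group's answers span the full 0-100 range.
every_group_spans = all(
    min(ps) == 0 and max(ps) == 100 for ps in by_group.values()
)
print(every_group_spans)  # → True
```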

Comment author: Grothor 06 January 2015 05:18:13PM *  0 points [-]

Literally every group had at least one member who supplied a P(God) of 0 and a P(God) of 100.

Okay, I'll bite: what does someone mean when they say they are an atheist, and they think P(God) = 100%?

In response to 2014 Survey Results
Comment author: Vaniver 04 January 2015 10:55:19PM *  10 points [-]

Thanks for doing this!

[This question was poorly worded and should have acknowledged that people can both be asexual and have a specific orientation; as a result it probably vastly undercounted our asexual readers]

I find the "vastly" part dubious, given that 3% asexual already seems disproportionately large (the general population seems to be about 1%). I would expect asexuals to be overrepresented, and I do think the question wording means the survey underestimates the true proportion, but I don't think the true figure is, say, 10% rather than 4%.
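As a rough sanity check on how far the survey's 3% sits from the ~1% base rate, here is a normal-approximation confidence interval; the respondent count of 1500 is an assumption for illustration, not a figure from this comment:

```python
import math

n = 1500          # assumed respondent count for this question
p_hat = 0.03      # survey's asexual proportion
base_rate = 0.01  # rough general-population figure

# 95% normal-approximation confidence interval for the survey proportion.
se = math.sqrt(p_hat * (1 - p_hat) / n)
lower, upper = p_hat - 1.96 * se, p_hat + 1.96 * se

# Even the low end of the interval sits well above the ~1% base rate,
# consistent with asexual readers being overrepresented on LW.
print(round(lower, 3), round(upper, 3))
```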

Comment author: Grothor 05 January 2015 01:02:30AM 0 points [-]

I would expect asexuals to be overrepresented

Why do you expect this? It seems reasonable if I think in terms of stereotypes. Also, I guess LWers might be more likely to recognize that they are asexual.

In response to 2014 Survey Results
Comment author: Vulture 04 January 2015 06:18:32AM *  5 points [-]

Yayy! I was having a shitty day, and seeing these results posted lifted my spirits. Thank you for that! Below are my assorted thoughts:

I'm a little disappointed that the correlation between height and P(supernatural)-and-similar didn't hold up this year, because it was really fun trying to come up with explanations for that that weren't prima facie moronic. Maybe that should have been a sign it wasn't a real thing.

The digit ratio thing is indeed delicious. I love that stuff. I'm surprised there wasn't a correlation to sexual orientation, though, since I seem to recall reading that that was relatively well-supported. Oh well.

WTF was going on with the computer games question? Could there have been some kind of widespread misunderstanding of the question? In any case, it's pretty clearly poorly-calibrated Georg, but the results from the other questions are horrendous enough on their own.

On that subject, I have to say that even more amusing than the people who gave 100% and got it wrong are the people who put down 0% and then got it right -- aka, really lucky guessers :P

Congrats to the Snicket fan!

This was a good survey and a good year. Cheers!

Comment author: Grothor 04 January 2015 11:15:01PM 9 points [-]

I remember answering the computer games question and at first feeling like I knew the answer. Then I realized the feeling I was having was that I had a better shot at the question than the average person that I knew, not that I knew the answer with high confidence. Once I mentally counted up all the games that I thought might be it, then considered all the games I probably hadn't even thought of (of which Minecraft was one), I realized I had no idea what the right answer was and put something like 5% confidence in The Sims 3 (which at least is a top ten game). But the point is that I think I almost didn't catch my mistake before it was too late, and this kind of error may be common.

In response to 2014 Survey Results
Comment author: Grothor 04 January 2015 10:45:58PM 2 points [-]

Under the profession listings, it says 35 people and 4% for Business. 35 is 2.3% of 1500.

Comment author: spxtr 19 December 2014 10:35:49PM 2 points [-]

Omega tells us the state of the water at time T=0, when we put the beer into it. There are two ways of looking at what happens immediately after.

The first way is that the water doesn't flow heat into the beer, rather it does some work on it. If we know the state of the beer/water interface as well then we can calculate exactly what will happen. It will look like quick water molecules thumping into slow boundary molecules and doing work on them. This is why the concept of temperature is no longer necessary: if we know everything then we can just do mechanics. Unfortunately, we don't know everything about the full system, so this won't quite work.

Think about your uncertainty about the state of the water as you run time forward. It's initially zero, but the water is in contact with something that could be in any number of states (the beer), and so the entropy of the water is going to rise extremely quickly.

The water will initially be doing work on the beer, but after an extremely short time it will be flowing heat into it. One observer's work is another's heat, essentially.

Comment author: Grothor 20 December 2014 10:52:13PM 1 point [-]

The first way is that the water doesn't flow heat into the beer, rather it does some work on it.

This actually clears things up quite a lot. I think my discomfort with this description is mainly aesthetic. Thank you for being patient.

Comment author: spxtr 19 December 2014 07:09:37AM 1 point [-]

We don't lose those things. Remember, this isn't my definition. This is the actual definition of temperature used by statistical physicists. Anything statistical physics predicts (all of the things you listed) is predicted by this definition.

You're right though. If you know the state of the molecules in the water then you don't need to think about temperature. That's a feature, not a bug.

Comment author: Grothor 19 December 2014 07:29:57PM 1 point [-]

We don't lose those things.

Suppose that you boil some water in a pot. You take the pot off the stove, then take a can of beer out of the cooler (which is filled with ice) and put it in the water. The place where you're confusing your friends by putting cans of beer in pots of hot water is by the ocean, so when you read the thermometer that's in the water, it reads 373 K. The can of beer, which was in equilibrium with the ice at a measured 273 K, had some bits of ice stuck to it when you put it in. They melt. Next, you pull out your fancy laser-doppler-shift-based water molecule momentum spread measurer. The result jibes with 373 K liquid water. After a short time, the thermometer reads 360 K (the control pot with no beer reads 371 K). There is no ice left in the pot. You take out the beer, open it, and measure its temperature to be 293 K and its momentum width to be smaller than that of the boiling water.

What we observed was:

  • Heat flowed from 373 K water to 273 K beer
  • The momentum distribution is wider for water at 373 K than at 293 K
  • Ice placed in 373 K water melts
  • Our thermometer reads 373 K for boiling water and 273 K for water-ice equilibrium

Now, suppose we do exactly the same thing, but just after putting the beer in the water, Omega tells us the state of every water molecule in the pot, but not the beer. Now we know the temperature of the water is exactly 0 K. We still anticipate the same outcome (perhaps more precisely), and observe the same outcome for all of our measurements, but we describe it differently:

  • Heat flowed from 0 K water to 273 K beer
  • The momentum distribution is wider for water at 0 K (or recently at 0 K) than at 293 K
  • Ice placed in 0 K water melts
  • Our thermometer reads 373 K for water boiling at 0 K, and 273 K for water-ice equilibrium

So the only difference is in the map, not the territory, and it seems to be only in how we're labeling the map, since we anticipate the same outcome using the same model (assuming you didn't use the specific molecular states in your prediction).

Remember, this isn't my definition. This is the actual definition of temperature used by statistical physicists.

I agree that temperature should be defined so that 1/T = dS/dE . This is the definition that, as far as I can tell, all physicists use. But nearly every result that uses temperature is derived using the assumption that all microstates are equally probable (your second law example being the only exception that I am aware of). In fact, this is often given as a fundamental assumption of statistical mechanics, and I think this is what makes the "glass of water at absolute zero" comment confusing. (Moreover, many physicists, such as plasma physicists, will often say that the temperature is not well-defined unless certain statistical conditions are met, like the energy and momentum distributions having the correct form, or the system being locally in thermal equilibrium with itself.)

I'm having trouble with brevity here, but what I'm getting at is this: if you can show that we can drop the fundamental postulate of statistical mechanics and still recover the second law of thermodynamics, then I'm happy to call it a feature rather than a bug. But it seems like bringing in temperature confuses the issue rather than clarifying it.

Comment author: Grothor 19 December 2014 12:16:50AM *  2 points [-]

Temperature is then defined as the thermodynamic quantity that is the shared by systems in equilibrium.

I think I've figured out what's bothering me about this. If we think of temperature in terms of our uncertainty about where the system is in phase space, rather than how large a region of phase space fits the macroscopic state, then we gain a little in using the second law, but give up a lot everywhere else. Unless I am mistaken, we lose the following:

  • Heat flows from hot to cold
  • Momentum distribution can be predicted from temperature
  • Phase changes can be predicted from temperature
  • The reading on a thermometer can be predicted from temperature

I'm sure there are others. I realize that if we know the full microscopic state of a system, then we don't need to use temperature for these things, but then we wouldn't need to use temperature at all.

if you know the states of all the molecules in a glass of hot water, it is cold in a genuinely thermodynamic sense: you can take electricity out of it and leave behind an ice cube.

If you're able to do this, I don't see why you'd be using temperature at all, unless you want to talk about how hot the water is to begin with (as you did), in which case you're referring to the temperature that the water would be if we had no microscopic information.

Comment author: spxtr 18 December 2014 04:37:11AM 2 points [-]

I posted some plots in the comment tree rooted by DanielFilan. I don't know what you used as the equation for entropy, but your final answer isn't right. You're right that temperature should be intensive, but the second equation you wrote for it is still extensive, because E is extensive :p

Comment author: Grothor 18 December 2014 05:38:47AM *  2 points [-]

your final answer isn't right

You're right. That should be ε, not E. I did the extra few steps to substitute α = E/(Nε) back in, and solve for E, to recover DanielFilan's (corrected) result:

E = Nε / (exp(ε/T) + 1)

I used S = log[N choose M], where M is the number of excited particles (so M = αN). Then I used Stirling's approximation as you suggested, and differentiated with respect to α.
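That closed form can be cross-checked numerically: evaluate S = log[N choose M] via log-gamma and compare a finite-difference dS/dE against 1/T. The particular N and T below are arbitrary choices for the check, not part of the problem statement:

```python
import math

N, eps, T = 10_000, 1.0, 2.0

# Closed-form result: E = N*eps / (exp(eps/T) + 1), so the excited
# fraction is alpha = 1 / (exp(eps/T) + 1).
alpha = 1.0 / (math.exp(eps / T) + 1.0)
M = round(alpha * N)  # number of excited particles

def S(m):
    """Entropy S = log[N choose m], via log-gamma to avoid huge factorials."""
    return math.lgamma(N + 1) - math.lgamma(m + 1) - math.lgamma(N - m + 1)

# Finite-difference dS/dE; each excited particle adds energy eps.
inv_T = (S(M + 1) - S(M - 1)) / (2 * eps)
print(round(1.0 / inv_T, 2))  # → 2.0, recovering the chosen T
```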

Comment author: Grothor 18 December 2014 03:54:51AM *  1 point [-]

I'll follow suit with the previous spoiler warning.

SPOILER ALERT .

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

I took a somewhat different approach from the others that have solved this, or maybe you'd just say I quit early once I thought I'd shown the thing I thought you were trying to show:

If we write entropy in terms of the number of particles, N and the fraction of them that are excited: α ≡ E/(Nε) , and take the derivative with respect to α, we get:

dS/dα = N log [(1-α)/α]

Or if that N is bothering you (since temperature is usually an intensive property), we can just write:

T = 1/(dS/dE) = E / log[(1-α)/α]

This gives zero temperature when all of the particles are excited or none are (which makes sense, because you know exactly where you are in phase space), and it blows up when half the particles are excited. This means that there is no reservoir hot enough to get from α < 0.5 to α = 0.5.
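The blow-up at α = 0.5 is easy to see numerically, using the intensive form T = ε / log[(1-α)/α] (with ε in place of E, the correction worked out elsewhere in the thread); the sample α values below are arbitrary:

```python
import math

def T(alpha, eps=1.0):
    """Temperature of the two-state system at excited fraction alpha."""
    return eps / math.log((1 - alpha) / alpha)

# T is small and positive near alpha = 0, diverges as alpha -> 0.5
# from below, and is negative for alpha > 0.5 (population inversion).
for a in (0.1, 0.4, 0.499, 0.6):
    print(a, round(T(a), 3))
```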

Comment author: someonewrongonthenet 16 December 2014 11:45:21PM *  6 points [-]

People are thinking in terms of grades

That's not an explanation, just a symptom of the problem. People of mediocre talent and high talent both get an A - that's part of the reason why we have to use standardized tests with a higher ceiling.

My intuition is that the top few notches are satisficing, whereas all lower ratings are varying degrees of non-satisficing. The degree to which everything tends to cluster at the top represents the degree to which everything is satisfactory for practical purposes. In situations where the majority of the rated things are not satisfactory (like the Putnam - nothing less than a correct proof is truly satisfactory), the ratings will cluster near the bottom.

For example, compare motels to hotels. Motels always have fewer stars, because motels in general are worse. Whereas, say, video games will tend to cluster at the top because video games in general are satisfactorily fun.

Or, think Humanities vs. Engineering grades. Humanities students in general satisfy the requirements to be historians and writers or liberal-arts-educated-white-collar workers more than Engineering students satisfy the requirements to be engineers.

Comment author: Grothor 17 December 2014 05:17:18AM 1 point [-]

That's not an explanation, just a symptom of the problem.

This is what I was trying to convey when I said it might be another example of the problem.

I think it's reasonable, in many contexts, to say that achieving 75% of the highest possible score on an exam should earn you what most people think of as a C grade (that is, good enough to proceed with the next part of your education, but not good enough to be competitive).

I would say that games are different. There is not, as far as I know, a quantitative rubric for scoring a game. A 6/10 rating on a game does not indicate that the game meets 60% of the requirements for a perfect game. It really just means that it's similar in quality to other games that have received the same score, and usually a 6/10 game is pretty lousy. I found a histogram of scores on metacritic:

http://www.giantbomb.com/profile/dry_carton/blog/metacritic-score-distribution-graphs/82409/

The peak of the distribution seems to be around 80%, while I'd eyeball the median to be around 70-75%. There is a long tail of bad games. You may be right that this distribution does, in some sense, reflect the actual distribution of game quality. My complaint is that this scoring system is good at resolving bad games from truly awful games from comically terrible games, but bad at resolving a good game from a mediocre one.

What I think it should be is a percentile-based score, like Lumifer describes:

Consider this example: I come up to you and ask "So, how was the movie?". You answer "I give it a 6 out of 10". Fine. I have some vague idea of what you mean. Now we wave a magic wand and bifurcate reality.

In branch 1 you then add "The distribution of my ratings follows the distribution of movie quality, savvy?" and let's say I'm sufficiently statistically savvy to understand that. But... does it help me? I don't know the distribution of movie quality. It's probably bell-shaped, maybe, but not quite normal if only because it has to be bounded; I have no idea if it's skewed, etc.

In branch 2 you then add "The rating of 6 means I rate the movie to be in the sixth decile". Ah, that's much better. I now know that out of 10 movies that you've seen five were probably worse and three were probably better. That, to me, is a more useful piece of information.

Then again, maybe it's difficult to discern a difference in quality between a 60th percentile game and an 80th percentile game.
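The branch-2 decile scheme is straightforward to mechanize; a minimal sketch, where the score list is invented to loosely mimic the Metacritic shape (mass near 75-80, long tail of bad games):

```python
def decile_rating(score, all_scores):
    """Map a raw score to the decile scale: ten times the fraction of
    rated items that scored strictly lower."""
    worse = sum(1 for s in all_scores if s < score)
    return 10 * worse / len(all_scores)

# Invented Metacritic-like scores, not real data.
scores = [95, 88, 85, 82, 80, 79, 78, 76, 75, 74,
          73, 70, 68, 65, 60, 52, 45, 30, 20, 10]

# A raw 76 - a "good" score under the usual convention - lands mid-pack.
print(decile_rating(76, scores))  # → 6.0
```

Note how this recovers the complaint above: the raw scale calls 76 good, while the percentile scale calls the same game merely median.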
