AnthonyC

It seems as if I need two more Karma (somehow), or I will have to wait five days.

Have some upvotes then :-)

Your insight about system entropy being limited by surface area was precisely one of the missing puzzle pieces I wanted to find. I do not yet know what to make of it, but perhaps it would be obvious to you.

The intuitive explanation for this is that a black hole is a maximum-entropy state, because you have the minimum possible information about its internal structure and composition. You can know its mass, angular momentum, and net charge, and that's it. You can't look at a black hole and infer anything else about the composition and structure of the matter that went into its formation. Hawking radiation complicates this a bit - I think there's some quantum information theory result that it necessarily somehow encodes the information about the matter that went in (IIRC because the virtual particles that cross the event horizon annihilate normal matter instead of their virtual partners that escaped, in a way that matches the energy/charge/parity of the escaped matter?). And you can theoretically, maybe, use something like the Penrose process to extract the energy stored in the angular momentum and charge, thereby eliminating that gradient. At that point you only have the mass, and kinda (AFAIK) have to just wait the ridiculous amount of time it takes the black hole to evaporate into a similarly maximum-entropy gas in a post-heat-death cosmos.
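
For anyone who wants numbers, here's a back-of-envelope version of the area-entropy connection, using the standard textbook formulas and my own choice of a solar-mass example:

```python
# Hedged sketch (my numbers, standard constants): Bekenstein-Hawking entropy
# S = k * A * c**3 / (4 * G * hbar), with A the horizon area. For a Schwarzschild
# black hole, A = 16*pi*(G*M)**2 / c**4.
import math

G, c, hbar, k = 6.674e-11, 2.998e8, 1.055e-34, 1.381e-23  # SI units
M_sun = 1.989e30                                          # kg

A = 16 * math.pi * (G * M_sun) ** 2 / c**4                # horizon area, m^2
S = k * A * c**3 / (4 * G * hbar)                         # entropy, J/K
print(A, S / k)  # ~1.1e8 m^2 and ~1e77 in units of k: far more entropy than the
                 # same mass in any ordinary configuration, and it scales with area
```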

With the CMB it is a bit different, since the CMB is present everywhere coming from all directions. If one could isolate the CMB photons in the atmosphere they would add energy based on atmospheric volume (I suspect), not area. 

In the limit, however, with a sufficiently thick gas and large volume (if you could even get such a thing), you would get absorption of all CMB photons deep inside the atmosphere, and then you would be limited by surface area, not volume. Rather than the CMB giving a volume-based blackbody radiation, you would get a more standard flux situation, limited by area. 

I could be wrong, but I was under the impression that the CMB (a photon gas) is composed of primordial photons; that they're not still being generated. In which case, for any concentration of matter (like the Earth's atmosphere), the CMB photons initially present will have either rapidly passed through and out (since the atmosphere is pretty transparent to microwaves) or else been absorbed long since by the local matter, which re-emits that energy as part of its own thermalized blackbody radiation at the local temperature, so there is no lingering internal flux. Instead you have the gas emitting its own blackbody radiation out through its boundary, the unabsorbed CMB passing out of the boundary, and the external CMB coming in across the same boundary. I think? In which case this all reaches equilibrium very quickly - faster than you could actually form such a gas, since the CMB moves at light speed and matter does not. In any case it should all balance.
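
A quick sanity check on why that re-balancing is a non-event energetically (my own illustrative numbers, just the Stefan-Boltzmann T^4 scaling):

```python
# Back-of-envelope comparison (my numbers): blackbody flux scales as T**4, so the
# CMB flux crossing any boundary is negligible next to what roughly-Earth-temperature
# matter radiates on its own.
SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W m^-2 K^-4

print(SIGMA * 2.725**4)   # ~3e-6 W/m^2: incoming CMB flux
print(SIGMA * 288**4)     # ~390 W/m^2: emission from matter near Earth-surface temperature
```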

In the context of normal matter, the CMB sets a kind of minimum cold-reservoir temperature for heat engines that do net work - generating a colder cold reservoir with a heat pump takes more work than you can then extract by dumping heat into it - but it's a minimum that decreases with time as the universe expands and cools. 
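
If it helps to see the magnitudes, here's the Carnot bookkeeping with the CMB as the cold reservoir (my own example numbers; the 300 K hot side is arbitrary):

```python
# Minimal sketch (my numbers, not from the comment): the Carnot limit with the CMB
# as the cold reservoir. Refrigerating a reservoir below T_cmb at best breaks even
# against the extra work a colder reservoir would let you extract, so T_cmb acts as
# the practical floor.
T_cmb = 2.725                      # K, present-day CMB temperature

def carnot_efficiency(T_hot, T_cold):
    """Maximum fraction of heat drawn from T_hot that can be converted to work."""
    return 1.0 - T_cold / T_hot

print(carnot_efficiency(300.0, T_cmb))  # ~0.991: already close to the ideal limit of 1.0
print(carnot_efficiency(300.0, 0.0))    # 1.0: the most any colder reservoir could buy you
```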

Things get a little wonky with black holes, which are much much colder than the CMB. I am a bit unsure whether black holes break this in some way, since they get colder as you add matter to them. But I think that's balanced by the frictional heating and other effects that happen as matter approaches the event horizon? And also by relativistic effects that mean that matter takes infinite time (from the reference frame of a distant observer) to cross the event horizon as it falls in? We still don't have a good understanding of quantum gravity, either, which could have a lot of implications for the metric effects that happen near black holes and for the long-term future of the cosmos.
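
For a sense of scale, the standard Hawking-temperature formula makes the "colder than the CMB" point quantitative (my own numbers; a solar-mass example):

```python
# Hedged sanity check with the textbook formula (my numbers): the Hawking temperature
# T = hbar * c**3 / (8 * pi * G * M * k) falls as the mass grows, and for any
# stellar-mass black hole it is already far below the ~2.7 K CMB.
import math

G, c, hbar, k = 6.674e-11, 2.998e8, 1.055e-34, 1.381e-23  # SI units
M_sun = 1.989e30                                          # kg

def hawking_temperature(M):
    return hbar * c**3 / (8 * math.pi * G * M * k)

print(hawking_temperature(M_sun))       # ~6e-8 K, vs. T_CMB ~ 2.7 K
print(hawking_temperature(10 * M_sun))  # ~6e-9 K: adding mass makes it colder still
```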

Maybe the reset time would diverge into infinity, and the last conscious thought, the last computation, will be broken, just hanging there, never finished. 

Both of those thought experiments involve versions of this. Dyson's Eternal Intelligence assumes exponential slowing of computation over time, in order to produce infinite computation over even-more-infinite time using finite extropy. It is set in an ever-expanding, ever-cooling cosmos. The Omega Point is set in a collapsing cosmos, performing infinite computation in finite time using finite extropy. Both involve decoupling objective from subjective time, since the computation/simulation happens much slower or much faster than objective time, respectively. I don't think very many people seriously think that we, from within the universe, could set things up with the perfect precision needed to make either scenario work well enough to do actually-infinite computation? More like you can stretch the efficiency of computation to be arbitrarily high the more precisely you can set things up.
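
Here's a toy version of the finite-energy, infinite-steps bookkeeping; the cooling schedule and Landauer-style cost are my stand-ins, not Dyson's actual construction:

```python
# Toy illustration (my own stand-in numbers): if each successive "thought" runs at a
# lower temperature T_n = T0 * r**n, the Landauer-style cost per erased bit,
# k * T_n * ln(2), forms a geometric series. Total energy stays finite even though
# the number of thoughts grows without bound.
import math

k  = 1.380649e-23   # Boltzmann constant, J/K
T0 = 1.0            # starting temperature, K (arbitrary)
r  = 0.9            # cooling ratio per step (arbitrary, must be < 1)

total = 0.0
for n in range(10_000):
    total += k * (T0 * r**n) * math.log(2)

print(total, k * T0 * math.log(2) / (1 - r))  # partial sum ≈ the closed-form limit
```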

Keep in mind I'm not any kind of cosmologist or theoretical physicist - just someone who once thought he wanted to be. I am a materials scientist, but one who hasn't worked in a lab in 15 years, and in any case we're way beyond that context now.

I think what you're saying is fine, and if it's useful to you or anyone else, then great! At heart I'm a scientist, not an engineer: I'm not so great at aiming for usefulness.

I do agree that I don't think engineers will be sitting around counting microstates in most cases. There are cases where they do some version of that, especially in high-end semiconductor devices and nanotech, when you really have to account for quantum effects with as much precision as you can squeeze out of what you're working with. Otherwise it tends to be the kind of thing that gets abstracted away into practical approximations. Like how you can model the entropy of an ideal gas with N particles as the (unitless) volume of a 3N-dimensional hypersphere, only that's an annoying formula, and for high N the volume and surface area are almost the same, so instead you just use the surface area, and then the difference really doesn't matter because you take the ln() of it anyway. And after you see that derived one time in college you then just use the ideal gas law for the rest of your life.
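
If anyone wants to see the volume-versus-surface-area point numerically, here's a rough sketch (my own toy N; a real gas would have N ~ 10^23):

```python
# Minimal sketch (my numbers, not from the comment): compare the log-volume of a
# d-dimensional ball with the log-"area" of its bounding sphere for large d. After
# taking ln(), the difference is negligible, which is why the surface-area shortcut
# works in the ideal-gas entropy derivation.
import numpy as np
from scipy.special import gammaln

def log_ball_volume(d, R=1.0):
    # ln of V_d = pi^(d/2) * R^d / Gamma(d/2 + 1)
    return (d / 2) * np.log(np.pi) + d * np.log(R) - gammaln(d / 2 + 1)

def log_sphere_area(d, R=1.0):
    # ln of S_(d-1) = 2 * pi^(d/2) * R^(d-1) / Gamma(d/2)
    return np.log(2) + (d / 2) * np.log(np.pi) + (d - 1) * np.log(R) - gammaln(d / 2)

d = 3 * 1000  # stand-in for 3N with N = 1000 particles
print(log_ball_volume(d), log_sphere_area(d))
# The two logs differ by only ~ln(d) ≈ 8 while each is in the thousands in magnitude,
# so S = k*ln(W) barely notices which one you use.
```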

"you" do not expand the number of ways the system can be. The universe does. 

I would counter that the solution is: it is almost always an error (a generally necessary, and usually unimportant, error) to exclude "you" from the system being modeled. If you're setting it up or using it or measuring it or just present in its past lightcone, you're interacting with it and are part of it. You are also part of the universe. In most cases that doesn't matter, but in the cases where you're asking this question, I think it's the answer. Also: if we're getting into cosmological questions involving entropy, then a lot of fun things might come into play. Consider that the entropy of a system is limited by its surface area. See: black hole entropy; cosmological event horizons; comparisons of the density of the universe to the density of a black hole the size of the universe; Dyson's eternal intelligence; entropic gravity; the "Omega Point"; and holographic universe models.

Also: I'm not sure your definition for 'utility' adds more than it obfuscates, but I look forward to seeing what you write next!

Ditto.

Informally and intuitively, that seems about right, if you think about the fact that there's going to be some sort of near-equivalence between that and how often a typical person moves in a lifetime (divided by the number of people who live together in that space). And of course, oftentimes near the end of that chain you have new couples moving in together, potentially freeing up more than one home or apartment.

OK, if we're talking about an audience of typical high school students (or the equivalent general public) rather than upperclassmen in a college physics program or similar, then that's a bit harder, since you don't have the option to actually explain what entropy is, or even temperature, for that matter. For most people, temperature is the thing hot things have more of; that's all they know.

The thing that, IIRC, got most glossed over about this topic in high school physics and chemistry is that the Second Law isn't a law of physics at all. It's a law of combinatorics. And at least in theory, the Counting Principle is something we were taught in middle school, albeit in more familiar contexts. S = k*ln(W); the rest is commentary - but that's not poetically satisfying like Kelvin's quote.
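
In case a concrete count helps, here's the combinatorics in its most stripped-down form (my own toy example: particles in the two halves of a box):

```python
# Quick illustration of "it's just counting" (my example, not from the comment):
# N particles, each in the left or right half of a box. The number of microstates W
# for a given left-count is a binomial coefficient, and S = k * ln(W).
import math

N = 100
k = 1.380649e-23  # J/K

for n_left in (50, 90, 100):
    W = math.comb(N, n_left)
    print(n_left, W, k * math.log(W))
# The evenly-split macrostate has ~10^29 microstates; the all-on-one-side macrostate
# has exactly 1. "Entropy increases" just means the system wanders into the huge bin.
```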

I think you are essentially looking for a not-too-mathy description of the equipartition theorem. If the energy in a system is distributed randomly among all the places it could theoretically go, then it is at equilibrium and cannot do work. Otherwise, you can make the system do work as you allow it to evolve towards such a distribution.
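
A quick numerical illustration of the "energy spread evenly over the available places" picture (my own toy setup: one velocity component of an argon-like gas):

```python
# Hedged numerical check of the equipartition idea (my own toy setup, not from the
# comment): sample one velocity component of an ideal gas at temperature T and confirm
# the average kinetic energy per quadratic degree of freedom comes out near (1/2)*k*T.
import numpy as np

k, T, m = 1.380649e-23, 300.0, 6.63e-26   # J/K, K, kg (roughly an argon atom)
rng = np.random.default_rng(0)
vx = rng.normal(0.0, np.sqrt(k * T / m), size=1_000_000)  # Maxwell-Boltzmann component

print(np.mean(0.5 * m * vx**2), 0.5 * k * T)  # the two numbers should nearly match
```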

Informally speaking, if there's some constraint keeping the system in a particular configuration, and you relax that constraint in a way that opens up more possibilities, then the system will spontaneously evolve in the direction of more possibilities, and you can (in principle, if you're clever) couple that spontaneous change to some other system to have it do work along the way. Maybe you want something like, "You can't make a system do work unless you expand the number of ways the system can be."
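
Here's a minimal worked example of coupling the spontaneous change to something useful (my own numbers: an ideal gas doubling its volume):

```python
# Small worked example (my own numbers, not from the comment): let one mole of ideal
# gas double its volume. Free expansion gains entropy dS = n*R*ln(2) and does no work;
# a reversible isothermal expansion through a piston opens up the same set of new
# possibilities while delivering W = n*R*T*ln(2) of work along the way.
import math

n, R, T = 1.0, 8.314, 300.0       # mol, J/(mol*K), K
dS = n * R * math.log(2)          # entropy increase from doubling the volume, J/K
W  = n * R * T * math.log(2)      # maximum work extractable from that same doubling, J

print(dS, W)                      # ≈ 5.76 J/K and ≈ 1729 J
```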

I'm a little confused what the goal is here? Are we trying to find the 'best' intuitive description of the Second Law? The best way to quantify its application to some specific type of physical process the way the 2008 paper cited does? Or are you claiming there is actually some flaw in the standard descriptions of how the Second Law arises from stat mech considerations? 

As a matter of engineering, "How do we extract work from this system?" was the practical question that needed solving, starting from the days of Watt. We keep finding new and better ways to do that, using more kinds of power sources. We also get better at measuring and monitoring and controlling all the relevant variables.

As a matter of physics, Gibbs and Boltzmann 'subsumed' Kelvin quite nicely. Energy gets transferred between degrees of freedom in a system in all kinds of ways, but some arrangements are indistinguishable in terms of the parameters we measure, like pressure and volume, and the states that can happen more ways happen more often. It's just the counting principle. The rest follows from that. That's really all it takes to get to 'Entropy increases with time, and will not spontaneously decrease in a closed system of any appreciable size, and you can't extract work from a system while reducing its entropy or holding its entropy constant.'

Few people know this, but boiling is a cooling effect. 

True for the general public, but if there's anywhere that this is true of college juniors or seniors studying physics, chemistry, materials science, or at least several other fields, then I would say about the program that taught them what Feynman said about physics education in Brazil: there isn't any thermodynamics being taught there.

This is a fun demonstration I have shown students

It is a fun demonstration! What age are you teaching? 

Also, I think you've set your Planet X example quite a bit farther from home than it needs to be. This looks like a perfectly normal thermodynamic half-cycle - basically half of the Otto cycle that our car ICEs are based on. The pressurized water boils because drilling opens a path to lower pressure, creating a non-equilibrium pressure differential. Boiling converts the pressure difference into a temperature difference. The liquid undergoes isochoric heating, while the steam undergoes isentropic (adiabatic) expansion. It's an incomplete cycle because nothing is replenishing the heat or the water in the example as described, so over time the extraction of work cools the planet down and makes further extraction less and less efficient, and eventually you also run out of water pockets. 
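
To make the "pressure difference becomes a temperature difference" step concrete, here's the ideal-gas isentropic relation with some made-up but plausible numbers:

```python
# Rough sketch of the isentropic-expansion step (my own illustrative numbers, not from
# the post): an ideal-gas adiabatic expansion obeys T2 = T1 * (P2/P1)**((gamma-1)/gamma).
gamma = 1.33                      # heat-capacity ratio, roughly right for steam
T1, P1 = 500.0, 10e5              # K and Pa: hot, pressurized steam at the borehole
P2 = 1e5                          # Pa: ambient pressure it expands against

T2 = T1 * (P2 / P1) ** ((gamma - 1) / gamma)
print(T2)                         # ≈ 282 K: the expansion itself cools the working fluid
```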

I will definitely be checking out those books, thanks, and your response clarified the intent a lot for me.

As for where new metaphors/mechanisms come from, and whether they're ever created out of nothing, I think that is very, very rare, probably even rarer than it seems. I have half-joked with many people that at some level there are only a few fundamental thoughts humans are capable of having, and the rest is composition (yes, this is metaphorically borrowed from the idea of computers with small instruction sets). But more seriously, I think it's mostly metaphors built on other metaphors, all the way down.

I have no idea how Faraday actually came up with the idea of force lines, but it looks like that happened a couple of decades after the first known use of isotherms, and a few more decades after the first known use of contour lines, with some similar examples dating back to the 1500s. The early examples I can quickly find were mostly isobaths, mapping the depth of water for navigation starting in the Age of Exploration. Plus, there's at least one early use of isogons, lines of equal magnetic declination, also for navigation. AFAICT Faraday added the idea of direction to such lines, long before anyone else formalized the idea of vectors. But I can still convince myself, if I want, that it is a metaphor building on a previous well-known metaphor.

If I had to guess a metaphor for Newton, yes I think clockwork is part of it, but mathematically I'd say it's partly that the laws of nature are written in the language of geometry. Not just the laws of motion, but also ray optics.

Agreed on all counts. I really, genuinely do hope to see your attempt at such a benchmark succeed, and believe that such is possible.

(1) I agree, but don't have confidence that this alternate approach results in faster progress. I hope I'm proven wrong.

(4) Also agreed, but I think this hinges on whether the failing plans are attempted in such a way that they close off other plans, either by affecting planning efforts or by affecting reactions to various efforts.

(5) Fair enough. 

Liron: Carl Feynman. What is your P(Doom)?

Carl: 43%.

Comments like this always remind me of the Tetlock result that forecasters who report probability estimates using more-precise, less-round numbers do in fact outperform others, and are more correctly incorporating the sources of information available.
