I believe the intention in Georgism is to levy a tax that eliminates appreciation in the value of the land. This is effectively the same as renting the land from the government. You are correct that it would prevent people from investing in land -- investing in land itself is purely rent-seeking behavior and benefits no one; building improvements on the land (e.g. mines, factories, apartment complexes) that generate value, however, does.

Answer by waterlubber

It could be using nonlinear optical shenanigans for CO2 measurement. I met someone at NASA who was using optical mixing -- essentially measuring a beat frequency -- to measure atmospheric CO2 with all solid-state COTS components (based on absorption of solar radiation). The technique was called optical heterodyne detection.

I've also seen some mid-IR LEDs being sold, although none near the 10 µm CO2 wavelength.

COTS CO2 monitors exist for ~$100 and could probably be modified to measure breathing gases, though they'll likely be extremely slow.

The cheapest way to measure CO2 concentration, though likely the least accurate and slowest, would be with the carbonic acid equilibrium reaction in water and a pH meter.
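
As a rough sketch of that chemistry (assuming pure water at 25 °C, textbook equilibrium constants, and CO2 as the only acid present):

```python
K_H = 3.4e-2   # Henry's law constant for CO2 in water at 25 C, mol/(L*atm)
K_1 = 4.45e-7  # first dissociation constant of carbonic acid at 25 C

def pco2_from_ph(ph):
    """Estimate the CO2 partial pressure (atm) over pure water from its pH.

    With CO2 as the only acid present, [H+] ~= sqrt(K_1 * K_H * pCO2).
    """
    h = 10 ** (-ph)
    return h ** 2 / (K_1 * K_H)

print(pco2_from_ph(5.6))  # ~4e-4 atm, i.e. roughly ambient air
```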

Ultimately, the reason it's not popular is probably that it doesn't seem that useful. Breathing is automatic and regulated by blood CO2 concentration; I find it hard to believe that the majority of the population, with otherwise normal respiratory function, would be so far off the mark. Is there strong evidence to suggest this is the case?

Strongly agree. I see many, many others use "intelligence" as their source of value for life -- i.e., humans are sentient creatures and therefore worth something -- without seriously considering the consequences and edge cases of that decision. Perhaps this view was popularized by science fiction that used interspecies xenophobia as an allegory for racism; nonetheless, it's a somewhat extreme position to stick to if you genuinely believe in it. I shared a similar opinion a couple of years ago, but shifted to a human-focused terminal value some months back because I did not like the conclusions it generated when taken to its logical end in present and future society.

Aside from dissociation/bond energy, nearly all of the energy in the combustion chamber is kinetic. Hill's Mechanics and Thermodynamics of Propulsion gives a very useful figure for the energy balance.

A good deal of the energy in the exhaust is still locked up in various high-energy states; these states are primarily related to the degrees of freedom of the gas (and thus gamma) and are more strongly occupied at higher temperatures. I think the lighter molecular weight gases have correspondingly less energy here, but I'm not entirely sure. This might be something to look into.

Posting this graph has got me confused as well, though. I was going to write about how there's more energy tied up in the enthalpy of the gas in the exhaust, but that wouldn't make sense - lower MW propellants have a higher specific heat per unit mass, and thus would retain more energy at the same temperature. 

I ran the numbers in Desmos for perfect combustion, an infinite nozzle, and no dissociation, and the result was still there, but quite small:
https://www.desmos.com/calculator/lyhovkxepr
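
If you want to poke at the same relation outside Desmos, here's a minimal Python sketch (standard ideal-rocket assumptions: frozen flow, no dissociation; the function name and propellant numbers are just illustrative):

```python
import math

R_U = 8.314  # universal gas constant, J/(mol*K)

def ideal_exhaust_velocity(T0, mw, gamma, p_ratio):
    """Ideal (isentropic, frozen-flow) exhaust velocity in m/s.

    T0      -- chamber temperature, K
    mw      -- exhaust molecular weight, kg/mol
    gamma   -- ratio of specific heats
    p_ratio -- exit pressure / chamber pressure (0 for an infinite nozzle)
    """
    return math.sqrt(
        2 * gamma / (gamma - 1)
        * (R_U * T0 / mw)
        * (1 - p_ratio ** ((gamma - 1) / gamma))
    )

# Same trend as the Desmos plot: lighter exhaust wins at equal temperature.
for mw in (0.002, 0.018, 0.044):  # roughly H2, H2O, CO2 in kg/mol
    print(mw, round(ideal_exhaust_velocity(3000, mw, 1.2, 0.0)))
```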

The one thing to note: the ideal occurs where the gas has the highest speed of sound. I really can't think of any intuitive way to write this other than "nozzles are marginally more efficient at converting the energy of lighter molecular weight gases from thermal-kinetic to macroscopic kinetic."

You've hit the nail on the head here. Aside from the practical limits of high-temperature combustion (running at a lower chamber temperature allows for lighter combustion chambers, or practical ones at all), the various advantages of a lighter exhaust more than make up for the slightly lower combustion energy. The practical limits are often important: if your maximum chamber temperature is limited, it makes a ton of sense to run fuel-rich to bring it into an acceptable range.

One other thing to mention is that the speed of sound of the exhaust matters quite a lot. Given the same nozzle area ratio and the same gamma, the exhaust Mach number is constant; a higher speed of sound thus yields a higher exhaust velocity.
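
In symbols, assuming an ideal gas at the nozzle exit: v_e = M_e * a_e, where a_e = sqrt(gamma * R_u * T_e / MW). With M_e pinned by the area ratio and gamma, a lighter exhaust raises a_e and therefore v_e directly.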

The effects of dissociation vary depending on application. It's less of an issue with vacuum nozzles, whose large area ratio and low exhaust temperature allow some recombination. For atmospheric engines, nozzles are quite short, so there's little time for gases to recombine.

I'd recommend playing around with CEA (https://cearun.grc.nasa.gov/), which lets you run through a lot of propellant combinations quickly.

I'd also like to mention that some coefficients used in nozzle design might make things easier to reason about. Thrust coefficient and characteristic velocity are the big ones; see an explanation here.

Note that exhaust velocity is proportional to the square root of (T_0/MW), where T_0 is chamber temperature.

Thrust coefficient, which describes the effectiveness of a nozzle, depends purely on the area ratio, back pressure, and specific heat ratio of the gas.
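
As a sketch of how that split works (textbook ideal-nozzle formulas; the numbers are illustrative, not from any particular engine):

```python
import math

R_U = 8.314  # universal gas constant, J/(mol*K)

def c_star(T0, mw, gamma):
    """Characteristic velocity: captures the combustion side (T0, MW, gamma)."""
    R = R_U / mw  # specific gas constant
    return math.sqrt(R * T0 / gamma) * ((gamma + 1) / 2) ** (
        (gamma + 1) / (2 * (gamma - 1))
    )

def thrust_coefficient(gamma, p_ratio):
    """Ideal C_F for a matched nozzle (exit pressure = back pressure).

    Depends only on gamma and the pressure ratio -- not on T0 or MW.
    """
    return math.sqrt(
        (2 * gamma**2 / (gamma - 1))
        * (2 / (gamma + 1)) ** ((gamma + 1) / (gamma - 1))
        * (1 - p_ratio ** ((gamma - 1) / gamma))
    )

# Exhaust velocity factors cleanly: v_e = c* (combustion) x C_F (nozzle).
print(c_star(3000, 0.018, 1.2) * thrust_coefficient(1.2, 0.01))  # ~3000 m/s
```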

You're right about intuitive explanations of this being few and far between. I couldn't even get one out of my professor when I covered this in class.

To summarize:

  1. Only gamma, molecular weight, chamber temperature T0, and nozzle pressures affect ideal exhaust velocity.
  2. Given a chamber pressure, gamma, and back pressure (chamber pressure being engineering-limited), a perfect nozzle will expand your exhaust to a fixed Mach number, regardless of original temperature.
  3. Lower molecular weight gases have a higher exhaust velocity at the same Mach number.
  4. Dissociation effects make it more efficient to avoid maximizing temperature in favor of lowering molecular weight.

This effect is incredibly strong for nuclear thermal engines: since they run at a fixed, relatively low, engineering-limited temperature, they see enormous specific impulse gains from using as light a propellant as possible.
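
To put rough numbers on that (same ideal-velocity relation as above; the 2800 K core temperature and gamma values are ballpark guesses, not data for any real engine):

```python
import math

R_U = 8.314  # universal gas constant, J/(mol*K)

def vacuum_exhaust_velocity(T0, mw, gamma):
    # Ideal exhaust velocity with complete expansion (exit pressure -> 0).
    return math.sqrt(2 * gamma / (gamma - 1) * R_U * T0 / mw)

# Same core temperature, different propellants: hydrogen's low MW dominates.
print(vacuum_exhaust_velocity(2800, 0.002, 1.40))  # hydrogen: ~9,000 m/s
print(vacuum_exhaust_velocity(2800, 0.018, 1.30))  # steam:    ~3,300 m/s
```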

You might be able to just survey the thing. If you've got a good floor plan and can borrow some surveying equipment, you should be able to take angles to the top and work out the height that way. Your best bet would probably be to use averaged GPS measurements, or existing surveys, to get an accurate horizontal distance to the spire, then take the angle from the base to the top and work out the trig. You might even be able to get away with a plain camera, if you can correct for the lens distortion.
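
The trig itself is simple; a minimal sketch, assuming a level instrument on level ground (every name here is made up for illustration):

```python
import math

def spire_height(horizontal_dist_m, elevation_angle_deg, instrument_height_m=1.5):
    """Height of the target above the ground at the instrument's base.

    Assumes the measured distance is horizontal and the ground is level.
    """
    return instrument_height_m + horizontal_dist_m * math.tan(
        math.radians(elevation_angle_deg)
    )

print(spire_height(120.0, 35.0))  # ~85.5 m for a 35-degree sight at 120 m
```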

I believe this is going to be vastly more useful for commercial applications than consumer ones. Architecture firms are already using VR to demonstrate design concepts - imagine overlaying plumbing and instrumentation diagrams over an existing system, to ease integration, or allowing engineers to CAD something in real time around an existing part. I don't think it would replace more than a small portion of existing workflows, but for some fields it would be incredibly useful.

This seems like a behavior that might have been trained in rather than something emergent. 

As silly as it is, the viral spread of deepfaked president memes and AI content would probably serve to inoculate the populace against serious disinformation - "oh, I've seen this already, these are easy to fake." 

I'm almost certain the original post is a joke, though. All of its suggestions are the opposite of anything you might consider a good idea.

That makes a lot of sense, and I should have considered that the training data, of course, couldn't have been predicted. I didn't even consider RLHF -- I think there are definitely behaviors where models will intentionally avoid predicting text they "know" will result in a continuation that will be punished. This is a necessity, as otherwise models would happily continue with some idea before abruptly ending it because it was too similar to something punished via RLHF.

I think this means that these "long term thoughts" are encoded into the predictive behavior of the model during training, rather than arising from any sort of meta-learning. An interesting experiment would be including some sort of token that indicates whether RLHF will be used during training, then seeing how this affects the behavior of the model.

For example, apply RLHF normally, except when the token [x] appears. In that case, do not apply any feedback - this token directly represents an "out" for the AI.
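
A minimal sketch of the shape of that training loop (every name here is hypothetical; this isn't any real RLHF library's API):

```python
OPT_OUT_TOKEN = "[x]"  # hypothetical marker: feedback is skipped when present

def apply_feedback(episode, reward, update_model):
    """Apply RLHF-style feedback unless the episode opted out.

    episode      -- the generated token sequence, as text
    reward       -- scalar score from a human or reward model
    update_model -- callback that performs the actual policy update
    """
    if OPT_OUT_TOKEN in episode:
        return  # the model's "out": this episode contributes no gradient
    update_model(episode, reward)
```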

You might even be able to follow it through the network and see what effects the feedback has.

Whether this idea is practical or not requires further thought. I'm just writing it down now, late at night, because I figure it's useful enough to possibly be made into something meaningful.
