AnthonyC

I'm a little confused about what the goal is here. Are we trying to find the 'best' intuitive description of the Second Law? The best way to quantify its application to some specific type of physical process, the way the 2008 paper cited does? Or are you claiming there is actually some flaw in the standard descriptions of how the Second Law arises from stat mech considerations?

As a matter of engineering, "How do we extract work from this system?" was the practical question that needed solving, starting from the days of Watt. We keep finding new and better ways to do that, using more kinds of power sources. We also get better at measuring and monitoring and controlling all the relevant variables.

As a matter of physics, Gibbs and Boltzmann 'subsumed' Kelvin quite nicely. Energy gets transferred between degrees of freedom in a system in all kinds of ways, but some arrangements are indistinguishable in terms of the parameters we measure, like pressure and volume, and the states that can happen more ways happen more often. It's just the counting principle. The rest follows from that. That's really all it takes to get to 'Entropy increases with time, and will not spontaneously decrease in a closed system of any appreciable size, and you can't extract work from a system while reducing its entropy or holding its entropy constant.'
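As a toy sketch of that counting argument (mine, not anything from the original discussion, with N independent two-state degrees of freedom standing in for a real system): the macrostates that can be realized in more ways dominate overwhelmingly, and the log of the multiplicity is the Boltzmann entropy in units of k_B.

```python
from math import comb, log

# Toy version of the counting principle: N independent two-state degrees of
# freedom, every microstate equally likely. Macrostates (here, "k up out of N")
# that can happen more ways happen more often, and ln(multiplicity) is the
# dimensionless Boltzmann entropy S/k_B = ln W.

N = 100
total_microstates = 2 ** N
for k in (0, 10, 25, 50):
    W = comb(N, k)  # number of microstates consistent with this macrostate
    print(f"k={k:3d}  W={W:.3e}  P={W / total_microstates:.3e}  S/k_B={log(W):.2f}")
```

The k=50 macrostate outnumbers the k=0 one by a factor of roughly 10^29, which is essentially the whole content of 'entropy increases' once the system has any appreciable size.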

> Few people know this, but boiling is a cooling effect.

True for the general public, but if there's anywhere that this is true of college juniors or seniors studying physics, chemistry, materials science, or at least several other fields, then I would say about the program that taught them what Feynman said about physics education in Brazil: there isn't any thermodynamics being taught there.

> This is a fun demonstration I have shown students

It is a fun demonstration! What age are you teaching? 

Also, I think you've set your Planet X example quite a bit farther from home than it needs to be. This looks like a perfectly normal thermodynamic half-cycle - basically half of the Otto cycle that our car ICEs are based on. The pressurized water boils because drilling opens a non-equilibrium pressure differential, and boiling converts the pressure difference into a temperature difference. The liquid undergoes isochoric heating, while the steam undergoes isentropic (adiabatic) expansion. It's an incomplete cycle because nothing is replenishing the heat or the water in the example as described, so over time the extraction of work cools the planet down and makes further extraction less and less efficient, and eventually you also run out of water pockets.
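For concreteness, here is a rough sketch of the work available from the isentropic expansion leg, treating the released steam as an ideal gas obeying PV^γ = const; the numbers below are my illustrative assumptions, not anything from the Planet X example as stated.

```python
# Back-of-the-envelope work from isentropic (adiabatic) expansion of the steam,
# using the ideal-gas result W = (P1*V1 - P2*V2) / (gamma - 1).
# All numbers are assumed placeholders for illustration.

gamma = 1.33                  # heat capacity ratio, roughly right for steam
P1, V1 = 2.0e6, 1.0           # assumed initial pressure [Pa] and volume [m^3] of released steam
P2 = 1.0e5                    # assumed ambient pressure it expands against [Pa]

V2 = V1 * (P1 / P2) ** (1 / gamma)       # isentropic relation: P1*V1**gamma = P2*V2**gamma
W = (P1 * V1 - P2 * V2) / (gamma - 1)    # work done by the gas during expansion [J]
print(f"expansion ratio ≈ {V2 / V1:.1f}x, work ≈ {W / 1e6:.2f} MJ")
```

As the reservoir cools and the pockets' pressure drops, P1 falls toward P2 and the extractable work per unit of steam shrinks toward zero, which is the diminishing-returns point above.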

I will definitely be checking out those books, thanks, and your response clarified the intent a lot for me.

As for where new metaphors/mechanisms come from, and whether they're ever created out of nothing, I think that is very, very rare, probably even rarer than it seems. I have half-joked with many people that at some level there are only a few fundamental thoughts humans are capable of having, and the rest is composition (yes, this is metaphorically coming from the idea of computers with small instruction sets). But more seriously, I think it's mostly metaphors built on other metaphors, all the way down.

I have no idea how Faraday actually came up with the idea of force lines, but it looks like that happened a couple decades after the first known use of isotherms, and a few more decades after the first known use of contour lines, with some similar examples dating back to the 1500s. The early examples I can quickly find were mostly about isobaths, mapping the depth of water for navigation starting in the Age of Exploration. Plus, there's at least one use of isogons, lines of equal magnetic inclination, also for navigation. AFAICT Faraday added the idea of direction to such lines, long before anyone else formalized the idea of vectors. But I can still convince myself, if I want, that it is a metaphor building on a previous well-known metaphor.

If I had to guess a metaphor for Newton, yes I think clockwork is part of it, but mathematically I'd say it's partly that the laws of nature are written in the language of geometry. Not just the laws of motion, but also ray optics.

Agreed on all counts. I really, genuinely do hope to see your attempt at such a benchmark succeed, and believe that such is possible.

(1) I agree, but don't have confidence that this alternate approach results in faster progress. I hope I'm proven wrong.

(4) Also agreed, but I think this hinges on whether the failing plans are attempted in such a way that they close off other plans, either by affecting planning efforts or by affecting reactions to various efforts.

(5) Fair enough. 

> Liron: Carl Feynman. What is your P(Doom)?
>
> Carl: 43%.

Comments like this always remind me of the Tetlock result that forecasters who report probability estimates using more-precise, less-round numbers do in fact outperform others, and are more correctly incorporating the sources of information available.

I'm curious if you have an opinion on the relative contributions of different causes, such as:

  1. Inability of individuals to think outside established metaphors, without realizing they're inadequate
  2. Inability of individuals to think outside established metaphors, even while knowing they're inadequate
  3. Inability of individuals to think of better new metaphors
  4. Inability to have public conversations through low-bandwidth channels without relying on established metaphors, whether or not the individuals on either end know they're inadequate

I'm thinking (as an example) of Newton, who used calculus to get his results but translated the results out of calculus in order to publish. This let other people see the results were right, but not how anyone could have come up with them. Without that known physics payoff communicated through inadequate tools, there wouldn't have been enough impetus (pun intended) to push the relevant community of people to learn calculus.

> It's kind of fun to picture AI agents working during the day and resting at night. Maybe that's the true AGI moment.

In context, this will depend on the relative costs of GPUs and energy storage, or the relative value of AI vs other uses of electricity that can be time-shifted. I would happily run my dryer or dishwasher during the daytime instead of at night in order to get paid to let OpenAI deliver a few million extra tokens. Liberalizing current electricity market participation and the ability to provide ancillary services has a lot of unrealized potential. If you have AGI, and an actually-well-functioning electricity market, it's not likely to be the AI that has to shut down first when solar production decreases.
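As a minimal sketch of that tradeoff (every figure below is my own assumed placeholder, not a real price), compare the capital cost of letting GPUs sit idle through the night against the cost of cycling batteries to keep them running:

```python
# Illustrative comparison: forgone GPU capital vs. battery storage.
# Every figure is an assumed placeholder, not a real price quote.

gpu_capex_per_kw = 30_000          # assumed $ of GPU + datacenter capital per kW of IT load
gpu_lifetime_hours = 5 * 8760      # assumed 5-year useful life

# Cost of idling: capital that earns nothing during the hours it sits unused.
idle_cost_per_kwh = gpu_capex_per_kw / gpu_lifetime_hours

# Cost of storage: battery capital spread over its cycle life.
battery_capex_per_kwh = 300        # assumed $ per kWh of storage capacity
battery_cycle_life = 4_000         # assumed full charge/discharge cycles
storage_cost_per_kwh = battery_capex_per_kwh / battery_cycle_life

print(f"idle GPU capital ≈ ${idle_cost_per_kwh:.2f} per kWh of load not served")
print(f"battery cycling  ≈ ${storage_cost_per_kwh:.2f} per kWh shifted")
```

Under these assumptions the forgone GPU capital exceeds the storage cost by roughly an order of magnitude, which is the intuition behind expecting dryers and dishwashers, not the AI, to be what gets time-shifted.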

This would be great to have, for sure, and I wish you luck in working on it!

I wonder if, for the specific types of discussions you point to in the first paragraph, it's necessary or even likely to help? Even if all the benchmarks today are 'bad' as described, they measure something, and there's a clear pattern of rapid saturation as new benchmarks are created. METR and many others have discussed this a lot. There have been papers on it. It seems like the meta-level approach of mapping out saturation timelines should be sufficient to convince people that for any given capability they can define, if they make a benchmark for it, AI will acquire that capability at the level the benchmark can measure. In practice, what follows is usually some combination of pretending it didn't happen, or else denying the result means anything and moving the goalposts. For a lot of people I end up in those kinds of discussions with, I don't think much would help beyond literally seeing AI put them and millions of others permanently out of work, and even then I'm not sure.

Upvoted - I do think lack of a coherent, actionable strategy that actually achieves goals if successful is a general problem of many advocacy movements, not just AI. A few observations:

(1) Actually-successful historical advocacy movements that solved major problems usually did so incrementally over many iterations, taking the wins they could get at each moment while putting themselves in position to take advantage when further opportunities arose.

(2) Relatedly, don't complain about incremental improvements (yours or others'). Celebrate them, or no one will want to work with you or compromise with you, and you won't end up in position to get more wins later.

(3) Raising awareness isn't a terminal goal or a solution, but it gives others a reason to pay attention to you at all. If you have actually good proposals for what to do about a problem, and are in a position to make the case that your proposals are effective and practical, then a perception that the problem is real and a solution is necessary can be very helpful. If a politician solves a major problem that is not yet a crisis, or is not seen as a crisis by their constituents, then solving the problem just looks like wasting money/time/effort to the people that decide if they get to keep their jobs.

(4) Don't plan a path that leads to victory, plan so that all paths lead to victory. If you make a plan, any plan, to achieve an outcome that is sufficient, it will require many things to go right, and therefore will not work, for reasons you fail to anticipate, and it will also taint your further planning efforts along predetermined directions, limiting your ability to adapt to future opportunities and setbacks. Avoiding this failure mode is part of the upshot of seeking and celebrating incremental wins unreservedly and consistently, as long as those wins don't cut off the path to further progress.

(5) Being seen to have a long-term plan that no one currently in power would support seems like a quick way to get shut out of a conversation unless you already have some form of power such that you're hard to ignore. 

I was so glad the other day to see Nate Soares talk about the importance of openly discussing x-risks, and also to see the recent congressional hearings that actually started to ask about real AI risks, because these are openings to push the conversation in useful directions. I genuinely worry that AI safety orgs and advocates will make the mistakes that e.g. climate change activists often make: shutting down proposals that are clearly net improvements likely to increase public support for further action, in favor of (in practice) counterproductively maintaining the status quo and turning people off. I started openly discussing x-risk with more and more people in my life last year, and found that people are quite receptive to it when it comes from someone they know and trust to be generally reasonable.

I do think there is value in having organizations around with the kinds of plans you are discussing, but I don't think, in general, those are the ones that actually get the opportunity to make big wins. I think they serve as generators of ideas that get filtered through more incremental and 'moderate' organizations over time, and make those other organizations seem like better partners to collaborate with. I don't have good data for this, more a general intuition from looking at a few historical examples.
