ryan_b

No matter how much I try, I just cannot force myself to buy the premise of replacement of human labor as a reasonable goal. Consider the apocryphal quote: 

If I had asked people what they wanted, they would have said faster horses. –Henry Ford

I'm clearly in the wrong here, because every CEO who talks about the subject talks about faster horses[1], and here we have Mechanize whose goal is to build faster horses, and here is the AI community concerned about the severe societal impacts of digital horse shit.

Why, exactly, are all these people who are into accelerating technical development and automation of the economy working so hard at cramming the AI into the shape of a horse?

  1. ^

    For clarity, faster horses here is a metaphor for the AI just replacing human workers at their existing jobs.

  • How do we count specialized language? By this I mean stuff like technical or scientific specialties, which are chock-full of jargon. The more specialized they are, the less they share with related topics. I would expect we do a lot more jargon generating now than before, and jargon words are mostly stand-ins for entire paragraphs (or longer) of explanation.
  • Related to jargon: academic publishing styles. Among other things, academic writing style is notorious for being difficult for outsiders to penetrate and for making no accommodation for the reader at all (even the intended audience). I have the sense that papers in research journals have almost evolved in the opposite direction, although I note my perception is based on examples of older papers with an excellent reputation, which is a strong survivorship bias. Yet those papers were usually the papers that launched new fields of inquiry; it seems to me they require stylistic differences like explaining intuitions, because the information is not there otherwise.
  • Unrelated to the first two, it feels like we should circle back to the relationship between speaking and writing. How have sentences and wordcount fared when spoken? We have much less data for this because it requires recording devices, but I seem to recall this being important to settling the question of whether the Iliad could be a written-down version of oral tradition. The trick there was they recorded some bards in Macedonia in the early 20th century performing their stories, transcribed the recordings, and then found them to be of comparable length to Homer. Therefore, oral tradition was ruled in.

Good fiction might be hard, but that doesn’t much matter to selling books. This thing is clearly capable of writing endless variations on vampire romances, Forgotten Realms or Magic the Gathering books, Official Novelization of the Major Motion Picture X, etc.

Writing as an art will live. Writing as a career is over.

And all of this will happen far faster than it did in the past, so people won’t get a chance to adapt. If your job gets eliminated by AI, you won’t even have time to reskill for a new job before AI takes that one too.

 

I propose an alternative to speed as explanation: all previous forms of automation were local. Each factory had to be automated in bespoke fashion, one at a time; a person could move from a factory that had been automated to any other factory that had not been yet. The automation equipment had to be made somewhere and then moved to where the automation was happening.

By contrast, AI is global. Every office on earth can be automated at the same time (relative to historical timescales). There's no bottleneck chain where the automation has to be deployed to one locality, after being assembled in a different locality, from parts made in many different localities. The limitations are network bandwidth and available compute, both of which are shared resource pools and complements besides.

I like this effort, and I have a few suggestions:

  • Humanoid robots are much more difficult than non-humanoid ones. There are a lot more joints than in other designs; the balance question demands both more capable components and more advanced controls; as a consequence of the balance and shape questions, a lot of thought needs to go into wrangling weight ratios, which means preferring more expensive materials for lightness, etc.
  • In terms of modifying your analysis, I think this cashes out as greater material intensity - the calculations here are done by weight of materials, so we just need a way to account for the humanoid robot requiring more processing on all of those materials. We could say something like 1500kg of humanoid robot materials take twice as much processing/refinement as 1500kg of car materials (occasionally this will be about the same; for small fractions of the weight it will be 10x the processing, etc). A rough sketch of this adjustment follows the list.
  • The humanoid robots are more vulnerable to bottlenecks than cars. Specifically they need more compute and rare earth elements like neodymium, which will be tough because that supply chain is already strained by new datacenters and AI demands.
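To make the material-intensity adjustment concrete, here is a minimal sketch of the kind of calculation I mean. Every number in it is an assumption invented for illustration (the bills of materials and the processing factors are placeholders, not data); the point is only that equal weights of material can carry very different processing burdens.

```python
# Minimal sketch: adjust a weight-based materials estimate with an assumed
# processing-intensity multiplier per material. Every number below is an
# illustrative placeholder, not a measured value.

# Hypothetical ~1500 kg bills of materials (kg per unit produced).
car_materials = {"steel": 950, "aluminum": 200, "plastics": 250, "copper": 60, "electronics": 40}
robot_materials = {"aluminum": 600, "titanium": 100, "plastics": 400, "copper": 200, "electronics": 200}

# Assumed processing/refinement intensity relative to car-grade steel (= 1.0).
processing_factor = {
    "steel": 1.0,
    "aluminum": 2.0,      # assumption: more energy-intensive refinement
    "titanium": 10.0,     # assumption: small fraction of weight, ~10x the processing
    "plastics": 1.5,
    "copper": 2.0,
    "electronics": 10.0,  # assumption: chips, sensors, magnets
}

def processing_weighted_mass(bill_of_materials):
    """Total mass, weighted by how much processing each material requires."""
    return sum(kg * processing_factor[m] for m, kg in bill_of_materials.items())

car = processing_weighted_mass(car_materials)
robot = processing_weighted_mass(robot_materials)
print(f"car:   {car:,.0f} processing-weighted kg")
print(f"robot: {robot:,.0f} processing-weighted kg")
print(f"robot/car ratio: {robot / car:.2f}")
```

With these placeholder numbers the robot comes out a bit over twice as processing-intensive as the car per unit weight, which is the shape of adjustment I have in mind.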

This is a fun idea! I was recently poking at field line reconnection myself, in conversation with Claude.

I don't think the energy balance turns out in the idea's favor. Here are the heuristics I considered:

  • The first thing I note is what happens during reconnection: a bunch of the magnetic energy turns into kinetic and thermal energy. The part you plan to harvest is just the electric field part. Even in otherwise ideal circumstances, that's a substantial loss.
  • The second thing I note is that in a fusion reactor, the magnetic field is already being generated by the device, via electromagnets. This makes the process look like putting current in to build a magnetic field, then breaking that field in order to get less current back out (because of the first note).
  • The third thing I note is that reconnection is about the reconfiguration of the magnetic field lines. I'm highly confident that the electric fields which appear when the lines break determine how the lines reconnect, so if you induct all of that energy out, the reconnection will look different than it otherwise would have. Mostly this would cash out as a weaker magnetic field than there would be otherwise, driving more recharging of the magnetic field and making the balance worse. A rough energy-accounting sketch follows this list.
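To put rough numbers on those heuristics, here is a minimal energy-accounting sketch. All of the fractions are assumptions chosen only to show the direction of the argument, not measured plasma-physics values.

```python
# Rough energy-accounting sketch for harvesting reconnection energy inside a
# fusion device. Every fraction here is an assumption picked to illustrate the
# direction of the argument, not a measured plasma-physics value.

stored_magnetic_energy = 1.0   # normalize the field energy that reconnects away

# Heuristic 1: reconnection converts magnetic energy into kinetic, thermal,
# and electric-field channels; only the electric part is what you harvest.
fraction_to_electric = 0.3     # assumed; the rest becomes kinetic/thermal energy
harvest_efficiency = 0.8       # assumed conversion of the electric part into current

recovered = stored_magnetic_energy * fraction_to_electric * harvest_efficiency

# Heuristic 2: the field was built by electromagnets in the first place, so the
# round trip is current -> field -> (partially) current. At minimum you must
# restore the energy that reconnected away.
recharge_cost = stored_magnetic_energy

net = recovered - recharge_cost
print(f"recovered: {recovered:.2f}  recharge cost: {recharge_cost:.2f}  net: {net:.2f}")
# Any fraction_to_electric < 1 makes the net negative, and heuristic 3 (a weaker
# field after harvesting, needing more recharge) only pushes it further down.
```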

All of that being said, Claude and ChatGPT both respond well to sanity checking. You can say directly something like: "Sanity check: is this consistent with thermodynamics?"

I also think that ChatGPT misleadingly treated the magnetic fields and electric fields as being separate because it was using an ideal MHD model, where this is common due to the simplifications the model makes. In my experience Claude, at least, catches a lot of confusion and oversights if you ask specifically about the differences between the physics and the model.

Regarding The Two Cultures essay:

I have gained so much buttressing context from reading dedicated history about science and math that I have come around to a much blunter position than Snow's. I claim that an ahistorical technical education is technically deficient. If a person reads no history of math, science, or engineering, then they will be a worse mathematician, scientist, or engineer, full stop.

Specialist histories can show how the big problems were really solved over time.[1] They can show how promising paths still wind up being wrong, and the important differences between the successful method and the failed one. They can show how partial solutions relate to the whole problem. They can show how legendary geniuses once struggled with the same concepts that you now struggle with.

 

  1. ^

    Usually - usually! As in a majority of the time! - this does not agree with the popular narrative about the problem.

I would like to extend this slightly by switching perspective to the other side of the coin. The drop-in remote worker is not a problem of anthropomorphizing AI, so much as it is anthropomorphizing the need in the first place. Companies create roles with the expectation people will fill them, but that is the habit of the org, not the threshold of the need.

Adoption is being slowed down considerably by people asking for AI to be like a person, so we can ask that person to do some task. Most companies and people are not asking more directly for an AI to meet a need. Figuring out how to do that is a problem to solve by itself, and there hasn't been much call for it to date.

Why don’t you expect AGIs to be able to do that too?

I do, I just expect it to take a few iterations. I don't expect any kind of stable niche for humans after AGI appears.

I agree that the economic principles conflict; you are correct that my question was about the human labor part. I don't even require that they be substitutes; at the level of abstraction we are working in, it seems perfectly plausible that some new niches will open up. Anything would qualify, even if it is some new-fangled job title like 'adaptation engineer' or something that just preps new types of environments for teleoperation before moving on to the next environment, like some kind of meta railroad gang. In this case the value of human labor might stay sustainably high in terms of total value, but the amplitude of the value would sort of slide into the few AI-relevant niches.

I think this cashes out as Principle A winning out and Principle B winning out looking the same for most people.
