ryan_b

Comments

ryan_b

Good fiction might be hard, but that doesn’t much matter to selling books. This thing is clearly capable of writing endless variations on vampire romances, Forgotten Realms or Magic: The Gathering books, Official Novelization of the Major Motion Picture X, etc.

Writing as an art will live. Writing as a career is over.

ryan_b

> And all of this will happen far faster than it did in the past, so people won’t get a chance to adapt. If your job gets eliminated by AI, you won’t even have time to reskill for a new job before AI takes that one too.

 

I propose an alternative to speed as explanation: all previous forms of automation were local. Each factory had to be automated in bespoke fashion, one at a time; a person could move from a factory that was automated to any other factory that had not been automated yet. The automation equipment had to be made somewhere and then moved to where the automation was happening.

By contrast, AI is global. Every office on earth can be automated at the same time (relative to historical timescales). There's no bottleneck chain where the automation has to be deployed to one locality, after being assembled in a different locality, from parts made in many different localities. The limitations are network bandwidth and available compute, both of which are shared resource pools and complements besides.

ryan_b

I like this effort, and I have a few suggestions:

  • Humanoid robots are much more difficult than non-humanoid ones. There are a lot more joints than in other designs; the balance question demands both more capable components and more advanced controls; as a consequence of the balance and shape questions, a lot of thought needs to go into wrangling weight ratios, which means preferring more expensive materials for lightness, etc.
  • In terms of modifying your analysis, I think this cashes out as greater material intensity - the calculations here are done by weight of materials, so we just need a way to account for the humanoid robot requiring more processing on all of those materials. We could say something like: 1500kg of humanoid robot materials take twice as much processing/refinement as 1500kg of car materials (occasionally this will be about the same; for small fractions of the weight it will be 10x the processing, etc). A toy version of this adjustment is sketched after this list.
  • The humanoid robots are more vulnerable to bottlenecks than cars. Specifically they need more compute and rare earth elements like neodymium, which will be tough because that supply chain is already strained by new datacenters and AI demands.
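
Here is a minimal sketch of what that weight-times-processing adjustment could look like. Every mass fraction and multiplier below is a made-up placeholder for illustration, not a sourced figure:

```python
# Toy adjustment: weight alone understates humanoid robot cost, so weight
# each material class by a processing/refinement multiplier. All numbers
# here are illustrative placeholders, not sourced figures.

car_kg = {"steel": 1000, "aluminum": 250, "plastics": 200, "electronics": 50}
robot_kg = {"aluminum": 600, "composites": 400, "electronics": 300, "actuators": 200}

# Hypothetical processing cost per kg, relative to bulk car steel (= 1.0).
multiplier = {
    "steel": 1.0,
    "aluminum": 2.0,
    "plastics": 1.5,
    "composites": 4.0,
    "electronics": 10.0,
    "actuators": 6.0,
}

def effective_intensity(masses):
    """Sum of kg * multiplier: mass in 'car-steel-equivalent' kilograms."""
    return sum(kg * multiplier[material] for material, kg in masses.items())

car = effective_intensity(car_kg)      # 1000 + 500 + 300 + 500   = 2300
robot = effective_intensity(robot_kg)  # 1200 + 1600 + 3000 + 1200 = 7000
print(f"car: {car:.0f} eq-kg, robot: {robot:.0f} eq-kg, ratio: {robot / car:.1f}x")
```

Tuning those placeholder numbers is where the real analysis would be; the point is only that the same 1500kg of materials can imply very different processing burdens.
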
ryan_b

This is a fun idea! I was recently poking at field line reconnection myself, in conversation with Claude.

I don't think the energy balance turns out in the idea's favor. Here are the heuristics I considered:

  • The first thing I note is what happens during reconnection: a bunch of the magnetic energy turns into kinetic and thermal energy. The part you plan to harvest is just the electric field part. Even in otherwise ideal circumstances, that's a substantial loss.
  • The second thing I note is that in a fusion reactor, the magnetic field is already being generated by the device, via electromagnets. This makes the process look like putting current into a magnetic field and then breaking the magnetic field in order to get less current back out (because of the first note).
  • The third thing I note is that reconnection is about the reconfiguration of the magnetic field lines. I'm highly confident that the electric fields produced when the lines break determine how the lines reconnect, so if you induct all of that energy out, the reconnection will look different than it otherwise would have. Mostly this would cash out as a weaker magnetic field than there would otherwise be, driving more recharging of the magnetic field and making the balance worse. A toy version of this balance is sketched below.
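
To make the balance concrete, here is a toy round-trip calculation under the three notes above; every fraction in it is a placeholder assumption rather than a measured plasma physics value:

```python
# Toy round-trip energy balance for harvesting reconnection energy.
# Every fraction below is an illustrative assumption.

E_input = 1.0             # energy spent driving the electromagnets (normalized)
coil_efficiency = 0.9     # fraction of input stored as magnetic field energy
f_electric = 0.3          # fraction of released magnetic energy in the electric
                          # field; the rest becomes kinetic/thermal (note 1)
harvest_efficiency = 0.8  # fraction of the electric field energy captured

E_magnetic = E_input * coil_efficiency
E_recovered = E_magnetic * f_electric * harvest_efficiency

print(f"recovered / input = {E_recovered / E_input:.2f}")  # 0.22: a net loss
# Note 3 makes it worse: draining the field forces extra recharging.
```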

All of that being said, Claude and ChatGPT both respond well to sanity checking. You can say directly something like: "Sanity check: is this consistent with thermodynamics?"

I also think that ChatGPT misleadingly treated the magnetic fields and electric fields as separate because it was using an ideal MHD model, where this separation is common due to the simplifications the model makes. In my experience, Claude at least catches a lot of confusion and oversights when you ask specifically about the differences between the physics and the model.

ryan_b

Regarding The Two Cultures essay:

I have gained so much buttressing context from reading dedicated history about science and math that I have come around to a much blunter position than Snow's. I claim that an ahistorical technical education is technically deficient. If a person reads no history of math, science, or engineering, then they will be a worse mathematician, scientist, or engineer, full stop.

Specialist histories can show how the big problems were really solved over time.[1] They can show how promising paths still wind up being wrong, and the important differences between the successful method and the failed one. They can show how partial solutions relate to the whole problem. They can show how legendary geniuses once struggled with the same concepts that you now struggle with.

 

  1. Usually - usually! As in a majority of the time! - this does not agree with the popular narrative about the problem.

ryan_b

I would like to extend this slightly by switching perspective to the other side of the coin. The drop-in remote worker is not a problem of anthropomorphizing AI so much as it is anthropomorphizing the need in the first place. Companies create roles with the expectation that people will fill them, but that is the habit of the org, not the threshold of the need.

Adoption is being slowed down considerably by people asking for AI to be like a person, so we can ask that person to do some task. Most companies and people are not asking more directly for an AI to meet a need. Figuring out how to do that is a problem to solve by itself, and there hasn't been much call for it to date.

ryan_b

> Why don’t you expect AGIs to be able to do that too?

I do, I just expect it to take a few iterations. I don't expect any kind of stable niche for humans after AGI appears.

ryan_b

I agree that the economic principles conflict; you are correct that my question was about the human labor part. I don't even require that they be substitutes; at the level of abstraction we are working in, it seems perfectly plausible that some new niches will open up. Anything would qualify, even if it is some new-fangled job title like 'adaptation engineer', or something that just preps new types of environments for teleoperation before moving on to the next environment, like some kind of meta railroad gang. In this case the value of human labor might stay sustainably high in total, but that value would slide into the few AI-relevant niches.

I think this cashes out as Principle A winning out and Principle B winning out looking the same for most people.

ryan_b

> Obviously, at least one of those predictions is wrong. That’s what I said in the post.

Does one of them need to be wrong? What stops a situation where only one niche, or a few, is high value and the rest do not provide enough to eat? This is pretty much exactly how natural selection operates, for example.

ryan_b

I agree fake pictures are harder to threaten with. But consider that the deepfake method makes everyone a potential target, rather than only targeting the population who would fall for the relationship side of the scam.

There are other reasons I think it would be grimly effective, but I am not about to spell it out for team evil.
