Richard Korzekwa

Director at AI Impacts.

Comments

Yeah! I made some lamps using sheet aluminum. I used hot glue to attach magnets, which hold it onto the hardware hanging from the ceiling in my office. You can use dimmers to control the brightness of each color temperature strip separately, but I don't have that set up right now.

Why do you think S-curves happen at all? My understanding is that it's because there's some hard problem that takes multiple steps to solve, and when the last step falls (or a solution is in sight), it's finally worthwhile to toss increasing amounts of investment at actually realizing and implementing the solution.

I think S-curves are not, in general, caused by increases in investment. They're mainly the result of how the performance of a technology changes in response to changes in the design/methods/principles behind it. For example, with particle accelerators, switching from Van de Graaff generators to cyclotrons might give you a few orders of magnitude once the new method is mature. But it takes several iterations to actually squeeze out all the benefits of the improved approach, and the first few and last few iterations give less of an improvement than the ones in the middle.

This isn't to say that the marginal return on investment doesn't factor in. Once you've worked out some of the kinks with the first couple of cyclotrons, it makes more sense to invest in a larger one. This probably makes S-curves more S-like (or more step-like). But I think you'll get them even with steadily increasing investment that's independent of the marginal return.
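Here's a minimal toy model of that claim (my own sketch, all numbers invented, not something from the thread): hold per-iteration investment fixed and let the fraction of the new method's achievable (log-scale) gain follow a logistic in the iteration count. The per-iteration improvement comes out small at first, largest in the middle, and small again at the end, i.e. an S-curve with flat investment.

```python
import numpy as np

# Toy model: every design iteration costs the same fixed investment, but the
# fraction of the new method's potential that has been realized follows a
# logistic curve in the number of iterations. All numbers are made up.
baseline = 1.0        # performance of the mature old method (arbitrary units)
potential = 1000.0    # ceiling of the new method: ~3 orders of magnitude up
midpoint, steepness = 10, 0.6

def performance(n):
    """Performance after n iterations: logistic share of the log-scale gain."""
    realized = 1 / (1 + np.exp(-steepness * (n - midpoint)))
    return baseline * potential**realized

for n in range(0, 21, 2):
    gain = performance(n + 2) / performance(n)  # improvement from 2 more iterations
    print(f"iter {n:2d}: perf {performance(n):8.1f}, gain over next 2 iters {gain:4.2f}x")
```

Running this prints gains of ~1.04x for the earliest and latest iterations and ~6x in the middle, despite identical investment per step.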

  • Neurons' dynamics look very different from the dynamics of bits.
  • Maybe these differences are important for some of the things brains can do.

This seems very reasonable to me, but I think it's easy to get the impression from your writing that you think it's very likely that:

  1. The differences in dynamics between neurons and bits are important for the things brains do.
  2. The relevant differences will cause anything that does what brains do to be subject to the chaos-related difficulties of simulating a brain at a very low level.

I think Steven has done a good job of trying to identify a bit more specifically what it might look like for these differences in dynamics to matter. Your case might be stronger if you had a bit more of an object-level description of what, specifically, is going on in brains that's relevant to doing things like "learning rocket engineering" and hard to replicate in a digital computer.

(To be clear, I think this is difficult and I don't have much of an object-level take on any of this, but I think I can empathize with Steven's position here.)

The Trinity test was preceded by a full test with the Pu replaced by some other material. The inert test was designed to check whether they were getting the needed compression. (My impression is that this was not publicly known until relatively recently.)

Regardless, most definitions [of compute overhang] are not very analytically useful or decision-relevant. As of April 2023, the cost of compute for an LLM's final training run is around $40M. This is tiny relative to the value of big technology companies, around $1T. I expect compute for training models to increase dramatically in the next few years; this would cause the amount of additional compute labs could use, if they chose to, to decrease.

I think this is just another way of saying there is a very large compute overhang now, and it is likely to get at least somewhat smaller over the next few years.
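For concreteness, here's the arithmetic using only the figures quoted above; the 10x/year cost growth in the loop is a hypothetical I'm adding for illustration, not a number from the quote.

```python
# Rough arithmetic behind the "large overhang, shrinking" claim.
training_run_cost = 40e6   # ~$40M for an LLM's final training run (April 2023)
big_tech_value = 1e12      # ~$1T value of a big technology company

overhang_ratio = big_tech_value / training_run_cost
print(f"Spending headroom today: ~{overhang_ratio:,.0f}x")  # ~25,000x

# If training costs grew ~10x per year (hypothetical rate) while company
# value stayed flat, the headroom would shrink accordingly:
for year, cost in enumerate([40e6 * 10**k for k in range(4)], start=2023):
    print(year, f"~{big_tech_value / cost:,.0f}x headroom")
```

On those assumptions the headroom goes from ~25,000x to ~25x in three years, which is the sense in which the overhang is huge now and shrinking.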

Keep in mind that the term "hardware overhang" first came about when we had no idea whether we would figure out how to make AGI before or after we had the compute to implement it.

Drug development is notably different because, like AI, it's a case where the thing we want to regulate is an R&D process, not just the eventual product.

I agree, and I think I used "development" and "deployment" in a vague way that didn't highlight this distinction very well.

But even if we did have a good way of measuring those capabilities during training, would we want them written into regulation? Or should we have simpler and broader restrictions on what counts as good AI development practices?

I think one strength of some IRB-ish models of regulation is that you don't rely so heavily on a careful specification of the thing that's not allowed: instead of meshing directly with all the other bureaucratic gears, it has a layer of human judgment in between. Of course, this passes the problem along to "can you have regulatory boards that know what to look for?", which has its own difficulties.

I put a lid on the pot because it saves energy/cooks faster. Or maybe it doesn't, I don't know, I never checked.

I checked and it does work.

Seems like the answer with pinball is to avoid the unstable processes, not control them.

Regarding the rent-for-sex thing: The statistics I've been able to find are all over the place, but it looks like men are much more likely than women to lack a proper place to sleep. My impression is that this has many causes (there are more ways for a woman to be eligible for government/non-profit assistance, for example), but it does seem like evidence that women are exchanging sex for shelter anyway (either directly/explicitly or less directly, like staying in a relationship where the main thing she gets is shelter and the main thing the other person gets is sex).

Wow, thanks for doing this!

I'm very curious to know how this is received by the general public, AI researchers, people making decisions, etc. Does anyone know how to figure that out?
