All of jprwg's Comments + Replies

jprwg
10

I see, thank you. So then, would you say this doesn't & isn't intended to answer any question like "whose perspective should be taken into account?", but that it instead assumes some answer to that question has already been specified, & is meant to address what to do given this chosen perspective?

2Stuart_Armstrong
It doesn't intend to answer those questions; but those questions become a lot easier to answer once this issue is solved.
jprwg
10

I'm trying to synthesise actual human values, not hypothetical other values that other beings might have.

To be clear, when you say "actual human values", do you mean anything different than just "the values of the humans alive today, in the year 2019"? You mention "other beings" - is this meant to include other humans in the past who might have held different values?

2Stuart_Armstrong
The aim is to be even more specific: the values of a specific human at a specific time. What we then do with these syntheses (how much change to allow, etc.) is a separate question. Including other humans from the past is a choice that we then need to make, or not.
jprwg
40

Perhaps "size of compiled program" would be one way to make a crude complexity estimate. But I definitely would like to be able to better define this metric.

In any case, I don't think the concept of software complexity is meaningless or especially nebulous. A program with a great many different bespoke modules, which all interact in myriad ways, and are in turn full of details and special-cases and so on, is complex. A program that's just a basic fundamental core algorithm with a bit of implementation detail is simple.
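
As a rough illustration of one way this metric could be operationalized, here is a minimal Python sketch that uses compressed size as a crude stand-in for "size of compiled program"; the file paths are hypothetical and gzip is only a weak proxy for true description length:

```python
import gzip
from pathlib import Path

def crude_complexity_estimate(path: str) -> int:
    """Crude complexity proxy: the gzip-compressed size, in bytes,
    of a compiled binary (or a concatenated source tree)."""
    data = Path(path).read_bytes()
    return len(gzip.compress(data))

# Hypothetical comparison between the two kinds of program described above.
# print(crude_complexity_estimate("core_algorithm.bin"))
# print(crude_complexity_estimate("many_bespoke_modules.bin"))
```

A fairer version would normalize away comments, identifiers, and dead code, but even this captures the intuition: a small core algorithm compresses to far fewer bytes than a sprawl of interacting special-case modules.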

I do agree that th

... (read more)
3moridinamael
I think that's a good way of framing it.

Imagine it's the far future, long after AI is a completely solved problem. Just for fun, somebody writes the smallest possible fully general seed AI in binary code. How big is that program? I'm going to guess it's not bigger than 1 GB. The human genome is ~770 MB. Yes, it runs on "chemistry", but the laws of physics underpinning that chemistry actually don't take that many bytes to specify. Certainly not hundreds of megabytes.

Maybe a clearer question would be: how many bytes do you need to beam to aliens in order for them to grow a human? The details of the structure of the embryonic cell, the uterus, the umbilical cord, the mother's body, etc., are mostly already encoded in the genome, because a genome contains the instructions for copying itself via reproduction. Maybe you end up sending a few hundred more megabytes of instructions as metadata for unpacking and running the genome, but not more than that.

Still, though, genomes are bloated. I'll bet you can build an intelligence on much less than 770 MB. 98.8% of the genome definitely has nothing to do with the secret sauce of having a powerful general intelligence; we know this because we share that much of our genome with chimps. Yes, you need a body to have a brain, so there's a boring sense in which you need the whole genome to build a brain, but this argument doesn't apply to AIs, which don't need to rely on ancient legacy biology.
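
A quick back-of-envelope check of those figures, sketched in Python (the base-pair count and chimp-overlap percentage are the usual round numbers, not exact values):

```python
# Rough arithmetic behind the genome-size estimate above.
# Assumed round numbers: ~3.1 billion base pairs, 2 bits per base,
# ~98.8% sequence overlap with chimpanzees.

base_pairs = 3.1e9
bits_per_base = 2                                   # 4 possible bases -> 2 bits
genome_mb = base_pairs * bits_per_base / 8 / 1e6    # bits -> bytes -> MB
print(f"raw genome size: ~{genome_mb:.0f} MB")      # ~775 MB

chimp_overlap = 0.988
human_specific_mb = genome_mb * (1 - chimp_overlap)
print(f"human-specific portion: ~{human_specific_mb:.0f} MB")  # ~9 MB
```

So on this crude accounting the "secret sauce" portion is on the order of ten megabytes, which is the spirit of the claim that genomes are bloated relative to what general intelligence strictly requires.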
jprwg
60

That could represent one step in a general trend of subsuming many detailed systems into fewer simpler systems. Or, it could represent a technology being newly viable, and the simplest applications of it being explored first.

For the former to be the case, this simplification process would need to keep happening at higher and higher abstraction levels. We'd explore a few variations on an AI architecture, then get a new insight that eclipses all these variations, taking the part we were tweaking and turning it into just another parameter for the system

... (read more)
6magfrump
I think there are some strong points supporting the latter possibility, like the lack of similarly high-profile success in unsupervised learning, and the use of massive amounts of hardware and data that were unavailable in the past.

That said, I think someone five years ago might have said "well, we've had success with supervised learning but less with unsupervised and reinforcement learning." (I'm not certain about this, though.)

I guess in my model AGZ is more like a third or fourth data point than a first data point--still not conclusive, and with plenty of space to fizzle out, but starting to make me feel like it's actually part of a pattern.
jprwg
50

Humans didn't evolve separate specialized modules for doing theoretical physics, chemistry, computer science, etc.; indeed, we didn't undergo selection for any of those capacities at all, they just naturally fell out of a different set of capacities we were being selected for.

Yes, a model of brain modularity in which the modules are fully independent end-to-end mechanisms for doing tasks we never faced in the evolutionary environment is pretty clearly wrong. I don't think anyone would argue otherwise. The plausible version of the modularity

... (read more)
jprwg
20

The main thing that would predict slower takeoff is if early AGI systems turn out to be extremely computationally expensive.

Surely that's only under the assumption that Eliezer's conception of AGI (simple general optimisation algorithm) is right, and Robin's (very many separate modules comprising a big intricate system) is wrong? Is it just that you think that assumption is pretty certain to be right? Or, are you saying that even under the Hansonian model of AI, we'd still get a FOOM anyway?

8Rob Bensinger
I wouldn't say that the first AGI systems are likely to be "simple." I'd say they're likely to be much more complex than typical narrow systems today (though shooting for relative simplicity is a good idea for safety/robustness reasons).

Humans didn't evolve separate specialized modules for doing theoretical physics, chemistry, computer science, etc.; indeed, we didn't undergo selection for any of those capacities at all, they just naturally fell out of a different set of capacities we were being selected for. So if the separate-modules proposal is that we're likely to figure out how to achieve par-human chemistry without being able to achieve par-human mechanical engineering at more or less the same time, then yeah, I feel confident that's not how things will shake out.

I think that "general" reasoning in real-world environments (glossed, e.g., as "human-comparable modeling of the features of too-complex-to-fully-simulate systems that are relevant for finding plans for changing the too-complex-to-simulate system in predictable ways") is likely to be complicated and to require combining many different insights and techniques. (Though maybe not to the extent Robin's thinking?) But I also think it's likely to be a discrete research target that doesn't look like "a par-human surgeon, combined with a par-human chemist, combined with a par-human programmer, ..." You just get all the capabilities at once, and on the path to hitting that threshold you might not get many useful precursor or spin-off technologies.
jprwg
170

While I find Robin's model more convincing than Eliezer's, I'm still pretty uncertain.

That said, two pieces of evidence that would push me somewhat strongly towards the Yudkowskian view:

  • A fairly confident scientific consensus that the human brain is actually simple and homogeneous after all. This could perhaps be the full blank-slate version of Predictive Processing as Scott Alexander discussed recently, or something along similar lines.

  • Long-run data showing AI systems gradually increasing in capability without any increase in complexity. Th

... (read more)

This seems to me a reasonable statement of the kind of evidence that would be most relevant.

6magfrump
My sense is that AGZ is a high-profile example of how fast the trend of neural nets (which mathematically have existed in essentially modern form since the 60s) can make progress. The same techniques have had a huge impact throughout AI research, and I think counting this as a single data point in that sense substantially undercounts the evidence. For example, image-recognition benchmarks have used the same technology, as have Atari-playing AIs.