greghb10

This part really resonates:

suppose most of the abilities we care about, when we use the term "AGI," are locked away in the very last tiny sliver of loss just above the intrinsic entropy of text. In the final 0.00[...many extra zeros...]1 bits/character, in a loss difference so tiny we'd need vastly larger validation sets for it to be distinguishable from data-sampling noise.

as does the ecological "road not taken". But I think part of this puzzle is that, in fact, there aren't adequate ecological measures of linguistic competence: only tasks that may be difficult, but that are always narrow, and that never leave you feeling they're difficult for the right reasons. There are certainly no artificial environments where realistic linguistic competence is clearly needed to win. You hear things like "but perplexity correlates with human judgement!", yet those correlations are always weak and unsatisfying. I think many NLP/DL researchers believe this and just don't know what to do about it. The road is not readily taken. I think this is a big part of why discussions of LM performance can be so unsatisfying.
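
To put made-up numbers on the quoted point about validation sets, here's a minimal sketch (every figure in it is hypothetical): the validation set you'd need to separate a loss gap from sampling noise grows like the inverse square of that gap.

```python
# Back-of-envelope: how many validation characters would we need to tell a
# tiny loss gap apart from sampling noise? All numbers are made up.
delta = 1e-6   # hypothetical loss gap between two models, bits/character
sigma = 2.0    # hypothetical std. dev. of the per-character loss difference, bits
z = 2.0        # roughly 95% confidence

# The gap needs to exceed ~z standard errors of the mean, i.e. z * sigma / sqrt(N),
# so N has to grow like the inverse square of the gap:
n_chars = (z * sigma / delta) ** 2
print(f"~{n_chars:.1e} validation characters needed")  # ~1.6e13 with these numbers
```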

greghbΩ030

Fwiw, I think nostalgebraist's recent post hit on some of the same things I was trying to get at, especially around not having adequate testing to know how smart the systems are getting -- see the section on what he calls (non-)ecological evaluation.

greghbΩ284

Re: humans/brains, I think what humans are a proof of concept of is that, if you start with an infant brain and expose it to an ordinary lifetime of experience (a la training / fine-tuning), then you can get general intelligence. But I think this just doesn't bear on the topic of Bio Anchors, because Bio Anchors doesn't presume we have a brain; it presumes we have transformers. And transformers don't know what to do with a lifetime of experience, at least nowhere near as well as an infant brain does. I agree we might learn more about AI from examining humans! But that's leaving the Bio Anchors framing of "we just need compute" and getting into the framing of algorithmic improvements etc. I don't disagree broadly that some approaches to AI might not have as big a (pre-)training phase as current models do, if, for instance, they figure out a way to "start with" infant brains. But I don't see the connection to the Bio Anchors framing.

What's so bad about perplexity? I'm not saying perplexity is bad per se, just that it's unclear how much data you need, with perplexity as your objective, to achieve general-purpose language facility. It's unclear both because the connection between perplexity and extrinsic linguistic tasks is unclear, and because we don't have great ways of measuring those extrinsic tasks in the first place. For instance, the essay you cite itself cites two very small experiments showing correlation between perplexity and extrinsic tasks. One of them is a regression on 8 data points, the other has 24 data points. So I just wouldn't put too much stake in extrapolations there. Furthermore (and this isn't against perplexity itself), I'd be skeptical of the other variable, i.e. the linguistic measure perplexity is regressed against: in both cases, a vague human judgement of whether model output is "human-like". I think there's not much reason to think that is correlated with some general-purpose language facility. Attempts like this to (roughly speaking) operationalize the Turing test have generally been disappointing; getting humans to say "sure, that sounds human" seems to leave a lot to be desired; I think most researchers find such evaluations disappointingly game-able, vague a critique though that may be. The Google Meena paper acknowledges this, and reading between the lines I get the sense they don't think too much of their extrinsic, human-evaluation metric either. E.g., the best they say is, "[inter-annotator] agreement is reasonable considering the questions are subjective and the final results are always aggregated labels".
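
As a toy illustration of why 8 points isn't much to extrapolate from (synthetic data, nothing here comes from the actual experiments in that essay):

```python
# Illustrative only: how much can an 8-point regression really tell us?
# Synthetic data standing in for (perplexity, human-rating) pairs.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)
perplexity = np.linspace(10, 18, 8)                    # 8 hypothetical models
rating = 90 - 2.5 * perplexity + rng.normal(0, 5, 8)   # noisy "human score"

fit = linregress(perplexity, rating)
print(f"slope = {fit.slope:.2f} +/- {fit.stderr:.2f} (r^2 = {fit.rvalue**2:.2f})")
# With only 8 points, the uncertainty on the slope is a sizeable fraction of the
# slope itself, so extrapolating the trend far past the observed range is shaky.
```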

This is sort of my point in a nutshell: we have put very little effort into telling whether the datasets we have contain adequate signal to learn the functions we want to learn, in part because we aren't even sure how to evaluate those functions. It's not surprising that perplexity correlates with extrinsic tasks to a degree. For instance, it's pretty clear that, to get state-of-the-art low perplexity on existing corpora, transformers learn the latent rules of grammar, and doing so naturally correlates with better human judgements of model output. So grammar is latent in the corpora. But is physics latent in the corpora? It would improve a model's perplexity at least a bit to learn physics: some of these corpora contain physics textbooks with answers to exercises at the back of the book, so to predict those answers you would have to learn how to do the exercises. But it's unclear whether current corpora contain enough signal to do that. Would we even know how to tell if the model was or wasn't learning physics? I'm personally skeptical that it's happening at all, but I admit that's just based on my subjective assessment of GPT-3 output... again, part of the problem of not having a good way to judge performance outside of perplexity.
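
To see how small that signal could be even if the learning were happening, here's a toy calculation (every number is invented for illustration):

```python
# Rough arithmetic (all numbers invented) for why "learning physics" might
# barely register in the headline loss, even if it were happening:
fraction_physics_answer_tokens = 1e-6   # share of corpus tokens that are exercise answers
bits_saved_on_those_tokens = 2.0        # hypothetical gain from actually doing the exercises

overall_gain = fraction_physics_answer_tokens * bits_saved_on_those_tokens
print(f"headline loss improvement ~ {overall_gain:.0e} bits/token")  # ~ 2e-06
# Tiny enough to be hard to separate from validation noise, which is part of
# why it's hard to tell from perplexity alone whether this is happening.
```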

As for why all transformative tasks might have hard-to-get data... well, this is certainly speculative, but people sometimes talk about AI-complete tasks, analogizing to the concept of completeness for complexity classes in complexity theory (e.g., NP-complete). I think that's the relevant idea here. The goal being general intelligence, I think it's plausible that most (all? I don't know) transformative tasks are reducible to each other. And I think you also get a hint of this in NLP tasks, which are weirdly reducible to each other, given the amazing flexibility of language. Like, for a dumb example, the task of question answering entails the task of translation, because you can ask, "How do you say [passage] in French?" So I think the sheer number of tasks, as initially categorized by humans, can be misleading. Tasks aren't as independent as they may appear. Anyway, that's far from a tight argument, but hopefully it provides some intuition.
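
For concreteness, the reduction in that dumb example is literally just string rewriting (a toy sketch, not a claim about any particular system):

```python
# Toy illustration of the reducibility point: a translation instance can be
# rephrased as a question-answering instance, so a genuinely general QA system
# would have to handle translation too. (Purely illustrative, no model involved.)
def translation_as_qa(passage: str, target_language: str) -> str:
    """Rewrite a translation request as a question for a generic QA system."""
    return f'How do you say "{passage}" in {target_language}?'

print(translation_as_qa("the cat sat on the mat", "French"))
```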

Honestly, I haven't thought about how to incorporate the dataset bottleneck into a timeline. But I suppose I could wind up with even longer timelines if I think we haven't made progress because we don't have the faintest idea how, and that the lack of progress isn't for lack of trying. Missing fundamental ideas, etc. How do you forecast when a line of zero slope eventually turns up? If I really think we have shown ourselves to be stumped (not sure), I guess I'd have to fall back on general-purpose tools for forecasting big breakthroughs, and that's the sort of vague modeling that Bio Anchors seems to be trying to avoid.

greghbΩ8197

Yes, good questions, but I think there are convincing answers. Here's a shot:

1. Some kinds of data can be created this way, like parallel corpora for translation or video annotated with text. But I think it's selection bias that most cases seem like this. The cases we're familiar with look like this because this is what's easy to do! Transformative tasks are hard, and creating data that really has latent in it the general structures necessary for task performance is also hard. I'm not saying research can't solve it, but that if you want to estimate a timeline, you can't consign this part of the puzzle to a footnote of the form "lots of research resources will solve it". Or, if you do, you might as well relax the whole project and bring only that level of precision across the board.

2. At least in NLP (the AI subfield with which I'm most familiar), my sense of the field's zeitgeist is quite contrary to "compute is the issue". I think there's a large, maybe majority, current of thought that our benchmarks are crap, that performance on them doesn't relate to any interesting real-world task, that optimizing on them is of really unclear value, and that the field as a whole is unfortunately rudderless right now. I think this holds true among many young DL researchers, not just the Gary Marcuses of the world. That's not a formal survey or anything, just my sense from reading NLP papers and Twitter. But similarly, I think the notion that compute is the bottleneck is overrepresented in the LessWrong sphere relative to what the broader field thinks.

3. Humans not needing much data is misleading IMO because the human brain comes highly optimized out of the box at birth, and indeed that's the result of a big evolutionary process. To be clear, I agree achieving human-level AI is enough to count as transformative and may well be a second-long epoch on the way to much more powerful AI. But anyway, you have basically the same question to answer there. Namely, I'd still object that Bio Anchors doesn't address the datasets/environments issue regarding making even just human-level AI. Changing the scope to "merely" human doesn't answer the objection.

Q/A. As for recent progress: no, I think there has been very little! I'm only really familiar with NLP, so there might be more in RL environments. (My very vague sense of RL is that it's still just "video games you can put an agent in," and basically always has been, but don't take it from me.) As for NLP, there is basically nothing new in the last 10 years. We have lots of unlabeled text for language models, we have parallel corpora for translation, and we have labeled datasets for things like question answering (see here for a larger list of supervised tasks). I think it's really unclear whether any of these have latent in them the structures necessary for general language understanding. GPT is the biggest glimmer of hope recently, but part of the problem even there is that we can't really quantify how close it is to general language understanding. We don't have a good way of measuring this! And without it, we certainly can't train for it, since we can't compute a loss function. I think there are maybe some arguments that, in the limit, unlabeled text with the LM objective is enough: but that limit might really be more text than can fit on earth, and we'd need to get a handle on that for any estimates.

Final point: I'm mostly looking for a qualitative acknowledgement that this problem of datasets/environments is hard and unsolved (or arguments for why it isn't), that it's as important as compute, and, building on that, for serious attention to be paid to what it would take to make the right datasets/environments. Rather than consigning it to an "everything else" parameter, analyze what it might take to make better datasets/environments, including trying to get a handle on whether we even know how. I think this would make for a much better analysis, and it would address some of Eliezer's concerns because it would cover more of the specific, mechanistic story about the path to creating transformative AI.

(Full disclosure: I've personally done work on making better NLP benchmarks, which I guess has given me an appreciation for how hard and unsolved this problem feels. So, discount appropriately.)

greghbΩ510-4

Caveating that I did a lot of skimming on both Bio Anchors and Eliezer's response, the part of Bio Anchors that seemed weakest to me was this:

To be maximally precise, we would need to adjust this probability downward by some amount to account for the possibility that other key resources such as datasets and environments are not available by the time the computation is available

I think the existence of proper datasets/environments is a huge issue for current ML approaches, and you have to assign some nontrivial weight to it being a much bigger bottleneck than computational resources. Like, we're lucky that GPT-3 is trained with the LM objective (predict the next word), for which there is a lot of naturally-occurring training data (written text). Lucky, because that puts us in a position where there's something obvious to do with additional compute. But if we hit a limit following that approach (and I think it's plausible that the signal is too weak in otherwise-unlabeled text), then we're rather stuck. Thus, to get timelines, we'd also need to estimate what datasets/environments are necessary for training AGI. But I'm not sure we know what these datasets/environments look like. An upper bound is "the complete history of earth since life emerged", or something... not sure we know any better.
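
To make the "lucky" part concrete, here's a minimal sketch of the LM objective (assuming PyTorch, with `model` standing in for any network that returns per-position vocabulary logits): the supervision signal is just the text shifted by one token, which is why raw text alone is enough to train on. For most other transformative tasks there's no analogous free target, which is the bottleneck I'm worried about.

```python
import torch
import torch.nn.functional as F

def lm_loss(model, tokens: torch.Tensor) -> torch.Tensor:
    """tokens: (batch, seq_len) integer ids taken straight from raw text."""
    logits = model(tokens[:, :-1])            # predict a distribution over each next token
    targets = tokens[:, 1:]                   # targets are just the same text shifted by one
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),  # (batch * (seq_len - 1), vocab)
        targets.reshape(-1),                  # (batch * (seq_len - 1),)
    )
```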

I think parts of Eliezer's response intersect with this concern, e.g. the energy use analogy. It is the same sort of question: how well do we know what the missing ingredients are? Do we know that compute occupies enough of the surface area of possible bottlenecks for a compute-based analysis to be worth much? And I'm specifically suggesting that environments/datasets occupy enough of that surface area to seriously undermine the analysis.

Does Bio Anchors deal with this concern beyond the brief mention above (and I missed it, very possible)? Or are there other arguments out there that suggest compute really is all that's missing?