glazgogabgolab

Comments


there was a result (from Pieter Abbeel's lab?) a couple of years ago that showed that pretraining a model on language would lead to improved sample efficiency in some nominally-totally-unrelated RL task

Pretrained Transformers as Universal Computation Engines
From the abstract:

We investigate the capability of a transformer pretrained on natural language to generalize to other modalities with minimal finetuning – in particular [...] a variety of sequence classification tasks spanning numerical, computation, vision, and protein fold prediction
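
For concreteness, here's a minimal sketch of the "frozen pretrained transformer" recipe that paper describes, assuming a Hugging Face GPT-2 backbone: freeze the language-pretrained core and train only a small input projection, an output head, and the layer norms. The exact set of fine-tuned parameters and hyperparameters in the paper may differ (e.g. they also tune positional embeddings), so treat this as an illustration rather than a reproduction.

```python
import torch
import torch.nn as nn
from transformers import GPT2Model  # assumed backbone; the paper uses GPT-2 checkpoints


class FrozenPretrainedTransformer(nn.Module):
    """Language-pretrained core kept frozen; only small input/output layers
    (plus layer norms) are trained on the new modality."""

    def __init__(self, input_dim: int, num_classes: int):
        super().__init__()
        self.core = GPT2Model.from_pretrained("gpt2")

        # Freeze everything, then re-enable just the layer-norm parameters
        # (GPT-2 names them ln_1, ln_2, ln_f).
        for name, param in self.core.named_parameters():
            param.requires_grad = "ln" in name

        hidden = self.core.config.n_embd
        self.input_proj = nn.Linear(input_dim, hidden)      # trained from scratch
        self.output_head = nn.Linear(hidden, num_classes)   # trained from scratch

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, input_dim) tokens from the new modality (bits, pixels, ...)
        embeds = self.input_proj(x)
        hidden_states = self.core(inputs_embeds=embeds).last_hidden_state
        return self.output_head(hidden_states[:, -1])  # classify from the final position


# Usage sketch: a toy binary sequence-classification task
model = FrozenPretrainedTransformer(input_dim=1, num_classes=2)
logits = model(torch.randn(4, 8, 1))
```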

Given your perspective, you may enjoy Lies Told To Children: Pinocchio, which I found posted here.

Personally I think I'd be fine with the bargain, but having read that alternative continuation, I think I better understand how you feel.

Oops, strangely enough I just wasn't thinking about that possibility. It's obvious now, but I assumed that SL vs RL would be a minor consideration, despite the many words you've already written on reward.

Hey Steve, I might be wrong here, but I don't think Jon's question was specifically about which architectures you'd be talking about. I think he was asking more specifically about how to classify something as Brain-like-AGI for the purposes of your upcoming series.

The way I read your answer, it sounds like the safety considerations you'll be discussing depend more on whether the NTM is trained via SL or RL than on whether it neatly contains all your (soon-to-be-elucidated) Brain-like-AGI properties.

Though that might actually have been what you meant, so I probably should have asked for clarification before presumptively answering Jon for you.

If I'm reading your question right, I think the answer is:

I’m going to make a bunch of claims about the algorithms underlying human intelligence, and then talk about safely using algorithms with those properties. If our future AGI algorithms have those properties, then this series will be useful, and I would be inclined to call such an algorithm "brain-like".

I.e., the distinction depends on whether or not a given architecture has some properties Steve will mention later, which, given Steve's work, are probably the key properties of "A learned population of Compositional Generative Models + a largely hardcoded Steering Subsystem".
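
To make that two-part split concrete (purely my own illustration, not Steve's actual proposal), the key structural feature is a learning subsystem whose parameters are updated within-lifetime, steered by a small, fixed circuit that is never trained. The specific networks, dimensions, and reward criterion below are all made up for the sketch.

```python
import torch
import torch.nn as nn


class LearningSubsystem(nn.Module):
    """Stand-in for the learned-from-scratch models: every parameter here is
    updated 'within lifetime' by gradient descent."""

    def __init__(self, obs_dim: int, act_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 32), nn.Tanh(), nn.Linear(32, act_dim))

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)


def steering_subsystem(obs: torch.Tensor, act: torch.Tensor) -> torch.Tensor:
    """Stand-in for the hardcoded Steering Subsystem: a fixed, never-trained
    circuit that emits reward-like signals. The criterion here (push the
    combined state toward zero) is purely illustrative."""
    return -(obs + act).pow(2).sum(dim=-1)


# The learned part is optimized against signals produced by the hardcoded part.
learner = LearningSubsystem(obs_dim=4, act_dim=4)
opt = torch.optim.Adam(learner.parameters(), lr=1e-2)
obs = torch.randn(16, 4)
loss = -steering_subsystem(obs, learner(obs)).mean()  # maximize the steering signal
opt.zero_grad()
loss.backward()
opt.step()
```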

Regarding "posts making a bearish case" against GPT-N, there's Steve Byrnes's Can you get AGI from a Transformer?

I was just in the middle of writing a draft revisiting some of his arguments, but in the meantime, one claim that might be of particular interest to you is: "...[GPT-N type models] cannot take you more than a couple steps of inferential distance away from the span of concepts frequently used by humans in the training data"