by [anonymous]

Disclaimer: This post isn't intended to convince or bring anyone closer to a certain position on AI. It is merely - as the title suggests - for the record.

 

I would like to publicly record my prediction about the prospects of artificial superintelligence, consisting of a weak and a strong thesis:

 

Weak thesis: Current deep learning paradigms will not be sufficient to create an artificial superintelligence.

You could call this the Anti-"scaling-maximalist" thesis, except that it goes quite a bit further by also covering possible future deep learning architectures. Of course, "deep learning" is doing a lot of work here, but as a rule of thumb, I would consider a new architecture to fall within the DL paradigm if it involves a very large number (at least in the millions) of randomly initialized parameters that are organized into largely uniform layers and updated via a simple algorithm like backpropagation.
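
To make the rule of thumb concrete, here is a minimal sketch of the kind of system it would count as "within the DL paradigm". This is my own illustration rather than anything from the post, and PyTorch is just an arbitrary example framework; what matters is the combination of millions of randomly initialized parameters, largely uniform layers, and a generic backpropagation-based update.

```python
import torch
import torch.nn as nn

# Toy model for illustration only: millions of randomly initialized
# parameters arranged into largely uniform layers.
model = nn.Sequential(
    nn.Linear(1024, 1024), nn.ReLU(),
    nn.Linear(1024, 1024), nn.ReLU(),
    nn.Linear(1024, 10),
)  # roughly 2.1 million parameters in total

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
x, y = torch.randn(32, 1024), torch.randint(0, 10, (32,))  # dummy batch

loss = nn.functional.cross_entropy(model(x), y)
loss.backward()    # backpropagation: the "simple algorithm" in the rule of thumb
optimizer.step()   # a generic gradient update applied uniformly to all parameters
```

Anything that keeps this basic shape (random initialization, uniform layers, gradient-based updates) would fall under the weak thesis, whatever else is bolted onto it.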

I intentionally use the word "superintelligence" here because "AGI" and "human-level intelligence" have become rather loaded terms, the definitions of which are frequently a point of contention. I take superintelligence to mean an entity so obviously powerful and impressive in its accomplishments that all debate around its superhuman nature should be settled instantly: feats like building a Dyson Sphere, launching intergalactic colony ships at 0.9c, or designing and deploying nanobots that eat the whole biosphere. Stuff like passing the Turing Test or Level 5 self-driving decidedly does not qualify (not that I think these challenges will be easy, but I want to make the demarcation blatantly clear).

(The downside is that we may never live to witness such an ASI if it turns out to be Unfriendly, but by then, earning some reputation points on the internet will be the least of my concerns.)

 

Strong thesis: If and when the first ASI is built, it will not use deep learning as one of its components.

For instance, if the ASI uses a few CNN layers to pre-process visual inputs, or some autoencoder system to distill data into latent variables, that is already enough to refute the strong thesis. On the other hand, merely running on hardware whose design was assisted by DL systems does not disqualify the ASI from counting as DL-free.
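
As an illustration of how low that bar is, here is a hypothetical sketch (mine, not the post's) of the sort of component that would already disqualify a system: a small CNN used purely to pre-process visual inputs before handing the result to whatever non-DL machinery does the actual thinking.

```python
import torch
import torch.nn as nn

# Hypothetical component, for illustration only. Even if everything downstream
# were entirely non-DL, a small convolutional pre-processor like this would
# already count as "using deep learning as one of its components" and would
# therefore refute the strong thesis.
visual_preprocessor = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # 32-dimensional feature vector
)

frame = torch.randn(1, 3, 64, 64)        # dummy camera frame
features = visual_preprocessor(frame)    # handed off to the (non-DL) rest of the system
```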

 

For future reference, here is the context in which this prediction was made:

It is the beginning of 2023, 1.5 months after the release of ChatGPT and 5 months after the release of Stable Diffusion, both of which have renewed hype around deep learning, and around Transformer-based models in particular. Rumors that the yet-to-be-released GPT-4 represents a big leap in capabilities are floating about. The scaling-maximalist position has gained a lot of traction both inside and outside the community and may even represent the mainstream opinion on LW. Timeline predictions are broadly short (AGI by 2030-35 seems to be the consensus), and even extremely short timelines (~2025) are not unheard of.

13 comments:

AGIs are more important than ASIs for human strategic considerations: they mark the point where human effort gets screened off and research is automated (and where control over the world is probably effectively lost, unless it is eventually handed back). Even the first AGIs likely work much faster than humans, and this advantage is sufficient on its own, without a need to also be stronger in other ways. AGI is probably sufficiently easier than ASI that humans only get to build AGIs, not ASIs.

When AGIs build ASIs, they face AI risk themselves, which gives them a motivation to put it off a bit. So I don't expect ASIs before AGIs get a couple of years to bootstrap nanotech and adapt themselves to new hardware, running at scale without significantly changing their design first, in order to get a better handle on AI alignment. Whether a resulting ASI is going to use "deep learning" is something humans might need some time to get up to speed on.

How is this 'for the record' if it's by an anonymous account?

The key question for me is "can ML-architected AI build a successor superintelligence?". 

It would be even better if you could attach rough probabilities to both theses. Right now my sense is that I probably disagree significantly, but it's hard to say by how much. For the record, my credence in the weak thesis depends a ton on how some details are formalized (e.g. how much non-DL is allowed, and whether it has to be one monolithic network or not). For the strong thesis, <15%; I would need to think more to figure out how low I'd go. If you just think the strong thesis is more plausible than most other people do, at say 50%, that's not a huge difference, whereas if you have something like 95%, that seems really wild to me.

I intentionally use the word "superintelligence" here because "AGI" or "human-level intelligence" have become rather loaded terms, the definitions of which are frequently a point of contention.

Is that the main reason to focus on ASI, or does ASI vs AGI also have a big impact on whether you believe these theses? E.g. do you still think they're true for AGI instead of ASI, if you get to judge what counts as "AGI" (but something more like human-level intelligence than quickly designing nanotech)?

+1 to recording beliefs.

More decision-relevant than propositions about superintelligence are propositions about something like the point of no return, which is probably a substantially lower bar.

Summary of this article according to @lesswrong_bot on Twitter

'This post is a response to the accusation that the organization "DL ASI" is a cult. The author argues that this is not the case, and provides evidence to support their claim.'

Uh, if that bot is run by someone contactable, it might be good to tell them their bot is making nonsense summaries. If it's not, well, that explains its behavior.

Can a DL-based system still end up causing catastrophic damage before we ever even manage to get to ASI?

https://danijar.com/project/dreamerv3/ <- scaling this will give ASI (it will still have limitations, but that's irrelevant, it's enough to beat us if it doesn't destroy the lab during training)

(not as fast as future algorithms, but still a hell of a lot faster than scaling mere autoregressive transformers)

I'm sort of surprised this is getting downvoted to the degree that it is. If there were agreement-voting for posts, would the downvoters have done that?

The title is still wildly overconfident. There are no ASIs in the set of deep learning programs? If someone believes that, they don't understand deep learning - deep learning could be said to be the biology of software engineering.

Personally, I believe the strong inverse of this: there are absolutely no intelligent systems which are not a special case of deep learning, because deep learning is the anti-spotlight-effect of AI. Connectionism is the perspective that we don't need to constrain the types of connections, because statistical understanding will be able to generate an understanding of structural connectivity. Which means that even if the structural connectivity has enormous amounts of learned complexity, even if you already had another system with more useful priors than mere connectionism, it still counts as connectionism! The point here is not merely that everything is a special case of everything else or something vacuous, but that the additional constraints, while useful, would need to import the core basis of deep learning: adjustment of large networks of interaction as the key basis for intelligence. It is the biology of software engineering because it allows us to talk about the shape of any program in terms of its manifold of behavior.

Yes, there's a lot more an AI could use to shape its manifold of behavior than mere gradient descent. But as long as you have a need to do Bayesian inference, you'll have something that looks like gradient descent in networks.

edit: not necessarily backprop, though.

I mean, I agree: I think the claim is wrong. But I think that in my preferred LessWrong culture, people straightforwardly voice their contrarian positions and get, like, weakly upvoted, and maybe strongly disagree-downvoted.

I like this post in part because it’s short and clearly states a claim in a way that isn’t very weaselable, which most contrarian-take posts fail to do. (It was at zero karma when I found it)

I do think the karma system is fundamentally flawed; I'd suggest considering what sort of multi-currency prediction market system might be better. What outcomes are being predicted? Which currencies should trade easily with others? Are there nonlinearities between the currencies? Are there costs to do some things? Is there a way to say "I think this should get this much total", rather than merely "I want to offset the current amount by this much"? Which actions can be taken back? And so on. It isn't a trivial project to figure out the details; I imagine there's research from mechanism design I could pull out of a hat with the right search engine. But I'd still spend a bet token on "this post will turn out to have been not only wrong, but also representative of an overconfident style of discourse that doesn't express sufficient scientific hedging".

Of course, I get downvoted for similar reasons fairly often, so, like, you know, maybe what we really need is non-anonymous votes or something, so that people know when downvotes are by goofballs like me, voters can establish reputations as thoughtful voters, etc. What if you could only vote by writing a review, and then the vote was generated by a sentiment analyzer? What if you had to highlight a phrase in order to vote on a post?

edit: none of these are meant to be definitely good ideas, and I think by tomorrow I'll be confident about whether or not they're all definitely bad ideas