This is for a person with no ML background. He is 55 years old and liked the Sequences, and I recently managed to convince him that AI risk is serious by recommending a bunch of LessWrong posts on it, but he still thinks it's astronomically unlikely that AGI is <80 years away.

There are a lot of other people like this, so knowing what the best explainer is has value beyond just my own case.

5 Answers

mukashi

Has he personally tried interacting with GPT-4? I can't think of a better way. It convinced even Bryan Caplan, who had publicly bet against it.

the gears to ascension

I don't even bring up AI at all. They can figure that part out on their own easily enough. Much more important is "Modeling the Human Trajectory."

aogara

"AI Timelines: Where the Arguments, and the 'Experts,' Stand" would be my best bet, as well as mukashi's recommendation of playing with ChatGPT / GPT-4.

awg

I might recommend the Most Important Century series on Cold Takes. It's long, but it's accessible and comprehensive.

weverka

This is difficult for people with no ML background. One first has to explain what timelines are, then what averages and ranges most researchers in the field hold, and then why some discount those in favor of short AI timelines. That is a long arc for a skeptical person.

Aren't we all skeptical people? Carl Sagan said that extraordinary claims require extraordinary evidence. Explaining a short timeline is a heavy lift by its very nature.

7 comments

Probably best not to skip straight to "List of Lethalities." But then again, that kind of approach was wrong for "Politics is the Mind-Killer," where it turned out to be best to just have the person dive right in.

It’s generally hard to find one-size-fits-all responses for things like this. Instead, I would first want to know: WHY does he think it’s astronomically unlikely to be <80 years away?

Yes, I think this is the most important question. It's one thing not to be aware of progress in AI, and so to have no sense that it might come soon. General resources are fine for updating that sort of mindset.

It's another thing to be aware of current progress but think it might, or probably will, take longer. That's fine; I think it might take longer too, and I can certainly understand having reasons to believe it's less likely than not to happen this century, even if I don't hold them myself.

It's very different if someone is aware of current developments but has extremely strong views against AGI happening any time this century. Do they actually mean the same thing by "AGI" as most other people do? Do they think it is possible at all?

What might work is linking to the views of someone who is not an AI doomer; he might feel more affinity for that. Scott Aaronson's blog comes to mind, and Scott Alexander's as well. One does not need any ML background to follow their logic.

Yes, I agree. I think it is important to keep in mind that achieving AGI and doom are two separate events. Many people around here draw a strong connection between them, but not everyone. I'm in the camp that we are 2 or 3 years away from AGI (it's hard to see why GPT-4 does not qualify already), but I don't think that implies the imminent extinction of human beings. It is much easier to convince people of the first point, because the evidence is already out there.

I don't know of such a source, but if I tried to create one, it would probably be a list of computer achievements, with years: "defeats humans in chess", "defeats humans in go", "writes poetry", "draws pictures"... also some older ones, like "finds a travel route between cities", "makes a naive person believe they are talking to another human"... and ones that were not achieved yet, like "drives a car in traffic", "composes a popular song", "writes a popular book", "defeats a human army".

The achievements would be listed chronologically, the future ones only with a question mark (ordered by their estimated complexity). The idea would be to see that the distances between years are getting shorter recently. (That of course depends on which specific events you choose, which is why this isn't an argument, more like an intuition pump.) This could be further emphasized by drawing a timeline below the list, making big dots corresponding to the listed things (so you see the distances are getting smaller), and a question mark beyond 2022.
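
As a rough illustration, here is a minimal sketch of that chart in Python with matplotlib; the specific events and years below are illustrative placeholders from memory, not a vetted dataset:

```python
# Minimal sketch of the milestone timeline described above.
# The events and years are illustrative placeholders, not a vetted dataset.
import matplotlib.pyplot as plt

milestones = [
    (1959, "finds a travel route between cities"),
    (1966, "makes a naive person believe\nthey talk to a human"),
    (1997, "defeats humans in chess"),
    (2016, "defeats humans in go"),
    (2020, "writes poetry"),
    (2022, "draws pictures"),
]

years = [year for year, _ in milestones]

fig, ax = plt.subplots(figsize=(10, 3))
ax.axhline(0, color="gray", zorder=1)                  # the timeline itself
ax.scatter(years, [0] * len(years), s=80, zorder=2)    # big dots for each milestone
for year, label in milestones:
    ax.annotate(f"{year}: {label}", (year, 0),
                xytext=(0, 12), textcoords="offset points",
                rotation=45, fontsize=8)
ax.annotate("?", (2026, 0), fontsize=18, ha="center")  # the question mark beyond 2022
ax.set_xlim(1950, 2035)
ax.get_yaxis().set_visible(False)
plt.tight_layout()
plt.show()
```

The shrinking gaps between the dots are the whole point, and which milestones you pick determines how strong the effect looks, which is why this is an intuition pump rather than an argument.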

It might be a bit easier to explain if the approach is rooted in humans rather than in the dangers of AGI. (Let's conveniently ignore the "direct" dangers of AGI until round 2.)

Concretely: what happens if we have GPT-6 or GPT-10 in the next 5 years, wielded by Microsoft, OpenAI, Musk, Facebook, Google, etc., or, if you want to be unspecific, any for-profit company, or worse, governments?

What happens if a superfast, supersmart AGI is developed by a publicly traded company that has to worry about shareholder value, stock prices, and profits? The company will use its goal-aligned AGI to achieve its own goals, beat the competition, and so on, and it is an absolute certainty that those goals will collide with humanity's. Just show him one of Satya Nadella's interviews about MS integrating AI into its development process and its products. The man is literally beaming with greed and giddiness in a manner that is very, very thinly veiled. At one point he almost blurts out how happy he is that he will be able to fire the expensive programmers and raise profits, but he pivots and calls it "the democratization of coding" instead. I have zero doubts that the current stock-market-driven economies will use AI and AGI to enhance profits and drive down costs, resulting in mass layoffs and global social turmoil.

Imagine if a totalitarian regime or an extreme right-wing government gets a narrow AGI trained for surveillance and control: you get 1984 on steroids. And I don't even want to unpack the racist chain of thought; I don't think I need to.

To sum it up: humanity in its current state is not ready for AGI, because our civilization is not mature enough to use it for the common good of the 8+ billion people in the world. Our economics and politics are based on scarcity and on privileged groups who live at the expense of the rest of humanity.

The above is scary in itself, and it comes with the assumption that we CAN align AGI at all.