Note: The video is no longer available. It has been set to private. It'll eventually be released on the main TED channel.

This is a TED talk published early by a random TEDx channel.

Tweet by EY: https://twitter.com/ESYudkowsky/status/1655232464506466306

Looks like my Sudden Unexpected TED Talk got posted early by a TEDx account.

YouTube description:

Eliezer Yudkowsky is a foundational thinker on the long-term future of artificial intelligence. 

With more than 20 years of experience in the world of AI, Eliezer Yudkowsky is the founder and senior research fellow of the Machine Intelligence Research Institute, an organization dedicated to ensuring smarter-than-human AI has a positive impact on the world. His writings, both fiction and nonfiction, frequently warn of the dangers of unchecked AI and its philosophical significance in today's world. 

Yudkowsky is the founder of LessWrong, an online forum and community dedicated to improving human reasoning and decision-making, and the coinventor of the "functional decision theory," which states that decisions should be the output of a fixed mathematical function answering the question: "Which output of this very function would yield the best outcome?"


This is much better than any of his other speaking appearances. The short format, and TED's excellent talk editing/coaching, have really helped.

This is still terrible.

I thought it was a TEDx talk, and I thought it was perhaps the worst TEDx talk I've seen. (I agree that it's rare to see a TEDx talk with good content, but the deliveries are usually vastly better than this).

I love Eliezer Yudkowsky. He is the reason I'm in this field, and I think he's one of the smartest human beings alive. He is also one of the best-intentioned people I know. This is not a critique of Yudkowsky as an individual.

He is not a good public speaker.

I'm afraid having him as the public face of the movement is going to be devastating. The reactions I see to his public statements indicate that he is creating polarization. His approach makes people want to find reasons to disagree with him. And individuals motivated to do that will follow their confirmation bias to focus on counterarguments.

I realize that he had only a few days to prepare this. That is not the problem. The problem is a lack of public communication skills, which are very different from the skills needed to communicate with your in-group.

Yudkowsky should either level up his skills, rapidly, or step aside.

There are many others with more talent and skills for this type of communication.

Eliezer is rapidly creating polarization around this issue, and that is very difficult to undo. We don't have time to do that.

Could we bull through with this approach, and rely on the strength of the arguments to win over public opinion? That might work. But doing that instead of actually thinking about strategy and developing skills would hurt our odds of survival, perhaps rather badly.

I've been afraid to say this in this community. I think it needs to be said.

I'm not sure I agree. Consider the reaction of the audience to this talk: uncomfortable laughter, but also a pretty enthusiastic standing ovation. I'd guess the latter happened because the audience saw Eliezer as genuine: he displayed raw emotion, spoke bluntly, and at no point came across as someone making a play for status. He fit neatly into the "scientist warning of disaster" archetype, which isn't a figure that's expected to be particularly skilled at public communication.

A more experienced public speaker would certainly be able to present the ideas in a more high-status way, and I'm sure there would be a lot of value in that. But the goal of increasing the status of the ideas might to some degree trade off against communicating their seriousness: a person skillfully arguing a high-status idea has a potential ulterior motive that someone like Eliezer clearly doesn't. To get the same sort of reception from an audience that Eliezer got in this talk, a more experienced speaker might need to intentionally present themselves as lacking polish, which wouldn't necessarily be the best way to use their talents.

Better, maybe, to platform both talented PR people and unpolished experts.

This is an excellent point. This talk didn't really sound condescending, as every other presentation I've seen from him did. Condescension and other signs of disrespect are what create polarization. So perhaps it's that simple, and he doesn't need to skill up further.

I suspect he does need to skill up to avoid sounding hostile and condescending in conversation, though. The short talk format with practice and coaching may have fixed the real problems.

I agree that sounding unpolished might be perfectly fine.

I'm with you on this. I think Yudkowsky was a lot better in this talk with his more serious tone, but even so, we need to look for better.

Popular science educators would be a place to start, and I've thought about sending out a million emails to scientifically minded educators on YouTube, but even that doesn't feel like the best solution to me.

The sort of people who get listened to are the more political types, so I think they are the people to reach out to. You might say they need to understand the science to talk about it, but I'd still put more weight on charisma than on scientific authority.

Anyone have any ideas on how to get people like this on board? 

I just read your one post. I agree with it. We need more people on board. We are getting that, but finding more people with more PR skills would seem like a good idea.

I think the starting point is finding people who are already part of this community who are interested in brainstorming about PR strategy. To that end, I'm writing a post on this topic.

Getting charismatic "political types" to weigh in is unlikely to help with "polarization." That's what happened with global warming/climate change.

A more effective strategy might be to lean into the polarization: make "AI safety" an issue of tribal identity, which members will support reflexively against enemies. That might delay technological advancement for long enough.

It seems like polarization will prevent public policy changes. If half of the experts think that regulation is a terrible idea, how would governments decide to regulate? Worse yet, if some AI and corporate types are on the other side of polarization, they will charge full speed ahead as a fuck-you to the irritating doomers.

I think there's a lot we could learn from climate change activists. Having a tangible 'bad guy' would really help, so maybe we should be framing it more that way. 

  • "The greedy corporations are gambling with our lives to line their pockets." 
  • "The governments are racing towards AI to win world domination, and Russia might win."
  • "AI will put 99% of the population out of work forever and we'll all starve."

And a better way to frame the issue might be "Bad people using AI" as opposed to "AI will kill us".

If anyone knows of any groups working towards a major public awareness campaign, please let the rest of us know about it. Or maybe we should start our own. 

There's a catch-22 here: if the wording is too extreme, it will put people off, because they'll lump all doomsayers into one boat, whether the fears are over AI, UFOs, or Cthulhu, and dismiss them all equally. (It's like there's a tradeoff between level of alarm and credibility.)

On the other hand, claims will also be dismissed if the perceived danger is toned down in the wording. The best way to get the message across, in my opinion, is either to have more influential people spread the message (as previously recommended) or to organize focus testing on which parts of the message people don't understand and workshop how to get it across. If I had to take a crack at structuring a clear, persuasive message, my intuition is that the best approach is to explain the current environment, current AI capabilities, and a specific timeline, and then let the reader work out the implications.

Examples

  • 'Nearly 80% of the labor force works in service jobs and current AI technology can do most of those jobs. In ~5 years AI workers could be more proficient and economical than humans.'
  • 'It's impossible to know what a machine is thinking. When running large language model-based AI, researchers don't know exactly what they're looking at until they analyze the metrics. Within 10-30 years an AI could reach a superintelligent level, and it wouldn't be immediately apparent.'

"The reactions I see to his public statements indicate that he is creating polarization."

I had the opposite impression, from this video and in general: that Yudkowsky is good at avoiding polarizing statements, while still not compromising on saying what he actually thinks. Compare him with Hinton, who throws around clearly politically coded statements.

Do you have a control to infer he's polarizing? I suspect you are looking at a confounded effect.

It sounds like you're referring to political polarization. I'm referring to a totally different type of polarization, purely on the issue of AI development.

My evidence that Yudkowsky in particular is creating polarization is hearing his statements referred to frequently by other commentators, with apparent negative emotion. There is a potential confound here in that Yudkowsky is the loudest voice. However, I think what I'm observing is stronger than that. Other safety advocates who've gotten real press (Hinton, Tegmark, Hawking, etc.) frame the argument in more general and less strident ways, and I have not heard their statements used as examples by people who sound emotionally charged on the opposite side.

That's why it sounds to me like Yudkowsky is making a systematic communication mistake that creates emotionally charged opposition to his views, which is a big problem if it's true.

It seems to me like AGI risk needs a "Zeitgeist: Addendum" / "Venus Project"-style movie for the masses. Open up the Overton window and touch on things like mesa-optimization without boring the average person to death.

The /r/controlproblem FAQ is the most succinct summary I've seen, but I couldn't get the majority of average folks to read it if I tried, and it would still go over their heads.

There is this documentary: https://en.wikipedia.org/wiki/Do_You_Trust_This_Computer%3F Probably not quite what you want. Maybe the existing videos of Robert Miles (on Mesa-Optimization and other things) would be better than a full documentary.

I love Robert Miles, but he suffers from the same problem as Eliezer or, say, Connor Leahy: not a radio voice, not a movie face. Also, his existing videos are "deep dive" style.


You need to be able to introduce the overall problem and the reasons/deductions for why and how it's problematic, address the obvious pushback (which the Reddit control problem FAQ does well), and then introduce the more "intelligentsia" concepts like "mesa-optimization" in an easily digestible manner, for a population with an average reading comprehension at a 6th-grade level and a 20-second attention span.


So you could work off of Robert Miles' videos, but they need to fit into a narrative/storytelling format: beginning, middle, and end. The end should basically be where we're all at ("we're probably all screwed, but it doesn't mean we can't try"), plus actionable advice (which should be sprinkled throughout the film; that's foreshadowing).


Regarding that documentary, I see a major flaw in drifting off into specifics like killer drones. The media has already primed people's imaginations for lots of the specific ways x-risk or s-risk might play out (The Matrix trilogy, Black Mirror, etc.). You could go down an entire rabbit hole on just nanotech or bioweapons. IMO you sprinkle those about to keep the audience engaged (and so that the takeaway isn't just "something something paperclips"), but driving into them too much gets you lost in the weeds.


For example, I foresaw the societal problems of deepfakes, but the way it's actually played out (mass-distributed, powerful LLMs people can DIY with), coupled with the immediacy of the employment problem, introduces entirely new strains on social cohesion that I hadn't thought through at all. So, better to broadly introduce individual danger scenarios while keeping the narrative focused on the value alignment / control problems themselves.

Thanks, I've read that FAQ but I'll check it out again.

A good documentary might very well be an important step. I'm not familiar with your example films. I don't really like the idea of fictionalizing the arguments since that's an obviously sketchy way of making your points. However, if one was done in detail with really realistic portrayals of the problems and a very plausible path to AGI ruin, it might be really useful... unfortunately, Hollywood does not traffic in detail and realism by default, so I wouldn't get my hopes up on that.

Right, right. It doesn't need to be fictionalized, just a kind of fun documentary. The key is, this stuff is not interesting for most folks. Mesa-optimization sounds like a snore.

You have to be able to walk the audience through it in an engaging way.

I'd go with "a bunch of weird stuff might happen. That might kill us all, because of instrumental convergence..."

You should pull them up on YouTube or whatever and just jump around (sound off is fine); the filmmaker is independent. I'm not saying that particular producer/filmmaker is the go-to, but the "style," "tone," and overall storytelling fit the theme.


"Serious documentary about the interesting thing you never heard about" , also this was really popular with young adults when it came out, it caught the flame of a group of young Americans who came of age during 9/11 and the middle east invasions and sort of shaped up what became the occupy wall street movement. Now, that's probably not exactly the demographic you want to target, most of them are tech savvy enough that they'll stumble upon this on their own (although they do need something digestible) but broadly speaking it seems to me like having a cultural "phenomenon" that brings this more into the mainstream and introduces the main takeaways or concepts is a must have project for our efforts.

I googled "Zeitgeist Addendum" and it does not seem to be a thing that would be useful for AGI risk.

  • It's a follow-up to a 9/11 conspiracy movie.
  • It pushes some naive economic ideas (e.g., that abolishing money would fix a lot of issues).
  • The Venus Project appears not to be very successful.

Do you claim the movie had any great positive impact or presented any new, true, and important ideas?

OK, well, let's forget that exact example (which I now admit I haven't seen in almost twenty years).

I think we need a narrative-style film/docudrama. Beginning, middle, end. Story-driven.

1.) Introduces the topic.

2.) Expands on it and touches on concepts.

3.) Explains them in an ELI5 manner.

And it should include all the relevant things like value alignment, control, inner and outer alignment, etc., without "losing" the audience.

Similarly, if it's going to touch on niche examples of x-risk or s-risk, it should just "whet the imagination" without pulling down the entire edifice and losing the forest for the trees.

I think this is a format that's more likely to be engaged with by a wider swathe of people. I think (as I stated elsewhere in this thread) that Rob Miles, Yudkowsky, and a large number of other AI experts can be quoted or summarized, but they don't offer the tonality/charisma to keep an audience engaged.

Think "attenborough" and the planet earth series.

It also seems sensible to me to meld in Socratic questioning/rationality, bringing the audience into the fold of the deductive reasoning that leads to the conclusions rather than just feeding the conclusions to them upfront. It's going to be very hard to make a popular movie that essentially promises catastrophe. However, if the narrator asks the audience as it goes along ("Now, given the alien nature of the intelligence, why would it share human values? Imagine for a moment what it would be like to be a bat..."), then by the time you get to the summary points, any audience member with an IQ above 80 is already halfway or more to the point independently.

That's what I like about the Reddit control problem FAQ: it touches on all the basic, superficial, knee-jerk questions anyone who hasn't read all of "Superintelligence" would have when casually introduced to this.

This was a TED talk, not one of those TEDx talks. A random TEDx account posted this early, Yud doesn't know why. We might have needed to wait months for the official TED talk to be posted, since TED likes to have a constant stream of videos (even if the speakers would rather release immediately), and then this account undercut that process.

This is really unfortunate, since lots of people are going to look at this and think it's a TEDx talk instead of a TED talk. Every TEDx talk I've ever seen was garbage, and it seems like they're below Yud's standards and he wouldn't give one if offered.


Can't it be reported so they'll have to remove it? I assume that won't happen, because they probably wouldn't even upload it at this point, but given YouTube's copyright rules... I don't know.

Note that, while the linked post on the TEDx YouTube channel was taken down, there's a mirror available at: https://files.catbox.moe/qdwops.mp4.

I wouldn't recommend watching this talk to someone unfamiliar with the AI risk arguments, and I think promoting it would be a mistake. Yudkowsky seemed better on Lex Fridman's podcast. A few more Rational Animations-style AI risk YouTube videos would be more effective.

"Squiggle Maximizer" and "Paperclip Maximizer" have to go. They're misleading terms for the orthogonal AI utility function that make the concept seem like a joke when communicating with the general public. Better to use a different term, preferably something that represents a goal that's valuable to humans. All funny-sounding insider jargon should be avoided cough notkilleveryoneism cough.

Nanotech is too science-fictiony and distracting. More realistic near-term scenarios (hacks of nuclear facilities like Stuxnet to control energy, out-of-control trading causing world economies to crash and leading to a full-on nuclear war, large-scale environmental disaster that's lethal to humans but not machines, gain-of-function virus engineering, controlling important people through blackmail) would resonate better and emphasize the fragility of human civilization.

The chess analogy ("you will lose but I can't tell you exactly how") is effective. It's also challenging to illustrate to people how something can be significantly more intelligent than them, and this analogy simultaneously helps convey that by reminding people how they easily lose to computers.

It's unfortunate that this version is spreading because many people will think it's a low credibility TEDx talk instead of a very credible main stage TED talk.

I’m curious about the circumstances of this talk. What prompted the organisers to ask Eliezer to do it, and in such haste? (He has tweeted that he had only a few days to write it.) Who was the audience (of people in the room) and the intended audience (for the video)? Why is it embargoed, and why was it prematurely published? Was it intended to be just a rehearsal?

The occasional nervous titters from the audience put me in mind of Max Frisch’s 1953 play “The Fire Raisers”, and the more recent film “Don’t Look Up”.

I was somewhat disturbed by the enthusiastic audience applause in response to such dire warnings. What are techniques or ways to anchor conversations like this to keep them more serious?

People give standing ovations when they feel inspired to because something resonated with them.  They're applauding him for trying to save humanity, and this audience reaction gives me hope.

As a note for Yudkowsky, if he ever sees this and cares about the random gut feelings of strangers: after seeing this, I suspect an authoritative, stern, strong-leader tone of speaking would be much more effective than the current approach.

EDIT: missed a word

Correction: this is TEDx, a more local, less official version of TED. Apparently it's actually TED; I only looked at the channel name. Sorry for the confusion.

I think it is a TED talk, just uploaded to the wrong channel.

Yeah, based on EY's previous tweets regarding this, it seemed like it was supposed to be a TED talk.

[-]gjm41

Plausible guess, but I'm pretty sure it's wrong. He's on the programme here: https://conferences.ted.com/ted2023/program and the date matches up (that TED conference ran from the 17th to the 21st of April, and EY tweeted about giving a TED talk on the 18th). The YouTube video was uploaded by "TEDxKanke", a group in India which has also had an AI-themed session recently, but theirs was after EY gave his talk. (Also, the audience in the video looks more like it's full of people from British Columbia than people from Jharkhand.)

For reasons I can't remember (a random Amazon recommendation?) I read Life 3.0 five years ago, and I've been listening to podcasts about AI alignment ever since. I work in education at a national level, and this January I wrote and published a book, "AI in Education", to help teachers use ChatGPT in a sensible way and, in a chapter of its own, to make more people aware of the risks of AI. I've been giving a lot of talks about AI and education since then, and I end each presentation with some words about AI risks. I am sure that most people reading the book or listening to my talks don't care about existential risk from AI, or simply don't get it. But I'm also sure that there are people who are made more aware.

I believe that reaching out to specific groups of people (such as teachers) is a good complement to creating a public debate about (existential) AI risks. It is fairly easy to get people interested in a book or a talk about using ChatGPT in their profession, and adding a section on AI risks is a good way of reaching the fraction of the audience who can grasp and work with the risks.

All this being said, I also want to say that I am not convinced that AGI is imminent, or that AGI necessarily means apocalypse. But I am sufficiently convinced that AI poses real and serious risks, including today, that these risks must get much more attention than they currently do. I also believe that this attitude has a better chance of getting an audience than a doomsayer attitude, but I think it's a bad idea to put on a particular attitude as make-up to get a message across; better for everyone to be sincere (perhaps combined with selecting public voices partly based on how well those voices can get the message across).

Concerning this TED Talk: it seems to me that Eliezer has run too far down his own track to be able to get across to those who are new to existential AI risk. He is (obviously) quite worried, on the verge of despairing. The way we (as a society) are dealing with AI risk must seem like a bad comedy to him, but a comedy where everyone dies. For real. It is difficult to get your message out when feeling like that. I think the strongest part of his presentation was when he sounded sincerely angry.

(Side note: I think the interview with Eliezer that Lex Fridman made was good, so I don't agree that Eliezer isn't a good public communicator.)

I can't access the YouTube link or find the video on TED's website. Has the talk been removed? Does someone know what's going on?

I'm not sure what's going on, but the presentation can be viewed here: https://files.catbox.moe/qdwops.mp4

As some people here have said, it's not a great presentation. The message is important, though.