I guess those are pretty vague words. It's a (set of) research projects followed by thousands, if not tens of thousands, of people. Among these people are philanthropists and entrepreneurs who have donated millions of dollars to the cause, and who seem to be on track to donate even more. It's received attention and support from major scientists and some world-famous people, including Stephen Hawking, Elon Musk, and very recently Bill Gates. Eliezer has been published alongside academics from the Future of Humanity Institute, and his work has earned the respect of prominent thinkers in fields related to artificial intelligence. When his work has attracted derision, it has been because his ideas draw enough attention for other prominent academics and thinkers to see fit to criticize him. If we evaluate the success of a movement on the basis of memetics alone, this last observation counts as well.
The idea of dangers from superintelligence was debated in Aeon Magazine last year. Much of the effort to raise the profile of the issue and increase focus upon it has been done by Nick Bostrom and the Future of Humanity Institute, the Future of Life Institute, and even the rest of the Machine Intelligence Research Institute aside from Eliezer himself. Still, it was Eliezer who initiated several theses on solving the problem and communicated them to the public.
This is gonna be maybe uncomfortably blunt, but: Eliezer seems to be playing a role in getting AI risk research off the ground similar to the role of Aubrey de Grey in getting life extension research off the ground. Namely, he's the embarrassing crank with the facial hair who will not shut up, but who's smart enough and informed enough to be making arguments that aren't trivially dismissed. No one with real power wants to have that guy in the room, and so he doesn't usually end up as the person giving TV interviews and going to White House dinners and s...
[Contains No HPMOR Spoilers]
[http://hpmor.com/notes/119/](http://hpmor.com/notes/119/)
I was at first confused by Eliezer's requests at the end of Ch. 119. I had missed his Author's Notes, which explain his rationale behind them. I thought I would share in case others missed them too, especially since readers on LessWrong may have broader or more elite networks that could help Eliezer achieve his new goals.
Eliezer also describes several other projects he may be interested in pursuing. Learn more by clicking the link.