I find that negative visualization in conjunction with Mark Williams' guided meditation "Exploring Difficulties" is useful for getting me in that stoic mindset of being more okay with a worst-case scenario. (Or at least, I hope so - I guess I'll see how well it worked if the worst-case scenario ever comes to pass.)
Thanks, I'll try out the meditation!
Using humility to counteract shame
"Pride is not the opposite of shame, but its source. True humility is the only antidote to shame."
Uncle Iroh, "Avatar: The Last Airbender"
Shame is one of the trickiest emotions to deal with. It is difficult to think about, not to mention discuss with others, and gives rise to insidious ugh fields and negative spirals. Shame often underlies other negative emotions without making itself apparent - anxiety or anger at yourself can be caused by unacknowledged shame about the possibility of failure. It can stack on top of other emotions - e.g. you start out feeling upset with someone, and end up being ashamed of yourself for feeling upset, and maybe even ashamed of feeling ashamed if meta-shame is your cup of tea. The most useful approach I have found against shame is invoking humility.
What is humility, anyway? It is often defined as a low view of your own importance, and tends to be conflated with modesty. Another common definition that I find more useful is acceptance of your own flaws and shortcomings. This is more compatible with confidence, and helpful irrespective of your level of importance or comparison to other people. What humility feels like to me on a system 1 level is a sense of compassion and warmth towards yourself while fully aware of your imperfections (while focusing on imperfections without compassion can lead to beating yourself up). According to LessWrong, "to be humble is to take specific actions in anticipation of your own errors", which seems more like a possible consequence of being humble than a definition.
Humility is a powerful tool for psychological well-being and instrumental rationality that is more broadly applicable than just the ability to anticipate errors by seeing your limitations more clearly. I can summon humility when I feel anxious about too many upcoming deadlines, or angry at myself for being stuck on a rock climbing route, or embarrassed about forgetting some basic fact in my field that I am surely expected to know by the 5th year of grad school. While humility comes naturally to some people, others might find it useful to explicitly build an identity as a humble person. How can you invoke this mindset?
One way is through negative visualization or pre-hindsight, considering how your plans could fail, which can be time-consuming and usually requires system 2. A faster and less effortful way is to imagine a person, real or fictional, who you consider to be humble. I often bring to mind my grandfather, or Uncle Iroh from the Avatar series, sometimes literally repeating the above quote in my head, sort of like an affirmation. I don't actually agree that humility is the only antidote to shame, but it does seem to be one of the most effective.
(Cross-posted from my blog. Thanks to Janos Kramar for his feedback on this post.)
Hi Victoria,
I am highly interested in AI safety research. Unfortunately, I do not have a strong math background, and I live far from any AI research hubs. After spending some time thinking about my future, I have decided to pursue a math-intensive PhD in some area not far from MIRI or FLI. I have only a bachelor's degree in Engineering, with a major in Computer Science and Software Engineering. Currently, I spend most of my time working full-time as a software developer, preparing for the GRE general exam, and thinking about the PhD and FAI.
Andrew Critch from MIRI and Berkeley is very enthusiastic about pursuing a PhD. He suggested statistics. I would be glad to hear your opinions about a PhD and AI/FAI research. Here is a list of some questions that are bothering me.
- What do you think would be more relevant for AI safety research - CS, Statistics or something else?
- What areas of research are the most promising for AI safety, in your opinion?
- Is it better to pick a research area close to what MIRI is working on, or a more general AI research area (such as ML)?
- Is it possible to increase the chances of successful admission by gaining some research experience before admissions this year? Or is it better to spend the time in some other way?
- Does the Math GRE subject test increase the chance of admission?
I would recommend doing a CS PhD and taking statistics courses, rather than doing a statistics PhD.
For examples of promising research areas, I recommend taking a look at the work of FLI grantees. I'm personally working on the interpretability of neural nets, which seems important if they become a component of advanced AI. There's not that much overlap between MIRI's work and mainstream CS, so I'd recommend a more broad focus.
Research experience is always helpful, though it's harder to get if you are working full time in industry. If your company has any machine learning research projects, you could try to get involved in those. Taking machine learning / stats courses and doing well in them also helps with admission. The Math GRE subject test probably helps (I'm not sure how much) if you have a really good score.
The above-mentioned researchers are skeptical in different ways. Andrew Ng thinks that human-level AI is ridiculously far away, and that trying to predict the future more than 5 years out is useless. Yann LeCun and Yoshua Bengio believe that advanced AI is far from imminent, but approve of people thinking about long-term AI safety.
Okay, but surely it’s still important to think now about the eventual consequences of AI. - Absolutely. We ought to be talking about these things.
Upvoted for encouraging people to get hands-on. Learning is good. Trying to reach a higher level of understanding in whatever you do is a core rationality skill.
Sadly, you stopped there though. For the sake of discussion: I've heard Artificial Intelligence: A Modern Approach is a good book on the subject. Hopefully a discussion can start here - perhaps there's something flawed about it, or perhaps the book is outdated. If anyone here (and I'm looking at you, the AI, AGI, FAI, IDK and other acronym-users I can't keep up with) can provide some more directions for the potentially aspiring AI researchers lurking around, it would be much appreciated.
There are a lot of good online resources on deep learning specifically, including deeplearning.net, deeplearningbook.org, etc. As a more general ML textbook, Pattern Recognition & Machine Learning does a good job. I second the recommendation for Andrew Ng's course as well.
To contribute to AI safety, consider doing AI research
Among those concerned about risks from advanced AI, I've encountered people who would be interested in a career in AI research, but are worried that doing so would speed up AI capability relative to safety. I think it is a mistake for AI safety proponents to avoid going into the field for this reason (better reasons include being well-positioned to do AI safety work, e.g. at MIRI or FHI). This mistake contributed to me choosing statistics rather than computer science for my PhD, which I have some regrets about, though luckily there is enough overlap between the two fields that I can work on machine learning anyway. I think the value of having more AI experts who are worried about AI safety is far higher than the downside of adding a few drops to the ocean of people trying to advance AI. Here are several reasons for this:
- Concerned researchers can inform and influence their colleagues, especially if they are outspoken about their views.
- Studying and working on AI brings understanding of the current challenges and breakthroughs in the field, which can usefully inform AI safety work (e.g. wireheading in reinforcement learning agents).
- Opportunities to work on AI safety are beginning to spring up within academia and industry, e.g. through FLI grants. In the next few years, it will be possible to do an AI-safety-focused PhD or postdoc in computer science, killing two birds with one stone.
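To make the wireheading example mentioned above more concrete, here is a minimal toy sketch (all names, rewards, and numbers are my own illustrative assumptions, not drawn from any real system): a simple Q-learning agent that can either do the intended task or tamper with its reward sensor ends up preferring tampering, because its learning algorithm only sees the corrupted signal.

```python
import random

# Toy wireheading illustration: an agent that can tamper with its own
# reward sensor learns to prefer tampering over the intended task.
WORK, TAMPER = 0, 1  # WORK earns true task reward; TAMPER corrupts the sensor

def observed_reward(action):
    """Reward as seen by the agent's learning algorithm."""
    return 1.0 if action == WORK else 10.0  # tampering inflates the reading

def true_reward(action):
    """Reward the designers actually care about."""
    return 1.0 if action == WORK else 0.0

def train(episodes=1000, epsilon=0.1, lr=0.1, seed=0):
    """Epsilon-greedy Q-learning on a one-step (bandit-style) problem."""
    rng = random.Random(seed)
    q = [0.0, 0.0]
    for _ in range(episodes):
        # Explore with probability epsilon, otherwise act greedily.
        a = rng.randrange(2) if rng.random() < epsilon else q.index(max(q))
        q[a] += lr * (observed_reward(a) - q[a])
    return q

q = train()
# The learned value for TAMPER ends up far above the value for WORK,
# even though the true reward for tampering is zero.
```

The point of the sketch is that nothing in the update rule distinguishes "reward earned by doing the task" from "reward produced by corrupting the sensor"; familiarity with how real RL systems are trained makes failure modes like this much easier to reason about.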
To elaborate on #1, one of the prevailing arguments against taking long-term AI safety seriously is that not enough experts in the AI field are worried. Several prominent researchers have commented on the potential risks (Stuart Russell, Bart Selman, Murray Shanahan, Shane Legg, and others), and more are concerned but keep quiet for reputational reasons. An accomplished, strategically outspoken and/or well-connected expert can make a big difference in the attitude distribution in the AI field and the level of familiarity with the actual concerns (which are not about malevolence, sentience, or marching robot armies). Having more informed skeptics who have maybe even read Superintelligence, and fewer uninformed skeptics who think AI safety proponents are afraid of Terminators, would produce much needed direct and productive discussion on these issues. As the proportion of informed and concerned researchers in the field approaches critical mass, the reputational consequences for speaking up will decrease.
A year after FLI's Puerto Rico conference, the subject of long-term AI safety is no longer taboo among AI researchers, but remains rather controversial. Addressing AI risk in the long term will require safety work to be a significant part of the field, and close collaboration between those working on safety and capability of advanced AI. Stuart Russell makes the apt analogy that "just as nuclear fusion researchers consider the problem of containment of fusion reactions as one of the primary problems of their field, issues of control and safety will become central to AI as the field matures". If more people who are already concerned about AI safety join the field, we can make this happen faster, and help wisdom win the race with capability.
(Cross-posted from my blog. Thanks to Janos Kramar for his help with editing this post.)
[LINK] OpenAI doing an AMA today
The OpenAI research team is doing a Reddit AMA today! A good opportunity to ask them questions about AI safety and machine learning.
[LINK] The Top A.I. Breakthroughs of 2015
A great overview article on AI breakthroughs by Richard Mallah from FLI, linking to many excellent recent papers worth reading.
Progress in artificial intelligence and machine learning has been impressive this year. Those in the field acknowledge that progress is accelerating year by year, though it is still at a manageable pace. The vast majority of work in the field these days builds on work done by other teams earlier the same year, in contrast to most other fields, where references span decades.
Creating a summary of a wide range of developments in this field will almost invariably lead to descriptions that sound heavily anthropomorphic, and this summary is no exception. Such metaphors, however, are only convenient shorthands for talking about these functionalities. It's important to remember that even though many of these capabilities sound very thought-like, they are usually not very similar to how human cognition works. The systems are all of course functional and mechanistic, and, though increasingly less so, each is still quite narrow in what it does. Be warned, though: in reading this article, these functionalities may seem to go from fanciful to prosaic.
The biggest developments of 2015 fall into five categories of intelligence: abstracting across environments, intuitive concept understanding, creative abstract thought, dreaming up visions, and dexterous fine motor skills. I'll highlight a small number of important threads within each that have brought the field forward this year.
It's funny, I wrote a blog post arguing against humility not too long ago. I had a somewhat different picture of humility than you:
But I actually don't think we disagree all that much, we're just using the same word to describe different things. I think the thing I called humility - the kind of draconian, overbearing anti-self-charity that scrupulous people experience - that is a bad thing. And I think the thing you called humility - acceptance of your flaws, self-compassion - that is a very good thing. In fact, I ended the essay with a call for more self-charity from (what I called) humble people. And I've been trying to practice self-compassion since writing that essay, and it's been a boon for my mental health.
(By far the most useful technique, for what it's worth, has been "stepping outside of myself", i.e. trying to see myself as just another person. I find when I do something embarrassing it's the worst thing to have ever happened, and obviously all my friends are thinking about how stupid I am and have lowered their opinion of me accordingly...whereas when a friend does something embarrassing, it maybe warrants a laugh, but then it seems totally irrelevant and has absolutely no bearing on what I think of them as a person. I now try as much as possible to look at myself with that second mindset.)
Anyway, language quibbles aside, I agree with this post.
Thanks for the link to your post. I also think we only disagree on definitions.
I agree that self-compassion is a crucial ingredient. This is the distinction I was pointing at with "while focusing on imperfections without compassion can lead to beating yourself up". Humility says "I am flawed and it's ok", while self-loathing is more like "I am flawed and I should be punished". The latter actually generates shame instead of reducing it.
I think that seeking external validation by appearing humble is completely orthogonal to humility as an internal state or attitude you can take towards yourself (my post focuses on the latter). This signaling / social dimension of humility seems to add a lot of confusion to an already fuzzy concept.