So Eliezer explains why rationalization doesn't work, then makes this extremely convincing case why John Baez shouldn't be spending effort on environmentalism. Baez replies with "Ably argued!" and presumably returns to his daily pursuits.
Please don't assume that this interview with Yudkowsky, or indeed any of the interviews I'm carrying out on This Week's Finds, are having no effect on my activity. Last summer I decided to quit my "daily pursuits" (n-category theory, quantum gravity and the like), and I began interviewing people to help figure out my new life. I interviewed Yudkowsky last fall, and it helped me decide that I should not be doing "environmentalism" in the customary sense. It only makes sense for me to be doing something new and different. But since I've spent 40 years getting good at math, it should involve math.
If you look at what I've been doing since then, you'll see that it's new and different, and it involves math. If you're curious about what it actually is, or whether it's something worth doing, please ask over at week313. I'm actually a bit disappointed at how little discussion of these issues has occurred so far in that conversation.
Since I was interviewing Yudkowsky rather than the other way around, I didn't explain my views - I was getting him to explain his.
Would you be willing to write a blog post reviewing his arguments and explaining why you either reject them, don't understand them, or accept them and will start working to mitigate risks from AI? It would be valuable to have someone like you, who is not deeply involved with the SIAI (Singularity Institute) or LessWrong.com, write a critique of their arguments and objectives. I myself don't have the education (yet) to do so, and I would welcome any reassurance that would help me take action.
If you don't have the time to write a blog post, maybe you can answer just the following question: if someone were going to donate $100k and you could pick the charity, would you choose the SIAI? A yes/no answer if you're too busy, a short explanation if you have the time. Thank you!
For now, you might be interested to read about Gregory Benford's assessment of the short-term future, which somewhat resembles my own.
You mean, "before we take on the galaxy, let’s do a smaller problem"? So you don't think that we'll have to face risks from AI before climate change takes a larger toll? You don't think that working on AGI means working on the best possible solution to the problem of climate change? And even if we had to start taking active measures against climate change in the 2020s, you don't think we should rather spend that time on AI because we can survive a warmer world but no runaway AI? Gregory Benford writes that "we still barely glimpse the horrors we could be visiting on our children and their grandchildren’s grandchildren". That sounds to me like he assumes that there will be grandchildren, which might not be the case if some kind of AGI doesn't take care of a lot of other problems we'll have to face soon.
A big problem is that different communities of intelligent people have very different views on which threats and opportunities are most important, and these communities don't talk to each other enough and think clearly enough to come to agreement, even on factual issues.
If I told you that all you have to do is read the LessWrong Sequences and the publications of the SIAI to agree that working on AI is much more important than climate change, would you take the time and do it?
Since XiXiDu also asked this question on my blog, I answered over there.
If I told you that all you have to do is read the LessWrong Sequences and the publications of the SIAI to agree that working on AI is much more important than climate change, would you take the time and do it?
I have read most of those things, and indeed I've been interested in AI and the possibility of a singularity at least since college (say, 1980). That's why I interviewed Yudkowsky.
A reference to a paper by David Wolpert and Gregory Benford on Newcomb's paradox
Isn't the whole issue with Newcomb's paradox the fact that if you take two boxes, Omega will predict it, and if you take one box, Omega will predict that too? It doesn't matter if both boxes are transparent: you'll only take one if you did indeed precommit to taking one box (or if you're the kind of person who one-boxes 'naturally'). Ever since I first read about it, I've been puzzled by why people think there is a paradox, or that the problem is difficult. Maybe I just don't get it.
In my interview of Gregory Benford I wrote:
If you say you’d take both boxes, I’ll argue that’s stupid: everyone who did that so far got just a thousand dollars, while the folks who took only box B got a million!
If you say you’d take only box B, I’ll argue that’s stupid: there has got to be more money in both boxes than in just one of them!
It sounds like you find the second argument so unconvincing that you don't see why people consider it a paradox.
For what it's worth, I'd take only one box.
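The track-record argument for one-boxing can be sketched with a small simulation. The 99% predictor accuracy below is my own assumption for illustration; the thread itself treats Omega as a (near-)perfect predictor:

```python
import random

random.seed(0)

def payoff(one_boxer, predictor_accuracy=0.99):
    """Return the player's winnings for one round of Newcomb's problem.

    Omega predicts the player's choice with the given accuracy and puts
    $1,000,000 in box B only if it predicts one-boxing. Box A always
    contains $1,000. A one-boxer takes only box B; a two-boxer takes both.
    """
    prediction_correct = random.random() < predictor_accuracy
    predicted_one_box = one_boxer if prediction_correct else not one_boxer
    box_b = 1_000_000 if predicted_one_box else 0
    return box_b if one_boxer else box_b + 1_000

n = 100_000
avg_one = sum(payoff(True) for _ in range(n)) / n
avg_two = sum(payoff(False) for _ in range(n)) / n
print(f"average one-boxer winnings: ${avg_one:,.0f}")
print(f"average two-boxer winnings: ${avg_two:,.0f}")
```

With any reasonably accurate predictor, the one-boxers' average winnings are close to $1,000,000 while the two-boxers' hover near $11,000, which is exactly the "everyone who took both boxes so far got just a thousand dollars" observation from the interview.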
The upcoming This Week's Finds: Week 311 is an interview with Eliezer Yudkowsky by John Baez.
I've been waiting for this for so long. I really hope that John Baez is going to explain himself and argue for why he is more concerned with global warming than with risks from AI. So far there exists literally no valuable third-party critique.
XiXiDu wrote:
I really hope that John Baez is going to explain himself and argue for why he is more concerned with global warming than risks from AI.
Since I was interviewing Yudkowsky rather than the other way around, I didn't explain my views - I was getting him to explain his. But the last part of this interview will touch on global warming, and if you want to ask me questions, that would be a great time to do it.
(Week 311 is just the first part of a multi-part interview.)
For now, you might be interested to read about Gregory Benford's assessment of the short-term future, which somewhat resembles my own.
Tim Tyler wrote:
It looks like a conventional "confused environmentalist" prioritisation to me.
I'm probably confused (who isn't?), but I doubt I'm conventional. If I were, I probably wouldn't be so eager to solicit the views of Benford, Yudkowsky and Drexler on my blog. A big problem is that different communities of intelligent people have very different views on which threats and opportunities are most important, and these communities don't talk to each other enough and think clearly enough to come to agreement, even on factual issues. I'd like to make a dent in that problem.
The list you cite is not the explanation that XiXiDu seeks.
What do you believe because others believe it, even though your own evidence and reasoning ("impressions") point the other way?
I don't know. If I did, I'd probably try to do something about it. But my subconscious mind seems to have prevented me from noticing examples. I don't doubt that they exist, lurking behind the blind spot of self-delusion. But I can't name a single one.
Feynman: "The first principle is that you must not fool yourself - and you are the easiest person to fool."
Thurston said:
Quickness is helpful in mathematics, but it is only one of the qualities which is helpful.
Gowers said:
The most profound contributions to mathematics are often made by tortoises rather than hares.
Gelfand said it in a funnier way:
You have to be fast only to catch fleas.
The most impressive quality I've seen in mathematicians (including students) is the capacity to call themselves "confused" until they actually understand completely.
Most of us, myself included, are tempted to say we "understand" as soon as we possibly can, to avoid being shamed. People who successfully learn mathematics admit they are "confused" until they understand what's in the textbook. People who successfully create mathematics have such a finely tuned sense of "confusion" that it may not be until they have created new foundations and concepts that they feel they understand.
Even among mathematicians who project more of a CEO-type, confident persona, it seems that the professors say "I don't understand" more than the students.
It isn't humility, exactly, it's a skill. The ability to continue feeling that something is unclear long after everyone else has decided that everything is wrapped up. You don't have to have a low opinion of your own abilities to have this skill. You just have to have a tolerance for doubt much higher than that of most humans, who like to decide "yes" or "no" as quickly as possible, and simply don't care that much whether they're wrong or right.
I know this, because it's a weakness of mine. I'm probably more tolerant of doubt and sensitive to confusion than the average person, but I am not as good at being confused as a good mathematician.
It's a bit easier in math than other subjects to know when you're right and when you're not. That makes it a bit easier to know when you understand something and when you don't. And then it quickly becomes clear that pretending to understand something is counterproductive. It's much better to know and admit exactly how much you understand.
And the best mathematicians can be real masters of "not understanding". Even when they've reached the shallow or rote level of understanding that most of us consider "understanding", they are dissatisfied and say they don't understand - because they know the feeling of deep understanding, and they aren't content until they get that.
Gelfand was a great Russian mathematician who ran a seminar in Moscow for many years. Here's a little quote from Simon Gindikin about Gelfand's seminar, and Gelfand's gift for "not understanding":
One cannot avoid mentioning that the general attitude to the seminar was far from unanimous. Criticism mainly concerned its style, which was rather unusual for a scientific seminar. It was a kind of a theater with a unique stage director playing the leading role in the performance and organizing the supporting cast, most of whom had the highest qualifications. I use this metaphor with the utmost seriousness, without any intention to mean that the seminar was some sort of a spectacle. Gelfand had chosen the hardest and most dangerous genre: to demonstrate in public how he understood mathematics. It was an open lesson in the grasping of mathematics by one of the most amazing mathematicians of our time. This role could only be played under the most favorable conditions: the genre dictates the rules of the game, which are not always very convenient for the listeners. This means, for example, that the leader follows only his own intuition in the final choice of the topics of the talks, interrupts them with comments and questions (a privilege not granted to other participants) [....] All this is done with extraordinary generosity, a true passion for mathematics.
Let me recall some of the stage director's stratagems. An important feature was improvisations of various kinds. The course of the seminar could change dramatically at any moment. Another important mise en scène involved the "trial listener" game, in which one of the participants (this could be a student as well as a professor) was instructed to keep informing the seminar of his understanding of the talk, and whenever that information was negative, that part of the report would be repeated. A well-qualified trial listener could usually feel when the head of the seminar wanted an occasion for such a repetition. Also, Gelfand himself had the faculty of being "unable to understand" in situations when everyone around was sure that everything was clear. What extraordinary vistas were opened to the listeners, and sometimes even to the mathematician giving the talk, by this ability not to understand. Gelfand liked that old story of the professor complaining about his students: "Fantastically stupid students - five times I repeat proof, already I understand it myself, and still they don't get it."
In my 25 years of being a professional mathematician I've found many (though certainly not all) mathematicians to be acutely aware of status, particularly those who work at high-status institutions. If you are a research mathematician your job is to be smart. To get a good job, you need to convince other people that you are smart. So, there is quite a well-developed "pecking order" in mathematics.
I believe the appearance of "humility" in the quotes here arises not from lack of concern with status, but rather various other factors:
1) Most of us know that there are mathematicians much better than us: mathematicians who could, with their little pinkie on a lazy Sunday afternoon, accomplish deeds that we might struggle vainly for years to achieve.
2) Many of us realize that it's wiser to emphasize our shortcomings than boast of our accomplishments.
By the way: people quoted in this article are all extremely high in status, and indeed it's mostly such mathematicians who wind up talking about themselves publicly, answering questions like "Can you remember when and how you became aware of your exceptional mathematical talent?" Every mathematician worth his or her salt knows of Hironaka, Langlands, Gromov, Thurston and Grothendieck. So these are not typical mathematicians: they are our heroes, our gods.
It is nice having humble gods. But still, they're not stupid: they know they're our gods.
The author of this post pointed out that he said "It's noticeably less common for mathematicians of the highest caliber to engage in status games than members of the general population do." Somehow I hadn't noticed that.
I'm not sure how this affects my reaction, but I wouldn't have written quite what I wrote if I'd noticed that qualifier.
His points about risk aversion are confused. If you make choices consistently, you are maximizing the expected value of some function, which we call "utility" (von Neumann and Morgenstern). Yes, utility may grow sublinearly with respect to some other real-world variable like money or number of happy babies, but utility itself cannot have diminishing marginal utility, and you cannot be risk-averse with respect to your utility. One big bet vs. many small bets is also irrelevant: when you optimize your decision over one big bet, you either maximize expected utility or exhibit circular preferences.
Unfortunately in real life many important choices are made just once, taken from a set of choices that is not well-delineated (because we don't have time to list them), in a situation where we don't have the resources to rank all these choices. In these cases, the hypotheses of von Neumann-Morgenstern utility theorem don't apply: the set of choices is unknown and so is the ordering, even on the elements we know are members of the set.
This is especially the case for anyone changing their career.
I agree that my remark about risk aversion was poorly stated. What I meant is that if I have a choice either to do something that has a very tiny chance of having a very large good effect (e.g., working on friendly AI and possibly preventing a hostile takeover of the world by nasty AI) or to do something with a high chance of having a small good effect (e.g., teaching math to university students), I may take the latter option where others may take the former. Neither need be irrational.