Some comments on the recruiting plan:
Regarding 5, I would think an important subskill would be recognizing arbitrariness in conceptual distinctions, e.g. between belief and preference, agent and environment, computation and context, ethics and meta-ethics, et cetera. Relatedly, not taking existing conceptual frameworks and their distinctions as the word of God. The word of von Neumann is a lot like the word of God, but still not quite.
By the way I love comments like yours here that emphasize moral uncertainty.
especially skilled in maths, probably at the IMO medal-winning level
What is with you guys and the math olympiad?
Are successes at the IMO a reliable and objective measure of the skills you need?
Well, of course, this is a major filter for intelligence, creativity and plain math "basic front kick" proficiency. However, you also exclude lots and lots of people who may possess the necessary skills but did not choose to enter the IMO, were not aware of it, had other personal commitments, or simply procrastinated too much, etc. Also, the skills IMO medalists have acquired so far are in no way a guarantee that more skills will follow, and may be the result of good teachers or enthusiastic parents. Let me present some weak evidence; behold, a personal anecdote!
At the behest of my chemistry teacher, and owing to my school's policy in general, I entered one of these sciency olympiads a while ago. I procrastinated over the first round, which was a homework assignment, got it in just in time and not quite completed, but was allowed into the next round. From there on, I made it to the national team and won a medal at the international competition. Looking back on what I did back then, I'd say the questions were quite easy, not at a level I'd describe as requiring serious skill. Of course I've continued to learn much...
Another data point: I have a gold IMO medal (only 4 people in my country ever had it), and in the following 20 years I never had a phone call or an e-mail saying: "We have this project where we need math talents, and I saw your name online, so if you are interested I would like to tell you more details."
No one cares. Seems to me the only predictor companies use is: "Did you do the same kind of work at some previous company? How many years?" and then the greater number wins. (Which effectively means that one is paid for their age and their ability to pick the right technology when they finish university and stick with it. Kinda depressing.)
EDIT: As a sidenote, I have not used my math skills significantly since university, so in my case the IMO medal is probably not so good a predictor now.
No one cares. Seems to me the only predictor companies use is: "Did you do the same kind of work at some previous company? How many years?" and then the greater number wins. (Which effectively means that one is paid for their age and their ability to pick the right technology when they finish university and stick with it. Kinda depressing.)
Cynic. You're neglecting the influence of a pretty face, good clothes, and a bit of charm!
No, we were IMO fetishists before we met Paul.
If people know of stronger predictors of raw math ability than IMO performance, Putnam performance, and early-age 800 on Math SAT, I'd like to know what they are.
The IMO/IOI and qualification processes for them seem to be useful as early indicators of general intelligence; they obviously don't capture everyone or even a huge fraction of all comparably smart people, but they seem to have fewer false positives by far than almost any other external indicators until research careers begin in earnest.
We used contests heavily in the screening process for SPARC in part for this reason, and in part because there is a community surrounding contests which the SPARC instructors understand and have credibility with, and which looks like it could actually benefit from exposure to (something like) rationality, which seems like an awesome opportunity.
"IMO medal-winning level" is (I presume) intended to refer to a level of general intelligence / affinity for math. As I said, the majority of people at this level don't in fact have IMO medals, and some IMO medalists aren't at this level. The fact that this descriptor gets used, instead of something like "top 0.01%", probably comes down to a combination of wanting to avoid precision (both about what is being measured and how high the bar is), and wanting to use a measure which reflects we...
What's your evidence that you're a marginal IMO medalist?
I only ask because I've noticed that my perception of a person's actual ability and my perception of their ego seem to be negatively correlated among the people I've met, including Less Wrong users. For example, I once met a guy at a party who told me he wasn't much of a coder; next semester he left undergrad to be the CTO of a highly technical Y Combinator startup.
This is part of the reason why I'm a little skeptical of SI's strategy of telling people "send us an e-mail if you did well on the Putnam"--I would guess a large fraction of those who did well on the Putnam think they did well by pure luck. (Imposter syndrome.) SI might be better off trying to collect info on everyone who thinks they might want to work on FAI, no matter how untalented, and judging relative competence for themselves instead of letting FAI-contributor wannabes judge themselves. (Or at least specifying a score above which one should definitely contact them, regardless of how lucky one feels one got.)
Less Wrong post on mathematicians and status:
http://lesswrong.com/lw/2vb/vanity_and_ambition_in_mathematics/
IAWYC, and so does Wikipedia:
One of the main effects of illusory superiority in IQ is the Downing effect. This describes the tendency of people with a below average IQ to overestimate their IQ, and of people with an above average IQ to underestimate their IQ.
(I personally am a very good example of this, because although I think I'm not terribly bright, I am in fact a genius.)
Hi, I'm new here, so I'm not quite familiar with all the ideas yet. However, I am a young mathematician who has some familiarity with how mathematical theories are developed.
Highly intelligent, and especially skilled in maths, probably at the IMO medal-winning level. (FAI team members will need to create lots of new math during the course of the FAI research initiative.)
It might be much cheaper to accept more average mathematicians who meet the other criteria. Generally, to build a new theory, you'll need a few people who can come up with lots of creative ideas, and lots of people who are capable of understanding the ideas, and then taking those ideas and building them into a fleshed out theory. Many mathematicians accept that they are of the second type, and work towards developing a theory to the point where a new creative type can clearly see what new ideas are needed.
Trustworthy. (Most FAI work is not "Friendliness theory" but instead AI architectures work that could be made more dangerous if released to a wider community that is less concerned with AI safety.)
Shouldn't this just be a subset of number 5? I'm sure you would rather have someone who would lie to keep AI risk low than someone who would tell the truth no matter what the cost.
On mathematician personalities:
http://lesswrong.com/lw/2z7/draft_three_intellectual_temperaments_birds_frogs/
Seems likely that the distribution of personalities among math competition winners isn't the same as the distribution of personalities you'd want in an FAI team.
More potential problems with math competitions. Quote by a Fields medalist:
These contests are a bit like spelling bees. There is some connection between good spelling and good writing, but the winner of the state spelling bee does not necessarily have the talent to become a good writer, and some fine writers are not good spellers.
Under ideal conditions, maybe SI would identify "safe" problems that seemed representative of the problem space as a whole and farm these problems out (in a way similar to how decision theory has been partly farmed out to Less Wrong), inviting the best performers on the safe problems to work on more dangerous problems.
Or SI could simply court proven mathematical researchers.
It should be noted that I'm not a mathematician.
The trouble is that the number of people on Earth who qualify may be very close to 0.
This sounds terribly arrogant until you realize that the requirement of "Deeply committed to AI risk reduction" is on the list. That should probably be emphasized near this statement in the post.
especially skilled in maths, probably at the IMO medal-winning level
(We should distinguish raw intelligence, contest training, and research math training. Raw intelligence is crucial for good performance in both contests and math research, but getting good at research math takes many years of training that IMO winners won't automatically have.)
Strongly agree. I would also make explicit what is implied above, namely that IMO (etc.) winners will in fact tend to have years of training of a different sort: solving (artificially devised) contest problems, which may not be as relevant a skill for SI's purposes.
It seems to me that what SI really wants/needs is a mathematically-sophisticated version of Yudkowsky. Unfortunately, I'm not sure where one goes to find such people. IMO may not be a bad place to start, but one is probably going to have to look elsewhere as well.
Deeply committed to AI risk reduction. (It would be risky to have people who could be pulled off the team—with all their potentially dangerous knowledge—by offers from hedge funds or Google.)
To me this seems naive. Having someone who actually worked at SI on FAI going to Google might be a good thing. It creates a connection between Google and SI. If he sees major issues inside Google that invalidate your work on FAI, he might be able to alert you. If Google does something that is dangerous according to the SI consensus, then he's around to tell them about the danger.
Being open is a good thing.
I'm going to open my clueless mouth again: many of the problems associated with FAI haven't been defined all that well yet. Maybe solving them will require new math, but it seems possible that existing math already provides the necessary tools. Perhaps it would be a good idea to have a generalist who has limited familiarity with a large variety of mathematical tools and can direct the team towards existing tools that might solve their problem. See the section called "The Right Way To Learn Math" in this post for more:
Either way, I think that building toward an FAI team is good for AI risk reduction, even if we decide (later) that an SI-hosted FAI team is not the best thing to do.
I question this assumption. I think that building an FAI team may damage your overall goal of AI risk reduction for several reasons:
By setting yourself up as a competitor to other AGI research efforts, you strongly decrease the chance that they will listen to you. It will be far easier for them to write off your calls for consideration of friendliness issues as self-serving.
You risk unde
I agree but, as I've understood it, they're explicitly saying they won't release any AGI advances they make. What will it do to their credibility to be funding a "secret" AI project?
I honestly worry that this could kill funding for the organization which doesn't seem optimal in any scenario.
Potential Donor: I've been impressed with your work on AI risk. Now, I hear you're also trying to build an AI yourselves. Who do you have working on your team?
SI: Well, we decided to train high schoolers since we couldn't find any researchers we could trust.
PD: Hm, so what about the project lead?
SI: Well, he's done brilliant work on rationality training and wrote a really fantastic Harry Potter fanfic that helped us recruit the high schoolers.
PD: Huh. So, how has the work gone so far?
SI: That's the best part, we're keeping it all secret so that our advances don't fall into the wrong hands. You wouldn't want that, would you?
PD: [backing away slowly] No, of course not... Well, I need to do a little more reading about your organization, but this sounds, um, good...
The idea of getting FAI contributors who are unlikely to ever switch jobs seems like it might be the most stringent hiring requirement. It might be worthwhile to look into people who gain government clearances and then move to a nongovernment job, to see if they abuse the top-secret information they had access to.
Shift the Singularity Summit toward being more directly useful for AI risk reduction, and also toward greater profitability—so that we have at least one funding source that is not donations. (Currently underway.)
This in particular seems to be a good subgoal, and I would be interested in the details of what a more directly useful Singularity Summit looks like, and how you get there. (I attended in 2010, and found it to be fun, somewhat educational, but unfocused; it was also somewhat useful in that it attracted attention to SIAI.)
The kind of people we'd need for an FAI team are:
- Highly intelligent, and especially skilled in maths, probably at the IMO medal-winning level. (FAI team members will need to create lots of new math during the course of the FAI research initiative.)
- Trustworthy. (Most FAI work is not "Friendliness theory" but instead AI architectures work that could be made more dangerous if released to a wider community that is less concerned with AI safety.)
If FAI is or can be made tractable, it will be a technological system: some combination of hardware ...
One immediate explanation is that any time spent studying formal mathematics is a waste of extremely precious higher cortical capacity which rivals are entirely devoting to pure technological study.
Aren't Turing and von Neumann (surely they invented "computers" as much as anyone) counterexamples to your thesis?
Hopefully you have also considered extracting specific limited-scope math problems and farming them out with grants, like you do for papers. This would increase the pool of available talent and not require training them in AI or rationality.
Shouldn't the very first goal be to fully define an ethical theory of friendliness before even starting on a goal system to implement the theory? I have some doubts that an acceptable theory can be formalized. Our ethical systems are so human-centric that formalizing them in rational terms will likely lead to either a very weak human-centric theory with potential loopholes or a general theory of ethics that places no particular importance on the concerns of humans, even large groups of them. For instance, I find it much more likely that a general theor...
I think the other strategy you mentioned, about promoting FAI research by paying known academics to write papers on the topic, is a better idea. It is more plausible, more direct, less cultish, etc.
Trustworthy. (Most FAI work is not "Friendliness theory" but instead AI architectures work that could be made more dangerous if released to a wider community that is less concerned with AI safety.)
Ah, so this explains why there is no source code visible. To be honest, I was a little worried that all the effort was purely going to (very useful) essays and not actually building things, but it is clear that it is kept under wraps. That is such a shame, but I suppose necessary.
One caution worth noting here is that "trustworthiness" and "altruism" may not be traits that are stable across different situations. As I noted in this post, there's good reason to think human behavior evolved to follow conditional rules, so observed trustworthiness and altruism under some conditions may be very poor evidence of Friendliness for superintelligence-coding purposes.
I've seen a couple sources argue that intelligence enhancement will ideally come before AGI. This could deal with the math ability constraint, which seems to be your strongest. Maybe you feel that sponsoring an intelligence enhancement effort would be beyond SI's organizational scope?
Some important aspects of future AI 'friendliness' would probably link up with the greater economy surrounding us; more importantly, it would depend on the nature of the AI's interaction with people, as well as on their behaviour. So, besides the obvious component of mathematics, I feel that some members of the FAI team should also have some background in subjects such as psychology, as well as a general perspective on global issues such as resource management.
Somehow absent from the objectives is "finding out whether SI's existence is at all warranted". You also want people "deeply committed to AI risk reduction", not people deeply committed to, e.g., actually finding the truth (which could be that all the properties of AI you take to be true were a wrong wild guess, which should be considered highly likely given the shot-in-the-dark nature of those guesses). This puts the nail in your coffin completely. A former theologian building a religious organization. Jesus Christ, man.
Also, something else: in the gam...
Series: How to Purchase AI Risk Reduction
A key part of SI's strategy for AI risk reduction is to build toward hosting a Friendly AI development team at the Singularity Institute.
I don't take it to be obvious that an SI-hosted FAI team is the correct path toward the endgame of humanity "winning." That is a matter for much strategic research and debate.
Either way, I think that building toward an FAI team is good for AI risk reduction, even if we decide (later) that an SI-hosted FAI team is not the best thing to do. Why is this so?
Building toward an SI-hosted FAI team means: (1) growing SI into a better organization, and (2) attracting and creating superhero mathematicians.
Both (1) and (2) are useful for AI risk reduction even if an SI-hosted FAI team turns out not to be the best strategy.
This is because achieving part (1) would make SI more effective at whatever it does to reduce AI risk, and achieving part (2) would bring great human resources to the cause of AI risk reduction, which will be useful for a wide range of purposes (FAI team or otherwise).
So, how do we accomplish both these things?
Growing SI into a better organization
Like many (most?) non-profits with less than $1m/yr in funding, SI has had difficulty attracting the top-level executive talent often required to build a highly efficient and effective organization. Luckily, we have made rapid progress on this front in the past 9 months. For example we now have (1) a comprehensive donor database, (2) a strategic plan, (3) a team of remote contractors used to more efficiently complete large and varied projects requiring many different skillsets, (4) an increasingly "best practices" implementation of central management, (5) an office we actually use to work together on projects, and many other improvements.
What else can SI do to become a tighter, larger, and more effective organization?
The key point, of course, is that all these things cost money. They may be "boring," but they are incredibly important.
Attracting and creating superhero mathematicians
The kind of people we'd need for an FAI team are:
- Highly intelligent, and especially skilled in maths, probably at the IMO medal-winning level. (FAI team members will need to create lots of new math during the course of the FAI research initiative.)
- Trustworthy. (Most FAI work is not "Friendliness theory" but instead AI architectures work that could be made more dangerous if released to a wider community that is less concerned with AI safety.)
- Deeply committed to AI risk reduction. (It would be risky to have people who could be pulled off the team—with all their potentially dangerous knowledge—by offers from hedge funds or Google.)
There are other criteria, too, but those are some of the biggest.
We can attract some of the people meeting these criteria by using the methods described in Reaching young math/compsci talent. The trouble is that the number of people on Earth who qualify may be very close to 0 (especially given the "committed to AI risk reduction" criterion).
Thus, we'll need to create some superhero mathematicians.
Math ability seems to be even more "fixed" than the other criteria, so a (very rough) strategy for creating superhero mathematicians might look like this:
All these steps, too, cost money.