Last summer, fifteen Less Wrongers gathered under the auspices of SIAI in a big house in Santa Clara (in the SF Bay Area), with whiteboards, existential-risk-reducing projects, and the ambition to learn and do.
Now the new and better version has arrived. We’re taking folks on a rolling basis to come join in our projects, learn and strategize with us, and consider long-term life paths. Working with this crowd transformed my world; it felt like I was learning to think. I wouldn’t be surprised if it can transform yours.
A representative sample of current projects:
- Research and writing on decision theory, anthropic inference, and other non-dangerous aspects of the foundations of AI;
- The Peter Platzer Popular Book Planning Project;
- Editing and publicizing theuncertainfuture.com;
- Improving the LW wiki, and/or writing good LW posts;
- Getting good popular writing and videos on the web, of the sort that improves understanding of AI risk among key groups;
- Writing academic conference/journal papers to seed academic literatures on questions around AI risk (e.g., takeoff speed, the economics of AI software engineering, genie problems, what kinds of goal systems can easily arise, and what portion of such goal systems would be foreign to human values; theoretical computer science knowledge would be helpful for many of these questions).
Interested, but not sure whether to apply?
Past experience indicates that more than one brilliant, capable person refrained from contacting SIAI because they weren’t sure they were “good enough”. That kind of timidity destroys the world by failing to save it. So if that’s your situation, send us an email. Let us be the ones to say “no”. Glancing at an extra application is cheap; losing out on a capable applicant is expensive.
And if you’re seriously interested in risk reduction, but at a later time or in another capacity, send us an email anyway. Coordinated groups accomplish more than uncoordinated groups, and if you care about risk reduction, we want to know.
What we’re looking for
At bottom, we’re looking for anyone who:
- Is capable (strong ability to get things done);
- Seriously aspires to rationality; and
- Is passionate about reducing existential risk.
Bonus points for any of the following traits (you don’t need them all):
- Experience with management, for example in a position of responsibility in a large organization;
- Good interpersonal and social skills;
- Extraversion, or an interest in other people and in forming strong communities;
- Dazzling brilliance at math or philosophy;
- A history of successful academic paper-writing, and strategic understanding of journal submission processes, grant application processes, etc.;
- Strong general knowledge of science or social science, and the ability to read rapidly and/or to quickly pick up new fields;
- Great writing skills and/or marketing skills;
- Organization, strong ability to keep projects going without much supervision, and the ability to get mundane stuff done in a reliable manner;
- Skill at rapidly and reliably implementing (non-AI) software projects, such as web apps for interactive technological forecasting;
- Web programming skill, or website design skill;
- Legal background;
- A history of successfully pulling off large projects or events;
- Unusual competence of some other sort, in some domain we need but haven’t realized we need;
- Cognitive diversity: any respect in which you're different from the typical LW-er, and in which you're more likely than average to notice something we're missing.
If you think this might be you, send a quick email to jasen@intelligence.org. Include:
- Why you’re interested;
- What particular skills you would bring, and what evidence makes you think you have those skills (you might include a standard resume or c.v.);
- Optionally, any ideas you have for what sorts of projects you might like to be involved in, or how your skillset could help us improve humanity’s long-term odds.
Our application process is fairly informal, so send us a quick email as an initial inquiry, and we can decide whether or not to follow up with more application components.
As to logistics: we cover room, board, and, if you need it, airfare, but offer no other stipend.
Looking forward to hearing from you,
Anna
ETA (as of 3/25/10): We are still accepting applications, for summer and in general. Also, you may wish to check out http://www.singinst.org/grants/challenge#grantproposals for a list of some current projects.
Let me start with my slogan-version of my brand of realism: "Things are a certain way. They are not some other way."
I'll admit up front the limits of this slogan. It fails to address at least the following questions: (1) What are these "things" that are a certain way? (2) What is a "way", of which "things are" one? In particular, (3) what is the ontological status of the other ways, aside from the "certain way" that "things are"? I don't have fully satisfactory answers to these questions. But the following might make my meaning somewhat clearer.
To your questions:
First, let me clear up a possible confusion. I'm using "contingent" in the sense of "not necessarily true or necessarily false". I'm not using it in the sense of "dependent on something else". That said, I take independence, like contingency, to be a theory-relative term. Things just are as they are. In and of themselves, there are no relations of dependence or independence among them.
Theories are mechanisms for generating assertions about how things are or would be under various conditions. A theory can be more or less wrong depending on the accuracy of the assertions that it generates.
Theories are not mere lists of assertions (or "facts"). All theories that I know of induce a structure of dependency among their assertions. That structure is a product of the theory, though. (And this relation between the structure and the theory is itself a product of my theory of theories, and so on.)
I should try to clarify what I mean by a "dependency". I mean something like logical dependency: the relation that holds between two statements, P and Q, when we say "The reason that P is true is that Q is true".
Not all notions of "dependency" are theory-dependent in this sense. I believe that "the way things are" can be analyzed into pieces, and that these pieces objectively stand in certain relations to one another. To give a prosaic example: the cup in front of me is really there, the table in front of me is really there, and the cup really sits in the relation of "being on" the table. If a cat knocks the cup off the table, an objective relation of causation will hold between the cat's pushing the cup and the cup's falling off the table. All this would be the case without my theorizing. These are facts about the way things are; we need a theory to know them, but they aren't mere features of our theory.