It's interesting how much karma people are getting for saying they've sent an email; I suppose this is our way of encouraging people to apply just in case, and not be put off by the fear of not being good enough.
Also, email sent.
I think there's another good reason to give karma to people who send emails: among other things, as EY noted in Why Our Kind Can't Cooperate, there's a pattern in online circles of staying silent when you agree and being loud when you disagree. That sort of thing creates a false impression of what is actually going on.
I've been a Visiting Fellow since early April, and have been writing travel diary entries describing my stay here. Here's a handy overview page for them. Some of them have details about the actual happenings here, and some are just me generically musing about my life and what I want to do after the program. A lot of people claim to have liked them, though, so my musings are apparently not too distractingly prominent.
Currently the overview page only has links to the posts; I'll add brief descriptions shortly.
Just thought I'd add that I learned a hell of a lot at the SIAI visiting fellow program last summer, and came away really understanding what was going on in the field of existential risk and AI, rather than just guessing. Highly recommended.
You have high analytic intelligence, a tendency to win math competitions
Are you sure you want people with that sort of tendency? Having worked with high school students, teaching them how to do actual math and how to think scientifically, I've found that the students who are good at math competitions but not much else are generally not that adaptable when it comes to good thinking. They frequently (not always, but often) only tackle problems that they are confident they can solve, and often lack the ability to adapt to solving a problem of a type too far from what they are used to.
Edited for grammar.
Applied earlier this year; too busy over the summer, but all being well I should be there around September/October. Eeeee!
"SIAI is tackling the world’s most important task -- the task of shaping the Singularity. The task of averting human extinction."
I'd like to see a defense for this claim: that SIAI can actually have a justified confidence in exerting a positive influence on the future, and that this outweighs any alternative present good that could be done with the resources it is using.
As things stand, there is no guarantee that SIAI will get to make a difference, just as you have no guarantee that you will be alive in a week's time. The real question is, do you even believe that unfriendly AI is a threat to the human race, and if so, is there anyone else tackling the problem in even a semi-competent way? If you don't even think unfriendly AI is an issue, that's one sort of discussion, a back-to-basics discussion. But if you do agree it's a potentially terminal problem, then who else is there? Everyone else in AI is a dilettante on this question; AI ethics is always a problem to be solved swiftly and in passing, a distraction from the more exciting business of making machines that can think. SIAI perceive the true seriousness of the issue, and at least have a sensible plan of attack, even if they are woefully underresourced when it comes to making it happen.
I suspect that in fact you're playing devil's advocate a bit, trying to encourage the articulation of a new and better argument in favor of SIAI, but the sort of argument you want doesn't work. SIAI can of course guarantee that there will continue to be Singularity Summits and visiting fellows, and it is reas...
whereas if SIAI had never existed, and an early AI Chernobyl did occur, this would have prompted the governments to take effective measures to regulate AI.
What sort of rogue AI disaster are you envisioning that is big enough to get this attention, but then stops short of wiping out humanity? Keep in mind that this disaster would be driven by a deliberative intelligence.
Probabilistic AI has more applications than stem cells do right now -- Google, for example. But the point I am making is that an application of a technology is a logical factor, whereas people actually respond to emotional factors, like whether it breaks taboos that go back to the Stone Age: anything that involves sex, flesh, blood, overtones of bestiality, overtones of harm to children, trading a sacred good for an unsacred one, etc.
The ideal technology for people to want to ban would involve harvesting a foetus that was purchased from a hooker, then hybridizing it with a pig foetus, then injecting the resultant cells into the gonads of little kids. That technology would get nuked by the public.
The ideal dangerous technology for people to not give a shit about banning would involve a theoretical threat which is hard to understand, has never happened before, involves only nonphysical hazards like information, and has nothing to do with flesh, sex or anything disgusting, or with fire, sharp objects or other natural dangers.
Because my revealed preferences suck. The difference between even what I want in a sort of ordinary and non-transhumanist way and what I have is enormous. I am 150 pounds heavier than I want to be. My revealed preference is to eat regardless of health/size consequences, but I don't want all of the people in the future to be fat. My revealed preference is also to kill people in pooristan so that I can have cheap plastic widgets or food or whatever. I don't want an extrapolation of my akrasiatic actual actions controlling the future of the universe. I suspect the same goes for you.
From the Author's Note:
Now this story has a plot, an arc, and a direction, but it does not have a set pace. What it has are chapters that are fun to write. I started writing this story in part because I'd bogged down on a book I was working on (now debogged), and that means my top priority was to have fun writing again.
From Kaj Sotala:
The other reason is that Eliezer Yudkowsky showed up here on Monday, seeking people's help with the rationality book he's writing. Previously, he wrote a number of immensely high-quality posts in blog format, with the express purpose of turning them into a book later on. But now that he's been trying to work on the book, he has noticed that without the constant feedback he got from writing blog posts, getting anything written has been very slow. So he came here to see if having people watching him write and providing feedback at the same time would help. He did get some stuff written, and at the end he asked me if I could come over to his place on Wednesday. (I'm not entirely sure why I in particular was picked, but hey.) On Wednesday, my being there helped him break his previous daily record for words written on his book, so I visited again on Friday and agreed to also come back on Monday and Tuesday.
Eliezer is not "busy writing his Harry Potter fanfic." He is working on his book on rationality.
New here :(
But how do they plan to stop an AI apocalypse, or is that one of those things they haven't figured out yet? I think the best bet would be to create AI first, and then use it to make safe AI as well as create plans for stopping an AI apocalypse.
Now is the very last minute to apply for a Summer 2010 Visiting Fellowship. If you’ve been interested in SIAI for a while, but haven’t quite managed to make contact -- or if you’re just looking for a good way to spend a week or more of your summer -- drop us a line. See what an SIAI summer might do for you and the world.
(SIAI’s Visiting Fellow program brings volunteers to SIAI for anywhere from a week to three months, to learn, teach, and collaborate. Flights and room and board are covered. We’ve been rolling since June of 2009, with good success.)
Apply because:
Apply especially if:
(You don’t need all of the above; some is fine.)
Don’t be intimidated -- SIAI contains most of the smartest people I’ve ever met, but we’re also a very open community. Err on the side of sending in an application; then, at least we’ll know each other. (Applications for fall and beyond are also welcome; we’re taking Fellows on a rolling basis.)
If you’d like a better idea of what SIAI is, and what we’re aimed at, check out:
1. SIAI's Brief Introduction;
2. The Challenge projects;
3. Our 2009 accomplishments;
4. Videos from past Singularity Summits (the 2010 Summit will happen during this summer’s program, Aug 14-15 in SF; Visiting Fellows will assist);
5. Comments from our last Call for Visiting Fellows; and/or
6. Bios of the 2009 Summer Fellows.
Or just drop me a line. Our application process is informal -- just send me an email at anna at singinst dot org with: (1) a resume/c.v. or similar information; and (2) a few sentences on why you’re applying. And we’ll figure out where to go from there.
Looking forward to hearing from you.