I'll be there.
Singularity Institute Party Feb 22nd
Jasen himself explained it as a desire to prove that SIAI people were especially cooperative and especially good at game theory, which I suppose worked.
Close; I was trying more to prove that I could get the Visiting Fellows to be especially cooperative than to prove that they already were. I viewed it more as a personal challenge. I was also thinking about the long-term, real-world consequences of the game's outcome. It was far more important to me that SIAI be capable of effective cooperation and coordination than that I win a board game, and I thought rallying the team to stick together would be a good team-building exercise. Also, if I actually imagine myself in the real-world situation the game is loosely based on, I would hugely prefer splitting the world with five of my friends to risking everything to get the whole thing. If I delve into my psychology a bit more, I must admit that I tend to dislike losing a lot more than I like getting first place. Emotionally, ties tend to feel almost as good as flat-out wins to me.
Finally, an amusing thing to note about that game is that, before it started, without telling anyone, I intentionally became sufficiently intoxicated that I could barely understand the rules (most people can't seem to tell unless I tell them first, which I find hilarious). This meant that my only hope of not losing was to forge a powerful alliance.
Bye Bye Benton: November Less Wrong Meetup
As some of you may know, SIAI is in the process of moving our Visiting Fellows Program to a larger and more permanent location in Berkeley. Nothing is final yet, but however things turn out, November will be the last month we spend at 3755 Benton Street. In honor of the house's proud history, we'll be throwing one final Less Wrong meetup this Saturday, the 13th of November, starting at 6pm. Come meet the SingInst staff, the Visiting Fellows, and your fellow Less Wrong readers for one final party in Santa Clara!
As usual, food and drink shall be provided.
Please RSVP at the meetup.com page if you plan to attend.
On a related note, a friend of ours named John Ku has negotiated a donation to SIAI of 20% of the stock of his company, MetaSpring. MetaSpring is a digital marketing consultancy that mostly sells a service of rating the effectiveness of advertising campaigns, and they are currently hiring. They are looking for experience with:
- Ruby on Rails
- MySQL / SQL
- web design / user interface
- JavaScript
- WordPress
- PHP
- web programming in general
- sales
- client communication
- Unix system administration
- Photoshop / slicing
- HTML & CSS
- Drupal
If you're interested, contact John Ku at ku@johnsku.com.
Call for Volunteers: Rationalists with Non-Traditional Skills
SIAI's Fellows Program is looking for rationalists with skills. More specifically, we're looking for rationalists with skills outside our usual cluster who are interested in donating their time by teaching those skills and communicating the mindsets that lead to their development. If possible, we'd like to learn from specialists who "speak our language," or who are at least practiced in resolving confusion and disagreement using reason and evidence. Broadly, we're interested in developing practical intuitions, doing practical things, and building awareness and culture around the detail-intensive technical subskills of emotional self-awareness and social fluency. More specifically:
September Less Wrong Meetup aka Eliezer's Bayesian Birthday Bash
In honor of Eliezer's Birthday, there will be a Less Wrong meetup at 5:00PM this Saturday, the 11th of September, at the SIAI Visiting Fellows house (3755 Benton St, Santa Clara CA). Come meet Eliezer and your fellow Less Wrong / Overcoming Bias members, have cool conversations, eat good food and plentiful snacks (including birthday cake of course!), and scheme out ways to make the world more Bayesian.
As usual, the meet-up will be party-like and full of small group conversations. Rationality games may also be present. Newcomers are welcome. Feel free to bring food to share, or not.
Please RSVP at the meetup.com page if you plan to attend.
Sorry for the last-minute notice; I only found out today that the 11th is Eliezer's birthday.
Okay, after thinking it over for the last hour, I now have a concrete statement to make about my willingness to donate to SIAI. I promise to donate $2000 to SIAI in a year's time if by that time SIAI has secured a 2-star rating from GiveWell for donors who are interested in existential risk.
I will urge GiveWell to evaluate existential risk charities with a view toward making this condition a fair one. If after a year's time GiveWell has not yet evaluated SIAI, my offer will still stand.
[Edit: Slightly rephrased, removed condition involving quotes which had been taken out of context.]
Jonah,
Thanks for expressing an interest in donating to SIAI.
(a) SIAI has secured a 2-star rating from GiveWell for donors who are interested in existential risk.
I assure you that we are very interested in getting the GiveWell stamp of approval. Michael Vassar and Anna Salamon have corresponded with Holden Karnofsky on the matter, and we're trying to figure out the best way to proceed.
If it were just a matter of SIAI becoming more transparent and producing a larger number of clear outputs, I would say that it is only a matter of time. As it stands, GiveWell does not know how to objectively evaluate activities focused on existential risk reduction. For that matter, neither do we, at least not directly: we don't know of any way to tell what percentage of the worlds that branch off from this one go on to flourish and what percentage go on to die. If GiveWell decides not to endorse charities focused on existential risk reduction as a general policy, there is little we can do about it. Would you consider an alternative set of criteria if this turns out to be the case?
We think that UFAI is the largest known existential risk and that the most complete solution, FAI, addresses all other known risks (as well as the goals of every other charitable cause) as a special case. I don't mean to imply that AI is the only risk worth addressing at the moment, but it certainly seems to us to be the best value on the margin. We are working to make the construction of UFAI less likely through outreach (conferences like the Summit, academic publications, blog posts like The Sequences, popular books, and personal communication) and to make the construction of FAI more likely through direct work on FAI theory and the identification and recruitment of more people capable of working on FAI. We've met and worked with several promising candidates in the past few months. We'll be informing interested folk about our specific accomplishments in our new monthly newsletter, the June/July issue of which was sent out a few weeks ago. You can sign up here.
(b) You publicly apologize for and qualify your statements quoted by XiXiDu here. I believe that these statements are very bad for public relations. Even if true, they are only true at the margin, and so at the very least they need to be qualified in that way.
It would have been a good idea for you to watch the videos yourself before assuming that XiXiDu's summaries (not actual quotes, despite the quotation marks that surrounded them) were accurate. Eliezer makes it very clear, over and over, that he is speaking about the value of contributions at the margin. As others have already pointed out, it should not be surprising that we think the best way to "help save the human race" is to contribute to FAI being built before UFAI. If we thought there was another higher-value project then we would be working on that. Really, we would. Everyone at SIAI is an aspiring rationalist first and singularitarian second.
I was the main organizer for the NYC LW/OB group until I moved out to the Bay Area a few weeks ago. From my experience, if you want to encourage people to get together with some frequency you need to make doing so require as little effort and coordination as possible. The way I did it was as follows:
We started a Google group that everyone interested in the meetups signed up for, so that we could contact each other easily.
I picked an easily accessible location and two times per month (second Saturdays at 11am and fourth Tuesdays at 6pm) on which meetups would always occur. I promised to show up at both times every month for at least two hours, regardless of whether or not anyone else showed up. I figured the worst that could happen was that I'd have two hours of peace and quiet to read or get some work done, and that if at least one person showed up we'd almost certainly have a great time.
We've been doing that for about 9 months and I've never been left alone. In fact, we found that twice a month wasn't enough and started meeting every week a few months ago.
At the moment, only one meetup per month is announced to the "public" through the meetup.com group (so that we don't have to explain all of the basics to new people at every meeting); one is for general, unfocused discussion, and two are rationality-themed game nights (such as poker training).
You should probably set up the Google/meetup.com groups first, poll people on what times work best for them and what kinds of activities they're most interested in, and then take it from there.
I wish you the best of luck, and I'd be happy to answer any other questions you might have.
Offhand, I'm guessing the very first response ought to be "Huzzah! I caught myself procrastinating!" in order to get the reverse version of the effect I mentioned. Then go on to "what would I like to do?"
I've been able to implement something like this to great effect. Every time I notice that I've been behaving in a very silly way, I smile broadly, laugh out loud, and say "Ha ha! Gotcha!" or something to that effect. I only allow myself to do this in cases where I've actually gained new information: noticed a new flaw, noticed an old flaw come up in a new situation, realized that an old behavior is in fact undesirable, etc. This positively reinforces noticing my flaws without doing so to the undesirable behavior itself.
This is even more effective when implemented in response to someone else pointing out one of my flaws. It's a little more difficult to carry out because I have to suppress a reflex to retaliate/defend myself that doesn't come up as much when I'm my own critic, but when I succeed it almost completely eliminates the social awkwardness that normally comes with someone critiquing me in public.