Hey Peter,
Thanks for writing this.
I’m the primary researcher working on Connection Theory at Leverage. I don’t have time at the moment to give an in-depth argument for why I consider CT worth investigating, but I will briefly respond to your post:
Objections I & II:
I think that your skeptical position is reasonable given your current state of knowledge. I agree that the existing CT documents do not make a persuasive case.
The CT research program has not yet begun. The evidence presented in the CT documents is from preliminary investigations carrie...
I'll chime in to agree with both lukeprog in pointing out that the interview is very outdated and with Holden in correcting Louie's account of the circumstances surrounding it.
Awesome, I'm very interested in sharing notes, particularly since you've been practicing meditation a lot longer than I have.
I'd love to chat with you on Skype if you have the time. Feel free to send me an email at jasen@intelligence.org if you'd like to schedule a time.
First of all, thank you so much for posting this. I've been contemplating composing a similar post for a while now but haven't because I did not feel like my experience was sufficiently extensive or my understanding was sufficiently deep. I eagerly anticipate future posts.
That said, I'm a bit puzzled by your framing of this domain as "arational." Rationality, at least as LW has been using the word, refers to the art of obtaining true beliefs and making good decisions, not following any particular method. Your attitude and behavior with regard...
I'm a newly registered member of LW (long-time lurker) and was thinking of posting about this very topic. Like many in the community, I have a background in science / math / philosophy, but unlike many, I have also spent many years working to understand what Jasen calls the "Buddhist claim" experientially (i.e. through meditation) and being involved with the contemporary traditions that emphasize attaining that understanding. I see myself as an "insider" straddling both communities, well-situated to talk about what Buddhists are going o...
Attention: Anyone still interested in attending the course must get their application in by midnight on Friday the 8th of April. I would like to make the final decision about who to accept by mid-April and need to finish interviewing applicants before then.
But "produc[ing] formidable rationalists" sounds like it's meant to make the world better in a generalized way, by producing people who can shine the light of rationality into every dark corner, et cetera.
Precisely. The Singularity Institute was founded due to Eliezer's belief that trying to build FAI was the best strategy for making the world a better place. That is the goal. FAI is just a sub-goal. There is still consensus that FAI is the most promising route, but it does not seem wise to put all of our eggs in one basket. We can't do a...
Good question. I haven't quite figured this out yet, but one solution is to present everyone we are seriously considering with as much concrete information about the activities as we can and then give each of them a fixed number of "outs," each of which can be used to get out of one activity.
Definitely all-consuming.
OK. What's the purpose of having it be all-consuming? Are you selecting for people who are truly committed? Are there returns to scale? Are you trying to break people out of old habits by denying them time in which to indulge them?
Definitely apply, but please note your availability in your answer to the "why are you interested in the program?" question.
It will definitely cost us money but, due to its experimental nature, will be free for all participants for this iteration at least. If we continue offering it in the future, we will probably charge money and offer scholarships.
How much is it likely to cost in the future? That is, what's the opportunity cost of not applying now? An approximate answer is fine.
Edited post.
Congrats indeed!
We'll definitely be writing up a detailed curriculum and postmortem for internal purposes and I expect we'll want to make most if not all of it publicly available.
Probably, though I'm not sure when.
Whoops, thank you. Post edited.
That book was part of what gave me the idea. I expect most of the exercises will come from it.
Preach it, brother!
;-)
I'll be there.
I've been able to implement something like this to great effect. Every time I notice that I've been behaving in a very silly way, I smile broadly, laugh out loud and say "Ha ha! Gotcha!" or something to that effect. I only allow myself to do this in cases where I've actually gained new information: noticed a new flaw, noticed an old flaw come up in a new situation, realized that an old behavior is in fact undesirable, etc. This positively reinforces noticing my flaws without reinforcing the undesirable behavior itself.
This is even more effe...
I'll be there.
Jasen himself explained it as a desire to prove that SIAI people were especially cooperative and especially good at game theory, which I suppose worked.
Close, I was more trying to prove that I could get the Visiting Fellows to be especially cooperative than trying to prove that they were normally especially cooperative. I viewed it more as a personal challenge. I was also thinking about the long-term, real-world consequences of the game's outcome. It was far more important to me that SIAI be capable of effective cooperation and coordination than that I...
On a related note, a friend of ours named John Ku has negotiated a donation to SIAI of 20% of the stock of his company, MetaSpring. MetaSpring is a digital marketing consultancy that mostly sells a service of rating the effectiveness of advertising campaigns, and they are currently hiring. They are looking for experience with:
- Ruby on Rails
- MySQL / SQL
- web design / user interface
- JavaScript
- WordPress
- PHP
- web programming in general
- sales
- client communication
- Unix system administration
- Photoshop / slicing
- HTML & CSS
- Drupal
If you're interested, contact John Ku at ku@johnsku.com
Jonah,
Thanks for expressing an interest in donating to SIAI.
(a) SIAI has secured a 2-star rating from GiveWell for donors who are interested in existential risk.
I assure you that we are very interested in getting the GiveWell stamp of approval. Michael Vassar and Anna Salamon have corresponded with Holden Karnofsky on the matter and we're trying to figure out the best way to proceed.
If it were just a matter of SIAI becoming more transparent and producing a larger number of clear outputs I would say that it is only a matter of time. As it stands, ...
If GiveWell decides not to endorse charities focused on existential risk reduction as a general policy, there is little we can do about it. Would you consider an alternative set of criteria if this turns out to be the case?
Yes, I would consider an alternative set of criteria if this turns out to be the case.
I have long felt that GiveWell places too much emphasis on demonstrated impact and believe that in doing so GiveWell may be missing some of the highest expected value opportunities for donors.
...It would have been a good idea for you to watch the vi
I was the main organizer for the NYC LW/OB group until I moved out to the Bay Area a few weeks ago. From my experience, if you want to encourage people to get together with some frequency you need to make doing so require as little effort and coordination as possible. The way I did it was as follows:
We started a google group that everyone interested in the meetups signed up for, so that we could contact each other easily.
I picked an easily accessible location and two times per month (second Saturdays at 11am and fourth Tuesdays at 6pm) on which meetups wou...
Yeah, it will be recorded. I'll add a link to the post when the video is up.
Thank you for posting this!
Now I feel bad about not spreading the word sooner...but better late than never I suppose.
So far, attendance has ranged from 4 to 8 people per meetup. If enough people are interested in meeting regularly we might have to switch venues. I had a good experience at the Moonstruck Diner during another meetup, so that would probably be my second choice:
http://nymag.com/listings/restaurant/moonstruck-diner/
It was quiet and cheap, had a large empty dining area at the back, and they left us alone to talk for over four hours. If anyone ha...
This is an excellent opportunity to announce that I recently organized an OB/LW discussion group that meets in NYC twice a month. We had been meeting sporadically ever since Robin's visit back in April. The regular meetings only started about a month ago and have been great fun. Here is the google group we've been using to organize them:
http://groups.google.com/group/overcomingbiasnyc
We meet every 2nd Saturday at 11:00am and every 4th Tuesday at 6:00pm at Georgia's Bake Shop (on the corner of 89th street and Broadway). The deal is that I show up every ...
There are cases where nature penalizes the rational. For instance, revenge is irrational, but being thought of as someone who would take revenge confers advantages.
I would generally avoid calling a behavior irrational without providing specific context. Revenge is no more irrational than a peacock's tail. They are both costly signals that can result in a significant boost to your reputation in the right social context...if you are good enough to pull them off.
Thanks for the info PJ!
PCT looks very interesting, and your EPIC goal framework strikes me as intuitively plausible. The current list of IGs that we reference is not so much part of CT as an empirical finding from our limited experience building CT charts. Neither Geoff nor I believe that all of them are actually intrinsic. It is entirely possible that we and our subjects are simply insufficiently experienced to penetrate below them. It looks like I've got a lot of reading to do :-)