All of BrandonReinhart's Comments + Replies

Donation sent.

I've been very impressed with MIRI's output this year, to the extent that I'm able to judge it. I don't have the domain-specific ability to evaluate the papers, but material is being produced at a sustained pace. I've also read much of the thinking around VAT, related open problems, definitions of concepts like foreseen difficulties... the language and framework for carving up the AI safety problem have really moved forward.

4So8res
Thanks! Our languages and frameworks definitely have been improving greatly over the last year or so, and I'm excited to see where we go now that we've pulled a sizable team together.

Well, I totally missed the diaspora. I read Slate Star Codex (but not the comments) and had no idea people were posting things in other places. It surprises me that it even has a name, the "rationalist diaspora." It seemed to me that people had run out of things to say, or that the booster-rocket thing had played itself out. This is probably because I don't read Discussion, only Main, and as Main received fewer posts I stopped coming to Less Wrong. As "meet up in area X" took over the stream of content, I unsubscribed in my RSS reader. Over the past few... (read more)

When you’re “up,” your current strategy is often weirdly entangled with your overall sense of resolve and commitment—we sometimes have a hard time critically and objectively evaluating parts C, D, and J because flaws in C, D, and J would threaten the whole edifice.

Aside 1: I run into many developers who aren't able to separate their idea from their identity. It tends to make them worse at customer and product oriented thinking. In a high bandwidth collaborative environment, it leads to an assortment of problems. They might not suggest an idea, because ... (read more)

I've always thought that "if I were to give, I should maximize the effectiveness of that giving" but I did not give much nor consider myself an EA. I had a slight tinge of "not sure if EA is a thing I should advocate or adopt." I had the impression that my set of beliefs probably didn't cross over with EAs and I needed to learn more about where those gaps were and why they existed.

Recently, through Robert Wiblin's Facebook, I have encountered more interesting arguments and content in EA. I had no concrete beliefs about EA, only vague impres... (read more)

I'm curious about the same thing as [deleted].

Furthermore, a hard-to-use text may be significantly less hard to use in the classroom, where you have peers, teachers, and other forms of guidance to help digest the material. Recommendations for specialists working at home or outside a classroom might not be the same as the recommendations you would give to someone taking a particular class at Berkeley, or in some other environment where those resources are available.

A flat-out bad textbook might seem really good when it is something else -- the teacher, the method, or the support -- that makes the book work.

"A directed search of the space of diet configurations" just doesn't have the same ring to it.

1Vaniver
I don't know, that title seems pretty awesome to me. (But my research is also in direct search methods, so...)

Thanks for this. I hadn't seen anyone pseudocode this out before. It helps illustrate that interesting problems lie in the scope above (callers of tdt_utility(), etc.) and below (the implementation of tdt(), etc.).

I wonder if there is a rationality exercise in 'write pseudocode for problem descriptions, explore the callers and implementations'.
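For instance, the exercise might start from a minimal, hypothetical sketch like this (the function names and the placeholder dict-based scoring are illustrative assumptions, not the actual pseudocode referred to above):

```python
# Hypothetical sketch: stub out the decision procedure, then ask what
# lives above it (who builds world_model?) and below it (how would
# tdt_utility actually be computed?).
def tdt_utility(world_model, option):
    # Placeholder scoring; a real implementation is the open problem.
    return world_model.get(option, 0)

def tdt(world_model, options):
    # Choose the option with the highest (placeholder) utility.
    return max(options, key=lambda o: tdt_utility(world_model, o))

# Caller level: a toy Newcomb-style payoff table.
print(tdt({"one-box": 1_000_000, "two-box": 1_000}, ["one-box", "two-box"]))
# → one-box
```

Writing even a stub like this forces you to notice which parts of the problem you've been glossing over.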

Doh, I have no idea why my hands type c-y-r instead of c-r-y, thanks.

0Paul Crowley
You're not alone - it's a common mistyping!

Metaphysical terminology is a huge bag of stupid and abstraction, but what I mean by mysticism is something like 'characteristic of a metaphysical belief system.' The mysticism tag tells me that a concept is positing extra facts about how the world works in a way that isn't consistent with my more fundamental, empirical beliefs.

So in my mind I have 'WARNING!' tags (intentionally) attached to mysticism. So when I see something that has the mysticism tag attached to it, I approach cautiously and with a big stick. Or to save time or avoid the risk of being ea... (read more)

5bogus
I can see why you would consider what you call "mysticism", or metaphysical belief systems, a warning sign. However, the use of mystical text forms, which is what I was referring to in my comment, is quite unrelated to this kind of metaphysical and cosmological rigidity. Compare, say, Christian fundamentalists versus Quakers or Unitarian Universalists, or Islamic Wahabis and Qutbis versus Sufis: the most doctrinal and memetically dangerous groups make only sparing use of mystical practices, or forbid them outright.

Atheists and agnostics are obviously a more challenging case, but it appears that at least some neopagans comfortably identify as such, using their supposed metaphysical beliefs as functionally useful aliefs, to be invoked through a ritual whenever the psychical effects of such rituals are desired. There is in fact an account of just such a ritual practice on LW itself involving the Winter Solstice, which is often celebrated as a festival by neopagan groups. It's hard to describe that account as anything other than a mystical ritual aiming to influence the participants in very specific ways and induce a desirable state of mind among them.

In fact, that particular practice may be regarded as extremely foolish and memetically dangerous (because it involves a fairly blatant kind of happy-death-spiral) in a way that other mystical practices are not. I now see that post as a cautionary tale about the dangers of self-mindhacking, but that does not justify its wholesale rejection, particularly in an instructional context where long-term change is in fact desired.

I had a dim view of meditation because my only exposure to meditation prior was in mystic contexts. Here I saw people talk about it separate from that context. My assumption was that if you approached it using Bayes and other tools, you could start to figure out if it was bullshit or not. It doesn't seem unreasonable to me that folks interested could explore it and see what turns up.

Would I choose to do so? No. I have plenty of other low hanging fruit and the amount of non-mystic guidance around meditation seems really minimal, so I'd be paying opportunity... (read more)

2Hul-Gil
Are you familiar with the study (studies) about meditation and brain health? I've seen one or two crop up, but I've not read the actual studies themselves - just summaries. IIRC, it appears to reduce the effects of aging. The other reason I consider meditation possibly worth pursuing is that it appears to be an effective "mindhack" in at least one respect: it can be used to reduce or eliminate unpleasant physical and mental sensations. For example, I believe it's been shown to be effective in reducing stress and anxiety, and - more impressively - chronic pain, or even sensations like "chilly". How useful this is is more debatable: while I'm waiting in line, shivering, I probably won't be able to meditate effectively, or have the time to.
1bogus
It strikes me that you may want to take a step further and consider mysticism itself as a functionally useful brain-hack much like meditation. It's very possible that mystical texts could be used to bring out a mental stance conducive to rationality. The Litanies of Tarski and Gendlin are fairly obvious examples, and I'd even argue that HP:MoR seems to be fulfilling that role as a kind of shared mythology tapping into well-understood tropes, at least for the subset of rationalists who like Harry Potter fanfiction.
2SarahNibs
Hm, super-useful was a bad term. The actual impressions I got were "obviously coherent and not BS, and with high enough mean+variance that the value of investigation is very high." Not necessarily the value of any one specific person investigating, but the value of it being investigated. So I went a bit further than you, to believe the top of the curve was a) grossly useful and b) of non-negligible likelihood.

To address your second point first: the attendees were not a group who strongly shared common beliefs. Some attended due to lots of prior exposure to LW, a very small number were strong x-risk types, several were there only because of recent exposure to things like Harry Potter and were curious, and many were strongly skeptical of x-risks. There were no discussions that struck me as cheering for the team -- and I was actively looking for them!

Some counter evidence, though: there was definitely a higher occurrence of cryonicists and people interested in cryon... (read more)

3curiousepic
I just wanted to say this (esp. the second part) is actually one of the most cogent posts about anything that I've read in quite some time, and as such, a self-referential example of the value of the camp. It should probably be more visible, and I recommend making it a discussion post about deciding whether/when to attend.
-1Paul Crowley
Nitpick - cRYonics. Thanks!

I feel like most of the value I got out of the minicamp, in terms of techniques, came early. This is probably due to a combination of effects:

1) I reached a limit on my ability to internalize what I was learning without some time spent putting things to use.

2) I was not well mentally organized -- my rationality concepts were all individual floating bits, not well sewn together -- so I reached a point where new concepts didn't fit into my map very easily.

I agree things got more disorganized, in fact, I remember on a couple occasions seeing the 'this isn't the o... (read more)

3NoSignalNoNoise
Relatedly, I wonder what minimum consecutive length of time you need to get a lot out of this. How would the returns from three spaced-apart day-long workshops compare to those from a single three-day workshop? (This would of course work better with a group of people who don't need to travel a significant distance.) Is the New York meetup group what happens if you take this sort of thing, break it into small chunks and spread it out over time? People who attended minicamp can probably provide more informed speculation on these matters than I can.
4SarahNibs
Yes, this. Usually this risk is low, but here it was actually quite high. This particular instance was an Ugly example, because the category -- ideas with close temporal association -- was false. But there were many scary examples based on good categories.

The most outlandish was meditation. Remember that other people's brains are part of evidence. Now witness quite a few people who have just spent the last few days on activities that convinced you they are pretty decent (compared to baseline, damn good) at doing their research, discarding bullshit, not strongly espousing ideas they don't strongly hold, examining the ideas they do hold, etc. etc... witness them say with a straight face that meditation, which you (I) assumed was a crock of mystic religion that just took a different turn than the Western religions you're familiar with... witness them say that meditation is super-useful.

Then watch your brain say "Bull! Wait, they're good at things. Maybe not bull? Hey, argument from authority, bull after all! Wait, argument from authority is evidence... :S I... have to take this seriously..." IFS, NVC, nootropics? Guess I have to take them seriously too.

(I exaggerate slightly, but my feelings were stronger than I think they should have been, so that story is in line with how I felt, if not precisely what my beliefs were.)

I attended the 2011 minicamp.

It's been almost a year since I attended. The minicamp has greatly improved me along several dimensions.

  1. I now dress better and have used techniques provided at minicamp to become more relaxed in social situations. I'm more aware of how I'm expressing my body language. It's not perfect control and I've not magically become an extrovert, but I'm better able to interact in random social situations successfully. Concretely: I'm able to sit and stand around people I don't know and feel and present myself as relaxed. I dress better

... (read more)
0Goobahman
Thanks for that. It's fascinating to get a glimpse of what rationality looks like in the real world rather than just in online exchanges. Side note: I'm a big fan of your work. It reassures me to know rationalists are on the team for Dota 2.
4David Althaus
Rather off-topic, but I'm very interested in rational meditation-advice: Did they suggest specific techniques of meditation like e.g. vipassana or did they recommend some particular books on meditation?
6[anonymous]
What about the cost? I would not call spending $1500 in a week insignificant. And as a baseline, I believe that being surrounded for a week by a group of people who believe strongly in some collection of ideas is a risk at least an order of magnitude higher than an economics lecture. I certainly expect that it would have a much stronger effect on me (as it seems it has had on you) than the lecture would, and I would most certainly not take a risk of this magnitude if I have any non-negligible doubts.

What we know about cosmic eschatology makes true immortality seem unlikely, but there's plenty of time (as it were) to develop new theories, make new discoveries, or find possible new solutions. See:

Cirkovic "Forecast for the Next Eon: Applied Cosmology and the Long-Term Fate of Intelligent Beings"

Adams "Long-term astrophysical processes"

for excellent overviews of the current best estimates of how long a human-complexity mind might hope to survive.

Just about everything Cirkovic writes on the subject is really engaging.

More importantly, cryonics is useful for preserving information. (Specifically, the information stored by your brain.) Not all of the information that your body contains is critical, so just storing your spinal cord + brain is quite a bit better than nothing. (And cheaper.) Storing your arms, legs, and other extremities may not be necessary.

(This is one place where the practical reasoning around cryonics hits ugh fields...)

Small-tissue cryonics has been more advanced than whole-body. This may not be the case anymore, but it certainly was, say, four years ago. S... (read more)

Your company plan sounds very much like how Valve is structured. You may find it challenging to maintain your desired organizational structure, given that you also plan to depend on external investment. Also, starting a company with the express goal of selling it as quickly as possible conflicts with several of the ways you might operate your company to achieve a high degree of success. Many of the recent small studios that have gone on to generate large amounts of revenue relative to their size (Terraria, Minecraft, etc.) are independently owned and bui... (read more)

0Squibs
Are you still interested in making this company?
2Daniel_Burfoot
I'd be very interested in a post on the corporate structure, culture, history, etc of Valve. It seems like you guys have figured out a lot of things about how to run a good software company.
2Alexei
I'm not sure I would want to keep the company as flat as Valve does, but that's something I will only be able to judge from experience. If having the explicit goal of selling the company quickly turns out to be harmful, then I will abandon that goal. I don't follow your reasons for joining Valve: it won't give me the amount of money I want, and it won't give me company-managing experience faster than actually starting and managing a company would. Also, I'm not trying to invest in or build a rational organization. I'm completely comfortable with not doing direct work on x-risk reduction. The best method I could come up with for reducing x-risk indirectly is donating money, hence the startup. I'm not interested in having a career; I'm interested in making lots of money.
1endoself
I don't have enough domain knowledge to evaluate this advice that well, but couldn't a company earn much more than an employee, even after accounting for risk? See the discussion at 80,000 Hours for relevant research. Statistics might be different for games than for the industry average, though. 80,000 Hours is generally a good resource for evaluating altruistic career decisions. You can, and maybe, depending on your current state of knowledge, should, talk to a member before starting your company.

Carl Shulman has convinced me that I should do nothing directly (in terms of labor) on the problem of AI risk, and should instead become successful elsewhere and then direct resources as I am able toward the problem.

However I believe I should 1) continue to educate myself on the topic 2) try to learn to be a better rationalist so when I do have resources I can direct them effectively 3) work toward being someone who can gain access to more resources 4) find ways to better optimize my lifestyle.

At one point I seriously considered running off to San Fran to be in... (read more)

I will say that I feel 95% confident that SIAI is not a cult because I spent time there (mjcurzi was there also), learned from their members, observed their processes of teaching rationality, hung out for fun, met other people who were interested, etc. Everyone involved seemed well meaning, curious, critical, etc. No one was blindly following orders. In the realm of teaching rationality, there was much agreement it should be taught, some agreement on how, but total openness to failure and finding alternate methods. I went to the minicamp wondering (along w... (read more)

Thanks for taking the time to respond.

I rebuilt my guitar thing and added today's datapoint and now it seems to be predicting my path properly. Makes more sense now. I think I was confused at first because I had made a custom graph instead of using the "Do More" prefab.

Neat software!

An exercise we ran at minicamp -- which seemed valuable, but requires a partner -- is to take a position and argue for it for some time. Then, at some interval, you switch and argue against the position (while your partner defends it). I used this once at work, but haven't had a chance since. The suggestion to swap sides mid-argument surprised the two participants, but did lead to a more effective discussion.

The exercise sometimes felt forced if the topic was artificial and veered too far off course, or if one side was simply convinced and felt that further artificial defense was unproductive.

Still, it's a riff on this theme.

  • It's a little strange that I have to set up the first data point when I register the goal. I'd rather set up the goal, then do the first day's work. I suppose this is splitting hairs.

I created two goals:

https://www.beeminder.com/greenmarine

Both goals have perfectly tight roads. Is this correct? I would like to give myself some variance, since I'll probably not ever do exactly 180 minutes in a day. To start, I fudged the first day's value at the goal value.

Based on how you describe the system, it looks like I should expect to pay $5 if I practice 179 min... (read more)

0dreeves
Great questions! Here are answers!

Giving yourself variance: Yes. It should become obvious as you add datapoints. The real nitty-gritty about the width of the yellow brick road is here: http://blog.beeminder.com/roadwidth (In short: the width of the road is constructed so that if you're in the correct lane today then you're guaranteed not to lose tomorrow.)

Paying $5: Note that the first attempt is free. You only put money at risk if you go off the road and want to reset. Gory details at http://beeminder.com/money (note especially the part about the exponential fee schedule).

We've hesitated to expose that option since we're not sure how to handle the case of someone deleting a goal they have a contract on. The option does appear if you delete the only datapoint, though.

The goal value is the y-value of the end of your yellow brick road. For weight loss it's obvious -- your goal weight. But for many kinds of goals, like "work out 20 minutes a day", for which the y-axis is the total (cumulative) amount reported, the goal value is probably not what you care about. This is confusing and we're scrambling to find a way to make it less so.

That works beautifully with Beeminder! Just specify your rate as 5*X per week.

Well said. And yes, just use the road dial below your graph to flatten your road for the vacation. If it's a weight-loss goal and you're going on an all-you-can-eat-buffet-hopping vacation, you can even make the road slope up for a while. Always with that one-week delay, of course.

Damn straight: http://beeminder.com/meta

I think harshness/mildness is the wrong question here. It's just trying to help you find the order of magnitude the punishment needs to be for you to treat it as a hard commitment. In some sense, the steeper the curve the less harsh, since it means wasting less money on punishments that were insufficiently punishing before hitting your Motivation Point. We went with, roughly, 3^x.

You answered this one yourself, but, yes, we're f
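The "roughly 3^x" pledge schedule mentioned above can be sketched as follows (a toy illustration; the function name and exact amounts are assumptions, not Beeminder's actual ladder):

```python
def pledge_schedule(base=5, factor=3, steps=6):
    """Illustrative exponential pledge ladder: each failed attempt
    roughly triples the amount at risk (base * factor**k)."""
    return [base * factor ** k for k in range(steps)]

print(pledge_schedule())  # [5, 15, 45, 135, 405, 1215]
```

The point of the steep growth is that the ladder quickly finds each person's Motivation Point while wasting little money on pledges too small to bite.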

It would actually be worthwhile to post a small analysis of Lifeboat. How they meet the crank checklist, etc. Do they do anything other than name drop on their website, etc?

4gwern
The filings said they were spending a lot on publishing materials, but didn't say what exactly they were publishing on.

Hiring Luke full time would be an excellent choice for the SIAI. I spent time with Luke at mini-camp and can provide some insight.

  • Luke is an excellent communicator and agent for the efficient transmission of ideas. More importantly, he has the ability to teach these skills to others. Luke has shown this skill publicly on Less Wrong and also on his blog, with this distilled analysis of Eliezer's writing "Reading Yudkowsky."

  • Luke is a genuine modern day renaissance man, a true polymath. However, Luke is very self-aware of his limitations and ha

... (read more)
1taryneast
Do you have links handy? :)

Is "transform function" a technical term from some discipline I'm unfamiliar with? I interpret your use of that phrase as "operation on some input that results in corresponding output." I'm having trouble finding meaning in your post that isn't redefinition.

Here is another question, regarding the basic methodology of study. When you are reading a scholastic work and you encounter an unfamiliar concept, do you stop to identify the concept, or continue but add the concept to a list to be pursued later? In other words, do you queue the concept for later inspection, or do you 'step into' it for immediate inspection?

I expect the answer to be conditional, but knowing what conditions is useful. I find myself sometimes falling down the rabbit hole of chasing chained concepts. Wikipedia makes this mistake easy.

0Gray
Adding to the tangent: in my opinion, the concepts of scholastic philosophy are actually incredibly useful for rationality in general. They usually end up being logic terms, and they are employed well outside their original context, even in modern works. A lot of the time, for example, when you read an argument and understand there is something wrong with it but have a hard time putting your finger on what, there's typically some scholastic term that will nail it for you. The scholastics were incredibly subtle, and are typically the ones ridiculed when the expression "splitting hairs" comes to the fore. But usually that ridicule comes from people who aren't subtle, and who don't realize that the distinctions are incredibly important.
1lukeprog
It depends on whether the concept appears to be necessary to my understanding of what I care about or not. Sorry I can't give an example right now.

Here's a question: does learning to read faster provide a net marginal benefit to the pursuit of scholarship? Are there narrow, focused, and confirmed methods of learning to read faster that yield positive results? This would be beneficial to everyone, but perhaps more so to those of us with full-time jobs outside scholarship.

2David_Gerard
It helps with skimming material that isn't very dense, which would be approximately none of what this post is about. If it's comprehension you're after, do a skim followed by slow reading. This is work, but is more likely to gain you understanding.
260lukeprog

I've never had success with 'speed reading' in a way that allows me to consume more words per minute while keeping the same degree of retention and comprehension, especially for dense scholarly material.

Efficient scholarship benefits much more, I think, from learning to be strategic and have good intuitions about what to read - on the level of fields of knowledge, on the level of books and articles, and on the level of paragraphs within books and articles. I've been doing something like what I described in this post for at least two years and I have the impress... (read more)

Grunching. (Responding to the exercise/challenge without reading other people's responses first.)

Letting go is important. A failure of letting go is to keep professing belief in a thing you have come not to believe, because admitting the change involves pain. An example of this failure: I suggest a solution to a pressing design problem. Through conversation, it becomes apparent to me that my suggested solution is unworkable or has undesirable side effects. I realize the suggestion is a failure, but defend it to protect my identity as an authority ... (read more)

1Charlie_OConnor
I think your scenario is good. I think the group dynamic and individual personality determine when this is easy and when it is difficult. I have been in groups where it is easy to admit mistakes and move on; and I have been in groups where admitting a mistake feels like you are no longer part of the group. So this can be realistic. I find taking the approach of admitting mistakes often helps others follow the same path, and leads to a better group dynamic.
1Cayenne
Don't cherish being right, instead cherish finding out that you're wrong. You learn when you're wrong. Edit - please disregard this post

I donated $275 to the SIAI via the Facebook page. Given the flight prices on Orbitz, this should cover somebody. Maybe not an east coaster or someone overseas.

Pledge fulfilled!

Also: I will be attending mini-camp and have also gotten my own ticket.

I would be willing to do this work, but I need some "me" time first. The SIAI post took a bunch of spare time and I'm behind on my guitar practice. So let me relax a bit and then I'll see what I can find. I'm a member of Alcor and John is a member of CI and we've already noted some differences so maybe we can split up that work.

5jsalvatier
I might be willing to do this, but I am somewhat reluctant because I feel like it might be emotionally taxing. I would, however, be very enthusiastic about sponsoring someone else's work, and willing to invest a substantial amount. I'm not sure how to go about arranging that, though.

He is full time. According to the filings he reports 40 hours of work for the SIAI. (Form 990 2009, Part VII, Section A -- Page 7).

8PhilGoetz
From the fact that I can't talk to Michael on the phone for more than 10 minutes without another call coming in, I infer that he works more than 40 hours/week.
0wedrifid
I assume that is 40 hours of work per week.

"Michael Vassar's Persistent Problems Group idea does need funding, though it may or may not operate under the SIAI umbrella."

It sounds like they have a similar concern.

I agree, this doesn't deserve to be downvoted.

It should be possible for the SIAI to build security measures while also providing some transparency into the nature of that security in a way that doesn't also compromise it. I would bet that Eliezer has thought about this, or at least thought about the fact that he needs to think about it in more detail. This would be something to look into in a deeper examination of SIAI plans.

At this point an admin should undelete the original SIAI Fundraising discussion post. I can't seem to do it myself. I can update it with a pointer to this post.

Thanks, I added a note to the text regarding this.

Yeah, I'll update it when the 2010 documents become available.

Added to the overview section.

I didn't know about that! I will update the post to use it as soon as I can. Thanks! Most of my work on this post was done by editing the HTML directly instead of using the WYSIWYG editor.

EDIT: All of the images are now hosted on lesswrong.

The older draft contains some misinformation. Much is corrected in the new version. I would prefer people use the new version.

I will donate the amount without earmarking it. It will fill the gap created by the cost of sending someone to the event.

I don't see a lot of value in earmarking funds for the SIAI. I'm working on a document about SIAI finances and from reading the Form 990s I believe they use their funds efficiently. Given my low knowledge of their internal workings and low knowledge of their immediate and medium term goals I would bet that they would be better at figuring out the best use of the money than I would be. Earmarking would increase the chance the money is used inefficiently, not decrease it.

Yes. In general, earmarking is a hideous pain in the backside for charities and leads to great inefficiency thinking about how to deal with this radioactive donation. If the donation is sufficiently large it may be worth it, but it's still a nuisance.

Simple heuristic: if you trust a charity enough to donate to them, just donate and leave them to figure out what to do with it. Don't try to micromanage.

Can everyone see all of the images? I received a report that some appeared broken.

0Rain
All of the images are blocked by my work internet filter. I can see them all at home.

Once I finish the todo at the top and get independent checking on a few things I'm not clear on, I can post it to the main section. I don't think there's value in pushing it to a wider audience before it's ready.

Zvi Mowshowitz! Wow, color me surprised. Zvi is a retired professional Magic player. I used to read his articles and follow his play. Small world.

9ArisKatsaris
Come now, it's probably a different Zvi Mowshowitz.
0drethelin
Did he retire since last year?
7gwern
Birthday paradox? Given a set of donor-list-readers and another set of donors, there's a better chance than one would expect that there's a commonality. :)

I'm also going to see if I can get a copy of the 2010 filing.

Edit: The 2002 and on data is now largely incorporated. Still working on a few bits. Don't have the 2010 data, but the SIAI hasn't necessarily filed it yet.

Fixed.

The section that led me to my error was 2009 III 4c. The amount listed as expenses is $83,934 where your salary is listed in 2009 VII Ad as $95,550. The text in III 4c says:

"This year Eliezer Yudkowsky finished his posting sequences on Less Wrong [...] Now Yudkowsky is putting together his blog posts into a book on rationality. [...]"

This is listed next to two other service accomplishments (the Summit and Visiting Fellows).

If I had totaled the program accomplishments section I would have seen that I was counting some money twice (and also noticed that the total in this field doesn't feed back into the main sheet's results).

Please accept my apology for the confusion.

9Eliezer Yudkowsky
Hm. $95K still sounds too high, but if I recall correctly, owing to a screwup in our payments processor at that time, my salary for the month of January 2010 was counted into the 2009 tax year instead of 2010. No apology is required; you wrote without malice.

I -- thoughtlessly -- hadn't considered donating to the SIAI as a matter of course until recently (helped do a fund raiser for something else through my company and this made me think about it). Now reading the documentation on GuideStar has me thinking about it more...

Looking at the SIAI filings, I'd be interested in knowing more about the ~$118k that was misappropriated by a contractor (reported in 2009). I hadn't heard of that before. For an organization that raises less than or close to half a million a year, that's a painful blow.

Peter Thiel's contrib... (read more)

I applied to mini-camp. However, I may not be selected because of my personal situation (older, not college educated). I believe the mini-camp program is worth supporting and should be helped to be successful. I am willing to back up this belief with my wallet...and in public, so you all can hold me to it.

Whether or not I am selected, I pledge to pay for the flight of one individual who is (and who isn't me). This person must live in the continental United States.

If the easiest way to fulfill this pledge is to donate to the SIAI, earmarked for this purpos... (read more)

You're spending after-tax money if you buy the flight yourself, but before-tax money if you donate to SIAI, assuming they're a 501(c)(3). If you trust them to honor a targeted donation (I would), it's better to donate.
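The arithmetic behind that can be sketched as follows (illustrative only: the 25% marginal rate is an assumed figure, and this ignores itemization thresholds and other tax details):

```python
def net_cost_of_donation(amount, marginal_rate):
    """Out-of-pocket cost of a tax-deductible donation for a donor
    who itemizes at the given marginal rate."""
    return amount * (1 - marginal_rate)

# Buying a $275 flight directly costs the full $275; donating $275
# to a 501(c)(3) and deducting it costs less out of pocket.
print(net_cost_of_donation(275, 0.25))  # 206.25
```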
