What is the Main/Discussion distinction, and what should it be?

19 ChrisHallquist 30 December 2013 05:09AM

Near the beginning of this year Wei Dai asked why certain people don't post to LessWrong more often, and Yvain replied that:

Less Wrong requires no politics / minimal humor / definitely unambiguously rationality-relevant / careful referencing / airtight reasoning (as opposed to a sketch of something which isn't exactly true but points to the truth.) This makes writing for Less Wrong a chore as opposed to an enjoyable pastime.

But Kaj disagreed that this was the actual standard:

I agree with the "no politics" bit, but I don't think the rest are correct. I've certainly had "sketch of something that isn't quite true but points in the right direction" posts with no references and unclear connections to rationality promoted before (example), as well as ones plastered with unnecessary jokes (example).

This raises two questions: what is the real standard, and what should the standard be?

On the one hand, it's not clear Yvain is right; on the other hand, if he is right on the factual question, that standard seems way too high to me. It would suggest that, as John Maxwell says in the same thread, "The overwhelming LW moderation focus seems to be on stifling bad content. There's very little in place to encourage good content."

The wiki sort-of answers the factual question:

These traditionally go in Discussion:

  • a link with minimal commentary 
  • a question or brainstorming opportunity for the Less Wrong community

Beyond that, here are some factors that suggest you should post in Main:

  • Your post discusses core Less Wrong topics. 
  • The material in your post seems especially important or useful. 
  • You put a lot of thought or effort into your post. (Citing studies, making diagrams, and agonizing over wording are good indicators of this.) 
  • Your post is long or deals with difficult concepts. (If a post is in Main, readers know that it may take some effort to understand.) 
  • You've searched the Less Wrong archives, and you're pretty sure that you're saying something new and non-obvious.

But this isn't an entirely unambiguous answer: how many of the five "factors" does a post need to be in Main? Furthermore, it often seems that the "real" rules are significantly different from what the wiki says. Yvain's perception may be incorrect, but I think there were reasons why he (and presumably the people who upvoted his comment) had that perception. Also, Eliezer recently explained that:

Whenever a non-meta post stays under 5, I always feel free to move it to Discussion, especially if an upvoted comment has also suggested it. I don't always, but often do.

This makes me wonder what other poorly-publicized rules there are in this vicinity.

As for what the rules should be, I'm going to limit myself to two general suggestions:

  • The standard for posting in Main should not be so high that it makes posting at LessWrong feel like a chore, thereby chasing away good contributors like Yvain.
  • The standard should not be so high that it would force any significant portion of Eliezer's original sequences off into Discussion.

Finally, whatever standard we settle on, I think it's really important that we make it clearer to people what it is. Aside from the obvious benefits of doing that, I've found that trying to navigate the unclear Main/Discussion distinction is itself often enough to make blogging at LessWrong feel like a chore.

Edited to add: In terms of karma I'm currently the top contributor for the past 30 days on LessWrong by a wide margin. I managed this in spite of the fact that I'm in the middle of doing App Academy and have no time (this past week has been an exception because of vacation). I take this not as evidence of how awesome I am, but as evidence that way too little quality content is being posted in Main. 

Critiquing Gary Taubes, Part 3: Did the US Government Give Us Absurd Advice About Sugar?

4 ChrisHallquist 30 December 2013 12:58AM

Previously: Mainstream Nutrition Science on Obesity, Atkins Redux

Here's where I start talking about the thing that initially drove me to write this post series: Taubes' repeated misrepresentation of the views of the mainstream nutrition authorities he attacks. I'll start by going back to Taubes' 2002 article. Immediately after the discussion of Atkins, it contains another set of claims that stood out to me as a huge red flag: 

Thirty years later, America has become weirdly polarized on the subject of weight. On the one hand, we've been told with almost religious certainty by everyone from the surgeon general on down, and we have come to believe with almost religious certainty, that obesity is caused by the excessive consumption of fat, and that if we eat less fat we will lose weight and live longer. On the other, we have the ever-resilient message of Atkins and decades' worth of best-selling diet books, including ''The Zone,'' ''Sugar Busters'' and ''Protein Power'' to name a few. All push some variation of what scientists would call the alternative hypothesis: it's not the fat that makes us fat, but the carbohydrates, and if we eat less carbohydrates we will lose weight and live longer.

The perversity of this alternative hypothesis is that it identifies the cause of obesity as precisely those refined carbohydrates at the base of the famous Food Guide Pyramid -- the pasta, rice and bread -- that we are told should be the staple of our healthy low-fat diet, and then on the sugar or corn syrup in the soft drinks, fruit juices and sports drinks that we have taken to consuming in quantity if for no other reason than that they are fat free and so appear intrinsically healthy. While the low-fat-is-good-health dogma represents reality as we have come to know it, and the government has spent hundreds of millions of dollars in research trying to prove its worth, the low-carbohydrate message has been relegated to the realm of unscientific fantasy.

I'll start with the obvious: We thought sugary soft drinks were intrinsically healthy? To quote an old joke: what do you mean "we," kemosabe? Given widespread scientific illiteracy, I wouldn't be surprised if some people have believed that low-fat is a sufficient condition for being healthy, but if so, they didn't get this idea from mainstream nutrition science.


Critiquing Gary Taubes, Part 2: Atkins Redux

6 ChrisHallquist 30 December 2013 12:58AM

Previously: Mainstream Nutrition Science on Obesity

Edit: In retrospect, I think I maybe should have combined this post with part 3. Unfortunately, the problem of what to do with existing comments makes that hard to fix now.

Taubes first made a name for himself as a low-carb advocate in 2002 with a New York Times article titled "What if It's All Been a Big Fat Lie?" When I first read this article, I was getting extremely suspicious by the second paragraph (emphasis added):

If the members of the American medical establishment were to have a collective find-yourself-standing-naked-in-Times-Square-type nightmare, this might be it. They spend 30 years ridiculing Robert Atkins, author of the phenomenally-best-selling ''Dr. Atkins' Diet Revolution'' and ''Dr. Atkins' New Diet Revolution,'' accusing the Manhattan doctor of quackery and fraud, only to discover that the unrepentant Atkins was right all along. Or maybe it's this: they find that their very own dietary recommendations -- eat less fat and more carbohydrates -- are the cause of the rampaging epidemic of obesity in America. Or, just possibly this: they find out both of the above are true.

When Atkins first published his ''Diet Revolution'' in 1972, Americans were just coming to terms with the proposition that fat -- particularly the saturated fat of meat and dairy products -- was the primary nutritional evil in the American diet. Atkins managed to sell millions of copies of a book promising that we would lose weight eating steak, eggs and butter to our heart's desire, because it was the carbohydrates, the pasta, rice, bagels and sugar, that caused obesity and even heart disease. Fat, he said, was harmless.

Atkins allowed his readers to eat ''truly luxurious foods without limit,'' as he put it, ''lobster with butter sauce, steak with béarnaise sauce . . . bacon cheeseburgers,'' but allowed no starches or refined carbohydrates, which means no sugars or anything made from flour. Atkins banned even fruit juices, and permitted only a modicum of vegetables, although the latter were negotiable as the diet progressed.

It's one thing to claim that, all else equal, low-carb diets have advantages over low-fat diets. It's another thing to claim you can eat unlimited amounts of fatty foods without gaining weight.


Donating to MIRI vs. FHI vs. CEA vs. CFAR

18 ChrisHallquist 27 December 2013 03:43AM

In a discussion a couple months ago, Luke said, "I think it's hard to tell whether donations do more good at MIRI, FHI, CEA, or CFAR." So I want to have a thread to discuss that.

My own very rudimentary thoughts: I think the research MIRI does is probably valuable, but I don't think it's likely to lead to MIRI itself building FAI. I'm convinced AGI is much more likely to be built by a government or major corporation, which makes me more inclined to think movement-building activities (which MIRI isn't doing) are likely to be valuable, since they increase the odds that the people at that government or corporation will be conscious of AI safety issues.

It seems like FHI is the obvious organization to donate to for that purpose, but Luke seems to think CEA (the Centre for Effective Altruism) and CFAR could also be good for that, and I'm not entirely clear on why. I sometimes get the impression that some of CFAR's work ends up being covert movement-building for AI-risk issues, but I'm not sure to what extent that's true. I know very little about CEA, and a brief check of their website leaves me a little unclear on why Luke recommends them, aside from the fact that they apparently work closely with FHI.

This has some immediate real-world relevance to me: I'm currently in the middle of a coding bootcamp and not making any money, but today my mom offered to make a donation to a charity of my choice for Christmas. So any input on what to tell her would be greatly appreciated, as would more information on CFAR and CEA, which I'm sorely lacking in.

Critiquing Gary Taubes, Part 1: Mainstream Nutrition Science on Obesity

13 ChrisHallquist 25 December 2013 06:27PM

Related: Trusting Expert Consensus

Lately, I've been thinking a lot about whether we can find any clear exceptions to the general "trust the experts (when they agree)" heuristic. One example that keeps coming up—at least on LessWrong and related blogs—is Gary Taubes' claims about mainstream nutrition experts allegedly getting obesity horribly wrong.

Taubes is probably best-known for his book Good Calories, Bad Calories. I'd previously had a mildly negative impression of him from discussion on Yvain's old blog, particularly some of the other posts Yvain and others linked from there, such as this discussion of Taubes' "carbohydrate hypothesis" and especially this discussion of Taubes' attempt to refute the standard calories-in/calories-out model of weight.

But I figured maybe the criticism of Taubes I'd read hadn't been fair to him, so I decided to read him for myself... and holy crap, Taubes turned out to be far worse than I expected. I decided to write a post explaining why, and then realized that, even if I were somewhat selective about the issues I focused on, I had enough material for a whole series of posts, which I'll be posting over the course of the next week.

The problem with Taubes is not that everything he says is wrong. Much of it is ludicrously wrong, but that's only one half of the problem. The other half is that he says a fair number of things mainstream nutrition science would agree with, but then hides this fact, and instead pretends those things are a refutation of mainstream nutrition science. So it's worth starting with a brief in-a-nutshell version of what mainstream nutrition science actually says about obesity.

(The following summary is drawn from a number of sources, including this, this, and this. Everything I'm about to say will be discussed in much greater detail in subsequent posts.)

Here it goes: people gain weight when they consume more calories than they burn. But both calorie intake and calorie expenditure are regulated by complicated mechanisms we don't fully understand yet. This means the causes of overweight and obesity* are also complicated and not fully understood. It is, however, worth watching out for foods with lots of added fat and sugar, if only because they're an easy way to consume way too many calories.

We currently don't have any great solutions to the problem of overweight and obesity. If you consume fewer calories than you burn, you will lose weight, but sticking to a diet is hard. It's relatively easy to lose weight in the short run, and it's possible to do so on a wide variety of diets, but only a small percentage of people keep the weight off over the long run.

As for low-carb diets, people do lose weight on them, but they do so because low-carb diets generally lead people to restrict their calorie intake even when they aren't actively counting calories. For one thing, it's hard to consume as many calories when you drastically restrict the range of foods you can eat. There's also some evidence that low-carb diets may have some advantages in terms of, say, warding off hunger, but the evidence is mixed. There's certainly no basis for claiming low-carb diets are a magic bullet for the problems of overweight and obesity.

The above points are not the only issues at stake in Taubes' writings on nutrition. Admittedly, he covers a huge amount of ground, from the relationship between sugar and diabetes to the relationship between fat intake and heart disease to the alleged dangers of extremely-low carbohydrate diets. However, I'll be focusing on his claims about the causes of and solutions to the problems of overweight and obesity, because that seems to be the main thing people talk about when they talk about Taubes supposedly showing how wrong mainstream experts can be.

I'll also focus heavily on how Taubes misrepresents the views of mainstream experts on obesity. In the next post, though, I'll be temporarily setting that issue aside in order to look at what Taubes is proposing as an alternative. This will involve examining some claims made by Dr. Robert Atkins, whose ideas Taubes champions.

*Note: if the use of "overweight" as a noun sounds weird to you, it does to me too, but I discovered as I researched this article that it's standard usage in the literature on the subject. I came to realize there's a good reason for this usage: it's inaccurate to talk about the problem solely in terms of "obesity," but constantly saying "the problem of people being overweight and obese" gets really wordy.

Next: Atkins Redux

The Statistician's Fallacy

38 ChrisHallquist 09 December 2013 04:48AM

[Epistemic status | Contains generalization based on like three data points.]

In grad school, I took a philosophy of science class that was based around looking for examples of bad reasoning in the scientific literature. The kinds of objections to published scientific studies we talked about were not stupid ones. The professor had a background in statistics, and as far as I could tell knew her stuff in that area (though she dismissed Bayesianism in favor of frequentism). And no, unlike some of the professors in the department, she wasn't an anti-evolutionist or anything like that.

Instead she was convinced that cellphones cause cancer, in spite of the fact that there's scant evidence for that claim and no plausible physical mechanism for how it could happen. She held this along with a number of other borderline-fringe beliefs that I won't get into here, but it was the big screaming red flag.*

Over the course of the semester, I got a pretty good idea of what was going on. She had an agenda—it happened to be an environmentalist, populist, pro-"natural"-things agenda, but that's incidental. The problem was that when she saw a scientific study that seemed at odds with her agenda, she went looking for flaws. And often she could find them! Real flaws, not ones she was imagining! But people who've read the rationalization sequence will see a problem here...

In my last post, I quoted Robin Hanson on the tendency of some physicists to be unduly dismissive of other fields. But based on the above case and a couple others like it, I've come to suspect statistics may be even worse than physics in that way: fluency in statistics sometimes causes a supercharged sophistication effect.

For example, some anthropogenic global warming skeptics make a big deal of alleged statistical errors in global warming research, but as I wrote in my post Trusting Expert Consensus:

Michael Mann et al's so-called "hockey stick" graph has come under a lot of fire from skeptics, but (a) many other reconstructions have reached the same conclusion and (b) a panel formed by the National Research Council concluded that, while there were some problems with Mann et al's statistical analysis, these problems did not affect the conclusion. Furthermore, even if we didn't have the pre-1800 reconstructions, I understand that given what we know about CO2's heat-trapping properties, and given the increase in atmospheric CO2 levels due to burning fossil fuels, it would be surprising if humans hadn't caused significant warming.

Most recently, I got into a Twitter argument with someone who claimed that "IQ is demonstrably statistically meaningless" and that this was widely accepted among statisticians. Not only did this set off my "academic clique!" alarm bells, but I'd just come off doing a spurt of reading about intelligence, including the excellent Intelligence: A Very Short Introduction. The claim that IQ is meaningless was wildly contrary to what I understood was the consensus among people who study intelligence for a living.

In response to my surprise, I got an article that contained lengthy and impressive-looking statistical arguments... but completely ignored a couple key points from the intelligence literature I'd read: first, that there's a strong correlation between IQ and real-world performance, and second that correlations between the components of intelligence we know how to test for turn out to be really strong. If IQ is actually made up of several independent factors, we haven't been able to find them. Maybe some people in intelligence research really did make the mistakes alleged, but there was more to intelligence research than the statistician who wrote the article let on.

It would be fair to shout a warning about correspondence bias before inferring anything from these cases. But consider two facts:

  1. Essentially all scientific fields rely heavily on statistics.
  2. There's a lot more to mastering a scientific discipline than learning statistics, which limits how well most scientists will ever master statistics.

The first fact may make it tempting to think that if you know a lot of statistics, you're in a privileged position to judge the validity of any scientific claim you come across. But the second fact means that if you've specialized in statistics, you'll probably be better at it than most scientists, even good scientists. So if you go scrutinizing their papers, there's a good chance you'll find clear mistakes in their stats, and an even better chance you'll find arguable ones.

Bayesians will realize that, since there's a good chance of that happening even when the conclusion is correct and well-supported by the evidence, finding mistakes in the statistics is only weak evidence that the conclusion is wrong. Call it the statistician's fallacy: thinking that finding a mistake in the statistics is sufficient grounds to dismiss a finding.
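The Bayesian point can be made concrete with a toy calculation. All the probabilities below are made-up numbers chosen purely for illustration: the assumption is just that a determined statistician is only somewhat more likely to find a flaw in a paper whose conclusion is wrong than in one whose conclusion is right.

```python
# Toy Bayesian update: how much should "I found a statistical mistake"
# shift our belief that the conclusion is wrong?
# All numbers are illustrative assumptions, not estimates from any study.

prior_wrong = 0.2          # prior probability the conclusion is wrong
p_flaw_if_wrong = 0.9      # chance of finding a flaw when the conclusion is wrong
p_flaw_if_right = 0.6      # chance of finding a flaw even when it's right

# Bayes' theorem: P(wrong | flaw found)
evidence = (prior_wrong * p_flaw_if_wrong
            + (1 - prior_wrong) * p_flaw_if_right)
posterior_wrong = prior_wrong * p_flaw_if_wrong / evidence

print(round(posterior_wrong, 3))  # 0.273
```

Finding a flaw moves the probability that the conclusion is wrong only from 0.20 to about 0.27, because the likelihood ratio (0.9 vs. 0.6) is close to 1: the observation is nearly as expected under either hypothesis.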

Oh, if you're dealing with a novel finding that experts in the field aren't sure what to make of yet, and the statistics turn out to be wrong, then that may be enough. You may have better things to do than investigate further. But when a solid majority of the experts agree on a conclusion, and you see flaws in their statistics, I think the default assumption should be that they still know the issue better than you and very likely the sum total of the available evidence does support the conclusion. Even if the specific statistical arguments you've seen from them are wrong.

*Note: I've done some Googling to try to find rebuttals to this link, and most of what I found confirms it. I did find some people talking about multi-photon effects and heating, but couldn't find defenses of these suggestions that rise beyond people saying, "well there's a chance."

The Limits of Intelligence and Me: Domain Expertise

28 ChrisHallquist 07 December 2013 08:23AM

Related to: Trusting Expert Consensus

In the sequences, Eliezer tells the story of how in childhood he fell into an affective death spiral around intelligence. In his story, his mistakes were failing to understand until he was much older that intelligence does not guarantee morality, and that very intelligent people can still end up believing crazy things because of human irrationality.

I have my own story about learning the limits of intelligence, but I ended up learning a very different lesson than the one Eliezer learned. It also started somewhat differently. It involved no dramatic death spiral, just being extremely smart and knowing it from the time I was in kindergarten. To the point that I grew up with the expectation that, when it came to doing anything mental, sheer smarts would be enough to make me crushingly superior to all the other students around me and many of the adults.

In Harry Potter and the Methods of Rationality, Harry complains of having once had a math teacher who didn't know what a logarithm is. I wonder if this is autobiographical on Eliezer's part. I have an even better story, though: in second grade, I had a teacher who insisted there was no such thing as negative numbers. The experience of knowing I was right about this, when the adult authority figure was so very wrong, was probably not good for my humility.

But such brushes with stupid teachers probably weren't the main thing that drove my early self-image. It was enough to be smarter than the other kids around me, and know it. Looking back, there's little that seems worth bragging about. I learned calculus at age 15, not age 8. But that was still younger than any of the other kids I knew took calculus (if they took it at all). And knowing I didn't know any other kids as smart as me did funny things to my view of the world.

I'm honestly not sure I realized there were any kids in the whole world smarter than me until sophomore year, when I qualified to go to a national-level math competition. That was something that no one else at my high school managed to do, not even the seniors... but at the competition itself, I didn't do particularly well. It was one of the things that made me realize that I wasn't, in fact, going to be the next Einstein. But all I took from the math competition was that there were people smarter than me in the world. It didn't, say, occur to me that maybe some of the other competitors had spent more time practicing really hard math problems.

Eliezer once said, "I think I should be able to handle damn near anything on the fly." That's a pretty good description of how I felt at this point in my life. At least as long as we were talking about mental challenges and not sports, and assuming I wasn't going up against someone smarter than myself.

I think my first memory of getting some inkling that maybe sufficient intelligence wouldn't lead to automatically being the best at everything comes from... *drum roll* ...playing Starcraft. I think it was probably junior or senior year that I got into the game, and at first I just did the standard campaign playing against the computer, but then I got into online play, and promptly got crushed. And not just by one genius player I encountered on a fluke, but in virtually every match.

This was a shock. I mean, I had friends who could beat me at Super Smash Bros, but Starcraft was a strategy game, which meant it should be like chess, and I'd never had any trouble beating my friends at chess. Sure, when I'd gone to local chess tournaments back in grade school, I'd gotten soundly beat by many of the older players then, but it's not like I'd ever expected all older people to be as stupid as my second grade teacher. But by the time I'd gotten into Starcraft, I was almost an adult, so what was going on?

The answer of course was that most of the other people playing online had played a hell of a lot more Starcraft than me. Also, I'd thought I'd figured out the game designers' design philosophy (I hadn't), which had led me to make all kinds of incorrect assumptions about the game, assumptions which I could have found out were false if I'd tested them, or (probably) if I'd just looked for an online guide that reported the results of other people's tests.

It all sounds very silly in retrospect, and it didn't change my worldview overnight. But it was (among?) the first of a series of events that made me realize that trying to master something just by thinking about it tends to go badly wrong. That when untrained brilliance goes up against domain expertise, domain expertise will generally win.

A whole bunch of caveats here. I'm not denying that being smart is pretty awesome. As a smart person, I highly recommend it. And acquiring domain expertise requires a certain minimum level of intelligence, which varies from field to field. It's only once you get beyond that minimum that more intelligence doesn't help as much as expertise. Finally, I'm talking about human-scale intelligence here: the gap between the village idiot and Einstein is tiny compared to the gap between Einstein and possible superintelligences, so maybe a superintelligence could school any human expert in anything without acquiring any particular domain expertise.

Still, when I hear Eliezer say he thinks he should be able to handle anything on the fly, it strikes me as incredibly foolish. And I worry when I see fellow smart people who seem to think that being very smart and rational gives them grounds to dismiss other people's domain expertise. As Robin Hanson has said:

I was a physics student and then a physics grad student. In that process, I think I assimilated what was the standard worldview of physicists, at least as projected on the students. That worldview was that physicists were great, of course, and physicists could, if they chose to, go out to all those other fields, that all those other people keep mucking up and not making progress on, and they could make a lot faster progress, if progress was possible, but they don’t really want to, because that stuff isn’t nearly as interesting as physics is, so they are staying in physics and making progress there...

Surely you can look at some little patterns but because you can’t experiment on people, or because it’ll be complicated, or whatever it is, it’s just not possible. Partly, that’s because they probably tried for an hour, to see what they could do, and couldn’t get very far. It’s just way too easy to have learned a set of methods, see some hard problem, try it for an hour, or even a day or a week, not get very far, and decide it’s impossible, especially if you can make it clear that your methods definitely won’t work there. You don’t, often, know that there are any other methods to do anything with because you’ve learned only certain methods...

As one of the rare people who have spent a lot of time learning a lot of different methods, I can tell you there are a lot out there. Furthermore, I’ll stick my neck out and say most fields know a lot. Almost all academic fields where there’s lots of articles and stuff published, they know a lot.

(For those who don't know: Robin spent time doing physics, philosophy, and AI before landing in his current field of economics. When he says he's spent a lot of time learning a lot of different methods, it isn't an idle boast.)

Finally, what about the original story that Eliezer says set off his original childhood death spiral around intelligence?:

My parents always used to downplay the value of intelligence. And play up the value of—effort, as recommended by the latest research? No, not effort. Experience. A nicely unattainable hammer with which to smack down a bright young child, to be sure. That was what my parents told me when I questioned the Jewish religion, for example. I tried laying out an argument, and I was told something along the lines of: "Logic has limits, you'll understand when you're older that experience is the important thing, and then you'll see the truth of Judaism." I didn't try again. I made one attempt to question Judaism in school, got slapped down, didn't try again. I've never been a slow learner.

I think concluding experience isn't all that great is the wrong response here. Experience is important. The right response is to ask whether all older, more experienced people see the truth of Judaism. The answer of course is that they don't; a depressing number stick with whatever religion they grew up with (which usually isn't Judaism), a significant number end up non-believers, and a few convert to a new religion. But when almost everyone with a high level of relevant experience agrees on something, beware thinking you know better than them based on your superior intelligence and supposed rationality.

Edit: One thing I meant to include when I posted this but forgot: one effect of my experiences is that I tend to see domain expertise where other people see intelligence. See e.g. this old comment by Robin Hanson: are hedge fundies really that smart, or have they simply spent a lot of time learning to seem smart in conversation?

Open Thread, December 2-8, 2013

2 ChrisHallquist 03 December 2013 05:10AM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

According to Dale Carnegie, You Can't Win an Argument—and He Has a Point

61 ChrisHallquist 30 November 2013 06:23AM

Related to: Two Kinds of Irrationality and How to Avoid One of Them

When I was a teenager, I picked up my mom's copy of Dale Carnegie's How to Win Friends and Influence People. One of the chapters that most made an impression on me was titled "You Can't Win an Argument," in which Carnegie writes:

Nine times out of ten, an argument ends with each of the contestants more firmly convinced than ever that he is absolutely right.

You can’t win an argument. You can’t because if you lose it, you lose it; and if you win it, you lose it. Why? Well, suppose you triumph over the other man and shoot his argument full of holes and prove that he is non compos mentis. Then what? You will feel fine. But what about him? You have made him feel inferior. You have hurt his pride. He will resent your triumph. And -

"A man convinced against his will 

"Is of the same opinion still."

In the next chapter, Carnegie quotes Benjamin Franklin saying how he had made it a rule never to contradict anyone. Carnegie approves: he thinks you should never argue with or contradict anyone, because you won't convince them (even if you "hurl at them all the logic of a Plato or an Immanuel Kant"), and you'll just make them mad at you.

It may seem strange to hear this advice cited on a rationalist blog, because the atheo-skeptico-rational-sphere violates this advice on a routine basis. In fact I've never tried to follow Carnegie's advice—and yet, I don't think the rationale behind it is completely stupid. Carnegie gets human psychology right, and I fondly remember reading his book as the time I first really got clued in about human irrationality.


Meetup : San Francisco / App Academy meetup [LOCATION CHANGE]

0 ChrisHallquist 29 November 2013 05:21PM

Discussion article for the meetup : San Francisco / App Academy meetup [LOCATION CHANGE]

WHEN: 07 December 2013 07:00:00PM (-0800)

WHERE: Olivos Restaurant 1017 Larkin Street San Francisco, CA 94109

I've recently arrived in San Francisco for App Academy, and it turns out there are several other LessWrongers in the program. It's a cool group of people, including a guy who studied AIXI at ANU under Marcus Hutter. We talked it over and decided to organize our own meetup at Olivos, a restaurant that's within 20 minutes' walking distance of the App Academy office. We'll be discussing Brian Tomasik's essay The Importance of Wild-Animal Suffering. Please read it ahead of time; it's short. The intent is for people to be able to get food and/or drinks if they want to, but it's not assumed that everyone will. RSVPs are appreciated so we can make a reservation, but we'll try to save a couple seats for any extra people who show up. EDIT: After talking amongst ourselves, we decided to change the choice of restaurant.

