Comment author: lukeprog 28 April 2014 11:09:02PM 31 points [-]

I got a job via an internship via LW, and definitely lots of friends. Louie and I made online contact via LW, and he convinced me to intern with SIAI, and later I was offered a paying job there.

Comment author: ChrisHallquist 29 April 2014 04:27:29AM 24 points [-]

I love how understated this comment is.

Comment author: ChrisHallquist 29 April 2014 04:26:23AM -1 points [-]

People voluntarily hand over a bunch of resources (perhaps to a bunch of different AIs) in the name of gaining an edge over their competitors, or possibly for fear of their competitors doing the same thing to gain such an edge. Or just because they expect the AI to do it better.

Comment author: Jayson_Virissimo 08 April 2014 06:06:03PM *  11 points [-]

App Academy has been discussed here before and several Less Wrongers have attended (such as ChrisHallquist, Solvent, Curiouskid, and Jack).

I am considering attending myself during the summer and am soliciting advice pertaining to (i) maximizing my chance of being accepted to the program and (ii) maximizing the value I get out of my time in the program, given that I am accepted. Thanks in advance.

EDIT: I ended up applying and just completed the first coding test. Wasn't too difficult. They give you 45 minutes, but I only needed < 20.

EDIT2: I have reached the interview stage. Thanks everyone for the help!

EDIT3: Finished the interview. Now awaiting AA's decision.

EDIT4: Yet another interview scheduled... this time with Kush Patel.

EDIT5: Got an acceptance e-mail. Decision time...

EDIT6: Am attending the August cohort in San Francisco.

Comment author: ChrisHallquist 09 April 2014 08:03:31PM 3 points [-]

Maximizing your chances of getting accepted: Not sure what to tell you. It's mostly about the coding questions, and the coding questions aren't that hard—"implement bubble sort" was one of the harder ones I got. At least, I don't think that's hard, but some people would struggle to do that. Some people "get" coding, some don't, and it seems to be hard to move people from one category to another.
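For reference, the "implement bubble sort" question mentioned above has a roughly ten-line answer. A minimal Python sketch (illustrative only; App Academy's actual expected solution may differ):

```python
def bubble_sort(items):
    """Repeatedly swap adjacent out-of-order pairs until no swaps remain."""
    items = list(items)  # work on a copy rather than mutating the caller's list
    n = len(items)
    for i in range(n):
        swapped = False
        # After pass i, the last i elements are already in their final place.
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
                swapped = True
        if not swapped:  # early exit: a pass with no swaps means it's sorted
            break
    return items
```

The early-exit flag is the usual small refinement interviewers like to see mentioned.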

Maximizing value given that you are accepted: Listen to Ned. I think that was the main piece of advice people from our cohort gave people in the incoming cohort. Really. Ned, the lead instructor, knows what he's doing, and really cares about the students who go through App Academy. And he's seen what has worked or not worked for people in the past.

(I might also add, based on personal experience, "don't get cocky about the assessments." Also "get enough sleep," and should you end up in a winter cohort, "if you go home for Christmas, fly back a day earlier than necessary.")

Comment author: peter_hurford 28 March 2014 10:56:58PM 1 point [-]

for example, can we trust 80,000 Hours' estimates of the multiplier on giving to Give Well and GWWC? Might other organizations (such as the Centre for Effective Altruism, which is behind 80,000 Hours) be more effective at movement-building?

I'm pretty sure that 80K believes that a donation to CEA is the best for movement building.

Comment author: ChrisHallquist 29 March 2014 04:08:30AM 0 points [-]

Presumably. The question is whether we should accept that belief of theirs.

Comment author: Jiro 17 March 2014 07:58:18AM *  1 point [-]

The past 80+ comments from me have all had at least one downvote. There is no reasonable way to interpret this other than as having a stalker.

And the solution to how not to catch false positives is to use some common sense. You're never going to have an automated algorithm that can detect every instance of abuse, but even an instance that is not detectable by automatic means can be detectable if someone with sufficient database access takes a look when it is pointed out to them.

Comment author: ChrisHallquist 17 March 2014 08:51:55PM -2 points [-]

And the solution to how not to catch false positives is to use some common sense. You're never going to have an automated algorithm that can detect every instance of abuse, but even an instance that is not detectable by automatic means can be detectable if someone with sufficient database access takes a look when it is pointed out to them.

Right on. The solution to karma abuse isn't some sophisticated algorithm. It's extremely simple database queries, in plain English along the lines of "return list of downvotes by user A, and who was downvoted," "return downvotes on posts/comments by user B, and who cast the vote," and "return lists of downvotes by user A on user B."
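Those queries really are one-liners. A sketch against a hypothetical `votes` table (the actual LW schema isn't public, so the table and column names here are purely illustrative):

```python
import sqlite3

# Hypothetical schema for illustration: voter, target, and vote direction.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE votes (voter TEXT, target TEXT, direction INTEGER)")
conn.executemany(
    "INSERT INTO votes VALUES (?, ?, ?)",
    [("A", "B", -1), ("A", "B", -1), ("A", "C", -1), ("D", "B", 1)],
)

# "Return list of downvotes by user A, and who was downvoted."
downvotes_by_a = conn.execute(
    "SELECT target, COUNT(*) FROM votes "
    "WHERE voter = 'A' AND direction = -1 GROUP BY target"
).fetchall()

# "Return lists of downvotes by user A on user B."
a_on_b = conn.execute(
    "SELECT COUNT(*) FROM votes "
    "WHERE voter = 'A' AND target = 'B' AND direction = -1"
).fetchone()[0]
```

A lopsided count in the second query, checked by a human with database access, is exactly the "common sense" detection being described.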

Comment author: [deleted] 09 March 2014 06:54:29PM 5 points [-]

Ah, of course, because it's more important to signal one's pure, untainted epistemic rationality than to actually get anything done in life, which might require interacting with outsiders.

In response to comment by [deleted] on Rationality Quotes March 2014
Comment author: ChrisHallquist 10 March 2014 02:23:10AM 0 points [-]

Ah, of course, because it's more important to signal one's pure, untainted epistemic rationality than to actually get anything done in life, which might require interacting with outsiders.

This is a failure mode I worry about, but I'm not sure ironic atheist re-appropriation of religious texts is going to turn off anyone we had a chance of attracting in the first place. Will reconsider this position if someone says, "oh yeah, my deconversion process was totally slowed down by stuff like that from atheists," but I'd be surprised.

Comment author: Yvain 07 March 2014 08:23:45PM 3 points [-]

I agree that disagreement among philosophers is a red flag that we should be looking for alternative positions.

But again, I don't feel like that's strong enough. Nutrition scientists disagree. Politicians and political scientists disagree. Psychologists and social scientists disagree. Now that we know we can be looking for high-quality contrarians in those fields, how do we sort out the high-quality ones from the lower-quality ones?

Examples?

Well, take Barry Marshall. Became convinced that ulcers were caused by a stomach bacterium (he was right; later won the Nobel Prize). No one listened to him. He said that "my results were disputed and disbelieved, not on the basis of science but because they simply could not be true...if I was right, then treatment for ulcer disease would be revolutionized. It would be simple, cheap and it would be a cure. It seemed to me that for the sake of patients this research had to be fast tracked. The sense of urgency and frustration with the medical community was partly due to my disposition and age."

So Marshall decided since he couldn't get anyone to fund a study, he would study it on himself, drank a culture of bacteria, and got really sick.

Then due to a weird chain of events, his results ended up being published in the Star, a tabloid newspaper that by his own admission "talked about alien babies being adopted by Nancy Reagan", before they made it into legitimate medical journals.

I feel like it would be pretty easy to check off a bunch of boxes on any given crackpot index..."believes the establishment is ignoring him because of their biases", "believes his discovery will instantly solve a centuries-old problem with no side effects", "does his studies on himself", "studies get published in tabloid rather than journal", but these were just things he naturally felt or had to do because the establishment wouldn't take him seriously and he couldn't do things "right".

I don't think "smart people saying stupid things" reaches anything like man-bites-dog levels of surprisingness. Not only do you have examples from politics, but also from religion. According to a recent study, a little over a third of academics claim that "I know God really exists and I have no doubts about it," which is maybe less than the general public but still a sizeable minority.

I think it is much, much less than the general public, but I don't think that has as much to do with IQ per se as with academic culture. But although I agree it's interesting that IQ doesn't predict correct beliefs more strongly than it does, I am still very surprised that you don't seem to think it matters at all (or at least significantly). What if we switched gears? Agreeing that the fact that a contrarian theory is invented or held by high-IQ people is no guarantee of its success, can we agree that the fact that a contrarian theory is invented and mostly held by low-IQ people is a very strong strike against it?

Proper logical form comes cheap, just add a premise which says, "if everything I've said so far is true, then my conclusion is true."

Proper logical form comes cheap, but a surprising number of people don't bother even with that. Do you frequently see people appending "if everything I've said so far is true, then my conclusion is true" to screw with people who judge arguments based on proper logical form?

Comment author: ChrisHallquist 08 March 2014 07:05:05PM 0 points [-]

Nutrition scientists disagree. Politicians and political scientists disagree. Psychologists and social scientists disagree. Now that we know we can be looking for high-quality contrarians in those fields, how do we sort out the high-quality ones from the lower-quality ones?

What's your proposal for how to do that, aside from just evaluating the arguments the normal way? Ignore the politicians, and we're basically talking about people who all have PhDs, so education can't be the heuristic. You also proposed IQ and rationality, but admitted we aren't going to have good ways to measure them directly, aside from looking for "statements that follow proper logical form and make good arguments." I pointed out that "good arguments" is circular if we're trying to decide who to read charitably, and you had no response to that.

That leaves us with "proper logical form," about which you said:

Proper logical form comes cheap, but a surprising number of people don't bother even with that. Do you frequently see people appending "if everything I've said so far is true, then my conclusion is true" to screw with people who judge arguments based on proper logical form?

In response to this, I'll just point out that this is not an argument in proper logical form. It's a lone assertion followed by a rhetorical question.

Comment author: ChrisHallquist 04 March 2014 05:48:16AM 1 point [-]

Skimming the "disagreement" tag in Robin Hanson's archives, I found a few posts that I think are particularly relevant to this discussion:

Comment author: shminux 03 March 2014 09:55:58PM *  0 points [-]

Had you used small and large numbers instead of the terms torture and dust specks, the whole post would have been trivial. I learned a fair bit about my own thinking in the aftermath of reading that infamous post, and I suspect I am not the only one. I even intentionally used politically charged terms in my own post.

Comment author: ChrisHallquist 04 March 2014 03:26:33AM 0 points [-]

Username explicitly linked to torture vs. dust specks as a case where it makes sense to use torture as an example. Username is just objecting to using torture for general decision theory examples where there's no particular reason to use that example.

Comment author: Yvain 03 March 2014 07:12:26PM 9 points [-]

Your heuristics are, in my opinion, too conservative or not strong enough.

Track record of saying reasonable things once again seems to put the burden of decision on your subjective feelings and so to rule out paying attention to people you disagree with. If you're a creationist, you can rule out paying attention to Richard Dawkins, because if he's wrong about God existing, about the age of the Earth, and about homosexuality being okay, how can you ever expect him to be right about evolution? If you're anti-transhumanist, you can rule out cryonicists because they tend to say lots of other unreasonable things like that computers will be smarter than humans, or that there can be "intelligence explosions", or that you can upload a human brain.

Status within mainstream academia is a really good heuristic, and this is part of what I mean when I say I use education as a heuristic. Certainly to a first approximation, before investigating a field, you should just automatically believe everything the mainstream academics believe. But then we expect mainstream academia to be wrong in a lot of cases - you bring up the case of mainstream academic philosophy, and although I'm less certain than you are there, I admit I am very skeptical of them. So when we say we need heuristics to find ideas to pay attention to, I'm assuming we've already started by assuming mainstream academia is always right, and we're looking for which challenges to them we should pay attention to. I agree that "challenges the academics themselves take seriously" is a good first step, but I'm not sure that would suffice to discover the critique of mainstream philosophy. And it's very little help at all in fields like politics.

The crackpot warning signs are good (although it's interesting how often basically correct people end up displaying some of them because they get angry at having their ideas rejected and so start acting out, and it also seems like people have a bad habit of being very sensitive to crackpot warning signs the opposing side displays and very obtuse to those their own side displays). But once again, these signs are woefully inadequate. Plantinga doesn't look a bit like a crackpot.

You point out that "Even though appearances can be misleading, they're usually not." I would agree, but suggest you extend this to IQ and rationality. We are so fascinated by the man-bites-dog cases of very intelligent people believing stupid things that it's hard to remember that stupid things are still much, much likelier to be believed by stupid people.

(possible exceptions in politics, but politics is a weird combination of factual and emotive claims, and even the wrong things smart people believe in politics are in my category of "deserve further investigation and charitable treatment".)

You are right that I rarely have the results of an IQ test (or Stanovich's rationality test) in front of me. So when I say I judge people by IQ, I think I mean something like what you mean when you say "a track record of making reasonable statements", except basing "reasonable statements" upon "statements that follow proper logical form and make good arguments" rather than ones I agree with.

So I think it is likely that we both use a basket of heuristics that include education, academic status, estimation of intelligence, estimation of rationality, past track record, crackpot warning signs, and probably some others.

I'm not sure whether we place different emphases on those, or whether we're using about the same basket but still managing to come to different conclusions due to one or both of us being biased.

Comment author: ChrisHallquist 04 March 2014 02:08:18AM 0 points [-]

But then we expect mainstream academia to be wrong in a lot of cases - you bring up the case of mainstream academic philosophy, and although I'm less certain than you are there, I admit I am very skeptical of them.

With philosophy, I think the easiest, most important thing for non-experts to notice is that (with a few arguable exceptions that are independently pretty reasonable) philosophers basically don't agree on anything. In the case of e.g. Plantinga specifically, non-experts can notice few other philosophers think the modal ontological argument accomplishes anything.

The crackpot warning signs are good (although it's interesting how often basically correct people end up displaying some of them because they get angry at having their ideas rejected and so start acting out...

Examples?

We are so fascinated by the man-bites-dog cases of very intelligent people believing stupid things that it's hard to remember that stupid things are still much, much likelier to be believed by stupid people.

(possible exceptions in politics, but politics is a weird combination of factual and emotive claims, and even the wrong things smart people believe in politics are in my category of "deserve further investigation and charitable treatment".)

I don't think "smart people saying stupid things" reaches anything like man-bites-dog levels of surprisingness. Not only do you have examples from politics, but also from religion. According to a recent study, a little over a third of academics claim that "I know God really exists and I have no doubts about it," which is maybe less than the general public but still a sizeable minority (and the same study found many more academics take some sort of weaker pro-religion stance). And in my experience, even highly respected academics, when they try to defend religion, routinely make juvenile mistakes that make Plantinga look good by comparison. (Remember, I used Plantinga in the OP not because he makes the dumbest mistakes per se but as an example of how bad arguments can signal high intelligence.)

So when I say I judge people by IQ, I think I mean something like what you mean when you say "a track record of making reasonable statements", except basing "reasonable statements" upon "statements that follow proper logical form and make good arguments" rather than ones I agree with.

Proper logical form comes cheap, just add a premise which says, "if everything I've said so far is true, then my conclusion is true." "Good arguments" is much harder to judge, and seems to defeat the purpose of having a heuristic for deciding who to treat charitably: if I say "this guy's arguments are terrible," and you say, "you should read those arguments more charitably," it doesn't do much good for you to defend that claim by saying, "well, he has a track record of making good arguments."
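The construction being described (any argument, however weak, made formally valid by one added premise) can be written out schematically:

```latex
% Take any argument with premises P_1, ..., P_n and conclusion C.
% Adding a single conditional premise makes it deductively valid:
\begin{align*}
  &P_1,\; P_2,\; \ldots,\; P_n
      && \text{(the original premises, however dubious)} \\
  &(P_1 \land P_2 \land \cdots \land P_n) \rightarrow C
      && \text{(the cheap added premise)} \\
  &\therefore\; C
      && \text{(by modus ponens)}
\end{align*}
```

Validity is preserved no matter what the $P_i$ say, which is exactly why "proper logical form" alone screens out almost nothing.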
