Comment author: geniuslevel20 27 July 2013 06:18:18AM 6 points [-]

The main question is why automation is associated with unemployment today when it wasn't in the past. To answer, you have to consider the kinds of jobs created by and lost to automation, and the determinants of workers' incomes in those jobs.

Most of the industrial revolution is associated with an increasing number of workers in manufacturing and fewer in farming. The industrial workforce grew primarily at the expense of the peasants or farmers. Today, automation is causing manufacturing jobs to be replaced by service jobs. Farming jobs were the first to go because our need for foodstuffs is limited. Manufacturing jobs went next because manufacturing is easier to automate than services.

But manufacturing jobs paid better than farming jobs, while service-industry jobs pay worse than manufacturing jobs. When jobs pay better, there are also more of them, because well-paid citizens create greater aggregate demand. So today manufacturing jobs are declining relative to service-industry jobs, with the result that the workforce is poorer, which means fewer workers can be employed.

The explanation lies in whatever causes some jobs to be paid considerably more than others. It could be status. Manufacturing jobs are higher status than farming jobs because the city is high status compared to the sticks. And service industry is low status because of the low status of servitude. Groups of workers with higher status get paid better. It probably makes a greater difference than we realize.

Comment author: feanor1600 11 August 2013 02:36:46PM 0 points [-]

"Groups of workers with higher status get paid better." True. But what is the main direction of causation here?

According to basic economics, workers will get paid their marginal product (how much you add to production). This is a pretty good first approximation. Of course, you can get paid in many ways- money, flexible hours, even status. The higher the status of a job the less it needs to pay to attract workers; this is called a compensating differential. High-level politicians are very high-status but don't make that much. Conversely, very low-status jobs (like janitor or garbageman) have to pay a bit more in money wages to get people to work.
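The compensating-differential logic can be made concrete with a toy calculation (all the numbers and "status values" below are invented for illustration, not estimates):

```python
# Toy model of compensating differentials: in equilibrium, total
# compensation (money wage + non-monetary value of the job's status)
# tends to equalize across jobs requiring similar skills.

def money_wage(total_compensation, status_value):
    """Cash wage needed once part of the job is 'paid' in status."""
    return total_compensation - status_value

TOTAL = 50_000  # hypothetical equilibrium total compensation

# Hypothetical dollar-equivalents of each job's prestige.
jobs = {"politician": 20_000, "office clerk": 0, "garbage collector": -5_000}

for job, status in jobs.items():
    print(job, money_wage(TOTAL, status))
```

Under this sketch the garbage collector's cash wage comes out highest of the three (55,000 vs. the politician's 30,000), matching the observation that low-status jobs must pay a premium in money.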

Comment author: Yosarian2 24 July 2013 02:20:16PM *  11 points [-]

People who think that automation is currently increasing unemployment don't generally just talk about jobs lost during the Great Recession. They see an overall trend of reduction in employment and wages since at least 2000.

You're absolutely right that the recession was caused by a financial shock. The thing is, a normal effect of recessions is for productivity to increase; businesses lay off workers and then try to figure out how to run their operation more efficiently with fewer workers, and that happens in every recession. The difference might be that this time it is easier than ever for employers to figure out how to do more with fewer workers (because of the internet, automation, computers, etc.), and so even when demand starts to come back up as GDP grows again, they apparently still don't need to hire many workers.

The economists making the automation argument aren't saying that automation caused the Great Recession or the job losses that happened then; they tend to think it's a long-running trend that has been going on for quite a while, that it was partly hidden for a few years by the housing bubble, and that the Great Recession accelerated it by increasing the pressure on employers to find ways to be more cost-effective.

Edit: the main assumption EY is making in this article seems to be here:

Since it should take advanced general AI to automate away most or all humanly possible labor

and I don't think that's true. I think that a majority of labor done today, either physical or intellectual, is basically a series of routine or repeatable tasks, and I think that a big chunk of it could be done by either narrow AI software or robotics or internet-based logistics.

Anyway, you wouldn't really have to automate most or all of human labor to create an unemployment crisis; if we hit long-term unemployment levels of 20%-30%, that would probably not be sustainable without some fairly significant social and economic changes.

Comment author: feanor1600 11 August 2013 02:24:43PM 0 points [-]

"a normal effect of recessions is for productivity to increase; businesses lay off workers and then try to figure out how to run their operation more efficiently with fewer workers, and that happens in every recession"

This is not true. In fact, the normal effect is the opposite: a productivity decrease. See the data for the US after 1948 here.

If you are looking for a story as to why, in some business cycle theories (such as Real Business Cycle Theory) the recession is caused by a negative shock to productivity.

Comment author: shminux 24 July 2013 08:06:04PM 12 points [-]

Both Q and A seem to be treating unemployment as intrinsically bad, which is a case of lost purposes, a confusion between terminal and instrumental goals.

Comment author: feanor1600 11 August 2013 03:59:25AM 3 points [-]

Involuntary unemployment is bad. Not having to work is good.

Comment author: Wei_Dai 13 May 2013 08:39:59AM 22 points [-]

This is for people interested in optimizing for academic fame (for a given level of talent and effort and other costs). Instead of trying to get a PhD and a job in academia (which is very costly and due to "publish or perish" forces you to work on topics that are currently popular in academia), get a job that leaves you with a lot of free time, or find a way to retire early. Use your free time to search for important problems that are being neglected by academia. When you find one, pick off some of the low-hanging fruit in that area and publish your results somewhere. Then, (A) if you're impatient for recognition, use your results to make an undeniable impact on the world (see Bitcoin for example), or (B) if you're patient, move on to another neglected topic and repeat, knowing that in a few years or decades, the neglected topic you found will likely become a hot topic and you'll be credited for being the first to investigate it.

Comment author: feanor1600 16 June 2013 05:32:46PM 0 points [-]

"Instead of trying to get a PhD and a job in academia (which is very costly and due to "publish or perish" forces you to work on topics that are currently popular in academia), get a job that leaves you with a lot of free time" Part of the attraction of academia to me is that it is exactly the job that leaves you with lots of free time. A professor only has to be in a certain place at a certain time 3-12 hours per week (depending on teaching load), 30 weeks per year. After tenure, you can research whatever you want, especially if you aren't in a lab-science field that leaves you dependent on grants. Even before tenure I can work on neglected problems, so long as they aren't neglected due to their low prestige.

Comment author: satt 17 May 2013 01:29:32AM *  10 points [-]

Instead of trying to get a PhD and a job in academia (which is very costly and due to "publish or perish" forces you to work on topics that are currently popular in academia), get a job that leaves you with a lot of free time, or find a way to retire early.

On the bright side, if we forget the "job in academia" part and just focus on the "PhD" part, a PhD can fit these criteria reasonably well.

Before I justify that, I should acknowledge the many articles arguing, with some justice, that a PhD will ruin your life. These articles make fair points, although I notice they have a lot of overlap, mostly concluding that if you get a PhD you'll spend 6+ years running up masses of debt, with massive teaching loads and no health insurance, worked to death by an ogre as you try to spin literary criticism out of novels analyzed to death decades ago.

The obvious solution: don't do a PhD in a country where taking 7 years to finish is normal; don't do a PhD unless someone's paying you to do it; don't do a PhD in a department that assigns you endless teaching duties; don't do a PhD in a country without a universal healthcare system; don't choose a supervisor who exploits their students; and don't get a literature PhD.

A "don't" is less useful than a "do", so here are some possible "do"s I'd suggest as alternatives:

  • find PhD programmes where the successful students mostly finish within 4 years (in the UK, 3-4 years is a more typical PhD length than 6-7, but there is variation among universities)
  • explicitly say on your PhD applications that you can't afford to do the PhD unless the university waives the tuition fee and offers a stipend (this no doubt reduces your chances of getting a PhD place, but if you're allergic to debts you want to be selective here)
  • when you visit prospective departments, ask the professors and current PhD students how much teaching PhD students have to do (in some departments it's 100% optional, and pays you extra)
  • do a PhD in the UK, which has a health system where most medical services are free at the point of delivery
  • try to get an idea of how hard your potential PhD supervisors work their students (don't just talk to the supervisors themselves — try to talk to their current/former students one-on-one as well)
  • get a PhD in physics, statistics, accountancy, economics, or something else remunerative and popular with employers

With the usual worries about PhDs out of the way, I turn to Wei_Dai's concerns. The first is the publish or perish issue. If you're just doing a PhD, the publish or perish imperative is often weaker than for postdocs & professors. (This again varies with the field and the institution. For example, as I understand things, top-tier US economics PhD students normally publish 3 or 4 serious papers, and basically staple them together for their dissertation. On the other hand, some UK physics students get PhDs without publishing any journal papers at all.) The ultimate hurdle for your work is convincing your supervisor and the handful of external examiners reading your dissertation that it's worthwhile.

Along the same lines, you don't necessarily have to work on fashionable topics if you're getting a PhD. It's quite possible to work on something boring; it need only be just interesting enough to keep your supervisor on board and satisfy your other examiners. (You'll probably want a margin of safety, though, in case your work ends up more boring than expected.) A more objective (but still approximate) rule of thumb: your PhD should be interesting enough to be accepted by the same rank of journal as the papers it's citing. If your PhD doesn't need to serve as a step up into an academic job, it can be as boring as you like as long as it meets the baseline.

Lastly, what about free time? A lot of PhDs eat virtually all of your attention, but some offer ample free time in the first couple of years if the work involved isn't fiddly. For example, you might end up running lots of simulations with a computer program that's already been written. If so, you might well be able to go to your office in the morning, set a run going, and spend the afternoon doing something else.

One catch is that it's not trivial to tell which PhDs are low-effort before the fact. Even if your supervisor accurately tells you what they expect from you, and the other students accurately report that they don't spend much time poring over their work, you might still get unlucky and end up slaving over a computer or an experiment or some equations for 16 hours a day, because research is unpredictable. (Still, compare it to the main alternative: people routinely underestimate how long they'll spend at the workplace — and commuting! — for normal jobs, too. It's not obvious that PhDs are more unpredictable in this regard.)

Nonetheless, if you plan ahead to do straightforward work for an easy-going supervisor who's not in the office most days, you might well be able to spend most days off campus yourself, doing your own independent research instead. And while you're a student, there's nothing stopping you from visiting other departments at your university to pick the brains over there!

Use your free time to search for important problems that are being neglected by academia. When you find one, pick off some of the low-hanging fruit in that area

I don't have any tips for this, though.

Comment author: feanor1600 16 June 2013 05:25:40PM 1 point [-]

"don't do a PhD in a country without a universal healthcare system" Funded PhDs in the US commonly include health insurance coverage as part of your stipend.

This is yet more support for your main point: the fact that getting a PhD in some programs/fields is a bad idea does not mean you should avoid a PhD from any program/field.

Comment author: UngnsCobra 27 May 2013 09:47:56PM 3 points [-]

I'm not confident this is the right outlet (and if so I apologize), but does anyone have tips on good data sources? For example, for poultry statistics: I'm trying to get hold of egg-production figures for each individual country on a year-by-year basis. I'd appreciate any tips! Where do you go to find your data? (I chose to make this an open question.)

Comment author: feanor1600 29 May 2013 03:50:01AM 2 points [-]

1) Find academic papers on the subject and see where they got their data.
2) Use data-only search sites like Zanran, or set Google to search only for .xls files.
3) Try question-and-answer sites, like r/datasets.
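For the filetype trick, the query strings are easy to generate programmatically if you want to sweep several spreadsheet formats at once (a minimal sketch; the topic string and the helper name are just examples):

```python
# Sketch: build search-engine queries restricted to data file types,
# using the "filetype:" operator supported by major search engines.

def dataset_queries(topic, filetypes=("xls", "xlsx", "csv")):
    """Return one query string per candidate data file type."""
    return [f"{topic} filetype:{ft}" for ft in filetypes]

for q in dataset_queries("egg production by country"):
    print(q)  # e.g. "egg production by country filetype:xls"
```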

Comment author: PhilGoetz 24 May 2013 06:25:07PM *  14 points [-]

This is a very good idea.

He found that the data was not submitted for almost 60% of papers, and that data for 75% of papers were not in a format suitable for replication.

I recently needed large tables of data from 4 different publications. The data was provided... as PDFs. I had to copy thousands of lines of data out from the PDFs by hand. Journals prefer PDF format because it's device-independent.

It's questionable how much good science can do, though, when we're already so far behind in applying biotech research in the clinic. My cousin died last week just after her 40th birthday, partly from a bacterial infection. The hospital couldn't identify the bacteria because they're only allowed to use FDA-approved diagnostic tests. The approved tests involve taking a sample, culturing it, and using an immunoassay to test for proteins of one particular bacteria. This takes days, costs about $400 per test, tests only for one particular species or strain of bacteria per test, has only a small number of possible tests available, and has a high false negative rate. It was a reasonable approach 25 years ago.

Where I work, we take a sample, amplify it via PCR (choosing the primers is a tricky but solved problem), and sequence everything. We identify everything, hundreds of bacterial species, whether they can be cultured or not, in a single quick test. If you don't have a sequencer, you could use a 96-well plate to test against at least 96 bacterial groups, or a DNA hybridization microarray to test against every known bacterial species, for $200-$500. The FDA has no process for approving these tests other than to go through a separate validation process for every species being tested for, and no way to add the DNA of newly-discovered species to a microarray that's been approved.

Comment author: feanor1600 28 May 2013 04:57:14PM 2 points [-]

"I had to copy thousands of lines of data out from the PDFs by hand. Journals prefer PDF format because it's device-independent"

Google "PDF to excel".
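Beyond converter tools, if you end up with raw text copied out of a PDF, splitting it back into columns can often be scripted rather than done by hand. A minimal sketch, assuming the copied rows separate fields with tabs or runs of spaces (the sample rows are invented):

```python
import re

def parse_pdf_lines(lines):
    """Split rows copied from a PDF table into lists of fields,
    treating tabs or runs of 2+ spaces as column separators."""
    rows = []
    for line in lines:
        fields = re.split(r"\s{2,}|\t", line.strip())
        if fields and fields != [""]:  # drop blank lines
            rows.append(fields)
    return rows

sample = [
    "Albania     2001      30100",
    "Algeria     2001     155000",
]
print(parse_pdf_lines(sample))
```

Libraries such as Tabula or pdfplumber can also extract tables from PDFs directly, which is worth trying before resorting to copy-and-parse.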

Comment author: feanor1600 28 April 2013 01:07:19PM 9 points [-]

Philosopher Michael Huemer has a page "Should I go to graduate school in philosophy?" It begins:

Many philosophy students decide to attend graduate school, knowing almost nothing about the consequences of this decision, or about what the philosophy profession is actually like. By the time they find out, they have already committed several years of their life, and possibly thousands of dollars, to the undertaking. They then learn that their initial assumptions about the field were unrealistically optimistic. They continue in their chosen path, even though, if they had known the facts at the start, they might have chosen a different career path. I have written the following points to provide a more realistic picture for students, before they make this choice

Comment author: satt 02 April 2013 06:18:07AM *  23 points [-]

Within the philosophy of science, the view that new discoveries constitute a break with tradition was challenged by Polanyi, who argued that discoveries may be made by the sheer power of believing more strongly than anyone else in current theories, rather than going beyond the paradigm. For example, the theory of Brownian motion which Einstein produced in 1905, may be seen as a literal articulation of the kinetic theory of gases at the time. As Polanyi said:

Discoveries made by the surprising configuration of existing theories might in fact be likened to the feat of a Columbus whose genius lay in taking literally and as a guide to action that the earth was round, which his contemporaries held vaguely and as a mere matter for speculation.

― David Lamb & Susan M. Easton, Multiple Discovery: The pattern of scientific progress, pp. 100-101

Comment author: feanor1600 11 April 2013 01:25:39PM 0 points [-]

This is how Scott Sumner describes his own work in macroeconomics and NGDP targeting. Others see it as radical and innovative; he thinks he is just taking the standard theories seriously.

Comment author: paulfchristiano 22 January 2013 06:42:54PM *  31 points [-]

I'm also glad to see competition, and I would not be surprised at all if their overviews of the evidence were stronger than GiveWell's. It would be nice to see what they are doing, and I don't trust their results too much without that. I guess there is a book where they describe the meta-analyses they've done, which I have not had a chance to see.

The comparison with GiveWell seems mostly unreasonable, and I think it reflects somewhat badly on its author. Most of the points are either mistaken or misleading, and I would be surprised if they could be made by someone writing in good faith. [Edit: apologies for the snarky tone and inaccurate claim!]

  • They suggest GiveWell doesn't understand causal attribution because Holden lists instrumental variables at the top of an inventory of methods of causal attribution. However, the list is in alphabetical order, and Holden says specifically that they rarely see compelling studies of this form.
  • They criticize GiveWell for vote counting and provide a detailed description of what vote counting is. They link to GiveWell's analysis of microfinance as their only example, and suggest it makes elementary statistical errors. The procedure GiveWell is following is to look at a variety of studies with methodological failures, point out that the positive results are in the studies with the largest methodological failures, and that there is strong evidence that those failures will exaggerate the positive impact of microfinance. It is hard to mistake this for vote counting in good faith. A study which shows that eliminating methodological flaws reduces measured positive impacts does constitute evidence that undermines other methodologically flawed studies which find a positive effect. This is not the same as "accepting the null."
  • AidGrade posts the comparison on their blog but doesn't take responsibility for it (disclaiming: these are the views of the author) or allow responses there.

Eva Vivalt thinks that making comparisons between outcome measures is not the place of a charity evaluator, and faults GiveWell for being willing to do so. No argument is provided for this, nor any response to the arguments GiveWell has given in favor of making such judgments.

This seems like a common and defensible position, but as an altruist concerned with aggregate welfare it doesn't make too much sense for me. Yes, there is value in producing the raw estimates of effectiveness on respective outcome measures (which GiveWell does as well), but encouraging discussion about what outcome measures are important is also a valuable public good, and certainly not an active disservice.

@Raemon: saying this is better for "people with choice paralysis or who don't have any idea how to evaluate different types of outcomes" seems to be missing the point. It is a significant, largely empirical challenge to determine which intermediate outcome measures most matter for the things we ultimately care about. Whether or not GiveWell does that passably, it is clearly something which needs to be done and which individual donors are not well-equipped to do.

The two valid points of criticism Eva raises:

  • GiveWell is willing to accept data of the form "these two graphs look pretty similar" when common-sense reasoning suggests that similarity reflects a causal influence. At best, GiveWell's willingness to use such data requires donors to have more trust in them. At worst, it causes GiveWell to make mistakes. Such data was misleading once in the past, indeed in the example GiveWell cites of this approach. That said, a formal statistical analysis wouldn't have helped at all if GiveWell was willing to accept difference-of-difference estimators or matching [see the next point]. Overall this seems like a valid criticism of GiveWell, although I think GiveWell's position is defensible and has been defended, while this criticism includes no argument other than an appeal to incredulity. Indeed, even the incredulity is incorrectly described, as it applies verbatim to difference-of-difference estimators and matching as well, which economists do accept and which the author chastises GiveWell for not explicitly listing in their inventory.
  • GiveWell fails to note difference-of-differences or matching in their list of methods of causal attribution. This is technically valid but a bit petty, given that GiveWell does in fact use such estimators when available. Making an error of omission in an expository blog post probably does not license the conclusion "[GiveWell is] not in a good position to evaluate studies that did not use randomization." [Edit: as Eva points out, GiveWell is in fact in a worse position to evaluate studies that don't use randomization, though I don't think the evidence presented is very relevant to this claim and I think Eva overstates it significantly.]

Comment author: feanor1600 28 January 2013 01:59:44PM 1 point [-]

From the same page: "Instrumental variables are rarely used and have generally become viewed with suspicion; their heyday was the 1980s." This is simply not true, at least within economics. Look at any recent econometrics textbook, or search for "instrumental variables" in EconLit and notice how there are more hits every year between 1970 and now.
