Comment author: PhilGoetz 29 October 2015 06:38:57PM *  1 point [-]

JWW suggests that an AI could partition trial subjects into control and experimental groups such that expected number of events in both was equal, and presumably also such that cases involving assumptions were distributed equally, to minimize the impact of assumptions. For instance, an AI doing a study of responses to an artificial sweetener could do some calculations to estimate the impact of each gene on sugar metabolism, then partition subjects so as to balance their allele frequencies for those genes.
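Balancing a measured covariate between groups doesn't require anything exotic; a crude version fits in a few lines. A minimal sketch (the single-covariate setup, names, and "score" are my own illustration, not anything from the thread): sort subjects by the covariate and alternate assignments, so each group gets one member of each matched pair.

```python
import random

def balanced_split(subjects, score):
    # Sort by the covariate, then deal subjects out like cards:
    # each adjacent pair contributes one subject to each group,
    # so the groups end up with nearly identical covariate means.
    ordered = sorted(subjects, key=score)
    control, experimental = [], []
    for i, subject in enumerate(ordered):
        (control if i % 2 == 0 else experimental).append(subject)
    return control, experimental

random.seed(0)
subjects = [{"id": i, "score": random.gauss(0.0, 1.0)} for i in range(100)]
control, experimental = balanced_split(subjects, lambda s: s["score"])

mean = lambda group: sum(s["score"] for s in group) / len(group)
print(len(control), len(experimental))          # 50 50
print(abs(mean(control) - mean(experimental)))  # small residual imbalance
```

Balancing many covariates at once (e.g. allele frequencies across many genes) is where it becomes a genuinely hard optimization problem.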

(A more extreme interpretation would be that the AI is partitioning subjects and performing the experiment not in a way designed to test a single hypothesis, but to maximize total information extracted from the experiment. This would be optimal, but a radical departure from how we do science. Actually, now that I think of it, I wrote a grant proposal suggesting this 7 years ago. My idea was that molecular biology must now be done by interposing a layer of abstraction via computational intelligence in between the scientist and the data, so that the scientist is framing hypotheses not about individual genes or proteins, but about causes, effects, or systems. It was not well-received.)

There's another comment somewhere countering this idea by noting that this almost requires omniscience; the method one uses to balance out one bias may introduce another.

Comment author: EHeller 14 November 2015 07:07:50AM 0 points [-]

There is a lot of statistical literature on optimal experimental design, and it's used all the time. Years ago at Intel, we spent a lot of time on optimal design of quality control measurements, and I have no doubt a lot of industrial scientists in other companies spend their time thinking about such things.

The problem is, information is a model-dependent concept (derivatives of the log-likelihood depend on the likelihood), so if your prior isn't fairly strong, there isn't a lot of improvement to be had. A lot of science is exploratory, so trying to optimize the experimental design is premature.
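To make "information is model-dependent" concrete: even for a single Bernoulli trial, the Fisher information depends on the assumed parameter value, so any "optimal" design is only as good as the prior it is computed under. A minimal sketch (the function name is my own):

```python
def fisher_info_bernoulli(p):
    # Fisher information of one Bernoulli(p) observation:
    # I(p) = E[(d/dp log L)^2] = 1 / (p * (1 - p)).
    return 1.0 / (p * (1.0 - p))

# The same observation carries different "information" depending on
# the assumed parameter -- which is exactly why a weak prior leaves
# little room for design optimization.
for p in (0.1, 0.5, 0.9):
    print(p, fisher_info_bernoulli(p))
```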

Either way, this isn't stuff you need an AI for at all; it's stuff people talk about and think about now, today, using computer-assisted human intellect.

Comment author: johnswentworth 14 November 2015 01:21:57AM 0 points [-]

Exactly! This is a math problem! And it becomes a very complicated math problem very quickly as the prior information gets interesting.

There's nothing magical about an AI; it can't figure out anything a human couldn't figure out in principle. The difference is the "superintelligence" bit: a superintelligent AI could efficiently use much more complicated prior information for experiment design.

Comment author: EHeller 14 November 2015 06:57:13AM 0 points [-]

I don't understand the improvement you think is possible here. In a lot of cases, the math isn't the problem; the theory is known. The difficulty is usually finding a large enough sample size, etc.

Comment author: gjm 12 October 2015 11:24:36AM 4 points [-]

literally no one cares if you do [TAing] poorly

I have heard rumours that students are actually people, and that they care about the quality of the teaching they receive.

In response to comment by gjm on Deliberate Grad School
Comment author: EHeller 12 October 2015 07:04:35PM 3 points [-]

You'd think so, but office hours and TA sections without attendance grades are very sparsely attended.

Comment author: Curiouskid 07 October 2015 05:23:54AM *  5 points [-]

I have some questions about step 1 (find a flexible program):

My understanding is that there are two sources of inflexibility for PhD programs: A. Requirements for your funding source (e.g. TA-ing) and B. Vague requirements of the program (e.g. publish X papers). I'm excluding Quals, since you just have to pass a test and then you're done.

Elsewhere in the comments, someone wrote:

"Grad school is free. At most good PhD programs in the US, if you get in then they will offer you funding which covers tuition and pays you a stipend on the order of $25K per year. In return, you may have to do some work as a TA or in a professor's lab."

So, there are two types of jobs you can have to fund your PhD: TA-ing, or being an RA (Research Assistant) in a professor's lab. How time-consuming is TA-ing generally? I imagine it varies based on the school/class. How do you find a TA-ing gig that isn't time-consuming? Can you generally TA during your entire PhD? I vaguely recall a university that would only let you TA for so many semesters.

You could also fund your PhD by getting a fellowship. Philip Guo has written about applying for the NSF, NDSEG, and Hertz fellowships. I'm poorly calibrated about how hard it is to get one of these fellowships. I've also heard that certain schools will offer fellowships to some of their students. How hard are these to get relative to the fellowships mentioned above? There are ~33K science PhDs awarded each year. I wonder what distinguishes the ~4% who get fellowships from the median science PhD student.

Let's say that you were really frugal and/or financially independent already. My impression is that many schools would still require you to TA in order to have your tuition waived.

Let’s assume you have the financial aspect of your PhD taken care of (e.g. You have an easy/enjoyable TA job). What other requirements are there other than passing Quals? Could I read interesting books indefinitely until I find something interesting to publish?

I'd like to believe that achieving step 1 is possible, but as eli_sennesh pointed out, this is hard.

Comment author: EHeller 08 October 2015 03:48:42AM 4 points [-]

How hard your quals are depends on how well you know your field. I went to a top-5 physics program, and everyone passed their qualifying exams; roughly half opted to take the qual in their first year of grad school. Obviously, we weren't randomly selected, though.

Fellowships are a crapshoot that depend on a lot of factors outside your control, but getting funding is generally pretty easy in the sciences. When you work as an "RA" you are basically just doing your thesis research. TAing can be time consuming, but literally no one cares if you do it poorly, so it's not high pressure.

But this is a red flag:

Let’s assume you have the financial aspect of your PhD taken care of (e.g. You have an easy/enjoyable TA job). What other requirements are there other than passing Quals? Could I read interesting books indefinitely until I find something interesting to publish?

That isn't how research works, at least in the sciences. Research is generally 1% "big idea" and 99% slowly grinding it out to see if it works. Your adviser, if he/she is any good, will help you find a big idea that you can make some progress on and you'll be grinding it out every week and meeting with your adviser or other collaborators if you've gotten stuck.

That said, a bad adviser probably won't pay any attention to you. So you can do whatever you want for about 7 years until people realize you've made no progress and the wheels come off the bus (at which point they'll probably hand you a masters degree and send you on your way).

Comment author: Anders_H 14 September 2015 04:16:31AM *  7 points [-]

I have the Irish equivalent of an MD; "Medical Bachelor, Bachelor of Surgery, Bachelor of the Art of Obstetrics". This unwieldy degree puts me in fairly decent company on Less Wrong.

I may be generalizing from a sample of one, but my impression is that medicine selects out rationalists for the following reasons:

(1) The human body is an incompletely understood, highly complex system; the consequences of manipulating any of its components generally cannot be predicted from an understanding of the overall system. Medicine therefore necessarily has to rely heavily on memorization (at least until we get algorithms that take care of the memorization).

(2) A large component of successful practice of medicine is the ability to play the socially expected part of a doctor.

(3) From a financial perspective, medical school is a junk investment after you consider the opportunity costs. Consider the years in training, the number of hours worked, the high stakes and high pressure, the possibility of being sued etc. For mainstream society, this idea sounds almost contrarian, so rationalists may be more likely to recognize it.

--

My story may be relevant here: I was a middling medical student; I did well in those of the pre-clinical courses that did not rely too heavily on memorization, but barely scraped by in many of the clinical rotations. I never had any real passion for medicine, and this was certainly reflected in my performance.

When I worked as an intern physician, I realized that my map of the human body was insufficiently detailed to confidently make clinical decisions; I still wonder whether my classmates were better at absorbing knowledge that I had missed out on, or if they are just better at exuding confidence under uncertainty.

I now work in a very subspecialized area of medical research that is better aligned with rational thinking; I essentially try to apply modern ideas about causal inference to comparative effectiveness research and medical decision making. I was genuinely surprised to find that I could perform at the top level at Harvard, substantially outperforming people who were in a different league from me in terms of their performance in medical school. I am not sure whether this says something about the importance of being genuinely motivated, or if it is a matter of different cognitive personalities.

In retrospect, I am happy with where this path has taken me, but I can't help but wonder if there was a shorter path to get here. If I could talk to my 18-year-old self, I certainly would have told him to stay far away from medicine.

Comment author: EHeller 14 September 2015 05:43:44AM *  6 points [-]

I don't think medicine is a junk investment when you consider the opportunity cost, at least in the US.

Consider my sister, a fairly median medical school graduate in the US. After 4 years of medical school (plus her undergrad) she graduated with 150k in debt (at 6% or so). She then did a residency for 3 years making 50k a year, give or take. After that she became an attending with a starting salary of $220k. At younger than 30, she was in the top 4% of salaries in the US.

The opportunity cost is maybe $45k × 4 years = $180k, plus a direct cost of $150k or so. That's roughly $330k "lost to training"; however, it buys 35+ years of making $100k a year more than some alternative version of herself that didn't do medical school. Depending on investment and loan decisions, by 5 years out you've recouped your investment.
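The back-of-envelope arithmetic checks out; here is a quick sketch using the comment's own rough figures (every number below is an estimate from the comment, not real data):

```python
# All figures are the comment's rough estimates.
years_in_school = 4
foregone_salary = 45_000      # per year of med school, vs. working instead
direct_cost = 150_000         # med school debt

lost_to_training = years_in_school * foregone_salary + direct_cost
print(lost_to_training)       # 330000 "lost to training"

attending_premium = 100_000   # extra annual earnings vs. the non-MD path
years_to_break_even = lost_to_training / attending_premium
print(years_to_break_even)    # 3.3 years, ignoring interest and taxes
```

Including the 6% loan interest and taxes would push the break-even out somewhat, consistent with the comment's "by 5 years out" figure.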

Now, if you don't like medicine and hate the work, you've probably damned yourself to doing it anyway. Paying back that much loan is going to be tough working in any other job. But that is a different story than opportunity cost.

Comment author: Douglas_Knight 03 September 2015 04:38:22AM 1 point [-]

It seems to me that Eliezer is basically correct on the physics. It seems to me that you and SU3 looked at a big jump and instead of trying to figure out what he was trying to say, even to the extent of following the links on the reddit thread, just rounded it off to the nearest error you had a counterexample at hand for.

I think "sneer" is a pretty appropriate description.

I have seen some criticism of the example that engages with it, and maybe it would be best to say that it is not a legitimate argument because it relies on fragile things holding when a closely related fragile thing has shattered. But that is a very different criticism.

Comment author: EHeller 04 September 2015 12:42:50AM *  1 point [-]

I don't see how Eliezer is correct here. Conservation of energy just isn't related to the deeper structure of quantum mechanics in the way Harry suggests. In particular, it's not related to unitarity, so violating it wouldn't let you do weird non-unitary things.

Comment author: Cyan 31 August 2015 02:22:41AM *  1 point [-]

you're ignoring critical information

No, in practical terms it's negligible. There's a reason that double-blind trials are the gold standard -- it's because doctors are as prone to cognitive biases as anyone else.

Let me put it this way: recently a pair of doctors looked at the available evidence and concluded (foolishly!) that putting fecal bacteria in the brains of brain cancer patients was such a promising experimental treatment that they did an end-run around the ethics review process -- and after leaving that job under a cloud, one of them was still considered a "star free agent". Well, perhaps so -- but I think this little episode illustrates very well that a doctor's unsupported opinion about the efficacy of his or her novel experimental treatment isn't worth the shit s/he wants to place inside your skull.

In response to comment by Cyan on Beautiful Probability
Comment author: EHeller 31 August 2015 05:41:06AM 2 points [-]

Hold on -- aren't you saying the choice of experimental rule is VERY important (i.e. double-blind vs. not double-blind, etc.)?

If so you are agreeing with VAuroch. You have to include the details of the experiment somewhere. The data does not speak for itself.

Comment author: TheAncientGeek 29 August 2015 09:18:46AM 0 points [-]

The von Neumann axioms aren't self-interpreting.

Physicists are trained to understand things in terms of mathematical formalisms and experimental results, but that falls over when dealing with interpretation. Interpretations cannot be settled empirically, by definition, and formulae are not self-interpreting.

Comment author: EHeller 29 August 2015 10:21:31PM 1 point [-]

My point was only that nothing in the axioms prevents macroscopic superposition.

Comment author: Wes_W 20 August 2015 05:27:16PM *  0 points [-]

Cromwell's Rule is not EY's invention, and relatively uncontroversial for empirical propositions (as opposed to tautologies or the like).

If you don't accept treating probabilities as beliefs and vice versa, then this whole conversation is just a really long and unnecessarily circuitous way to say "remember that you can be wrong about stuff".

Comment author: EHeller 20 August 2015 05:44:34PM 2 points [-]

The part that is new compared to Cromwell's rule is that Yudkowsky doesn't want to give probability 1 even to logical statements (e.g. "53 is a prime number").

Because he doesn't want to treat 1 as a probability, you can't expect complete sets of events to have total probability 1, despite that being a tautology. Because he doesn't want probability 0, how do you handle the empty set? How do you assign probabilities to statements like "A and B" where A and B are logically exclusive ("the coin lands heads AND the coin lands tails")?

Removing 0 and 1 from the math of probability breaks most of the standard manipulations. Again, it's best to just say "be careful with 0 and 1 when working with odds ratios."
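One way to see where the standard manipulations break: the log-odds transform, which is central to Bayesian updating, is only defined on the open interval (0, 1). A minimal sketch (the function name is my own):

```python
import math

def log_odds(p):
    # Maps probabilities in (0, 1) onto the whole real line.
    return math.log(p / (1.0 - p))

print(log_odds(0.5))   # 0.0 -- even odds

# The endpoints are exactly where odds arithmetic blows up:
for p in (0.0, 1.0):
    try:
        log_odds(p)
    except (ValueError, ZeroDivisionError):
        print("log-odds undefined at p =", p)
```

So "be careful with 0 and 1 when working with odds ratios" is literal: they correspond to infinite log-odds, while remaining perfectly legitimate probabilities.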

Comment author: Wes_W 20 August 2015 05:02:33PM 0 points [-]

If we're asking what the author "really meant" rather than just what would be correct, it's on record.

The argument for why zero and one are not probabilities is not, "All objects which are special cases should be cast out of mathematics, so get rid of the real zero because it requires a special case in the field axioms", it is, "ceteris paribus, can we do this without the special case?" and a bit of further intuition about how 0 and 1 are the equivalents of infinite probabilities, where doing our calculations without infinities when possible is ceteris paribus regarded as a good idea by certain sorts of mathematicians. E.T. Jaynes in "Probability Theory: The Logic of Science" shows how many probability-theoretic errors are committed by people who assume limits directly into their calculations, without first showing the finite calculation and then finally taking its limit. It is not unreasonable to wonder when we might get into trouble by using infinite odds ratios. Furthermore, real human beings do seem to often do very badly on account of claiming to be infinitely certain of things so it may be pragmatically important to be wary of them.

I... can't really recommend reading the entire thread at the link, it's kind of flame-war-y and not very illuminating.

Comment author: EHeller 20 August 2015 05:14:30PM *  3 points [-]

I think the issue at hand is that 0 and 1 aren't special cases at all, but are very important for the math of probability theory to work (try to construct a probability measure in which no subset has probability 1 or 0).

These values are necessary for the mathematical idea of probability, and EY seems to be confusing "are 0 and 1 probabilities relevant to Bayesian agents?" with "are 0 and 1 probabilities?" (yes, they are, unavoidably, and not as a special case!).
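Concretely, in any finite probability space the axioms force the whole sample space to have probability 1 and the empty set probability 0; they are load-bearing, not edge cases. A minimal sketch with a fair coin:

```python
from fractions import Fraction

# A fair coin as a finite probability space. The measure of an event
# is the sum of its outcomes' probabilities, so P(whole space) = 1 and
# P(empty set) = 0 fall straight out of the axioms.
outcomes = {"heads": Fraction(1, 2), "tails": Fraction(1, 2)}

def prob(event):
    return sum(outcomes[o] for o in event)

print(prob(set()))               # 0
print(prob({"heads", "tails"}))  # 1
print(prob({"heads"}))           # 1/2
```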
