Comment author: KatjaGrace 07 October 2014 02:49:06AM 2 points [-]

If parents had strong embryo selection available to them, how would the world be different, other than via increased intelligence?

Comment author: CarlShulman 07 October 2014 11:29:21PM 5 points [-]

A lot of negative-sum selection for height perhaps. The genetic architecture is already known well enough for major embryo selection, and the rest is coming quickly.

Height's contribution to CEO status is perhaps half of IQ's, and in addition to substantial effects on income it is also very helpful in the marriage market for men.

But many of the benefits are likely positional, reflecting the social status gains of being taller than others in one's social environment, and there are physiological costs (as well as use of selective power that could be used on health, cognition, and other less positional goods).

Choices at actual sperm banks suggest parents would use a mix that placed serious non-exclusive weight on each of height, attractiveness, health, education/intelligence, and anything contributing to professional success. Selection on personality might be for traits that improve individual success or for compatibility with parents, but I'm not sure about the net effect.

Selection for similarity on political and religious orientation might come into use, and could have disturbing and important consequences.

Comment author: paulfchristiano 07 October 2014 03:18:06PM *  2 points [-]

Looks good to me, with the same set of caveats as the original claim. Though note that both arguments are bolstered if "improvement of people" or "design of machines" in the second sentence is replaced by a more exhaustive inventory. Would be good to think more about the differences.

Comment author: CarlShulman 07 October 2014 11:20:14PM 2 points [-]

This application highlights a problem in that definition, namely gains of specialization. Say you produced humans with superhuman general intelligence as measured by IQ tests, maybe the equivalent of 3 SD above von Neumann. Such a human still could not be an expert in each and every field of intellectual activity simultaneously due to time and storage constraints.

The superhuman could perhaps master any given field better than any human given some time for study and practice, but could not so master all of them without really ridiculously superhuman prowess. This overkill requirement is somewhat like the way a rigorous Turing Test requires not only humanlike reasoning, but tremendous ability to tell a coherent fake story about biographical details, etc.

Comment author: [deleted] 07 October 2014 04:19:39AM *  10 points [-]

As of July 30, GiveWell considers the International Council for the Control of Iodine Deficiency Disorders Global Network (ICCIDD) a contender for their 2014 recommendation, according to their ongoing review. They also mention that they're considering the Global Alliance for Improved Nutrition (GAIN), which they've had their eye on for a few years. They describe some remaining uncertainties -- this has been a major philanthropic success for the past couple decades, so why is there a funding gap now, well before the work is finished? Is it some sort of donor fatigue, or are the remaining countries that need iodization harder to work in, or is it something else?

(Also, average gains from intervention seem to be more like 3-4 IQ points.)

Comment author: CarlShulman 07 October 2014 11:10:12PM 5 points [-]

Part of their reason for funding deworming is also improvements in cognitive skills, for which the evidence base just got some boost.

Comment author: Baughn 08 September 2014 08:26:21PM 0 points [-]

True, but somewhat beside the point; it's the asymptotic speedup that's interesting.

...you know, assuming the thing actually does what they claim it does. sigh

Comment author: CarlShulman 09 September 2014 03:15:55AM 0 points [-]

Also no asymptotic speedup.

Comment author: Baughn 24 January 2013 12:20:50AM *  0 points [-]

The G+ post explains what it's good for pretty well, doesn't it?

It's not a dramatic improvement (yet), but it's a larger potential speedup than anything else I've seen on the protein-folding problem lately.

Comment author: CarlShulman 08 September 2014 04:22:04AM 0 points [-]

You can duplicate that D-Wave machine on a laptop.

Comment author: jsteinhardt 13 July 2014 11:29:38PM *  7 points [-]

Rather, the problem is that at least one celebrated authority in the field hates that, and would prefer much, much more deference to authority.

I don't think this is true at all. His points against replicability are very valid and match my experience as a researcher. In particular:

Because experiments can be undermined by a vast number of practical mistakes, the likeliest explanation for any failed replication will always be that the replicator bungled something along the way.

This is a very real issue and I think that if we want to solve the current issues with science we need to be honest about this, rather than close our eyes and repeat the mantra that replication will solve everything. And it's not like he's arguing against accountability. Even in your quoted passage he says:

The field of social psychology can be improved, but not by the publication of negative findings. Experimenters should be encouraged to restrict their “degrees of freedom,” for example, by specifying designs in advance.

Now, I think he goes too far by saying that no negative findings should be published; but I think they need to be held to a high standard for the very reason he gives. On the other hand, positive findings should also be held to a higher standard.

Note that there are people much wiser than me (such as Andrew Gelman) who disagree with me; Gelman is dissatisfied with the current presumption that published research is correct. I certainly agree with this, but for the same reasons that Mitchell gives, I don't think that merely publishing negative results can fix this issue.

Either way, I think you are being quite uncharitable to Mitchell.

Comment author: CarlShulman 14 July 2014 06:21:51PM 4 points [-]

Because experiments can be undermined by a vast number of practical mistakes, the likeliest explanation for any failed replication will always be that the replicator bungled something along the way.

Do you agree with the empirical claim about the frequencies of false positives in initial studies versus false negatives in replications?

Comment author: James_Miller 08 May 2014 01:17:50AM *  1 point [-]

String theorist Luboš Motl strongly disagrees with the analysis, writing "Sean Carroll has no clue about physics and is helping to bury the good name of 2 graduate students".

Comment author: CarlShulman 08 May 2014 05:02:33PM 11 points [-]

Scott Aaronson on Motl's reliability, or lack thereof, with details of a specific case.

Comment author: RichardKennaway 14 March 2014 10:21:22PM 5 points [-]

Mere money doesn't solve their problem: they can offer tons of money towards random candidates, but not to the ones which are visibly/reliably talented (which are a small subset of the talented).

A way around that might be to make it known that big salaries are available, but not up front, only by proven merit after being given a job. Does this already happen?

Comment author: CarlShulman 15 March 2014 11:29:38PM 8 points [-]

This actually seems very common in office jobs where you find many workers with million-dollar salaries. Wall Street firms, strategy consultancies, and law firms all use models in which salaries expand massively with time, with high attrition along the way: the "up-or-out" model.

Even academia gives tenured positions (which have enormous value to workers) only after trial periods as postdocs and assistant professors.

Main Street corporate executives have to climb the ranks.

Comment author: CarlShulman 14 March 2014 07:23:04AM *  7 points [-]

Moral pluralism or uncertainty might give a reason to construct a charity portfolio which serves multiple values, as might emerge from something like the parliamentary model.

Comment author: Daniel_Burfoot 01 April 2013 01:12:15PM 41 points [-]

Robin used a Dirty Math Trick that works on us because we're not used to dealing with large numbers. He used a large time scale of 12000 years, and assumed exponential growth in wealth at a reasonable rate over that time period. But then, to depreciate the value of the wealth by the chance that the intended recipients might not actually receive it, he used a relatively small linear factor of 1/1000, which seems like it was pulled out of a hat.

It would make more sense to assume that there is some probability every year that the accumulated wealth will be wiped out by civil war, communist takeover, nuclear holocaust, etc etc. Even if this yearly probability were small, applied over a long period of time, it would still counteract the exponential blowup in the value of the wealth. The resulting conclusion would be totally dependent on the probability of calamity: if you use a 0.01% chance of total loss, then you have about a 30% chance of coming out with the big sum mentioned in the article. But if you use a 1% chance, then your likelihood of making it to 12000 years with the money intact is 4e-53.
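The arithmetic behind those two figures is just compounding a small yearly survival probability over 12,000 years. A minimal sketch (the function name is mine, not from the comment):

```python
def survival_probability(annual_loss_prob: float, years: int) -> float:
    """Probability the wealth survives every year without being wiped out,
    assuming independent, identical yearly risk of total loss."""
    return (1.0 - annual_loss_prob) ** years

# 0.01% yearly chance of total loss over 12,000 years: about 30%
print(survival_probability(1e-4, 12000))

# 1% yearly chance of total loss over 12,000 years: about 4e-53
print(survival_probability(1e-2, 12000))
```

This reproduces both numbers in the comment: a tiny change in the assumed yearly calamity rate swings the conclusion by dozens of orders of magnitude, which is exactly why the result is "totally dependent on the probability of calamity."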

Comment author: CarlShulman 02 March 2014 05:45:51AM 1 point [-]

As I said in response to Gwern's comment, there is uncertainty over rates of expropriation/loss, and the expected value disproportionately comes from the possibility of low loss rates. That is why Robin talks about 1/1000: he's raising the possibility that the legal order will be such as to sustain great growth, and that the laws of physics will allow unreasonably large populations or wealth.

Now, it is still a pretty questionable comparison, because there are plenty of other possibilities for mega-influence, like changing the probability that such compounding can take place (and isn't pre-empted by expropriation, nuclear war, etc).
