All of LGS's Comments + Replies

LGS134

You should show your calculation or your code, including all the data and parameter choices. Otherwise I can't evaluate this.

I assume you're picking parameters to exaggerate the effects, because just from the errors you've already conceded (0.9/0.6 shouldn't be squared, and the attenuation to get direct effects should be 0.824), you've overstated the editing results by a factor of sqrt(0.9/0.6)/0.824, which is around a 50% overestimate.
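Spelling that factor out (just restating the numbers in the comment):

```latex
\sqrt{0.9/0.6}\,/\,0.824 \;=\; \sqrt{1.5}/0.824 \;\approx\; 1.225/0.824 \;\approx\; 1.49
```

i.e. roughly a 50% overestimate of the per-edit effect.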

I don't think that was deliberate on your part, but I think wishful thinking and the desire to paint a comp...

5kman
The code is pretty complicated and not something I'd expect a non-expert (even a very smart one) to be able to quickly check over; it's not just a 100 line python script. (Or even a very smart expert for that matter, more like anyone who wasn't already familiar with our particular codebase.) We'll likely open source it at some point in the future, possibly soon, but that's not decided yet.

Our finemapping (inferring causal effects) procedure produces ~identical results to the software from the paper I linked above when run on the same test data (though we handle some additional things like variable per-SNP sample sizes and missing SNPs which that finemapper doesn't handle, which is why we didn't just use it).

The parameter choices which determine the prior over SNP effects are the number of causal SNPs (which we set to 20,000) and the SNP heritability of the phenotype (which we set to 0.19, as per the GWAS we used). The erroneous effect size adjustment was done at the end to convert from the effect sizes of the GWAS phenotype (low reliability IQ test) to the effect sizes corresponding to the phenotype we care about (high reliability IQ test).

We want to publish a more detailed write up of our methods soon(ish), but it's going to be a fair bit of work so don't expect it overnight.

Yep, fair enough. I've noticed myself doing this sometimes and I want to cut it out. That said, I don't think small-ish predictable overestimates to the effect sizes are going to change the qualitative picture, since with good enough data and a few hundred to a thousand edits we can boost predicted IQ by >6 SD even with much more pessimistic assumptions, which probably isn't even safe to do (I'm not sure I expect additivity to hold that far). I'm much more worried about basic problems with our modelling assumptions, e.g. the assumption of sparse causal SNPs with additive effects and no interactions (e.g. what if rare haplotypes are deleterious due to interactions that don't show up in GWAS...
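A minimal sketch of how those two parameter choices pin down the prior over per-SNP effects, assuming a simple spike-and-slab on standardized effects (my toy version, not necessarily the exact prior in their codebase):

```python
# Toy prior over SNP effects implied by (number of causal SNPs, SNP heritability),
# assuming standardized genotypes/phenotype and equal expected contributions.
import numpy as np

M = 1_000_000        # SNPs in the GWAS (placeholder)
N_CAUSAL = 20_000    # assumed number of causal SNPs
H2 = 0.19            # assumed SNP heritability of the GWAS phenotype

p_causal = N_CAUSAL / M        # prior P(a given SNP is causal)
var_effect = H2 / N_CAUSAL     # prior variance of a causal SNP's effect

rng = np.random.default_rng(0)
is_causal = rng.random(M) < p_causal
beta = np.where(is_causal, rng.normal(0, np.sqrt(var_effect), M), 0.0)
print(round((beta**2).sum(), 3))   # ~0.19: total variance explained matches h^2
```

With 20,000 causal SNPs and h^2 = 0.19, the implied per-variant effects are tiny (on the order of 0.003 SD per allele), which is why the inference step needs so much data.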
GeneSmith112

There is one saving grace for us which is that the predictor we used is significantly less powerful than ones we know to exist.

I think when you account for the squaring issue, the indirect-effect issue, and the more powerful predictors, they're going to roughly cancel out.

Granted, the more powerful predictor itself isn't published, so we can't rigorously evaluate it either, which isn't ideal. I think the way to deal with this is to show a few lines: one for the "current publicly available GWAS", one showing a rough estimate of the gain using the priva...

LGS50

I don't understand. Can you explain how you're inferring the SNP effect sizes?

3kman
With a method similar to this. You can easily compute the exact likelihood function P(GWAS results | SNP effects), which, when combined with a prior over SNP effects (informed by what we know about the genetic architecture of the trait), gives you a posterior probability of each SNP being causal (having nonzero effect), and its expected effect size conditional on being causal (you can't actually calculate the full posterior since there are 2^|SNPs| possible combinations of SNPs with nonzero effects, so you need to do some sort of MCMC or stochastic search). We may make a post going into more detail on our methods at some point.
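To make the 2^|SNPs| point concrete, here is a toy version where the posterior can be enumerated exactly because there are only a handful of SNPs; the model, data and parameters below are illustrative assumptions of mine, not kman's actual pipeline:

```python
# Toy Bayesian finemapping by exhaustive enumeration over causal configurations.
# Feasible only because k is tiny (2^k subsets); real finemappers must sample.
import itertools
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)

n, k = 200, 8        # individuals, SNPs
p_causal = 2 / k     # prior probability that a SNP is causal
tau2 = 0.05          # prior variance of a causal effect
sigma2 = 1.0         # residual variance (taken as known here)

# Simulated genotypes (standardized) and a phenotype with two causal SNPs.
X = rng.standard_normal((n, k))
true_beta = np.zeros(k)
true_beta[[1, 5]] = [0.3, -0.25]
y = X @ true_beta + rng.normal(0.0, np.sqrt(sigma2), n)

def log_marginal(subset):
    """log P(y | this causal set), integrating out the effect sizes."""
    cov = sigma2 * np.eye(n)
    if subset:
        Xs = X[:, list(subset)]
        cov += tau2 * Xs @ Xs.T
    return multivariate_normal.logpdf(y, mean=np.zeros(n), cov=cov)

# Unnormalized log posterior for every one of the 2^k causal configurations.
log_post = {}
for r in range(k + 1):
    for subset in itertools.combinations(range(k), r):
        log_prior = r * np.log(p_causal) + (k - r) * np.log(1 - p_causal)
        log_post[subset] = log_marginal(subset) + log_prior

logs = np.array(list(log_post.values()))
probs = np.exp(logs - logs.max())
probs /= probs.sum()

# Per-SNP posterior inclusion probability (probability of being causal).
pip = np.zeros(k)
for subset, p in zip(log_post, probs):
    for j in subset:
        pip[j] += p
print("posterior inclusion probabilities:", np.round(pip, 3))
```

With tens of thousands of potentially causal SNPs this enumeration is hopeless, which is exactly why the real procedure has to fall back on MCMC or stochastic search over configurations.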
LGS60

I'm talking about this graph:

What are the calculations used for this graph? The text says to see the appendix, but the appendix does not actually explain how you got this graph.

2kman
This is based on inferring causal effects conditional on this GWAS. The assumed heritability affects the prior over SNP effect sizes.
LGS60

You're mixing up h^2 estimates with predictor R^2 performance. It's possible to get an estimate of h^2 with much less statistical power than it takes to build a predictor that good.

 

Thanks. I understand now. But isn't the R^2 the relevant measure? You don't know which genes to edit to get the h^2 number (nor do you know what to select on). You're doing the calculation 0.2*(0.9/0.6)^2 when the relevant calculation is something like 0.05*(0.9/0.6). That's off by a factor of 6 for the power of selection, or sqrt(6) ≈ 2.45 for the power of editing.
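Spelling out that arithmetic:

```latex
0.2 \times (0.9/0.6)^2 = 0.2 \times 2.25 = 0.45,
\qquad 0.05 \times (0.9/0.6) = 0.075,
\qquad 0.45/0.075 = 6,
\qquad \sqrt{6} \approx 2.45
```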

3kman
Not for this purpose! The simulation pipeline is as follows: the assumed h^2 and number of causal variants is used to generate the genetic effects -> generate simulated GWASes for a range of sample sizes -> infer causal effects from the observed GWASes -> select top expected effect variants for up to N (expected) edits.
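A stripped-down sketch of that pipeline, under strong simplifying assumptions of mine (independent SNPs, no LD, a per-SNP spike-and-slab posterior instead of a real finemapper); every number below is a placeholder:

```python
# Toy version of: assumed (h2, n_causal) -> simulated GWAS -> inferred effects
# -> pick top-N SNPs to "edit". Independent SNPs, no LD, toy scale.
import numpy as np

rng = np.random.default_rng(1)

m, n_causal = 50_000, 2_000      # total SNPs, causal SNPs (toy scale)
h2 = 0.19                        # assumed SNP heritability
n_edits = 500

# 1. Generate genetic effects (standardized genotypes and phenotype).
beta = np.zeros(m)
causal = rng.choice(m, n_causal, replace=False)
beta[causal] = rng.normal(0.0, np.sqrt(h2 / n_causal), n_causal)

for n_gwas in [100_000, 1_000_000, 10_000_000]:
    # 2. Simulate GWAS summary statistics: beta_hat ~ N(beta, 1/n_gwas).
    se2 = 1.0 / n_gwas
    beta_hat = beta + rng.normal(0.0, np.sqrt(se2), m)

    # 3. Infer posterior expected effects under the spike-and-slab prior.
    p1 = n_causal / m                  # prior P(causal)
    s2 = h2 / n_causal                 # effect variance if causal
    log_bf = 0.5 * np.log(se2 / (s2 + se2)) + 0.5 * beta_hat**2 * (1/se2 - 1/(s2 + se2))
    post_p = 1.0 / (1.0 + (1 - p1) / p1 * np.exp(-log_bf))
    e_beta = post_p * (s2 / (s2 + se2)) * beta_hat   # posterior mean effect

    # 4. "Edit" the n_edits SNPs with the largest expected effects.
    top = np.argsort(-np.abs(e_beta))[:n_edits]
    predicted = np.abs(e_beta[top]).sum()
    realized = (beta[top] * np.sign(e_beta[top])).sum()   # toy units, not IQ points
    print(f"GWAS n={n_gwas:>10,}  predicted gain {predicted:6.3f}  realized {realized:6.3f}")
```

Predicted and realized gains should roughly agree when the prior is correctly specified; the interesting dependence the real pipeline is after is how both scale with GWAS sample size and the number of edits.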
LGS40

The paper you called the largest-ever GWAS gave a direct h^2 estimate of 0.05 for cognitive performance. How are these papers getting 0.2? I don't understand what they're doing. Some type of meta-analysis?

The test-retest reliability you linked has different reliabilities for different subtests. The correct adjustment depends on which subtests are being used. If cognitive performance is some kind of sumscore of the subtests, its reliability would be higher than for the individual subtests.
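For reference, the sum-score point follows from the Spearman-Brown formula (assuming roughly parallel subtests): a composite of k subtests each with reliability r has reliability

```latex
r_{kk} = \frac{k\,r}{1 + (k-1)\,r}
```

e.g. four subtests with r = 0.6 give a composite reliability of about 0.86.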

Also, I don't think the calculation 0.2*(0.9/0.6)^2 is the correct adjust...

5kman
You're mixing up h^2 estimates with predictor R^2 performance. It's possible to get an estimate of h^2 with much less statistical power than it takes to build a predictor that good.

"Fluid IQ" was the only subtest used.

Good catch, we'll fix this when we revise the post.
LGS50

Thanks! I understand their numbers a bit better, then. Still, direct effects of cognitive performance explain 5% of variance. Can't multiply the variance explained of EA by the attenuation of cognitive performance! 

 

Do you have evidence for direct effects of either one of them being higher than 5% of variance?

 

I don't quite understand your numbers in the OP but it feels like you're inflating them substantially. Is the full calculation somewhere?

4kman
Not quite sure which numbers you're referring to, but if it's the assumed SNP heritability, see the below quote of mine from another comment talking about missing heritability for IQ:

The h^2 = 0.19 estimate from this GWAS should be fairly robust to stratification, because of how the LDSC estimator works. (To back this up: a recent study that actually ran a small GWAS on siblings, based on the same cognitive test, also found h^2 = 0.19 for direct effects.)
LGS70

You should decide whether you're using a GWAS on cognitive performance or on educational attainment (EA). This paper you linked is using a GWAS for EA, and finding that very little of the predictive power was direct effects. Exactly the opposite of your claim:

For predicting EA, the ratio of direct to population effect estimates is 0.556 (s.e. = 0.020), implying that 100% × 0.556² = 30.9% of the PGI’s R² is due to its direct effect.

Then they compare this to cognitive performance. For cognitive performance, the ratio was better, but it's not 0.824, it'...

9kman
That's variance explained. I was talking about effect size attenuation, which is what we care about for editing. Supplementary table 10 is looking at direct and indirect effects of the EA PGI on other phenotypes. The results for the Cog Perf PGI are in supplementary table 13.
LGS11-3

Your OP is completely misleading if you're using plain GWAS!

GWAS is an association -- that's what the A stands for. Association is not causation. Anything that correlates with IQ (e.g. melanin) can show up in a GWAS for IQ. You're gonna end up editing embryos to have lower melanin and claiming their IQ is 150.

9kman
The IQ GWAS we used was based on only individuals of European ancestry, and ancestry principal components were included as covariates, as is typical for GWAS. Non-causal associations from subtler stratification are still a potential concern, but I don't believe it's a terribly large concern. The largest educational attainment GWAS did a comparison of population and direct effects for a "cognitive performance" PGI and found that predicted direct (between-sibling) effects were only attenuated by a factor of 0.824 compared to predicted population-level effects. If anything, I'd expect their PGI to be worse in this regard, since it included variants with less stringent statistical power cutoffs (so I'd guess it's more likely that non-causal associations would sneak in, compared to the GWAS we used).
LGS190

Are your IQ gain estimates based on plain GWAS or on family-fixed-effects GWAS? You don't clarify. The latter would give much lower estimates than the former.

7kman
Plain GWAS, since there aren't any large sibling GWASes. What's the basis for the estimates being much lower and how would we properly adjust for them?
LGS80

And these changes in chickens are mostly NOT the result of new mutations, but rather the result of getting all the big chicken genes into a single chicken.

 

Is there a citation for this? Or is that just a guess?

LGS10

Calculating these probabilities is fairly straightforward if you know some theory of generating functions. Here's how it works.

Let x be a variable representing the probability of a single 6, and let y represent the probability of "even but not 6". A single string consisting of even numbers can be written like, say, yyx, and this expression (which simplifies to y^2 x) is the same as the probability of the string. Now let's find the generating function for all strings you can get in (A). These strings are generated by the follo...
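A sketch of where that calculation goes, assuming the underlying question is the familiar "roll a die until a 6; condition on every roll being even" puzzle (the variable names here are mine):

```python
# Generating-function calculation for the conditioned dice puzzle (sketch).
import sympy as sp

t = sp.symbols('t')
x = sp.Rational(1, 6)          # probability of rolling a 6
y = sp.Rational(1, 3)          # probability of "even but not 6" (a 2 or a 4)

# Strings in (A): some number of "even but not 6" rolls, then a 6.
# Marking each roll with t gives the closed form of sum_{n>=0} (t*y)**n * (t*x):
H = t * x / (1 - t * y)

p_A = H.subs(t, 1)                        # P(all rolls are even)      -> 1/4
e_len = sp.diff(H, t).subs(t, 1) / p_A    # E[# rolls | all even]      -> 3/2
print(p_A, e_len)
```

Under that reading, the conditional expectation comes out to 3/2 rolls.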

LGS70

There's still my original question of where the feedback comes from. You say keep the transcripts where the final answer is correct, but how do you know the final answer? And how do you come up with the question? 

 

What seems to be going on is that these models are actually quite supervised, despite everyone's insistence on calling them unsupervised RL. The questions and answers appear to be high-quality human annotation instead of being machine generated. Let me know if I'm wrong about this. 

 

If I'm right, it has implications for scalin...

LGS30

I have no opinion about whether formalizing proofs will be a hard problem in 2025, but I think you're underestimating the difficulty of the task ("math proofs are math proofs" is very much a false statement for today's LLMs, for example).

In any event, my issue is that formalizing proofs is very clearly not involved in the o1/o3 pipeline, since those models make so many formally incorrect arguments. The people behind FrontierMath have said that o3 solved many of the problems using heuristic algorithms with wrong reasoning behind them; that's not something a...

gwern277

Right now, it seems to be important to not restrict the transcripts at all. This is a hard exploration problem, where most of the answers are useless, and it takes a lot of time for correct answers to finally emerge. Given that, you need to keep the criteria as relaxed as possible, as they are already on the verge of impossibility.

The r1, the other guys, and OAers too on Twitter now seem to emphasize that the obvious appealing approach of rewarding tokens for predicted correctness or doing search on tokens, just doesn't work (right now). You need to 'let t...

LGS30

Well, the final answer is easy to evaluate. And like in rStar-Math, you can have a reward model that checks if each step is likely to be critical to a correct answer, then it assigns an implied value to the step.

 

Why is the final answer easy to evaluate? Let's say we generate the problem "number of distinct solutions to x^3+y^3+xyz=0 modulo 17^17" or something. How do you know what the right answer is?

I agree that you can do this in a supervised way (a human puts in the right answer). Is that what you mean?

What about if the task is "prove that every i...

3wassname
I'm not 100% sure, but you could have a look at Math-Shepherd for an example. I haven't read the whole thing yet. I imagine it works back from a known solution.
2wassname
Check out the linked rStar-Math paper; it explains and demonstrates it better than I can (caveat: they initially distil from a much larger model, which I see as a little bit of a cheat). tl;dr: yes, a model, and a tree of possible solutions. Given a tree with values on the leaves, they can look at which nodes seem to have causal power. A separate approach is to teach a model to supervise using human process supervision data, then ask it to be the judge. This paper also cheats a little by distilling, but I think the method makes sense.
LGS92

Do you have a sense of where the feedback comes from? For chess or Go, at the end of the day, a game is won or lost. I don't see how to do this elsewhere except for limited domains like simple programming which can quickly be run to test, or formal math proofs, or essentially tasks in NP (by which I mean that a correct solution can be efficiently verified).

 

For other tasks, like summarizing a book or even giving an English-language math proof, it is not clear how to detect correctness, and hence not clear how to ensure that a model like o5 doesn't giv...

3Petropolitan
Math proofs are math proofs, whether they are in plain English or in Lean. Contemporary LLMs are very good at translation, not just between high-resource human languages but also between programming languages (transpiling), from code to human (documentation) and even from algorithms in scientific papers to code. Thus I wouldn't expect formalizing math proofs to be a hard problem in 2025. However, I generally agree with your line of thinking. As wassname wrote above (it's been quite obvious for some time, but they link to a quantitative analysis), good in-silico verifiers are indeed crucial for inference-time scaling. But for most real-life tasks there are either no decent, objective verifiers in principle (e.g., nobody knows the right answers to counterfactual economics or history questions) or there are very severe trade-offs in verifier accuracy and time/cost (think of wet lab life sciences: what's the point of getting hundreds of AI predictions a day for cheap if one needs many months and much more money to verify them?)
5wassname
Well, the final answer is easy to evaluate. And like in rStar-Math, you can have a reward model that checks if each step is likely to be critical to a correct answer, then it assigns an implied value to the step. I think tasks outside math and code might be hard. But summarizing a book is actually easy. You just ask "how easy is it to reconstruct the book if given the summary". So it's an unsupervised compression-decompression task. Another interesting domain is "building a simulator". This is an expensive thing to generate solutions for, but it's easy to verify that it predicts the thing you are simulating. I can see this being an expensive but valuable domain for this paradigm. This would include fusion reactors, and robotics (which OAI is once again hiring for!). I don't see them doing this explicitly yet, but setting up an independent, and even adversarial, reward model would help, or at least I expect it would.
LGS10

The value extractable is rent on both the land and the improvement. LVT taxes only the former. E.g. if land can earn $10k/month after an improvement of $1mm, and if interest is 4.5%, and if that improvement is optimal, a 100% LVT is not $10k/mo but $10k/mo minus $1mm*0.045/12=$3,750. So 100% LVT would be merely $6,250.

If your improvement can't extract $6.3k from the land, preventing you from investing in that improvement is a feature, not a bug.
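In general terms, with the same numbers as above (monthly rent R, improvement cost C, annual interest rate i), the claim is that a 100% LVT collects

```latex
\text{LVT} = R - \frac{iC}{12}
           = \$10{,}000 - \frac{0.045 \times \$1{,}000{,}000}{12}
           = \$10{,}000 - \$3{,}750 = \$6{,}250 \text{ per month}
```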

2Dagon
OK.  I hate that feature.  Especially since it doesn't prevent imperfect investments, it only punishes the ones that turn out suboptimal, often many years later.  
LGS10

If you fail to pay the LVT you can presumably sell the improvements. I don't think there's an inefficiency here -- you shouldn't invest in improving land if you're not going to extract enough value from it to pay the LVT, and this is a feature, not a bug (that investment would be inefficient).

2Dagon
You can't sell the improvements if they're tied to land that is taxed higher than the improvements bring in (due to mistakes in improvement or changed environment that has increased the land value and the improvements haven't stayed optimal). The land is taxed at its full theoretical value, less than the improvements bring in, and the improvements are literally connected to it.
LGS10

LVT applies to all land, but not to the improvements on the land.

We do not care about disincentivizing an investment in land (by which I mean, just buying land). We do care about disincentivizing investments in improvements on the land (by which I include buying the improvement on the land, as well as building such improvements). A signal of LVT intent will not have negative consequences unless it is interpreted as a signal of broader confiscation.

2Dagon
Yeah, I understand the theory - I haven't seen an implementation plan that unbundles land and improvements in practice, so it ends up as "normal property tax, secured by land and improvements, just calculated based on land-rent-value". If you have a proposal where failing to pay the LVT doesn't result in loss of use of the improvements, let me know.
LGS44

More accurately, it applies to a signalling of intent of confiscating other investments; we don't actually care if people panic about land being confiscated because buying land (rather than improving it) isn't productive in any way. (We may also want to partially redistribute resources towards the losers of the land confiscation to compensate for the lost investment -- that is, we may want to the government to buy the land rather than confiscate it, though it would be bought at lower than market prices.)

It is weird to claim that the perceived consequence o...

4Dagon
Maybe I misunderstand.  I haven't seen the proposal that only applies to buying undeveloped land - all I've seen talks about the land value of highly-developed areas.  You can't currently buy (or build) a building without also buying the land under it.  As soon as the land becomes valueless (because the government is taking all the land's value), the prospect of buying/building/owning/running structures on that land gets infinitely less appealing.
LGS52

Thanks for this post. A few comments:

  1. The concern about new uses of land is real, but very limited compared to the inefficiencies of most other taxes. It is of course true that if the government essentially owns the land to rent it out, the government should pay for the exploration for untapped oil reserves! The government would hire the oil companies to explore. It is also true that the government would do so less efficiently than the private market. But this is small potatoes compared to the inefficiency of nearly every other tax.
  2. It is true that a develop...
6Dagon
Well, no.  It applies to sudden SIGNALING OF INTENT to a high LVT.  Any move in this direction, even if nominally gradual, will immediately devalue the ownership of land.  Nobody is going to believe in a long-term plan - near-future governments want the money now, and will accelerate it. In a lot of human public-choice affairs, the slippery slope is real, and everyone knows it.
LGS60

The NN thing inside stockfish is called the NNUE, and it is a small neural net used for evaluation (no policy head for choosing moves). The clever part of it is that it is "efficiently updatable" (i.e. if you've computed the evaluation of one position, and now you move a single piece, getting the updated evaluation for the new position is cheap). This feature allows it to be used quickly with CPUs; stockfish doesn't really use GPUs normally (I think this is because moving the data on/off the GPU is itself too slow! Stockfish wants to evaluate 10 million no...
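A minimal sketch of the "efficiently updatable" trick, using a toy piece-square feature encoding of my own (not Stockfish's actual HalfKP/HalfKA features or its integer quantization):

```python
# Toy illustration of an "efficiently updatable" first layer (NNUE-style).
import numpy as np

N_PIECE_TYPES, N_SQUARES, HIDDEN = 12, 64, 256
rng = np.random.default_rng(0)
W1 = rng.standard_normal((N_PIECE_TYPES * N_SQUARES, HIDDEN)) * 0.01

def feature(piece, square):
    # Index of the one-hot (piece, square) feature.
    return piece * N_SQUARES + square

def accumulate(active):
    # Full first-layer pass: sum the weight rows of all active features.
    return W1[[feature(p, s) for p, s in active]].sum(axis=0)

def update(acc, piece, from_sq, to_sq):
    # Incremental update after a quiet move: O(HIDDEN) work instead of
    # re-summing every piece on the board.
    return acc - W1[feature(piece, from_sq)] + W1[feature(piece, to_sq)]

# Example: compute the accumulator once, then update cheaply as a piece moves.
position = [(1, 6), (7, 52), (0, 4)]          # arbitrary (piece, square) pairs
acc = accumulate(position)
acc = update(acc, piece=1, from_sq=6, to_sq=21)
assert np.allclose(acc, accumulate([(1, 21), (7, 52), (0, 4)]))
```

Only the big first layer is maintained incrementally; the small layers above it are cheap enough to recompute at every node, which is what lets Stockfish keep its huge CPU node rates.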

3kave
My understanding when I last looked into it was that the efficient updating of the NNUE basically doesn't matter, and what really matters for its performance and CPU-runnability is its small size.
LGS52

So far as I know, it is not the case that OpenAI had a slower-but-equally-functional version of GPT4 many months before announcement/release. What they did have is GPT4 itself, months before; but they did not have a slower version. They didn't release a substantially distilled version. For example, the highest estimate I've seen is that they trained a 2-trillion-parameter model. And the lowest estimate I've seen is that they released a 200-billion-parameter model. If both are true, then they distilled 10x... but it's much more likely that only one is true,...

1IC Rainbow
I don't know the details, but whatever the NN thing (derived from Lc0, a clone of AlphaZero) inside current Stockfish is can play on a laptop GPU. And even if AlphaZero derivatives didn't gain 3OOMs by themselves it doesn't update me much that that's something particularly hard. Google itself has no interest at improving it further and just moved on to MuZero, to AlphaFold etc.
LGS80

I think AI obviously keeps getting better. But I don't think "it can be done for $1 million" is such strong evidence for "it can be done cheaply soon" in general (though the prior on "it can be done cheaply soon" was not particularly low ante -- it's a plausible statement for other reasons).

Like if your belief is "anything that can be done now can be done 1000x cheaper within 5 months", that's just clearly false for nearly every AI milestone in the last 10 years (we did not get gpt4 that's 1000x cheaper 5 months later, nor alphazero, etc).

I'll admit I'm not very certain in the following claims, but here's my rough model:

  • The AGI labs focus on downscaling the inference-time compute costs inasmuch as this makes their models useful for producing revenue streams or PR. They don't focus on it as much beyond that; it's a waste of their researchers' time. The amount of compute at OpenAI's internal disposal is well, well in excess of even o3's demands.
  • This means an AGI lab improves the computational efficiency of a given model up to the point at which they could sell it/at which it looks impressive, ...
LGS*33-5

It's hard to find numbers. Here's what I've been able to gather (please let me know if you find better numbers than these!). I'm mostly focusing on FrontierMath.

  1. Pixel counting on the ARC-AGI image, I'm getting $3,644 ± $10 per task.
  2. FrontierMath doesn't say how many questions they have (!!!). However, they have percent breakdowns by subfield, and those percents are given to the nearest 0.1%; using this, I can narrow the range down to 289-292 problems in the dataset (a brute-force version of this check is sketched below). Previous models solve around 3 problems (4 problems in total were ever solved by any previou...
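The counting trick in point 2 can be brute-forced; a sketch, with made-up subfield percentages rather than the actual FrontierMath breakdown:

```python
# Brute-force the possible dataset sizes from subfield shares rounded to 0.1%.
# These shares are made up for illustration, not the actual FrontierMath numbers.
import math

reported = [0.314, 0.255, 0.228, 0.203]

def consistent(n, shares, tol=0.0005):
    # n is feasible iff each subfield admits an integer count within +/- tol
    # of its reported share, and those counts can sum to exactly n.
    # (Ignores tie-breaking and floating-point edge cases; it's a sketch.)
    intervals = [(math.ceil((s - tol) * n), math.floor((s + tol) * n)) for s in shares]
    if any(lo > hi for lo, hi in intervals):
        return False
    return sum(lo for lo, _ in intervals) <= n <= sum(hi for _, hi in intervals)

print([n for n in range(100, 1000) if consistent(n, reported)])
```

Run with the actual published percentages, this kind of check is what yields the 289-292 range mentioned above.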

This is actually likely more expensive than hiring a domain-specific expert mathematician for each problem

I don't think anchoring to o3's current cost-efficiency is a reasonable thing to do. Now that AI has the capability to solve these problems in-principle, buying this capability is probably going to get orders of magnitude cheaper within the next five ~~minutes~~ months, as they find various algorithmic shortcuts.

I would guess that OpenAI did this using a non-optimized model because they expected it to be net beneficial: that producing a headline-grabbing r...

9yo-cuddles
Small nudge: the questions have difficulty tiers of 25% easy, 50% medium, and 25% hard, with easy being undergrad/IMO difficulty and hard being the sort you would give to a researcher in training. The 25% accuracy gives me STRONG indications that it just got the easy ones, and the starkness of this cutoff makes me think there is something categorically different about the easy ones that makes them MUCH easier to solve, either being more close-ended, easy to verify, or just leaked into the dataset in some form.
LGS10

Sure. I'm not familiar with how Claude is trained specifically, but it clearly has a mechanism to reward wanted outputs and punish unwanted outputs, with wanted vs unwanted being specified by a human (such a mechanism is used to get it to refuse jailbreaks, for example).

I view the shoggoth's goal as minimizing some weird mixture of "what's the reasonable next token here, according to pretraining data" and "what will be rewarded in post-training".

3Ann
For context: https://www.anthropic.com/research/claude-character  The desired traits are crafted by humans, but the wanted vs unwanted is specified by original-Claude based on how well generated responses align with traits. (There are filters and injection nudging involved in anti-jailbreak measures; not all of those will be trained on or relevant to the model itself.)
LGS7-1

I want to defend the role-playing position, which I think you're not framing correctly.

There are two characters here: the shoggoth, and the "HHH AI assistant". The shoggoth doesn't really have goals and can't really scheme; it is essentially an alien which has been subject to selective breeding where in each generation, only the descendant which minimizes training loss survives. The shoggoth therefore exists to minimize training loss: to perfectly predict the next token, or to perfectly minimize "non-HHH loss" as judged by some RLHF model. The shoggoth alw...

2Ann
While directionally reasonable, I think there might be some conflation of terms involved? Claude to my knowledge is trained with RLAIF, which is a step removed from RLHF, and not necessarily directly on human preferences. Pretraining alone (without annealing) will potentially result in the behavior you suggest from a base model put into the context of generating text for an AI assistant, even without human feedback.
LGS00

The problem with this argument is that the oracle sucks.

The humans believe they have access to an oracle that correctly predicts what happens in the real world. However, they have access to a defective oracle which only performs well in simulated worlds, but performs terribly in the "real" universe (more generally, universes in which humans are real). This is a pretty big problem with the oracle!

Yes, I agree that an oracle which is incentivized to make correct predictions within its own vantage point (including possible simulated worlds, not restricted to ...

LGS30

Given o1, I want to remark that the prediction in (2) was right. Instead of training LLMs to give short answers, an LLM is trained to give long answers and another LLM summarizes.

LGS30

That's fair, yeah

We need a proper mathematical model to study this further. I expect it to be difficult to set up because the situation is so unrealistic/impossible as to be hard to model. But if you do have a model in mind, I'll take a look.

LGS30

It would help to have a more formal model, but as far as I can tell the oracle can only narrow down its predictions of the future to the extent that those predictions are independent of the oracle's output. That is to say, if the people in the universe ignore what the oracle says, then the oracle can give an informative prediction.

This would seem to exactly rule out any type of signal which depends on the oracle's output, which is precisely the type of signal that nostalgebraist was concerned about.

4Jeremy Gillen
That can't be right in general. Normal Nash equilibria can narrow down predictions of actions, e.g. in a competition game. This is despite each player's decision being dependent on the other player's action.
LGS30

The problem is that the act of leaving the message depends on the output of the oracle (otherwise you wouldn't need the oracle at all, but you also would not know how to leave a message). If the behavior of the machine depends on the oracle's actions, then we have to be careful with what the fixed point will be.

For example, if we try to fight the oracle and do the opposite, we get the "noise" situation from the grandfather paradox.

But if we try to cooperate with the oracle and do what it predicts, then there are many different fixed points and no telling w...

2Jeremy Gillen
If the only transmissible message is essentially uniformly random bits, then of what value is the oracle? I claim the message can contain lots of information. E.g. if there are 2^100 potential actions, but only 2 fixed points, then 99 bits have been transmitted (relative to uniform). The rock-paper-scissors example is relatively special, in that the oracle can't narrow down the space of actions at all.  The UP situation looks to me to be more like the first situation than the second.
LGS50

Thanks for the link to reflective oracles!

On the gap between the computable and uncomputable: It's not so bad to trifle a little. Diagonalization arguments can often be avoided with small changes to the setup, and a few of Paul's papers are about doing exactly this. 


I strongly disagree with this: diagonalization arguments often cannot be avoided at all, no matter how you change the setup. This is what vexed logicians in the early 20th century: no matter how you change your formal system, you won't be able to avoid Godel's incompleteness theorems.

Ther...

2Noosphere89
One caveat to this quote below is that Godel's first incompleteness theorem relies on the assumption of the formal system being recursively enumerable, and if we drop this requirement, then we can get a consistent and complete description of, say, first-order arithmetic. More here: https://en.wikipedia.org/wiki/Gödel's_incompleteness_theorems#Effective_axiomatization
2Jeremy Gillen
Fair enough, the probabilistic mixtures thing was what I was thinking of as a change of setup, but reasonable to not consider it such. I don't see how this is implied. If a fact is consistent across levels, and determined in a non-paradoxical way, can't this become a natural fixed point that can be "transmitted" across levels? And isn't this kind of knowledge all that is required for the malign prior argument to work?
LGS10

I think the problem to grapple with is that I can cover the rationals in [0,1] with countably many intervals of total length only 1/2 (e.g. enumerate the rationals in [0,1], and place an interval of length 1/4 around the first rational, an interval of length 1/8 around the second, etc.). This is not possible with reals -- that's the insight that makes measure theory work!
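The lengths in that construction sum to

```latex
\sum_{k=1}^{\infty} \frac{1}{2^{k+1}} = \frac{1}{4} + \frac{1}{8} + \frac{1}{16} + \dots = \frac{1}{2}
```

and starting the construction at length ε/2 instead of 1/4 makes the total length any ε > 0 you like.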

The covering means that the rationals in an interval cannot have a well defined length or measure which behaves reasonably under countable unions. This is a big barrier to doing probability theory. The same problem happens with ANY countable set -- the reals only avoid it by being uncountable.

LGS52

Evan Morikawa?

https://twitter.com/E0M/status/1790814866695143696

LGS132

Weirdly aggressive post.

I feel like maybe what's going on here is that you do not know what's in The Bell Curve, so you assume it is some maximally evil caricature? Whereas what's actually in the book is exactly Scott's position, the one you say is "his usual "learn to love scientific consensus" stance".

If you'd stop being weird about it for just a second, could you answer something for me? What is one (1) position that Murray holds about race/IQ and Scott doesn't? Just name a single one, I'll wait.

Or maybe what's going on here is that you have a strong "S...

-2DPiepgrass
My post is weirdly aggressive? I think you are weirdly aggressive against Scott. Since few people have read the book (including, I would wager, Cade Metz), the impact of associating Scott with Bell Curve doesn't depend directly on what's in the book, it depends on broad public perceptions of the book. Having said that, according to Shaun (here's that link again), the Bell Curve relies heavily of the work of Richard Lynn, who was funded by, and later became the head of, the Pioneer Fund, which the Southern Poverty Law Center classifies as a hate group. In contrast, as far as I know, the conclusions of the sources cited by Scott do not hinge upon Richard Lynn. And given this, it would surprise me if the conclusions of The Bell Curve actually did match the mainstream consensus. One of Scott's sources says 25-50% for "heritability" of the IQ gap. I'm pretty confident the Bell Curve doesn't say this, and I give P>50% that The Bell Curve suggests/states/implies that the IQ gap is over 50% "heritable" (most likely near 100%). Shaun also indicated that the Bell Curve equated heritability with explanatory power (e.g. that if heritability is X%, Murray's interpretation would be that genetics explains or causes X% of the IQ gap). Shaun persuasively refuted this. I did not come away with a good understanding of how to think about heritability, but I expect experts would understand the subtlety of this topic better than Charles Murray. And as Shaun says: For example, that welfare programs should be stopped, which I think Scott has never advocated and which he would, in spirit, oppose. It also seems relevant that Charles Murray seems to use bad logic in his policy reasoning, as (1) this might be another reason the book was so controversial and (2) we're on LessWrong where that sort of thing usually matters. Having said that, my prior argument that you've been unreasonable does not depend on any of this. A personal analogy: I used to write articles about climate science (ex1
LGS00

Relatedly, if you cannot outright make a claim because it is potentially libellous, you shouldn't use vague insinuation to imply it to your massive and largely-unfamiliar-with-the-topic audience.

 

Strong disagree. If I know an important true fact, I can let people know in a way that doesn't cause legal liability for me.

Can you grapple with the fact that the "vague insinuation" is true? Like, assuming it's true and that Cade knows it to be true, your stance is STILL that he is not allowed to say it?

Your position seems to amount to epistemic equivalent o...
2DPiepgrass
[citation needed] for those last four words. In the paragraph before the one frankybegs quoted, Scott said: Having never read The Bell Curve, it would be uncharacteristic of him to say "I disagree with Murray about [things in The Bell Curve]", don't you think?
7Jiro
The vague insinuation isn't "Scott agrees with Murray", the vague insinuation is "Scott agrees with Murray's deplorable beliefs, as shown by this reference". The reference shows no such thing. Arguing "well, Scott believes that anyway" is not an excuse for fake evidence.
LGS85

The epistemology was not bad behind the scenes, it was just not presented to the readers. That is unfortunate but it is hard to write a NYT article (there are limits on how many receipts you can put in an article and some of the sources may have been off the record).

Cade correctly informed the readers that Scott is aligned with Murray on race and IQ. This is true and informative, and at the time some people here doubted it before the one email leaked. Basically, Cade's presented evidence sucked but someone going with the heuristic "it's in the NYT so it mu...

9frankybegs
Clearly. But if you can't do it without resorting to deliberately misleading rhetorical sleights to imply something you believe to be true, the correct response is not to. Or, more realistically, if you can't substantiate a particular claim with any supporting facts, due to the limitations of the form, you shouldn't include it nor insinuate it indirectly, especially if it's hugely inflammatory. If you simply cannot fit in the "receipts" needed to substantiate a claim (which seems implausible anyway), as a journalist you should omit that claim. If there isn't space for the evidence, there isn't space for the accusation.
8wilkox
I'd have more trust in the writing of a journalist who presents what they believe to be the actual facts in support of a claim, than one who publishes vague insinuations because writing articles is hard. He really didn’t. Firstly, in the literal sense that Metz carefully avoided making this claim (he stated that Scott aligned himself with Murray, and that Murray holds views on race and IQ, but not that Scott aligns himself with Murray on these views). Secondly, and more importantly, even if I accept the implied claim I still don’t know what Scott supposedly believes about race and IQ. I don’t know what ‘is aligned with Murray on race and IQ’ actually means beyond connotatively ‘is racist’. If this paragraph of Metz’s article was intended to be informative (it was not), I am not informed.
LGS1-2

What you're suggesting amounts to saying that on some topics, it is not OK to mention important people's true views because other people find those views objectionable. And this holds even if the important people promote those views and try to convince others of them. I don't think this is reasonable.

As a side note, it's funny to me that you link to Against Murderism as an example of "careful subtlety". It's one of my least favorite articles by Scott, and while I don't generally think Scott is racist that one almost made me change my mind. It is just a ver...

0DPiepgrass
Huh? Who defines racism as cognitive bias? I've never seen that before, so expecting Scott in particular to define it as such seems like special pleading. What would your definition be, and why would it be better? Scott endorses this definition: Setting aside that it says "irrational feeling" instead of "cognitive bias", how does this "tr[y] to define racism out of existence"?
1cubefox
It's okay to mention an author's taboo views on a complex and sensitive topic, when they are discussed in a longer format which does justice to how they were originally presented. Just giving a necessarily offensive sounding short summary is only useful as a weaponization to damage the reputation of the author.
LGS50


What Metz did is not analogous to a straightforward accusation of cheating. Straightforward accusations are what I wish he did.

 

It was quite straightforward, actually. Don't be autistic about this: anyone reasonably informed who is reading the article knows what Scott is accused of thinking when Cade mentions Murray. He doesn't make the accusation super explicit, but (a) people here would be angrier if he did, not less angry, and (b) that might actually pose legal issues for the NYT (I'm not a lawyer).

What Cade did reflects badly on Cade in the sense ...

This is reaching Cade Metz levels of slippery justification.

He doesn't make the accusation super explicit, but (a) people here would be angrier if he did, not less angry

How is this relevant? As Elizabeth says, it would be more honest and epistemically helpful if he made an explicit accusation. People here might well be angry about that, but a) that's not relevant to what is right and b) that's because, as you admit, that accusation could not be substantiated. So how is it acceptable to indirectly insinuate that accusation instead? 

(Also c), I think yo...

LGS76

Scott thinks very highly of Murray and agrees with him on race/IQ. Pretty much any implication one could reasonably draw from Cade's article regarding Scott's views on Murray or on race/IQ/genes is simply factually true. Your hypothetical author in Alabama has Greta Thunberg posters in her bedroom here.

0DPiepgrass
Strong disagree based on the "evidence" you posted for this elsewhere in this thread. It consists one-half of some dude on Twitter asserting that "Scott is a racist eugenics supporter" and retweeting other people's inflammatory rewordings of Scott, and one-half of private email from Scott saying things like It seems gratuitous for you to argue the point with such biased commentary. And what Scott actually says sounds like his judgement of ... I'm not quite sure what, since HBD is left without a definition, but it sounds a lot like the evidence he mentioned years later from  * a paper by scientists Mark Snyderman and Stanley Rothman, who, I notice, wrote this book with "an analysis of the reporting on intelligence testing by the press and television in the US for the period 1969–1983, as well as an opinion poll of 207 journalists and 86 science editors about IQ testing", and * "Survey of expert opinion on intelligence: Intelligence research, experts' background, controversial issues, and the media"  (yes, I found the links I couldn't find earlier thanks to a quote by frankybegs from this post which―I was mistaken!―does mention Murray and The Bell Curve because he is responding to Cade Metz and other critics). This sounds like his usual "learn to love scientific consensus" stance, but it appears you refuse to acknowledge a difference between Scott privately deferring to expert opinion, on one hand, and having "Charles Murray posters on his bedroom wall". Almost the sum total of my knowledge of Murray's book comes from Shaun's rebuttal of it, which sounded quite reasonable to me. But Shaun argues that specific people are biased and incorrect, such as Richard Lynn and (duh) Charles Murray. Not only does Scott never cite these people, what he said about The Bell Curve was "I never read it". And why should he? Murray isn't even a geneticist! So it seems the secret evidence matches the public evidence, does not show that "Scott thinks very highly of Murray", doesn'
3frankybegs
This is very much not what he's actually said on the topic, which I've quoted in another reply to you. Could you please support that claim with evidence from Scott's writings? And then could you consider that by doing so, you have already done more thorough journalism on this question than Cade Metz did before publishing an incredibly inflammatory claim on it in perhaps the world's most influential newspaper?
LGS810

Wait a minute. Please think through this objection. You are saying that if the NYT encountered factually true criticisms of an important public figure, it would be immoral of them to mention this in an article about that figure?

Does it bother you that your prediction didn't actually happen? Scott is not dying in prison!

This objection is just ridiculous, sorry. Scott made it an active project to promote a worldview that he believes in and is important to him -- he specifically said he will mention race/IQ/genes in the context of Jews, because that's more pa...

4cubefox
No, not in general. But in the specific case at hand, yes. We know Metz did read quite a few of Scott's blog posts, and all necessary context and careful subtlety with which he (Scott) approaches this topic (e.g. in Against Murderism) is totally lost in an offhand remark in a NYT article. It's like someone in the 17th century writing about Spinoza, and mentioning, as a sidenote, "and oh by the way, he denies the existence of a personal God" and then moves on to something else. Shortening his position like this, where it must seem outrageous and immoral, is in effect defamatory. If some highly sensitive topic can't be addressed in a short article with the required carefulness, it should simply not be addressed at all. That's especially true for Scott, who wrote about countless other topics. There is no requirement to mention everything. (For Spinoza an argument could be made that his, at the time, outrageous position plays a fairly central role in his work, but that's not the case for Scott.) Luckily Scott didn't have to fear legal consequences. But substantial social consequences were very much on the table. We know of other people who lost their job or entire career prospects for similar reasons. Nick Bostrom probably dodged the bullet by a narrow margin.
LGS66

The evidence wasn't fake! It was just unconvincing. "Giving unconvincing evidence because the convincing evidence is confidential" is in fact a minor sin.

4Alex Vermillion
The evidence offered "Scott agrees with the The Bell Curve guy" is of the same type and strength as that needed to link him to Hitler, Jesus Christ, Eliezer Yudkowsky, Cade Metz, and so on. There was absolutely nothing special about the evidence that tied it to the people offered, and it could have been recast without loss of accuracy to fit any leaning. As we are familiar with, if you have an observation that proves anything, you do not have evidence.
LGS3013

I assume it was hard to substantiate.

Basically it's pretty hard to find Scott saying what he thinks about this matter, even though he definitely thinks this. Cade is cheating with the citations here but that's a minor sin given the underlying claim is true.

It's really weird to go HOW DARE YOU when someone says something you know is true about you, and I was always unnerved by this reaction from Scott's defenders. It reminds me of a guy I know who was cheating on his girlfriend, and she suspected this, and he got really mad at her. Like, "how can you believe I'm cheating on you based on such flimsy evidence? Don't you trust me?" But in fact he was cheating.

5frankybegs
So despite it being "hard to substantiate", or to "find Scott saying" it, you think it's so certainly true that a journalist is justified in essentially lying in order to convey it to his audience?
2Elizabeth
What Metz did is not analogous to a straightforward accusation of cheating. Straightforward accusations are what I wish he did. What he did is the equivalent of angrily complaining to mutual friends that your boyfriend liked an instagram post (of a sunset, but you leave that part out), by someone known to cheat (or who is maybe just polyamorous, and you don't consider there to be a distinction). If you made a straightforward accusation, your boyfriend could give a factual response. He's not well incentivized to do so, but it's possible. But if you're very angry he liked an innocuous instagram post, what the hell can he say?
3DPiepgrass
He definitely thinks what, exactly? Anyway, the situation is like: X is writing a summary about author Y who has written 100 books, but pretty much ignores all those books in favor of digging up some dirt on what Y thinks about a political topic Z that Y almost never discusses (and then instead of actually mentioning any of that dirt, X says Y "aligned himself" with a famously controversial author on Z.) It's not true though. Perhaps what he believes is similar to what Murray believes, but he did not "align himself" with Murray on race/IQ. Like, if an author in Alabama reads the scientific literature and quietly comes to a conclusion that humans cause global warming, it's wrong for the Alabama News to describe this as "author has a popular blog, and he has aligned himself with Al Gore and Greta Thunberg!" (which would tend to encourage Alabama folks to get out their pitchforks 😉) (Edit: to be clear, I've read SSC/ACX for years and the one and only time I saw Scott discuss race+IQ, he linked to two scientific papers, didn't mention Murray/Bell Curve, and I don't think it was the main focus of the post―which makes it hard to find it again.)

I don't think "Giving fake evidence for things you believe are true" is in any way a minor sin of evidence presentation

LGS3530

I think for the first objection about race and IQ I side with Cade. It is just true that Scott thinks what Cade said he thinks, even if that one link doesn't prove it. As Cade said, he had other reporting to back it up. Truth is a defense against slander, and I don't think anyone familiar with Scott's stance can honestly claim slander here.

This is a weird hill to die on because Cade's article was bad in other ways.

wilkox2338

It seems like you think what Metz wrote was acceptable because it all adds up to presenting the truth in the end, even if the way it was presented was 'unconvincing' and the evidence 'embarassing[ly]' weak. I don't buy the principle that 'bad epistemology is fine if the outcome is true knowledge', and I also don't buy that this happened in this particular case, nor that this is what Metz intended.

If Metz's goal was to inform his readers about Scott's position, he failed. He didn't give any facts other than that Scott 'aligned himself with' and quoted someb...

7cubefox
Imagine you are a philosopher in the 17th century, and someone accuses you of atheism, or says "He aligns himself with Baruch Spinoza". This could easily have massive consequences for you. You may face extensive social and legal punishment. You can't even honestly defend yourself, because the accusation of heresy is an asymmetric discourse situation. Is your accuser off the hook when you end up dying in prison? He can just say: Sucks for him, but it's not my fault, I just innocently reported his beliefs.
Elizabeth2930

Let's assume that's true: why bring Murray into it? Why not just say the thing you think he believes, and give whatever evidence you have for it? That could include the way he talks about Murray, but "Scott believes X, and there's evidence in how he talks about Y" is very different than "Scott is highly affiliated with Y"

LGS30

What position did Paul Christiano get at NIST? Is it a leadership position?

The problem with that is that it sounds like the common error of "let's promote our best engineer to a manager position", which doesn't work because the skills required to be an excellent engineer have little to do with the skills required to be a great manager. Christiano is the best of the best in technical work on AI safety; I am not convinced putting him in a management role is the best approach.

LGS97

Eh, I feel like this is a weird way of talking about the issue.

If I didn't understand something and, after a bunch of effort, I managed to finally get it, I will definitely try to summarize the key lesson to myself. If I prove a theorem or solve a contest math problem, I will definitely pause to think "OK, what was the key trick here, what's the essence of this, how can I simplify the proof".

Having said that, I would NOT describe this as asking "how could I have arrived at the same destination by a shorter route". I would just describe it as asking "what d...

6RobertM
I mean, yeah, they're different things.  If you can figure out how to get to the correct destination faster next time you're trying to figure something out, that seems obviously useful.
2quetzal_rainbow
"Lesson overall" can contain idiosyncratic facts that you can learn iff you run into problem and try to solve it, you can't know them (assuming you are human and not AIXI) in advance. But you can ask yourself "how would someone with better decision-making algorithm solve this problem having the same information as me before I tried to solve this problem" and update your decision-making algorithm accordingly.
LGS10

This is interesting, but how do you explain the observation that LW posts are frequently much, much longer than they need to be to convey their main point? They take forever to get started ("what this is NOT arguing: [list of 10 points]", etc.) and take forever to finish.

5Steven Byrnes
Point 1: I think “writing less concisely than would be ideal” is the natural default for writers, so we don’t need to look to incentives to explain it. Pick up any book of writing advice and it will say that, right? “You have to kill your darlings”, “If I had more time, I would have written a shorter letter”, etc. Point 2: I don’t know if this applies to you-in-particular, but there’s a systematic dynamic where readers generally somewhat underestimate the ideal length of a piece of nonfiction writing. The problem is, the writer is writing for a heterogeneous audience of readers. Different readers are coming in with different confusions, different topics-of-interest, different depths-of-interest, etc. So you can imagine, for example, that every reader really only benefits from 70% of the prose … but it’s a different 70% for different readers. Then each individual reader will be complaining that it’s unnecessarily long, but actually it can’t be cut at all without totally losing a bunch of the audience. (To be clear, I think both of these are true—Point 2 is not meant as a denial to Point 1; not all extra length is adding anything. I think the solution is to both try to write concisely and make it easy for the reader to recognize and skip over the parts that they don’t need to read, for example with good headings and a summary / table-of-contents at the top. Making it fun to read can also somewhat substitute for making it quick to read.)
LGS5-3

I'd say that LessWrong has an even stronger aesthetic of effort than academia. It is virtually impossible to have a highly-voted lesswrong post without it being long, even though many top posts can be summarized in as little as 1-2 paragraphs.

Hmm, I notice a pretty strong negative correlation between how long it takes me to write a blog post and how much karma it gets. For example, very recently I spent like a month of full-time work to write two posts on social status (karma = 71 & 36), then I took a break to catch up on my to-do list, in the course of which I would sometimes spend a few hours dashing off a little post, and there have been three posts in that category, and their karma is 57, 60, 121 (this one). So, 20ish times less effort, somewhat more karma. This is totally in line with ...

LGS2019

Without endorsing anything, I can explain the comment.

The "inside strategy" refers to the strategy of safety-conscious EAs working with (and in) the AI capabilities companies like openAI; Scott Alexander has discussed this here. See the "Cooperate / Defect?" section.

The "Quokkas gonna quokka" is a reference to this classic tweet which accuses the rationalists of being infinitely trusting, like the quokka (an animal which has no natural predators on its island and will come up and hug you if you visit). Rationalists as quokkas is a bit of a meme; search "qu... (read more)

4GeneSmith
I am just now learning the origin of the quokka meme. The first and only time I ever saw the reference was with no explanation when someone posted this meme on Twitter
LGS*11

This seems harder, you'd need to somehow unfuse the growth plates.

 

It's hard, yes -- I'd even say it's impossible. But is it harder than the brain? The difference between growth plates and whatever is going on in the brain is that we understand growth plates and we do not understand the brain. You seem to have a prior of "we don't understand it, therefore it should be possible, since we know of no barrier". My prior is "we don't understand it, so nothing will work and it's totally hopeless".

A nice thing about IQ is that it's actually really easy to me...