Linch


Comments

Linch · 20

So there was a lot of competitive pressure to keep pushing to make it work. A good chunk of the Superalignment team stayed on in the hope that they could win the race and use OpenAI’s lead to align the first AGI, but many of the safety people at OpenAI quit in June. We were left with a new alignment lab, Embedded Intent, and an OpenAI newly pruned of the people most wanting to slow it down.”

“And that’s when we first started learning about this all?”

“Publicly, yes. The OpenAI defectors were initially mysterious about their reasons for leaving, citing deep disagreements over company direction. But then some memos were leaked, SF scientists began talking, and all the attention of AI Twitter was focused on speculating about what happened. They pieced pretty much the full story together before long, but that didn’t matter soon. What did matter was that the AI world became convinced there was a powerful new technology inside OpenAI.”

Yarden hesitated. “You’re saying that speculation, that summer hype, it led to the cyberattack in July?”

“Well, we can’t say for certain,” I began. “But my hunch is yes. Governments had already been thinking seriously about AI for the better part of a year, and their national plans were becoming crystallized for better or worse. But AI lab security was nowhere near ready for that kind of heat.

Wow. So many of the sociological predictions came true, even though afaict the technical predictions are (thankfully) lagging behind.

Linch · 40

Interesting! I hadn't considered that angle.

Linch · 40

If the means/medians are higher, the tails are (usually) higher as well.

A Norm(μ=115, σ=15) distribution will have a much lower proportion of data points above 150 than Norm(μ=130, σ=15). The same argument applies to other realistic distributions. So if all I know about fields A and B is that B has a much lower mean than A, by default I'd also assume B has a much lower 99th percentile than A, and a much lower percentage of people above some "genius" cutoff.
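A quick sketch of this with Python's stdlib `NormalDist` (the two distributions and the cutoff of 150 are just the illustrative numbers from the comment, not data about any real field):

```python
from statistics import NormalDist

cutoff = 150  # illustrative "genius" cutoff from the comment
field_a = NormalDist(mu=130, sigma=15)  # higher-mean field
field_b = NormalDist(mu=115, sigma=15)  # lower-mean field (1 sd below)

tail_a = 1 - field_a.cdf(cutoff)  # proportion above the cutoff
tail_b = 1 - field_b.cdf(cutoff)

print(f"P(X > {cutoff}) for mean 130: {tail_a:.4f}")  # ~0.091
print(f"P(X > {cutoff}) for mean 115: {tail_b:.4f}")  # ~0.010
print(f"ratio: {tail_a / tail_b:.1f}x")               # roughly 9x
```

A one-standard-deviation shift in the mean changes the above-150 share by nearly an order of magnitude, which is the point: small mean gaps imply large tail gaps.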

Linch · 51

Again using the replication crisis as an example, you may have noticed the very wide (like, 1 sd or more) average IQ gap between students in most fields which turned out to have terrible replication rates and most fields which turned out to have fine replication rates.

This is rather weak evidence for your claim ("memeticity in a scientific field is mostly determined, not by the most competent researchers in the field, but instead by roughly-median researchers"), unless you additionally posit another mechanism, e.g. "fields with terrible replication rates have a higher standard deviation than fields without them" (why would that be?).
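To make the objection concrete: if both fields have the same spread, a gap in means shifts every quantile by the same amount, so an observed mean/median gap can't by itself distinguish "median researchers drive outcomes" from "top researchers drive outcomes". A minimal sketch (the μ and σ values are illustrative, not data about any real field):

```python
from statistics import NormalDist

sigma = 15  # assume equal spread in both fields
field_low = NormalDist(mu=115, sigma=sigma)   # lower-mean field
field_high = NormalDist(mu=130, sigma=sigma)  # higher-mean field

for q in (0.50, 0.90, 0.99):
    gap = field_high.inv_cdf(q) - field_low.inv_cdf(q)
    print(f"{q:.0%} quantile gap: {gap:.1f}")  # 15.0 at every quantile
```

Since a normal quantile is μ + σ·Φ⁻¹(q), the between-field gap at the median equals the gap at the 99th percentile whenever σ is shared, so the evidence only bears on the "median researchers" claim if the variances differ.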

Linch · 30

Some people I know are much more pessimistic about the polls this cycle, due to herding. For example, nonresponse bias might just be massive for Trump voters (across demographic groups), so pollsters end up having to make a series of unprincipled choices with their thumbs on the scale.

Linch · 20

There's also a comic series with exactly this premise; unfortunately it's a major plot point, so naming it would be a spoiler:

Linch · 30

Yeah, this was my first thought halfway through. Way too many specific coincidences for it to be anything else.
