All of Linch's Comments + Replies

Linch
30

I think in an ideal world we'd have prediction markets structured around several different levels of investment risk, so that people with different levels of investment risk tolerance can make bets (and we might also observe fascinating differences if the odds diverge, eg if AGI probabilities are massively different between S&P 500 bets and T-bills bets, for example). 

2MichaelDickens
I believe the correct way to do this, at least in theory, is to simply have bets denominated in the risk-free rate—and if anyone wants more risk, they can use leverage to simultaneously invest in equities and prediction markets. Right now I don't know if it's possible to use margin loans to invest in prediction markets.
Linch
*85

I thought about this a bit more, and I'm worried that this is going to be a long-running problem for the reliability of prediction markets for low-probability events. 

Most of the problems we currently observe seem like "teething issues" that can be solved with higher liquidity, lower transaction costs, and better design (for example, by having bets denominated in S&P 500 or other stock portfolios rather than $s). But if you should understand "yes" predictions for many of those markets as an implicit bet on differing variances of time value of mone... (read more)

1Pat Myron
Polymarket could consider at least being explicit about that limitation and disallow wagers beyond 99% like Kalshi and Manifold currently do:
3samuelshadrach
Update: edited numbers, earlier one was incorrect.

IMO in real world examples (not meme examples like this religious one) tail risk will often dominate the price calculation, not time value. Time value seems relevant here only because the tail risk is zero. (Both buyer and seller agree that the probability of yes on this market is zero.)

Let's say the actual probability of some event is 3% yes and both parties agree on this. It still could be rational for a larger investor to buy no and a small investor to buy yes at 3.5%, for example. The insurance market is analogous to this: it is possible for both the insurance buyer and seller to be rational at the same time because there is a transfer of tail risk. The only person who can rationally accept a 3.5% chance of a $1B portfolio going to zero is someone who owns over $10B. (Assuming a utility function that makes sense for a human being.) So it's the largest investors and ultimately federal banks who absorb most of the tail risk of society.

Also of course not everyone is rational when it comes to avoiding taking on tail risk; the 2008 financial crisis is an example of this. Beyond a point, if federal banks can't absorb the tail risk they diffuse the losses to everyone.

I'm guessing the actual reason you're interested in this is because you want prediction markets on existential questions, and there too the actual question is who absorbs the tail risk of society on behalf of everyone else.

P.S. In markets that are not low probability, the variance of the asset price (not just time value) will matter when constructing an optimal portfolio. So the Sharpe ratio is a better metric to study than expected value. In general I guess people without a financial background are not used to thinking about variance risk and tail risk.
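A minimal sketch of the bankroll point above, under a log-utility (Kelly) assumption and with my own illustrative numbers rather than anything from the comment: a positive-EV "No" position at these prices only justifies risking a modest fraction of one's bankroll, so absorbing a $1B notional requires a multi-billion-dollar bankroll.

```python
import numpy as np

# Illustrative numbers (an assumption, loosely following the comment above):
# true P(yes) = 3%, and the "No" side risks $0.965 per $1 of notional to win
# the $0.035 premium if the event does not happen.
p_yes = 0.03
price_yes = 0.035
risk_per_dollar = 1 - price_yes   # collateral lost if Yes occurs
gain_per_dollar = price_yes       # premium collected if No

def expected_log_growth(frac_at_risk: float) -> float:
    """Expected log-wealth growth for a log-utility bettor on the No side,
    putting `frac_at_risk` of their bankroll at risk."""
    win = np.log(1 + frac_at_risk * gain_per_dollar / risk_per_dollar)
    lose = np.log(1 - frac_at_risk)
    return (1 - p_yes) * win + p_yes * lose

fracs = np.linspace(0.001, 0.5, 500)
kelly_frac = fracs[int(np.argmax([expected_log_growth(f) for f in fracs]))]
print(f"Kelly-optimal fraction of bankroll at risk: {kelly_frac:.1%}")

# Bankroll needed to absorb the No side of a $1B notional bet at this sizing
stake = 1e9 * risk_per_dollar
print(f"Bankroll at full Kelly: ${stake / kelly_frac / 1e9:.1f}B")
print(f"Bankroll at half Kelly: ${2 * stake / kelly_frac / 1e9:.1f}B")
```

With these assumed numbers the Kelly fraction comes out around 14%, so a half-Kelly bettor would indeed want a bankroll in the ballpark of the $10B figure mentioned above.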
4MichaelDickens
Bets should be denominated in the risk-free rate. Prediction markets should invest traders' money into T-bills and pay back the winnings plus interest. I believe that should be a good enough incentive to make prediction markets a good investment if you can find positive-EV bets that aren't perfectly correlated with equities (or other risky assets). (For Polymarket the situation is a bit more complicated because it uses crypto.)
6Ben
Wouldn't higher liquidity and lower transaction costs sort this out? Say you have some money tied up in "No, Jesus will not return this year", but you really want to bet on some other thing. If transaction costs were completely zero then, even if you have your entire net worth tied up in "No Jesus" bets you could still go to a bank, point out you have this more-or-less guaranteed payout on the Jesus market, and you want to borrow against it or sell it to the bank. Then you have money now to spend. This would not in any serious way shift the prices of the "Jesus will return" market because that market is of essentially zero size compared to the size of the banks that will be loaning against or buying the "No" bets. With low enough transaction costs the time value of money is the same across the whole economy, so buying "yes" shares in Jesus would be competing against a load of other equivalent trades in every other part of the economy. I think selling shares for cash would be one of these, you are expecting loads of people to suddenly want to sell assets for cash in the future, so selling your assets for cash now so you can buy more assets later makes sense.
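A back-of-the-envelope version of the time-value argument running through this subthread, using an assumed 5% one-year risk-free rate (my number, purely for illustration): if locked collateral earns nothing, even a trader who is certain the event won't happen has no reason to bid "No" above the price of a one-year zero-coupon bond.

```python
# Time-value bound on a low-probability market (assumed 5% one-year risk-free rate).

risk_free_rate = 0.05
horizon_years = 1.0

# If collateral locked in the market earns nothing, a $1 "No" payout on an event
# everyone agrees won't happen is just a zero-coupon bond:
max_no_price = 1 / (1 + risk_free_rate) ** horizon_years
print(f"'No' price cap from time value alone: {max_no_price:.3f}")
print(f"Implied 'Yes' price floor: {1 - max_no_price:.3f} (vs. ~0% true probability)")

# If the platform pays interest on collateral, or traders can cheaply borrow
# against near-certain positions (as Ben suggests above), the cap goes away
# and 'No' can trade at ~1.00.
```

Under these assumptions the "Yes" price floor sits near 5 cents regardless of the true probability, which is the distortion being discussed.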
0danielechlin
Keep in mind their goal is to take money from gambling addicts, not predict the future.
Linch
40

I agree that Tracy does this at a level sufficient to count as "actually care about meritocracy" from my perspective. I would also consider Lee Kuan Yew to actually care a lot about meritocracy, for a more mainstream example.

> You could apply it to all endeavours, and conclude that "very few people are serious about <anything>"

Yeah it's a matter of degree not kind. But I do think many human endeavors pass my bar. I'm not saying people should devote 100% of their efforts to doing the optimal thing. 1-5% done non-optimally seems enough for me, and many p... (read more)

Linch
60

I thought about this for more than 10 minutes, though on a micro rather than macro level (scoped as "how can more competent people work on X" or "how can you hire talented people"). But yeah more like days rather than years.

  1. I think one-on-one talent scouting or funding are good options locally but are much less scalable than psychometric evaluations.
  2. More to the point, I haven't seen people try to scale those things either. The closest might be something like TripleByte? Or headhunting companies? Certainly when I think of a typical (or 95th-99th percentile) "person who says they care a lot about meritocracy" I'm not imagining a recruiter, or someone in charge of such a firm. Are you?  
7Garrett Baker
I think much of venture capital is trying to scale this thing, and as you said they don't use the framework you use. The philosophy there is much more oriented towards making sure nobody falls through the cracks. Provide the opportunity, then let the market allocate the credit. That is, the way to scale meritocracy turns out to be maximizing c rather than the other considerations you listed, on current margins.
Linch
40

Makes sense! I agree that this is a valuable place to look. Though I am thinking about tests/assessments in a broader way than you're framing it here. Eg things that go into this meta-analysis, and improvements/refinements/new ideas, and not just narrow psychometric evaluations. 

Linch
70

How serious are they about respectability and people taking them seriously in the short term vs selfishly wanting more money and altruistically just wanting to make prediction markets more popular?

Linch
20

Without assigning my own normative judgment, isn't this just standard trader behavior/professional ethics? It seems simple enough to justify thus:

Two parties want to make a bet (trade). I create a platform to facilitate such a bet (trade). Both parties are better off by their own lights after such a trade. I helped them do something that makes them each happier, and make a healthy profit doing so. As long as I'm not doing something otherwise underhanded/unethical, what's the problem here?

I don't think it's conceptually any different from e.g. offering memecoins on your crypto exchange, or (an atheist) selling religious texts on Amazon.

4Eric Neyman
Oh, I don't think it was at all morally bad for Polymarket to make this market -- just not strategic, from the standpoint of having people take them seriously.
Linch
*241

Shower thought I had a while ago:

Everybody loves a meritocracy until people realize that they're the ones without merit. I mean you never hear someone say things like:

I think America should be a meritocracy. Ruled by skill rather than personal characteristics or family connections. I mean, I love my son, and he has a great personality. But let's be real: If we live in a meritocracy he'd be stuck in entry-level.

(I framed the hypothetical this way because I want to exclude senior people very secure in their position who are performatively pushing for meritoc... (read more)

5Ben
There was an interesting Astral Codex 10 thing related to this kind of idea: https://www.astralcodexten.com/p/book-review-the-cult-of-smart Mirroring some of the logic in that post, starting from the assumption that neither you nor anyone you know are in the running for a job, (lets say you are hiring an electrician to fix your house) then do you want the person who is going to do a better job or a worse one? If you are the parent of a child with some kind of developmental problem that means they have terrible hand-eye coordination, you probably don't want your child to be a brain surgeon, because you can see that is a bad idea. You do want your child to have resources, and respect and so on. But what they have, and what they do, can be (at least in principle) decoupled. In other words, I think that using a meritocratic system to decide who does what (the people who are good at something should do it) is uncontroversial. However, using a meritocratic system to decide who gets what might be a lot more controversial. For example, as an extreme case you could consider disability benefit for somebody with a mental handicap to be vaguely against the "who gets what" type of meritocracy. Personally I am strongly in favor of the  "who does what" meritocracy, but am kind of neutral on the "who gets what" one.
1Canaletto
There's a also a bit of divergence in "has skills/talent/power" and "cares about what you care about". Like, yes, maybe there is a very skilled person for that role, but are they trustworthy/reliable/aligned/have the same priorities? You always face the risk of giving some additional power to already powerful adversarial agent. You should be really careful about that. Maybe more focus on the virtue rather than skill.
4Garrett Baker
The field you should look at I think is Industrial and Organizational Psychology, as well as the classic Item Response Theory.

There is a contingent of people who want excellence in education (e.g. Tracing Woodgrains) and are upset about e.g. the deprioritization of math and gifted education and SAT scores in the US. Does that not count?

> Given that ~ no one really does this, I conclude that very few people are serious about moving towards a meritocracy.

This sounds like an unreasonably high bar for us humans. You could apply it to all endeavours, and conclude that "very few people are serious about <anything>". Which is true from a certain perspective, but also stretches the word "serious" far past how it's commonly understood.

Linch
20

I agree being high-integrity and not lying is a good strategy in many real-world dealings. It's also better for your soul. However I will not frame it as "being a bad liar" so much as "being honest." Being high-integrity is often valuable, and ofc you accrue more benefits from actually being high-integrity when you're also known as high-integrity. But these benefits mostly come from actually not lying, rather than lying and being bad at it.

4Seth Herd
Right. There's no advantage to being a bad liar, but there may be an advantage to being seen as a bad liar. But it's probably not worth lying badly to get that reputation, since that would also wreck your reputation for honesty.
Linch
151

I've enjoyed playing social deduction games (mafia, werewolf, among us, avalon, blood on the clock tower, etc) for most of my adult life. I've become decent but never great at any of them. A couple of years ago, I wrote some comments on what I thought the biggest similarities and differences between social deduction games and incidences of deception in real life are. But recently, I decided that what I wrote before isn't that important relative to what I now think of as the biggest difference:

> If you are known as a good liar, is it generally advantageo... (read more)

2Viliam
Sometimes being known as smart is already a disadvantage, because some people assume (probably correctly) that it would be easier for a smarter person to deceive them. I wonder how many smart people are out there who have concluded that a good strategy is to hide their intelligence, and instead pretend to be merely good at some specific X (needed for their job). I suspect that many of them actually believe that (it is easier to consistently say something if you genuinely believe that), and that women are over-represented in this group.
2Garrett Baker
Thinking of more concrete, everyday, scenarios where your ability to lie is seen as an asset:

* White lies
* When someone shares yet-unpublished research results with you
* Generally secrets confided to you
* Keeping a professional demeanor
* Generally being nice
* You just have to say that you're fine

I'd guess, based on these, that the main effect of being able to lie better is being seen as more consistent, and making complex social or political systems easier to deal with when you are involved. People can share information with you, while not expecting second or third order consequences of that. People can trust that regardless of what happens in your personal life, they will not need to spend their own emotional energy dealing with you. They can trust that they can ask you how they look, and consistently get an ego boost.
2trevor
In the ancestral environment, allies and non-enemies who visibly told better lies probably offered more fitness than allies and non-enemies who visibly made better tools, let alone invented better tools (which probably happened once in 10-1000 generations or something). In this case, "identifiably" can only happen, and become a Schelling point that increases fitness of the deceiver and the identifier, if revealed frequently enough, either via bragging drive, tribal reputation/rumors, or identifiable to the people in the tribe unusually good at sensing deception. What ratio of genetic vs memetic (e.g. the line "he's a bastard, but he's our bastard") were you thinking of?
4Seth Herd
All of the below is speculative; I just want to note that there are at least equally good arguments for the advantages of being seen as a bad liar (and for actually being a bad liar).

I disagree on the real world advantages. Judging what works from a few examples of people who are known as good liars (Trump and Musk for instance) isn't the right way to judge what works on average (and I'm not sure those two are even "succeeding" by my standards; Trump at least seems quite unhappy).

I have long refused to play social deception games because not only do I not want to be known as a good liar, I don't want to become a good liar! Being known as one seems highly disadvantageous in personal life. Trust from those nearest you seems highly valuable in many situations. The best way to be seen as trustworthy is to be trustworthy. Practicing lying puts you at risk of becoming known as good at lying, which could get you a reputation as untrustworthy.

Aside from the practical benefits of being known as a trustworthy partner for a variety of ventures, being known as a good liar is going to be a substantial barrier to having reliable friendships. I stopped playing social deception games when I noticed how I no longer trusted my friends who'd proven to be good liars. I realized I couldn't read them, so could no longer take them at face value when they told me important things. My other friends who'd proven to be poor liars also became more trustworthy to me. If they'd kept practicing and become good liars, they'd have lost that trust.

Faking being a bad liar or being trustworthy seems like a potentially good strategy, but it just seems more trouble than remaining a bad liar and just being honest in your dealings. I'm sure there are some life circumstances where that won't work, but it's nice to live honestly if you can.
3sjadler
Interesting material yeah - thanks for sharing! Having played a bunch of these, I think I’d extend this to “being correctly perceived is generally bad for you” - that is, it’s both bad to be a bad liar who’s known as bad, and bad to be good liar who’s known as good (compared to this not being known). For instance, even if you’re a bad liar, it’s useful to you if other players have uncertainty about whether you’re actually a good liar who’s double-bluffing. I do think the difference between games and real-life may be less about one-time vs repeated interactions, and more about the ability to choose one’s collaborators in general? Vs teammates generally being assigned in the games. One interesting experience I’ve had, which maybe validates this: I played a lot of One Night Ultimate Werewolf with a mixed-skill group. Compared to other games, ONUW has relatively more ability to choose teammates - because some roles (like doppelgänger or paranormal investigator, or sometimes witch) essentially can choose to join the team of another player. Suppose Tom was the best player. Over time, more and more players in our group would choose actions that made them more likely to join Tom’s team, which was basically a virtuous cycle for Tom: in a given game, he was relatively more likely to have a larger number of teammates - and # teammates is a strong factor in likelihood of winning. But, this dynamic would have applied equally in a one-time game I think, provided people knew this about Tom and still had a means of joining his team.
Linch
*210

Single examples almost never provide overwhelming evidence. They can provide strong evidence, but not overwhelming.

Imagine someone arguing the following:
 

1. You make a superficially compelling argument for invading Iraq

2. A similar argument, if you squint, can be used to support invading Vietnam

3. It was wrong to invade Vietnam

4. Therefore, your argument can be ignored, and it provides ~0 evidence for the invasion of Iraq.

In my opinion, 1-4 is not reasonable. I think it's just not a good line of reasoning. Regardless of whether you're for or against ... (read more)

Linch
22

> I ran a quick low-effort experiment with 50% secure code and 50% insecure code some time ago and I'm pretty sure this led to no emergent misalignment.

Woah, I absolutely would not have predicted this given the rest of your results!

Linch
*Ω131

I think I'm relatively optimistic that the difference between a system that "can (and will) do a very good job with human values when restricted to the text domain" and a system that "can do a very good job, unrestricted" isn't that high. This is because I'm personally fairly skeptical about arguments along the lines of "words aren't human thinking, words are mere shadows of human thinking" that people put out, at least when it comes to human values. 

(It's definitely possible to come up with examples that illustrates the differences between all of human thinking and human-thinking-put-into-words; I agree about their existence, I disagree about their importance).

1David Scott Krueger (formerly: capybaralet)
OTMH, I think my concern here is less:

* "The AI's values don't generalize well outside of the text domain (e.g. to a humanoid robot)"

and more:

* "The AI's values must be much more aligned in order to be safe outside the text domain"

I.e. if we model an AI and a human as having fixed utility functions over the same accurate world model, then the same AI might be safe as a chatbot, but not as a robot. This would be because the richer domain / interface of the robot creates many more opportunities to "exploit" whatever discrepancies exist between AI and human values in ways that actually lead to perverse instantiation.  
Linch
60

So there was a lot of competitive pressure to keep pushing to make it work. A good chunk of the Superalignment team stayed on in the hope that they could win the race and use OpenAI’s lead to align the first AGI, but many of the safety people at OpenAI quit in June. We were left with a new alignment lab, Embedded Intent, and an OpenAI newly pruned of the people most wanting to slow it down.”

“And that’s when we first started learning about this all?”

“Publicly, yes. The OpenAI defectors were initially mysterious about their reasons for leaving, citing deep d

... (read more)
Linch
40

Interesting! I didn't consider that angle

Linch
*40

Agreed, I was trying to succinctly convey something that I think is underrated, unfortunately going to miss some nuances.

Linch
*40

If the means/medians are higher, the tails are also higher as well (usually). 

Norm(μ=115, σ=15) distribution will have a much lower proportion of data points above 150 than Norm(μ=130, σ=15). Same argument for other realistic distributions. So if all I know about fields A and B is that B has a much lower mean than A, by default I'd also assume B has a much lower 99th percentile than A, and much lower percentage of people above some "genius" cutoff. 
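A quick check of the tail arithmetic (using scipy; the 150 cutoff is just the example from the comment):

```python
from scipy.stats import norm

# Tail proportions above a "genius" cutoff of 150 for the two example distributions
for mu in (115, 130):
    p_tail = norm.sf(150, loc=mu, scale=15)   # survival function: P(X > 150)
    print(f"Norm(mu={mu}, sigma=15): P(X > 150) = {p_tail:.2%}")

# ~0.98% vs ~9.1%: shifting the mean by one standard deviation changes the
# far tail by roughly an order of magnitude.
```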

5johnswentworth
Oh I see, you mean that the observation is weak evidence for the median model relative to a model in which the most competent researchers mostly determine memeticity, because higher median usually means higher tails. I think you're right, good catch.
Linch
*51

> Again using the replication crisis as an example, you may have noticed the very wide (like, 1 sd or more) average IQ gap between students in most fields which turned out to have terrible replication rates and most fields which turned out to have fine replication rates.

This is rather weak evidence for your claim ("memeticity in a scientific field is mostly determined, not by the most competent researchers in the field, but instead by roughly-median researchers"), unless you additionally posit another mechanism like "fields with terrible replication rat... (read more)

2johnswentworth
Why would that be relevant?
Linch
30

Some people I know are much more pessimistic about the polls this cycle, due to herding. For example, nonresponse bias might just be massive for Trump voters (across demographic groups), so pollsters end up having to make a series of unprincipled choices with their thumbs on the scales. 

Linch
20

There's also a comic series with explicitly this premise, unfortunately this is a major plot point so revealing it will be a spoiler:

Linch
30

Yeah this was my first thought halfway through. Way too many specific coincidences to be anything else.

Linch
7-2

Constitutionally protected free speech, efforts opposing it were ruled explicitly unconstitutional

God LW standards sure are slipping. 8 years ago people would be geeking out about the game theory implications, commitments, decision theory, alternative voting schemas, etc. These days after the first two downvotes it's just all groupthink, partisan drivel, and people making shit up, apparently. 

Linch
79

My guess is that we wouldn't actually know with high confidence before (and likely even some time after) things-will-definitely-be-fine.

E.g. 3 months after safe ASI people might still be publishing their alignment takes.  

1davekasten
Oh, to be clear I'm not sure this is at all actually likely, but I was curious if anyone had explored the possibility conditional on it being likely
Linch
91

There are also times where "foreign actors" (I assume by that term you mean actors interested in muddying the waters in general, not just literal foreign election interference) know that it's impossible to push a conversation towards their preferred 1)A or 5)B, at least among informed/educated voices, so they try to muddy the waters and push things towards 3). Climate change[1] and covid vaccines are two examples that come to mind. 

  1. ^

    Though the correct answer for climate change is closer to 2) than 1)

3notfnofn
I actually just meant sowing discord by pushing half the population towards one and the other half towards the other in cases where it doesn't really affect them, but that's a good point. It's important to not be deceived into thinking issues are complicated when they are really not.
Linch
*21

They were likely using inferior techniques to RLHF to implement ~Google corporate standards; not sure what you mean by "ethics-based," presumably they have different ethics than you (or LW) do, but intent alignment has always been about doing what the user/operator wants, not about solving ethics. 

2Roko
Well it has often been about not doing what the user wants, actually.
Linch
20

I'm not suggesting that the short argument should resolve those background assumptions; I'm suggesting that a good argument for people who don't share those assumptions roughly entails being able to understand someone else's assumptions well enough to speak their language and craft a persuasive and true argument on their terms.

Linch
40

Reverend Thomas Bayes didn't strike me as a genius either, but of course the bar was a lot lower back then. 

Linch
40

Norman Borlaug (father of the Green Revolution) didn't come across as very smart to me. Reading his Wikipedia page, there didn't seem to be notable early childhood signs of genius, or anecdotes about how bright he is. 

Linch
92

AI News so far this week.
1. Mira Murati (CTO) leaving OpenAI 

2. OpenAI restructuring to be a full for-profit company (what?) 

3. Ivanka Trump calls Leopold's Situational Awareness article "excellent and important read"

4. More OpenAI leadership departing, unclear why. 
4a. Apparently sama only learned about Mira's departure the same day she announced it on Twitter? "Move fast" indeed!
4b. WSJ reports some internals of what went down at OpenAI after the Nov board kerfuffle. 

5. California Federation of Labor Unions (2million+ members) spoke o... (read more)

Linch
20

Mild spoilers for a contemporary science-fiction book, but the second half was a major plot point in 
 

The Dark Forest, the sequel to Three-Body Problem

Linch
*51

I'm aware of Griggs v Duke; do you have more modern examples? Note that the Duke case was about a company that was unambiguously racist in the years leading up to the IQ test (ie they had explicit rules forbidding black people from working in some sections of the company), so it's not surprising that judges will see their implementation of the IQ test the day after the Civil Rights Act was passed as an attempt to continue racist policies under a different name. 

"I've never had issue before" is not a legal argument. 

But it is a Bayesian argument f... (read more)

Linch
3-1

> Video games also have potential legal advantages over IQ tests for companies. You could argue that "we only hire people good at video games to get people who fit our corporate culture of liking video games" but that argument doesn't work as well for IQ tests.

IANAL but unless you work for a videogame company (or a close analogue like chess.com), I think this is just false. If your job is cognitively demanding, having IQ tests (or things like IQ tests with a mildly plausible veneer) probably won't get you in legal trouble[1], whereas I think employment lawye... (read more)

3[anonymous]
.
Answer by Linch
87

There's no such thing as "true" general intelligence. There's just a bunch of specific cognitive traits that happen to (usually) be positively correlated with each other. Some proxies are more indicative than others (in the sense that getting high scores on them consistently correlate with doing well on other proxies), and that's about the best you can hope for.

Within the human range of intelligence and domains we're interested in, IQ is decent, so are standardized test scores, so (after adjusting for a few things like age and location of origin) is income, so is vocabulary, so (to a lesser degree) is perception of intelligence by peers, and so forth.
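A toy simulation of "specific traits that happen to be positively correlated" (assumptions entirely mine, for illustration): one shared latent factor plus proxy-specific noise reproduces the pattern where every proxy predicts every other and a single dominant factor falls out of the correlation matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Toy model: one shared latent factor plus proxy-specific noise,
# with a different (made-up) loading for each proxy.
loadings = {"iq_test": 0.8, "sat": 0.7, "vocab": 0.6, "income": 0.3, "peer_rating": 0.4}
latent = rng.standard_normal(n)
proxies = np.column_stack(
    [w * latent + np.sqrt(1 - w**2) * rng.standard_normal(n) for w in loadings.values()]
)

# Every proxy positively correlates with every other...
corr = np.corrcoef(proxies, rowvar=False)
print(np.round(corr, 2))

# ...and a single dominant factor accounts for a large share of the variance.
eigvals = np.linalg.eigvalsh(corr)
print(f"first principal component explains {eigvals[-1] / eigvals.sum():.0%} of the variance")
```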

3M. Y. Zuo
I am not asking about ‘true’ general intelligence? Or whatever that implies. If you're not sure, I am asking regarding the term commonly called ‘general intelligence’, or sometimes also known as ‘general mental ability factor’ or ‘g-factor’, in mainstream academic papers. Such as those found in pedagogy, memetics, genetics, etc… See: https://scholar.google.com/scholar?hl=en&as_sdt=0%252C5&q=“general+intelligence”&btnG= Where many many thousands of researchers over the last few decades are referring to this.

Here is a direct quote by a pretty well known expert among intelligence researchers, writing in 2004:

“During the past few decades, the word intelligence has been attached to an increasing number of different forms of competence and accomplishment - emotional intelligence, football intelligence, and so on. Researchers in the field, however, have largely abandoned the term, together with their old debates over what sorts of abilities should and should not be classified as part of intelligence. Helped by the advent of new technologies for researching the brain, they have increasingly turned their attention to a century-old concept of a single overarching mental power. They call it simply g, which is short for the general mental ability factor. The g factor is a universal and reliably measured distinction among humans in their ability to learn, reason, and solve problems. It corresponds to what most people mean when they describe some individuals as smarter than others, and it's well measured by IQ (intelligence quotient) tests, which assess high-level mental skills such as the ability to draw inferences, see similarities and differences, and process complex information of virtually any kind. Understanding g's biological basis in the brain is the new frontier in intelligence research today. The g factor was discovered by the first mental testers, who found that people who scored well on one type of mental test tended to score well on all of them. Regardless of th
Linch
20

Slightly tangential, but do you know what the correct base rate of Manifold binary questions are? Like is it closer to 30% or closer to 50% for questions that resolve Yes? 

Linch
111

The results of the replication are so bad that I'd want to see somebody else review the methodology or try the same experiment or something before trusting that this is the "right" replication.

Lorenzo
10-1

Manifold claims a brier score of 0.17 and says it's "very good" https://manifold.markets/calibration 

Prediction markets in general don't score much better https://calibration.city/accuracy . I wouldn't say 0.195 is "so bad"
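For reference, the Brier score being quoted here is just the mean squared error between forecast probabilities and binary outcomes; a minimal sketch with made-up numbers (not Manifold data):

```python
import numpy as np

def brier_score(predicted_probs, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes.
    0 is perfect; always guessing 50% scores 0.25."""
    p = np.asarray(predicted_probs, dtype=float)
    y = np.asarray(outcomes, dtype=float)
    return float(np.mean((p - y) ** 2))

# Made-up toy forecasts and resolutions:
preds = [0.9, 0.8, 0.3, 0.6, 0.05, 0.7]
actual = [1, 1, 0, 1, 0, 0]
print(f"Brier score: {brier_score(preds, actual):.3f}")              # ~0.132
print(f"Always-50% baseline: {brier_score([0.5] * 6, actual):.3f}")  # 0.250
```

On this scale, the 0.17 vs 0.195 difference under discussion is real but modest, and both sit well below the 0.25 coin-flip baseline.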

Linch
30

This seems very surprising/wrong to me given my understanding of the animal kingdom, where various different bands/families/social groups/whatever precursor to tribes you think of have ways to decrease inbreeding, but maybe you think human hunter-gatherers are quite different? I'd expect population bottlenecks to be the exception rather than the rule here across the history of our species.

I'd trust the theory + animal data somewhat more on this question than (e.g.) studies on current uncontacted peoples. 

Linch
42

> My assumption is that most of my ancestors (if you set a reasonable cutoff in the past at the invention of farming, or written records) would be farmers because from ca. 10kya to only a few hundred years ago, most people were farmers by a huuuge margin.

The question very specifically asked for starting 300,000 years ago, not 10,000. 

1[anonymous]
.
Linch
71

Good comms for people who don't share your background assumptions is often really hard! 

That said I'd definitely encourage Akash and other people who understand both the AI safety arguments and policymakers to try to convey this well. 

Maybe I'll take a swing at this myself at some point soon; I suspect I don't really know what policymakers' cruxes were or how to speak their language but at least I've lived in DC before. 

1M. Y. Zuo
Then this seems to be an entirely different problem? At the very least, resolving substantial differences in background assumptions is going to take a lot more than a ‘short presentation’. And it’s very likely those in actual decision making positions will be much less charitable than me, since their secretaries receive hundreds or thousands of such petitions every week.
Linch
30

Just spitballing, but it doesn't seem theoretically interesting to academics unless they're bringing something novel (algorithmically or in design) to the table, and practically not useful unless implemented widely, since it's trivial for e.g. college students to use the least watermarked model.

Linch
63

I'm a bit confused. The Economist article seems to partially contradict your analysis here:

> More clues to Mr Xi’s thinking come from the study guide prepared for party cadres, which he is said to have personally edited. China should “abandon uninhibited growth that comes at the cost of sacrificing safety”, says the guide. Since AI will determine “the fate of all mankind”, it must always be controllable, it goes on. The document calls for regulation to be pre-emptive rather than reactive[...]

1ShenZhen
Thanks for that. The "the fate of all mankind" line really throws me. Without this line, everything I said above applies. Its existence (assuming that it exists, specifically refers to AI, and Xi really means it) is some evidence towards him thinking that it's important. I guess it just doesn't square with the intuitions I've built for him as someone not particularly bright or sophisticated. Being convinced by good arguments does not seem to be one of his strong suits. Edit: forgot to mention that I tried and failed to find the text of the guide itself.
Linch
20

Which the old version certainly would have done. The central thing the bill intends to do is to require effective watermarking for all AIs capable of fooling humans into thinking they are producing ‘real’ content, and labeling of all content everywhere.

OpenAI is known to have been sitting on a 99.9% effective (by their own measure) watermarking system for a year. They chose not to deploy it, because it would hurt their business – people want to turn in essays and write emails, and would rather the other person not know that ChatGPT wrote them.

As far as we

... (read more)
7Zvi
If the academics can hack together an open source solution why haven't they? Seems like it would be a highly cited, very popular paper. What's the theory on why they don't do it?
4Davidmanheim
Yeah, I think the simplest thing for image generation is for model hosting providers to use a separate tool - and lots of work on that already exists. (see, e.g., this, or this, or this, for different flavors.) And this is explicitly allowed by the bill. For text, it's harder to do well, and you only get weak probabilistic identification, but it's also easy to implement an Aaronson-like scheme, even if doing it really well is harder. (I say easy because I'm pretty sure I could do it myself, given, say, a month working with one of the LLM providers, and I'm wildly underqualified to do software dev like this.)
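To make "an Aaronson-like scheme" concrete, here is a toy, self-contained sketch (my own simplification, with a stub "language model" and a hypothetical key; not OpenAI's actual system, and a real implementation would also need to handle temperature, repeated n-grams, and paraphrase attacks). Generation nudges each token choice toward tokens whose key-derived pseudorandom score is high, in a way that preserves the model's distribution on average; the detector, given only the key and the text, checks whether those scores are suspiciously high.

```python
import hashlib
import math
import random

VOCAB = ["the", "cat", "sat", "on", "a", "mat", "dog", "ran"]
KEY = b"hypothetical-watermark-key"

def pseudorandom_scores(context, key=KEY):
    """Deterministic per-token scores in (0, 1), derived from key + recent context."""
    seed = hashlib.sha256(key + "|".join(context[-3:]).encode()).digest()
    rng = random.Random(seed)
    return {tok: rng.random() for tok in VOCAB}

def toy_lm_distribution(context):
    """Stand-in for a real LM: arbitrary but valid next-token probabilities."""
    rng = random.Random("lm" + "|".join(context[-2:]))
    weights = [rng.random() + 0.05 for _ in VOCAB]
    total = sum(weights)
    return {tok: w / total for tok, w in zip(VOCAB, weights)}

def generate(n_tokens, watermark=True):
    context, out = ["<s>"], []
    for _ in range(n_tokens):
        probs = toy_lm_distribution(context)
        if watermark:
            r = pseudorandom_scores(context)
            # pick the token maximizing log(r_v) / p_v (i.e. r_v ** (1/p_v)),
            # which preserves the LM's distribution on average (Gumbel-style trick)
            tok = max(VOCAB, key=lambda v: math.log(max(r[v], 1e-12)) / probs[v])
        else:
            tok = random.choices(VOCAB, weights=[probs[v] for v in VOCAB])[0]
        out.append(tok)
        context.append(tok)
    return out

def detect(tokens):
    """Detector needs only the key and the text: average -log(1 - r) over chosen
    tokens is ~1.0 for unwatermarked text and noticeably higher if watermarked."""
    context, score = ["<s>"], 0.0
    for tok in tokens:
        r = pseudorandom_scores(context)
        score += -math.log(1 - r[tok] + 1e-12)
        context.append(tok)
    return score / len(tokens)

print("avg detector score, watermarked:  ", round(detect(generate(300, True)), 2))
print("avg detector score, unwatermarked:", round(detect(generate(300, False)), 2))
```

The weak, probabilistic nature of the identification mentioned above shows up directly here: detection is a statistical test on the per-token scores, and it loses power as the text gets shorter or is edited.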
Linch
13314

The Economist has an article about China's top politicians on catastrophic risks from AI, titled "Is Xi Jinping an AI Doomer?"

Western accelerationists often argue that competition with Chinese developers, who are uninhibited by strong safeguards, is so fierce that the West cannot afford to slow down. The implication is that the debate in China is one-sided, with accelerationists having the most say over the regulatory environment. In fact, China has its own AI doomers—and they are increasingly influential.

[...]

China’s accelerationists want to keep things t

... (read more)
ShenZhen
13-1

Hmm, apologies if this is mostly based on vibes. My read of this is that this is not strong evidence either way. I think that of the excerpt, there are two bits of potentially important info:

  • Listing AI alongside biohazards and natural disasters. This means that the CCP does not care about and will not act strongly on any of these risks.
    • Very roughly, CCP documents (maybe those of other govs are similar, idk) contain several types of bits^: central bits (that signal whatever party central is thinking about), performative bits (for historical narrative coherence
... (read more)
gwern
*327

As I've noted before (eg 2 years ago), maybe Xi just isn't that into AI. People keep trying to meme the CCP-US AI arms race into happening for the past 4+ years, and it keeps not happening.

2Garrett Baker
I see no mention of this in the actual text of the third plenum...
3Seth Herd
This seems quite important. If the same debate is happening in China, we shouldn't just assume that they'll race dangerously if we won't. I really wish I understood Xi Jinping and anyone else with real sway in the CCP better.
2Ben Pace
I wonder if lots of people who work on capabilities at Anthropic because of the supposed inevitability of racing with China will start to quit if this turns out to be true…
2habryka
Anyone have a paywall free link? Seems quite important, but I don't have a subscription.
Linch
20

Why do you think pedigree collapse wouldn't swamp the difference? I think that part's underargued
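For readers unfamiliar with the term, a quick sketch of why pedigree collapse bites over these timescales (ballpark, assumed numbers):

```python
# Rough arithmetic on pedigree collapse: naive ancestor slots double every
# generation and overtake any plausible historical population within a few
# dozen generations, so slots must be filled by repeated individuals.

population_circa_1000_ad = 300_000_000   # rough ballpark estimate
years_per_generation = 25                # assumption

for g in (10, 20, 30, 40):
    slots = 2 ** g
    verdict = "exceeds" if slots > population_circa_1000_ad else "below"
    print(f"{g:>2} generations (~{g * years_per_generation} years back): "
          f"{slots:>15,} ancestor slots ({verdict} ~300M alive c. 1000 AD)")
```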

Linch
20

You are definitely allowed to write to anyone! Free speech! In theory your rep should be more responsive to their own districts however. 

Linch
*3819

Anthropic issues questionable letter on SB 1047 (Axios). I can't find a copy of the original letter online. 

1MichaelDickens
If I want to write to my representative to oppose this amendment, who do I write to? As I understand, the bill passed the Senate but must still pass Assembly. Is the Senate responsible for re-approving amendments, or does that happen in Assembly? Also, should I write to a representative who's most likely to be on the fence, or am I only allowed to write to the representative of my district?
aysja
6024

I think this letter is quite bad. If Anthropic were building frontier models for safety purposes, then they should be welcoming regulation. Because building AGI right now is reckless; it is only deemed responsible in light of its inevitability. Dario recently said “I think if [the effects of scaling] did stop, in some ways that would be good for the world. It would restrain everyone at the same time. But it’s not something we get to choose… It’s a fact of nature… We just get to find out which world we live in, and then deal with it as best we can.” Bu... (read more)

1[comment deleted]
8Zach Stein-Perlman
Here's the letter: https://s3.documentcloud.org/documents/25003075/sia-sb-1047-anthropic.pdf I'm not super familiar with SB 1047, but one safety person who is thinks the letter is fine. [Edit: my impression, both independently and after listening to others, is that some suggestions are uncontroversial but the controversial ones are bad on net and some are hard to explain from the "Anthropic is optimizing for safety" position.]
Linch
42

Genes vs environment seems like an obvious thing to track. Most people in most places don't move around that much (unlike many members of our community), so if cancers are contagious, then for many cancers, especially rarer ones, you'd expect to see strong regional correlations (likely stronger than genetic correlations). 

2PeterMcCluskey
Maybe? It doesn't seem very common for infectious diseases to remain in one area. It depends a lot on how they are transmitted. It's also not unusual for a non-infectious disease to have significant geographical patterns. There are cancers which are concentrated in particular areas, but there seem to be guesses for those patterns that don't depend on fungal infections.
Linch
20

Sure, I agree about the pink elephants. I'm less sure about the speed of light.
