My name is Alex Turner. I'm a research scientist at Google DeepMind on the Scalable Alignment team. My views are strictly my own; I do not represent Google. Reach me at alex[at]turntrout.com
I, uh, didn't say you "say" either of those
I wasn't claiming you were saying I had used those exact phrases.
Your original comment implies that I expressed the sentiments for which you mocked me - such as the anecdote "crystallizing everything wrong about Eliezer" (the quotes are there because you said this). I then replied to point out that I did not, in fact, express those sentiments. Therefore, your mockery was invalid.
Although I don't usually write LW comments, I'm writing a post right now and this is helping me clarify my thoughts on a range of historical incidents.
In hindsight, I'm worried that you wrote this apology. I think it's an unhealthy obeisance.
I suspect you noticed how Eliezer often works to degrade the status of people who disagree with him and otherwise treats them poorly. As I will support in an upcoming essay, his writing is often optimized to exploit intellectual insecurity (e.g. by frequently praising his own expertise, or appealing to a fictional utopia of fictional geniuses who agree that you're an idiot or wrong[1]) and to demean others' contributions (e.g. by claiming to have invented them already, or calling them fake, or emphasizing how far behind everyone else is). It's not that it's impossible for these claims to have factual merit, but rather that the presentation and usage of these claims seem optimized to push others down. This has the effect of increasing his own status.
Anger and frustration are rational reactions in that situation (though it's important to express those emotions in healthy ways - I think your original comment wasn't perfect there). And yet you ended up the one humbled for focusing on status too much!
See https://www.lesswrong.com/posts/tcCxPLBrEXdxN5HCQ/shah-and-yudkowsky-on-alignment-failures and search for "even if he looks odd to you because you're not seeing the population of other dath ilani."
It does cut against the point of the post. He was wrong in a way that pertains to its key point. He makes fun of "magical categories" as "simple little words that turn out to carry all the desired functionality of the AI", but it turns out those "simple little words" actually work. Lol.
In this post, you can also see the implicit reliance on counting arguments against good generalization (e.g. "superexponential conceptspace"). Those arguments are, evidently, wrong - or at least irrelevant. He fell into the standard statistical learning theoretic trap of caring about e.g. VC dimension since he was too pessimistic about inductive biases.
Now you, finally presented with a tiny molecular smiley - or perhaps a very realistic tiny sculpture of a human face - know at once that this is not what you want to count as a smile. But that judgment reflects an unnatural category, one whose classification boundary depends sensitively on your complicated values.
I'll wager that an LLM won't get this one wrong. *goes to check* - yup, it didn't:
My sense is that neither of us have been very persuaded by those conversations, and I claim that's not very surprising, in a way that's epistemically defensible for both of us. I've spent literal years working through the topic myself in great detail, so it would be very surprising if my view was easily swayed by a short comment chain—and similarly I expect that the same thing is true of you, where you've spent much more time thinking about this and have much more detailed thoughts than are easy to represent in a simple comment chain.
I've thought about this claim more over the last year. I now disagree. I think that this explanation makes us feel good but ultimately isn't true.
I can point to several times where I have quickly changed my mind on issues that I have spent months or years considering:
I think I've probably changed my mind on a range of smaller issues (closer to the size of the deceptive alignment case) but have forgotten about them. The presence of example (1) above particularly suggests to me the presence of similar google-doc-mediated insights which happened fast; where I remember one example, probably I have forgotten several more.
To conclude, I think people in comment sections do in fact spend lots of effort to avoid looking dumb, wrong, or falsified, and forget that they're supposed to be seeking truth.
It seems to me that often people rehearse fancy and cool-sounding reasons for believing roughly the same things they always believed, and comment threads don't often change important beliefs. Feels more like people defensively explaining why they aren't idiots, or why they don't have to change their mind. I mean, if so—I get it, sometimes I feel that way too. But it sucks and I think it happens a lot.
In part, I think, because the site makes truth-seeking harder by spotlighting monkey-brain social-agreement elements.
Your comments' points seem like further evidence for my position. That said, your comment appears to serve the function of complicating the conversation, and that happens to have the consequence of diffusing the impact of my point. I do not allege that you are doing so on purpose, but I think it's important to notice. I would have been more convinced by a reply of "no, you're wrong, here's the concrete bet(s) EY made or was willing to make but Paul balked."
I will here repeat a quote[1] which seems relevant:
[Christiano][12:29]
my desire to bet about "whatever you want" was driven in significant part by frustration with Eliezer repeatedly saying things like "people like Paul get surprised by reality" and me thinking that's nonsense
- The journey is a lot harder to predict than the destination. Cf. "it's easier to use physics arguments to predict that humans will one day send a probe to the Moon, than it is to predict when this will happen or what the specific capabilities of rockets five years from now will be". Eliezer isn't claiming to have secret insights about the detailed year-to-year or month-to-month changes in the field; if he thought that, he'd have been making those near-term tech predictions already back in 2010, 2015, or 2020 to show that he has this skill
First of all, I disagree with the first claim and am irritated that you stated it as a fact instead of saying "I think that...". My overall takeaway from this paragraph, as it pertains to my point, is that you're pointing out that Eliezer doesn't make predictions because he can't / doesn't have epistemic alpha. That accords with my point of "EY was unwilling to bet."
- From Eliezer's perspective, Paul is claiming to know a lot about the future trajectory of AI, and not just about the endpoints: Paul thinks progress will be relatively smooth and continuous, and thinks it will get increasingly smooth and continuous as time passes and more resources flow into the field. Eliezer, by contrast, expects the field to get choppier as time passes and we get closer to ASI.
My takeaway, as it relates to my quoted point: Either Eliezer's view makes no near-term falsifiable predictions which differ from the obvious ones, or only makes meta-predictions which are hard to bet on. Sounds to my ears like his models of alignment don't actually constrain his moment-to-moment anticipations, in contrast to my own, which once destroyed my belief in shard theory on a dime (until I realized I'd flipped the data, and undid the update). This perception of "the emperor has no constrained anticipations" I have is a large part of what I am criticizing.
- A way to bet on this, which Eliezer repeatedly proposed but wasn't able to get Paul to do very much, would be for Paul to list out a bunch of concrete predictions that Paul sees as "yep, this is what smooth and continuous progress looks like". Then, even though Eliezer doesn't necessarily have a concrete "nope, the future will go like X instead of Y" prediction, he'd be willing to bet against a portfolio of Paul-predictions: when you expect the future to be more unpredictable, you're willing to at least weakly bet against any sufficiently ambitious pool of concrete predictions.
So Eliezer offered Paul the opportunity for Paul to unilaterally stick his neck out on a range of concrete predictions, so that Eliezer could judge Paul's overall predictive performance against some unknown and subjective baseline which Eliezer has in his head, or perhaps against some group of "control" predictors? That sounds like the opposite of "willing to make concrete predictions" and feeds into my point about Paul not being able to get Eliezer to bet.
Edit: If there was a more formal proposal which actually cashes out into resolution criteria and Brier score updates for both of them, then I'm happier with EY's stance but still largely unmoved; see my previous comment above about the emperor.
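For concreteness, by "Brier score updates" I mean symmetric, legible scoring along the lines of the sketch below (the questions and probabilities are invented for illustration):

```typescript
// Brier score: mean squared error between stated probabilities and outcomes.
// 0 is perfect; always answering 50% scores 0.25.
function brierScore(forecasts: { p: number; happened: boolean }[]): number {
  const sum = forecasts.reduce(
    (acc, f) => acc + (f.p - (f.happened ? 1 : 0)) ** 2,
    0,
  );
  return sum / forecasts.length;
}

// Hypothetical portfolio of resolved predictions:
const score = brierScore([
  { p: 0.8, happened: true },  // contributes (0.8 - 1)^2 = 0.04
  { p: 0.3, happened: false }, // contributes (0.3 - 0)^2 = 0.09
  { p: 0.6, happened: true },  // contributes (0.6 - 1)^2 = 0.16
]); // ≈ 0.097
```

Both parties state probabilities against the same resolution criteria, and both get scored; that's the kind of mutual neck-sticking-out I'd want to see.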
Eliezer was also more interested in trying to reach mutual understanding of the views on offer, as opposed to "let's bet on things immediately, never mind the world-views." But insofar as Paul really wanted to have the bets conversation instead, Eliezer sunk an awful lot of time into trying to find operationalizations Paul and he could bet on, over many hours of conversation.
This paragraph appears to make two points. First, Eliezer was less interested in betting than in having long dialogues. I agree. Second, Eliezer spent a lot of time at least appearing as if he were trying to bet. I agree with that as well. But I don't give points for "trying" here.
Giving points for "trying" is in practice "giving points for appearing to try", as is evident from the literature on specification gaming. Giving points for "appeared to try" opens up the community to invasion by bad actors who Gish gallop their interlocutors into giving up the conversation. Prediction is what counts.
world-models like Eliezer's ("long-term outcome is more predictable than the detailed year-by-year tech pathway")
Nitpick, but that's not a "world-model." That's a prediction.
even after actual bets were in fact made, and tons of different high-level predictions were sketched out
Why write this without citing? Please cite and show me the credences and the resolution conditions.
If anyone entering this thread wishes to read the original dialogue for themselves, please see section 10.3 of https://www.lesswrong.com/posts/fS7Zdj2e2xMqE6qja/more-christiano-cotra-and-yudkowsky-on-ai-progress
I'm quite excited by this work. Principled justification of various techniques for MELBO, insights into feature multiplicity, a potential generalized procedure for selecting steering coefficients... all in addition to making large progress on the problem of MELBO via e.g. password-locked MATH and vanilla activation-space adversarial attacks.
(I think individual FB questions can toggle whether to show/hide predictions before you've made your own)
I think it should be hidden by default in the editor, with a user-side setting to show by default for all questions.
Great point! I made this design choice back in April, so I wasn't as aware of the implications of `localStorage`.

*Adds his 61st outstanding to-do item.*

IIRC my site checks (in descending priority):

1. `localStorage`, to see if they've already told my site a light/dark preference;
2. otherwise, the OS-level preference ("auto").

The idea is "I'll try doing the right thing (auto), and if the user doesn't like it they can change it and I'll listen to that choice." Possibly it will still be counterintuitive to many folks, as Said quoted in a sibling comment.
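Concretely, the fallback chain looks something like this sketch (the storage key `saved-theme` and the markup hook are illustrative, not my site's actual identifiers):

```typescript
// Sketch of the descending-priority check described above.
function resolveTheme(): "light" | "dark" {
  // 1. Highest priority: an explicit choice the user already made on this site.
  const saved = localStorage.getItem("saved-theme");
  if (saved === "light" || saved === "dark") {
    return saved;
  }
  // 2. Otherwise "auto": follow the OS-level preference.
  const prefersDark = window.matchMedia("(prefers-color-scheme: dark)").matches;
  return prefersDark ? "dark" : "light";
}

// An explicit choice persists, so it wins on every future visit:
function setTheme(theme: "light" | "dark"): void {
  localStorage.setItem("saved-theme", theme);
  document.documentElement.dataset.theme = theme;
}
```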
Suppose you're doing speculative decoding on Gemini using Gemini Flash as the cheap model. Train Flash to have a head for each metric of interest (like "is this token part of text which is scheming"). Then you run Flash anyways for speculative decoding, leading to zero amortized monitoring tax (just the fixed cost of training the heads).
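A minimal sketch of the shape I have in mind (every interface below is hypothetical; a real serving stack works on batched tensors, and the heads would be trained linear probes on Flash's hidden states):

```typescript
// Hypothetical interfaces; none of these names come from a real serving stack.
interface DraftOutput {
  draftTokens: number[];    // tokens proposed by the cheap model (Flash)
  hiddenStates: number[][]; // one hidden vector per proposed token
}

interface DraftModel {
  propose(context: number[], k: number): DraftOutput;
}

interface TargetModel {
  // Standard speculative decoding: accept a verified prefix of the draft.
  verify(context: number[], draftTokens: number[]): number[];
}

// Linear monitor head + sigmoid over a hidden state, e.g. scoring
// "is this token part of text which is scheming".
function monitorScore(hidden: number[], weights: number[], bias: number): number {
  const z = hidden.reduce((acc, h, i) => acc + h * weights[i], bias);
  return 1 / (1 + Math.exp(-z));
}

function decodeStep(
  context: number[],
  draft: DraftModel,
  target: TargetModel,
  headWeights: number[],
  headBias: number,
  flagThreshold = 0.9,
): { accepted: number[]; flagged: boolean } {
  // The draft forward pass runs regardless, for speculative decoding...
  const { draftTokens, hiddenStates } = draft.propose(context, 8);
  // ...so reading monitor heads off its hidden states is (amortized) free.
  const flagged = hiddenStates.some(
    (h) => monitorScore(h, headWeights, headBias) > flagThreshold,
  );
  const accepted = target.verify(context, draftTokens);
  return { accepted, flagged };
}
```

The point is structural: `draft.propose` already has to run for speculative decoding, so the only marginal cost of monitoring is the linear heads themselves.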