All of RobertM's Comments + Replies

If you had some vague prompt like "write an essay about how the field of alignment is misguided" and then proofread it you've met the criteria as laid out.

No, such outputs will almost certainly fail these criteria (since they will by default be written with the typical LLM "style").

4Seth Herd
That's a good point and it does set at least a low bar of bothering to try. But they don't have to try hard. They can almost just append the prompt with "and don't write it in standard LLM style". I think it's a little more complex than that, but not much. Humans can't tell LLM writing from human writing in controlled studies. The question isn't whether you can hide the style or even if it's hard, just how easy. Which raises the question of whether they'd even do that much, because of course they haven't read the FAQ before posting. Really just making sure that new authors read SOMETHING about what's appreciated here would go a long way toward reducing slop posts.
RobertM90

"10x engineers" are a thing, and if we assume they're high-agency people always looking to streamline and improve their workflows, we should expect them to be precisely the people who get a further 10x boost from LLMs. Have you observed any specific people suddenly becoming 10x more prolific?

In addition to the objection from Archimedes, another reason this is unlikely to be true is that 10x coders are often much more productive than other engineers because they've heavily optimized around solving for specific problems or skills that other engineers are bottlenecked by, and most of those optimizations don't readily admit of having an LLM suddenly inserted into the loop.

RobertM80

Not at the moment, but it is an obvious sort of thing to want.

RobertM20

Thanks for the heads up, we'll have this fixed shortly (just need to re-index all the wiki pages once).

RobertM132

Curated.  This post does at least two things I find very valuable:

  1. Accurately represents differing perspectives on a contentious topic
  2. Makes clear, epistemically legible arguments on a confusing topic

And so I think that this post both describes and advances the canonical "state of the argument" with respect to the Sharp Left Turn (and similar concerns).  I hope that other people will also find it helpful in improving their understanding of e.g. objections to basic evolutionary analogies (and why those objections shouldn't make you very optimistic).

RobertM62

Yes:

My model is that Sam Altman regarded the EA world as a memetic threat, early on, and took actions to defuse that threat by paying lip service / taking openphil money / hiring prominent AI safety people for AI safety teams.

In the context of the thread, I took this to suggest that Sam Altman never had any genuine concern about x-risk from AI, or, at a minimum, that any such concern was dominated by the social maneuvering you're describing.  That seems implausible to me given that he publicly expressed concern about x-risk from AI 10 months before OpenAI was publicly founded, and possibly several months before it was even conceived.

2Eli Tyre
I don't claim that he never had any genuine concern. I guess that he probably did have genuine concern (though not necessarily that that was his main motivation for founding OpenAI).
RobertM72

Sam Altman posted Machine intelligence, part 1[1] on February 25th, 2015.  This is admittedly after the FLI conference in Puerto Rico, which is reportedly where Elon Musk was inspired to start OpenAI (though I can't find a reference substantiating his interaction with Demis as the specific trigger), but there is other reporting suggesting that OpenAI was only properly conceived later in the year, and Sam Altman wasn't at the FLI conference himself.  (Also, it'd surprise me a bit if it took nearly a year, i.e. from Jan 2nd[2] to Dec 11th... (read more)

3Eli Tyre
Is this taken to be a counterpoint to my story above? I'm not sure exactly how it's related.
RobertM20

I think it's quite easy to read as condescending.  Happy to hear that's not the case!

RobertM20

I hadn't downvoted this post, but I am not sure why OP is surprised, given that the first four paragraphs, rather than explaining what the post is about, instead celebrate tree murder and insult their (imagined) audience:

so that no references are needed but those any LW-rationalist is expected to have committed to memory by the time of their first Lighthaven cuddle puddle

2lumpenspace
wait - do you consider that an insult? i snuggled with the best of them
RobertM20

I don't think much has changed since this comment.  Maybe someone will make a new wiki page on the subject, though if it's not an admin I'd expect it to mostly be a collection of links to various posts/comments.

re: the table of contents, it's hidden by default but becomes visible if you hover your mouse over the left column on post pages.

2Said Achmiz
That’s… pretty bad. Frankly, I don’t understand how you expect anyone to have any idea of what to expect from the site and the moderation thereof, given this utterly shambolic state of affairs. I’ll just repeat my question from two years ago (which did not receive any answer at the time): ---------------------------------------- It doesn’t do that for me (might be a browser issue). In any case, is there a way to have it be visible by default? I’d really prefer that.
RobertM90

I understand the motivation behind this, but there is little warning that this is how the forum works. There is no warning that trying to contribute in good faith isn't sufficient, and you may still end up partially banned (rate-limited) if they decide you are more noise than signal. Instead, people invest a lot only to discover this when it's too late.

 

In addition to the New User Guide that gets DMed to every new user (and is also linked at the top of our About page), we:

  • Show this comment above the new post form to new users who haven't already had s

... (read more)
6Said Achmiz
This is fine for new users; what about for existing users? I just went to the front page of the site, and it’s not obvious to me where to click to find “The Rules”. The “About” page? Doesn’t seem to be a list of rules. The New User’s Guide? Not really. (There’s a “Rules to be aware of” section at the very, very end of that post, but… surely this isn’t meant to be a list of the rules…? It’s just… three kind of random things.) The LessWrong FAQ? Not really… If I want to know what rules (or guidelines, or… anything, really…) are supposed to be governing my behavior on LW, I actually don’t have any idea where to look. And I’ve been using Less Wrong for a very long time. Related point: when the rules change, how do existing users learn about this? P.S.: What happened to the table of contents on LW post pages? Why can’t I see it anymore?
1Knight Lee
Thank you very much for bringing that up. That does look like a clearer warning, somehow I didn't remember it very well.
RobertM30

Apropos of nothing, I'm reminded of the "<antthinking>" tags originally observed in Sonnet 3.5's system prompt, and this section of Dario's recent essay (bolding mine):

In 2024, the idea of using reinforcement learning (RL) to train models to generate chains of thought has become a new focus of scaling. Anthropic, DeepSeek, and many other companies (perhaps most notably OpenAI who released their o1-preview model in September) have found that this training greatly increases performance on certain select, objectively measurable tasks like math, coding c

... (read more)
RobertM372

When is the "efficient outcome-achieving hypothesis" false?  More narrowly, under what conditions are people more likely to achieve a goal (or harder, better, faster, stronger) with fewer resources?

The timing of this quick take is of course motivated by recent discussion about deepseek-r1, but I've had similar thoughts in the past when observing arguments against e.g. hardware restrictions: that they'd motivate labs to switch to algorithmic work, which would speed up timelines (rather than just reducing the naive expected rate of slowdown).  S... (read more)

8Canaletto
I think you also have to factor in selection bias. Like suppose there are 3 organizations with 100 resource units, 10 with 20 units, 30 with 5 units. And maybe resources are helpful, but not helpful enough that all the advancements will concentrate in the top 3. 
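Canaletto's point can be checked with a quick simulation. Below is a minimal sketch, under an assumed (hypothetical) capability model of log-resources plus per-org noise, using the 3/10/30 org counts from the comment:

```python
import numpy as np

rng = np.random.default_rng(0)

# 3 orgs with 100 resource units, 10 with 20 units, 30 with 5 units.
resources = np.array([100.0] * 3 + [20.0] * 10 + [5.0] * 30)

def top_result_from_big_org(noise_scale):
    # Hypothetical model: capability grows with log(resources) plus org-specific luck.
    score = np.log(resources) + rng.normal(scale=noise_scale, size=resources.size)
    return int(np.argmax(score) < 3)  # the first three entries are the 100-unit orgs

for noise in (0.5, 1.0, 2.0):
    frac = np.mean([top_result_from_big_org(noise) for _ in range(10_000)])
    print(f"noise={noise}: best result comes from a 100-unit org {frac:.0%} of the time")
```

Once per-org noise is comparable to the resource advantage, the single most impressive result often comes from the much larger pool of small orgs, which is the selection effect being described.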
RobertM20

We have automated backups, and should even those somehow find themselves compromised (which is a completely different concern from getting DDoSed), there are archive.org backups of a decent percentage of LW posts, which would be much easier to restore than paper copies.
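For concreteness, the archive.org route could look roughly like the sketch below. It uses the public Wayback Machine "available" endpoint; the example post URL is arbitrary, and a real restore pass would iterate over a full list of post URLs rather than a single one:

```python
import requests

def wayback_snapshot(url: str):
    """Return the closest archive.org snapshot URL for `url`, or None if none exists."""
    resp = requests.get(
        "https://archive.org/wayback/available", params={"url": url}, timeout=30
    )
    resp.raise_for_status()
    closest = resp.json().get("archived_snapshots", {}).get("closest", {})
    return closest.get("url") if closest.get("available") else None

# Arbitrary example; restoring from archive.org would loop this over every post URL.
print(wayback_snapshot("https://www.lesswrong.com/posts/bGpRGnhparqXm5GL7"))
```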

6gwern
There is also GreaterWrong, which I believe caches everything rather than passing through live, so it would be able to restore almost all publicly-visible content, in theory.
RobertM30

I learned it elsewhere, but his LinkedIn confirms that he started at Anthropic sometime in January.

RobertM42

I know I'm late to the party, but I'm pretty confused by https://www.astralcodexten.com/p/its-still-easier-to-imagine-the-end (I haven't read the post it's responding to, but I can extrapolate).  Surely the "we have a friendly singleton that isn't Just Following Orders from Your Local Democratically Elected Government or Your Local AGI Lab" is a scenario that deserves some analysis...?  Conditional on "not dying" that one seems like the most likely stable end state, in fact.

Lots of interesting questions in that situation!  Like, money still ... (read more)

4cousin_it
For cognitive enhancement, maybe we could have a system like "the smarter you are, the more aligned you must be to those less smart than you"? So enhancement would be available, but would make you less free in some ways.
RobertM2512

I was thinking the same thing. This post badly, badly clashes with the vibe of Less Wrong. I think you should delete it, and repost to a site in which catty takedowns are part of the vibe. Less Wrong is not the place for it.

I think this is a misread of LessWrong's "vibes" and would discourage other people from thinking of LessWrong as a place where such discussions should be avoided by default.

With the exception of the title, I think the post does a decent job at avoiding making it personal.

3Holly_Elmore
Yeah actually the employees of Lightcone have led the charge in trying to tear down Kat. It's you who has the better standards, Maxwell, not this site.
RobertM20

Well, that's unfortunate.  That feature isn't super polished and isn't currently in the active development path, but I'll try to see if it's something obvious.  (In the meantime, I'd recommend subscribing to fewer people, or seeing if the issue persists in Chrome.  Other people on the team are subscribed to 100-200 people without obvious issues.)

RobertM20

FWIW, I don't think "scheming was very unlikely in the default course of events" is "decisively refuted" by our results. (Maybe depends a bit on how we operationalize scheming and "the default course of events", but for a relatively normal operationalization.)

Thank you for the nudge on operationalization; my initial wording was annoyingly sloppy, especially given that I myself have a more cognitivist slant on what I would find concerning re: "scheming".  I've replaced "scheming" with "scheming behavior".

 

It's somewhat sensitive to the exact objec

... (read more)
2ryan_greenblatt
Someone could have objections to validity or the assumptions of our paper. On validity, something like priming could be relevant. On the assumptions, they could e.g. think scheming is very unlikely due to thinking that future AIs will be intentionally trained to be highly myopic and corrigible while also thinking that other possible sources of goal conflict are very unlikely. (I'd disagree with this view, but I don't think this view is totally crazy and it isn't refuted by our paper.) I think our work doesn't very clearly refute this post, though I also just think the post is missing multiple important considerations and is overall pretty wrong and confused in its arguments.
RobertM*80

I'd like to internally allocate social credit to people who publicly updated after the recent Redwood/Anthropic result, after previously believing that scheming behavior was very unlikely in the default course of events (or a similar belief that was decisively refuted by those empirical results).

Does anyone have links to such public updates?

(Edit log: replaced "scheming" with "scheming behavior".)

FWIW, I don't think "scheming was very unlikely in the default course of events" is "decisively refuted" by our results. (Maybe depends a bit on how we operationalize scheming and "the default course of events", but for a relatively normal operationalization.)

It's somewhat sensitive to the exact objection the person came in with.

My guess is that most reasonable perspectives should update toward thinking scheming has at least a tiny chance of occurring (>2%), but I wouldn't say a view of <<2% was decisively refuted.

5ryan_greenblatt
Quoting Zvi's post: I don't know of any other clear cut cases. The reviews might also be interesting to look at. I'm not sure if Jacob Andreas and Jasjeet Sekhon have publicly stated prior views on the topic. Yoshua Bengio and Rohin Shah were broadly sympathetic to scheming concerns or similar before.
RobertM104

One reason to be pessimistic about the "goals" and/or "values" that future ASIs will have is that "we" have a very poor understanding of "goals" and "values" right now.  Like, there is not even widespread agreement that "goals" are a meaningful abstraction to use.  Let's put aside the object-level question of whether this would even buy us anything in terms of safety, if it were true.  The mere fact of such intractable disagreements about core philosophical questions, on which hinge substantial parts of various cases for and against doo... (read more)

2Seth Herd
We do have a poor understanding of human values. That's one more reason we shouldn't and probably won't try to build them into AGI. You're expressing a common view among the alignment community. I think we should update from that view to the more likely scenario in which we don't even try to align AGI to human values. What we're actually doing is training LLMs to answer questions as they were intended, and to follow instructions as they were intended. The AI needs to understand human values to some degree to do that, but training is really focused on those things. There's an interesting bit in this interview with Tan Zhi Xuan on this distinction between theory and practice of training LLMs, and to a lesser degree in their paper. Not only is that what we are doing for current AI, I think it's both what we should do for future AGI, and what we probably will do. Instruction-following AGI is easier and more likely than value aligned AGI. It's counterintuitive to think about a highly intelligent agent that wants to do what someone else tells it. But it's not logically incoherent. And when the first human decides what goal to put in the system prompt of the first agent they think might ultimately surpass human competence and intelligence, there's little doubt what they'll put there: "follow my instructions, favoring the most recent". Everything else is a subgoal of that non-consequentialist central goal.   This approach leaves humans in charge, and that's a problem. Ultimately I think that sort of instrucion-following intent alignment can be a stepping-stone to value alignment, once we've got a superintelligent instruction-following system to help us with that very difficult problem. But there's neither a need nor an incentive to aim directly at that with our first AGIs. So alignment will succeed or fail on other issues.   Separately, I fully agree that most people who don't believe in AGI x-risk aren't making a true rejection. They usually really don't believe w
RobertM42

I agree that in spherical cow world where we know nothing about the historical arguments around corrigibility, and who these particular researchers are, we wouldn't be able to make a particularly strong claim here.  In practice I am quite comfortable taking Ryan at his word that a negative result would've been reported, especially given the track record of other researchers at Redwood.

at which point the scary paper would instead be about how Claude already seems to have preferences about its future values, and those preferences for its future values d

... (read more)
RobertM42

I mean, yes, but I'm addressing a confusion that's already (mostly) conditioning on building on it.

RobertM31

The /allPosts page shows all quick takes/shortforms posted, though somewhat de-emphasized.

1Knight Lee
Thank you for the help :) By the way, how did you find this message? I thought I already edited the post to use spoiler blocks, and I hid this message by clicking "remove from Frontpage" and "retract comment" (after someone else informed me using a PM). EDIT: dang it I still see this comment despite removing it from the Frontpage. It's confusing.
RobertM94

This doesn't seem like it'd do much unless you ensured that there were training examples during RLAIF which you'd expect to cause that kind of behavior enough of the time that there'd be something to update against.  (Which doesn't seem like it'd be that hard, though I think separately that approach seems kind of doomed - it's falling into a brittle whack-a-mole regime.)
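To make "something to update against" concrete: one minimal sketch of the data-generation side is below. All names here (`prompts`, `generate`, `is_unwanted`) are hypothetical stand-ins rather than any lab's actual pipeline; the idea is just to seed RLAIF with prompts that sometimes elicit the unwanted behavior and build preference pairs that penalize it:

```python
def build_preference_pairs(prompts, generate, is_unwanted, samples_per_prompt=8):
    """Collect (prompt, chosen, rejected) triples that target a specific unwanted behavior."""
    pairs = []
    for prompt in prompts:  # prompts chosen because they sometimes trigger the behavior
        completions = [generate(prompt) for _ in range(samples_per_prompt)]
        rejected = [c for c in completions if is_unwanted(c)]
        chosen = [c for c in completions if not is_unwanted(c)]
        # A prompt that never surfaces the behavior gives the preference model
        # nothing to update against, so it contributes no pairs.
        if not rejected or not chosen:
            continue
        pairs.extend((prompt, good, bad) for good in chosen for bad in rejected)
    return pairs
```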

Indeed, we should get everyone to make predictions about whether or not this change would be sufficient, and if it isn't, what changes would be sufficient. My prediction would be that this change would not be sufficient but that it would help somewhat.

RobertM40

LessWrong doesn't have a centralized repository of site rules, but here are some posts that might be helpful:

https://www.lesswrong.com/posts/bGpRGnhparqXm5GL7/models-of-moderation

https://www.lesswrong.com/posts/kyDsgQGHoLkXz6vKL/lw-team-is-adjusting-moderation-policy

We do currently require content to be posted in English.

RobertM*30

"It would make sense to pay that cost if necessary" makes more sense than "we should expect to pay that cost", thanks.

it sounds like you view it as a bad plan?

Basically, yes.  I have a draft post outlining some of my objections to that sort of plan; hopefully it won't sit in my drafts as long as the last similar post did.

(I could be off, but it sounds like either you expect solving AI philosophical competence to come pretty much hand in hand with solving intent alignment (because you see them as similar technical problems?), or you expect no

... (read more)
2Noosphere89
I agree that conditional on that happening, this is plausible, but also it's likely that some of the answers from such a philosophically competent being will be unsatisfying to us. One example is that such a philosophically competent AI might tell you that CEV either doesn't exist, or, if it does, is so path-dependent that it cannot resolve moral disagreements, which is actually pretty plausible under my model of moral philosophy.
RobertM70

What do people mean when they talk about a "long reflection"?  The original usages suggest flesh-humans literally sitting around and figuring out moral philosophy for hundreds, thousands, or even millions of years, before deciding to do anything that risks value lock-in, but (at least) two things about this don't make sense to me:

  • A world where we've reliably "solved" for x-risks well enough to survive thousands of years without also having meaningfully solved "moral philosophy" is probably physically realizable, but this seems like a pretty fine needl
... (read more)
2Noosphere89
To answer these questions: 1 possible answer is that something like CEV does not exist, and yet alignment is still solvable anyways for almost arbitrarily capable AI, which could well happen, and for me personally this is honestly the most likely outcome of what happens by default. There are arguments against the idea that CEV even exists or is well defined that are important to note, and we shouldn't assume that technological progress equates with progress towards your preferred philosophy: https://www.lesswrong.com/posts/Y7gtFMi6TwFq5uFHe/some-biases-and-selection-effects-in-ai-risk-discourse#hkoGD6Gwi9YKKZ6S2 https://www.lesswrong.com/posts/SqgRtCwueovvwxpDQ/valence-series-2-valence-and-normativity#2_7_3_Possible_implications_for_AI_alignment_discourse https://joecarlsmith.com/2021/06/21/on-the-limits-of-idealized-values And there might not be any real justifiable way to resolve disagreements between the philosophies/moralities, either, if there isn't a way to converge to a single morality.
5Vladimir_Nesov
Long reflection is a concrete baseline for indirect normativity. It's straightforwardly meaningful, even if it's unlikely to be possible or a good idea to run in base reality. From there, you iterate to do better. Path dependence of long reflection could be addressed by considering many possible long reflection traces jointly, aggregating their own judgement about each other to define which traces are more legitimate (as a fixpoint of some voting/preference setup), or how to influence the course of such traces to make them more legitimate. For example, a misaligned AI takeover within a long reflection trace makes it illegitimate, and preventing such is an intervention that improves a trace. "Locking in" preferences seems like something that should be avoided as much as possible, but creating new people or influencing existing ones is probably morally irreversible, and that applies to what happens inside long reflection as well. I'm not sure that "nonperson" modeling of long reflection is possible, that sufficiently good prediction of long traces of thinking doesn't require modeling people well enough to qualify as morally relevant to a similar extent as concrete people performing that thinking in base reality. But here too considering many possible traces somewhat helps, making all possibilities real (morally valent) according to how much attention is paid to their details, which should follow their collectively self-defined legitimacy. In this frame, the more legitimate possible traces of long reflection become the utopia itself, rather than a nonperson computation planning it. Nonperson predictions of reflection's judgement might steer it a bit in advance of legitimacy or influence decisions, but possibly not much, lest they attain moral valence and start coloring the utopia through their content and not only consequences.
5_will_
On your second point, I think that MacAskill and Ord were more saying “It would be worth it to spend thousands of years figuring out moral philosophy / figuring out what to do with the cosmos, if that’s how long it takes to be ~sure we’ve reached the ‘correct’ answer before locking things in, on account of the astronomical waste argument” than “I literally predict it will take today-humans thousands of years to figure out moral philosophy, even if we make a serious and coordinated effort to do so.” Somewhat relatedly, quoting from the ‘Long Reflection Reading List’ I wrote earlier this year (fn. 4): On your first point, I continue to be curious about your perspective. I basically agree with the following (written by Zach Stein-Perlman), but, based on what you said in your parentheses, it sounds like you view it as a bad plan? (I could be off, but it sounds like either you expect solving AI philosophical competence to come pretty much hand in hand with solving intent alignment (because you see them as similar technical problems?), or you expect not solving AI philosophical competence (while having solved intent alignment) to lead to catastrophe (thus putting us outside the worlds in which x-risks are reliably ‘solved’ for), perhaps in the way Wei Dai has talked about?) 1. ^ We don't need these human-obsoleting AIs to be able to implement CEV. We want to be able to defer to them on tricky wisdom-loaded questions like what should we do about the overall AI situation? They can ask us questions as needed. 2. ^ To avoid being rushed by your own AI project, you also have to ensure that your AI can't be stolen and can't escape, so you have to implement excellent security and control.
RobertM40

I tried to make a similar argument here, and I'm not sure it landed.  I think the argument has since demonstrated even more predictive validity with e.g. the various attempts to build and restart nuclear power plants, directly motivated by nearby datacenter buildouts, on top of the obvious effects on chip production.

3yams
I've just read this post and the comments. Thank you for writing that; some elements of the decomposition feel really good, and I don't know that they've been done elsewhere. I think discourse around this is somewhat confused, because you actually have to do some calculation on the margin, and need a concrete proposal to do that with any confidence. The straw-Pause rhetoric is something like "Just stop until safety catches up!" The overhang argument is usually deployed (as it is in those comments) to the effect of 'there is no stopping.' And yeah, in this calculation, there are in fact marginal negative externalities to the implementation of some subset of actions one might call a pause. The straw-Pause advocate really doesn't want to look at that, because it's messy to entertain counter-evidence to your position, especially if you don't have a concrete enough proposal on the table to assign weights in the right places. Because it's so successful against straw-Pausers, the anti-pause people bring in the overhang argument like an absolute knockdown, when it's actually just a footnote to double check the numbers and make sure your pause proposal avoids slipping into some arcane failure mode that 'arms' overhang scenarios. That it's received as a knockdown is reinforced by the gearsiness of actually having numbers (and most of these conversations about pauses are happening in the abstract, in the absence of, i.e., draft policy). But... just because your interlocutor doesn't have the numbers at hand, doesn't mean you can't have a real conversation about the situations in which compute overhang takes on sufficient weight to upend the viability of a given pause proposal. You said all of this much more elegantly here: ...which feels to me like the most important part. The burden is on folks introducing an argument from overhang risk to prove its relevance within a specific conversation, rather than just introducing the adversely-gearsy concept to justify safety-coded
RobertM20

Should be fixed now.

RobertM20

Good catch, looks like that's from this revision, which was copied over from Arbital - some LaTeX didn't make it through.  I'll see if it's trivial to fix.

2RobertM
Should be fixed now.
RobertM20

The page isn't dead, Arbital pages just don't load sometimes (or take 15+ seconds).

RobertMΩ83116

I understand this post to be claiming (roughly speaking) that you assign >90% likelihood in some cases and ~50% in other cases that LLMs have internal subjective experiences of varying kinds.  The evidence you present in each case is outputs generated by LLMs.

The referents of consciousness for which I understand you to be making claims re: internal subjective experiences are 1, 4, 6, 12, 13, and 14.  I'm unsure about 5.

Do you have sources of evidence (even illegible) other than LLM outputs that updated you that much?  Those seem like very... (read more)

Andrew_CritchΩ13190

The evidence you present in each case is outputs generated by LLMs.

The total evidence I have (and that everyone has) is more than behavioral. It includes

a) the transformer architecture, in particular the attention module,

b) the training corpus of human writing,

c) the means of execution (recursive calling upon its own outputs and history of QKV vector representations of outputs),

d) as you say, the model's behavior, and

e) "artificial neuroscience" experiments on the model's activation patterns and weights, like mech interp research.

When I think about how... (read more)
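As a reference point for items (a) and (c) above, here is a minimal numpy sketch of single-head causal attention, i.e. the QKV computation in question. Shapes and weights are synthetic; this is a toy illustration, not any particular model's implementation:

```python
import numpy as np

def causal_attention(x, Wq, Wk, Wv):
    """Single-head scaled dot-product attention over a sequence of token vectors."""
    Q, K, V = x @ Wq, x @ Wk, x @ Wv             # project inputs to queries/keys/values
    scores = Q @ K.T / np.sqrt(K.shape[-1])      # similarity of each query to each key
    mask = np.tril(np.ones_like(scores))         # causal: attend only to self and the past
    scores = np.where(mask == 1, scores, -np.inf)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                           # each position mixes the value history

rng = np.random.default_rng(0)
d_model, seq_len = 16, 5
x = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
print(causal_attention(x, Wq, Wk, Wv).shape)     # (5, 16)
```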

RobertM108

My impression is that Yudkowsky has harmed public epistemics in his podcast appearances by saying things forcefully and with rather poor spoken communication skills for novice audiences.

I recommend reading the Youtube comments on his recorded podcasts, rather than e.g. Twitter commentary from people with a pre-existing adversarial stance to him (or AI risk questions writ large).

6Seth Herd
Good suggestion, thanks and I'll do that. I'm not commenting on those who are obviously just grinding an axe; I'm commenting on the stance toward "doomers" from otherwise reasonable people. From my limited survey the brand of x-risk concern isn't looking good, and that isn't mostly a result of the amazing rhetorical skills of the e/acc community ;)
RobertM187

On one hand, I feel a bit skeptical that some dude outperformed approximately every other pollster and analyst by having a correct inside-view belief about how existing pollsters were messing up, especially given that he won't share the surveys.  On the other hand, this sort of result is straightforwardly predicted by Inadequate Equilibria, where an entire industry had the affordance to be arbitrarily deficient in what most people would think was their primary value-add, because they had no incentive for accuracy (no skin in the game), and as soon as someo... (read more)

Norvid on Twitter made the apt point that we will need to see the actual private data before we can really judge. Not unusual for lucky people to backrationalize their luck as a sure win.

RobertM42

I'm pretty sure Ryan is rejecting the claim that the people hiring for the roles in question are worse-than-average at detecting illegible talent.

RobertM120

Depends on what you mean by "resume building", but I don't think this is true for "need to do a bunch of AI safety work for free" or similar.  i.e. for technical research, many people that have gone through MATS and then been hired at or founded their own safety orgs have no prior experience doing anything that looks like AI safety research, and some don't even have much in the way of ML backgrounds.  Many people switch directly out of industry careers into doing e.g. ops or software work that isn't technical research.  Policy might seem a b... (read more)

RobertM40

(We switched back to shipping Calibri above Gill Sans Nova pending a fix for the horrible rendering on Windows, so if Ubuntu has Calibri, it'll have reverted back to the previous font.)

2DanielFilan
I believe I'm seeing Gill Sans? But when I google "Calibri" I see text that looks like it's in Calibri, so that's confusing.
RobertM1715

Indeed, such red lines are now made more implicit and ambiguous. There are no longer predefined evaluations—instead employees design and run them on the fly, and compile the resulting evidence into a Capability Report, which is sent to the CEO for review. A CEO who, to state the obvious, is hugely incentivized to decide to deploy models, since refraining to do so might jeopardize the company.

This doesn't seem right to me, though it's possible that I'm misreading either the old or new policy (or both).

Re: predefined evaluations, the old policy nei... (read more)

aysja132

Thanks, I think you’re right on both points—that the old RSP also didn’t require pre-specified evals, and that the section about Capability Reports just describes the process for non-threshold-triggering eval results—so I’ve retracted those parts of my comment; my apologies for the error. I’m on vacation right now so was trying to read quickly, but I should have checked more closely before commenting.

That said, it does seem to me like the “if/then” relationships in this RSP have been substantially weakened. The previous RSP contained sufficiently much wigg... (read more)

RobertM31

But that's a communication issue....not a truth issue.

Yes, and Logan is claiming that arguments which cannot be communicated to him in no more than two sentences suffer from a conjunctive complexity burden that renders them "weak".

That's not trivial. There's no proof that there is such a coherent entity as "human values", there is no proof that AIs will be value-driven agents, etc, etc. You skipped over 99% of the Platonic argument there.

Many possible objections here, but of course spelling everything out would violate Logan's request for a short argument.... (read more)

-3TAG
@Logan Zoellner being wrong doesn't make anyone else right. If the actual argument is conjunctive and complex, then all the component claims need to be high probability. That is not the case. So Logan is right for not quite the right reasons -- it's not length alone. And it wouldn't help anyway. I have read the Sequences, and there is nothing resembling a proof, or even strong argument, for the claim about coherent human values. Ditto the standard claims about utility functions, agency, etc. Reading the Sequences would allow him to understand the LessWrong collective, but should not persuade him. Whereas the same amount of time could, more reasonably, be spent learning how AI actually works. Tracking reality is a thing you have to put effort into, not something you get for free by labelling yourself a rationalist. The original Sequences did not track reality, because they are not evidence based -- they are not derived from academic study or industry experience. Yudkowsky is proud that they are "derived from the empty string" -- his way of saying that they are armchair guesswork. His armchair guesses are based on Bayes, von Neumann rationality, utility maximisation, brute force search etc., which isn't the only way to think about AI, or particularly relevant to real world AI. But it does explain many doom arguments, since they are based on the same model -- the kinds of argument that immediately start talking about values and agency. But of course that's a problem in itself. The short doomer arguments use concepts from the Bayes/von Neumann era in a "sleepwalking" way, out of sheer habit, given that the basis is doubtful. Current examples of AIs aren't agents, and it's doubtful whether they have values. It's not irrational to base your thinking on real world examples, rather than speculation. In addition, they haven't been updated in the light of new developments, something else you have to do to track reality. Tracking reality has a cost -- you have to
RobertM2318

A strong good argument has the following properties:

  • it is logically simple (can be stated in a sentence or two)
    • This is important, because the longer your argument, the more details that have to be true, and the more likely that you have made a mistake.  Outside the realm of pure-mathematics, it is rare for an argument that chains together multiple "therefore"s to not get swamped by the fact that

No, this is obviously wrong.

  1. Argument length is substantially a function of shared premises.  I would need many more sentences to convey a novel argument a
... (read more)
4TAG
A stated argument could have a short length if it's communicated between two individuals who have common knowledge of each other's premises, as opposed to the "Platonic" form, where every load-bearing component is made explicit, and there is nothing extraneous. But that's a communication issue... not a truth issue. A conjunctive argument doesn't become likelier because you don't state some of the premises. The length of the stated argument has little to do with its likelihood. How true an argument is, how easily it persuades another person, and how easy it is to understand have little to do with each other. The likelihood of an ideal argument depends on the likelihood of its load-bearing premises... both how many there are, and their individual likelihoods. Public communication, where you have no foreknowledge of shared premises, needs to keep the actual form closer to the Platonic form. Public communication is obviously the most important kind when it comes to avoiding AI doom. Correct. The fact that you don't have to explicitly communicate every step of an argument to a known recipient doesn't stop the overall probability of a conjunctive argument from depending on the number, and individual likelihood, of the steps of the Platonic version, where everything necessary is stated and nothing unnecessary is stated. Correct. Stated arguments can contain elements that are explanatory, or otherwise redundant for an ideal recipient. Nonetheless, there is a Platonic form that does not contain redundant elements or unstated, load-bearing steps. That's not trivial. There's no proof that there is such a coherent entity as "human values", there is no proof that AIs will be value-driven agents, etc., etc. You skipped over 99% of the Platonic argument there. This is a classic example of failing to communicate with people outside the bubble. Your assumptions about values and agency just aren't shared by the general public or political leaders. PS. @Logan Zoellner That's se
3Logan Zoellner
A fact cannot be self-evidently true if many people disagree with it.
RobertM4523

Credit where credit is due: this is much better in terms of sharing one's models than one could say of Sam Altman, in recent days. 

As noted above the footnotes, many people at Anthropic reviewed the essay.  I'm surprised that Dario would hire so many people he thinks need to "touch grass" (because they think the scenario he describes in the essay sounds tame), as I'm pretty sure that describes a very large percentage of Anthropic's first ~150 employees (certainly over 20%, maybe 50%).

My top hypothesis is that this is a snipe meant to signal Dario... (read more)

(I work at Anthropic.) My read of the "touch grass" comment is informed a lot by the very next sentences in the essay:

But more importantly, tame is good from a societal perspective. I think there's only so much change people can handle at once, and the pace I'm describing is probably close to the limits of what society can absorb without extreme turbulence.

which I read as saying something like "It's plausible that things could go much faster than this, but as a prediction about what will actually happen, humanity as a whole probably doesn't want thing... (read more)

Ben Pace*2717

Credit where credit is due: this is much better in terms of sharing one's models than one could say of Sam Altman, in recent days. 

I mean I guess this is literally true, but to be clear I think it's broadly not much less deceptive (edit: or at least, 'filtered').

I remind you of this Thiel quote:

I think the pro-AI people in Silicon Valley are doing a pretty bad job on, let’s say, convincing people that it’s going to be good for them, that it’s going to be good for the average person, that it’s going to be good for our society. And if it all ends up bei

... (read more)
RobertMΩ143414

Do you have a mostly disjoint view of AI capabilities between the "extinction from loss of control" scenarios and "extinction by industrial dehumanization" scenarios?  Most of my models for how we might go extinct in next decade from loss of control scenarios require the kinds of technological advancement which make "industrial dehumanization" redundant, with highly unfavorable offense/defense balances, so I don't see how industrial dehumanization itself ends up being the cause of human extinction if we (nominally) solve the control problem, rather th... (read more)

Andrew_CritchΩ7130

Do you have a mostly disjoint view of AI capabilities between the "extinction from loss of control" scenarios and "extinction by industrial dehumanization" scenarios?

a) If we go extinct from a loss of control event, I count that as extinction from a loss of control event, accounting for the 35% probability mentioned in the post.

b) If we don't have a loss of control event but still go extinct from industrial dehumanization, I count that as extinction caused by industrial dehumanization caused by successionism, accounting for the additional 50% probabilit... (read more)

RobertM116

Yeah, the essay (I think correctly) notes that the most significant breakthroughs in biotech come from the small number of "broad measurement tools or techniques that allow precise but generalized or programmable intervention", which "are so powerful precisely because they cut through intrinsic complexity and data limitations, directly increasing our understanding and control".

Why then only such systems limited to the biological domain?  Even if it does end up being true that scientific and technological progress is substantially bottlenecked on real-... (read more)

My answer to this question of why Dario thought this:

Yeah, the essay (I think correctly) notes that the most significant breakthroughs in biotech come from the small number of "broad measurement tools or techniques that allow precise but generalized or programmable intervention", which "are so powerful precisely because they cut through intrinsic complexity and data limitations, directly increasing our understanding and control".

Why then only such systems limited to the biological domain?

Is because this is the area that Dario has most experience in being a... (read more)

RobertM40

Not Mitchell, but at a guess:

  • LLMs really like lists
  • Some parts of this do sound a lot like LLM output:
    • "Complex Intervention Development and Evaluation Framework: A Blueprint for Ethical and Responsible AI Development and Evaluation"
    • "Addressing Uncertainties"
  • Many people who post LLM-generated content on LessWrong often wrote it themselves in their native language and had an LLM translate it, so it's not a crazy prior, though I don't see any additional reason to have guessed that here.

Having read more of the post now, I do believe it was at least mostly human... (read more)

RobertM128

I think it pretty much only matters as a trivial refutation of (not-object-level) claims that no "serious" people in the field take AI x-risk concerns seriously, and has no bearing on object-level arguments.  My guess is that Hinton is somewhat less confused than Yann but I don't think he's talked about his models in very much depth; I'm mostly just going off the high-level arguments I've seen him make (which round off to "if we make something much smarter than us that we don't know how to control, that might go badly for us").

4cubefox
He also argued that digital intelligence is superior to analog human intelligence because, he said, many identical copies can be trained in parallel on different data, and then they can exchange their changed weights. He also said biological brains are worse because they probably use a learning algorithm that is less efficient than backpropagation.
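The weight-exchange point can be made concrete with a toy sketch (a hypothetical linear model and synthetic data shards, purely illustrative): identical copies take gradient steps on different data, then merge what they learned by averaging their weights:

```python
import numpy as np

rng = np.random.default_rng(0)

def local_step(w, X, y, lr=0.1):
    """One gradient step of a linear regressor on this copy's local shard."""
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

n_copies, n_features = 4, 8
w = rng.normal(size=n_features)                         # identical starting weights
shards = [(rng.normal(size=(32, n_features)), rng.normal(size=32))
          for _ in range(n_copies)]                     # different data per copy

for _ in range(20):
    local = [local_step(w, X, y) for X, y in shards]    # copies train in parallel
    w = np.mean(local, axis=0)                          # then exchange/average weights

print(np.round(w, 2))
```

Biological brains have no analogue of this pooling step, which is the asymmetry the argument turns on.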
RobertM41

I don't really see how this is responding to my comment.  I was not arguing about the merits of RLHF along various dimensions, or what various people think about it, but pointing out that calling something "an alignment technique" with no further detail is not helping uninformed readers understand what "RLHF" is better (but rather worse).

Again, please model an uninformed reader: how does the claim "RLHF is an alignment technique" constrain their expectations?  If the thing you want to say is that some of the people who invented RLHF saw it as an ... (read more)

2Noosphere89
Yes, this is what I wanted to say here:
RobertM20

This wasn't part of my original reasoning, but I went and did a search for other uses of "alignment technique" in tag descriptions.  There's one other instance that I can find, which I think could also stand to be rewritten, but at least in that case it's quite far down the description, well after the object-level details about the proposed technique itself.

RobertM2-2

Two reasons:

First, the change made the sentence much worse to read.  It might not have been strictly ungrammatical, but it was bad english.

Second, I expect that the average person, unfamiliar with the field, would be left with a thought-terminating mental placeholder after reading the changed description.  What does "is an alignment technique" mean?  Despite being in the same sentence as "is a machine learning technique", it is not serving anything like the same role, in terms of the implicit claims it makes.  Intersubjective agreement ... (read more)

2cubefox
I think it is highly uncontroversial and even trivial to call RLHF an alignment technique, given that it is literally used to nudge the model away from "bad" responses and toward "good" responses. It seems the label "alignment technique" could only be considered inappropriate here for someone who has a nebulous science fiction idea of alignment as a technology that doesn't currently exist at all, like it was seen when Eliezer originally wrote the sequences. I think it's obvious that this view is outdated now.
0Noosphere89
I admit I was not particularly optimizing for much detail here. I use the term "alignment technique" essentially to mean a technique that was invented to make AIs aligned to our values, with the aim of reducing existential risk. Note that it doesn't mean that it will succeed, or that it's a very good technique, or one we should solely rely on, because I make no claim on whether it succeeds or not, just that it's often discussed in the context of alignment of AIs. I consider a lot of the disagreement about RLHF being an alignment technique to be essentially a disagreement about whether it actually works at all, not about whether it's an actual alignment technique being used in labs.
2RobertM
This wasn't part of my original reasoning, but I went and did a search for other uses of "alignment technique" in tag descriptions.  There's one other instance that I can find, which I think could also stand to be rewritten, but at least in that case it's quite far down the description, well after the object-level details about the proposed technique itself.