jacquesthibs
I work primarily on AI Alignment. Scroll down to my pinned Shortform for an idea of my current work and who I'd like to collaborate with.

Website: https://jacquesthibodeau.com

Twitter: https://twitter.com/JacquesThibs

GitHub: https://github.com/JayThibs 

LinkedIn: https://www.linkedin.com/in/jacques-thibodeau/ 

Sequences

On Becoming a Great Alignment Researcher (Efficiently)

Comments
jacquesthibs's Shortform
jacquesthibs · 2y

I shared the following as a bio for EAG Bay Area 2024. I'm sharing it here in case it reaches someone who wants to chat or collaborate.

Hey! I'm Jacques. I'm an independent technical alignment researcher with a background in physics and experience in government (social innovation, strategic foresight, mental health and energy regulation). Link to Swapcard profile. Twitter/X.

CURRENT WORK

  • Collaborating with Quintin Pope on our Supervising AIs Improving AIs agenda (making automated AI science safe and controllable). The current project involves a new method allowing unsupervised model behaviour evaluations. Our agenda.
  • I'm a research lead in the AI Safety Camp for a project on stable reflectivity (testing models for metacognitive capabilities that impact future training/alignment).
  • Accelerating Alignment: augmenting alignment researchers using AI systems. A relevant talk I gave. Relevant survey post.
  • Other research that currently interests me: multi-polar AI worlds (and how that impacts post-deployment model behaviour), understanding-based interpretability, improving evals, designing safer training setups, interpretable architectures, and limits of current approaches (what would a new paradigm that addresses these limitations look like?).
  • Used to focus more on model editing, rethinking interpretability, causal scrubbing, etc.

TOPICS TO CHAT ABOUT

  • How do you expect AGI/ASI to actually develop (so we can align our research accordingly)? Will scale plateau? I'd like to get feedback on some of my thoughts on this.
  • How can we connect the dots between different approaches? For example, connecting the dots between Influence Functions, Evaluations, Probes (detecting truthful direction), Function/Task Vectors, and Representation Engineering to see if they can work together to give us a better picture than the sum of their parts.
  • Debate over which agenda actually contributes to solving the core AI x-risk problems.
  • What if the pendulum swings in the other direction, and we never get the benefits of safe AGI? Is open source really as bad as people make it out to be?
  • How can we make something like the d/acc vision (by Vitalik Buterin) happen?
  • How can we design a system that leverages AI to speed up progress on alignment? What would you value the most?
  • What kinds of orgs are missing in the space?

POTENTIAL COLLABORATIONS

  • Examples of projects I'd be interested in: extending either the Weak-to-Strong Generalization paper or the Sleeper Agents paper, understanding the impacts of synthetic data on LLM training, working on ELK-like research for LLMs, experiments on influence functions (studying the base model and its SFT, RLHF, iterative training counterparts; I heard that Anthropic is releasing code for this "soon") or studying the interpolation/extrapolation distinction in LLMs.
  • I’m also interested in talking to grantmakers for feedback on some projects I’d like to get funding for.
  • I'm slowly working on a guide for practical research productivity for alignment researchers to tackle low-hanging fruits that can quickly improve productivity in the field. I'd like feedback from people with solid track records and productivity coaches.

TYPES OF PEOPLE I'D LIKE TO COLLABORATE WITH

  • Strong math background, can understand Influence Functions enough to extend the work.
  • Strong machine learning engineering background. Can run ML experiments and fine-tuning runs with ease. Can effectively create data pipelines.
  • Strong application development background. I have various project ideas that could speed up alignment researchers; I'd be able to execute them much faster if I had someone to help me build my ideas fast. 
jacquesthibs's Shortform
jacquesthibs · 17m

Employees at AGI companies might want to consider leaving to start a safety-focused startup if they can, particularly if they can arrange a deal with their former lab under which the startup's work would improve safety during internal deployment.

Their star power alone allows them to raise at ridiculous valuations without an idea or product.

Look at Thinking Machines! Even Anthropic is an example of this. Though I recognize lots of people see those as negative examples.

Safety researchers outside of the labs can also try to start companies, but it's a much steeper battle to raise money and build a world-class team than it would be for researchers leaving an AGI lab to found something new.

It may be easier for an outside-the-labs founder to build an org by recruiting lab employees than by building something big on their own.

Consider status when thinking through your career comparative advantage.

That said, if you don't think you'll be able to have a positive impact with a startup from the outside, there are better options. Employees at labs can have fairly large compute budgets, so the startup may need to raise a ton ($100M–1B+) to be worth it comparatively.

AI safety undervalues founders
jacquesthibs · 20h

> Like, yes, there are some more interesting monitor-shaped RL environments, and I would actually be interested in digging into the details of how good or bad some of them would be

As part of my startup exploration, I would like to discuss this as well. It would be helpful to clarify my thinking on whether there's a shape of such a business that could be meaningfully positive. I've started reaching out to people who work in the labs to get better context on this. I think it would be good to dig deeper into Evan's comment on the topic.

I'm going to start a Google Doc, but I would love to talk in person with folks in the Bay about this to ideate and refine it faster.

jacquesthibs's Shortform
jacquesthibs · 20h

I mainly didn't do it because I thought Ryan wrote a useful post, and I didn't want to derail (what I felt was supposed to be) the conversation further. But maybe you're right, and it would be fine.

jacquesthibs's Shortform
jacquesthibs · 2d

Habryka responding to Ryan Kidd:

> > the bar at MATS has raised every program for 4 years now
>
> What?! Something terrible must be going on in your mechanisms for evaluating people (which to be clear, isn't surprising, indeed, you are the central target of the optimization that is happening here, but like, to me it illustrates the risks here quite cleanly).
>
> It is very very obvious to me that median MATS participant quality has gone down continuously for the last few cohorts. I thought this was somewhat clear to y'all and you thought it was worth the tradeoff of having bigger cohorts, but you thinking it has "gone up continuously" shows a huge disconnect.
>
> Like, these days at the end of a MATS program half of the people couldn't really tell you why AI might be an existential risk at all. Their eyes glaze over when you try to talk about AI strategy. IDK, maybe these people are better ML researchers, but obviously they are worse contributors to the field than the people in the early cohorts.

One thing to note about the first two MATS cohorts is that they occurred before the FTX crash (and pre-ChatGPT!). [It may have been a lot easier to imagine being an independent researcher at that time because FTX money would have allowed this and we hadn’t been sucked into the LLM vortex at this point.]

I recall that when I was in MATS 2, AI safety orgs were very limited, and I felt there was a stronger bias towards becoming an independent researcher. Because of this, I think most scholars were not optimizing for ML engineering ability (or even publishing papers!), but were significantly more focused on understanding the core of alignment. It felt like very few of us had aspirations of joining an AGI lab (though a few of us did end up there, such as Sam Marks; I'm not sure what his aspirations were). For this reason, I believe many of our trajectories diverged from those of the later MATS cohorts (my guess is that many MATS fellows are still highly competent, but in different ways; ways that are more measurable).

And, likely in part because I've been out of the loop for the later cohorts, when I ask myself "which alignment researchers seem to understand the core problems in alignment and have not over-indexed on LLMs?", I think mostly of people in those first two cohorts.


On a personal note, I never ended up applying to any AGI lab and have been trying to have the highest impact I can from outside the labs. I also avoided research directions that I expected new researchers to have extreme incentives to explore (namely mech interp, which I stopped working on in February 2023 after realizing it would no longer be neglected and that companies like Anthropic would eventually hire aspiring mech interp researchers).

Unfortunately, I've personally felt disappointed with my progress over the years, though I think it's obviously harder to have an impact if you are constantly exploring new directions, as I have been (had I stuck with mech interp, I might be leading a team in that research direction by now).

On the other hand, there's another concern I've been wary of in the context of AI safety startups (which is what I'm currently exploring) and research in general: following the short-term success gradient. In startups, you can start with a noble vision and then become increasingly pressured away from it simply because you are pursuing the customer gradient and "building what people want." If your goal is large-scale (venture) success, that only makes sense; you need customers and traction for your Series A, after all. Even in research, there's only so much fucking around you can do before people want something legible from you.

Anyway, while I haven't started a successful AI safety startup at this point, at least part of that has come from taking my time to find which mountain I want to climb and avoiding locking myself into a path that doesn't end up making progress on the core technical problems in alignment.

Andrej Karpathy on LLM cognitive deficits
jacquesthibs · 7d

Andrej also tweeted this:

> The race for LLM "cognitive core" - a few billion param model that maximally sacrifices encyclopedic knowledge for capability.

Folks are trying to develop this cognitive core. They generally seek to leverage better training data strategies and meta-learning to instill problem-solving abilities with less reliance on learned facts to "cheat" while solving a task.

Insofar As I Think LLMs "Don't Really Understand Things", What Do I Mean By That?
jacquesthibs · 9d

I’ve been working towards automated research (for safety) for a long time. After a ton of reflection and building in this direction, I’ve landed on a similar opinion as presented in this post.

I think LLM scaffolds will solve some problems, but they will be limited in ways that make it hard to solve incredibly hard problems. You can claim that LLMs can just use a scratchpad as a form of continual online learning, but it feels like this will hit limits. Information loss and the difficulty of internalizing new information feel like bottlenecks.

Scale will help, but it's unclear how far it will go, and it's clearly not economical.

That said, I still think automated research for safety is underinvested in.

jacquesthibs's Shortform
jacquesthibs · 11d

> You're going to change it as you go along, as you get feedback from users and discover what people really need.

This is one part I feel iffy about, because I'm concerned that following the customer gradient will lead to a local minimum that eventually detaches from where I'd like to go.

That said, it definitely feels correct to reflect on one's alignment and incentives. The pull is real:

> All of this makes it tricky to start a pro-alignment company but I think it is worth trying because when people do create a successful company it creates a nexus of smart people and money to spend that can attack a lot of problems that aren't possible in the "nonprofit research" world.

Yeah, that's the vision! I'd have given up and taken another route if I didn't think there was value in pursuing a pro-safety company.

jacquesthibs's Shortform
jacquesthibs · 11d

Building an AI safety business that tackles the core challenges of the alignment problem is hard.

Epistemic status: uncertain; trying to articulate my cruxes. Please excuse the scattered nature of these thoughts; I'm still trying to make sense of all of it.

You can build a guardrails or evals platform, but if your main threat model involves misalignment via internal deployment of self-improving AI (potentially stemming from something like online learning on hard problems like alignment, which leads to AI safety sabotage), the process is so tied to capabilities that you will likely never have the ability to influence it. You can build reliability-as-a-business, but this probably speeds up timelines via second-order effects and doesn't really matter for superintelligence.

I guess you can home in on the types of problems where Goodharting is an obvious issue and build reliable detectors to help reduce it. Maybe you can find companies that would value that as a feature, and you can relate it to the alignment-relevant situations.

You can build RL environments, sell evals or sell training data, but you still seemingly end up too far removed from what is happening internally.

You could choose a high-stakes vertical you can make money in as a test-bed for alignment and build tooling/techniques that provide a high level of guarantees.

If you have a theory of change, it will likely need to be either some technical alignment breakthrough you make legible and low-friction to incorporate, or some open-source infra the labs can leverage.

You can build ControlArena or Inspect, open-source it, and then try to make a business around it, but of course you are not tackling the core alignment challenges.

Unless your entire theory of change is that the labs will port the infrastructure you build into their local Frankenstein infra, and that Control ends up being the only thing the labs needed to solve alignment with AIs. And I guess, from a startup perspective, you recognize that building AI safety sabotage monitors doesn't really relate 1-to-1 with what business owners care about right now. You essentially use your contract with Anthropic as a competence signal for VC money and getting customers.

You can do mech interp, but again, when are you solving the superalignment problem?

So what do you do if you are under the impression that the greatest source of the risk is within the labs? Of course, you can just drop the whole startup direction and do research/governance; many end up inside the labs. You could keep doing a startup, but basically hope that your evals/monitoring product reduces some sources of risk, and perhaps donate some of the money to fundamental alignment research.

I'm not really sure what to make of this. I still have some startup ideas that I think would be overall good for safety, but these are things I've been thinking a lot about recently and wanted to get my thoughts out there in case anyone wants to talk about it. The core thing is that it feels like there are a lot of startups you can build as an AI company that would do things like robustify the world against AI, but tackling the core and conceptual problems and linking that to a venture-backed business is rough.

Daniel Tan's Shortform
jacquesthibs12d20

FYI, I've been thinking about this as well, and I've noted something similar here.

I'm not really sure what to say about the question "why would you think the default starting point is aligned?". The thing I wonder about is whether there is a way to reliably gain strong evidence of an increasingly misaligned nature developing through training.

On another note, my understanding is partly informed by this Twitter comment by Eliezer:

> Humans doing human psychology will look at somebody lounging listlessly on a sofa and think, "Huh, that person there doesn't seem very ambitious; I bet they're not that dangerous." They're talking about a real thing in the space of human psychology, but unfortunately that real thing does not map onto math in any simple way.
>
> The sofa human, if we imagine for a moment that we're talking in 1990 before the age of Google Maps, might hear about a new comic-book store and successfully plot their way across town on a previously untaken route, in order to buy a new kind of strategic board game, which they learn to play that night even though they've never played it before, and then they challenge one of their friends and win. There's all kinds of puzzles the sofa human could solve which a chimpanzee could not, involving means-end reasoning, forward chaining and backchaining meeting in the middle, learning new categories about tactics that work or don't work...
>
> And yet the sofa human seems so soft and safe and unambitious! You can get a bunch of minimum-wage labor out of them, and they don't try to take over the world at *all*. They don't even talk about *wanting* to take over the world, except insofar as impotent national-politics gabble is a behavior they've learned to imitate from other humans. "If only our AIs could be like this!" some people think.
>
> And there are really so many, many things going on here. I am not sure where I ought to start. I will start somewhere anyways.
>
> The sofa human has been entrained, on a sub-evolutionary timescale, by intrinsic brain rewards, by externally stimulated punishments, to have been rewarded on past occasions for using means-end reasoning on playing chess, but not for using means-end reasoning on tasks similar to "taking over the world". They can't, in fact, take over the world, and smaller tasks in the same sequence, like becoming Mayor of Oakland or Governor of California, are also unrewarding to them. This isn't some deep category written on the structure of stars, but it's a natural category to *you*, who is also human, so it's not surprising that the description of what the sofa human has and hasn't learned to think about has a short description in your own native mental language, and that you can do a good job of predicting them using that description. It's not a sofa *alien*.
>
> It happens, even, that the board game is *about* taking over the world - or a rather simple logical structure meant to mimic that, under some hypothetical circumstances - and the sofadweller sure is coming up with some clever tactics in that board game! Weird, huh?
>
> Already we have several important observations, here:
> - It's not that the sofadweller lacks the *underlying basic cognitive machinery* to do general means-end reasoning on the particular topic of "world takeovers". There's a surface-level learned behavior not to *use* the general machinery for that specific topic. You can ask them to play a board game about it and they'll do that.
> - It's not like the sofadweller is way smarter than you and thinks much faster than you and was faced with an actual opportunity to solve their comic-book-related problems by taking over the world as an intermediate step, which they then very corrigibly turned down. It's not like they were *offered* rulership of the Earth and dominion of the galaxy, via some clearly visible pathway, and turned it down.
> - Your ability to describe the sofadweller in simple-sounding standard humanese words like "unambitious" and get out nice useful predictions, possibly has something to do with you two not being utterly alien minds relative to each other.

Posts

Automating AI Safety: What we can do today (4mo)
What Makes an AI Startup "Net Positive" for Safety? (7mo)
How much I'm paying for AI productivity software (and the future of AI use) (1y)
Shane Legg's necessary properties for every AGI Safety plan (2y)
AISC Project: Benchmarks for Stable Reflectivity (2y)
Research agenda: Supervising AIs improving AIs (3y)
Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky (3y)
Practical Pitfalls of Causal Scrubbing (3y)
Can independent researchers get a sponsored visa for the US or UK? (3y)
What's in your list of unsolved problems in AI alignment? (3y)