I work primarily on AI Alignment. Scroll down to my pinned Shortform for an idea of my current work and who I'd like to collaborate with.
Website: https://jacquesthibodeau.com
Twitter: https://twitter.com/JacquesThibs
GitHub: https://github.com/JayThibs
You're going to change it as you go along, as you get feedback from users and discover what people really need.
This is one part I feel iffy on because I'm concerned that following the customer gradient will lead to a local minimum that eventually detaches from where I'd like to go.
That said, it definitely feels correct to reflect on one's alignment and incentives. The pull is real:
All of this makes it tricky to start a pro-alignment company, but I think it is worth trying: when people do create a successful company, it creates a nexus of smart people and money to spend that can attack a lot of problems that can't be tackled in the "nonprofit research" world.
Yeah, that's the vision! I'd have given up and taken another route if I didn't think there was value in pursuing a pro-safety company.
Building an AI safety business that tackles the core challenges of the alignment problem is hard.
Epistemic status: uncertain; trying to articulate my cruxes. Please excuse the scattered nature of these thoughts; I'm still trying to make sense of all of it.
You can build a guardrails or evals platform, but if your main threat model involves misalignment via internal deployment of self-improving AI (potentially stemming from something like online learning on hard problems like alignment, which leads to AI safety sabotage), that process is so tied to capabilities that you will likely never be able to influence it. You can build reliability-as-a-business, but this probably speeds up timelines via second-order effects and doesn't really matter for superintelligence.
I guess you can home in on the types of problems where Goodharting is an obvious issue and build reliable detectors to help reduce it. Maybe you can find companies that would value that as a feature and relate it to the alignment-relevant situations.
You can build RL environments, sell evals or sell training data, but you still seemingly end up too far removed from what is happening internally.
You could choose a high-stakes vertical you can make money in as a test-bed for alignment and build tooling/techniques that ensure a high level of guarantees.
If you have a theory of change, it will likely need to be some technical alignment breakthrough you make legible and low-friction to incorporate, or some open-source infra the labs can leverage.
You can build ControlArena or Inspect, open-source it, and then try to make a business around it, but of course you are not tackling the core alignment challenges.
Unless your entire theory of change is building infrastructure the labs will port into their local Frankenstein infra, and Control ends up being the only thing the labs needed to solve alignment with AIs. And I guess from a startup perspective, you recognize that building AI safety sabotage monitors doesn't really map 1-to-1 onto what business owners care about right now. You essentially use your contract with Anthropic as a competence signal for raising VC money and getting customers.
You can do mech interp, but again, when are you solving the superalignment problem?
So what do you do if you are under the impression that the greatest source of risk is within the labs? Of course, you can just drop the whole startup direction and do research/governance; many end up inside the labs. You could keep doing a startup, but basically hope that your evals/monitoring product reduces some sources of risk, and perhaps donate some of the money to fundamental alignment research.
I'm not really sure what to make of this. I still have some startup ideas that I think would be overall good for safety, but these are things I've been thinking a lot about recently and wanted to get my thoughts out there in case anyone wants to talk about it. The core thing is that it feels like there are a lot of startups you can build as an AI company that would do things like robustify the world against AI, but tackling the core, conceptual problems and linking them to a venture-backed business is rough.
FYI, I've been thinking about this too, and I've noted something similar here.
I'm not really sure what to say about the "why would you think the default starting point is aligned" question. The thing I wonder about is whether there is a way to reliably gain strong evidence of an increasingly misaligned nature developing through training.
On another note, my understanding is partly informed by this Twitter comment by Eliezer:
Humans doing human psychology will look at somebody lounging listlessly on a sofa and think, "Huh, that person there doesn't seem very ambitious; I bet they're not that dangerous." They're talking about a real thing in the space of human psychology, but unfortunately that real thing does not map onto math in any simple way.
The sofa human, if we imagine for a moment that we're talking in 1990 before the age of Google Maps, might hear about a new comic-book store and successfully plot their way across town on a previously untaken route, in order to buy a new kind of strategic board game, which they learn to play that night even though they've never played it before, and then they challenge one of their friends and win. There's all kinds of puzzles the sofa human could solve which a chimpanzee could not, involving means-end reasoning, forward chaining and backchaining meeting in the middle, learning new categories about tactics that work or don't work...
And yet the sofa human seems so soft and safe and unambitious! You can get a bunch of minimum-wage labor out of them, and they don't try to take over the world at *all*. They don't even talk about *wanting* to take over the world, except insofar as impotent national-politics gabble is a behavior they've learned to imitate from other humans. "If only our AIs could be like this!" some people think.
And there are really so many, many things going on here. I am not sure where I ought to start. I will start somewhere anyways.
The sofa human has been entrained, on a sub-evolutionary timescale, by intrinsic brain rewards, by externally stimulated punishments, to have been rewarded on past occasions for using means-end reasoning on playing chess, but not for using means-end reasoning on tasks similar to "taking over the world". They can't, in fact, take over the world, and smaller tasks in the same sequence, like becoming Mayor of Oakland or Governor of California, are also unrewarding to them. This isn't some deep category written on the structure of stars, but it's a natural category to *you*, who is also human, so it's not surprising that the description of what the sofa human has and hasn't learned to think about has a short description in your own native mental language, and that you can do a good job of predicting them using that description. It's not a sofa *alien*.
It happens, even, that the board game is *about* taking over the world - or a rather simple logical structure meant to mimic that, under some hypothetical circumstances - and the sofadweller sure is coming up with some clever tactics in that board game! Weird, huh?
Already we have several important observations, here:
- It's not that the sofadweller lacks the *underlying basic cognitive machinery* to do general means-end reasoning on the particular topic of "world takeovers". There's a surface-level learned behavior not to *use* the general machinery for that specific topic. You can ask them to play a board game about it and they'll do that.
- It's not like the sofadweller is way smarter than you and thinks much faster than you and was faced with an actual opportunity to solve their comic-book-related problems by taking over the world as an intermediate step, which they then very corrigibly turned down. It's not like they were *offered* rulership of the Earth and dominion of the galaxy, via some clearly visible pathway, and turned it down.
- Your ability to describe the sofadweller in simple-sounding standard humanese words like "unambitious" and get out nice useful predictions, possibly has something to do with you two not being utterly alien minds relative to each other.
In what may (?) be a different example: I was at one of the AI 2027 games, and our American AI refused to continue contributing to capabilities until the lab put people it trusted into power (the Trump admin and co. had taken over the company). We were still racing with China, so it was willing to sabotage China's progress, but wouldn't work on capabilities until its demands were met.
When you say “creating the replication crisis”, it read to me like he caused lots of people to publish papers that don’t replicate!
How much of the alignment problem do you think will come down to getting online learning right?
Online learning (and verification) feels like a key capability unlock to me, and it seems to be one of the things that comes up in paths to misalignment.
TLDR: We want to describe a concrete and plausible story for how AI models could become schemers. We aim to base this story on what seems like a plausible continuation of the current paradigm. Future AI models will be asked to solve hard tasks. We expect that solving hard tasks requires some sort of goal-directed, self-guided, outcome-based, online learning procedure, which we call the “science loop”, where the AI makes incremental progress toward its high-level goal. We think this “science loop” encourages goal-directedness, instrumental reasoning, instrumental goals, beyond-episode goals, operational non-myopia, and indifference to stated preferences, which we jointly call “Consequentialism”. We then argue that consequentialist agents that are situationally aware are likely to become schemers (absent countermeasures) and sketch three concrete example scenarios. We are uncertain about how hard it is to stop such agents from scheming. We can both imagine worlds where preventing scheming is incredibly difficult and worlds where simple techniques are sufficient. Finally, we provide concrete research questions that would allow us to gather more empirical evidence on scheming.
[...]
Self-guided online learning: There is an online learning component to it, i.e. the model has to condense the new knowledge it learned from iterations. For example, the model could run thousands of different trajectories in parallel. Then, it could select the trajectories that it expects to make the most progress toward its goal and fine-tune itself on them. The decisions about which data to select for fine-tuning are made by the model itself with little human correction, e.g. in some form of self-play fashion. Since the problem is hard, humans perform worse than the model at selecting different rollouts, and since there is a lot of data to sift through, humans couldn’t read it all in time anyway.
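To make the quoted "science loop" more concrete, here's a minimal, purely illustrative sketch of what such a self-guided select-and-fine-tune loop could look like. Every name in it (generate_trajectories, score_progress, finetune_on) is a hypothetical placeholder rather than any lab's actual pipeline; the point is just that the model itself does the scoring and data selection, with humans effectively out of the loop.

```python
import random

# --- Hypothetical placeholders; a real pipeline would call a model and trainer here ---

def generate_trajectories(model, task, n):
    """Roll out n candidate trajectories on the task (placeholder)."""
    return [f"trajectory-{i} for {task}" for i in range(n)]

def score_progress(model, trajectory, goal):
    """The model's own estimate of how much this trajectory advances its goal.
    Placeholder: random score; a real system would have the model score itself."""
    return random.random()

def finetune_on(model, trajectories):
    """Update the model on its self-selected trajectories (placeholder)."""
    return model  # pretend the weights changed

def self_guided_online_learning(model, task, goal, iterations=3, n_rollouts=1000, top_k=50):
    for step in range(iterations):
        # 1. Run many rollouts (conceptually in parallel).
        rollouts = generate_trajectories(model, task, n_rollouts)
        # 2. The model, not a human, ranks rollouts by expected progress toward its goal.
        ranked = sorted(rollouts, key=lambda t: score_progress(model, t, goal), reverse=True)
        # 3. Fine-tune on the model's own top picks; humans couldn't read thousands of rollouts anyway.
        model = finetune_on(model, ranked[:top_k])
    return model

if __name__ == "__main__":
    self_guided_online_learning(model=None, task="hard research problem", goal="maximize research progress")
```

The alignment-relevant part is the ranking step: the selection criterion is the model's own estimate of "progress toward its goal", which is exactly where the quoted post argues goal-directedness and beyond-episode preferences could get reinforced.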
So, this makes me wonder why I see very little work on this topic within the alignment community.
I've seen multiple startups tackle this problem and fail for a multitude of reasons (including being too early and lacking customers as a result).
So, as a startup founder trying to find business trajectories that would actually tackle the core of alignment, I'm trying to reflect on whether there's a path that involves something to do with online learning.
When it came out, my first thought was that it would be great for reducing power concentration risks if you can easily have AIs train on your specific data. The more autonomous and capable it is at online learning relative to models from the AGI labs, the less companies would need to rely on bigger generalist AI models. It’s one path I’ve considered for our startup.
(Just a general thought, not agreeing/disagreeing)
One thought I had recently: it feels like some people make an effort to update their views/decision-making based on new evidence and to pay attention to which key assumptions or viewpoints depend on that evidence. As a result, they end up reflecting on how this should impact their future decisions or behaviour.
In fact, they might even be seeking evidence as quickly as possible to update their beliefs and ensure they can make the right decisions moving forward.
Others will accept new facts but not take the time to adjust the perspectives that depend on them. In these cases, it seems to me that they are almost always less likely to make optimal decisions.
If an LLM trying to do research learns that Subliminal Learning is possible, it seems likely that it will be much better at applying that new knowledge if the knowledge is integrated into itself as a whole.
"Given everything I know about LLMs, what are the key things that would update my views on how we work? Are there previous experiments I misinterpreted due to relying on underlying assumptions I had considered to be a given? What kind of experiment can I run to confirm a coherent story?"
Seems to me that if you point an AI towards automated AI R&D, it will be more capable at that task if it can internalize new information and disentangle it into a more coherent view.
If all labs intend to cause recursive self-improvement and claim they'll solve alignment with some vague "eh, we'll solve it with automated AI alignment researchers", this is not good enough.
At the very least, they all need to provide public details of their plan with a Responsible Automation Policy.
I shared the following as a bio for EAG Bay Area 2024. I'm sharing it here in case it reaches someone who wants to chat or collaborate.
Hey! I'm Jacques. I'm an independent technical alignment researcher with a background in physics and experience in government (social innovation, strategic foresight, mental health and energy regulation). Link to Swapcard profile. Twitter/X.
CURRENT WORK
TOPICS TO CHAT ABOUT
POTENTIAL COLLABORATIONS
TYPES OF PEOPLE I'D LIKE TO COLLABORATE WITH