DanielFilan

So I guess 1.5% of Americans have worse judgment than I expected (by my lights, as someone who thinks that Trump is really bad). Those 1.5% were incredibly important for the outcome of the election and for the future of the country, but they are only 1.5% of the population.

Nitpick: they are 1.5% of the voting population, making them around 0.7% of the US population (roughly 155 million people voted in 2024, out of about 335 million Americans).

If you ask people who they're voting for, 50% will say they're voting for Harris. But if you ask them who most of their neighbors are voting for, only 25% will say Harris and 75% will say Trump!

Note this issue could be fixed if you instead ask people who the neighbour immediately to the right of their house/apartment will vote for, which I think is compatible with what we know about this poll. That said, the critique of "do people actually know" stands.

she should have picked Josh Shapiro as her running mate

Note that this news story makes allegations that, if true, make it sound like the decision was partly Shapiro's:

Following Harris's interview with Pennsylvania Governor Josh Shapiro, there was a sense among Shapiro's team that the meeting did not go as well as it could have, sources familiar with the matter tell ABC News.

Later Sunday, after the interview, Shapiro placed a phone call to Harris' team, indicating he had reservations about leaving his job as governor, sources said.

Oh except: I did not necessarily mean to claim that any of the things I mentioned were missing from the alignment research scene, or that they were present.

When I wrote that, I wasn't thinking so much about evals / model organisms as stuff like:

basically stuff along the lines of "when you put agents in X situation, they tend to do Y thing", rather than trying to understand latent causes / capabilities

Yeah, that seems right to me.


A theory of how alignment research should work

(cross-posted from danielfilan.com)

Epistemic status:

  • I listened to the Dwarkesh episode with Gwern and started attempting to think about life, the universe, and everything
  • less than an hour of thought has gone into this post
  • that said, it comes from a background of me thinking for a while about how the field of AI alignment should relate to agent foundations research

Maybe obvious to everyone but me, or totally wrong (this doesn't really grapple with the challenges of working in a domain where an intelligent being might be working against you), but:

  • we currently don't know how to make super-smart computers that do our will
    • this is not just a problem of having a design that is not feasible to implement: we do not even have a sense of what the design would be
    • I'm trying to somewhat abstract over intent alignment vs control approaches, but am mostly thinking about intent alignment
    • I have not thought very much about societal/systemic risks, and this post doesn't really address them.
  • ideally we would figure out how to do this
  • the closest traction that we have: deep learning seems to work well in practice, altho our theoretical knowledge of why it works so well or how capabilities are implemented is lagging
  • how should we proceed? Well:
    • thinking about theory alone has not been practical
    • probably we need to look at things that exhibit alignment-related phenomena and understand them, and that will help us develop the requisite theory
      • said things are probably neural networks
    • there are two ways we can look at neural networks: their behaviour, and their implementation.
    • looking at behaviour is conceptually straightforward, and valuable, and being done
    • looking at their implementation is less obvious
    • what we need is tooling that lets us see relevant things about how neural networks are working
    • such tools (e.g. SAEs; see the toy sketch after this list) are not impossible to create, but it is not obvious that their outputs tell us quantities that are actually of interest
    • in order to discipline the creation of such tools, we should demand that they help us understand models in ways that matter
    • once we get such tools, we should be trying to use them to understand alignment-relevant phenomena, to build up our theory of what we want out of alignment and how it might be implemented
      • looking at the external behaviour of models in alignment-relevant contexts should also be contributing to this theory-building
  • so should we be just doing totally empirical things? No.
    • firstly, we need to be disciplined along the way by making sure that we are looking at settings that are in fact relevant to the alignment problem, when we do our behavioural analysis and benchmark our interpretability tools. This involves having a model of what situations are in fact alignment-relevant, what problems we will face as models get smarter, etc
    • secondly, once we have the building blocks for theory, ideally we will put them together and make some actual theorems like "in such-and-such situations models will never become deceptive" (where 'deceptive' has been satisfactorily operationalized in a way that suffices to derive good outcomes from no deception and relatively benign humans)
  • I'm imagining the above as being analogous to an imagined history of statistical mechanics (people who know this history or who have read "Inventing Temperature" should let me know if I'm totally wrong about it):
    • first we have steam engines etc
    • then we figure out that 'temperature' and 'entropy' are relevant things to track for making the engines run
    • then we relate temperature, entropy, and pressure
    • then we get a good theory of thermodynamics
    • then we develop statistical mechanics
  • exceptions to "theory without empiricism doesn't work":
  • lesson of above: theory does seem to help us analyze some issues and raise possibilities
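
For concreteness, here is a toy sketch (in PyTorch; not from the post) of the kind of interpretability tool meant by "SAEs" above: a sparse autoencoder that tries to decompose a network's activations into a larger dictionary of sparsely-active features. The layer sizes, the plain ReLU-plus-L1 objective, and the coefficient are stand-in choices; real SAE setups differ in these details, which is part of why it isn't obvious their outputs track quantities of interest.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Toy sparse autoencoder over a model's internal activations."""

    def __init__(self, d_model: int, d_dict: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_dict)  # d_dict >> d_model: overcomplete dictionary
        self.decoder = nn.Linear(d_dict, d_model)

    def forward(self, x: torch.Tensor):
        f = torch.relu(self.encoder(x))  # sparse feature activations
        x_hat = self.decoder(f)          # reconstruction of the original activations
        return x_hat, f

def sae_loss(x, x_hat, f, l1_coeff: float = 1e-3):
    # Reconstruction error plus an L1 penalty pushing most features towards zero.
    return ((x - x_hat) ** 2).mean() + l1_coeff * f.abs().mean()

# Usage: collect activations from some layer of the network under study and fit the SAE on them.
sae = SparseAutoencoder(d_model=512, d_dict=4096)
acts = torch.randn(64, 512)  # stand-in for real activations
x_hat, f = sae(acts)
sae_loss(acts, x_hat, f).backward()
```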

A way I'd phrase John's sibling comment, at least for the exact case: adding arrows to a DAG increases the set of probability distributions it can represent. This is because the fundamental rule of a Bayes net is that d-separation has to imply conditional independence - but you can have conditional independences in a distribution that aren't represented by a network. When you add arrows, you can remove instances of d-separation, but you can't add any (because nodes are d-separated when all paths between them satisfy some property, and (a) adding arrows can only increase the number of paths you have to worry about and (b) if you look at the definition of d-separation the relevant properties for paths get harder to satisfy when you have more arrows). Therefore, the more arrows a graph G has, the fewer constraints distribution P has to satisfy for P to be represented by G.
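
A toy numerical check of that claim (not from the comment, just an illustration with a hand-picked distribution): the collider A -> B <- C d-separates A from C, so it can only represent joints in which A is independent of C; adding the arrow A -> C removes that d-separation, and the resulting complete DAG represents the same joint exactly.

```python
import itertools
import numpy as np

# Joint P(a, b, c) over three binary variables, chosen so that A and C are dependent.
P = np.zeros((2, 2, 2))
for a, b, c in itertools.product((0, 1), repeat=3):
    P[a, b, c] = 0.5 * (0.9 if c == a else 0.1) * (0.7 if b == (a ^ c) else 0.3)

# Constraint implied by the collider A -> B <- C: it d-separates A from C,
# so any distribution it represents must have A independent of C.
P_ac = P.sum(axis=1)          # marginal over (A, C)
P_a = P.sum(axis=(1, 2))      # marginal over A
P_c = P.sum(axis=(0, 1))      # marginal over C
print("A independent of C?", np.allclose(P_ac, np.outer(P_a, P_c)))  # False: the collider can't represent P

# Add the arrow A -> C: the DAG is now complete, so d-separation imposes no constraints,
# and the factorisation P(a) P(c|a) P(b|a,c) reproduces P exactly.
P_c_given_a = P_ac / P_a[:, None]
P_b_given_ac = P / P_ac[:, None, :]
P_rebuilt = P_a[:, None, None] * P_c_given_a[:, None, :] * P_b_given_ac
print("Does the larger DAG represent P exactly?", np.allclose(P_rebuilt, P))  # True
```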

I enjoyed reading Nicholas Carlini and Jeff Kaufman write about how they use them, if you're looking for inspiration.

Another way of maintaining Sola Scriptura and Perspicuity in the face of Protestant disagreement about essential doctrines is the possibility that all of this is cleared up in the deuterocanonical books that Catholics believe are scripture but Protestants do not. That said, this will still rule out Protestantism, and it's not clear that the deuterocanon in fact clears everything up.
