We are pleased to announce ILIAD — a 5-day conference bringing together 100+ researchers to build strong scientific foundations for AI alignment.

***Apply to attend by June 30!***

  • When: Aug 28 - Sep 3, 2024
  • Where: @Lighthaven (Berkeley, US)
  • What: A mix of topic-specific tracks and unconference-style programming with 100+ attendees. Topics will include Singular Learning Theory, Agent Foundations, Causal Incentives, Computational Mechanics, and more to be announced.
  • Who: Currently confirmed speakers include Daniel Murfet, Jesse Hoogland, Adam Shai, Lucius Bushnaq, Tom Everitt, Paul Riechers, Scott Garrabrant, John Wentworth, Vanessa Kosoy, Fernando Rosas, and James Crutchfield.
  • Costs: Tickets are free. Financial support is available on a needs basis. 

See our website here. For any questions, email iliadconference@gmail.com 

About ILIAD

ILIAD is a 100+ person conference about alignment with a mathematical focus. The theme is ecumenical, yet the goal is nothing less than finding the True Names of AI alignment.

Participants may be interested in all tracks, only one or two, or none at all. The unconference format means participants have maximum freedom to direct their own time and energy.

Program and Unconference Format

ILIAD will feature an unconference format - meaning that participants can propose and lead their own sessions. We believe that this is the best way to release the latent creative energies in everyone attending.

That said, freedom can be scary! If taking charge of your own learning sounds terrifying, rest assured there will be plenty of organized sessions as well. We will also run topic-specific workshop tracks such as:

  • Computational Mechanics is a framework for understanding complex systems by focusing on their intrinsic computation and information processing capabilities. Pioneered by J. Crutchfield, it has recently found its way into AI safety. This workshop is led by Paul Riechers.
  • Singular Learning Theory (SLT), developed by S. Watanabe, is the modern theory of Bayesian learning. SLT studies the loss landscape of neural networks using ideas from statistical mechanics, Bayesian statistics, and algebraic geometry. The track lead is Jesse Hoogland.
  • Agent Foundations uses tools from theoretical economics, decision theory, Bayesian epistemology, logic, game theory and more to deeply understand agents: how they reason, cooperate, believe and desire. The track lead is Daniel Hermann.
  • Causal Incentives is a collection of researchers interested in using causal models to understand agents and their incentives.  The track lead is Tom Everitt.
  • “How It All Fits Together” turns its attention to the bigger picture — where are we coming from, and where are we going? — under the direction of John Wentworth.  

Financial Support

Financial support for accommodation & travel is available on a needs basis. Lighthaven has capacity to accommodate % of participants; note that these rooms are shared.

Comments

How are applications processed? Sometimes applications are processed on a rolling basis, so it's important to submit as soon as possible. Other times, you just need to apply by the date, so if you're about to post something big, it makes sense to hold off your application.

We intend to review applications at the end of the submission deadline, June 30th, but I wouldn't hold off on your application.

Sidenote: I'm a bit confused by the name. The all caps makes it seem like an acronym. But it seems to not be? 


I
Love
Interesting
Alignment
Donferences


ah that makes sense thanks

honestly i prefer undonferences

How about deconferences?

idk, sounds dangerously close to deferences

Insightful
Learning
Implore
Agreed
Delta

Intentional
Lure for
Improvised
Acronym
Derivation

International League of Intelligent Agent Deconfusion

It's the Independently-Led Interactive Alignment Discussion, surely.

Interactively Learning the Ideal Agent Design

> https://www.lesswrong.com/posts/r7nBaKy5Ry3JWhnJT/announcing-iliad-theoretical-ai-alignment-conference#whqf4oJoYbz5szxWc

you didn't invite me so you don't get to have all the nice things, but I did leave several good artifacts and books I recommend lying around. I invite you to make good use of them!

Thank you Lorxus, that's appreciated. I'm sure we can make good use of them.

Unfortunately, we get many more applications than we have spots, so we have to make some tough choices. Better luck next time!

Also: if I get accepted to come to ILIAD I am going to make delicious citrus sodas.[1] Maybe I could even run a pair of panels about that?[2] That seemed extremely out of scope though so I didn't put it in the application.

  1. Better than you've had before. Like, ever. Yes I am serious, I've got lost lore. Also, no limit on the flavor as long as it's a citrus fruit we can go and physically acquire on-site. Also, no need at all for a stove or heating element.

  2. There is a crucially important time-dependent step on the scale of hours, so a matched pair of panels would be the best format.
