Abstract

Technical AI Safety 2024 (TAIS 2024) was a conference organised by AI Safety 東京 and Noeon Research, in collaboration with Reaktor Japan, AI Alignment Network and AI Industry Foundation. You may have heard of us through ACX.

The goals of the conference were to:

  1. demonstrate the practice of technical safety research to Japanese researchers new to the field
  2. share ideas among established technical safety researchers
  3. establish a good international reputation for AI Safety 東京 and Noeon Research
  4. establish a Schelling conference for people working in technical safety

We sent out a survey after the conference to get feedback from attendees on whether or not we achieved those goals. We certainly achieved goals 1, 2 and 3; goal 4 remains to be seen. In this post we give more details about the conference, share results from the feedback survey, and announce our intentions to run another conference next year.

Okay but like, what was TAIS 2024?

Technical AI Safety 2024 (TAIS 2024) was a small, non-archival, open academic conference structured as a lecture series. It ran over two days, April 5th–6th 2024, at the International Conference Hall of the Plaza Heisei in Odaiba, Tokyo.

We had 18 talks covering 6 research agendas in technical AI safety:

  • Mechanistic Interpretability
    • Developmental Interpretability
  • Scalable Oversight
  • Agent Foundations
    • Causal Incentives
    • ALIFE

…including talks from Hoagy Cunningham (Anthropic), Noah Y. Siegel (DeepMind), Manuel Baltieri (Araya), Dan Hendrycks (CAIS), Scott Emmons (CHAI), Ryan Kidd (MATS), James Fox (LISA), and Jesse Hoogland and Stan van Wingerden (Timaeus).

In addition to our invited talks, we had 25 submissions, of which 19 were deemed relevant for presentation. Five were offered talk slots, and we arranged a poster session to accommodate the remaining 14. In the end, 7 people presented posters, 5 in person and 2 in absentia. Our best poster award was won jointly by Fazl Barez for Large Language Models Relearn Removed Concepts and Alex Spies for Structured Representations in Maze-Solving Transformers.

We had 105 in-person attendees (including the speakers). Our live streams had around 400 unique viewers, and maxed out at 18 concurrent viewers.

Recordings of the conference talks are hosted on our YouTube channel.

How did it go?

Very well, thanks for asking!

We sent out a feedback survey after the event, and got 68 responses from in-person attendees (58% response rate). With the usual caveats that survey respondents are not necessarily a representative sample of the population:

Looking good! Let’s dig deeper.

How useful was TAIS 2024 for those new to the field?

Event satisfaction was high across the board, which makes it hard to compare satisfaction between population subgroups. The only attendees to report neutral satisfaction were those who identified themselves as "new to AI safety", but newcomers were also the most likely to report being highly satisfied.

It seems that people new to AI safety had no more or less trouble understanding the talks than those who work for AI safety organisations or have published AI safety research:

They were also no more or less likely to make new research collaborations:

Note that there is substantial overlap between some of these categories, especially those that imply a strong existing relationship to AI safety, so take the above charts with a pinch of salt. In the table below, each cell shows the share of the row group that also falls into the column group:

|                                  | Total | New to AI safety | Part of the AI safety community | Employed by an AI safety org | Has published AI safety research |
|----------------------------------|-------|------------------|---------------------------------|------------------------------|----------------------------------|
| New to AI safety                 | 26    | 100%             | 19%                             | 12%                          | 4%                               |
| Part of the AI safety community  | 28    | 18%              | 100%                            | 36%                          | 32%                              |
| Employed by an AI safety org     | 20    | 15%              | 50%                             | 100%                         | 35%                              |
| Has published AI safety research | 13    | 8%               | 69%                             | 54%                          | 100%                             |
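To make the table above concrete, here is a minimal sketch of how such an overlap matrix can be computed from survey data, assuming a hypothetical `responses` DataFrame with one boolean column per self-reported category (the column names and toy values are illustrative, not our actual survey fields). The diagonal is 100% by construction, so the off-diagonal cells are the interesting ones.

```python
import pandas as pd

# Toy reconstruction of the overlap table: one boolean column per
# self-reported category. Column names and values are made up for
# illustration; they are not the actual survey fields.
responses = pd.DataFrame({
    "new_to_ai_safety":       [True, True, False, False, True],
    "part_of_community":      [False, True, True, True, False],
    "employed_by_safety_org": [False, False, True, True, False],
    "published_ais_research": [False, False, False, True, False],
})

categories = list(responses.columns)

# For each row category, count its members, then report what share of
# them also belong to each column category (the diagonal is 100%).
rows = []
for row_cat in categories:
    members = responses[responses[row_cat]]
    overlap = {col: f"{100 * members[col].mean():.0f}%" for col in categories}
    rows.append({"Total": len(members), **overlap})

overlap_table = pd.DataFrame(rows, index=categories)
print(overlap_table)
```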

Subjectively, it felt like most of our attendees were actively seeking new connections rather than staying in the groups they came with, as we had feared they might. This went a long way towards bridging the gap between established safety researchers and those new to the field.

How useful was TAIS 2024 for the speakers?

A previous alignment conference in Japan, JAC 2023, received feedback that the conference was not very useful for participants already familiar with alignment. Since one of our primary aims was to demonstrate the practice of technical AI safety research to those new to the field, we felt it was important to ensure that even our most experienced attendees got something out of the conference. How did we do?

According to the post-conference survey, everyone learned something new at TAIS 2024, including the speakers:

All but two of our speakers said that participating in TAIS 2024 was worth both the effort they put into preparing for it and the opportunity cost:

Note that only 15 of our 19 speakers responded to our survey, and it’s possible that those speakers who did not think that TAIS 2024 was worth the opportunity cost also did not think filling in the TAIS 2024 feedback survey was worth the opportunity cost. Nevertheless, even including nonrespondents as negatives, at least 70% of our speakers thought participating in TAIS 2024 was worth the opportunity cost.

We expected that more prestigious speakers (as measured by h-index) would get less from the conference, but in fact there was no trend:

We also hoped that, in addition to sharing ideas, TAIS 2024 would foster new research collaborations. We weren’t quite as successful in this as we hoped, with only 45% of in-person attendees reporting having met a new research collaborator. Speakers were slightly more likely than attendees to report a new collaboration:

Unlike with opportunity cost, here there was a noticeable trend: participants with higher h-indices were much less likely to report a new collaboration.

It is too early to tell whether these reported collaborations will actually result in impactful research.

How useful was TAIS 2024 for our Japanese attendees?

We considered having TAIS 2024 simultaneously translated into Japanese, but the cost was prohibitively high. Since the event was conducted in English, we were worried that our Japanese attendees, many of whom were more comfortable speaking in Japanese than English, would be left behind.

However, the vast majority (80%) of our attendees spoke technical English:

and language ability made little difference to reported event satisfaction:

This is despite the fact that, as you would expect, people with lower English proficiency rated the talks as being harder to understand:

Around 45% of our attendees spoke conversational Japanese or better, so attendees who got less out of the English-language talks could still network in Japanese, and likely got enough value from those conversations to be satisfied.

Indeed the written feedback confirms that for at least some of our attendees, networking in and around the lectures was more useful than the lectures themselves:

“For me by far the most useful part was the conversations and the people. Whether that results in collaborations and research projects is early to say, but at any rate was really enjoyable. Despite there being quite a few opportunities to meet and chat with people I didn't manage to chat with everyone (I wanted to). So I would recommend maybe having even more time for informal or structured networking next time.”

“I think another day or two with more time for organic meetings/collaboration could have enabled more collaborations.”

When asked what we should do more of next year, 60% of respondents said we should have more structured networking and 40% said we should have more unstructured networking, compared with only 30% who said we should have more lectures.

Only 20% of our attendees were native Japanese speakers. While the majority of attendees resided in Japan, most were not Japanese nationals. We made an effort to advertise the conference in both Japanese and English (our website, press releases, and all official communications were translated into Japanese, and we reached out directly to many Japanese universities and media publications), but it may be that Japanese speakers who would not have enjoyed an English-language conference chose not to attend. It could also be that AI safety is so much more popular in the West than in Japan that, even given Japan's relatively small migrant population (~2%), there is more interest in AI safety among migrants than among Japanese nationals.

We hope TAIS 2024 has inspired its Japanese attendees to spread the word among their friends and colleagues, and will work hard to ensure that next year’s conference is attractive to Japanese attendees.

Stay tuned for TAIS 2025

Most of our attendees said that if there were a TAIS 2025, they would come:

From the text feedback, it seems like the “Other” responses are expressing that people would like to come, but might find it difficult depending on the location:

“Just wondering if 2025 will be held in Tokyo, too…”

“The main blocker to me coming next year is the cost of jet lag”

“Not sure if I would come again next year, depends on opportunity cost”

“I would be unlikely to find funding to attend a TAIS 2025, but would definitely attend again if I can find funding (or if I am a speaker and travel/accommodation is taken care of again).”

We therefore speculate that almost all attendees would attend TAIS 2025 or a similar event, if it were sufficiently convenient.

We (AI Safety Tokyo and Noeon Research) are currently actively seeking funding and sponsorships for TAIS 2025. Sponsorship gets your logo on our website and any printed materials / conference merchandise, and also entitles you to a short presentation at the start of the conference. We may allow sponsors to set up conference booths depending on venue capacity and demand. If you would like to become a sponsor, please email someone@aisafety.tokyo.

Photo album

We hired a professional photographer for the event, which was a great decision. We've included some photos below; more can be found on Google Drive.


Comments

I'm very sad that I missed this year's conference. Hopefully, I will be able to attend the next one!