Interdisciplinary economic theorist in Tokyo, happiest/most productive when bringing econ/finance insights to multidisciplinary teams. Rakuten Institute of Technology; formerly University of Birmingham, Turing Institute, G-Research, EIU.
explainable AI: incorporated causal knowledge into the Shapley value (NeurIPS 2020); extended the Shapley value to sets of features (ICLR 2022) (see the sketch after this list)
formal verification: academic work in auction theory; founded fovefi, a fintech building market risk software
neurotech: hobby BCI project, https://icibici.github.io/site
policy: Iraq sanctions
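
For anyone unfamiliar with the Shapley value mentioned above, here is a minimal sketch of exact Shapley attribution over a toy coalition value function. It illustrates only the baseline Shapley value, not the causal or set-valued extensions in the papers cited; the function names and the toy weights are purely illustrative.

```python
# Minimal sketch: exact Shapley values by averaging marginal contributions
# over all orderings of the features (feasible only for small feature sets).
from itertools import permutations

def shapley_values(features, v):
    """Return each feature's average marginal contribution to v."""
    phi = {f: 0.0 for f in features}
    orderings = list(permutations(features))
    for order in orderings:
        coalition = set()
        for f in order:
            before = v(frozenset(coalition))
            coalition.add(f)
            after = v(frozenset(coalition))
            phi[f] += (after - before) / len(orderings)
    return phi

# Toy value function: sum of member weights, plus a bonus when
# 'x1' and 'x2' appear in the coalition together.
weights = {"x1": 1.0, "x2": 2.0, "x3": 0.5}

def v(coalition):
    bonus = 1.0 if {"x1", "x2"} <= coalition else 0.0
    return sum(weights[f] for f in coalition) + bonus

print(shapley_values(list(weights), v))
# {'x1': 1.5, 'x2': 2.5, 'x3': 0.5} -- the interaction bonus is split evenly.
```
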
I have less experience in Japan than Harold does, but would generally advocate a grounded approach to issues of AI safety and alignment, rather than an abstract one.
I was perhaps most struck over the weekend that I did not speak to anyone who had actually been involved in developing or running safety-critical systems (aviation, nuclear energy, CBW...), on which lives depended. This gave a lot of the conversations the flavour of a 'glass bead game'.
As Japan is famously risk-averse, it would seem to me - perhaps naively - that grounded arguments should land well here.