Some casual conversations with strangers that turned out to have high instrumental value:
At my first (online) LessWrong Community Weekend in 2020, I happened to chat with Linda Linsefors. That was my first conversation with anyone working in AI Safety. I’d read about the alignment problem for almost a decade at that point and thought it was the most important thing in the world, but I’d never seriously considered working on it. MIRI had made it pretty clear that the field only needed really exceptional theorists, and I didn’t think I was one of those. That conversation with Linda started the process of robbing me of my comfortable delusions on this front. What she said made it seem more like the field was pretty inadequate, and perfectly normal theoretical physicists could maybe help just by applying the standard science playbook for figuring out general laws in a new domain. Horrifying. I didn't really believe it yet, but this conversation was a factor in me trying out AI Safety Camp a bit over a year later.
At my first EAG, I talked to someone who was waiting for the actual event to begin along with me. This turned out to be Vivek Hebbar, who I'd never heard of before. We got to talking about inductive biases of neural networks. We kept chatting about this research area sporadically for a few weeks after the event. Eventually, Vivek called me to talk about the idea that would become this post. Thinking about that idea led to me understanding the connection between basin broadness and representation dimensionality in neural networks, which ultimately resulted in this research. It was probably the most valuable conversation I’ve had at any EAG so far, and it was unplanned.
At my second EAG, someone told me that an idea for comparing NN representations I’d been talking to them about already existed, and was called centred kernel alignment. I don’t quite remember how that conversation started, but I think it might have been a speed friending event.
My first morning in the MATS kitchen area in Berkeley, someone asked me if I’d heard about a thing called Singular Learning Theory. I had not. He went through his spiel on the whiteboard. He didn’t have the explanation down nearly as well back then, but it still very recognisably connected to how I’d been thinking about NN generalisation and basin broadness, so I kept an eye on the area.
Going to an EA conference was the first time I made friends from Western Europe. (I live in a developing country.)
Just from hearing how they spend their free time and what their personal relationships are like, I realised that Europeans on average experience a higher level of emotional security and willingness to be vulnerable than people in my country.
This pushed me down a rabbit hole of trying to figure out why, reading more about generational trauma and the various decisions made by national leaders - Deng, Mao, Xi Jinping, Lee Kuan Yew, Nehru, etc. - and their impact on people's psychology.
After this experience, I became noticeably less optimistic about geopolitical plans like dropping nukes on another country when they won't yield on important issue X. I realised I need to factor in long-term psychological effects, like parents beating their kids because that's what the previous generation normalised for them.
I have updated upwards on “culture” being a predictor of what any set of people do. Two groups with identical material resources can have vastly different cultures and therefore future outcomes.
"What do you gain from smalltalk?" "I learned not to threaten to nuke countries."
Lmao, amazing.