There's a formatting issue with the link, should be: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2634591/
Preventing neural network weight exfiltration (by third parties or an AI itself)
This is really, really interesting; a fairly "normal" infosec concern to prevent IP/PII theft, plus a (necessary?) step in many AGI risk scenarios. Is the claim that one could become a "world expert" specifically in this (i.e., without becoming an expert in information security more generally)?
It's probably based on GPT-4.
Bing literally says it's powered by "GPT 4.0 technology" in this chat, is that synonymous with GPT-4 (genuinely unsure)?
I've actually wondered if some kind of stripped-down sign language could be a useful adjunct to verbal communication, and specifically if a rationalist version could be used to convey epistemic status (or other non-obvious conversational metadata).
In the (outstanding) show The Expanse, a branch of humanity called "Belters" have been mining the asteroid belt for enough generations that they have begun to diverge (culturally, politically, and even physically) from <humanity-main>. They have such an adjunct sign language, originally developed to communicate in the void of space, fully integrated into their standard communication.
This seems so useful! I'm so frequently frustrated in conversations, trying to align on the same meta-level as my conversational partner, or convey my epistemic status effectively without derailing the object-level conversation.
An unrelated anecdote, on the general awesomeness of signing. Years ago, I was heading home on the NYC subway late at night, and the usual A train din precluded conversation. Most passengers were mindlessly scrolling on their phones or sullenly staring out windows, but four young men were carrying on a silent, boisterous conversation via signing, with full-body laughter and obvious joy.
In that environment, their (presumed) general disability translated to local advantage. Still makes me happy to think about.
There might be another strain in the future. I don’t know how likely this is, but that’s the most likely way that things ‘don’t mostly end’ after this wave.
I agree, and I also don't really have great mental handles to model this, but this seems like the most consequential question to predict post-Omicron life. My two biggest surprises of the pandemic have been Delta and Omicron, so sorting this out feels like a high VOI investment.
Here's a messy brain dump on this, mostly I'm just looking for a better framework for thinking about this.
Dogs would be interesting - super-smart working dogs might even have a viable labour market, and it seems like the evidence of supercanine IQ would be obvious in a way that's not true of any other species (just given how much exposure most people have to the range of normal canine intelligence).
Sort of analogous to what Loyal is doing for longevity research.