There's a formatting issue with the link, should be: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2634591/
Preventing neural network weight exfiltration (by third parties or an AI itself)
This is really, really interesting; a fairly "normal" infosec concern to prevent IP/PII theft, plus a (necessary?) step in many AGI risk scenarios. Is the claim that one could become a "world expert" specifically in this (i.e., without becoming an expert in information security more generally)?
Indeed, as Vladimir gleaned, I just wanted to clarify that the historical roots of LW & AGI risk are deeper than might be immediately apparent, which could offer a better explanation for the prevalence of Doomerism than, like, EY enchanting us with his eyes or whatever.
I am saddened that this doomerism has gained so much traction in a community as great as LW.
You're aware that Less Wrong (and the project of applied rationality) literally began as EY's effort to produce a cohort of humans capable of clearly recognizing the AGI problem?
It's probably based on GPT-4.
Bing literally says it's powered by "GPT 4.0 technology" in this chat — is that synonymous with GPT-4 (genuinely unsure)?
I've actually wondered if some kind of stripped-down sign language could be a useful adjunct to verbal communication, and specifically if a rationalist version could be used to convey epistemic status (or other non-obvious conversational metadata).
In the (outstanding) show The Expanse, a branch of humanity called "Belters" have been mining the asteroid belt for enough generations that they have begun to diverge (culturally, politically, and even physically) from <humanity-main>. They have such an adjunct sign language, originally developed to communicate in the void of space, fully integrated into their standard communication.
This seems so useful! I'm so frequently frustrated in conversations, trying to align on the same meta-level as my conversational partner, or convey my epistemic status effectively without derailing the object-level conversation.
An unrelated anecdote, on the general awesomeness of signing. Years ago, I was heading home on the NYC subway late at night, and the usual A train din precluded conversation. Most passengers were mindlessly scrolling on their phones or sullenly staring out windows, but four young men were carrying on a silent, boisterous conversation via signing, with full-body laughter and obvious joy.
In that environment, their (presumed) general disability translated to local advantage. Still makes me happy to think about.
<sarcasm>
And obviously, the entire public health community is up in arms about this…
</sarcasm>
[Narrator: They were not, in fact, up in arms.]
There might be another strain in the future. I don't know how likely this is, but that's the most likely way that things 'don't mostly end' after this wave.
I agree, and I also don't really have great mental handles to model this, but this seems like the most consequential question to predict post-Omicron life. My two biggest surprises of the pandemic have been Delta and Omicron, so sorting this out feels like a high VOI investment.
Here's a messy brain dump on this, mostly I'm just looking for a better framework for thinking about this.
The lightcone is such a great symbol. It also kind of looks like an hourglass, evoking (to me) the image of time (and galaxies) slipping away. Kudos!
Possibly you're thinking about this: https://www.quantifiedintuitions.org/pastcasting