Based on recent conversations with policymakers, labs & journalists, I see increased coordination around societal evaluation & risk mitigation; the (cyber)security mindset is now mainstream.
Also, imminent society-scale harm (e.g. the contextual integrity harms caused by over-reliance & precision persuasion over the past decade) has proven effective in getting governments to consider risk reasonably.
Well, before 2016, I had no idea I'd serve in the public sector...
(The vTaiwan process was already modeled after CBV in 2015.)
Yes. The basic assumption (of my current day job) is that good-enough contextual integrity and continuous incentive alignment are solvable well within the slow takeoff we are currently in.
Something like a lightweight version of the off-the-shelf Vision Pro will do. Just as nonverbal cues can transmit more effectively with codec avatars, post-symbolic communication can approach telepathy with good-enough mental models facilitated by AI (not necessarily ASI).
Safer than implants is to connect at scale "telepathically," leveraging only full sensory bandwidth and much better coordination arrangements. That is the ↗️ direction on the depth-breadth spectrum here.
Yes, that, and a further focus on assistive AI systems that excel at connecting humans — I believe this is a natural outcome of the original CBV idea.
Nice to meet you too & thank you for the kind words. Yes, same person as AudreyTang. (I posted the map at audreyt.org as a proof of sorts.)
Thank you for this. Here's mine:
Always glad to update as new evidence arrives.
Based on my personal experience with pandemic resilience, additional wake-ups can proceed swiftly as soon as a specific society-scale harm is realized.
Specifically, waking up to over-reliance harms and addressing them (esp. within security OODA loops) would buy time for good-enough continuous alignment.