This is a reaction to Zvi's post https://www.lesswrong.com/posts/KL2BqiRv2MsZLihE3/going-nova. The title of this post references this scene from The Good Place. Disclaimer: I am generally pretty sanguine about the standard AI-borne x-risks. But "Going Nova" is a different issue, a clear and present danger of an s-risk. My point is that...
Epistemic status: rather controversial and not very well researched :) Not super novel, I assume, but a cursory look did not bring up any earlier posts; please feel free to link some. Intuition pump: a bigger brain does not necessarily imply a smarter creature. Apes are apparently smarter than elephants and...
The forum has been very much focused on AI safety for some time now, so I thought I'd post something different for a change. Privilege. Here I define Privilege as an advantage over others that is invisible to the beholder. [EDIT: thanks to JenniferRM for pointing out that "beholder" is a wrong...
Some thoughts based on a conversation at a meetup. Disclaimer: I am less than a dilettante in this area. TL;DR: if this rumored Q* thing represents a shift from "most probable" to "most accurate" token completion, it might be a hint of an unexpected and momentous change from a LARPer...
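For readers unfamiliar with the distinction being gestured at, here is a minimal toy sketch of what "most probable" versus "most accurate" token completion could mean. The numbers and the `verifier_score` function are made up for illustration; this is a sketch of the general idea, not a claim about how the rumored Q* actually works.

```python
# Toy illustration: "most probable" vs. "most accurate" completion choice.
# All probabilities and the verifier are invented for this example;
# they do not describe any real system (Q* or otherwise).

candidates = {
    # candidate completion : model's probability of emitting it
    "2 + 2 = 4": 0.35,
    "2 + 2 = 5": 0.40,   # plausible-sounding but wrong continuation
    "2 + 2 = 22": 0.25,
}

def verifier_score(completion: str) -> float:
    """Hypothetical external correctness check (here: actually do the arithmetic)."""
    lhs, rhs = completion.split("=")
    return 1.0 if eval(lhs) == int(rhs) else 0.0

# "Most probable": pick whatever the model rates as likeliest.
most_probable = max(candidates, key=candidates.get)

# "Most accurate": pick the candidate that best passes the external check,
# breaking ties by model probability.
most_accurate = max(candidates, key=lambda c: (verifier_score(c), candidates[c]))

print("most probable:", most_probable)   # -> "2 + 2 = 5"
print("most accurate:", most_accurate)   # -> "2 + 2 = 4"
```

The point of the toy is only that the two selection rules can disagree: the likeliest continuation need not be the correct one, and a system optimizing for the latter would behave differently from one optimizing for the former.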
First, to dispense with what should be obvious: if a superintelligent agent wants to destroy humans, we are completely and utterly hooped. All the arguing about "but how would it...?" indicates a lack of imagination. ...Of course a superintelligence could read your keys off your computer's power light, if it found...
TL;DR: I am rather confident that involving governments in regulating AI development will make the world less safe, not more, because governments are basically incompetent and their representatives are beholden to special interests. Governments can do very few things well, some number of things passably, and a lot of things...