AI safety is one of the most critical issues of our time, and sometimes the most innovative ideas come from unorthodox or even "crazy" thinking. I'd love to hear bold, unconventional ideas for improving AI safety, whether half-baked or well-developed. You can also share ideas you heard from others.
Let's put all the ideas out there—big and small—and see where we can take them together.
Feel free to share as many as you want! No idea is too wild, and this could be a great opportunity for collaborative development. We might just find the next breakthrough by exploring ideas we’ve been hesitant to share.
A quick request: Let’s keep this space constructive—downvote only if there’s clear trolling or spam, and be supportive of half-baked ideas. The goal is to unlock creativity, not judge premature thoughts.
Looking forward to hearing your thoughts and ideas!
P.S. AI is moving fast; the last similar discussion was a month ago and was well received, so let's try again and see how the ideas have changed.
This sounds like a rationalization. It seems much more likely that the ideas just aren't that high quality if a single argument needs a whole hour and supposedly can't be broken up into smaller pieces that stand on their own.
Edit: If the long post is disliked, you can say "well, they just didn't read it," and if the short post is disliked, you can say "well, it just sucks because it's short." Meanwhile, it should in fact be pretty surprising that your whole 40-minute post contains no interesting, novel, or useful insight that could be explained in a blog post of reasonable length.