If it’s worth saying, but not worth its own post, here's a place to put it.
If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don't want to write a full top-level post.
If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the new Concepts section.
The Open Thread tag is here. The Open Thread sequence is here.
Is there a good case for the usefulness (or uselessness) of brain-computer interfaces (à la Neuralink, etc.) in AI alignment? I've searched around a bit, but there seems to be no write-up of a path by which BCIs could make AI go well.
Edit: Post about this is up.
Maybe if we could give a human more (emulated) cortical columns without also driving him insane in the process, we'd end up with a limited superintelligence who maybe isn't completely Friendly, but also isn't completely alien to human values. If we just start with the computer, all bets are off. The hybrid might still go insane later, though. Arms-race scenarios are still a concern: reckless approaches might produce hybrid intelligence sooner, but they'd also be less stable. The end result of most unfriendly AIs is that all the humans are dead. It takes a perverse kind...