What's preventing MIRI from making massive investments into human intelligence augmentation? If I recall correctly, MIRI is most constrained on research ideas, but human intelligence augmentation is a huge research idea that other grantmakers, for whatever reason, aren't funding. There are plenty of shovel-ready proposals already, e.g. https://www.lesswrong.com/posts/JEhW3HDMKzekDShva/significantly-enhancing-adult-intelligence-with-gene-editing; why doesn't MIRI fund them?
Thank you very much! I won't be sending you a bounty, as you're not an AI ethicist of the type discussed here, but I'd be happy to send $50 to a charity of your choice. Which one do you want?
I've seen plenty of AI x-risk skeptics present their object-level argument, and I'm not interested in paying out a bounty for stuff I already have. I'm most interested in the arguments from this specific school of thought, and that's why I'm offering the terms I offer.
Man, this article hits different now that I know the psychopharmacology theory of the FTX crash...
Have any prizes been awarded yet? I haven't heard anything about them, but that could just be because I didn't win one...
I'm still not sure why exactly people (I'm thinking of a few in particular, but this applies to many in the field) tell very detailed stories of AI domination like "AI will use protein nanofactories to embed tiny robots in our bodies to destroy all of humanity at the press of a button." This seems like a classic case of the conjunction fallacy, and it doesn't seem like those people really flinch from the word "and" the way the Sequences tell them they should.
Furthermore, it seems like people within AI alignment aren't taking the "sci-fi" criticism as seriously as they could. I don't think most people who have that objection are saying "this sounds like science fiction, therefore it's wrong." I think they're more saying "these hypothetical scenarios are popular because they make good science fiction, not because they're likely." And I have yet to find a strong argument against the latter form of that point.
Please let me know if I'm steelmanning this incorrectly, or if I'm missing something fundamental here.
Some figures within machine learning have argued that the safety of broad-domain future AI is not a major concern. They argue that since narrow-domain present-day AI is already dangerous, it should be our primary concern, rather than hypothetical future systems. But it doesn't have to be either/or.
Take climate change. Some climate scientists study the future possibilities of ice shelf collapses and disruptions of global weather cycles. Other climate scientists study the existing problems of more intense natural disasters and creeping desertification. But these two groups don't fight over which one is "more important." Instead, both can draw from a shared body of knowledge and respect each other's work as valuable and relevant.
The same principle applies to machine learning and artificial intelligence. Some researchers focus on remote but high-stakes problems like the alignment of artificial general intelligence (AGI). Others focus on smaller-scale but nearer-term concerns like social media radicalization and algorithmic bias. Both areas are important in their own ways, and each has much to learn from the other. However, given how few resources have been put into AGI alignment compared to nearer-term research, many experts in the field feel that alignment research is currently more worthy of attention.
(tech executives, ML researchers)
You wouldn't hire an employee without references. Why would you make an AI that doesn't share your values?
(policymakers, tech executives)
The future is not a race between AI and humanity. It's a race between AI safety and AI disaster.
(policymakers, tech executives)
Sorry; I thought I had used the "Question" type.