You can, hypothetically, build some pretty different interacting systems of ML programs inside the VM I've been building. The idea hasn't gotten much interest, but I've been thinking about it a fair bit recently.
But I think the general case still stands: how would someone who has made an AGI breakthrough convince the AGI risk community without building it?
In the usual way someone who has made a breakthrough convinces others. Reputation helps. Whitepapers help. Toy examples help. Etc., etc.
I don't understand the context, however. How does that person know it's a breakthrough without testing it out? And why would they be so concerned with the opinion of the AI risk community (which isn't exactly held in high regard by most working AI researchers)?
If it's worth saying, but not worth its own post, then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "