I've recently been thinking about, and writing a post on, a potential AGI architecture that seems possible to build with current technology within 3 to 5 years, or even faster if significant effort is put toward that goal.
That is a bold claim, and the architecture may well turn out not to be feasible, but it got me thinking about the memetic hazard posed by posts like this.
It may well be that some architecture exists which combines current AI techniques in a way that produces AGI; if so, should we treat it as a memetic hazard? And if we should, what is the right course of action?
I'm thinking the best course is to discuss it privately with the AI safety crowd, both to assess its feasibility and to start working on how to keep this particular architecture aligned (a much easier task than aligning something when you don't even know what it will look like).
What are your thoughts on this matter?