Physicist and AI researcher with a passion for philosophy. Interested in consciousness, models of reality, meditation, and how we should choose to live and act. Writing over at https://pursuingreality.com
I actually agree that it's likely an AGI will at least start out thinking in a way somewhat similar to a human, but that even so it will still be very difficult to align. I really recommend you check out Understand by Ted Chiang, which plays out essentially the scenario you mentioned -- a normal guy gains superhuman intelligence and chaos ensues.
Thanks for the comment. I'll read more on the distinction between inner and outer alignment; that sounds interesting.
I don't think you would need anywhere near a perfect simulation to start having extremely good predictive power over the world. We're already seeing this in graphics and physics modeling.
I think this is a good point, although these are cases where lots of data is available. So any case where you don't have the data ready would still pose more difficulty. Off the top of my head I don't know how limiting this would be in practice, but I expect it would be limiting in plenty of cases.
This is a great comment, but you don't need to worry that I'll be indoctrinated!
I was actually using that terminology a bit tongue-in-cheek, since I notice exactly what you describe about the religious fervour of some AI alignment proponents. The general attitude and vibe of Yudkowsky and others is one of the main reasons I was suspicious of their arguments for AI takeoff in the first place.