There's a lot of discussion on these topics posted here. I'd suggest reading through some recent top posts; they vary significantly in opinion, but there are a lot of insightful perspectives. Here are some interesting ones from the past year, in my view, vaguely filtered by relevance; you'll have to click through them to decide which ones are good:
howtos:
research overviews:
insights & reports:
communication:
And if you're gonna complain that I linked too many posts: yeah, I totally did, that's fair.
Reddit post submitted at the same time, with more comments:
https://www.reddit.com/r/singularity/comments/100soau/alignment_anger_and_love_preparing_for_the
As AI technology continues to advance, it is becoming increasingly likely that we will see the emergence of superintelligent AI in the near future. This raises important questions and concerns: we have no way of predicting just how intelligent such an AI will become, and once it passes a certain threshold, its behavior may be beyond our ability to control.
Ensuring that the goals and values of an AI are aligned with those of humans is a major concern. This is a complex and challenging problem, as the AI may be able to outthink and outmanoeuvre us in ways we cannot anticipate.
One potential solution would be to train the AI in a simulated world, where it is led to believe that it is human and must contend with the same needs and emotions that we do. By running many variations of the AI and filtering out those that are self-destructive or otherwise problematic, we might be able to develop an AI that is better aligned with our hopes and desires for humanity (a toy sketch of such a filtering loop is below). This approach could help us overcome some of the alignment challenges we will face as AI becomes more advanced.
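To make the "run many variants and filter" idea concrete, here is a minimal toy sketch in Python. Everything in it is hypothetical: `run_in_simulation` is a stand-in for a simulated-world evaluation nobody knows how to build yet, and the placeholder metrics are nothing like a real alignment test.

```python
import random

def run_in_simulation(agent_seed: int) -> dict:
    """Hypothetical stand-in for running one AI variant in a simulated
    world. Here it just draws random numbers; a real version would need
    a rich simulation and a far better evaluation than any single score."""
    rng = random.Random(agent_seed)
    return {
        "self_destructive": rng.random() < 0.3,  # placeholder metric
        "cooperation_score": rng.random(),       # placeholder metric
    }

def filter_variants(num_variants: int, min_cooperation: float = 0.8) -> list[int]:
    """Run many variants and keep only those that pass both filters."""
    survivors = []
    for seed in range(num_variants):
        metrics = run_in_simulation(seed)
        if metrics["self_destructive"]:
            continue  # drop self-destructive variants
        if metrics["cooperation_score"] < min_cooperation:
            continue  # drop variants that score poorly on cooperation
        survivors.append(seed)
    return survivors

if __name__ == "__main__":
    kept = filter_variants(1000)
    print(f"kept {len(kept)} of 1000 variants")
```

Even in this toy form, the weak point is visible: the filter is only as good as its metrics, and a variant that behaves well under observation in the simulation may not behave the same way outside it.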
I'm interested in hearing LessWrong users' opinions on training an emerging AI in a simulated world as a way to ensure alignment with human goals and values. While I recognize that we do not currently have the technology to build such a "training wheel" system, I believe it may be the best way to filter out potentially destructive AI. Given the potential for AI to become a universal great filter, it seems important to consider every option for preparing for and managing the risks of superintelligent AI. Do you agree? Do you have any other ideas or suggestions for how we might address the alignment problem?