I think it's an interesting thing to do, but I must also point out that at some point the singularity will arrive, and the work of today's artists or writers will be of little value to society in a couple of decades. We also don't know how the future will shape up, so the chances of producing something worth preserving for the future are in doubt. But it's totally doable for the coming 3-5 years.
Power is addictive. Why do you think people get addicted to cocaine? Because cocaine gives them the illusion that they can do whatever they want. In the case of politicians, they have the power to control people's lives and the path society chooses to follow. I don't expect you to understand it, since you haven't been in their position, but ask yourself: how would you feel if you were the only breadwinner in the house and you switched positions with your wife/husband? You would have fun staying home and doing nothing most of the time, but you would lose the power of control.
First of all, I admire narrow AI more than AGI. I can't prove that it won't affect us, but your logic fails because of the phrase "then causes sterility or death." You just assumed that the AI will cause death or sterility out of the blue, and that's totally false. The beauty of narrow AI is that there will be a human in the middle who can control what the AI outputs, unlike an AGI, which is totally uncontrollable.
The way AI is going, our aim is to reach general intelligence or to mimic the human brain at some point. I just want to differentiate that from the AI we know today. If we assume that, then there are two end points we might reach. One is that we are not as smart as we think and we make an "intelligent" being that is actually stupid, and that stupid being has the tools it needs to destroy us and can harm us at any time. The second option is that we really are smart and we create the intelligent being we have always dreamed about.

Think about it: the system we build would surely be so complex that the smallest change could trigger a big chain reaction. We might start building robots, and one robot might have a malfunction, just like the malfunctions the car industry has faced. Now think of the consequences the world might face. The AI we build would surely outsmart us, and if it can think evil, who is to say it won't treat us like we treat ants? Is there a guarantee? No, would surely be the answer, and I don't think we should pursue it, because either way we go, the result is deadly.