Bara Hanzalova

Comments

Are you going to implement a critical-reading process that would continuously mitigate biases in your work? If so, how?

What do you think of a successor AI that does the following?

- Collects data on one's wellbeing ('height' via a visual analog scale and 'depth' by assessing one's understanding of the rationale for their situation), impact (thinking and actions toward others), and connections (to verify impact through network analysis and wellbeing data, and to predict populations' welfare).
- Motivates decreases in the future generations of suffering groups.
- Rewards individuals whose impact is increasing or above a certain level.
- Withdraws benefits from (i.e. decreases the wellbeing of) individuals whose impact is decreasing and below a certain level.
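A minimal sketch of the reward/withdrawal rule described above, in Python. The field names, scoring scale, and threshold value are hypothetical illustrations; the comment only specifies "a certain level", not a concrete cutoff.

```python
from dataclasses import dataclass

@dataclass
class PersonRecord:
    # Hypothetical fields mirroring the data the question describes.
    wellbeing_height: float   # e.g. 0-10 rating on a visual analog scale
    wellbeing_depth: float    # understanding of the rationale for one's situation
    impact_now: float         # current measured impact (thinking/actions toward others)
    impact_previous: float    # impact at the previous assessment

IMPACT_THRESHOLD = 0.5  # hypothetical cutoff standing in for "a certain level"

def successor_ai_decision(p: PersonRecord) -> str:
    """Apply the reward/withdrawal rule sketched in the question."""
    increasing = p.impact_now > p.impact_previous
    decreasing = p.impact_now < p.impact_previous
    above_threshold = p.impact_now >= IMPACT_THRESHOLD

    if increasing or above_threshold:
        return "reward"
    if decreasing and not above_threshold:
        return "withdraw benefits / decrease wellbeing"
    return "no action"
```

Note that the rule as stated leaves a gap: an individual whose impact is flat and below the threshold falls under neither the reward nor the withdrawal condition, which the sketch handles with a "no action" default.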