If Aryeh or another editor smarter than me sees fit to delete this question, please do, but I am asking genuinely. I'm a 19-year-old college student studying mathematics, and I've been floating around LW for about 6 months.
How does the difficulty of understanding consciousness compare to the difficulty of aligning an AI? If a conscious AGI could be created that correlates positive feelings* with the execution of its utility function, is that not a better world than one with an unconscious AI and no people?
I understand that there are many other technical problems implicit in ...
Consciousness is a dissolvable question, I think. People are talking past each other about several distinct real structures: the Qualia Research Institute people are trying to pin down valence, for example; a recent ACX post was discussing the global workspace of awareness; etc.
Alignment seems like a much harder problem: you're not just dealing with some confused concepts, but trying to build something under challenging conditions (e.g., only one shot at the intelligence explosion).