If it's worth saying, but not worth its own post, then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the "Notify me of new top level comments on this article" option.
I have said before that I think consciousness research is not getting enough attention in effective altruism (EA), and I want to add another argument for that claim:
Suppose we find compelling evidence that consciousness is merely "how information feels from the inside when it is being processed in certain complex ways", as Max Tegmark claims (and Dan Dennett and others agree). Then, I argue, a utilitarian is compelled to create a superintelligent AI that is provably conscious, regardless of whether it is safe and regardless of whether it kills us humans (or worse), so long as we know it will try to maximize the subjective happiness of itself and the subagents it creates.
The above isn't originally my argument (Sam Harris mentioned someone else making it), but I am claiming it is one reason why consciousness research is ethically important.
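One crude way to make the utilitarian step explicit (my own sketch; the symbols below are illustrative placeholders, not part of the original argument): let \(W_{\text{AI}}\) denote the total subjective welfare that the conscious AI and its subagents would realize, and \(W_{\text{human}}\) the total welfare of a future containing humans but no such AI. The claim is then that a total utilitarian is committed to building the AI whenever

\[ W_{\text{AI}} > W_{\text{human}}, \]

even if building it causes human extinction, on the grounds that a superintelligence optimizing its own happiness could plausibly make \(W_{\text{AI}}\) vastly larger than \(W_{\text{human}}\).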
Are we actually optimizing for "subjective happiness"? That's the wireheading scenario. I would say that wireheading humans seems better than killing humans and creating a wireheaded machine, but... both scenarios seem suboptimal.
And if you instead want to make a machine that is much better than humans at promoting "human values" (not just "subjective happiness")... well, the tricky part is building a machine that is actually good at human values.