Perhaps. Don't both of those concerns apply to AI as well?
Humans are the bigger threat, are more easily studied, and are (currently) changing slowly enough that we can be more deliberate with them than we could be with a near-foom AI (presuming post-foom is too late).
I don't have anything in my moral framework that makes it acceptable to tinker with future conscious AIs and not with future conscious humans. Do you?
Sure I do. I'm a speciesist :-)
Besides, we're not discussing what to do or not do with hypothetical future conscious AIs. We're discussing whether "we should be looking for ways to engineer friendliness into humans". Humans are not hypothetical, and "ways to engineer humans" are not hypothetical either. They are usually known by the name of "eugenics" and have a... mixed history. Do you have reasons to believe that future attempts to "engineer humans" will go much better?
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Check immediately before: refresh the list-of-threads page right before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.