
Dagon comments on Open thread, Oct. 19 - Oct. 25, 2015 - Less Wrong Discussion

3 Post author: MrMind 19 October 2015 06:59AM


Comments (198)

Comment author: Lumifer 21 October 2015 07:57:50PM  2 points

I don't have anything in my moral framework that makes it acceptable to tinker with future conscious AIs and not with future conscious humans. Do you?

Sure I do. I'm a speciesist :-)

Besides, we're not discussing what to do or not to do with hypothetical future conscious AIs. We're discussing whether "we should be looking for ways to engineer friendliness into humans". Humans are not hypothetical and "ways to engineer <desirable feature> into humans" are not hypothetical either. They are usually known by the name of "eugenics" and have a... mixed history. Do you have reasons to believe that future attempts to "engineer humans" will be much better?

Comment author: Dagon 22 October 2015 01:36:18PM  -1 points

Sure I do. I'm a speciesist :-)

I probably am too, but I don't much like it. I want to be a consciousness-ist.

Most humans are hypothetical, just like all AIs are. They don't exist yet, and may never exist in the forms we imagine. Much as MIRI is not recommending termination of any existing AIs, I am not recommending termination of existing humans.

I am merely pointing out that most of what I've read about FAI goals seems to apply to future humans as much as, or more than, to future AIs.