MrMind comments on Open thread, August 5-11, 2013 - Less Wrong Discussion

3 Post author: David_Gerard 05 August 2013 06:50AM

Comment author: MrMind 06 August 2013 07:46:02AM 1 point

Eliezer assumes in the metaethics sequence that you cannot really ever talk from outside your general moral frame. By that assumption (which I think he still makes), an indifferent AI would be either friendly or inactive. "Unfriendly AI" better conveys its externality to human morality.

Comment author: mwengler 07 August 2013 03:09:25PM -2 points

Perhaps you can never get all the way out.

But certainly someone who talks about human rights and values the survival of the species is speaking less constrained by a moral frame than somebody who values only her race, her nation, or her clan, and regards all other humans as though they were another species competing with "us."

How wrong am I to incorporate AI into my idea of "us," with the possible result that I enable a universe where AI might thrive even without what we now think of as humans? Would this not be analogous to a pure Caucasian supporting values that lead to a future light-brown human race, one with no pure Caucasians left in it? Would this Caucasian have to be judged to have committed some sort of CEV-version of genocide?

Comment author: Armok_GoB 13 August 2013 11:14:14PM 0 points

"AI" is really all of mindspace except the tiny human dot. There's an article about it around here somewhere. PLENTY of AIs are indeed correctly incorporated in "us", and indeed unless things go horribly wrong "what we now think of as humans" will be extinct and replaced with these wast and alien things. Think of daleks and GLADoS and chuthulu and Babyeaters here. These are mostly as close to friendly as most humans are, and we're trusting humans to make the seed FAI in the first place.

Unfriendly AIs are not like that. The process of evolution itself is basically a very stupid UFAI. Or a pandemic. Or the intuition pump in this article: http://lesswrong.com/lw/ld/the_hidden_complexity_of_wishes/ . Or even something like a supernova. It's not a character, not even an "evil" one.

((Yeah, this is a gross oversimplification; I'm aiming mostly at causing true intuitions here, not causing true explicit beliefs. The phenomenon is related to metaphor.))