
The Ethics of AI and Its Effect On Us

-1 cameroncowan 30 June 2016 02:18AM

As many of you know, I've been pretty passionate about the melding of science and philosophy through AI. I've often asked questions on this forum about thought, culture, how to decide what an AI should do, and how to keep an AI friendly. Fortunately, Slate.com has given me yet another opportunity to wax on the subject again (unpoetically).

 

The article is here

One of the subjects of the article is an interesting case study in what the future might hold for AI.

Saqib Shaikh is a Microsoft engineer working on one of the company's many AI projects. His claim to fame is using AI and machine learning to create smart glasses that compensate for the sight he lost as a child. Rather than improve his sight, the glasses translate his surroundings into sound, and he navigates by listening: from avoiding a skateboarder to finding his family, he gets about. It is an excellent example of how assistive technologies could become an important part of our lives.

Ethical AI is a rather large topic, and it is something we will have to deal with sooner rather than later. The writer reminds us that we need greater collaboration on AI. This could be challenging thanks to nativism, Brexit, and the nationalist sentiment sweeping the world. At the moment when we really need to come together on a technology that could solve a host of problems and help us make really great decisions, we're coming apart at every seam. But that's another story for another time.

We tend to have pop culture notions about AI. As the article states, we're somewhere between HAL and Siri in terms of our understanding and acceptance of AI in our lives. But the fact is that everyday items are getting smarter and more infused with technology, giving these devices the chance to do more for us than ever before.

Obviously, if we are creating something as smart and complex as we are, we want some assurance that it will behave in a way we find acceptable. We refer to this boundary on behavior as an ethic, and we hope for a friendly AI. The problem with ethics is that it is subjective, requires judgement and discernment, and is deeply cultural. What we think is right in the West might not hold in parts of Asia, where the civilizational ethic is very different. How is AI going to respond to those nuances?

We can find some universal truths that most cultures can agree on:

We agree not to kill each other, and we punish those members whom we catch doing so.

We generally try to work cooperatively, either directly or indirectly (directly in a hunting band, indirectly through economies of scale).

We aren't violent towards each other, and we punish members who commit violence against another human or against that human's belongings and home.

When it comes to social graces, we won't have to worry about those with the AI we're likely to encounter in our lifetimes.

However, the ability to do "the right thing" will get complicated once we get past "perform this task, then that task, then that task, and report back to me." If an AI is doing legal discovery, can it tell nuance? Can it secure the data so as not to reveal any information to anyone else (computer or human)? And in scenarios where certain information cannot be used, or can only be used in certain contexts, is the AI smart enough to recognize those situations and act in an ethical manner we would approve of? The toy sketch below gives a flavor of the rule-following baseline such an AI would have to go beyond.
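
To make that concrete, here is a minimal, purely illustrative Python sketch. Every rule, tag, document, and context in it is invented for this example, and real e-discovery systems are of course far more involved. It shows only the easy part: an assistant that withholds a document unless the rules attached to it are satisfied in the current context.

    # Toy illustration only: context-gated release of documents in discovery.
    # Every rule, tag, and document below is a hypothetical example.

    PRIVILEGE_RULES = {
        # A tagged document may be released only if its rule passes in this context.
        "attorney_client": lambda ctx: ctx.get("requester") == "court" and ctx.get("privilege_waived", False),
        "medical": lambda ctx: ctx.get("purpose") == "damages_assessment",
    }

    def may_release(document, context):
        """True only if every restricted tag on the document is permitted in this context."""
        return all(PRIVILEGE_RULES[tag](context) for tag in document["tags"] if tag in PRIVILEGE_RULES)

    def run_discovery(documents, context):
        """Release permitted documents; withhold the rest rather than guess."""
        released, withheld = [], []
        for doc in documents:
            (released if may_release(doc, context) else withheld).append(doc["id"])
        return released, withheld

    docs = [
        {"id": "email-001", "tags": []},
        {"id": "memo-007", "tags": ["attorney_client"]},
    ]
    print(run_discovery(docs, {"requester": "opposing_counsel"}))
    # -> (['email-001'], ['memo-007'])

Recognizing nuance means handling exactly the cases this sketch cannot: documents nobody remembered to tag, contexts the rules never anticipated, and judgement calls about when withholding itself causes harm.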

The article talks about trust and that we have to build trust into AI systems. This is where I think culture is vitally important. We humans are as much a product of our culture as anything else. How can we infuse human culture into AI?

 

To this the article says:

"A few people are taking the lead on this question. Cynthia Breazeal at the MIT Media Lab has devoted her life to exploring a more humanistic approach to artificial intelligence and robotics. She argues that technologists often ignore social and behavioral aspects of design. In a recent conversation, Cynthia said we are the most social and emotional of all the species, yet we spend little time thinking about empathy in the design of technology. She said, “After all, how we experience the world is through communications and collaboration. If we are interested in machines that work with us, then we can’t ignore the humanistic approach.”

 

The article has a few ideas about how to proceed with AI. They include:

Transparency

Assistive

Efficient

Intelligent privacy

Accountability

Unbiased

 

I think this provides a helpful framework, but as the article closes, the writer brings up something I think is vital: the transition from "labor saving and automated" to making and creation. Is it not better to keep 15 people employed with assistive AI than to displace those 15 workers with machines that simply do the job with minimal oversight?

 

A few ideas:

Might creating Myers-Briggs personalities for AI help with ethical decisions?

Might a look at the Enneagram be helpful as well?

Can we control an AI by creating a system of motivations that causes it to generally work in an ethical way?

Can we remove excess desires so that the AI is motivated only to be helpful to humans, and how do we create boundaries that stop an AI from causing harm of a violent or traumatizing nature? (A toy sketch after this list shows one way to think about such boundaries.)
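
Here is a minimal toy sketch of that last idea; everything in it (the actions, the helpfulness scores, the harm flags) is invented purely for illustration. The agent picks the most helpful permitted action, but harm acts as a hard veto rather than a cost to be traded off.

    # Toy illustration only: a "system of motivations" with hard ethical boundaries.
    # All actions, helpfulness scores, and harm flags are invented for this example.

    from dataclasses import dataclass

    @dataclass
    class Action:
        name: str
        helpfulness: float   # how well the action serves the human's request
        causes_harm: bool    # boundary flag: violent or traumatizing outcomes

    def choose_action(candidates):
        """Pick the most helpful action, treating harm as a veto, not a trade-off."""
        permitted = [a for a in candidates if not a.causes_harm]
        if not permitted:
            return None  # refuse to act rather than cross the boundary
        return max(permitted, key=lambda a: a.helpfulness)

    options = [
        Action("summarize the report", helpfulness=0.7, causes_harm=False),
        Action("invent alarming details to get attention", helpfulness=0.9, causes_harm=True),
    ]
    best = choose_action(options)
    print(best.name if best else "refuse")   # -> summarize the report

The design point is that the boundary is a constraint rather than one more motivation, so no amount of helpfulness can buy its way past it; the genuinely hard problem, of course, is deciding what counts as harm and who sets that flag.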

 

I hope this sparks interesting discussion! Let the discussion begin!

Comment author: Huluk 26 March 2016 12:55:37AM *  26 points [-]

[Survey Taken Thread]

By ancient tradition, if you take the survey you may comment saying you have done so here, and people will upvote you and you will get karma.

Let's make these comments a reply to this post. That way we continue the tradition, but keep the discussion a bit cleaner.

Comment author: cameroncowan 21 April 2016 11:47:54PM 6 points [-]

I took the survey for the 2nd year in a row. Can't wait to see the results.

In response to LessWrong 2.0
Comment author: cameroncowan 12 December 2015 11:08:53PM 0 points [-]

There are a few thoughts here. I mostly came here to read and educate myself on the rationality movement and this sort of thing. I think that LW is a tremendous resource, and that its information should be collected and transported to a place like Medium where it can be read and experienced. As it stands it is very intimidating, and I think everything can be organized into three broad categories: Philosophy, Technical, and Theoretical.

In short, it's time for some media organization and distribution in such a way that people can experience it in a holistic way.

I think it's also time to do some marketing and outreach. What is our message? How can we articulate that message? How can we organize around CFAR and MIRI to do that? It's not just AI but rationality in general. Groups are helpful, but there are plenty of online resources to allow people to interact in their own way and make small contributions through conversation. It's not just improving this platform but making the amazing work that is happening here as accessible as we can. When someone writes something technical, an effort can and should be made to turn it into something much more for people to take in and incorporate. The Sequences are intimidating; how can we break them down into something digestible?

As for community, groups are a good idea to foster and support but creating online accountability groups and such can also be helpful. A social network aspect of this may be helpful as well. How can we provide support for a rationality community? How can we foster greater contributions?

As for helping people, I think digestible articles, videos, and blog posts have to be created to make this material really accessible in a fun and exciting way that people can actually use. Could we create some teams for this?

Just putting out ideas here. I think there can be life in LW if we create something new and novel.

I know I quit commenting because there just wasn't much going on.

Comment author: cameroncowan 06 October 2015 08:07:58PM 1 point [-]

I took more leisure time away from the big business of The Cameron Cowan Show.

Comment author: johnsonmx 28 September 2015 10:14:56PM 0 points [-]

A rigorous theory of valence wouldn't involve cultural context, much as a rigorous theory of electromagnetism doesn't involve cultural context.

Cultural context may matter a great deal in terms of how to build a friendly AGI that preserves what's valuable about human civilization-- or this may mostly boil down to the axioms that 'pleasure is good' and 'suffering is bad'. I'm officially agnostic on whether value is simple or complex in this way.

One framework for dealing with the stuff you mention is Coherent Extrapolated Volition (CEV)- it's not the last word on anything but it seems like a good intuition pump.

Comment author: cameroncowan 28 September 2015 11:35:56PM *  0 points [-]

And I guess I'm saying that the sooner we think about these sorts of things, the better off we'll be. Going for pleasure good/suffering bad reduces the mindset of the AI to about two years old. Cultural context gives us a sense of maturity, valence or no.

Comment author: cameroncowan 26 September 2015 05:16:38AM 3 points [-]

You should read The Big Sort by Bill Bishop; he talks about how in America we are literally, physically moving to areas that favor our political and social ideas. This makes local control easy and national control impossible.

Comment author: cameroncowan 26 September 2015 05:15:47AM 2 points [-]

I can't apply for the News Editor job as I am too busy with my own work but I would like to contribute and perhaps help with promotion across The Cameron Cowan Show network. Let's chat: cameron@cameroncowan.net

Comment author: cameroncowan 26 September 2015 05:14:25AM 0 points [-]

Where is the cultural context in all of this? How does that play in? Pain and pleasure here in the West are different than in the East, just as value systems are different. When it comes to creating AGI, I think a central set of agreed-upon tenets is important. What is valuable? How can we quantify that in a way that makes sense for creating AGI? If we want to reward it for doing good things, we have to consider cultural validation. We don't steal, murder, or assault people because we have significant cultural incentive not to do so, especially if you live in a stable country. I think that could help. If we can somehow show group approval of the AGI (favorable opinions, verbal validation, and other things that it intrinsically values as we do), we could use our own culture to reinforce norms within its architecture.

Comment author: cameroncowan 26 September 2015 05:10:25AM 0 points [-]

We are the people who knew too much.....

Comment author: cameroncowan 29 August 2015 04:15:13AM 0 points [-]

What is your measure? Does it stem from the lack of satisfaction in their work? Their lack of analysis? I feel like word count is not necessary. Zizek is also very accessible because he works in Lacanian psychoanalysis....I need more data!
