
I want to raise some questions that I find crucial to consider about present-day technology. In contrast with Friendly AI, these questions are about our interaction with technological tools rather than about developing a technology that we trust on its own with superhuman intelligence.

 

1. How are computational tools affecting how we perceive, think, and act?

The inspiration for this post is Bret Victor's new talk, The Humane Representation of Thought. I highly recommend it. In particular, you may want to pause and reflect on the first part before seeing his sketch of solutions in the second. In a nutshell, we have a certain range of human capacities. The use of computing as a medium propels us to develop and value particular capacities: the visual and the symbolic. Others have discussed how it diminishes our attention span, our decision-making capacity, or our cultural expectations of decency. Victor's term for this is "inhumane". He argues that the default path of technological progress has certain properties, but preserving humaneness is not one of them.

The FAI discussions seem to miss both sides of the coin on this phenomenon. First, that computation, even though it does not yet exist as a superintelligent entity, already imposes values. Second, that human intelligence is not a static target: humanity can only reasonably be defined as including the tools we use (humanity without writing, or humanity without agriculture, would be very different), so human intelligence changes along with computation.

In other words, can we design computation now such that it carries us humans to superintelligence? Or, at the very least, such that it doesn't diminish our intelligence and life experience? What are the answers when we ask these questions of technology?

 

2. How can humans best interact with machines with superhuman aspects of intelligence?

There are already machines with superhuman aspects of intelligence, in applications such as chess, essay grading, and image recognition. By the very definition of superhuman intelligence, these systems are deployed without our fully understanding how they work. For instance, we don't really understand how a machine learning algorithm trained on an unfathomable amount of data reaches its conclusions. Even if we can prove certain mathematical properties about the behavior, it will be impossible to empathize with the full range of a computer's decision space. Consider how certain nonsensical images trick image recognition algorithms, as in the sketch below. Increased machine intelligence will only become harder to predict while having a greater impact.
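To make this fragility concrete, here is a minimal sketch of the gradient-sign idea behind such tricks, applied to a toy linear classifier. Everything in it (the weights, the input, the epsilon) is an illustrative assumption of mine, not a real image-recognition system:

```python
import numpy as np

# Toy "image" classifier: a single linear unit, score = w . x + b.
# All values here are illustrative assumptions, not a real vision model.
rng = np.random.default_rng(0)
d = 1024                       # pretend this is a 32x32 grayscale image
w = rng.normal(size=d)         # fixed "trained" weights
b = 0.0

x = rng.normal(size=d) * 0.1   # an input the model classifies one way
score = w @ x + b
label = score > 0

# Gradient-sign perturbation: for a linear model the gradient of the score
# with respect to the input is just w, so nudging every pixel a tiny step
# against the sign of w moves the score as fast as possible per unit change.
eps = 0.01                     # per-pixel change, small next to the input
x_adv = x - eps * np.sign(w) * (1 if label else -1)

print("original score:", score, "-> label:", label)
print("perturbed score:", w @ x_adv + b, "-> label:", (w @ x_adv + b) > 0)
print("max per-pixel change:", np.abs(x_adv - x).max())
```

A change of at most 0.01 per pixel, an order of magnitude smaller than the input values themselves, flips the classification. Deep networks are far more complex, but published attacks exploit the same local-gradient structure, which is part of why nonsensical images can fool them.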

Luckily, today and in the foreseeable future, we don't simply press a button and let computers run and act indefinitely on their own. Computing is an interactive process. That means there are human-to-machine and machine-to-human channels of communication--commonly called interfaces--that shape our human-machine coevolution. This idea is present throughout our lives, yet it is a major disruption that we take for granted.

One example of a machine intelligence interface: LightSide Labs, which does automated grading, has a tool that allows students to submit multiple drafts, each time seeing the computer's analysis along different dimensions (their example has development, language, clarity, and evidence). Other than revising the essay, though, there's no opportunity for human-to-machine communication. The student can't say, "I'm not sure why you rated my evidence low. You might want to look at such-and-such historical document."
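To make the missing channel vivid, here is a hypothetical sketch of what a two-way grading interface could look like. None of these names come from LightSide Labs; the EssayFeedback class, its dimensions, and the contest method are all invented for illustration:

```python
from dataclasses import dataclass, field

# Hypothetical two-way grading interface. Every name here is invented
# for illustration; it does not describe LightSide Labs' actual product.

DIMENSIONS = ("development", "language", "clarity", "evidence")

@dataclass
class EssayFeedback:
    scores: dict                                   # machine-to-human ratings
    contests: list = field(default_factory=list)   # human-to-machine replies

    def contest(self, dimension: str, justification: str) -> None:
        """Let the student push back on a rating, instead of being limited
        to silently resubmitting a revised essay."""
        assert dimension in DIMENSIONS, f"unknown dimension: {dimension}"
        self.contests.append((dimension, justification))

feedback = EssayFeedback(scores={"development": 4, "language": 5,
                                 "clarity": 4, "evidence": 2})
feedback.contest("evidence",
                 "Paragraph 3 quotes the primary source directly; "
                 "please weigh such-and-such historical document.")
print(feedback.contests)
```

Even this toy version makes the asymmetry visible: the scores flow outward automatically, while the contests list goes nowhere unless someone builds the machine side to read it.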

Generally, it is only the programmers who have such control over the machine. Even then, programming is a highly uncertain domain. Better programming languages and tools make strides on both ease of use and predictability, but we seem a long way off from safe and powerful machine communication being available to the lay user (i.e., end-user programming).

In this regard, FAI--because of its focus on the intelligence explosion--skips the more obvious step of communication as a means of guiding the path. Parents don't give birth to children with provable value systems; they use discussion, and they send them to institutions like school and church to perform that duty.

 

It may be true that these concerns would be dwarfed by an intelligence explosion, but they grow more pressing on the path to get there. They live in existing domains like UI design and human-computer interaction (if you are new to these fields, I recommend The Design of Everyday Things or The Inmates Are Running the Asylum) and in others I'm less familiar with, like media studies and the study of technology and society. However, I think these fields need more connections to deep knowledge of machine intelligence.

 

Am I missing anything in my framing of the problem, or is it better covered by an existing framework? How can we contribute?

 

Edit: Changed the first paragraph to de-emphasize the coining of the "FUI" term. Now it's just the title of the post. Proceed!

8 comments

I'm introducing the term Friendly User Interface to complement Friendly AI as a possible area of interest for the LessWrong community.

Speaking only to this: every new piece of jargon comes with -100 points. Does the usefulness of this term overcome -100 points?

Perhaps I overemphasized the "term introduction". Since the first two comments seem to be questioning whether this term and grouping of ideas should exist at all, I'm now wishing I could go back and frame the post as, "Is anyone here thinking about these kinds of things?" Once the activity and attention of the community are better resolved, I could re-examine whether any part of it is worth promoting or rebranding.

In fact, I'm just going to edit out that bit to de-emphasize the term itself.

I think it is a very good question. Forget ideas you may have had about UX 10 or 20 years ago. Google is a user interface to the rest of the internet. "Unfriendly" might not be the word for it, but the impression that it is there to serve me is an illusion. It is becoming too much like the "friendly" used car salesman.

Whatever we want to access on the internet is increasingly mediated by highly intelligent interfaces that have their own agendas, and I doubt we have thought enough about what constraints it would take to keep these agendas from getting out of hand. In a worst-case scenario, these agents might systematically mislead people so as to hide some uncontrollable super-agent being put into place. It is the old agency problem. The attempt to impose ethics and good behavior on those we take to be our agents (doctors, lawyers, real estate agents, financial advisers) raises different questions from those aimed at most fellow beings. "Professional ethics" is a name for one sometimes-effective approach to the problem, and it imposes a whole other set of constraints than those we put on people's treatment of one another generally, so I think it is worth looking at from a special angle, one which might well be neglected by FAI generally.

This brings to mind the infamous case of Google censoring search results in China according to the government's will. That's an example of a deliberate human action, but examples will increasingly be "algorithmic byproduct" with zero human intervention. Unlike humans, the algorithm can't be questioned, intimidated by the media, or taken to a court of law.

Legally and professionally, I suppose the product team could be held responsible, but I definitely think there needs to be a push for more scrutinizable computation. (There have been discussions along these lines in terms of computer security. Sometimes open source is cited as a solution, but it hasn't necessarily helped--e.g., Heartbleed.)

What do you mean by "friendly" in this context? What's the added value over simply speaking about UX design or information architecture?

As I replied to solipsist, I now wish I had asked what experiences people here have at the intersection of interface design and machine intelligence and gone from there. I find UX design and the other fields I mentioned huge and nebulous--they could be as much about hex codes for button shadows as about "humane representations of thought"--and my post does not necessarily rein that in coherently.

You can't completely ignore issues such as choosing the right colors when you want to transfer information from computers to humans, but most of the discussion of UX is not about graphic design.