In response to Friendly UI
Comment author: HalMorris 29 December 2014 02:51:41AM 3 points [-]

I think it is a very good question. Forget ideas you may have had about UX 10 or 20 years ago. Google is a user interface to the rest of the internet. "Unfriendly" might not be the word for it, but the impression that it is there to serve me is an illusion. It is becoming too much like the "friendly" used car salesman.

Whatever we want to access on the internet is increasingly mediated by highly intelligent interfaces that have their own agendas, and I doubt we have thought enough about what constraints it would take to keep these agendas from getting out of hand. In a worst-case scenario, these agents might systematically mislead people so as to hide some uncontrollable super-agent being put into place. It is the old agency problem. The attempt to impose ethics and good behavior on those we take to be our agents (doctors, lawyers, real estate agents, financial advisers) raises different questions from those aimed at most fellow beings. "Professional ethics" is a name for one sometimes-effective approach to the problem, and it imposes a whole other set of constraints than those we put on people's treatment of one another generally, so I think it is worth looking at from a special angle which might well be neglected by FAI generally.

In response to comment by HalMorris on Friendly UI
Comment author: cicatriz 30 December 2014 04:08:26AM 1 point [-]

This brings to mind the infamous case of Google censoring search results in China according to the government's will. That's an example of a deliberate human action, but examples will increasingly be "algorithmic byproduct" with zero human intervention. Unlike humans, the algorithm can't be questioned or intimidated by the media or taken to a court of law.

Legally and professionally, I suppose the product team could be held responsible, but I definitely think there needs to be a push for more scrutinizable computation. (There have been discussions along these lines in terms of computer security. Sometimes open source is cited as a solution, but it hasn't necessarily helped--e.g. Heartbleed.)

In response to Friendly UI
Comment author: solipsist 28 December 2014 04:17:19PM 7 points [-]

I'm introducing the term Friendly User Interface to complement Friendly AI as a possible area of interest for the LessWrong community.

Speaking only to this. Every new piece of jargon comes with -100 points. Does the usefulness of this term overcome -100 points?

In response to comment by solipsist on Friendly UI
Comment author: cicatriz 28 December 2014 06:43:41PM 2 points [-]

In fact, I'm just going to edit out that bit to de-emphasize the term itself.

In response to Friendly UI
Comment author: ChristianKl 28 December 2014 04:34:28PM *  0 points [-]

What do you mean by friendly in this context? What's the added value over simply speaking about UX design or information architecture?

In response to comment by ChristianKl on Friendly UI
Comment author: cicatriz 28 December 2014 06:35:43PM 0 points [-]

As I replied to solipsist, I'm now wishing I had asked what experiences people here have at the intersection of interface design and machine intelligence and gone from there. I find UX design and the other fields I mentioned huge and nebulous--they could be equally about hex codes for button shadows as about "humane representations of thought"--but my post doesn't necessarily rein that in coherently.

In response to comment by solipsist on Friendly UI
Comment author: cicatriz 28 December 2014 06:26:19PM 3 points [-]

Perhaps I overemphasized the "term introduction". Since the first two comments seem to be questioning whether this term and grouping of ideas should exist at all, now I'm wishing I could go back and frame the post as, "Is anyone here thinking about these kinds of things?" Once the activity and attention of the community is better resolved, I could re-examine whether any part of it is worth promoting or rebranding.

Friendly UI

1 cicatriz 28 December 2014 03:06PM

I want to bring up some questions that I find crucial to consider about technology in the present day. To contrast with Friendly AI, these questions are about our interaction with technological tools rather than developing a technology that we trust on its own with superhuman intelligence.

 

1. How are computational tools affecting how we perceive, think, and act?

The inspiration for this post is Bret Victor's new talk, The Humane Representation of Thought. I highly recommend it. In particular, you may want to pause and reflect on the first part before seeing his sketch of solutions in the second. In a nutshell, we have a certain range of human capacities. The use of computing as a medium propels us to develop and value particular capacities: visual & symbolic. Others have discussed diminishing our attention span, decision-making capacity, or cultural expectations of decency. Victor's term for this is "inhumane". He argues that the default path of technological progress has certain properties, but preserving humaneness is not one of them.

The FAI discussions seem to miss both sides of the coin on this phenomenon. First, computation, even though it doesn't yet exist as a superintelligent entity, still imposes values. Second, human intelligence is not a static target: humanity can only reasonably be defined as including the tools we use (humanity without writing or humanity without agriculture are very different things), so human intelligence changes along with computation.

In other words, can we design computation now such that it carries us humans to superintelligence, or at the very least doesn't diminish our intelligence and life experience? What are the answers when we ask these questions of technology?

 

2. How can humans best interact with machines with superhuman aspects of intelligence?

There are already machines with superhuman aspects of intelligence, with applications such as chess, essay grading, or image recognition. By the very definition of superhuman intelligence, these systems are deployed without our fully understanding how they work. For instance, we don't really understand how a machine learning algorithm trained on an unfathomable amount of data reaches its conclusions. Even if we can prove certain mathematical properties about the behavior, it will be impossible to empathize with the full range of a computer's decision space. Consider how certain nonsensical images trick image recognition algorithms. Increased machine intelligence will only be harder to predict while having a greater impact.

Luckily, today and in the foreseeable future, we don't simply press a button and let computers run and act indefinitely on their own. Computing is an interactive process. That means there are human-to-machine and machine-to-human channels of communication--commonly called interfaces--that shape our human-machine coevolution. This idea is present throughout our lives, yet it is a major disruption that we take for granted.

One example of a machine intelligence interface: LightSide Labs, which does automated grading, has a tool that allows students to submit multiple drafts, each time understanding the computer's analysis along different dimensions (their example has development, language, clarity, and evidence). Other than changing the essay though, there's no opportunity for human-to-machine communication. The student couldn't say "I'm not sure why you rated my evidence low. You might want to look at such-and-such historical document."

Generally, it is only the programmers who have such control over the machine. Even then, programming is a highly uncertain domain. Better programming languages and tools make strides on both ease of use and predictability, but we seem a long way off from safe and powerful machine communication available to the lay user (i.e. end-user programming).

In this regard, FAI--because of its focus on intelligence explosion--skips the more obvious step of communication as a means of guiding the path. Parents don't give birth to children with provable value systems; they use discussion and send them to institutions like school and church to perform that duty.

 

It may be true that these concerns would be dwarfed by an intelligence explosion, but they are increasingly concerning on the path to get there. They live in existing domains like UI design and human-computer interaction (if you are new to these fields, I recommend The Design of Everyday Things or The Inmates Are Running the Asylum) and others I'm less familiar with like media studies and technology and society. However, I think these fields need more connections to deep knowledge of machine intelligence.

 

Am I missing anything in my framing of the problem, or is it better covered by an existing framework? How can we contribute?

 

Edit: Changed the first paragraph to de-emphasize the coining of the "FUI" term. Now it's just the title of the post. Proceed!

Comment author: imuli 03 December 2014 02:23:50PM 1 point [-]

I started getting programming gigs.

I've been writing programs and bug fixing for my other work and personal environment for twelve years, always loved programming more than the other work, but...

Comment author: cicatriz 04 December 2014 02:26:54AM 2 points [-]

Any suggestions of where to get programming gigs?

Comment author: cicatriz 03 December 2014 10:08:24PM *  13 points [-]

(Similar to Fluttershy) Culturally, there's a belief that college years are our formative years, and that we should be learning to be good, well-rounded (in the liberal arts sense) people. But college is a huge time and money commitment, and the job market is competitive, so I think college ought to be used strategically for advancement in academic or well-paved professional tracks (doctor, lawyer). My college, Harvey Mudd, had a noticeable emphasis on ethics in science and technology, with the humanities as a hearty side helping to technical topics. Ideally, ethics would be strategic for career advancement, but in the real world (software engineering) it never seems to come up in my job placement. Harvey Mudd should be a pretty good model, though, since they manage to make it work anyway. Alan Kay also suggested a technical and humanities double major (somewhere in that interview...).

The politics of college aside, here's my list of things to learn as soon as and by any means possible:

  • Rationality. Goes without saying here. In particular: using reasoning and empirical data for important questions. I just came across this today that decries the complete lack of empirical basis for programming language design (a topic that's collectively consumed hundreds of thousands of hours of debate, not to mention time developing mediocre solutions). You'll see the same thing in any field (at least fields that are mature enough to even ask the question).
  • Career & finance. Understanding that there's a game to both and having knowledge about those games can get you opportunities and money that you wouldn't otherwise. I recommend Ramit Sethi's material and Tony Robbin's new book.
  • Body & brain. You can often get away with research + rationality for a particular question, but it's good to have prior exposure to solutions to common problems: nutrition/fitness, body language, learning, mental health. For example, thinking "I'm depressed" then leads you to "it could be due to a nutritional or neurochemical imbalance, or fixed by changing some thinking habits" instead of "I'm depressed because I'm a failure at life."
  • Technical topics. If you want to make a contribution you really need to focus. Math is generally useful, but mostly as a symbolic and visual language rather than any particular deep math topic until you need it. Programming is often useful for automating technical tasks. I've observed people who study physics excelling in different topics. (Perhaps exposure to model building and data-driven theory testing. Perhaps selection bias.)
  • Philosophical and spiritual things. I've only started to respect this recently, but I've found value in Taoist, Buddhist, Catholic, and Stoic teachings. Here's someone else exploring a variety of areas.
  • Microeconomics and game theory come up a lot in the world and knowledge thereof may prevent you from making dumb "If I were in charge..." statements.

Lots of things I wish I knew more about still, like sociology/anthropology, politics, and history, where there are a lot of "why should I learn about this particular thing or another?" questions that are hard to answer on my own.

Comment author: Kaj_Sotala 17 August 2012 07:44:48AM 7 points [-]

Related idea: semi-computerized instruction.

To the best of my (limited) knowledge, while there are currently various computerized exercises available, they aren't that good at offering instruction of the "I don't understand why this step works" kind, and are often pretty limited (e.g. Khan Academy has exercises which are just multiple choice questions, which isn't a very good format). One could try to offer a more sophisticated system - first, present experienced teachers/tutors with a collection of the problems you'll be giving to the students, and ask them to list the most common problems and misunderstandings that the students tend to have with such problems. Then attempt to build a system which will recognize the symptoms of the most common misunderstandings and attempt to provide advice on them, also offering the student the opportunity to ask questions themselves using some menu system or natural language parser. (I know some existing academic work along these lines exists, I think applying Bayes nets to build up a model of the students' skills and understanding, but I couldn't find the reference in the place where I thought that I had read it.)
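The Bayes-net approach mentioned in the parenthetical is often implemented as Bayesian knowledge tracing: a two-state hidden Markov model of whether the student has mastered a skill. A minimal sketch, with illustrative parameter values (not fitted to any real data):

```python
def bkt_update(p_known, correct, p_slip=0.1, p_guess=0.2, p_learn=0.15):
    """Update P(skill known) after observing one answer.

    p_slip: P(wrong answer | skill known); p_guess: P(right answer | not known);
    p_learn: P(transition to known after this practice opportunity).
    These values are assumptions for illustration only.
    """
    if correct:
        # Bayes rule: P(known | correct answer)
        evidence = p_known * (1 - p_slip)
        posterior = evidence / (evidence + (1 - p_known) * p_guess)
    else:
        # Bayes rule: P(known | incorrect answer)
        evidence = p_known * p_slip
        posterior = evidence / (evidence + (1 - p_known) * (1 - p_guess))
    # The student may also learn from the opportunity itself.
    return posterior + (1 - posterior) * p_learn

p = 0.3  # prior probability the student already knows the skill
for answer in [True, False, True, True]:
    p = bkt_update(p, answer)
```

A tutoring system can then surface hints, remediation, or easier problems whenever the tracked mastery estimate for a skill stays low.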

Of course, there will frequently be situations where your existing database fails to understand the student's need. So you combine this with the chance to ask help online, either on a forum with other students, or one-on-one with a paid tutor in an interactive chat session. As the students' problems are resolved, the maintainers follow the conversations and figure out a way for the system to recognize the new problems in the future, either automatically or via the "ask a question" menus.

In particular, the system would be built so that having e.g. forgotten some of the prerequisites in a previous course wouldn't be a problem - if that happened, the system would just automatically lead you to partially rehearse those concepts enough that you could apply them to solve the current problem. At the same time, it could be designed so that all of the previous knowledge was being constantly drawn upon, thus providing a natural method for spaced repetition.

This method is naturally most suited for math-like subjects with clear right/wrong answers. But if one wanted to get really ambitious, they could eventually expand the system so as to create a single unified school course that taught everything that's usually taught in high school, abandoning the artificial limits between subjects. E.g. a lesson during which you traveled back in time to witness an important battle (history), helped calculate the cannon ball trajectories for one of the sides (physics), stopped to study a wounded soldier and the effects of the wounds on his body (human biology), and then finally helped the army band play the victory song (music)... or something along those lines. Ideally, there'd be little difference between taking a school lesson and playing a good computer game.

Comment author: cicatriz 17 August 2012 04:19:03PM 4 points [-]

There is an academic field around this called intelligent tutoring systems (http://en.wikipedia.org/wiki/Intelligent_tutoring_system). The biggest company with an ITS, as far as I know, is Carnegie Learning, which provides entire K-12 curricula around it: books, teacher training, software. CL has had mixed evaluations in the past, but I think a fair conclusion at this point is that an ITS significantly improves learning outcomes when implemented in an environment where the software can be used as intended (follow the training, spend enough time, etc.).

As far as I know, there isn't anything quite like this in a widely deployed online system with community discussion as you suggest. Grockit (http://grockit.com) is a social test prep site that is familiar with the ITS community and uses some of its principles. Khan Academy is continuing to improve, but I can't say whether they will reach the state of the art as far as intelligent tutors go. I'd say there's definitely an opportunity for more ITS in online learning now, but it isn't easy to build.

The Wikipedia article is OK. One example of a recent paper is http://users.wpi.edu/~zpardos/papers/zpardos-its-final22.pdf which also shows some of the human work that goes into modeling the knowledge domain for an ITS.

Comment author: jacoblyles 17 August 2012 12:14:58AM *  23 points [-]

Tagline: Coursera for high school

Mission: The economist Eric Hanushek has shown that if the USA could replace the worst 7% of K-12 teachers with merely average teachers, it would have the best education system in the world. What if we instead replaced the bottom 90% of teachers in every country with great instruction?

The Company: Online learning startups like Coursera and Udacity are in the process of showing how technology can scale great teaching to large numbers of university students (I've written about the mechanics of this elsewhere). Let's bring a similar model to high school.

This Company starts in the United States and ties into existing home school regulations with a self-driven web learning program that requires minimal parental involvement and results in a high school degree. It cloaks itself as merely a tool to aid homeschool parents, similar to existing mail-order tutoring materials, hiding its radical mission to end high school as we know it.

The result is high-quality education for every student. In addition to the high quality, it gives the student schedule flexibility to pursue other interests outside of high school. Many exceptional young people I know dodge the traditional schools early in life. This product gives everyone that opportunity.

By lowering the cost of homeschooling, this product will enlarge the home school market and threaten traditional educrats while producing more exceptional minds.

With direct access to millions of students, the website will be able to monetize through one-on-one tutoring markets, college prep services, and other means.

Course material can be bootstrapped by constructing a curriculum out of free videos provided through sources like the Khan Academy. The value-add of the Company will be to tailor the curriculum to the home-school requirements of the particular state of the student.

My background: I cofounded a company that's had reasonable success. I'm not much of a Less Wrong fan - I find the community to be an intellectual monoculture, dogmatic, and full of blind spots to flaws in the philosophy it preaches. BUT this is an idea that needs to happen, as it will provide much value to the world. Contact me at firstname lastname gmail if you have lots of money or can hack. Or hell, steal the idea and do it yourself. Just make it happen.

Comment author: cicatriz 17 August 2012 12:57:54AM 4 points [-]

Your approach -- targeting home-schoolers who are "nonconsumers" of public K-12 education -- is exactly the approach advocated by disruption theory and specifically the book Disrupting Class. Treating public education as analogous to established leaders in other industries: disruption always comes from the outside, because the leaders aren't structurally able to do anything other than serve their existing consumers with marginal improvements.

ArtofProblemSolving.com is one successful example that's targeted gifted home-schoolers (and others looking for extracurricular learning) in math. I'm sure there are others. EdSurge.com is a good place to look for existing services, which you can sort by criteria including common core/state-standards aligned (you do have to register for free to get the list of resources). I also have thought about services that build on top of Khan Academy, but I wouldn't underestimate their ability to improve in that area. They just released a fantastic computer science platform. But they are a non-profit, so their growth depends, I suppose, on Bill Gates' mood and other philanthropy. To get to full disruption, it might take a for-profit with, as you suggest, monetization through tutoring and other valuable services.

Comment author: Peter_de_Blanc 14 August 2012 08:47:25AM 6 points [-]

I'm really excited about software similar to Anki, but with task-specialized user interfaces (vs. self-graded tasks) and better task-selection models (incorporating something like item response theory), ideally to be used for both training and credentialing.
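For readers unfamiliar with item response theory, the simplest model (the one-parameter logistic, or Rasch, model) is compact. The item-selection rule below is an illustrative heuristic only -- picking the item nearest 50% success probability, where a response is most informative -- not a description of any particular tool's scheduler:

```python
import math

def p_correct(theta, b):
    """Rasch model: P(correct | student ability theta, item difficulty b)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def most_informative(theta, item_difficulties):
    """Pick the item whose predicted success probability is closest to 0.5."""
    return min(item_difficulties,
               key=lambda b: abs(p_correct(theta, b) - 0.5))
```

With an ability estimate updated after each response, this gives a crude adaptive selector: items far too easy or far too hard are skipped in favor of ones near the student's current level.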

Comment author: cicatriz 17 August 2012 12:36:46AM 2 points [-]

I've explored using spaced repetition in various web-based learning interfaces, which are described at http://cicatriz.github.com. I'd love to talk more with anyone who's interested. Based on my experiences, I have reservations about when and how exactly spaced repetition should be used, and I don't believe there's a general solution using current techniques to quickly go from content to SRS cards. But with a number of dedicated individuals working on different domains, there's certainly potential for better learning. I've been working on writing up a series of articles about this. Again, contact me if you want to be notified when that is released.
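For context on the scheduling side, here is a rough sketch of the classic SM-2 algorithm, the ancestor of Anki's scheduler. The constants are the published SM-2 defaults; real tools add many refinements, so treat this as a simplification:

```python
def sm2_review(interval, ease, repetitions, grade):
    """One SM-2 review step.

    grade: 0-5 self-assessed recall quality.
    Returns (next_interval_days, ease_factor, repetitions).
    """
    if grade < 3:
        # Failed recall: reset the streak and review again tomorrow.
        return 1, ease, 0
    if repetitions == 0:
        interval = 1
    elif repetitions == 1:
        interval = 6
    else:
        # Intervals grow geometrically by the ease factor.
        interval = round(interval * ease)
    # SM-2 ease adjustment, floored at 1.3 so intervals keep growing.
    ease = max(1.3, ease + 0.1 - (5 - grade) * (0.08 + (5 - grade) * 0.02))
    return interval, ease, repetitions + 1

# Three perfect recalls starting from the defaults:
i, e, r = 0, 2.5, 0
for g in (5, 5, 5):
    i, e, r = sm2_review(i, e, r, g)
```

The hard part, as noted above, isn't this arithmetic but deciding what becomes a card in the first place and how review interacts with richer learning interfaces.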
