paulfchristiano comments on AlphaGo versus Lee Sedol - Less Wrong

Post author: gjm 09 March 2016 12:22PM


Comment author: Wei_Dai 13 March 2016 12:10:04AM 6 points

> The trainers are responsible for getting M to do what the trainers want, and the user trusts the trainers to do what the user wants.

In that case, there would be severe principal-agent problems, given the disparity in power/intelligence between the trainer/AI systems and the users. If I were someone who couldn't directly control an AI under your scheme, I'd be very concerned about getting unfavorable trades, having my property expropriated outright by individual AIs or AI conspiracies, or simply being ignored and left behind in the race to capture the cosmic commons. I would be strongly tempted to try another AI design that does purport to have the AI serve my interests directly, even if that scheme is not as "safe".

> If I imagine an employee who sucks at philosophy but thinks 100x faster than me, I don't feel like they are going to fail to understand how to defer to me on philosophical questions.

If an employee sucks at philosophy, how would he even recognize philosophical problems as problems on which he needs to consult you? Most people have little idea that they should feel confused and uncertain about things like epistemology, decision theory, and ethics. I suppose it might be relatively easy to teach an AI to recognize the specific problems that we currently consider philosophical, but what about new problems that we don't yet recognize as problems at all?

Aside from that, a bigger concern for me is that if I were supervising your AI, I would be constantly bombarded with philosophical questions that I'd have to answer under time pressure, afraid that one wrong move would cause me to lose control or lock in some wrong idea.

Consider this scenario. Your AI prompts you for guidance because it has received a message from a trading partner proposing to merge your respective AI systems and share resources for greater efficiency and economy of scale. The proposal contains a new AI design and control scheme, along with arguments that the new design is safer, more efficient, and divides control of the joint AI fairly between the human owners according to your current bargaining power. The message also claims that every second you take to consider the issue carries large costs to you, because your AI is falling behind the state of the art in both technology and scale and becoming uncompetitive, so your bargaining power for joining the merger is dropping (slowly in the AI's time-frame, but quickly in yours). Your AI says it can't find any obvious flaws in the proposal, but it's not sure that you'd consider the proposal truly fair under reflective equilibrium, or that the new design would preserve your real values in the long run. There are several arguments in the proposal that it doesn't know how to evaluate, hence the request for guidance. But it also reminds you not to read those arguments directly, since they were written by a superintelligent AI and you risk getting mind-hacked if you do.

What do you do? This story ignores the recursive structure in ALBA. I think that would only make the problem even harder, but I could be wrong. If you don't think it would go like this, let me know how you think this kind of scenario would go.

In terms of your #1, I would divide the decisions requiring philosophical understanding into two main categories. One is decisions involved in designing/improving AI systems, as in the scenario above. The other, which I talked about in an earlier comment, is decisions that lead to ethical disasters caused directly by people who are not uncertain, but just wrong. You didn't reply to that comment, so I'm not sure why you're unconcerned about this category either.

Comment author: paulfchristiano 19 March 2016 09:42:30PM 3 points

A general note: I'm not really taking a stand on the importance of a singleton, and I'm open to the possibility that the only way to achieve a good outcome, even in the medium term, is to have very good coordination.

A would-be singleton will also need to solve the AI control problem, and I am just as happy to help with that problem as with the version of the AI control problem faced by a whole economy of actors each using their own AI systems.

The main way in which this affects my work is that I don't want to count on the formation of a singleton to solve the control problem itself.

You could try to work on AI in a way that helps facilitate the formation of a singleton. I don't think that is really helpful, and in any case it again seems like a separate problem from AI control. (I also don't think that e.g. MIRI is doing this with their current research, although they are open to solving AI control in a way that only works if there is a singleton.)