
orthonormal comments on Q&A with new Executive Director of Singularity Institute

Post author: lukeprog, 07 November 2011 04:58AM (26 points)


Comments (177)


Comment author: orthonormal 07 November 2011 05:36:18AM 29 points

To the extent that SIAI intends to work directly on FAI, potential donors (and many others) need to evaluate not only whether the organization is competent, but whether it is completely dedicated to its explicitly altruistic goals.

What is SIAI doing to ensure that it is transparently trustworthy for the task it proposes?

(I'm more interested in structural initiatives than in arguments that it'd be silly to be selfish about Singularity-sized projects; those arguments are contingent on SIAI's presuppositions, and the kind of trustworthiness I'm asking about encompasses the veracity of SIAI on these assumptions.)

Comment author: gwern 07 November 2011 08:00:22PM 21 points

For example, have we heard anything about that big embezzlement?

Comment author: lukeprog 09 November 2011 08:24:51PM 10 points

Some of the money has been recovered. The court date that concerns most of the money is currently scheduled for January 2012.

Comment author: Giles 03 March 2012 04:55:50PM 3 points

January 2012 has passed; any update?

Comment author: lukeprog 03 March 2012 08:17:11PM 5 points

As I understand it, we won a stipulated judgment for repayment of $40k+ of it. Another court date has been scheduled (I think for late March?) to give us a chance to argue for the rest of what we're owed.

Comment author: Gastogh 10 May 2012 09:16:43AM 3 points

Late March has passed. How did things pan out?

Comment author: lukeprog 10 May 2012 03:57:50PM 5 points

We won some more repayment in another stipulated judgment and there's another court date this month.

Comment author: VNKKET 08 November 2011 04:39:20AM 5 points

Good question. And for people who missed it, this refers to money that was reported stolen on SI's tax documents a few years ago. (relevant thread)

Comment author: lukeprog 09 November 2011 08:25:35PM 1 point

"I'm more interested in structural initiatives"

Can you give any examples of what you're thinking of, so I can be clearer about what you have in mind when you ask your question?

Comment author: orthonormal 10 November 2011 12:25:13AM 8 points

I'm actually not coming up with any; it seems to be a tough problem. Here's an elaborate hypothetical that I'm not particularly worried about, but which serves as a case study:

Suppose that Robin Hanson is right about the Singularity (no discontinuity, no singleton, just rapid economic doubling until technology reaches physical limits, at which point it's a hardscrapple expansion through the future lightcone for those rich enough to afford descendants), and that furthermore, EY knows it and has been trying to deceive the rest of us in order to fund an early AI, and thus grab a share of the Singularity pie for himself and a few chosen friends.

The things that make this seem implausible right now are that the SIAI people I know don't seem to be the sort of people who are into long cons, and also that their object-level arguments about the Singularity make sense to me. But, uh, I'm not sure that I can stake the future on my ability to play a game of Mafia. So I'm wondering if SIAI has come up with any ideas (stronger than a mission statement) to make credible their dedication to a fair Singularity.

Comment author: lukeprog 10 November 2011 02:25:31AM 4 points

Right.

I haven't devoted much time to this because I don't think anybody who has interacted with us in person has ever thought this was likely, and I'm not sure anyone even on the internet has ever made the accusation - though of course some have raised the vague possibility, as you have. In other words, I doubt this worry is anyone's true rejection, whereas I suspect the lack of peer-reviewed papers from SIAI is many people's true rejection.

Comment author: orthonormal 10 November 2011 06:08:50PM 6 points

Skepticism about SIAI's competence screens off skepticism about SIAI's intentions, so of course that's not the true rejection for the vast majority of people. But it genuinely troubles me if nobody's thought of the latter question at all, beyond "Trust us, we have no incentive to implement anything but CEV".

If I told you that a large government or corporation was working hard on AGI plus Friendliness content (and that they were avoiding the obvious traps), even if they claimed altruistic goals, wouldn't you worry a bit about their real plan? What features would make you more or less worried?

Comment author: Vladimir_Nesov 10 November 2011 09:43:49PM 1 point

I think the key point is that we're not there yet. Whatever theoretical tools we shape now are either generally useful or generally useless, irrespective of considerations of motive; the currently relevant question is (potential) competence. Only at some point in the (moderately distant) future, conditional on current and future work bearing fruit, might motive become relevant.

Comment author: hairyfigment 23 November 2011 09:26:15PM 0 points

"What features would make you more or less worried?"

I'd worry about selfish institutional behavior, or explicit identification of the programmers' goals with the nation/corporation's selfish interests. Also, I guess, belief in the moral infallibility of some guru.

Otherwise I wouldn't worry about motives, not unless I thought one programmer could feasibly deceive the others and tell the AI to look only at this person's goals. Well, I have to qualify that -- if everyone in the relevant subculture agreed on moral issues and we never saw any public disagreement on what the future of humanity should look like, then maybe I'd worry. That might give each of them a greater expectation of getting what they want if they go with a more limited goal than CEV.

Comment author: Giles 03 March 2012 05:26:58PM 0 points

An "outside view" might be to put the SI in the reference class of "groups who are trying to create a utopia" and observe that previous such efforts that have managed to gain momentum have tended to make the world worse.

I think the reality is more complicated than that, but that might be part of what motivates this kind of question.

I think the biggest specific trust-related issue I have is with CEV - getting the utility function generation process right is really important, and in an optimal world I'd expect to see CEV subjected to a process of continual improvement and informed discussion. I haven't seen that, but it's hard to tell whether SI is being overly protective of its CEV document or whether it's just really hard to get the right people talking about it in the right way.

Comment author: wedrifid 10 November 2011 09:32:18AM 0 points

Am I to take this as a general answer to the overall question of trustworthiness or is this intended just as an answer to the specific example?

Comment author: wedrifid 10 November 2011 09:25:55AM 2 points

"Suppose that Robin Hanson is right about the Singularity (no discontinuity, no singleton, just rapid economic doubling until technology reaches physical limits, at which point it's a hardscrapple expansion through the future lightcone for those rich enough to afford descendants), and that furthermore, EY knows it and has been trying to deceive the rest of us in order to fund an early AI, and thus grab a share of the Singularity pie for himself and a few chosen friends."

It would be clearer to say that Robin is right about the future, that there will not be a singularity. A hardscrapple race through the frontier basically just isn't one.

Comment author: jimrandomh 31 December 2011 08:55:15AM 0 points

If you want to hypothesize that SingInst has secrets plus an evil plan, the secrets and plan have to combine in such a way that it's a good plan.