TheOtherDave comments on How about testing our ideas? - Less Wrong

31 [deleted] 14 September 2012 10:28AM


Comment author: V_V 14 September 2012 06:17:37PM 0 points

You can call it that. I call it refining the art of human rationality. I don't think building new knowledge is something that magically only happens in a box designated Academia.

No, but Academia is optimized for that and has hundreds of years of demonstrated effectiveness and accumulated experience.

Is it perfect? No.

Can you build something better from the ground up? I don't think so, at least not at a cost smaller than the cost of improving it. Certainly the present LW doesn't look remotely like a superior alternative.

Remember SI did years of research basically outside it,

And what did they accomplish? Pretty much nothing, AFAIK.

they only started publishing so they could attract more talent and as a general PR move, not because it was the most efficient way to do it.

If that's their main reason to publish then it seems that they are doing the right thing for the wrong reason.

Are they so deluded as to seriously believe they will be the first to build an AGI? I hope not. Therefore, if they want a chance to influence the development of AI projects, the best they can do is disseminate the results of their research in a way that maximizes the number of AI researchers who will notice it and take it seriously. And that way is not Less Wrong, or a Harry Potter fanfiction, or meetups, or minicamps, or all the other stuff they do. It's academic publishing. Academic publishing should be SI's raison d'être, not a PR move.

So far, AFAIK (I'm not an SI historian, so I might be mistaken), they have published a few papers on philosophy, on the same sort of topics the FHI people work on (some of these papers are co-authored with them, IIUC). I didn't read all of them, but my impression is that they didn't contribute particularly novel insights.

We have yet to see what results their current research on program equilibrium will yield.

You don't seem to have read the related articles I cited. I strongly suggest you do.

I've skimmed them. The "Science: Do It Yourself" article uses an example that is flawed by a glaring methodological error from the start. That speaks volumes about why scientific research, like many other activities, is something best left to professionals.

Comment author: TheOtherDave 14 September 2012 06:38:48PM 3 points

Are they so deluded as to seriously believe they will be the first to build an AGI? I hope not.

What I've inferred from statements of key SI folk (most especially Luke) is that they don't think this likely, but they think the possible futures in which it happens are vastly superior to the ones in which it doesn't, so they're working towards it anyway.

the best they can do is to disseminate the results of their research in a way that will maximize the number of AI researchers who will notice it and take it seriously

Yeah, this seems pretty plausible to me as well. (Though also pretty unlikely.)

FWIW, my understanding of SI's original chosen strategy for making AI researchers take LW's ideas about Friendliness seriously was to publicize the Sequences, which would improve the general rationality of people everywhere (aka "raise the sanity waterline"), which would improve the rationality of AI researchers (and those who fund them, etc), which would increase the chances of AI researchers embracing the importance of Friendliness, which would increase the chances of FAI being developed before UFAI, which would save the world.

From what I can tell, SI has since then moved on to other strategies for saving the world, like publishing the Sequences in book form, publishing popular fiction, holding minicamps, etc., but all built on the premise that "raising the sanity waterline" among the most easily reached people is a more viable approach than attempting to reach specific audiences like professional researchers.

Comment author: V_V 16 September 2012 12:18:56PM 2 points

That seems to be an inefficient approach.

Even if you accept the premise that you can "teach" rationality to AI researchers capable of building an AGI (who are probably not idiots, but may indeed be affected by biases), doing so is still an extremely unfocused way to accomplish the task of advancing the state of the art in machine ethics.

If you want to advance the state of the art on machine ethics, then the most efficient way of doing it is to do actual research on machine ethics. If AI researchers don't take machine ethics as seriously as you think they should, then the most efficient way to convince them is to put forward your arguments in forms and media accessible and salient to them.

Once you go for peer review, you may of course receive negative feedback. That might mean one of two things: that your core claims are wrong, in which case you should recognize that, stop wasting your efforts, and move on to something else; or that your arguments are uncompelling or unclear, in which case you should improve them, since it is your responsibility to make yourself understood.

Comment author: bogus 14 September 2012 07:04:25PM 0 points

My admittedly incomplete understanding is that "raising the sanity waterline" activities have now been spun off to the Center for Applied Rationality, which is either planning to incorporate as a non-profit or already incorporated. This would then leave SIAI as focusing on the strictly AGI- and Friendliness-related stuff.

Comment author: TheOtherDave 14 September 2012 07:07:24PM 1 point

Ah. I'm aware of the SI/CFAR split, but haven't paid much attention to what activities are owned by which entity, or how separate their staffs and resources actually are. E.g., I haven't a clue which entity sponsors LW, if either, or even whether it's possible to distinguish one condition from the other.

Comment author: V_V 16 September 2012 10:39:22AM 0 points

From the information available on their websites, it seems that LW is still operated by SI.

I suggest splitting it off and operating it as a charity separate from both SI and CFAR.