andreas comments on (One reason) why capitalism is much maligned - Less Wrong

Post author: multifoliaterose 19 July 2010 03:48AM


Comment author: multifoliaterose 19 July 2010 05:14:55PM  6 points

Not all publicity is good publicity. The majority of people I've met outside of Less Wrong who have heard of SIAI think that the organization is full of crazy people. A lot of these people are smart; some have Ph.D.s in the sciences from top-tier universities.

I think that SIAI should be putting much more emphasis on PR, networking within academia, etc. This is in consonance with a comment by Holden Karnofsky here:

> To the extent that your activities will require “beating” other organizations (in advocacy, in speed of innovation, etc.), what are the skills and backgrounds of your staffers that are relevant to their ability to do this?

I'm worried that SIAI's poor ability to make a good public impression may poison the cause of existential risk in the mind of the public and dissuade good researchers from studying existential risk. There are some very smart people whom it would be good to have working on Friendly AI who, despite their capabilities, care a lot about their status in broader society. I think it's very important that an organization working toward Friendly AI at least be well regarded by a sizable minority of people in the scientific community.

Comment author: andreas 19 July 2010 06:50:58PM 13 points

In my experience, academics often cannot distinguish between SIAI and Kurzweil-related activities such as Singularity University. With its 25k tuition for two months, SU is viewed as some sort of scam, and Kurzweilian ideas of exponential change are seen as naive. People hear about Kurzweil, SU, the Singularity Summit, and the Singularity Institute, and assume that the latter is behind all those crazy singularity things.

We need to make it easier to distinguish the preference and decision theory research program, as an attempt to solve a hard problem, from the larger cluster of singularity ideas, which, even in the intelligence-explosion variety, are not essential.

Comment author: Utilitarian 25 July 2010 04:46:30AM 6 points

Agreed. I'm often somewhat embarrassed to mention SIAI's full name, or the Singularity Summit, because of the term "singularity" which, in many people's minds -- to some extent including my own -- is a red flag for "crazy".

Honestly, even the "Artificial Intelligence" part of the name can misrepresent what SIAI is about. I would describe the organization as just "a philosophy institute researching hugely important fundamental questions."

Comment author: ata 25 July 2010 07:10:44AM  3 points

> Agreed. I'm often somewhat embarrassed to mention SIAI's full name, or the Singularity Summit, because of the term "singularity" which, in many people's minds -- to some extent including my own -- is a red flag for "crazy".

Agreed; I've had similar thoughts. Given recent popular coverage of the various things called "the Singularity", I think we need to accept that the term is pretty much going to become a connotational dumping ground for every cool-sounding futuristic prediction anyone can think of, centered primarily on Kurzweil's predictions.

> Honestly, even the "Artificial Intelligence" part of the name can misrepresent what SIAI is about. I would describe the organization as just "a philosophy institute researching hugely important fundamental questions."

I disagree somewhat there. Its ultimate goal is still to create a Friendly AI, and all of its other activities (general existential risk reduction and forecasting, Less Wrong, the Singularity Summit, etc.) are, at least in principle, carried out in service of that goal. Its day-to-day activities may not look like what people imagine when they think of an AI research institute, but that's because FAI is a very difficult problem with many prerequisites that have to be solved first, and I think it's fair to describe SIAI as still being fundamentally about FAI (at least to anyone who's adequately prepared to think about FAI).

Describing it as "a philosophy institute researching hugely important fundamental questions" may give people the wrong impression if it's not quickly followed by a more specific explanation. When people think of "philosophy" + "hugely important fundamental questions", their minds will probably leap to questions which are 1) easily solved by rationalists, and/or 2) actually fairly silly and not hugely important at all. ("Philosophy" is another term I'm inclined to avoid these days.) When I've had to describe SIAI in one phrase to people who have never heard of it, I've been calling it an "artificial intelligence think-tank". Meanwhile, Michael Vassar's Twitter describes SIAI as a "decision theory think-tank". That's probably a good description if you want to address the current focus of their research; it may be especially good in academic contexts, where "decision theory" already refers to an interesting established field that's relevant to AI but doesn't share with "artificial intelligence" the connotations of missed goals, science fiction geekery, anthropomorphism, etc.