Comment author: gwern 20 June 2012 03:10:52PM 16 points [-]

I still use it for some things, but there's not really a whole lot to discuss; the price is stable, Silk Road is still running, people are still taking it for donations. Infrastructure isn't always exciting.

Comment author: quartz 20 June 2012 10:05:17PM 2 points [-]

One actionable topic that could be discussed: does the current price reflect what we expect Bitcoin's value to be?

Comment author: Wei_Dai 13 April 2012 01:21:10AM *  3 points [-]

This probably deserves a discussion post of its own, but here are some ideas that I came up with. We can:

  • persuade more AI researchers to lend credibility to the argument against AI progress, and to support whatever projects we decide upon to try to achieve a positive Singularity
  • convince the most promising AI researchers (especially promising young researchers) to seek different careers
  • hire the most promising AI researchers to do research in secret
  • use the argument on funding agencies and policy makers
  • publicize the argument enough so that the most promising researchers don't go into AI in the first place
Comment author: quartz 13 April 2012 10:13:43PM 1 point [-]

These are interesting suggestions, but they don't exactly address the problem I was getting at: leaving a line of retreat for the typical AI researcher who comes to believe that his work likely contributes to harm.

My anecdotal impression is that the number of younger researchers who take arguments for AI risk seriously has grown substantially in recent years, but - apart from spreading the arguments and the option of career change - it is not clear how this knowledge should affect their actions.

If the risk of indifferent AI is to be averted, I expect that a gradual shift in what is considered important work will be necessary within the AI community. The most viable path I see toward such a shift involves giving individual researchers a way to express their changed beliefs through their work - one that makes use of their existing skillset and doesn't kill their careers.

Comment author: quartz 13 April 2012 02:49:13AM *  3 points [-]

You could take a look at the 15 million statements in the database of the Never-Ending Language Learning project. The subset of beliefs for which human-supplied feedback exists, together with those assigned high-confidence truth values, may be appropriate for your purpose.
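A minimal sketch of the kind of filtering described above. NELL's knowledge-base dumps are tab-separated; the column names used here (Entity, Relation, Value, Probability) are assumptions for illustration, not a guarantee about any particular release, so check them against the actual dump header before use:

```python
import csv
import io

# Tiny inline stand-in for a NELL TSV dump; a real run would open the
# downloaded dump file instead. Column names are assumed, not verified.
SAMPLE_TSV = """Entity\tRelation\tValue\tProbability
concept:city:paris\tgeneralizations\tconcept:city\t0.99
concept:fruit:apple\tgeneralizations\tconcept:company\t0.45
concept:bird:sparrow\tgeneralizations\tconcept:bird\t0.97
"""

def high_confidence_beliefs(tsv_text, threshold=0.9):
    """Return only the beliefs whose asserted probability meets the cutoff."""
    reader = csv.DictReader(io.StringIO(tsv_text), delimiter="\t")
    return [row for row in reader if float(row["Probability"]) >= threshold]

beliefs = high_confidence_beliefs(SAMPLE_TSV)
print(len(beliefs))  # 2 of the 3 sample beliefs clear the 0.9 cutoff
```

The same loop would also let you keep only rows carrying a human-feedback marker, if the dump exposes one as a column.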

Comment author: siodine 12 April 2012 08:48:19PM *  7 points [-]
Comment author: quartz 12 April 2012 08:55:30PM *  4 points [-]

Nice! This is exactly the kind of evidence I'm looking for. The papers cited in the intro also look highly relevant (James and Rogers, 2005; Sigmon et al, 2009).

Comment author: quartz 12 April 2012 08:39:15PM *  8 points [-]

But the real problem we face is how to build or become a superintelligence that shares our values, and given that this seems very difficult, any progress that doesn't contribute to the solution but brings forward the date by which we must solve it (or be stuck with something very suboptimal even if it doesn't kill us), is bad.

Assume you manage to communicate this idea to the typical AI researcher. What do you expect him to do next? It's absurd to think that the typical researcher will quit his field and work on strategies for mitigating intelligence explosion or on foundations of value. You might be able to convince him to work on one topic within AI instead of another. However, while some topics seem more likely to advance AI capabilities than others, this is difficult to tell in advance. More perniciously, what the field rewards are demonstrations of impressive capabilities. Researchers who avoid directions that lead to such demos will end up with less prestigious jobs, i.e., jobs where they are less able to influence the top students of the next generation of researchers. This isn't what the typical AI researcher wants either. So, what's he to do?

Comment author: quartz 12 April 2012 08:17:37PM 2 points [-]

Am I the only one who finds it astonishing that there isn't widely known evidence that a psychoactive substance used on a daily basis by about 90% of North American adults (and probably by a majority of LWers) is beneficial if used in this way? What explains this apparent lack of interest? Discounting (caffeine clearly has short-term benefits) and the belief that, even in the unlikely case that caffeine harms productivity in the long run, the harm is likely to be small?

Comment author: quartz 08 January 2012 04:10:16PM *  6 points [-]

my firmest belief about the timeline for human-level AI is that we can't estimate it usefully. partly this is because i don't think "human level AI" will prove to be a single thing (or event) that we can point to and say "aha there it is!". instead i think there will be a series of human level abilities that are achieved.

This sounds right. SIAI communications could probably be improved by acknowledging the incremental nature of AI development more explicitly. Have they addressed how this affects safety concerns?

Comment author: quartz 22 November 2011 09:19:31AM 2 points [-]

Thanks! More like this, please.

Who are you writing for? If you skipped ahead to the metaethics main sequence and just pointed to the literature for cogsci background, do you expect that they would not understand you?

Comment author: lukeprog 14 November 2011 09:26:16AM 2 points [-]

In 15 years, I want to see a textbook on the mathematics of FAI that I can put on my bookshelf next to Pearl's Causality, Sipser's Introduction to the Theory of Computation and MacKay's Information Theory, Inference, and Learning Algorithms.

My day brightened imagining that!

Thanks for clarifying.

Comment author: quartz 16 November 2011 08:26:45PM 0 points [-]

Addendum: Since the people who upvoted the question were in the same position as you with respect to its interpretation, it would be good to address not only my intended meaning but all major modes of interpretation.

Comment author: lukeprog 13 November 2011 05:19:34PM 2 points [-]

How are you going to address the perceived and actual lack of rigor associated with SIAI?

A clarifying question. By 'rigor', do you mean the kind of rigor that is required to publish in journals like Risk Analysis or Minds and Machines, or do you mean something else by 'rigor'?

Comment author: quartz 14 November 2011 09:23:34AM 7 points [-]

A clarifying question. By 'rigor', do you mean the kind of rigor that is required to publish in journals like Risk Analysis or Minds and Machines, or do you mean something else by 'rigor'?

I mean the kind of precise, mathematical analysis that would be required to publish at conferences like NIPS or in the Journal of Philosophical Logic. This entails development of technical results that are sufficiently clear and modular that other researchers can use them in their own work. In 15 years, I want to see a textbook on the mathematics of FAI that I can put on my bookshelf next to Pearl's Causality, Sipser's Introduction to the Theory of Computation and MacKay's Information Theory, Inference, and Learning Algorithms. This is not going to happen if research of sufficient quality doesn't start soon.
