quartz

One actionable topic that could be discussed: does the current price reflect what we expect Bitcoin's value to be?

quartz

These are interesting suggestions, but they don't exactly address the problem I was getting at: leaving a line of retreat for the typical AI researcher who comes to believe that his work likely contributes to harm.

My anecdotal impression is that the number of younger researchers who take arguments for AI risk seriously has grown substantially in recent years, but, apart from spreading the arguments and the option of a career change, it is not clear how this knowledge should affect their actions.

If the risk of indifferent AI is to be averted, I expect a gradual shift in what the AI community considers important work will be necessary. The most viable path I see towards such a shift is to give individual researchers a way to express their changed beliefs through their work, one that makes use of their existing skill set and doesn't kill their careers.

quartz

You could take a look at the 15 million statements in the database of the Never-Ending Language Learning (NELL) project. The subset of beliefs that have human-supplied feedback or high-confidence truth values may be appropriate for your purpose.
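
If it's useful, here is a minimal sketch (in Python) of the kind of filtering I have in mind. The tab-separated layout, the column names, the placeholder filename, and the 0.95 threshold are all assumptions about the dump format, so check them against the release you actually download:

```python
import csv

# Minimal sketch for extracting high-confidence beliefs from a NELL
# knowledge-base dump. Assumptions (verify against the release you
# download): the file is tab-separated and its header includes
# 'Entity', 'Relation', 'Value', and 'Probability' columns.

CONFIDENCE_THRESHOLD = 0.95  # what counts as "high-confidence" is a judgment call


def high_confidence_beliefs(path, threshold=CONFIDENCE_THRESHOLD):
    """Yield (entity, relation, value, confidence) tuples above the threshold."""
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f, delimiter="\t"):
            try:
                confidence = float(row["Probability"])
                belief = (row["Entity"], row["Relation"], row["Value"], confidence)
            except (KeyError, TypeError, ValueError):
                continue  # skip rows missing a field or a parseable score
            if confidence >= threshold:
                yield belief


if __name__ == "__main__":
    # "NELL.esv.csv" is a placeholder for whichever dump file you fetch.
    for entity, relation, value, conf in high_confidence_beliefs("NELL.esv.csv"):
        print(f"{entity}\t{relation}\t{value}\t{conf:.3f}")
```

Filtering on human-supplied feedback would work the same way, once you know which column (if any) records it in the dump you use.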

quartz

Nice! This is exactly the kind of evidence I'm looking for. The papers cited in the intro also look highly relevant (James and Rogers, 2005; Sigmon et al., 2009).

quartz

> But the real problem we face is how to build or become a superintelligence that shares our values, and given that this seems very difficult, any progress that doesn't contribute to the solution but brings forward the date by which we must solve it (or be stuck with something very suboptimal even if it doesn't kill us), is bad.

Assume you manage to communicate this idea to the typical AI researcher. What do you expect him to do next? It's absurd to think that the typical researcher will quit his field and work on strategies for mitigating intelligence explosion or on foundations of value. You might be able to convince him to work on some topic within AI instead of another. However, while some topics seem more likely to advance AI capabilities than others, this is difficult to tell in advance. More perniciously, what the field rewards are demonstrations of impressive capabilities. Researchers who avoid directions that lead to such demos will end up with less prestigious jobs, i.e., jobs where they are less able to influence the top students of the next generation of researchers. This isn't what the typical AI researcher wants either. So, what's he to do?

quartz

Am I the only one who finds it astonishing that there is no widely known evidence that a psychoactive substance used daily by about 90% of North American adults (and probably by a majority of LWers) is beneficial when used that way? What explains this apparent lack of interest? Discounting (caffeine clearly has short-term benefits), and the belief that even in the unlikely case that caffeine harms productivity in the long run, the harm is likely to be small?

quartz

> my firmest belief about the timeline for human-level AI is that we can't estimate it usefully. partly this is because i don't think "human level AI" will prove to be a single thing (or event) that we can point to and say "aha there it is!". instead i think there will be a series of human level abilities that are achieved.

This sounds right. SIAI communications could probably be improved by acknowledging the incremental nature of AI development more explicitly. Have they addressed how this affects safety concerns?

quartz

Thanks! More like this, please.

Who are you writing for? If you skipped ahead to the metaethics main sequence and just pointed to the literature for cogsci background, do you expect that they would not understand you?

quartz

Addendum: Since the people who upvoted the question were in the same position as you with respect to its interpretation, it would be good to address not only my intended meaning but all major modes of interpretation.

quartz

> A clarifying question. By 'rigor', do you mean the kind of rigor that is required to publish in journals like Risk Analysis or Minds and Machines, or do you mean something else by 'rigor'?

I mean the kind of precise, mathematical analysis that would be required to publish at conferences like NIPS or in the Journal of Philosophical Logic. This entails the development of technical results that are sufficiently clear and modular that other researchers can use them in their own work. In 15 years, I want to see a textbook on the mathematics of FAI that I can put on my bookshelf next to Pearl's Causality, Sipser's Introduction to the Theory of Computation, and MacKay's Information Theory, Inference, and Learning Algorithms. That is not going to happen unless research of sufficient quality starts soon.
