All of quartz's Comments + Replies

One actionable topic that could be discussed: does the current price reflect what we expect Bitcoin's value to be?

These are interesting suggestions, but they don't exactly address the problem I was getting at: leaving a line of retreat for the typical AI researcher who comes to believe that his work likely contributes to harm.

My anecdotal impression is that the number of younger researchers who take arguments for AI risk seriously has grown substantially in recent years, but, apart from spreading the arguments and the option of a career change, it is not clear how this knowledge should affect their actions.

If the risk of indifferent AI is to be averted, I expect t... (read more)

1Wei Dai
Ok, I had completely missed what you were getting at, and instead interpreted your comment as saying that there's not much point in coming up with better arguments, since we can't expect AI researchers to change their behaviors anyway. This seems like a hard problem, but certainly worth thinking about.

You could take a look at the 15 million statements in the database of the Never-Ending Language Learning project. The subset of beliefs for which human-supplied feedback exists, and the beliefs that have high-confidence truth values, may be appropriate for your purpose.
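If it helps, here's a minimal sketch of how one might pull that high-confidence subset out of a dump. Note that the file name and the column names used below ("Entity", "Relation", "Value", "Probability") are assumptions about the TSV export format, not NELL's documented schema; adjust them to whatever the actual dump uses:

```python
import csv

def high_confidence_beliefs(path, threshold=0.99):
    """Yield (entity, relation, value, confidence) rows above a confidence threshold.

    Assumes a tab-separated dump with a header row; the column names
    below are guesses at the export format, not a documented schema.
    """
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f, delimiter="\t")
        for row in reader:
            try:
                confidence = float(row["Probability"])
            except (KeyError, ValueError):
                continue  # skip rows without a parseable confidence value
            if confidence >= threshold:
                yield row["Entity"], row["Relation"], row["Value"], confidence

# Hypothetical usage (file name is illustrative):
# for entity, relation, value, p in high_confidence_beliefs("nell_beliefs.tsv"):
#     print(entity, relation, value, p)
```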

0keefe
Good link. WordNet is also the canonical language reference, but it probably doesn't serve the OP's purpose directly. If you start getting into these kinds of graphs, though, it's quite useful for moving around in them.
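For anyone curious, here's a minimal sketch of what "moving around" the WordNet graph looks like with NLTK (this assumes NLTK is installed and the WordNet corpus has been fetched with a one-time `nltk.download('wordnet')`; the helper function is just illustrative):

```python
from nltk.corpus import wordnet as wn

def hypernym_chain(word):
    """Walk from a word's first sense up to the root of the hypernym graph."""
    synset = wn.synsets(word)[0]  # take the first (most common) sense
    chain = [synset]
    while synset.hypernyms():
        synset = synset.hypernyms()[0]  # follow the first hypernym link
        chain.append(synset)
    return chain

# e.g. dog -> canine -> carnivore -> ... -> entity
for s in hypernym_chain("dog"):
    print(s.name(), "-", s.definition())
```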

Nice! This is exactly the kind of evidence I'm looking for. The papers cited in the intro also look highly relevant (James and Rogers, 2005; Sigmon et al., 2009).

But the real problem we face is how to build or become a superintelligence that shares our values, and given that this seems very difficult, any progress that doesn't contribute to the solution but brings forward the date by which we must solve it (or be stuck with something very suboptimal, even if it doesn't kill us) is bad.

Assume you manage to communicate this idea to the typical AI researcher. What do you expect him to do next? It's absurd to think that the typical researcher will quit his field and work on strategies for mitigating intelligence exp... (read more)

4Wei Dai
This probably deserves a discussion post of its own, but here are some ideas that I came up with. We can:

* persuade more AI researchers to lend credibility to the argument against AI progress, and to support whatever projects we decide upon to try to achieve a positive Singularity
* convince the most promising AI researchers (especially promising young researchers) to seek different careers
* hire the most promising AI researchers to do research in secret
* use the argument on funding agencies and policy makers
* publicize the argument enough that the most promising researchers don't go into AI in the first place

Am I the only one who finds it astonishing that there isn't widely known evidence about whether a psychoactive substance used on a daily basis by about 90% of North American adults (and probably by a majority of LWers) is beneficial when used in this way? What explains this apparent lack of interest? Discounting (caffeine clearly has short-term benefits), plus the belief that, even in the unlikely case that caffeine harms productivity in the long run, the harm is likely to be small?

7gwern
Caffeine is extremely cheap, addictive, has minimal effects on health (and may even be beneficial, judging from the various epidemiological associations between tea/coffee and longevity), and costs extra to remove from drinks that are popular regardless of their caffeine content (coffee and tea again). What would be the point of carefully investigating it? Suppose there were conclusive evidence on the topic: the value of this evidence to me would be roughly $0 (or, since ignorance is bliss, negative), because unless the negative effects were drastic (which current studies rule out), I would not change anything about my life. Why? I enjoy my tea too much.

My usual tea seller doesn't even carry decaffeinated oolong, much less the various varieties I might want to drink, apparently because decaffeination is so expensive it isn't worthwhile. What am I supposed to do, give up my tea, caffeine and all? Buy de-caffeinating machines (for which I couldn't find prices even by googling)? The same holds true for people who drink coffee or caffeinated soda. (As opposed to a drug like modafinil, which is expensive, so the value of a definitive answer is substantial and would justify a more extensive cost-benefit calculation.)

My firmest belief about the timeline for human-level AI is that we can't estimate it usefully. Partly this is because I don't think "human-level AI" will prove to be a single thing (or event) that we can point to and say "aha, there it is!". Instead, I think there will be a series of human-level abilities that are achieved.

This sounds right. SIAI communications could probably be improved by acknowledging the incremental nature of AI development more explicitly. Have they addressed how this affects safety concerns?

6Vladimir_Nesov
The science of how to make an AGI is developed gradually, with many prototypes along the way, but the important threshold is the point where it becomes possible to make a system that can continue open-ended development on its own (if left undisturbed and provided with a moderate amount of computing resources). Some time after that point, it may become impossible to stop such a system, and if it ends up developing a greater and greater advantage over time without holding beneficial values, humanity eventually loses. That is the point where the process starts becoming more and more dangerous on its own, until it "explodes" in our faces, like a supercritical mass of fissile material.

Thanks! More like this, please.

Who are you writing for? If you skipped ahead to the metaethics main sequence and just pointed to the literature for the cogsci background, do you expect that your readers would not understand you?

3Kaj_Sotala
Writing posts such as these makes the content much more accessible than just providing a reference. Even if Luke uploaded the papers to his website for easy access, reading a paper requires more mental energy than reading a blog post. And even people who will read the papers once convinced that they contain something worthwhile can be helped by a blog post that does the convincing.
2lukeprog
Maybe, but I'm not sure they'd believe me. In particular, there seem to be quite a few LWers who have a different picture of concepts than do the cognitive scientists who study concepts. So I need to change minds about a few things before I can proceed with metaethics. It's possible I should write my shelved post on motivational externalism, too, since there are some LWers who don't think it's true. So it might be wise for me to write a summary of the cogsci there as well.

Addendum: Since the people who upvoted the question were in the same position as you with respect to its interpretation, it would be good to address not only my intended meaning but all major modes of interpretation.

A clarifying question. By 'rigor', do you mean the kind of rigor that is required to publish in journals like Risk Analysis or Minds and Machines, or do you mean something else by 'rigor'?

I mean the kind of precise, mathematical analysis that would be required to publish at conferences like NIPS or in the Journal of Philosophical Logic. This entails the development of technical results that are sufficiently clear and modular that other researchers can use them in their own work. In 15 years, I want to see a textbook on the mathematics of FAI that I can put... (read more)

2lukeprog
My day brightened imagining that! Thanks for clarifying.

How are you going to address the perceived and actual lack of rigor associated with SIAI?

There are essentially no academics who believe that high-quality research is happening at the Singularity Institute. This is likely to pose problems for your plan to work with professors to find research candidates. It is also likely to be an indicator that little high-quality work is happening at the Institute.

In his recent Summit presentation, Eliezer states that "most things you need to know to build Friendly AI are rigorous understanding of AGI rather than Friendl... (read more)

2lukeprog
A clarifying question. By 'rigor', do you mean the kind of rigor that is required to publish in journals like Risk Analysis or Minds and Machines, or do you mean something else by 'rigor'?

There are essentially no academics who believe that high-quality research is happening at the Singularity Institute.

David Chalmers (along with various other philosophers) has said that the decision theory work is a major advance, although he is frustrated that it hasn't been communicated more actively to the academic decision theory and philosophy communities. A number of current and former academics, including David, Stephen Omohundro, James Miller (above), and Nick Bostrom, have reported that work at SIAI has been very helpful for their own research a... (read more)

There are essentially no academics who believe that high-quality research is happening at the Singularity Institute.

I believe that high-quality research is happening at the Singularity Institute.

James Miller, Associate Professor of Economics, Smith College.

PhD, University of Chicago.

7Solvent
Luke discussed this a while back here. I agree that this is an important question.
6Shmi
This is my favorite of the questions so far.