Comment author: gwern 20 June 2012 03:10:52PM 16 points [-]

I still use it for some things, but there's not really a whole lot to discuss; the price is stable, Silk Road is still running, people are still taking it for donations. Infrastructure isn't always exciting.

Comment author: quartz 20 June 2012 10:05:17PM 2 points [-]

One actionable topic that could be discussed: does the current price reflect what we expect Bitcoin's value to be?

Comment author: Wei_Dai 13 April 2012 01:21:10AM *  3 points [-]

This probably deserves a discussion post of its own, but here are some ideas that I came up with. We can:

  • persuade more AI researchers to lend credibility to the argument against AI progress, and to support whatever projects we decide upon to try to achieve a positive Singularity
  • convince the most promising AI researchers (especially promising young researchers) to seek different careers
  • hire the most promising AI researchers to do research in secret
  • use the argument on funding agencies and policy makers
  • publicize the argument enough so that the most promising researchers don't go into AI in the first place
Comment author: quartz 13 April 2012 10:13:43PM 1 point [-]

These are interesting suggestions, but they don't exactly address the problem I was getting at: leaving a line of retreat for the typical AI researcher who comes to believe that his work likely contributes to harm.

My anecdotal impression is that the number of younger researchers who take arguments for AI risk seriously has grown substantially in recent years, but - apart from spreading the arguments and the option of career change - it is not clear how this knowledge should affect their actions.

If the risk of indifferent AI is to be averted, I expect that a gradual shift in what is considered important work is necessary in the minds of the AI community. The most viable path I see towards such a shift involves giving individual researchers an option to express their change in beliefs in their work - in a way that makes use of their existing skillset and doesn't kill their careers.

Comment author: quartz 13 April 2012 02:49:13AM *  3 points [-]

You could take a look at the 15 million statements in the database of the Never-Ending Language Learning project. The subset of beliefs for which human-supplied feedback exists, and those assigned high-confidence truth values, may be appropriate for your purpose.
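If it helps, the confidence-filtering step might look something like the sketch below. This assumes a simplified tab-separated export with one belief per line as (entity, relation, value, confidence); the actual NELL dumps have more columns and different headers, and the sample triples and 0.9 threshold here are purely illustrative.

```python
import csv
import io

# Hypothetical sample in a simplified tab-separated layout:
# entity, relation, value, confidence. Real NELL exports differ.
SAMPLE = """\
concept:city:paris\tconcept:citylocatedincountry\tconcept:country:france\t0.99
concept:bird:penguin\tconcept:animalistypeofanimal\tconcept:animal:fish\t0.52
concept:company:apple\tconcept:companyceo\tconcept:person:tim_cook\t0.96
"""

def high_confidence_beliefs(tsv_text, threshold=0.9):
    """Return (entity, relation, value) triples whose confidence
    score meets or exceeds the threshold."""
    reader = csv.reader(io.StringIO(tsv_text), delimiter="\t")
    return [(entity, relation, value)
            for entity, relation, value, conf in reader
            if float(conf) >= threshold]

beliefs = high_confidence_beliefs(SAMPLE)
```

Beliefs with human-supplied feedback would need a separate column or join against the feedback file, but the same filter-by-column pattern applies.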

Comment author: siodine 12 April 2012 08:48:19PM *  7 points [-]
Comment author: quartz 12 April 2012 08:55:30PM *  4 points [-]

Nice! This is exactly the kind of evidence I'm looking for. The papers cited in the intro also look highly relevant (James and Rogers, 2005; Sigmon et al, 2009).

Comment author: quartz 12 April 2012 08:39:15PM *  8 points [-]

But the real problem we face is how to build or become a superintelligence that shares our values, and given that this seems very difficult, any progress that doesn't contribute to the solution but brings forward the date by which we must solve it (or be stuck with something very suboptimal even if it doesn't kill us), is bad.

Assume you manage to communicate this idea to the typical AI researcher. What do you expect him to do next? It's absurd to think that the typical researcher will quit his field and work on strategies for mitigating intelligence explosion or on foundations of value. You might be able to convince him to work on one topic within AI rather than another. However, while some topics seem more likely to advance AI capabilities than others, this is difficult to tell in advance. More perniciously, what the field rewards are demonstrations of impressive capabilities. Researchers who avoid directions that lead to such demos will end up with less prestigious jobs, i.e., jobs where they are less able to influence the top students of the next generation of researchers. This isn't what the typical AI researcher wants either. So, what's he to do?

Comment author: quartz 12 April 2012 08:17:37PM 2 points [-]

Am I the only one who finds it astonishing that there isn't widely known evidence on whether a psychoactive substance used daily by about 90% of North American adults (and probably by a majority of LWers) is beneficial when used this way? What explains this apparent lack of interest? Is it discounting (caffeine clearly has short-term benefits), combined with the belief that even if caffeine does harm productivity in the long run, the harm is likely to be small?

How does long-term use of caffeine affect productivity?

12 quartz 11 April 2012 11:09PM

I am trying to figure out whether caffeine helps productivity in the long run. Looking back ten years from now, how much more or less productive will I have been if I drink coffee every day, or every second day?

Reviewing what has been written on the topic by our community so far:

  • Justin summarizes the effects of caffeine: impairment of long-term memory, narrowed focus, increased short-term memory and recall, increased attentional control, increased memory retention and retrieval. From this, he tentatively concludes in favor of use for tasks that benefit from these effects. However, does this conclusion still hold for regular use? We need to take into account reduced stimulation due to increased tolerance and potential impairment during withdrawal.
  • In a comment on Justin's article, simplyeric writes: "There are studies (that I read years ago, and have no link to) that show that consistency is better... that consistent low-level caffeine drinkers are more alert than their non-caffeine colleagues, but less jittery than high-caffeine people (optimum seemed to be 2-3 cups per day)." This is the kind of evidence I am interested in. Does anyone recall such studies?
  • Gwern's review of nootropics lists a number of potential negative effects, including effects on memory, performance (in high doses), sleep, and mood. In justifying its use despite these effects, he states that "[his] problems tend to be more about akrasia and energy and not getting things done, so even if a stimulant comes with a little cost to long-term memory, it's still useful for [him]". Is there conclusive evidence that in the long run, caffeine increases energy and helps with akrasia?
  • Skatche's review of psychoactive drugs presents anecdotal evidence in favor: "Taken on a fairly regular daily schedule, caffeine seems to improve my attention, motivation and energy level." To what extent do such anecdotes reflect true improvement? If regular use of caffeine were to result in decreased baseline performance and if the effect of caffeine were limited to restoring baseline, this could feel similar from the inside.
Edit: Studies pointed out by siodine suggest that caffeine has few or no beneficial long-term effects:
  • "If caffeine was consumed, the adverse effects of lowered alertness and headache were avoided, but even after 100+150 mg of caffeine their alertness was not raised above the level of alertness showed by nonconsumers of caffeine (group N) who received placebo (Figure 1, middle panel). This result is similar to that from an early study comparing responses to caffeine of coffee drinkers and abstainers (Goldstein et al, 1969), and is consistent with the claim, supported by a variety of subsequent findings, that regular caffeine consumption provides little or no net benefit for alertness or performance on tests of vigilance (James and Rogers, 2005; Sigmon et al, 2009)."
  • "The study also demonstrated robust acute effects of caffeine unconfounded by caffeine withdrawal, but no evidence for net beneficial effects of daily caffeine administration."
  • "Overall, there is little evidence of caffeine having beneficial effects on performance or mood under conditions of long-term caffeine use vs abstinence. Although modest acute effects may occur following initial use, tolerance to these effects appears to develop in the context of habitual use of the drug."
Comment author: quartz 08 January 2012 04:10:16PM *  6 points [-]

my firmest belief about the timeline for human-level AI is that we can't estimate it usefully. partly this is because i don't think "human level AI" will prove to be a single thing (or event) that we can point to and say "aha there it is!". instead i think there will be a series of human level abilities that are achieved.

This sounds right. SIAI communications could probably be improved by acknowledging the incremental nature of AI development more explicitly. Have they addressed how this affects safety concerns?

Comment author: quartz 22 November 2011 09:19:31AM 2 points [-]

Thanks! More like this, please.

Who are you writing for? If you skipped ahead to the metaethics main sequence and just pointed to the literature for cogsci background, do you expect that they would not understand you?

Comment author: lukeprog 14 November 2011 09:26:16AM 2 points [-]

In 15 years, I want to see a textbook on the mathematics of FAI that I can put on my bookshelf next to Pearl's Causality, Sipser's Introduction to the Theory of Computation and MacKay's Information Theory, Inference, and Learning Algorithms.

My day brightened imagining that!

Thanks for clarifying.

Comment author: quartz 16 November 2011 08:26:45PM 0 points [-]

Addendum: Since the people who upvoted the question were in the same position as you with respect to its interpretation, it would be good to address not only my intended meaning, but all major modes of interpretation.
