Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Comment author: ike 28 January 2015 02:43:17AM 3 points [-]
Comment author: NancyLebovitz 28 January 2015 03:28:36AM 0 points [-]

Thank you. That was a lot easier to follow, and I might just make nhs.uk/news a habit.

Comment author: NancyLebovitz 27 January 2015 07:00:59PM 2 points [-]

Could someone get past the paywall for this?

It's a paper linking some commonly used prescription drugs to increased risk of dementia, and none of the popular press articles I've seen about it say how large the increased risk is.

Comment author: NancyLebovitz 24 January 2015 11:58:14AM *  4 points [-]

What makes teams more effective

It isn't the team's total IQ, and whether the members work face to face doesn't matter.

The factors identified were fairly equal contributions to discussion from the members, the members' emotional perceptiveness, and the number of women, though the effect of the number of women is partially explained by women tending to be more emotionally perceptive.

On the one hand, I've learned to be skeptical of social science research-- and I add some extra skepticism for experiments that are simulations of the real world. In this case, the teams were working on toy problems.

On the other hand, this study appeals strongly to my prejudice in favor of niceness. I found the presence of women to be a surprising factor, since I haven't noticed women as being easier to work with.

A notion: the fairly equal contribution part may be, not exactly that everyone contributes more, but that if the conversation is dominated by a few voices, those voices tend to repeat themselves a lot, and therefore contribute little compared to the time they take up.

Comment author: sediment 21 January 2015 07:16:40PM 4 points [-]

A month or two ago I started taking Modafinil occasionally; I've probably taken it fewer than a dozen times overall.

I think I'd expected it to give a kind of Ritalin-like focus and concentration, but that isn't really how it affected me. I'd describe the effects less in terms of "focus" and more in terms of a variable I term "wherewithal". I've recently started using this term in my internal monologue to describe my levels of "ability to undertake tasks". E.g., "I'm hungry, but I definitely don't have the wherewithal to cook anything complicated tonight; better just get a pizza." Or, on waking up: "Hey, my wherewithal levels are unusually high today. Better not fritter that away." (Semantically, it's a bit like the SJ-originating concept of "spoons" but without that term's baggage.) It's this quantity which I think Modafinil targets, for me: it's a sort of "wherewithal boost". I don't know how well this accords with other people's experience. I do think I've heard some people describe it as a focus/concentration booster. (Perhaps I should try another nootropic to get that effect, or perhaps my brain is just beyond help on that front.)

I did, however, start to feel it suppressed my appetite to unhealthily, even dangerously, low levels. (After taking it for two days in a row, I felt dizzy after coming down a flight of stairs.) I realize that it's possible to compensate for this by making oneself eat when one doesn't feel hungry, but somehow this doesn't seem that pleasant. For this reason, I've been taking it less recently.

I'd be curious to know whether others experience the appetite suppression to the same extent; it's not something that I hear people talk about very much. Perhaps others are just better at dealing with it than I am or don't care.

It's also hard to say how much of its positive effects were placebo, given that I took it on days when I'd already determined I wanted to "get a lot of shit done".

I might still try armodafinil at some point.

Comment author: NancyLebovitz 23 January 2015 09:40:03PM 1 point [-]

I wonder if activation energy is a good way of describing difficulties with getting started.

Discussion of different kinds of wherewithal

Comment author: FrameBenignly 13 January 2015 05:35:59AM *  1 point [-]

Things I think should be approached carefully, if not avoided altogether:

  • jokes (a lot of people may think you're serious)
  • the act of sex (and associated fetishes)
  • violence
  • politics
  • illegal activities
  • pop culture
  • art (my opinion here is weakly held; I'm guessing art discussion would be quite welcome under certain conditions, but I'm highly uncertain what those conditions are)
  • auditory, written, and performance art (in case you thought I was only referring to visual art)
  • pro-religious arguments (personal opinion: there is a lower threshold for anti-religious comments; I don't mean to imply that all or even most anti-religious comments have been poor, or that pro-religious comments have been superior overall)
  • anti-rationality arguments (same as above)
  • anything that goes on in your bathroom
Comment author: NancyLebovitz 23 January 2015 07:08:42PM 4 points [-]

A lot of this is material which is well accepted at LW.

Humor is commonly upvoted. It's possible that you have a different concept than I do, and mean something specific by jokes. There's a certain kind of hostile humor which may be more trouble than it's worth, but if so, we're going to need to be a lot clearer about what it is.

I'm not sure how much explicit talk about sex there's been here (as distinct from, say, talk about orientation or polyamory), but I don't think a discussion of how to improve sexual experiences would be out of place.

I personally wish torture wasn't so casually used in philosophical arguments-- I'm not convinced that detaching from my revulsion against torture would be an improvement in how I relate to the world. However, I don't think this is a point of view I'm likely to convince people about.

We do have a norm against recommending illegal violence, especially against named targets.

We've got a weak norm against politics. I wouldn't mind seeing strong norms of pushing people to say how they have come to their conclusions about the outcomes of various political policies and structures. I suspect a great many opinions have much weaker justifications than their holders believe.

We have a monthly media thread which includes art of many kinds, and this hasn't caused any problems that I can think of. Also, HPMOR is extremely popular at LW.

We're pretty cautious about discussing activities which are illegal in first world countries.

"Anything that goes on in your bathroom"? I believe we've had some discussions of flossing which have not been a problem, and also a mention or two of how often to bathe or whether shampoo is useful, but that isn't what you meant.

I've run across something which I believe is valuable for [bathroom activity redacted], and I've been hesitant to post about it-- I've gotten at least one weird reaction for mentioning it in person, and feel some embarrassment about bringing up the subject. This reminds me that I probably should post about it, but possibly with rot13 so that people have some warning if they'd rather not read it.

In general, you seem to want to avoid subjects which tend to lead to strong visceral reactions. I think you're enough of an outlier on this that you aren't likely to change the culture. I would be interested in any method which would make it possible to have an emotionally filtered LW-- while I think you're an outlier, you're probably also not the only person with your preferences.

Comment author: DataPacRat 19 January 2015 01:23:58AM 1 point [-]

There have been many Iterated Prisoner's Dilemma tournaments; at least a couple were done here on Less Wrong. Most such tourneys haven't included noise; to find out about the ones that did, try googling for some combination of the phrases "contrite tit for tat", "generous tit for tat", "tit for two tats", "pavlov", and "grim".

Comment author: NancyLebovitz 20 January 2015 09:16:51PM 2 points [-]

Has there been research on Prisoner's Dilemma where the players have limited amounts of memory for keeping track of previous interactions?
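One way to make the question concrete is a toy simulation. The sketch below (illustrative only; the payoff matrix is the standard one, but the strategies and parameters are invented for this example, not drawn from any of the tournaments mentioned above) contrasts ordinary tit-for-tat, which can see the whole history, with a strategy whose memory is capped at the last k opponent moves:

```python
def tit_for_tat(history):
    # Cooperate first; thereafter copy the opponent's last move.
    return history[-1] if history else "C"

def limited_memory(history, k=3):
    # Memory-limited: only the last k opponent moves are visible.
    # Defect if the opponent defected anywhere in that window.
    return "D" if "D" in history[-k:] else "C"

def play(strategy_a, strategy_b, rounds=10):
    # Each side keeps a record of the *opponent's* past moves.
    hist_a, hist_b = [], []
    score_a = score_b = 0
    payoff = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
    for _ in range(rounds):
        move_a = strategy_a(hist_a)
        move_b = strategy_b(hist_b)
        pa, pb = payoff[(move_a, move_b)]
        score_a += pa
        score_b += pb
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, limited_memory))   # mutual cooperation: (30, 30)
print(play(tit_for_tat, lambda h: "D"))    # vs. always-defect: (9, 14)
```

Noise (a small chance that a move is flipped) is what makes the memory cap interesting: with full memory, strategies like contrite tit-for-tat can recover from an accidental defection, while a short window can lock a pair into mutual retaliation.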

Comment author: advancedatheist 19 January 2015 12:21:43AM *  9 points [-]

Well, someone had to say it:


Dylan Evans Founder and CEO of Projection Point; author, Risk Intelligence

The Great AI Swindle

Smart people often manage to avoid the cognitive errors that bedevil less well-endowed minds. But there are some kinds of foolishness that seem only to afflict the very intelligent. Worrying about the dangers of unfriendly AI is a prime example. A preoccupation with the risks of superintelligent machines is the smart person’s Kool Aid.

This is not to say that superintelligent machines pose no danger to humanity. It is simply that there are many other more pressing and more probable risks facing us this century. People who worry about unfriendly AI tend to argue that the other risks are already the subject of much discussion, and that even if the probability of being wiped out by superintelligent machines is very low, it is surely wise to allocate some brainpower to preventing such an event, given the existential nature of the threat.

Not coincidentally, the problem with this argument was first identified by some of its most vocal proponents. It involves a fallacy that has been termed "Pascal’s mugging," by analogy with Pascal’s famous wager. A mugger approaches Pascal and proposes a deal: in exchange for the philosopher’s wallet, the mugger will give him back double the amount of money the following day. Pascal demurs. The mugger then offers progressively greater rewards, pointing out that for any low probability of being able to pay back a large amount of money (or pure utility) there exists a finite amount that makes it rational to take the bet—and a rational person must surely admit there is at least some small chance that such a deal is possible. Finally convinced, Pascal gives the mugger his wallet.

This thought experiment exposes a weakness in classical decision theory. If we simply calculate utilities in the classical manner, it seems there is no way round the problem; a rational Pascal must hand over his wallet. By analogy, even if there is only a small chance of unfriendly AI, or a small chance of preventing it, it can be rational to invest at least some resources in tackling this threat.
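The arithmetic behind the mugging is easy to make concrete. In this toy calculation (a sketch; all numbers are invented for illustration), the promised payoff grows until it swamps any fixed probability of payment:

```python
def expected_gain(p_payment, promised_utility, wallet_utility):
    # Expected value of handing over the wallet versus keeping it:
    # chance of being paid times the promised payoff, minus what
    # the wallet is worth for certain.
    return p_payment * promised_utility - wallet_utility

# With a one-in-a-billion chance of payment, a promise of 10^10
# utility units already beats a wallet worth 1 unit...
print(expected_gain(1e-9, 1e10, 1.0))  # 9.0, positive

# ...while a merely large promise does not.
print(expected_gain(1e-9, 1e6, 1.0))   # -0.999, negative
```

This is why the mugger only needs to keep raising the offer: for any probability greater than zero, some finite promise tips the expected value positive.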

It is easy to make the sums come out right, especially if you invent billions of imaginary future people (perhaps existing only in software—a minor detail) who live for billions of years, and are capable of far greater levels of happiness than the pathetic flesh and blood humans alive today. When such vast amounts of utility are at stake, who could begrudge spending a few million dollars to safeguard it, even when the chances of success are tiny?

Why do some otherwise very smart people fall for this sleight of hand? I think it is because it panders to their narcissism. To regard oneself as one of a select few far-sighted thinkers who might turn out to be the saviors of mankind must be very rewarding. But the argument also has a very material benefit: it provides some of those who advance it with a lucrative income stream. For in the past few years they have managed to convince some very wealthy benefactors not only that the risk of unfriendly AI is real, but also that they are the people best placed to mitigate it. The result is a clutch of new organizations that divert philanthropy away from more deserving causes. It is worth noting, for example, that Give Well—a non-profit that evaluates the cost-effectiveness of organizations that rely on donations—refuses to endorse any of these self-proclaimed guardians of the galaxy.

But whenever an argument becomes fashionable, it is always worth asking the vital question—Cui bono? Who benefits, materially speaking, from the growing credence in this line of thinking? One need not be particularly skeptical to discern the economic interests at stake. In other words, beware not so much of machines that think, but of their self-appointed masters.

Comment author: NancyLebovitz 20 January 2015 09:15:31PM 3 points [-]

Why do some otherwise very smart people fall for this sleight of hand? I think it is because it panders to their narcissism. To regard oneself as one of a select few far-sighted thinkers who might turn out to be the saviors of mankind must be very rewarding.

I think this is a bad line of thought even before we get to the hypothesis that people are pushing UFAI risks for the money.

For one thing, people just get things wrong a lot-- it doesn't take bad motivations.

For another, it's very easy to jump to the conclusion that what seems to be correct to you is so obviously correct that other people must be getting it wrong on purpose.

For a third, even if you're right that other people are engaged in motivated thinking, you might be wrong about the motivation. For example, concern about UFAI might be driven by anxiety, or by "ooh, shiny! cool idea!" more than by narcissism or money.

advancedatheist, how sure are you of your motivations?

Comment author: Username 16 January 2015 02:07:05PM 6 points [-]

While I don't think this post is completely terrible, I do think there are a few things that would make people downvote it:

  • Status violation, of the form "hi, I'm new, but I'm going to teach you something"

  • The length-to-insight ratio is way too high. The general idea of "you should be able to cite concrete examples of what you've learned today" could be expressed in a much shorter post

  • Reads like a cross between Tim Ferriss and those horrible chain emails you used to get from elderly relatives about seizing the day and making every moment count

Comment author: NancyLebovitz 16 January 2015 02:48:14PM 2 points [-]

I recommend reacting to actual upvotes and downvotes rather than hypothetical karma.

Instead of generalizing to other people from your reactions, just say what you liked/didn't like about aspects of a post.

If you're interested in writing about problems with commonly given advice, I'm interested in reading it.

Comment author: Kawoomba 15 January 2015 07:33:22AM -3 points [-]

Is it ok to call people poopy-heads, but in a mature and intelligent manner?

Signs and portents ...

Comment author: NancyLebovitz 15 January 2015 03:11:07PM 10 points [-]

I don't recommend it, but I'll have to see individual cases to know whether I'd bring down the banhammer.

As a general thing, I don't recommend using insults which might stabilize bad behavior by making it part of a person's identity. Also, I have a gut level belief that people are less likely to think clearly when they're angry.

Comment author: Username 14 January 2015 04:14:12AM *  6 points [-]

(request for guidance from software engineers)

I'm a recent grad who's spent the last six years formally studying mathematics and informally learning programming. I have experience writing code for school projects and I did a brief but very successful math-related internship that involved coding. I was a high-performing student in mathematics and I always thought I was good at coding too, especially back in high school when I did programming contests and impressive-for-my-age personal projects.

A couple months ago I decided to look for a full-time programming job and got hired fairly soon, but since then it's been a disaster. I'm at a fast-moving startup where I need to learn a whole constellation of codebase components, languages, technologies, and third-party libraries/frameworks, but I'm given no dedicated time to do so. I was immediately assigned a list of bugs to fix, and without context and understanding of the relevant background knowledge I frantically debug/google/ask for help until somehow I discover the subtle cause of the bug. I've already been pressured about my performance three times, and things aren't necessarily looking up. Other new hires from various backgrounds seem to be doing just fine. All this despite my being a good coder and a smart person even by LW standards. I did well in the job interview.

When I was studying and working in academia, I found that the best way to be productive at something (say, graph theory research) is to gradually transition from learning background to producing output. Thoroughly learning background in an area is an investment with great returns since it gives me context and a "top-down" view that allows me to quickly answer questions, place new knowledge into an already dense semantic web, and easily gauge the difficulty of a task. I could attempt to go into more details but the core is: Based on my experience, "hitting the ground running" by prioritizing quick output and only learning background knowledge as necessary per task is inefficient and I'm bad at it.

At the moment my only strong technology skills are the understanding of the syntax and semantics of a couple of programming languages.

Am I at the wrong company? Am I in the wrong profession -- should I go back to academia, spend four years getting a PhD, and work in more mathy positions? Thanks!

Comment author: NancyLebovitz 14 January 2015 04:04:41PM 5 points [-]

This is very much from the outside, but how sure are you that the other new hires are doing just fine? Could they (or some of them) be struggling like you are?
