
Open thread, Jan. 19 - Jan. 25, 2015

3 Post author: Gondolinian 19 January 2015 12:04AM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Previous Open Thread

Next Open Thread


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

Comments (302)

Comment author: [deleted] 19 January 2015 11:02:47AM 19 points [-]

There seem to be some parents (and their children) here. I myself am the father of 3yo and 1yo daughters. Do you have any suggestions for raising young rationalists, and for getting them to enjoy critical, skeptical thinking without it backfiring from being forced on them?

Comment author: Illano 20 January 2015 04:18:40PM 10 points [-]

I also am the father of 3yo and 1yo daughters. One of the things I try to do is let their critical thinking or rationality actually have a payoff in the real world. I think a lot of times critical thinking skills can be squashed by overly strict authority figures who do not take the child's reasoning into account when they make decisions. I try to give my daughters a chance to reason with me when we disagree on something, and will change my mind if they make a good point.

Another thing I try to do is intentionally inject errors into what I say sometimes, to make sure they are listening and paying attention (e.g., "This apple is purple, right?"). I think this helps keep them from just automatically agreeing with parents/teachers, and gets them thinking through on their own what makes sense. Now my oldest is quick to call me out on any errors I may make when reading her stories, or talking in general, even when I didn't intentionally inject them.

Lastly, to help them learn in general, make their learning applicable to the real world. As an example, both of my daughters, when learning to count, got stuck at around 4. To help get them over that hurdle, I started asking them questions like, "How many fruit snacks do you want?" and then giving them that number. That quickly inspired them to learn bigger numbers.

Comment author: passive_fist 20 January 2015 08:55:18PM 3 points [-]

This sounds like solid parenting; my only concern is that you might not be taking the psychology of children into account. Children sometimes really do need an authority figure to tell them what's true and what isn't; the reason for truth is far less important at that stage (and can be given later, maybe even years later).

One issue that could arise is that if you don't show authority, your child may gravitate toward other authority figures and believe them instead. A child may paradoxically put more faith in the opinions of someone who insists on them irrationally than in someone who is willing to change their beliefs according to reason or evidence (actually, this applies to many adults too). It's possible that "demeanor and tone of voice" trumps "this person was wrong in the past."

The point is that children's reasoning is far, far less developed than adults', and you have to take their irrationalities into account when teaching them.

Comment author: polymathwannabe 20 January 2015 09:14:11PM 2 points [-]

The best thing about my Catholic high school was that it was run by the Salesian Order, which prefers a preventive method based on always giving good reasons for the rules.

Comment author: Evan_Gaensbauer 20 January 2015 05:45:13AM 6 points [-]

[This isn't a direct response to Mark, but a reply to encourage more responses]

To add another helpful framing: if you don't have children, but think that part of your attraction to LessWrong as an adult was based on how your parents raised you with an appreciation for rationality, how did that go? The obvious caveats apply: memories of childhood are unreliable and fuzzy, and personal perspectives on how your parents raised you will be biased.

I was raised by secular parents, who didn't put any particular emphasis on rationality when raising me, compared to other parents. However, for example, Julia and Jesse Galef have written on their blog about how their father raised them with rationality in mind.

Comment author: [deleted] 20 January 2015 02:39:58PM *  5 points [-]

Thanks for the call to action. In my own case I became a rationalist in spite of my upbringing. So people like me who don't have that background could really use advice from those who do :)

Comment author: ilzolende 21 January 2015 02:37:06AM 3 points [-]

They left Scientific American lying around a lot. The column that had the fewest prerequisites was Michael Shermer's skepticism column. Also, people around me kept trying to fix my brain, and when I ran into cognitive bias and other rationality topics, they were about fixing your own brain, so then I assumed that I needed to fix it.

In terms of religion stuff: My parents raised me with something between Conservative and Reform Judaism, but they talked about other religions in a way that implied Judaism was not particularly special, and mentioned internal religious differences, and I got just bored enough in religious services to read other parts of the book, which had some of the less appealing if more interesting content. (It wasn't the greatest comparative religious education: I thought that the way Islam worked was that they had the Torah, the New Testament, and the Qur'an as a third book, sort of the way the Christians had our religious text as well as the New Testament as a second book.)

Comment author: wadavis 21 January 2015 10:52:04PM 2 points [-]

Thanks for putting up this branch, Evan. I don't have children. I think my upbringing helped my rationality, but the lens of time is known to distort, so take it with a grain of salt.

Most of my rationality influence was a lead-by-example case. Accountability and agency were encouraged too; they may have made fertile soil for rational thought.

Ethics conversations were had and taken seriously (paraphrase: 'Why does everyone like you?' 'Cause I always cooperate' 'Don't people defect against you?' 'Yes, but defectors are rare and I more than cover my losses when dealing with other cooperators').

Thinking outside the box was encouraged (paraphrase: 'Interfering with the receiver is a 10 yard penalty, I can't do that.' 'What's worse, 10 yards or a touchdown?' 'But it is against the rules.' 'Why do you think the penalty is only 10 yards, and not being kicked from the game? Do you think the rule, and penalty, are part of the game mechanics?').

Goal based action was encouraged, acting on impulse was treated as being stupid (paraphrase: 'Why did you get in a fight' 'I was being bullied' 'Did fighting stop the bullying?' 'No' 'Ok, what are you going to try next?').

Comment author: Gunnar_Zarncke 20 January 2015 09:33:01PM 4 points [-]

I am also a father of four boys, now 3, 6, 8 and 11. You can find some parenting resources linked on my user page.

Comment author: Gram_Stone 19 January 2015 05:17:19PM *  9 points [-]

Julia Galef, President and Co-founder of the Center for Applied Rationality, has video blogged on this twice. The first was How to Raise a Rationalist Kid, and the second is Wisdom from Our Mother, which might be a bit more relevant to you because, in that video, her brother Jesse specifically discusses what his mother did in situations where he wasn't enthusiastic about learning something. I should say that it has more to do with when your kids think that they're bad at things than with when they reject something out of hand. To that I would say, and I think many others would say: Kids are smart and curious, rationalism makes sense, and if they don't reject everything else kids have learned throughout history out of hand, then they probably won't reject rationalism out of hand.

Comment author: JoshuaZ 19 January 2015 10:26:32PM 2 points [-]

I know of families who have used the "tooth fairy" as an opportunity to do critical thinking. I think it has gotten mentioned here before. Apparently sometimes children do this on their own. This post is relevant.

Comment author: [deleted] 19 January 2015 07:04:31PM 13 points [-]

Something I frequently see from people defending free speech is some variant of the idea "in the marketplace of ideas, the good ones will win out". Is anyone familiar with any deeper examination of this idea? For instance, whether an idea market actually exists, how much it resembles a marketplace for goods, how it might reliably go wrong, etc.

Comment author: Vaniver 19 January 2015 10:18:30PM *  15 points [-]

I think you're better off looking into theories of memetics; that is, a marketplace doesn't seem to be as good an analogy as an ecology. That leads to the somewhat less cheery argument that 'good' doesn't mean 'true' so much as 'effective at spreading,' and in particular memes can win by poisoning their competitors through allelopathy, just like an oak tree.

Comment author: g_pepper 19 January 2015 11:40:10PM 1 point [-]

This video is somewhat on topic: The New (and Old) Attacks on Free Thought: Jonathan Rauch on Kindly Inquisitors

Jonathan Rauch discusses the new edition of his book, Kindly Inquisitors, and presents a thoughtful and rational defense of free speech. I believe he makes some comparisons between the marketplace of ideas and economic markets and he certainly makes an argument similar to the one that you mention. It is an excellent video, IMO, and well worth watching.

Comment author: Plasmon 19 January 2015 07:26:34AM 12 points [-]

Recently, there has been talk of outlawing or greatly limiting encryption in Britain. Many people hypothesize that this is a deliberate attempt at shifting the Overton window, in order to get a more reasonable-sounding but still quite extreme law passed.

For anyone who would want to shift the Overton window in the other direction, is there a position that is more extreme than "we should encrypt everything all the time"?

Comment author: ilzolende 19 January 2015 08:04:32AM 13 points [-]

Assuming you just want people throwing ideas at you:

Making it illegal to communicate in cleartext? Adding mandatory cryptography classes to schools? Requiring everyone to register a public key and having a government key server? Not compensating identity theft victims and the like if they didn't use good security?

Comment author: VincentYu 19 January 2015 12:11:24PM 7 points [-]

Requiring everyone to register a public key

This is already the case in Estonia, where every citizen over the age of 14 has a government-issued ID card containing two X.509 RSA key pairs. TLS client authentication is widely deployed for Estonian web services such as internet banking.

(Due to ideological differences regarding the centralization of trust, I think it's unlikely that governments will adopt OpenPGP over X.509.)

Comment author: [deleted] 19 January 2015 12:29:04PM 1 point [-]

Giving people an official RSA keypair in their smartcard government IDs is fine. That solves all sorts of problems, and enables a bunch of really cool tech.

Requiring that every public key used in any context be registered with the government, or worse, some sort of key escrow, is a totally different matter.

Comment author: ilzolende 20 January 2015 12:25:25AM 2 points [-]

I was thinking less "everyone must register all their public keys, and you can't have a second identity with its own key" and more "everyone has to have at least 1 public key officially associated with them so that they can sign things and be sent stuff securely." And that Estonian system sounds pretty cool.

Comment author: Alsadius 20 January 2015 04:23:22AM 1 point [-]

What would you estimate the probability is of ever having the former without the latter? Of having that happy state last for more than a few years?

Comment author: [deleted] 20 January 2015 02:46:15PM 3 points [-]

Well the former pretty much describes the current state of affairs. Anyone with a government ID card or national healthcare ID probably has a chip embedded with an escrowed signing key. There's really nothing unique about Estonia here -- they're using the same system everyone else is using. Even if your country, like the USA, doesn't have a national ID of some kind or doesn't have a chip embedded, your passport does. The international standard governing "smart passports" being issued by just about every country in existence for the past 5-10 years includes embedded digital signature capability.

Now I don't really know how to estimate the probability of sliding into the latter case. I don't see them as intrinsically connected however.

Comment author: Lumifer 20 January 2015 06:22:23PM 2 points [-]

Generating private/public key pairs is trivially easy.
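To illustrate just how easy: here's a minimal sketch in Python, assuming the third-party cryptography package is installed. A fresh, unregistered key pair takes a couple of lines and well under a second.

from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# Generate a new 2048-bit RSA key pair entirely locally; no registry is involved.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Serialize the public half as PEM, ready to hand to whomever you choose.
pem = key.public_key().public_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PublicFormat.SubjectPublicKeyInfo,
)
print(pem.decode())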

Comment author: emr 19 January 2015 06:06:41PM 7 points [-]

Frame attempts to limit the use of encryption as unilateral disarmament, and name specific threats.

As in, if the government "has your password", how sure are you that your password isn't eventually going to be stolen by Chinese government hackers? Putin? Estonian scammers? Terrorists? Your ex-partner? And you know that your allies over in (Germany, United States, Israel, France) are going to get their hands on it too, right? And have you thought about when (hated political party) gets voted into power 5 years from now?

A second good framing is used by the ACLU representative in the Guardian article: You won't be able to use technologies X Y and Z, and you'll fall behind other countries technologically and economically.

Comment author: fubarobfusco 19 January 2015 08:46:12AM 4 points [-]

To be a bit more specific than "we should encrypt everything all the time":

Mandatory full-disk encryption on all computer systems sold, by analogy to mandatory seat belts in cars — it used to be an optional extra, but in the modern world it's unsafe to operate without it.

Comment author: adamzerner 23 January 2015 03:42:54AM *  9 points [-]

I just started using the Less Wrong Study Hall. It's been great! I find myself to be more productive, and there's something fun about being in the company of other friendly people.

I don't have anything insightful to say. I'd just like to reiterate that:

1) It exists and you should consider using it (it seems that not too many people know about it).

2) I (and others) think that there should be a link to it in the sidebar.

Comment author: Dorikka 19 January 2015 02:18:36AM 9 points [-]

At one point there was a significant amount of discussion regarding Modafinil - this seems to have died down in the past year or so. I'm curious whether any significant updating has occurred since then (based either on research or experience).

(This is a repost from last week's open thread due to many upvotes and few replies. However, see here for Gwern's response.)

Comment author: btrettel 19 January 2015 06:18:24PM *  18 points [-]

I meant to post something about my experience with armodafinil about a year ago, but I never got around to it. My overall experience was strongly negative. Looks like I did write a long post in a text file a day or so after taking armodafinil, so here's what I had to say back then:

Some background:

I'm a white male in my mid-20s. I have excessive daytime sleepiness, and I believe this is because I'm a long sleeper who has difficulty getting an adequate duration of sleep. There are several long sleepers in my family. My mother and I tend to not like how stimulants make us feel, e.g., pseudoephedrine makes us fairly nervous, though it will help our nasal congestion from allergies and help wake us up. I was interested in trying modafinil because I hear it has proportionally fewer negative effects relative to its wake-promoting effects.

My neurologist gave me a few samples of armodafinil, which is basically a variant of modafinil. I was busy in the month after I last saw my neurologist and didn't think about taking it at all, but come mid-February I remembered to try it.

Saturday, Feb. 15, 2014:

I woke up at 8:30 am, as I usually did, and started eating a chocolate chip muffin for breakfast. During breakfast I took 4000 IU of vitamin D and 150 mg of armodafinil. I took these at 8:37 am.

I started organizing files on my computer. I still felt fairly tired, and considered going back to sleep, but I did not because I try to keep a very regular sleep schedule. I will take naps in the afternoon (before 8 pm, or so, to avoid delaying my bedtime) if necessary, but I try to wait until then. Until around 10:30 am, I thought armodafinil was doing absolutely nothing. I know armodafinil takes some time to kick in, but I didn't expect it to take that long. Maybe I'm one of the people for whom modafinil doesn't work?

At around 11 am I realized that I felt weird. It was obvious that the armodafinil had kicked in fierce at that point. I checked my heart rate: 75 bpm, which is higher than normal, though not as high as other stimulants take me. I wouldn't quite describe how I felt as more awake, though I don't think I could involuntarily fall asleep now. It felt as if I could fall asleep if I wanted to, but I didn't want to. I felt a bit more nervous, perhaps, but that might just be the placebo effect. It certainly was not as strong as what 60 mg of pseudoephedrine does to me. I got a phone call from my apartment manager saying that they'll be showing my apartment today, so I (slowly) started sweeping and vacuuming to make my apartment a bit more presentable. I was pacing around like crazy while doing this.

At about 11:30 am I took a shower. I started realizing that I have no impulse control. Instead of washing myself, I'd start, get distracted by some thought, think about that for a while, realize I'm in the shower, forget where I was in my shower routine, etc. I started thinking that armodafinil might have given me ADHD, which is odd given that I've read it might be useful for the treatment of ADHD.

After the shower I consulted the note packet that came with the armodafinil. Given what these notes said, I think I was experiencing a side effect. The notes said to discontinue use of armodafinil if you experience these symptoms. "Okay, can do." is what I thought.

I went to the LessWrong meetup and told Vaniver that I think armodafinil is not doing nice things for me. Another LWer suggested that perhaps these effects go away with repeated use; I said that I didn't know, but I don't intend to find out. During the entire meetup I had a lot of difficulty sitting still. I got up a few times to get water, or a napkin, or a bag of chips, but I don't think I actually wanted any of those things; I guess I just didn't want to stay still.

The early afternoon is the hardest time for me to stay awake, and this meetup spanned that time entirely. I yawned a few times during the meetup, but I didn't become so drowsy that I had to take a nap, as I often do. I take this as evidence that armodafinil helps my EDS, though it's not that strong because I never really felt "awake" during this entire process. I felt really weird in a way that I can't quite describe.

After the meetup (about 4 pm), I rode my bike to the downtown library to return a book. Purely subjectively, I'd say armodafinil increased my endurance. I'm in reasonable shape now, but I felt that I could maintain 20+ mph more easily today than a few days ago. Objectively, though, it doesn't seem that my average speed increased much if at all; it was about 13 mph on Saturday and 12 to 13 on most days.

When I got back to my apartment, I felt a little better. Still fidgety and easily distracted, but slightly better. Perhaps the exercise helped, or the armodafinil was wearing off? I usually go running around this time anyway, so I hoped this would help more. I went on a run, but it didn't have quite the effect the bike ride did. I then started making dinner, but I was continually distracted by my computer throughout.

I noticed that my tinnitus was much worse today. Not sure if this was due to the armodafinil, but it sounded at least 10 dB louder than usual. Ambient noises could not mask it.

Around 10 pm, I started feeling more tired, so I figured the armodafinil must be wearing off. I still felt odd and easily distracted, though. I read on my couch for a while until I felt as if I could fall asleep quickly, and I slept briefly on my couch. I woke up and moved to my bed, where it took me a while to fall asleep again, but I did. I woke up several times during the night and felt I had to try quite a few positions before I found something comfortable. This wasn't particularly restful. Otherwise, I don't think armodafinil did much to my nighttime sleep. I think if it hadn't caused some manic symptoms, I probably wouldn't have had any issues sleeping.

Sunday, Feb. 16, 2014:

I still felt a little odd when I woke up, but it was very obvious now that these effects were wearing off. I had read that armodafinil has a half-life of about 12 to 15 hours, so using a simple exponential decay with a conservative half-life, I saw that I still had the equivalent of about 45 mg of armodafinil in my system. Tomorrow morning that would decrease to about 15 mg; after the third day it would be down to 5 mg. I can't wait for this to be out of my system.
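A minimal Python sketch of that back-of-the-envelope calculation, assuming a single 150 mg dose and the conservative 15-hour half-life:

dose_mg = 150.0
half_life_h = 15.0  # conservative end of the quoted 12-15 hour range

for hours_since_dose in (24, 48, 72):  # roughly the next three mornings
    remaining_mg = dose_mg * 0.5 ** (hours_since_dose / half_life_h)
    print(hours_since_dose, round(remaining_mg, 1))  # ~49.5, ~16.3, ~5.4 mg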

Overall, I'd say taking armodafinil was worthwhile as I learned something about myself, which is that I probably should avoid stimulants as much as possible.

(Not from my original intended post: I want to note that I'm doing much better now, after getting more sleep. No stimulants necessary. I haven't seen a neurologist since I wrote the post above and probably won't again.)

Comment author: hg00 21 January 2015 06:07:20AM *  4 points [-]

Your main complaints about your drug experience seem to be (a) feeling unusual, (b) having some difficulty managing your attention, (c) feeling excessively fidgety, (d) louder tinnitus, and (e) sleep difficulty. As someone who has experimented with psychoactive drugs a fair amount, including modafinil, my impression is that (a) and (b) are pretty common with psychoactive drugs and are almost always transient and harmless (unless you're driving a car, biking, operating heavy machinery, etc.). ((c) is less common but definitely present with some, e.g. coffee. (d) and (e) are probably good reasons to stop using a particular drug.) In fact, I've gotten to the point where I consider feeling unusual and having my attention work differently to be fun, interesting experiences to observe and learn from.

So my thought is that before trying modafinil, maybe people should experiment with small doses of strongly psychoactive drugs that don't have a 12-hour half life, perhaps in a safe & supervised environment, to learn that altered mental states aren't scary and can be pretty useful for certain tasks--they're like distinct mental gears you can enter using cheap, reliable external aids.

(For example, drink half a cup of coffee, then a full cup of coffee, then two cups of coffee on separate days to know what it's like to be highly stimulated, and a cup of beer, two cups of beer, and four cups of beer on separate days to know what it's like to be highly disinhibited. Kratom is another highly useful but little known legal psychoactive; for example, this successful blogger primarily credits kratom with his success at building his online empire, and I'm not surprised at all given my kratom experiences... any resistance I have to doing tasks seems to just melt away on kratom.)

(Disclaimer: I'm a foolish young person and maybe you should ignore everything I'm saying. Also if you really did experience stimulant induced mania you should probably follow the instructions on the label.)

Comment author: btrettel 28 January 2015 11:05:02PM *  0 points [-]

Appreciate your response and perspective, hg00.

I think smaller doses are prudent for people experimenting with these things. If I were to try armodafinil again, I would cut the pill in half or even into quarters. (I had no real choice in the pill dosage, as I only received a sample.) Though, in retrospect, I think avoiding (ar)modafinil altogether would be smart because the half-life is way too long.

I'm basically straight-edge, though I'm open-minded and willing to try some drugs if I think they might have a positive effect on me. I've only tried nootropics, and so far I have not been impressed. Either they do nothing or make me feel really strange. Others' experiences may vary. There doesn't seem to be anything here for me. At this point I have no intention of ever trying a drug for non-medical reasons.

What I experienced isn't exactly clear, but I didn't like it. In fact, it took several weeks for me to fully recover from taking armodafinil. After a few weeks or so I felt mostly normal, and a bit later the tinnitus finally died down. The latter isn't that unusual for my tinnitus, actually. After exposure to a loud noise I might have louder tinnitus for several weeks. (Not that mine is ever quiet. It doesn't bother me, but I imagine normal for me would drive most people nuts. It never goes away and probably will only ever get worse, and I accept that.)

Comment author: hg00 29 January 2015 09:56:44PM 1 point [-]

Understood. I don't doubt your self-assessments, just wanted to provide a contrasting perspective. For tinnitus, you might want to try googling "tinnitus replacement therapy" or experimenting with ear/jaw/neck massage; both of these seem to have been helpful for me.

Comment author: btrettel 30 January 2015 01:25:11AM 0 points [-]

I've looked into tinnitus retraining therapy (I think this is what you meant) but decided I'm not bothered enough by my tinnitus to go that route. I'll keep it in mind if this changes. I have not heard about massage helping tinnitus. I'll have to give that a shot as I'm sure it would be enjoyable even without tinnitus relief.

Otherwise, I've found noise machines to be helpful. Sometimes I also listen to a brown noise mp3 when working and I don't want to listen to music. I find that this totally masks my tinnitus, masks most ambient noises, and is rather pleasant (it sounds like a waterfall). (I want to note that my brother finds artificial noise to be worse than tinnitus, so your mileage may vary.)

If you use Linux and have the right software installed you can run the following commands to generate a brown noise mp3:

# Synthesize 30 minutes of stereo brown noise (slightly attenuated, with a brief fade), then encode it to mp3:
sox -c 2 --null out.wav synth 30:00 brownnoise vol -0.4dB fade t 3 30:00
lame --preset insane out.wav out.mp3

Comment author: hg00 30 January 2015 06:04:24AM 0 points [-]

The core idea behind tinnitus retraining therapy is to listen to noise that doesn't totally mask the tinnitus but is more salient than it. The principle being that it helps you think of your tinnitus as background noise. Seems to work for me.

Comment author: sediment 21 January 2015 07:16:40PM 4 points [-]

A month or two ago I started taking Modafinil occasionally; I've probably taken it fewer than a dozen times overall.

I think I'd expected it to give a kind of Ritalin-like focus and concentration, but that isn't really how it affected me. I'd describe the effects less in terms of "focus" and more in terms of a variable I term "wherewithal". I've recently started using this term in my internal monologue to describe my levels of "ability to undertake tasks". E.g., "I'm hungry, but I definitely don't have the wherewithal to cook anything complicated tonight; better just get a pizza." Or, on waking up: "Hey, my wherewithal levels are unusually high today. Better not fritter that away." (Semantically, it's a bit like the SJ-originating concept of "spoons" but without that term's baggage.) It's this quantity which I think Modafinil targets, for me: it's a sort of "wherewithal boost". I don't know how well this accords with other people's experience. I do think I've heard some people describe it as a focus/concentration booster. (Perhaps I should try another nootropic to get that effect, or perhaps my brain is just beyond help on that front.)

I did, however, start to feel it suppressed my appetite to unhealthily, even dangerously, low levels. (After taking it for two days in a row, I felt dizzy after coming down a flight of stairs.) I realize that it's possible to compensate for this by making oneself eat when one doesn't feel hungry, but somehow this doesn't seem that pleasant. For this reason, I've been taking it less recently.

I'd be curious to know whether others experience the appetite suppression to the same extent; it's not something that I hear people talk about very much. Perhaps others are just better at dealing with it than I am or don't care.

It's also hard to say how much of its positive effects were placebo, given that I took it on days when I'd already determined I wanted to "get a lot of shit done".

I might still try armodafinil at some point.

Comment author: SolveIt 21 January 2015 10:37:01PM 3 points [-]

Huh; along with the low side effects, it sounds like a candidate for a weight-loss drug.

Comment author: sediment 25 January 2015 03:13:52PM 2 points [-]

Yes, perhaps for some, but I'm already closer to underweight than I am to overweight, so for me that's a big con.

Comment author: NancyLebovitz 23 January 2015 09:40:03PM 1 point [-]

I wonder if activation energy is a good way of describing difficulties with getting started.

Discussion of different kinds of wherewithal

Comment author: sediment 25 January 2015 03:10:48PM 1 point [-]

Yep, the model in that post is quite close to the one I'm trying to describe.

Comment author: [deleted] 22 January 2015 03:38:46AM *  1 point [-]

I took modafinil twice. I'd been having problems staying awake during the day -- it's hard for me to sleep before 2am -- and those completely disappeared. I had more energy then than I've had in a while. No negatives. The only reason I haven't gotten more is that I don't have a mailing address.

(Disclaimer: I drink a lot of coffee and tea, use a lot of snus, and drink like a relevant ethnic stereotype on weekends.)

Comment author: Dr_Manhattan 21 January 2015 01:52:37AM 1 point [-]

Mixed feelings. If you need wakefulness it's available on tap, but with a side of anxiety and trouble going to sleep later if your dosage is not perfectly calibrated.

Comment author: Alex_Miller 19 January 2015 01:05:16AM 23 points [-]

In my small fourth grade class of 20 students, we are learning how to write essays, and get to pick our own thesis statements. One kid, who had a younger sibling, picked the thesis statement: "Being an older sibling is hard." Another kid did "Being the youngest child is hard." Yet another did "Being the middle child is hard", and someone else did "Being an only child is hard." I find this a rather humorous example of how people often make it look like they're being oppressed.

Does anyone know why people do this?

Comment author: Punoxysm 19 January 2015 04:35:10AM 24 points [-]

Be charitable; don't assume they're trying to present themselves as martyrs. Instead they could be outlining the peculiar challenges and difficulties of their particular positions.

Life is hard for everyone at times.

Comment author: mwengler 20 January 2015 01:08:50PM 7 points [-]

Anybody should be able to write an essay "why my life is hard." They should also be able to write an essay "why my life is easy." It might be a great exercise to have every student write a second essay on a thesis which is essentially the opposite of the thesis of their first essay.

Comment author: dxu 21 January 2015 04:40:42AM *  4 points [-]

I wouldn't ascribe conscious intent to their actions, but it may be that making your own life seem harder is an evolved social behavior. Remember, humans are adaptation-executors, not fitness-maximizers, so it's entirely possible that the students thought they were being honest, when in fact they may have been subconsciously exaggerating the difficulties they were facing in day-to-day life.

Related: Why Does Power Corrupt?

Comment author: Gondolinian 19 January 2015 01:24:17AM *  10 points [-]

One kid, who had a younger sibling, picked the thesis statement: "Being an older sibling is hard." Another kid did "Being the youngest child is hard." Yet another did "Being the middle child is hard", and someone else did "Being an only child is hard." I find this a rather humorous example of how people often make it look like they're being oppressed.

Taken at face value, the four statements aren't incompatible. Saying that being X is hard in an absolute sense isn't the same as saying that being X is harder than being Y in a relative sense, or that X people are being oppressed.

Comment author: B_For_Bandana 20 January 2015 06:16:58PM *  1 point [-]

Sure, but the point is that the same argument applies to the flipside: everyone could've written essays like "X is fun" or "Y is fun" without contradiction. But they chose "hard" instead. Why?

Comment author: JoshuaZ 19 January 2015 02:39:09AM 5 points [-]

It is much easier to notice the things in your own situation that don't go well than to notice all the things that go badly in someone else's situation.

I'm curious; have you pointed this out to the students? If so, how did they react?

Comment author: James_Miller 19 January 2015 03:42:11AM 9 points [-]

Alex Miller, my son, is one of the students.

Comment author: JoshuaZ 19 January 2015 04:44:31AM 2 points [-]

Ah, that clarifies that. I think I read "we are learning" as the teacher saying that since I've seen teachers use that language (e.g. "next week we'll learn about derivatives").

Comment author: James_Miller 19 January 2015 05:46:49AM 21 points [-]

Alex greatly enjoyed being mistaken for his teacher.

Comment author: [deleted] 19 January 2015 10:59:45AM *  1 point [-]

So nice that you two are able to enjoy LessWrong together. Given that this is an open thread, is there anything you (or Alex) would like to share about raising rationalists? My daughters are 3yo and 1yo, so I'm only beginning to think about this...

EDIT: I made a top-level post here.

Comment author: James_Miller 20 January 2015 12:44:51AM *  2 points [-]

Alex loves using rationality to beat me in arguments, and part of why he is interested in learning about cognitive biases is to use them to explain why I'm wrong about something. I have warned him against doing this with anyone but me for now. I recommend the game Meta-Forms for your kids when they get to be 4-6. When he was much younger I would say something silly and insist I was right to provoke him into arguing against me.

Comment author: Calien 31 January 2015 11:20:36AM 1 point [-]

Has anyone gotten their parents into LessWrong yet? (High confidence that some have, but I haven't actually observed it.)

Comment author: gjm 19 January 2015 01:18:25AM 4 points [-]

The more you can blame whatever difficulties and frustrations you have on things outside your control, the less you have to think of them as your own fault. People like to think well of themselves.

Comment author: cameroncowan 20 January 2015 05:13:34AM 3 points [-]

Each experience has its own difficulties that are unknown unless you've lived it.

Comment author: RichardKennaway 20 January 2015 02:01:58PM 1 point [-]

Each experience has its own difficulties that are unknown unless you've lived it.

Corollary: one's own difficulties always seem bigger than everyone else's.

Comment author: ZankerH 19 January 2015 11:22:57AM 1 point [-]

Because running in the oppression olympics is the easiest way to gain status in most western societies. Looks like even children are starting to realise that, or maybe they're being indoctrinated to do so in other classes or at home.

Comment author: [deleted] 22 January 2015 03:43:19AM *  12 points [-]

I would like to point out that this is the only comment in the thread that doesn't assume that this behavior is culturally invariant, and suggest that the rest of LW think about that for a while.

Comment author: emr 22 January 2015 08:22:50AM 3 points [-]

I think the term "oppression olympics" is needlessly charged.

But it is a good question: Under what conditions will someone voice a complaint, and about what?

We learn early on that voicing certain complaints results in social punishment, even when those complaints are "valid" according to the stated moral aspirations of the community. If memory holds, the process of learning which complaints can be voiced is painful.

But at the same time, not all superficially negative self-disclosures are a true social loss: Signaling affliction seems to have been a subcultural strategy for quite a while, nowadays in teenagers, but we also have famous references to the over-the-top displays of grief and penitence from ancient (Judeo-Christian) cultures. And of course, complaints can also result in support, or can play a role in political games.

So there's a cost-benefit happening somewhere in the system, which we might hope to be reasonably specific about.

To touch on some controversies: There's a big push to reduce the dissonance between what we publicly accept as grounds for complaint and what we actually punish people for complaining about. Accepting for the moment that our stated principles are okay (which is where I expect you might disagree), this can still go wrong in several ways:

  1. People may mistake the aspiration for reality e.g. we tell kids they should complain about bullying and feel like we're making progress, but then we allow the system to punish kids just as harshly as ever after their disclosure, because we can't or won't change it.

  2. Or we feel that offering non-complaint-based advice is perpetuating or accepting a discrepancy between "valid complaints" and "effective complaints", e.g. the outcry when someone suggests a concrete way to avoid being sexually assaulted, or voices a concern about "victim mentality" (the mistake of thinking that complaining is more effective than it really is, often because everyone is only pretending that we are going to take complaints more seriously now)

  3. The project is eaten by political concerns e.g. we find ourselves debating exactly which groups get to participate in the new glasnost of complaining about complaint-hypocrisy.

  4. A group becomes unable to exclude bad actors who cloak themselves in the new language of moral progress. Social justice groups, who are very concerned with unfair exclusion, have this problem to a non-trivial degree.

The "Oppression olympics" is mostly point 3, with a bit of point 4. I'm actually far more concerned with points 1 and 2.

Comment author: seer 27 March 2015 07:39:22AM 6 points [-]

Accepting for the moment that our stated principles are okay (which is where I expect you might disagree)

This is not a good thing to accept, since the stated principles are themselves subject to change. Hence

5. Once society starts taking complaint X seriously enough to punish the perpetrator, people start making (weaker) complaint X'. Once society takes that complaint seriously people start making complaint X'', etc.

I would argue that, in the long term, 5 is actually the biggest problem.

Comment author: BenLowell 19 January 2015 06:53:17PM 1 point [-]

A lot of times, different ways that people act are different ways of getting emotional needs met, even if that isn't a conscious choice. In this case it is likely that they want recognition and sympathy for different pains they have. Or, it's more likely the case that the different hurts they have (being lonely, being picked on, getting hand-me-downs, whatever) are easily brought to mind. But when the person tells someone else about the things in their life that bother them, it's possible that someone could say "hey, it sounds like you are really lonely being an only child" and they would feel better.

Some example needs are things like attention, control, acceptance, trust, play, and meaning. There is a psychological model of how humans work that thinks of emotional needs as similar to physical needs like hunger, etc. So people have some need for attention, and will do different things for attention. They also have a need for emotional safety, just like physical safety. So just as someone sitting on an uncomfortable chair will move and complain about how uncomfortable the chair is, someone will do a similar thing if their big brother is picking on them.

Another reason people often make it look like they are being oppressed is that they feel oppressed. I don't know if you are mostly talking about people your age, or everyone, but it is not a surprise to me that lots of kids feel oppressed, since school and their parents prevent them from doing what they want. Plenty of adults express similar feelings though, I just expect not as many.

Comment author: jsu 19 January 2015 11:29:10AM *  1 point [-]

Maybe they are friends and discussed their thesis topics with each other. I find it unlikely that 4 out of 20 students would come up with sibling-related topics independently.

Comment author: gjm 19 January 2015 12:24:48PM 5 points [-]

Or maybe they picked them out loud in class, and some of those were deliberate responses to others.

So what happens is: Albert is an oldest child whose younger sister is loud and annoying and gets all the attention. He says "I'm going to write about how being an older sibling is hard". Beth is a youngest child whose older brothers get all the new clothes and toys and things; she gets their hand-me-downs. She thinks Albert's got it all wrong and, determined to set the record straight, says "I'm going to write about how being the youngest child is hard." Charles realises that as a middle child he has all the same problems Albert and Beth do, and misses out on some of their advantages, and says he's going to write about that. Diana hears all these and thinks, "Well, at least they have siblings to play with and relate to", and announces her intention to explain how things are bad for only children.

Notice that all these children may be absolutely right in thinking that they have difficulties caused by their sibling situation. They may also all be right in thinking that they would be better off with a different sibling situation. (Perhaps there's another youngest child in the class who loves it -- but you didn't hear from him.)

Comment author: DanielLC 19 January 2015 06:47:41AM 1 point [-]

Given a fixed level of success, it looks better if it was achieved under worse circumstances. Thus, people benefit from overstating their challenges. Since people aren't perfect liars, they also overestimate their challenges.

Comment author: solipsist 22 January 2015 03:47:35AM 8 points [-]

Who chooses the Featured Articles of the week?

Comment author: Douglas_Knight 28 January 2015 10:59:30PM *  2 points [-]

The homepage is controlled from the wiki here; it includes the template Lesswrong:FeaturedArticles that Google tells me is here. From the history, the editor of three years' tenure has wiki username Costanza and is probably the same as the LW user of the same name.

Comment author: sixes_and_sevens 19 January 2015 10:50:57AM 8 points [-]

Tell us about your feed reader of choice.

I've been using Feedly since Google Reader went away, and it has enough faults (buggy interface, terrible bookmarking, awkward phone app that needs to be online all the time) to motivate me toward a new one. Any recommendations?

Comment author: ZankerH 19 January 2015 11:20:46AM *  3 points [-]

After Reader was shut down, instead of trusting my RSS feeds to another always-online provider I decided to use local clients. I use Dropbox to keep the feed list and read status synchronised across all the devices I need them on.

Comment author: harshhpareek 20 January 2015 11:10:06PM *  2 points [-]

I tried using RSS readers, but I tended to forget to check their websites or apps. I could have trained myself to check them more often but I ended up using https://blogtrottr.com/ instead. It sends RSS feeds to your email inbox, so I can check blogs along with my email in the morning.

I haven't had any issues so far. They send you ads along with the feed to generate revenue. Having a revenue model is a solid plus in my book.

What I don't like about it: they don't have accounts so managing subscriptions is a little hard.

Comment author: twanvl 19 January 2015 05:46:47PM 2 points [-]

I switched to The Old Reader, which, as the name suggests, is pretty close to Google Reader in functionality.

Comment author: gjm 19 January 2015 12:04:23PM 2 points [-]

I use rawdog. It runs on my computer and generates a single HTML file, which contains a nice unified list of articles (rather than the common alternative, a list of feeds which I then have to drill down into). It doesn't rely on any external services other than the feeds themselves. By diddling with the template it uses to generate the HTML, I have given it a little interactivity (e.g., I can tell it to "collapse" some feeds so that they show only article titles rather than content; I can then un-collapse individual articles).
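The unified-list idea itself is easy to sketch. Here's a rough, hypothetical Python version assuming the feedparser package (nothing to do with rawdog's internals, just the general approach of merging every feed into one chronological list; the URLs are placeholders):

import time
import feedparser

# Placeholder feed URLs -- substitute your own subscriptions.
feeds = ["https://example.com/a.rss", "https://example.com/b.rss"]

entries = []
for url in feeds:
    for e in feedparser.parse(url).entries:
        published = e.get("published_parsed") or time.gmtime(0)
        entries.append((time.mktime(published), e.get("title", ""), e.get("link", "")))

# One flat list of articles across all feeds, newest first.
for _, title, link in sorted(entries, reverse=True):
    print(title, "-", link)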

Last I checked, it didn't work on Windows but could be coerced into doing so by fiddling with the source code (it's in Python).

There is a thing called Tiny Tiny RSS that, from what others have said, I suspect may offer kinda-similar functionality but better (with perhaps a bit more effort to get it set up initially). I keep meaning to check it out but failing to do so.

Comment author: philh 19 January 2015 11:46:52AM 2 points [-]

I use newsblur and it's fine, but I don't use bookmarking or an app or basically anything interesting.

Comment author: polymathwannabe 19 January 2015 08:39:10PM 1 point [-]

I've found Feedly on a browser is much more manageable than the Android app.

Comment author: roystgnr 20 January 2015 03:17:37PM 1 point [-]

I've tried TheOldReader, which worked well, even when they had to handle the sudden influx of Google Reader refugees. I'm currently using InoReader, which works very well, and Bloglines, which seems to be broken (for nearly a week now IIRC, and not for the first time in the last year).

Comment author: Pfft 20 January 2015 04:22:59AM *  1 point [-]

I use Digg Reader. It does not have any social networking features, but otherwise it basically works like Google Reader did.

For a while I was also using The Old Reader, but I switched away when it briefly looked like they were going to shut down. Digg Reader and The Old Reader seem very similar.

Comment author: RichardKennaway 19 January 2015 02:13:25PM 1 point [-]

I used Safari until Apple removed the RSS functionality, then switched to Vienna. OSX only.

Comment author: jaime2000 19 January 2015 02:07:55AM *  22 points [-]

Since Eliezer has forsaken us in favor of posting on Facebook, can somebody with an account please link to his posts? His page cannot be read by someone who is not logged in, but individual posts can be read if the URL is provided. As someone who abandoned his Facebook account years ago, I find this frustrating.

Comment author: [deleted] 19 January 2015 11:49:43AM 22 points [-]

Here's a month's worth:

https://www.facebook.com/yudkowsky/posts/10153041257924228

https://www.facebook.com/yudkowsky/posts/10153033570824228

https://www.facebook.com/yudkowsky/posts/10153030238814228

https://www.facebook.com/yudkowsky/posts/10153021749629228

https://www.facebook.com/yudkowsky/posts/10152977126839228

https://www.facebook.com/yudkowsky/posts/10152972605814228

https://www.facebook.com/yudkowsky/posts/10152972301299228

https://www.facebook.com/yudkowsky/posts/10152964087234228

https://www.facebook.com/yudkowsky/posts/10152957903859228

https://www.facebook.com/yudkowsky/posts/10152947952344228

https://www.facebook.com/yudkowsky/posts/10152946520029228

https://www.facebook.com/yudkowsky/posts/10152945423789228

https://www.facebook.com/yudkowsky/posts/10152941108249228

https://www.facebook.com/yudkowsky/posts/10152940624254228

https://www.facebook.com/yudkowsky/posts/10152938634304228

https://www.facebook.com/yudkowsky/posts/10152937953959228

https://www.facebook.com/yudkowsky/posts/10152933586294228

https://www.facebook.com/yudkowsky/posts/10152929868929228

https://www.facebook.com/yudkowsky/posts/10152919146569228

https://www.facebook.com/yudkowsky/posts/10152918491764228

https://www.facebook.com/yudkowsky/posts/10152915799124228

https://www.facebook.com/yudkowsky/posts/10152912313154228

https://www.facebook.com/yudkowsky/posts/10152908949454228

https://www.facebook.com/yudkowsky/posts/10152904788444228

https://www.facebook.com/yudkowsky/posts/10152902713609228

https://www.facebook.com/yudkowsky/posts/10152900703339228

Comment author: jaime2000 20 January 2015 01:58:37AM 2 points [-]

Thank you! This is great.

Comment author: mwengler 20 January 2015 01:04:19PM 8 points [-]

Why would you not create a sockpuppet facebook account for the purposes of reading posts you want to read?

Comment author: philosophytorres 24 January 2015 12:40:40AM 7 points [-]

Hello! I'm working on a couple of papers that may be published soon. Before this happens, I'd be extremely curious to know what people think about them -- in particular, what people think about my critique of Bostrom's definition of "existential risks." A very short write-up of the ideas can be found at the link below. (If posting links is in any way discouraged here, I'll take it down right away. Still trying to figure out what the norms of conversation are in this forum!)

A few key ideas: Bostrom's definition is problematic for two reasons. First, its account of who an existential risk affects is too promiscuous; it opens the door to counterexamples in which humanity is violently destroyed yet no existential risk occurs. And second, Bostrom's typology is incoherent: it fails to recognize that a consequence's scope has both spatial and temporal components, where different degrees of each can be combined with the other in different ways. At the end of the paper, I propose my own definition - one that attempts to solve both of these problems. Figure C may be particularly helpful.

Thoughts? I am more than open to feedback!

http://philosophytorres.org/XRiskologytheConceptofanExistentialRisk.pdf

Comment author: Manfred 26 January 2015 05:41:10AM 0 points [-]

This is a nice paper, and is probably the sort of thing philosophers can really sink their teeth into. One thing I really wanted was some discussion of the basic "something that would cause much of what we value about the universe to be lost" definition of 'catastrophic', which you could probably even find Bostrom endorsing somewhere.

Comment author: GuySrinivasan 19 January 2015 01:52:37AM 7 points [-]

We're looking for beta testers for the 16th "annual" Microsoft puzzle hunt. Interested folks should PM me, especially if you're in the Seattle area.

Comment author: [deleted] 21 January 2015 03:16:20PM 6 points [-]

Uhm, this is a rather weird way to describe how I think... but I feel like I've come full circle. I'm automatically thinking of ways to optimize, automatically trying to better understand the world around me. I'm reading LW articles and I sometimes think "yeah, I know about this"... I no longer feel the "Aha! How did I not realize this seemingly obvious thing I should have thought of already, which hurts my nerd always-be-right ego!" but rather, I read mid-post and just feel like I know this stuff already.

Naturally, I still am not 100% perfect, but I still think I'm on the right path. I've been mostly a lurker and registered not long ago. Has anyone else gotten the same feel? This feeling isn't really backed up by anything other than having a "I know this already" thought.

Comment author: Vaniver 23 January 2015 04:29:35PM *  3 points [-]

Has anyone else gotten the same feel?

Yes. Oftentimes people who played lots of games will describe the feeling as "leveling up," and it's a normal and desirable part of growth. This quote is relevant: it's important to not say "well, I've leveled up, no more growth necessary!", but instead always be on the lookout for the way to get to the next level. But the path that got you from level n-1 to n and the path that gets you from level n to level n+1 may be very different, and the restlessness that comes with feeling like you know this stuff is useful for getting you to look elsewhere.

(I'm not saying that you're "done with LW," but I do think you're "done with lurking" and I think that you've done the right thing by registering; it makes for different kinds of interaction, which leads to different kinds of learning.)

Comment author: Viliam_Bur 22 January 2015 03:41:36PM *  3 points [-]

I don't have a link, but something like this was already mentioned on LW... when you have already mastered some kind of thinking, it seems "obvious", even if it seemed original and awesome when you were reading it for the first time.

Although, this only proves that you have become more familiar with LW style of thinking. It does not automatically follow that "LW style of thinking" is "rationality". (Although I personally believe it is related.)

Comment author: Evan_Gaensbauer 23 January 2015 02:30:57PM 1 point [-]

I haven't "come full-circle", but I've had a similar experience. I haven't read all of LessWrong Sequences, maybe not even half. Some old friends of mine got me into the meetup at a time when I was studying microeconomics, and started majoring in cognitive science. So, I was enthralled by discussion, and went around the Internet and life learning about related topics. Occasionally, I read Sequences essays I haven't read before, and I realize I get the gist halfway through reading it.

That's my "yeah, I know about this...". It works for me epistemically. It might have helped that I tried to rationalize the existence of the Christian God as a child, up to the point of deism not specific to any religion, and finally to virtual atheism. I found by the time I encountered arguments for or against the existence of God in theology or philosophy in university, I wasn't phased by any of them because I'd generated all of them on my own before. That's another "yeah, I know about this" set of experiences, rather than a series of "Aha!'s" I expected. These mental exercises may have prepared me for future thinking on LessWrong.

Sometimes I'm not as curious as I used to be, and I don't often automatically think of ways to optimize. Instrumentally, I don't believe I'm "on the right path" for fulfilling my own goals. However, that is confounded by other factors of my own life I'm not willing to discuss publicly. So, I'm unsure how instrumentally rational I may or may not be.

Comment author: Username 25 January 2015 06:04:00PM 5 points [-]

I have (what I presume to be) massive social anxiety. I live near lots of communities of interest that probably contain lots of people I would like to meet and spend time with, but the psychological "activation energy" required to go to social events and not leave halfway through is huge, and so I usually end up just staying at home. I would prefer to be out meeting people and doing things, but when I actually try to do this, I get overcome by anxiety (or something resembling it), and I need to leave. Has anyone else had this problem, and if so, what techniques helped you overcome it? "Just practice" doesn't seem to be working--when I am able to muster up the willpower to go to social events (even very structured ones, which are much easier to deal with), it takes more and more willpower to stay there as the event goes on, and this doesn't seem to be changing.

Comment author: fubarobfusco 25 January 2015 06:58:51PM 2 points [-]

In my personal experience, what I thought was anxiety largely went away when I was treated for depression.

So I'm just gonna recommend what Scott has to say on that matter:

http://slatestarcodex.com/2014/06/16/things-that-sometimes-help-if-youre-depressed/

Comment author: Username 25 January 2015 11:05:49PM 0 points [-]

Thank you!

Based on the test Scott linked and my own subjective experience, it seems very unlikely that I am depressed. Which aspects of your treatment helped with what you thought was anxiety?

Comment author: ChristianKl 19 February 2015 12:23:51PM 0 points [-]

Do you do any sports? Martial arts classes, for example, give you an environment where you face your anxiety head-on.

Comment author: MrMind 26 January 2015 10:49:34AM 0 points [-]

I can offer at least two points of view.
The first is that what I thought was massive social anxiety was actually just social inexperience; that is, a large part of my anxiety derived from not knowing the accepted social protocol in a given situation. Usually sitting quietly and observing what others did helped.
The second is that you need to subdivide and identify which steps of social interaction you are able to do and which you aren't. For example, instead of just throwing yourself into a social gathering, you can get ready and go out of your house, but not get in front of the place. Or you can get in front of the place but not enter. Or you can enter but feel a sense of urgency that prompts you to leave immediately after, etc. Instead of "just practicing" whole interactions, identify the smallest next step that you can practice, and if you can't practice that step, subdivide into even smaller units (e.g. literally just doing the next step).

Comment author: VincentYu 26 January 2015 01:22:36AM 0 points [-]

I recommend reading section 19 (on the management of social anxiety disorder) in the recent treatment guidelines from the British Association for Psychopharmacology (pp. 17–19). A sample:

19.1. Recognition and diagnosis

Social anxiety disorder is often not recognised in primary medical care (Weiller et al., 1996) but detection can be enhanced through the use of screening questionnaires in psychologically distressed primary care patients (Donker et al., 2010; Terluin et al., 2009). Social anxiety disorder is often misconstrued as mere ‘shyness’ but can be distinguished from shyness by the higher levels of personal distress, more severe symptoms and greater impairment (Burstein et al., 2011; Heiser et al., 2009). The generalised sub-type (where anxiety is associated with many situations) is associated with greater disability and higher comorbidity, but patients with the non-generalised subtype (where anxiety is focused on a limited number of situations) can be substantially impaired (Aderka et al., 2012; Wong et al., 2012). Social anxiety disorder is hard to distinguish from avoidant personality disorder, which may represent a more severe form of the same condition (Reich, 2009). Patients with social anxiety disorder often present with symptoms arising from comorbid conditions (especially depression), rather than with anxiety symptoms and avoidance of social and performance situations (Stein et al., 1999). There are strong, and possibly two-way, associations between social anxiety disorder and dependence on alcohol and cannabis (Buckner et al., 2008; Robinson et al., 2011).

19.2. Acute treatment

The findings of meta-analyses and randomised placebo-controlled treatment studies indicate that a range of approaches are efficacious in acute treatment (Blanco et al., 2013). CBT [cognitive behavioral therapy] is efficacious in adults (Hofmann and Smits, 2008) and children (James et al., 2005): cognitive therapy appears superior to exposure therapy (Ougrin, 2011), but the evidence for the efficacy of social skills training is less strong (Ponniah and Hollon, 2008). Antidepressant drugs with proven efficacy include most SSRIs (escitalopram, fluoxetine, fluvoxamine, paroxetine, sertraline), the SNRI venlafaxine, the MAOI phenelzine, and the RIMA moclobemide.

[...]

19.4. Comparative efficacy of pharmacological, psychological and combination treatments

Pharmacological and psychological treatments, when delivered singly, have broadly similar efficacy in acute treatment (Canton et al., 2012). However, acute treatment with cognitive therapy (group or individual) is associated with a reduced risk of symptomatic relapse at follow-up (Canton et al., 2012). It is unlikely that the combination of pharmacological with psychological treatments is associated with greater overall efficacy than with either treatment, when given alone, as only one in four studies of the relative efficacy of combination treatment found evidence for superior efficacy (Blanco et al., 2010). The findings of small randomised placebo-controlled studies suggest that the efficacy of psychological treatment may be enhanced through prior administration of d-cycloserine (Guastella et al., 2008; Hofmann et al., 2006) or cannabidiol (Bergamaschi et al., 2011).

From a patient perspective, the guidelines suggest that each of the following four approaches should be similarly effective for the treatment of social anxiety as long as the care provider is adequately trained and up-to-date with current best practice:

  • Pharmacotherapy
    • given by a psychiatrist.
    • given by a primary care physician.
  • Psychotherapy
    • with a therapist.
    • in a group setting.
Comment author: JoshuaZ 20 January 2015 04:34:35PM *  5 points [-]

Precommitting to a secret prediction which I'll reveal on April 15. MD5 hash for the prediction is 38bd807a6872f6a5622aa2b011fd8f03 .

Comment author: gjm 20 January 2015 05:45:19PM 7 points [-]

This is advance notice that unless your prediction is a short bit of plaintext that obviously doesn't have more than a few bits' worth of scope for massaging, your use of MD5 is likely to be taken as showing that you cheated.

Comment author: JoshuaZ 20 January 2015 06:04:46PM 5 points [-]

Valid point. Here is the SHA-1 hash: f886dee5be3192819b3cd596cd73919f5c1e0a2c .
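
For readers unfamiliar with the trick: a hash commitment just means publishing hash(prediction) now and the plaintext later. A minimal sketch in Python (the prediction text, nonce, and choice of SHA-256 below are illustrative placeholders, not anything JoshuaZ actually used):

    import hashlib

    # The nonce guards against brute-forcing short or guessable predictions.
    prediction = "By April 15, X will have happened. Nonce: 7f3a9c"
    digest = hashlib.sha256(prediction.encode("utf-8")).hexdigest()
    print(digest)  # publish this now; reveal `prediction` later

    # Verification: anyone can recompute the hash from the revealed text
    # and check that it matches the digest published earlier.

Per gjm's point, the commitment is only convincing if the revealed plaintext leaves little room for after-the-fact massaging.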

Comment author: gjm 20 January 2015 06:07:00PM 11 points [-]

Copy of JoshuaZ's SHA-1 hash as of 2015-01-20 18:06 GMT: f886dee5be3192819b3cd596cd73919f5c1e0a2c .

Comment author: Vaniver 20 January 2015 05:36:55PM 1 point [-]

Hash copy: 38bd807a6872f6a5622aa2b011fd8f03

Comment author: JoshuaZ 20 January 2015 04:36:14PM 1 point [-]

I just realized that editing the grammar above was an issue, since it doesn't show when the edit occurred, so I'm repeating the hash here in a comment which will remain unedited: 38bd807a6872f6a5622aa2b011fd8f03 .

Comment author: NancyLebovitz 24 January 2015 11:58:14AM *  4 points [-]

What makes teams more effective

It isn't the total IQ of the team, and whether they're working face to face doesn't matter.

The factors discovered were fairly equal contributions to discussion from the members, the members' emotional perceptiveness, and the number of women, though the effect of the number of women is partially explained by women tending to be emotionally perceptive.

On the one hand, I've learned to be skeptical of social science research-- and I add some extra skepticism for experiments that are simulations of the real world. In this case, the teams were working on toy problems.

On the other hand, this study appeals strongly to my prejudice in favor of niceness. I found the presence of women to be a surprising factor, since I haven't noticed women as being easier to work with.

A notion: the fairly equal contribution part may be, not exactly that everyone contributes more, but that if the conversation is dominated by a few voices, those voices tend to repeat themselves a lot, and therefore contribute little compared to the time they take up.

Comment author: Gram_Stone 24 January 2015 08:54:57PM *  2 points [-]

Here are the papers:

I wonder how all-female groups compare to groups with just one male, and how all-male groups compare to groups with just one female. It seems to me like it's harder for any one person to dominate whenever people feel the need to signal egalitarian values like a preference for gender or racial equality. I don't know anything about statistics yet, so maybe this is implausible, but I think part of the reason diversity was an insignificant predictor is that poor theory of mind caused by (?) ingroup favoritism dominates as diversity increases and drowns out the effect of the need to signal egalitarian values. So I think it would be cool to see how collective intelligence changes when you go from 'completely' homogeneous to 'almost' homogeneous in experimental groups composed of subjects from cultures that value egalitarianism highly. I would like to see this replicated with subjects from less egalitarian cultures as well, but that's hard sometimes.

Comment author: Viliam_Bur 26 January 2015 09:03:52AM *  1 point [-]

My guess: People in the team need to communicate. This can be essentially achieved by two ways:

1) All team members voice their opinions openly.

2) Some team members don't voice their opinions, but other members are good at reading emotions, so the latter recognize when the former believe they know something relevant.

If this model is true, we would see that equal contribution (no one is silent) or emotional perceptiveness (other people recognize when the silent person wants to say something) increase the team output.

Comment author: Lumifer 22 January 2015 05:23:32PM 4 points [-]

People are perennially interested in the reliability of hard drives. Here is useful hard data. Summary:

At Backblaze, as of December 31, 2014, we had 41,213 disk drives spinning in our data center, storing all of the data for our unlimited backup service. That is up from 27,134 at the end of 2013. ... The table below shows the annual failure rate through the year 2014.

tl;dr Avoid 3TB Seagate Barracuda drives.

Comment author: zedzed 25 January 2015 03:23:25PM *  1 point [-]

I spend time in hardware enthusiast communities and am not so impressed with Backblaze. Even here, the Seagate failure rates seem suspiciously anomalous.

Also, consider SSDs, which are probably a better match for most people here (my rig has run a 256 GB SSD for the past 2.5 years and I've yet to want for more storage). They're especially good for laptops; they use less power (= your battery lasts longer) and can stand up to shock (so your laptop doesn't break if you drop it).

Comment author: Lumifer 26 January 2015 05:09:14PM *  1 point [-]

I did not mean to endorse any particular service or give recommendations as to which storage devices people should buy. I found hard data, which is rare to come by, and I shared it. If you think the data is wrong or misleading, do tell.

Comment author: zedzed 26 January 2015 07:28:32PM *  0 points [-]

Consensus is that modern HDD's from reputable manufacturers have approximately equal, low failure rates, especially after the first year. You should still back up important data (low != 0), but the differences in failure rates in the consumer space are small enough to not really sway purchasing decisions.

Their methodology probably doesn't extrapolate well because they're testing the drives in what amounts to a NAS, and the WD Reds (which did well) are NAS drives, and therefore designed to operate 24/7 with vibration and non-great cooling, whereas the Seagate Barracudas are just absolutely not NAS drives (unlike, say, the Seagate NAS drives). So, it's not really surprising they had a much higher failure rate, but it'd also be incorrect to conclude that you should avoid them. If I'm building a rig for work, internet use, or gaming {1}, then my HDD's going to be in a well-cooled, non-vibrating environment, and not in use 24/7, so I'm essentially throwing away a 15% price premium for the WD Reds (or 60% for the HGST Deskstars). OTOH, if you're backing up your data locally on a NAS, pay the gorram premium.

{1} Again, though, SSD's are increasingly likely the way to go. You can get a sufficiently good 256 GB SSD for about the price of a 3 TB HDD, and if you're never going to use more than 250 GB (which, I'm guessing, is at least 80% of people reading this who don't already know whether an SSD or HDD better meets their needs), you're essentially getting substantially better performance (up to an order of magnitude), more reliability, and less noise for free. I harp on this because SSD's come in a 2.5-inch form factor, and the more the standard storage option is an SSD, the more cases won't have a whole bunch of room taken up by 3.5-inch bays I don't use. More importantly, there'll finally be budget laptops that I don't have to immediately take apart, clone the OS onto an SSD, reassemble, and figure out what to do with the HDD it came with just to get a decent experience. Gah! SSD's are the right choice for most people and there's an externality when they get HDD's instead because "more gigabytes".

Comment author: Lumifer 26 January 2015 07:44:36PM *  1 point [-]

Consensus is that modern HDD's from reputable manufacturers have approximately equal low failure rates, especially after the first year.

I am sorry, the link shows hard data which disproves that statement and not in a gentle way, either.

So, it's not really surprising they had a much higher failure rate

Didn't your first sentence state that all failure rates are "approximately equal"? Make up your mind.

my HDD's going to be in a well-cooled, non-vibrating environment

Assumption not in evidence. I've seen a LOT of computers totally taken over by dust bunnies :-) The reason you go look at that grey disk where the fan vent used to be is that your bios starts screaming at you that the machine is overheating :-D

SSD's are the right choice for most people

Yes, but that's irrelevant to the original post which looks at reliability of rotating-platter hard drives. If you think you don't care about the issue, well, what are you doing in this subthread?

Comment author: zedzed 26 January 2015 09:01:28PM *  2 points [-]

My above comment was poorly written. Sorry. Hem.

Consumer-grade HDD's, used properly, all have about the same, low failure rate. If you treat your desktop like a NAS or server, they will drop like flies (as evidenced). If you treat your desktop like a desktop, then a lot of the price-raising enterprise-grade features (vibration resistance, 24/7 operation) count for zilch. They're still higher-end drives, and will last longer, but assuming you give your desktop a fraction of the maintenance you give your car (like taking 5 minutes to blow it out every other year), not by a lot.

Assumption not in evidence.

Mea culpa. I'll give you heat, but vibration tolerance and 24/7 operation are enterprise-grade features with minimal relevance to desktop hard drives. Evidence. Evidence. Why I'm inclined to distrust anything Backblaze publishes + evidence.

tl;dr Looking at this data and concluding "avoid Seagate Barracuda drives" is a bit like noticing that bikers survive accidents more often when they're wearing a helmet and then issuing a blanket recommendation to a population primarily of car drivers to wear bike helmets. Sure, it'll reduce your expected mortality when you go out for a drive, but not nearly as much as you'd expect from the biking numbers.

Comment author: Lumifer 27 January 2015 01:29:29AM *  2 points [-]

Consumer-grade HDD's, used properly, all have about same, low failure rate. If you treat your desktop like a NAS or server, they will drop like flies (as evidenced).

Sigh. No. Really, go look at the data. I am not going to take the "consensus" of the anand crowd over it.

Hitachi Deskstar 7K2000 is a consumer-grade non-enterprise hard drive. In the sample of ~4,600 drives it has 1.1% annual failure rate in the NAS environment.

Seagate Barracuda 7200.14 is a consumer-grade non-enterprise hard drive. In the sample of ~1,200 drives it has 43.1% annual failure rate in the NAS environment.

Those are VERY VERY DIFFERENT failure rates.

I, for example, have a five-drive zfs array at home which is on 24/7. I am very much interested in which kind of drives will give me a 1% failure rate and which kind will give me a 43% failure rate. I am not average, but I hardly think I'm unique in that respect in the LW crowd.

Comment author: zedzed 28 January 2015 01:00:31AM *  0 points [-]

Do we actually disagree about anything?

We certainly agree that the Barracudas are crap in NAS's. I believe that WD Reds are a major improvement and Hitachi Deskstars a further improvement, which is just reading the Backblaze data (which is eminently applicable to NAS environments), so we're in complete agreement that, for NAS's, Barracuda << Red < 7K2000.

However, I also contend that, in a desktop PC, a lot of what makes the Reds and 7K2000 more reliable (e.g. superior vibration resistance) will count for very little, so they'll still fail less often, just not 1/40th as often. Even if they're four times as reliable, moving from, say, a 4% annual failure rate to a 1% annual failure rate may not be worth the price premium (using Newegg pricing, the Hitachi drive costs 72.5% more, but on Amazon, the Hitachi drive is cheaper. Yay Hitachi?), especially since RAID 1 is a thing (which would give us a 0.16% annual failure rate at a 100% price premium). Obviously, if you can find higher-quality drives for less than lower-quality drives, use those. But, in what we'd naively expect to be the normal case, if you're paying for features that drastically reduce failure rates in NAS environments, but using your drives in a desktop environment where those features do little to extend your drive life, then you're probably better off using RAID 1.
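
For what it's worth, here is the arithmetic behind that 0.16% figure, under the assumption (mine) that the two drives in the mirror fail independently, and ignoring the rebuild window after a first failure:

    # Rough sketch of the RAID 1 failure math under an independence assumption.
    p_single = 0.04            # assumed 4% annual failure rate per drive
    p_mirror = p_single ** 2   # data is lost only if both drives fail
    print(f"{p_mirror:.2%}")   # 0.16% per year

In practice, correlated failures (same batch, same power event) and the rebuild window make the real number somewhat worse than this.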

(Why do I use low single-digit annual failure rates? Because I remember Linus of Linus Tech Tips, who worked as a product manager at NCIX and therefore is privy to RMA and warranty rates, implied that's about right. He produces a metric shit-ton of content, though, so there's no way I'm going to dig it up.)

I'm also interested why you're dismissive of AnandTech. I currently believe they're gold standard of tech reviews, but if they're not as reputable as I believe they are, I would very much like to stop believing they are.

Comment author: Lumifer 28 January 2015 04:42:56PM *  2 points [-]

Do we actually disagree about anything?

Yes. You keep saying that there are no significant differences in reliability between hard drives of similar class (consumer or enterprise, basically) in similar conditions. I keep saying there are.

I'm also interested why you're dismissive of AnandTech. I currently believe they're gold standard of tech reviews, but if they're not as reputable as I believe they are, I would very much like to stop believing they are.

I don't follow the hardware scene much nowadays, but I don't think AnandTech was ever considered the "gold standard" except maybe by AnandTech itself. It's a commercial website, not horrible, but not outstanding either. Garden-variety hardware reviews, more or less. In any case, I trust discussion on the forums much more than I trust official reviews (recall the Sturgeon's Law).

Comment author: somnicule 20 January 2015 08:58:41AM 4 points [-]

Didn't get a response in the last thread, so I'm asking again, a bit more generally.

I've recently been diagnosed with ADHD-PI. I'm wondering how to best use that information to my advantage, and am looking for resources that might help manage this. Does anyone have anything to recommend?

In the short-term I'm trying to lower barriers for things like actually eating by preparing snacks in snaplock bags, printing out and laminating checklists to remind me of basic tasks, and finding more ways to get instant feedback on progress in as many areas as I can (for coding, this means test-driven development).

Comment author: atorm 20 January 2015 12:42:44PM 5 points [-]

My experience of ADHD includes a tendency to become distracted by thought while moving between tasks or places. I have found that headphones with an audiobook help lock my attention down to two tracks instead of half a dozen: I'm either thinking about my task, or the words in my ear. Obviously your mileage may vary, but ADHD people develop all sorts of coping methods, so my broad advice is "experiment with lots of things to help get things done, even if other people are skeptical of their effectiveness."

Comment author: SanguineEmpiricist 19 January 2015 12:45:50AM *  4 points [-]

http://www.fooledbyrandomness.com/genealogy.jpg

Genealogy of the ideas contained in Taleb's work. Pretty useful. I had it embedded but it took up the entire page for me.

Comment author: Gram_Stone 23 January 2015 03:55:08PM *  8 points [-]

Just thought of something. If you want to talk about variation and selection but you can't say 'evolution' without someone flipping a table, then talk about animal husbandry instead.

EDIT: Heh, turns out Darwin actually did this.

Comment author: Alsadius 20 January 2015 04:01:32PM *  3 points [-]

I'm looking at setting up my own website, both for the experience and to allow hosting of some files for a game I'm making. What I'd like is to register a domain, probably (myrealname).com and/or .ca, both of which are available, set up a wiki on it, and host a few (reasonably large) files. Thing is, I have a computer that stays on 24/7, and I'm generally competent with computers, so I suspect I can probably get by without paying for hosting, which appeals to me.

Can anyone link me to guides on how to do this? My Googling is turning up shockingly little, just "Pay someone for hosting!". I've registered domains before, but never done any hosting.

Comment author: Lumifer 20 January 2015 06:16:20PM 6 points [-]

The two relevant questions here are:

  • What's your ISP's upload speed and stated policy towards home servers? A lot of ISPs prohibit servers for residential customers, though actual enforcement is rare.

  • Are you sure you're up to the task of handling security for your home server that will be exposed to the 'net?

Comment author: Alsadius 21 January 2015 03:56:27AM 1 point [-]

What's your ISP's upload speed and stated policy towards home servers? A lot of ISPs prohibit servers for residential customers, though actual enforcement is rare.

You're right, it's prohibited. That doesn't concern me too much.

Are you sure you're up to the task of handling security for your home server that will be exposed to the 'net?

Frankly, no, I'm not sure at all. Good point :/

Follow-up question: What sort of domain/hosting sites can give me, say, a gig of storage and a few gigs a month of bandwidth for a low price?

Comment author: philh 21 January 2015 10:20:07AM 3 points [-]

You can run a small server on EC2 for free for a year. After that there will be cheaper options, but not necessarily cheaper enough for you to care. http://aws.amazon.com/ec2/pricing/

Comment author: ZankerH 20 January 2015 05:17:14PM *  2 points [-]

You'll need to configure and run a web server on your computer. The most commonly used, publicly documented, free and accessible to people just trying stuff out is LAMP. You'll then need to point your domain at the IP address of your server.

Thing is, I have a computer that stays on 24/7

What kind of hardware are we talking about? How much traffic are you looking at supporting? What kind of internet connection do you have at home? Are you familiar with the concept of mathematical multiplication?
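
Before committing to a full LAMP stack, a quick way to sanity-check serving files from a home machine is the Python standard library's built-in server. This is only a sketch for testing, not a hardened production setup; you would still need to forward the port on your router and point the domain's DNS A record at your public IP:

    from http.server import HTTPServer, SimpleHTTPRequestHandler

    # Serves files from the current directory at http://<your-ip>:8000/
    server = HTTPServer(("0.0.0.0", 8000), SimpleHTTPRequestHandler)
    server.serve_forever()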

Comment author: btrettel 20 January 2015 05:16:41PM 2 points [-]

Acquiring hosting is straightforward. Pick a company with a good reputation, a reasonable price, and all the features you need, sign up, and pay. (I can't be of much help here, as I've used the same hosting company since 2004 or so, and I'm not sure if I could get a better deal elsewhere.)

The remainder is more specific, and that might be why you are having trouble finding tutorials. E.g., uploading and setting up a wiki could mean you read tutorials on SSH or FTP, tutorials on file permissions, and/or tutorials on the wiki-specific details of setting up a wiki. All of this depends on your experience level. When I started out, I knew none of this, and I basically figured it out as I went along.

Comment author: Douglas_Knight 28 January 2015 10:33:39PM 0 points [-]

Start by paying someone for hosting. That's enough to learn about. Maybe start by paying Amazon nothing for a year of EC2 hosting. Once you understand how to host a website, you can migrate it to your home computer, where you will run into additional difficulties, like installing a base webserver and automatically updating your DNS. But probably you should stick with paid hosting. For static files, Amazon S3 is extremely cheap. For a full-fledged webserver to install your wiki, Nearly Free Speech will do, and is probably cheaper than Amazon, especially at your usage level.

Comment author: Evan_Gaensbauer 20 January 2015 05:31:04AM 3 points [-]

More on Slate Star Codex than on LessWrong, there is discussion of memes as a useful concept for explaining or thinking about cultural evolution. The term 'memetics' is thrown around to refer to the theory of memes as a field of inquiry. I want to know more about memetics; otherwise I'd consider it not worth my time to think about more deeply. More broadly, if memetics is not outright pseudoscience, it frequently skirts that border. I expect the discourse on memes might be at least a bit less speculative if we amateur memeticists here knew more about it. Thus, I've written a post covering memetics. Some of it consists of notes on the history of memetics as a field, and the rest is ideas I found interesting. I don't go in-depth in explaining any idea, but sources are provided so readers can pursue individual, uh, memes...from within memeplexes themselves:

https://www.facebook.com/notes/evan-gaensbauer/notes-of-interest-on-memetics-part-i/10153033128194461

That's a link to the note as published by me on Facebook, as I don't have my own blog. It should be accessible publicly. If you can't access it, logged into Facebook or not, let me know, and I'll see if I can solve that problem.

Comment author: somnicule 20 January 2015 09:20:48AM 2 points [-]

You could post this as a top level discussion post here, if you want to make it more available and reduce trivial inconveniences to those without access to facebook.

Comment author: passive_fist 19 January 2015 03:15:45AM 3 points [-]

I've been thinking about (and writing out my thoughts on) the real meaning of entropy in physics and how it relates to physical models. It should be obvious that entropy(physical system) isn't well-defined; only entropy(physical model, physical system) is defined. Here, 'physical model' might refer to something like the kinetic theory of gases, and 'physical system' would refer to, say, some volume of gas or a cup of tea. It's interesting to think about entropy from this perspective because it becomes related to the subjectivist interpretation of probability. I want to know if anyone knows of any links to similar ideas and thoughts.

Comment author: mwengler 20 January 2015 12:59:25PM 4 points [-]

There are approximations in figuring entropy and thermal statistics that may be wrong in very nearly immeasurable ways. The one that used to stick in my head was the calculation of the probability of all the gas in a volume showing up briefly in one-half the volume. Without doing math I figured it is actually much less than the classic calculated result, because the classic result assumes zero correlation between where any two molecules are, and once any kind of significant density difference exists between the two sides of the volume this will break.
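
For reference, the "classic calculated result" referred to here treats every molecule's position as independent, so the chance of finding all N molecules in one half of the volume is (1/2)^N. A small sketch of how quickly that vanishes (the function name is mine):

    import math

    def log10_prob_all_in_half(n_molecules):
        # Work in log space; the probability itself underflows for realistic N.
        return n_molecules * math.log10(0.5)

    for n in (10, 100, 1e20):
        print(f"N = {n:g}: log10(P) = {log10_prob_all_in_half(n):.3g}")

Even a tiny sample of gas gives a log-probability so enormously negative that any correction from correlations is academic.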

But entropy is still real in the sense that it is "out there." An entire civilization is powered (and cooled) by thermodynamic engines, engines which quite predictably provide useful functionality, in ways predictable in detail from calculations of entropy.

A glass of hot water burns your skin even if you know the water and the skin's precise characterization in parameter space before they come in contact. Fast moving (relative to the skin) molecules of water break the bonds of some bits of skin they come in contact with. On the micro scale it may look like a scene from the matrix with a lot of slow moving machine gun bullets. The details of the destruction may be quite beautiful and "feel" cold, but essentially thanks to the central limit theorem, a whole lot of what happens will be predictable in a quite useful, and quite unavoidable way without having to appeal to the detail.

I think that in the only sense in which you can extract energy from water with a specially built machine, custom-designed for the water's current point in parameter space, it is the machine which is at zero, or at least low, temperature. And so the fact that useful energy can be extracted from the interaction of finite-temperature water and a cold machine is totally consistent with entropy being real: thermal differences can power machines. And they do, witness the cars, trucks, airplanes, and electric grid that are essential to our economy. The good news is you can get all the energy you need without knowing the detailed parameter space of the hot water, which is helpful because you then don't have to redesign your cold machine every few microseconds as you bring in new hot water from which to extract the next bit of energy.

Entropy is as real as energy whether it feels that way or not, and that is why machines work even when left unattended by consciousnesses to perceive their entropy and its flows.

Comment author: passive_fist 20 January 2015 08:04:57PM *  1 point [-]

I think you're getting several things wrong here.

because the classic result assumes zero correlation between where any two molecules are, and once any kind of significant density difference exists between the two sides of the volume this will break.

The assumption of zero correlation is valid for ideal gases. It will not break if there is a density difference. We're talking about statistical correlation here.

Entropy is as real as energy whether it feels that way or not, and that is why machines work even when left unattended by consciousnesses to perceive their entropy and its flows.

"Entropy is in the mind" doesn't mean that you need consciousness for entropy to exist. All you need is a model of the world. Part of Jaynes' argument is that even though probabilities are subjective, entropy emerges as an objective value for a system (provided the model is given), since any rational Bayesian intelligence will arrive at the same value, given the same physical model and same information about the system.

Comment author: spxtr 19 January 2015 06:52:16AM 2 points [-]

I made a post about this a month or so ago. Yay!

Comment author: passive_fist 19 January 2015 07:18:54AM 1 point [-]

That's pretty much exactly what I had in mind. Thanks.

Comment author: shminux 19 January 2015 04:34:52AM 1 point [-]

In this way entropy is not much different from energy. The latter also depends on the model as much as on the physical system itself.

Comment author: passive_fist 19 January 2015 05:19:50AM 1 point [-]

I'm going to disagree with you here. Not that energy doesn't depend on our models. It just depends on them in a very different way. The entropy of a physical system is the Shannon entropy of its distribution of 'microstates'. But there is no distribution of microstates 'out there'. It's a construction that purely exists in our models. Whereas energy does exist 'out there'. It's true that no absolute value can be given for energy and that it's relative, but in a way energy is far more 'real' than entropy.
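
A toy illustration of the point (my own construction, not from the thread): the same physical situation, coarse-grained under two different models, has two different Shannon entropies.

    import math

    def shannon_entropy(probs):
        # H = -sum p_i log2 p_i, skipping zero-probability states
        return -sum(p * math.log2(p) for p in probs if p > 0)

    # Model A: track which quadrant of a box the particle occupies (4 equally likely states)
    # Model B: only track which half it occupies (2 equally likely states)
    print(shannon_entropy([0.25] * 4))  # 2.0 bits
    print(shannon_entropy([0.5] * 2))   # 1.0 bit

Nothing about the particle changed between the two lines; only the model's partition into microstates did.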

Comment author: DanielLC 19 January 2015 06:50:01AM 1 point [-]

Potential energy depends on what you set the zero level to, but I agree that this is very different than entropy. In particular, the difference in energy between two systems is well-defined.

Comment author: buybuydandavis 19 January 2015 04:04:10AM 1 point [-]

It's interesting to think about entropy from this perspective because it becomes related to the subjectivist interpretation of probability.

If you haven't already read Jaynes' derivation of maxent, and the further derivation of much of statistical mechanics from those principles, that would be a good place to start.

Comment author: buybuydandavis 19 January 2015 12:11:57AM 3 points [-]

Anyone have a source for a summary of full life extension testing/supplementation regime?

Thiel? Kurzweil?

I've let things slide for a while, and want to get back on track with a full regime, including hormones and pharmaceuticals. I'm thinking cardiovascular, blood sugar, hormone, and neuroprotection.

Comment author: CellBioGuy 19 January 2015 03:03:24AM *  13 points [-]

I would not recommend hormones. Beware of Algernon's law - if a simple biochemical tweak were always helpful, it'd probably already be that way. In particular a lot of things that try to work against 'aging' as opposed to specific dysfunctions will probably cause cancer. Thiel is a particular offender there, he recently started taking HGH with the justification "we'll probably have cancer licked in a decade or two". I read that statement to some people in my lab, where it provoked universal laughter.

Comment author: buybuydandavis 19 January 2015 03:51:33AM 4 points [-]

if a simple biochemical tweak were always helpful, it'd probably already be that way.

The key question: helpful, for what?

There's no reason to think Evolution has optimized my machinery for longevity.

As for giggles, I'll bet on Kurzweil's predictions over the people in your lab.

Comment author: Punoxysm 19 January 2015 04:40:06AM *  7 points [-]

Remember the 80/20 rule. Don't over-optimize; it could be expensive and dangerous.

At least get your diet in line before you worry too much about pharmaceuticals.

Comment author: James_Miller 19 January 2015 12:37:22AM 7 points [-]

Start with medical tests for cholesterol, blood pressure, vitamin D, magnesium, diabetes and anything else your doctor recommends based on your age, family and disease history.

Comment author: moridinamael 19 January 2015 04:10:27AM *  1 point [-]

I don't know if this is what you meant by "summary", but Kurzweil's book (co-written with <edit> some homeopath) Transcend (Amazon) is his most up-to-date effort. I've read it mostly and it seems well researched and also explains the science behind its recommendations.

I also thought I'd mention that there are now certain compounds (Wikipedia) which show some evidence of initiating telomerase production in adult humans. If these compounds work as well in humans as they have been shown to work in mice, they should significantly increase your healthspan.

Comment author: Risto_Saarelma 19 January 2015 07:25:58AM 11 points [-]
Comment author: moridinamael 19 January 2015 09:27:01PM 7 points [-]

Oh.

Comment author: RowanE 19 January 2015 01:27:06PM 4 points [-]

How long do the effects of caffeine tolerance, where when you're not on caffeine you're below baseline and caffeine just brings you back to normal, last? If I took tolerance breaks in between stretches of caffeine use, could I be better off on average than if I simply avoided it entirely?

Comment author: BrassLion 19 January 2015 04:21:16PM *  7 points [-]

I think you are thinking about this the wrong way. People become caffeine tolerant quickly, but tolerance goes away pretty quickly too. You would get more benefit out of the opposite approach - spending most of your time without caffeine, but drinking a cup of coffee rarely, when you really need it. You would effectively be caffeine naive most of the time, with brief breaks for caffeine use, and thus never develop much of a tolerance. If it's been so long since that first cup of coffee that you don't remember it, trust me, the effects of caffeine on a caffeine-naive brain are incredible.

I know I once read a study that says you can get back to caffeine naive in two weeks if you go cold turkey, but I can't find anything on it again for the life of me. I do remember distinctly that going cold turkey is a bad plan, as the withdrawal effects are pretty unpleasant - slowly lowering your dose is better.

On a more practical level, it is certainly possible to have relatively little caffeine, such that you aren't noticeably impaired on zero caffeine, while still having some caffeine. The average coffee drinker is far beyond this point. I would try to lower your daily dose over the course of a month or so until you are consuming less than a cup of coffee a day - ideally, a lot less, like no cups of coffee. Try substituting tea (herbal or otherwise) if you need something hot to drink to help kill the craving - herbal tea has no caffeine, black tea has about 1/4 of the caffeine per cup, and if you add cream and sugar the taste will be familiar.

EDIT: VincentYu's comment above is interesting in light of this. I am not going to perform my own meta analysis on this, but there are a great deal of studies that find that caffeine tolerance and caffeine withdrawal are real things - a quick Google Scholar search for "caffeine tolerance" will find them.

I am now very interested in a large study on this without the possible conflict of interest. Also, I find it odd that they choose to not include studies before 1992.

Comment author: Douglas_Knight 28 January 2015 10:03:10PM 0 points [-]

If it's been so long since that first cup of coffee that you don't remember it, trust me, the effects of caffeine on a caffeine-naive brain are incredible.

Yes, a cup of coffee is too much.

Comment author: VincentYu 20 January 2015 03:58:38AM *  6 points [-]

where when you're not on caffeine you're below baseline and caffeine just brings you back to normal

This is a hypothesized explanation for the acute performance-enhancing effects of caffeine that fits well with the Algernon argument, but it is not a conclusive result of the literature. For instance, the following recent review disputes that.

Einöther SJL, Giesbrecht T (2013). Caffeine as an attention enhancer: reviewing existing assumptions. Psychopharmacology, 225:251–74.

Abstract (emphasis mine):

Rationale: Despite the large number of studies on the behavioural effects of caffeine, an unequivocal conclusion had not been reached. In this review, we seek to disentangle a number of questions.

Objective: Whereas there is a general consensus that caffeine can improve performance on simple tasks, it is not clear whether complex tasks are also affected, or if caffeine affects performance of the three attention networks (alerting, orienting and executive control). Other questions being raised in this review are whether effects are more pronounced for higher levels of caffeine, are influenced by habitual caffeine use and whether there [sic] effects are due to withdrawal reversal.

Method: Literature review of double-blind placebo controlled studies that assessed acute effects of caffeine on attention tasks in healthy adult volunteers.

Results: Caffeine improves performance on simple and complex attention tasks, and affects the alerting, and executive control networks. Furthermore, there is inconclusive evidence on dose-related performance effects of caffeine, or the influence of habitual caffeine consumption on the performance effects of caffeine. Finally, caffeine’s effects cannot be attributed to withdrawal reversal.

Conclusions: Evidence shows that caffeine has clear beneficial effects on attention, and that the effects are even more widespread than previously assumed.

The authors' conclusions:

  • Caffeine improves performance on both simple and complex attention tasks.
  • Caffeine improves alerting, executive control and potentially also orienting.
  • There is inconclusive evidence on dose-related performance effects of caffeine.
  • There is inconclusive evidence on the influence of habitual caffeine consumption on the performance effects of caffeine.
  • Caffeine’s effects cannot be attributed to withdrawal reversal.

Note the following conflict of interest:

The authors are employees of Unilever, which markets tea and tea-based beverages.

Comment author: CronoDAS 25 January 2015 10:54:08AM 2 points [-]

I've got a problem. My sleep schedule is FUCKED UP.

Yesterday, I went to bed at around 8:00 AM and got up at 10:00 PM. I don't normally sleep 14 hours, but I've somehow become nocturnal; sleeping from 7 AM until 5 PM isn't particularly unusual for me. I'm not actually sleep deprived, but always sleeping through "normal business hours" tends to cause me problems - I can't get to the bank even when it's important - and isn't very convenient for my girlfriend either. My father jokes that I must be turning into a vampire because I'm never awake when the sun is up. Now, I don't actually have a job or go to school, and my only fixed-time obligation is to help my wheelchair-bound mother get into bed, which tends to start at around 1 AM and finish between 3 AM and 4 AM. (There's nobody else to do it at that hour and getting her to go to bed at a different time, or to get ready faster, is practically impossible and not worth the screaming.)

Any advice?

Comment author: Viliam_Bur 26 January 2015 09:07:53AM 1 point [-]

Some kind of polyphasic sleep? E.g. from 9 PM to 1 AM (4 hours) and then from 4 AM to 8 AM (4 hours).

Comment author: Manfred 25 January 2015 10:47:29PM *  1 point [-]

You could get your dad to wake you up at 1 pm every day if he's around. For me, having a person wake me up is way more effective. Alternately, just do it the hard way and stay up for 30 hrs.

Comment author: gjm 25 January 2015 02:32:04PM 1 point [-]

It's hard to see what scope there is for the problem to get all that much better if you are required to be awake from 1am to 3am (or later) every day. It seems like the best you can do is to try to establish a routine of always going straight to bed (and not reading, browsing the internet, etc., once there) after dealing with your mother, which might maybe get you a ~ 4am-12pm sleeping time on typical days.

Comment author: CronoDAS 25 January 2015 03:54:44PM 2 points [-]

That's actually a lot better than what I've been doing recently. :(

Comment author: tut 25 January 2015 01:21:16PM 1 point [-]

What happens if you try to go to bed just ten minutes earlier each day than you did the day before, using an alarm clock? What about ten minutes later?

Comment author: JoshuaZ 25 January 2015 01:04:27AM *  2 points [-]

(Warning: politics)

Posting a few links to relevant followups to the "Comment 171" situation and the related sexual harassment scandal and MIT's reaction which prompted that discussion. I'm posting these because the issue has come up in the last few weeks of open threads.

This piece seems like an excellent example of reading others as charitably as possible and essentially steelmanning every argument involved. It also gives a pretty good summary of the entire situation with relevant links.

Also, one of the women involved in the original sexual harassment situation has come forward to provide some details of what was actually going on: here. Here is the Slashdot thread on the article.

Comment author: [deleted] 22 January 2015 04:18:20PM 2 points [-]

Any worthwhile reading that isn't found in the Sequences? (http://wiki.lesswrong.com/wiki/Sequences)

I recommend this one: http://lesswrong.com/lw/iri/how_to_become_a_1000_year_old_vampire/ although I read it a long time ago - I may have a different opinion on it now. Re-reading it currently.

Comment author: wobster109 23 January 2015 04:15:50AM 2 points [-]

Can it be non-LW material? I found this to be an excellent no-background-needed introduction to AI. http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html

Comment author: Grothor 21 January 2015 10:05:24PM 2 points [-]

I am taking a graduate course called "Vision Systems". This course "presents an introduction to the physiology, psychophysics, and computational aspects of vision". The professor teaching the course recommended that those of us that have not taken at least an undergraduate course in perception get an introductory book on the subject. The one he recommends, which is also the one he uses for his undergraduate course, is this: http://www.amazon.com/Sensation-Perception-Looseleaf-Third-Edition/dp/0878938761 Unfortunately, this book goes for $60-75 for used loose leaf, all the way up to $105 for new hardcover. I'd rather not pay that, unless I can get an independent recommendation for it, or for some other book on the subject.

Does anybody here have a recommendation? Are there good course notes available on the web somewhere?

Comment author: Furcas 20 January 2015 05:16:54AM 2 points [-]

Is there an eReader version of the Highly Advanced Epistemology 101 for Beginners sequence anywhere?

Comment author: Darklight 19 January 2015 04:10:06PM 2 points [-]

I have a slate of questions that I often ask people to try and better understand them. Recently I realized that one of these questions may not be as open-ended as I'd thought, in the sense that it may actually have a proper answer according to Bayesian rationality. Though, I remain uncertain about this. The question is actually quite simple and so I offer it to the Less Wrong community to see what kind of answers people can come up with, as well as what the majority of Less Wrongers think. If you'd rather you can private message me your answer.

The question is:

Truth or Happiness? If you had to choose between one or the other, which would you pick?

Comment author: philh 19 January 2015 05:07:06PM 4 points [-]

I don't think this question is sufficiently well-defined to have a true answer. What does it mean to have/lack truth, what does it mean to have/lack happiness, and what are the extremes of both of these?

If I have all the happiness and none of the truth, do I get run over by a car that I didn't believe in?

If I have all the truth but no happiness, do I just wish I would get run over? Is there anything to stop me from using the truth to make myself happy again? Failing that is there anything that could motivate me to sit down for an hour with Eliezer and teach him the secrets of FAI before I kill myself? This option at least seems like it has more loopholes.

Comment author: Darklight 19 January 2015 06:13:06PM 1 point [-]

I admit this version of the question leaves substantial ambiguity that makes it harder to calculate an exact answer. I could have constructed a more well-defined version, but this is the version that I have been asking people already, and I'm curious how Less Wrongers would handle the ambiguity as well.

In the context of the question, it can perhaps be better defined as:

If you were in a situation where you had to choose between Truth (guaranteed additional information), or Happiness (guaranteed increased utility), and all that you know about this choice is the evidence that the two are somehow mutually exclusive, which option would you take?

It's interesting that you interpreted the question to mean all or none of the Truth/Happiness, rather than what I assumed most people would interpret the question as, which is a situation where you are given additional Truth/Happiness. The extremes are actually an interesting thought experiment in and of themselves. All the Truth would imply perfect information, while all the Happiness would imply maximum utility. It may not be possible for these two things to be completely mutually exclusive, so this form of the question may well just be illogical.

Comment author: Jiro 19 January 2015 06:27:42PM 2 points [-]

Defining happiness as "guaranteed increased utility" is questionable. It doesn't consider situations of blissful ignorance, where

  1. We can't seem to agree whether being blissfully ignorant about something one does not want is a loss of utility at all
  2. If that does count as a loss of utility, utility would not equate to happiness because you can't be happy or sad about something you don't know about.
Comment author: Darklight 19 January 2015 10:12:13PM 1 point [-]

For simplicity's sake, we could assume a hedonistic view that blissful ignorance about something one does not want is not a loss of utility, defining utility as positive conscious experiences minus negative conscious experiences. But I admit that not everyone will agree with this view of utility.

Also, Aristotle would probably argue that you can have Eudaimonic happiness or sadness about something you don't know about, but Eudaimonia is a bit of a strange concept.

Regardless, given that there is uncertainty about the claims made by the questioner, how would you answer?

Consider this rephrasing of the question:

If you were in a situation where someone (possibly Omega... okay let's assume Omega) claimed that you could choose between two options: Truth or Happiness, which option would you choose?

Note that there is significant uncertainty involved in this question, and that this is a feature, rather than a bug of the question. Given that you aren't sure what "Truth" or "Happiness" means in this situation, you may have to elaborate and consider all the possibilities for what Omega could be meaning (perhaps even assigning them probabilities...). Given this quandary, is it still possible to come up with a "correct" rational answer?

If it's not, what additional information from Omega would be required to make the question sufficiently well-defined to answer?

Comment author: adamzerner 20 January 2015 06:30:36AM *  2 points [-]

Great question! I'm glad you brought it up!

Personally, it's a bit of an ugh field for me. And is something I'm confused about, and really wish I had a good answer to.

To me, this gets at a more general question of "what should your terminal values be?". It is my understanding that rationality can help you to achieve terminal values, but not to select them. I've thought about it a lot and have tried to think of a reason why one terminal value is "better" or "more rational" than another... but I've pretty much failed. I keep arriving at the conclusion that "what should your terminal values be?" is a Wrong Question, which becomes pretty obvious once it's dissolved.

But at the same time... it's such an important question that the slightest bit of uncertainty really bothers me. Think of it in terms of expected value - a huge magnitude multiplied by a small probability can still be huge. If I misunderstood something and I'm pursuing the wrong terminal goal(s)... well that'd be bad (how bad depends on how different my current goals are from "the real goals").

I'd love to hear others' takes on this. It appears that people live their lives as if things other than their own Happiness matter, like Altruism and Truth. I.e., people pursue terminal values other than their own happiness. Is this true? I'd really be interested in seeing an LW survey on terminal goals.

Comment author: DanielLC 26 January 2015 01:58:23AM 0 points [-]

Truth is a tool. If it can't be used to fulfill my goal of happiness, what good is it? That being said, if you just meant my happiness, then I'd take truth and use it to increase net happiness.

Comment author: DataPacRat 19 January 2015 12:31:24AM 2 points [-]

Not Quite the Prisoner's Dilemma

Evolving strategies through the Noisy Iterated Prisoner's Dilemma has revealed all sorts of valuable insights into game theory and decision theory. Does anyone know of any similar tournaments where the payouts weren't constant, so that any particular round might or might not qualify as a classic Prisoner's Dilemma?
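
I don't know of such a tournament, but for concreteness, here is a rough sketch (entirely my own construction) of what a single round with redrawn payoffs might look like, including a check of whether the round still satisfies the classic ordering T > R > P > S:

    import random

    def random_payoffs():
        # Redraw the four payoffs each round; the PD ordering may or may not hold.
        t, r, p, s = (random.uniform(0, 5) for _ in range(4))
        return {"T": t, "R": r, "P": p, "S": s}

    def is_classic_pd(m):
        return m["T"] > m["R"] > m["P"] > m["S"]

    def play_round(move_a, move_b, m, noise=0.05):
        # With probability `noise`, a move is flipped -- the "noisy" part.
        flip = lambda mv: mv if random.random() > noise else ("D" if mv == "C" else "C")
        a, b = flip(move_a), flip(move_b)
        table = {("C", "C"): (m["R"], m["R"]), ("C", "D"): (m["S"], m["T"]),
                 ("D", "C"): (m["T"], m["S"]), ("D", "D"): (m["P"], m["P"])}
        return table[(a, b)]

    m = random_payoffs()
    print(is_classic_pd(m), play_round("C", "D", m))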

Comment author: Gram_Stone 19 January 2015 12:26:18PM 3 points [-]

I've never studied any branch of ethics, maybe stumbling across something on Wikipedia now and then. Would I be out of my depth reading a metaethics textbook without having read books about the other branches of ethics? It also looks like logic must play a significant role in metaethics given its purpose, so in that regard I should say that I'm going through Lepore's Meaning and Argument right now.

Comment author: TheAncientGeek 19 January 2015 06:44:38PM 5 points [-]

You could dip a toe into the Stanford Encyclopedia of Philosophy.

Comment author: gjm 19 January 2015 01:50:55PM 4 points [-]

The best way to tell is to read the metaethics textbook and see what happens. If it turns out you need a crash course on (say) utilitarian thinking, you can always do that and then return to metaethics.

What is your reason for wanting to read a metaethics textbook? I ask because the most obvious reason (I think) is "because I want to live a good life, so I want to figure out what constitutes living a good life, and for that I need a coherent system of ethics" but I'd have thought that most people thinking in those terms and inclined to read philosophy textbooks would already have looked into (at least) whatever variety of ethics they find most congenial.

Comment author: Gram_Stone 19 January 2015 02:47:43PM *  1 point [-]

Good point. I ordered it yesterday, and it's supposed to be an easy introduction, so we'll see what happens.

Well it seems to me that there are so many different schools of normative ethics, that unless we're all normative moral relativists (I don't think we are), most people must be wrong about normative ethics. I've seen claims here that mainstream metaethics has it all wrong, I just found out that lukeprog's got his own metaethics sequence, and some of the things that he claims to resolve seem like they would have profound implications for normative ethics. I guess I feel like I'm saving myself time not reading about a million different theories of normative ethics (kind of like I think I'm saving myself time not reading about a million different types of psychotherapy, unless it's for some sort of test) and just learning about where the mainstream field of metaethics is, and then seeing where Eliezer and Luke differ from it, and if I agree.

Is it crazy to want to have some idea of what ethical statements mean before I use them as a justification for my behavior? That you say "whatever variety of ethics they find most congenial," makes me think that you might not think it is that crazy. And I mean, I'm at least not murdering anyone right now; I have time for this. And if I don't ever take the time, then I could end up becoming the dreaded worse-than-useless.

I'm also curious about FAI so I'm generally schooling myself in LW-related stuff, hence the books on logic and AI and ethics. I'm working towards others as well.

Comment author: is4junk 19 January 2015 03:28:01PM 2 points [-]

I was looking at this article as a starting point. I end up at either error theory or non-cognitivism. Is there value in reading further down the tree or would it be like learning more phlogiston theory (at least for me)?

Comment author: [deleted] 19 January 2015 12:37:43PM 2 points [-]

Does it matter? It's not very hard to get up to speed on ethics. Either skim an introductory textbook, or spend a few hours on the Stanford Philosophy encyclopedia.

Comment author: Gram_Stone 20 January 2015 02:43:55AM *  1 point [-]

I found my own answer in the comments of the course recommendations for friendliness thread. Luke says:

It's really hard to find good writing on metaethics. My recommendation would be to read the chapter on ethical reductionism from Miller's [Contemporary Metaethics: An Introduction], my own unfinished sequence on metaethics, and Eliezer's new sequence (most of it's not metaethics, but it's required reading for understanding the explanation of his 2nd attempt to explain metaethics, which is more precise than his first attempt in the earlier Sequences).

On normative ethics, Luke says elsewhere:

I don't read much on normative ethics, but Smart & Williams' Utilitarianism: For and Against has some good back-and-forth on the major issues, at least up to 1973. The other advantage of this book is that it's really short.

But there are probably better books on the subject I'm just not aware of.

From what I see, he seems to attribute a similarly low significance to most of contemporary normative ethics.

Also, the Stanford Encyclopedia of Philosophy has been suggested twice, in case I do need to know anything in particular about normative ethics. I'll keep that in mind.

For posterity, as far as I can tell, the most popular undergraduate text on normative ethics is Rachels' The Elements of Moral Philosophy. The 7th edition has good reviews on Amazon. Apparently the 8th edition is too new to have reviews.

Comment author: Furcas 20 January 2015 05:12:23AM 1 point [-]

and Eliezer's new sequence (most of it's not metaethics, but it's required reading for understanding the explanation of his 2nd attempt to explain metaethics, which is more precise than his first attempt in the earlier Sequences).

Where is this 2nd attempt to explain metaethics by Eliezer?

Comment author: advancedatheist 19 January 2015 12:21:43AM *  8 points [-]

Well, someone had to say it:

http://edge.org/response-detail/26073

Dylan Evans Founder and CEO of Projection Point; author, Risk Intelligence

The Great AI Swindle

Smart people often manage to avoid the cognitive errors that bedevil less well-endowed minds. But there are some kinds of foolishness that seem only to afflict the very intelligent. Worrying about the dangers of unfriendly AI is a prime example. A preoccupation with the risks of superintelligent machines is the smart person’s Kool Aid.

This is not to say that superintelligent machines pose no danger to humanity. It is simply that there are many other more pressing and more probable risks facing us this century. People who worry about unfriendly AI tend to argue that the other risks are already the subject of much discussion, and that even if the probability of being wiped out by superintelligent machines is very low, it is surely wise to allocate some brainpower to preventing such an event, given the existential nature of the threat.

Not coincidentally, the problem with this argument was first identified by some of its most vocal proponents. It involves a fallacy that has been termed "Pascal’s mugging," by analogy with Pascal’s famous wager. A mugger approaches Pascal and proposes a deal: in exchange for the philosopher’s wallet, the mugger will give him back double the amount of money the following day. Pascal demurs. The mugger then offers progressively greater rewards, pointing out that for any low probability of being able to pay back a large amount of money (or pure utility) there exists a finite amount that makes it rational to take the bet—and a rational person must surely admit there is at least some small chance that such a deal is possible. Finally convinced, Pascal gives the mugger his wallet.

This thought experiment exposes a weakness in classical decision theory. If we simply calculate utilities in the classical manner, it seems there is no way round the problem; a rational Pascal must hand over his wallet. By analogy, even if there is only a small chance of unfriendly AI, or a small chance of preventing it, it can be rational to invest at least some resources in tackling this threat.

It is easy to make the sums come out right, especially if you invent billions of imaginary future people (perhaps existing only in software—a minor detail) who live for billions of years, and are capable of far greater levels of happiness than the pathetic flesh and blood humans alive today. When such vast amounts of utility are at stake, who could begrudge spending a few million dollars to safeguard it, even when the chances of success are tiny?

Why do some otherwise very smart people fall for this sleight of hand? I think it is because it panders to their narcissism. To regard oneself as one of a select few far-sighted thinkers who might turn out to be the saviors of mankind must be very rewarding. But the argument also has a very material benefit: it provides some of those who advance it with a lucrative income stream. For in the past few years they have managed to convince some very wealthy benefactors not only that the risk of unfriendly AI is real, but also that they are the people best placed to mitigate it. The result is a clutch of new organizations that divert philanthropy away from more deserving causes. It is worth noting, for example, that Give Well—a non-profit that evaluates the cost-effectiveness of organizations that rely on donations—refuses to endorse any of these self-proclaimed guardians of the galaxy.

But whenever an argument becomes fashionable, it is always worth asking the vital question—Cui bono? Who benefits, materially speaking, from the growing credence in this line of thinking? One need not be particularly skeptical to discern the economic interests at stake. In other words, beware not so much of machines that think, but of their self-appointed masters.

Comment author: James_Miller 19 January 2015 12:40:50AM *  23 points [-]

it provides some of those who advance it with a lucrative income stream.

Not me! As I fully expected, I've earned less than the minimum wage for my book on the singularity. And I get the impression that most people involved in the singularity movement are earning far less than they could given their skill set.

Comment author: gjm 19 January 2015 12:50:27AM 17 points [-]

someone had to say it

You say that as if the point of view expressed by Dylan Evans here is one that hasn't been expressed before. It seems to me more like what until recently was the default reaction to any concerns about unfriendly AI.

Comment author: emr 19 January 2015 04:37:12AM *  6 points [-]

I've noticed a pattern: Someone implies that some (critical or controversial) position X isn't represented here, even though X is obviously represented, often by prominent posters in highly up-voted comments.

I think what happens is that some advocates of X literally cannot recognize their own position when it's presented in a non-tribal manner.

Comment author: fubarobfusco 19 January 2015 04:56:35AM 11 points [-]

Alternately, claiming novelty is something akin to a bravery debate.

Comment author: jkaufman 20 January 2015 03:41:09PM 9 points [-]

It is worth noting, for example, that Give Well—a non-profit that evaluates the cost-effectiveness of organizations that rely on donations—refuses to endorse any of these self-proclaimed guardians of the galaxy.

GiveWell recommends extremely few charities. Unless you similarly write off the Red Cross, United Way, the Salvation Army, and everyone else GiveWell doesn't recommend, this looks like motivated skepticism.

Comment author: gjm 19 January 2015 12:47:18AM 9 points [-]

It seems to me that there are two key points in Evans's argument where he makes a controversial claim and needs to justify it, and that at both he kinda cheats.

The first is where he goes from a description of the "Pascal's Mugging" scenario to saying that that's a good way to describe concerns over unfriendly AI. (Rather than, e.g., seeing them as analogous to insurance, where one pays a modest but annoying sum for alleged protection against various unlikely but potentially devastating events.) He doesn't make any attempt at all to justify this; I think he just hopes that the reader won't notice.

The second is where he suggests that "some of those who advance [UFAI arguments]" are getting a lucrative income stream from doing so. It seems to me that actually awfully few are, and most of those could have got richer faster and more reliably by other more normal means. So if he's saying about their motives what he seems to be, then again he really owes the reader some justification. Which, again, is not there.

(Maybe there's a third. I think his last paragraph is just repeating the one that precedes it. But maybe he's suggesting some other, more powerful "economic interests" at work; if so, it's not at all clear to me who he has in mind.)

Comment author: NancyLebovitz 20 January 2015 09:15:31PM 3 points [-]

Why do some otherwise very smart people fall for this sleight of hand? I think it is because it panders to their narcissism. To regard oneself as one of a select few far-sighted thinkers who might turn out to be the saviors of mankind must be very rewarding.

I think this is a bad line of thought even before we get to the hypothesis that people are pushing UFAI risks for the money.

For one thing, people just get things wrong a lot-- it doesn't take bad motivations.

For another, it's very easy to jump to the conclusion that what seems to be correct to you is so obviously correct that other people must be getting it wrong on purpose.

For a third, even if you're right that other people are engaged in motivated thinking, you might be wrong about the motivation. For example, concern about UFAI might be driven by anxiety, or by "ooh, shiny! cool idea!" more than by narcissism or money.

advancedatheist, how sure are you of your motivations?

Comment author: RowanE 19 January 2015 01:08:21PM 6 points [-]

I think the entire core of his argument is a sleight-of-hand between "improbable" and "the kind of absurd improbability involved in Pascal's wager", without even (as others have pointed out) giving any arguments for why it's improbable in the first place.

Comment author: JoshuaZ 19 January 2015 01:35:36AM 2 points [-]

The idea that AI is a low-probability risk has some merit, but one doesn't need a Pascal's Mugging sort of scenario to consider it a problem. Even if it accounts for only 5 or 10 percent of existential risk in the next century, it is still a serious problem. In general, all existential risks are underfunded by a lot. The only difference with AI is that for a long time it has been even more underfunded than other sources of existential risk.

Comment author: Evan_Gaensbauer 20 January 2015 05:38:55AM 2 points [-]

Scott Alexander, alias Yvain, conducted a companion survey for the readership of his blog, Slate Star Codex, to parallel and contrast with the survey of the LessWrong community. The issue I ponder below will likely come to light when the results from that survey are published. However, I'm too curious about this to wait, even if present speculation is later rendered moot.

Slate Star Codex is among my favorite websites, let alone blogs. I spend more time reading it than I do on LessWrong, and it may be second only to Wikipedia or Facebook among the websites I spend the most time on. Anyway, like almost everyone else reading this, I migrated to Slate Star Codex from LessWrong. So it seems alien to me that Slate Star Codex would have a readership that doesn't overlap almost completely with the LessWrong readership.

I imagine readers of Slate Star Codex not familiar with LessWrong include:

  • medical professionals within a couple of degrees, socially, of Scott's professional circles
  • some neoreactionaries, and social justice activists, from across the blogosphere

Does anyone else have an impression of who might read Slate Star Codex who doesn't read LessWrong? Alternatively, if you don't like Slate Star Codex, or are turned off by it, I'm curious as to why. I've encountered virtually unanimous appreciation of Slate Star Codex from among my friends who read LessWrong, so I'm fascinated by the possibility of outlying opinions.

Comment author: ahbwramc 20 January 2015 04:43:14PM 4 points [-]

A number of SSC posts have gone viral on Reddit or elsewhere. I'm sure he's picked up a fair number of readers from the greater internet. Also, for what it's worth, I've turned two of my friends on to SSC who were never much interested in LW.

But I'll second it being among my favourite websites.

Comment author: knb 20 January 2015 07:39:02AM 2 points [-]

SSC seems to have a pretty wide fanbase on Tumblr. I'm sure he's picked up a very large non-LW fanbase over the years; he's been blogging forever.

Comment author: is4junk 20 January 2015 02:03:28AM 1 point [-]

Public voting and public scoring

I am sure this has been debated here before but I keep dreaming of it anyway. Let's say everyone's upvotes and downvotes were public and you could independently score posts using this data with your own algorithm. If the algorithms to score posts were also public then you could use another users scoring algorithm instead of writing your own (think lesswrong power-user).

As a simple example, let's say my algorithm is to average the scores given by userRational and userInsightful, and userRational's algorithm is just the regular LessWrong score minus Usertroll's votes.
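A rough sketch of what I mean, in Python (the vote format, user names, and helper functions are all made up for illustration, not an actual LessWrong API):

```python
# Purely illustrative: votes are assumed to be exposed as
# (voter, post_id, value) records; all names here are hypothetical.

def raw_score(post_id, votes):
    """Plain LessWrong-style score: sum of all vote values on a post."""
    return sum(value for voter, pid, value in votes if pid == post_id)

def score_ignoring(post_id, votes, ignored):
    """userRational's rule: the regular score minus votes from ignored users."""
    return sum(value for voter, pid, value in votes
               if pid == post_id and voter not in ignored)

def my_score(post_id, votes):
    """My rule: average userRational's and userInsightful's scores."""
    rational = score_ignoring(post_id, votes, ignored={"Usertroll"})
    insightful = raw_score(post_id, votes)  # stand-in for userInsightful's algorithm
    return (rational + insightful) / 2

votes = [("Alice", 42, 1), ("Usertroll", 42, -1), ("Bob", 42, 1)]
print(my_score(42, votes))  # 1.5 under this toy data
```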

The benefits would be a better curated garden, more users, and more discussion.

Comment author: JoshuaZ 20 January 2015 02:56:55AM *  1 point [-]

Currently, the backlog of changes to the codebase here is so big, and so little work is being done on it, that even if there were a consensus for this change it would be unlikely to happen.

More specific to this proposal, there are at least two problems with the idea. First, it could easily lead to further groupthink: suppose a bunch of Greens zero out all voting by certain people who have identified as Blues, and a bunch of Blues do the same. Then each group will see a false consensus for its view based on the votes. Second, making votes public by default could easily influence how people vote: they might be intimidated by repercussions for downvoting high-status users or popular arguments, or simply decline to downvote because it could make enemies.

Comment author: Viliam_Bur 20 January 2015 08:00:06AM 2 points [-]

Yeah, I suspect this would just move the game one step more meta. Instead of attacking enemies by mass downvoting, people would now attack them with public campaigns based on alleged patterns in the targets' votes. Then we could argue endlessly about which patterns are okay and which are not.

Comment author: is4junk 20 January 2015 04:11:35PM 1 point [-]

I agree there would still be very easy ways to punish enemies, or, even more commonly, 'friends' who don't toe the line.

I do think it would identify some interesting cliques or color teams. The way I envision using it would be more topic-category based. For instance, for topic X I would average this group of people's opinions, but use a different group for topic Y.

On the positive side, if you have a minority position on some topic that would now be downvoted heavily, you could still get good feedback from your own minority clique.

Comment author: iarwain1 19 January 2015 09:34:52PM 1 point [-]

General question: I've read somewhere that there's a Bayesian approach to at least partially justifying simplicity arguments / Occam's Razor. Where can I find a good accessible explanation of this?

Specifically: Say you're presented with a body of evidence and you come up with two sets of explanations for that evidence. Explanation Set A consists of one or two elegant principles that explain the entire body of evidence nicely. Explanation Set B consists of hundreds of separate explanations, each of which only explains a small part of the evidence. Assuming your priors for each individual explanation are about equal, is there a Bayesian explanation for our intuition that we should bet on Explanation Set A?

What about if your prior for each individual explanation in Set B is higher than the priors for the explanations in Set A?

Example:

Say you're discussing Bible Criticism with a religious friend who believes in the traditional notion of complete Mosaic authorship but who is at least somewhat open to alternatives. To your friend, the priors for Mosaic authorship are much higher than the priors for a documentary or fragmentary hypothesis. (If you want numbers, say that your friend's priors are .95 in favor of Mosaic authorship.)

Now you present the arguments, many of which (if I understand them correctly) boil down to simplicity arguments:

  • Mosaic authorship requires either a huge number of tortured explanations for individual verses, or it requires saying "we don't know" or "God kept it secret for some reason". Documentary-type hypotheses, on the other hand, postulate a few basic principles and use them to explain virtually everything.
  • Several different lines of local internal evidence often point to exactly the same conclusions. For example, an analysis of the repetitions within a story might lead us to divide up the verses between authors in a certain way, while at the same time an independent stylistic analysis leads us to virtually the same thing. So we again have a single explanation set that resolves multiple sets of difficulties, which again is simpler / more elegant than the alternative of proposing numerous individual explanations to resolve each difficulty, or just throwing up our hands and saying God keeps lots of secrets.

The question is, is your friend justified in rejecting your simplicity-based arguments based on his high priors? What about if his priors were lower, say .6 in favor of Mosaic authorship? What about if he held 50-50 priors?

Comment author: IlyaShpitser 19 January 2015 11:43:08PM *  6 points [-]

The B approach to Occam's razor is just a way to think carefully about your possible preference for simplicity. If you prefer simpler explanations, you can bias your prior appropriately, and then the B machinery will handle how you should change your mind with more evidence (which might possibly favor more complex explanations, since Nature isn't obligated to follow your preferences).


I don't think it's a good idea to use B in settings other than statistical inference, or probability puzzles. Arguing with people is an exercise in xenoanthropology, not an exercise in B.

Comment author: shminux 20 January 2015 12:30:29AM 3 points [-]

Upvoted for

Arguing with people is an exercise in xenoanthropology

Comment author: DanielLC 26 January 2015 02:06:37AM 2 points [-]

Assuming your priors for each individual explanation is about equal, is there a Bayesian explanation for our intuition that we should bet on Explanation Set A?

Do you mean your prior for A is about equal to your prior for B, or your priors for each individual element are about the same?

If you mean the first, then there is no reason to favor one over the other. Occam's razor just says the more complex explanation has a lower prior.

If you mean the second, then there is a very good reason to favor A. If A consists of n explanations and B of m, and all explanations are independent and of probability p, then P(A) = p^n and P(B) = p^m. A is exponentially more likely than B. In real life, assuming independence tends to be a bad idea, so it won't be quite so extreme, but the simpler explanation is still favored.
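A toy numerical illustration (made-up numbers, and taking the independence assumption at face value):

```python
# Toy numbers only, assuming the explanations are independent.
p = 0.9          # probability of each individual explanation
n, m = 2, 100    # A needs 2 explanations to hold, B needs 100

prob_A = p ** n  # 0.81
prob_B = p ** m  # roughly 2.7e-5

print(prob_A / prob_B)  # A comes out roughly 30,000 times as likely
```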

Comment author: Vaniver 19 January 2015 10:13:45PM 2 points [-]

I think you'll get somewhere by searching for the phrase "complexity penalty." The idea is that we have a prior probability for any explanation that depends on how many terms / free parameters are in the explanation. For your particular example, I think you need to argue that their prior probability should be different than it is.

I think it's easier to give a 'frequentist' explanation of why this makes sense, though, by looking at overfitting. If you look at the uncertainty in the parameter estimates, they roughly depend on the number of sample points per parameter. Thus the fewer parameters in a model, the more we think each of those parameters will generalize. One way to think about this is the more free parameters you have in a model, the more explanatory power you get "for free," and so we need to penalize the model to account for that. Consider the Akaike information criterion and Bayesian information criterion.
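A minimal sketch of how those criteria trade off fit against parameter count (the log-likelihoods and sample size below are invented purely for illustration):

```python
import math

# Lower scores are better for both criteria.
def aic(log_likelihood, k):
    """Akaike information criterion: 2k - 2*ln(L)."""
    return 2 * k - 2 * log_likelihood

def bic(log_likelihood, k, n):
    """Bayesian information criterion: k*ln(n) - 2*ln(L)."""
    return k * math.log(n) - 2 * log_likelihood

# Invented numbers: a 10-parameter model fits a bit better than a
# 2-parameter model, but the penalty can still favor the simpler one.
print(aic(-105.0, 2), aic(-100.0, 10))          # 214.0 vs 220.0
print(bic(-105.0, 2, 50), bic(-100.0, 10, 50))  # ~217.8 vs ~239.1
```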

Comment author: Coscott 19 January 2015 01:53:38AM *  1 point [-]

What app does LessWrong recommend for to-do lists? I just started using Workflowy (recommended by a LW friend), but was wondering if anyone had strong opinions in favor of something else.

P.S. If you sign up for Workflowy here, you get double the space.

EDIT: The above link is my personal invite link, and I get told when someone signs up using it, and I get to see their email address. I am not going to do anything with them, but I feel obligated to give this disclaimer anyway.

Comment author: harshhpareek 20 January 2015 11:37:45PM 3 points [-]

It depends on why I'm making the list.

If I'm making a to-do list for a project I'm working on, Workflowy is good because it's simple and supports hierarchical lists.

For longer-lived lists where I add and delete items, like grocery/shopping lists or books to read, I use Wunderlist because it has an Android app and a standalone Windows app, and it looks pretty. Browser-based apps annoy me, so I like the Windows app, and the Android app is nice to have when I'm actually in the grocery store.

When I'm making a list because I need to be productive, and not as a way to plan, I use a paper to-do list: http://www.amazon.com/gp/product/B0006HWLW2/ref=oh_aui_detailpage_o08_s00?ie=UTF8&psc=1. Checking things off on paper does wonders for productivity, and having the printed thing helps set the mood.

Comment author: Risto_Saarelma 20 January 2015 01:56:55AM 1 point [-]

I use a paper notebook, inspired by bullet journal and autofocus, for daily/weekly goals when the list stays under 20 or so items. Recently a project started ballooning into more items than this system could handle, so I picked up todo.txt a month ago. I've been very happy with it so far. The system works with just a regular text editor and keeping all the lines in the file lexically sorted, but it's also a markup format that can be used with specific tools. I keep the project-specific list synced with a symbolic directory link from the project directory tree to Dropbox, and currently use the Simpletask app to update the list on my phone. Seems to work well for everything I need.
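For anyone who hasn't seen the format: it's just one task per line, roughly like this (entries invented; priorities in parentheses, completed tasks prefixed with x, and +project / @context tags at the end):

```
(A) 2015-01-20 Write project status update +bigproject @work
(B) Call the dentist to reschedule @phone
x 2015-01-19 Order a replacement laptop charger @online
```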

Comment author: [deleted] 20 January 2015 12:44:17AM 1 point [-]

I've tried a bunch, but Todoist is the only one that's powerful, flexible, quick, and easy enough for me to want to use.