Open thread, Aug. 10 - Aug. 16, 2015

5 Post author: MrMind 10 August 2015 07:29AM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

Comments (283)

Comment author: Username 10 August 2015 01:31:42PM 13 points [-]

Impulsive Rich Kid, Impulsive Poor Kid, an article about using CBT to fight impulsivity that leads to criminal behaviour, especially among young males from poor backgrounds.

How much crime takes place simply because the criminal makes an impulsive, very bad decision? One employee at a juvenile detention center in Illinois estimates that the overwhelming majority of crime results from impulse rather than a conscious decision to embark on criminal activity:

“20 percent of our residents are criminals, they just need to be locked up. But the other 80 percent, I always tell them – if I could give them back just ten minutes of their lives, most of them wouldn’t be here.”

...

The teenager in a poor area is not behaving any less automatically than the teenager in the affluent area. Instead the problem arises from the variability in contexts—and the fact that some contexts call for retaliation." To illustrate their theory, they offer an example: If a rich kid gets mugged in a low-crime neighborhood, the adaptive response is to comply -- hand over his wallet, go tell the authorities. If a poor kid gets mugged in a high-crime neighborhood, it is sometimes adaptive to refuse -- stand up for himself, retaliate, run. If he complies, he might get a reputation as someone who is easy to bully, increasing the probability he will be victimized in the future. The two kids, conditioned by their environment, learn very different automatic responses to similar stimuli: someone else asserting authority over them.

The authors of “Thinking, Fast and Slow” extend the example further by asking you to imagine these same two kids in the classroom. If a teacher tells the rich kid to sit down and be quiet, his automatic response to authority on the street -- comply, sit down and be quiet -- is the same as the adaptive response for this situation. If a teacher tells the poor kid to sit down and be quiet, his automatic response to authority on the street -- refuse, retaliate -- is maladapted to this situation. The poor kid knows the contexts are different, but still on a certain level feels like his reputation is at stake when he’s confronted at school, and acts out, automatically.

...

The researchers examined clinical studies of programs that keep this in mind and focus on teaching kids to regulate their automaticity. These interventions were designed to help young people, “recognize when they are in a high-stakes situation where their automatic responses might be maladaptive,” and slow down and consider them. One of the interventions studied was the Becoming a Man (BAM) program, conducted in public schools with disadvantaged young males, grades 6-12, on the south and west sides of Chicago.

“What makes the interventions we study particularly interesting is that they do not attempt to delineate specific behaviors as “good,” but rather focus on teaching youths when and how to be less automatic and more contingent in their behavior.”

Researchers randomly assigned students to have the opportunity to participate in BAM, as a course conducted once a week throughout the 2009-2010 school year.

The course is actually a program of cognitive behavioral therapy (CBT). CBT helps people identify harmful psychological and behavioral patterns, and then disrupt them and foster healthier ones. It’s used by a wide range of people for a wide range of issues, including to treat depression, anger management, and anxiety disorders. The particular style of CBT used in BAM focuses on three fundamental skills:

  1. Recognize when their automatic responses might get them into trouble,

  2. Slow down in those situations and behave less automatically,

  3. Objectively assess situations and think about what response is called for.

One thing participants are taught in BAM is that “a shift to an aversive emotion” is an important cue for when they are prone to act automatically. Anger, for example, was a common cue among participants in the study group. They were also taught tricks to help them slow down and consider their situation before acting, including deep breathing and other relaxation techniques. Lastly, they were guided through self-reflection and assessment of their own behavior: examining their “automatic” missteps and thinking about how they might have acted differently.

The researchers found that, during the program year, program participants had a 44% lower arrest rate for violent crimes than the control group. They repeated the intervention in 2013-2014 with a new group, and found that program participants had a 31% lower arrest rate for violent crimes than the control group.

Comment author: Viliam 11 August 2015 07:47:02AM 6 points [-]

Impulsive Rich Kid, Impulsive Poor Kid

Unrelated to the real content of the article, but my first reaction after reading the title was: "obviously, the impulsive Rich Kid can afford a better lawyer".

Comment author: Lumifer 10 August 2015 03:48:49PM 12 points [-]

if I could give them back just ten minutes of their lives, most of them wouldn’t be here.

He's wrong about that. He would need to give them back 10 minutes of their lives, and then keep on giving them back different 10 minutes on a very regular basis.

The remainder of the post actually argues that persistent, stable "reflexes" are the cause of bad decisions, and those certainly are not going to be fixed by a one-time gift of 10 minutes.

Comment author: query 10 August 2015 08:50:55PM 3 points [-]

The model is that persistent reflexes interact with the environment to give black swans: singular events with extremely high legal consequences. Avoiding all of them preemptively requires training the stable reflexes, but it could be that "editing out" only a few 10-minute periods retroactively would still be enough (those few periods when reflexes and environment interact extremely negatively). So I think the "very regular basis" claim isn't substantiated.

That said, we can't actually edit retroactively anyway.

Comment author: Lumifer 11 August 2015 12:40:03AM 1 point [-]

The model is that persistent reflexes interact with the environment to give black swans; singular events with extremely high legal consequence.

I don't think that's the model (or if it is, I think it's wrong). I see the model as persistent reflexes interacting with the environment and giving rise to common, repeatable, predictable events with serious legal consequences.

Comment author: Emile 11 August 2015 07:32:11AM 4 points [-]

if I could give them back just ten minutes of their lives, most of them wouldn’t be here.

He's wrong about that. He would need to give them back 10 minutes of their lives, and then keep on giving them back different 10 minutes on a very regular basis.

I disagree. Take drivers who got into a serious accident: if you "gave them back just ten minutes" so that they avoided that accident, most of them wouldn't have had another accident later on. It's not as if the world divides neatly into safe drivers, who never have accidents, and unsafe drivers, who have several.

Sure, the kids that got in trouble are more likely to have problematic personalities, habits, etc., which make it more likely they'll get in trouble again -- but that doesn't mean more likely than not. Most drivers don't have (serious) accidents, most kids don't get in (serious) trouble, and if you restrict yourself to the subset of those who already had one incident, I agree a second is more likely, but not certain.

Comment author: Lumifer 11 August 2015 02:37:44PM 2 points [-]

but that doesn't mean more likely than not

How do you know?

most kids don't get in (serious) trouble

Yeah, but we are not talking about average kids. We're talking about kids who found themselves in juvenile detention and that's a huge selection bias right there. You can treat them as a sample (which got caught) from the larger underlying population which does the same things but didn't get caught (yet). It's not an entirely unbiased sample, but I think it's good enough for our handwaving.

but not certain.

Well, of course. I don't think anyone suggested any certainties here.

Comment author: FrameBenignly 12 August 2015 01:09:54AM *  1 point [-]

To use the paper's results, it looks like they're getting roughly 10 in 100 in the experimental condition and 18 in 100 in the control. Those kids were selected because they were considered high risk. If among the 82 of 100 kids who didn't get arrested there are >18 who are just as likely to be arrested as the 18 who were, then Emile's conclusion is correct across the year: the majority won't be arrested next year. Across an entire lifetime, however, it's less clear. They'd probably become more normal as time passed, but how quickly? I'd think Lumifer is right that they probably would end up back in jail, though I wouldn't describe it as a very regular problem.
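For what it's worth, here's a toy calculation (my own illustration, not from the paper) of how a per-year arrest rate compounds over a decade. The independence assumption and the decay parameters are invented for the sake of the example:

```python
# Toy model: probability of at least one arrest over several years,
# assuming (unrealistically) independence across years.
def p_ever_arrested(annual_rates):
    """P(at least one arrest) given a list of per-year arrest probabilities."""
    p_clean = 1.0
    for p in annual_rates:
        p_clean *= 1.0 - p
    return 1.0 - p_clean

# Constant 18%/year (the control-group rate) over 10 years:
constant = p_ever_arrested([0.18] * 10)

# A rate that "normalizes": decays from 18% toward a made-up 2% baseline:
decaying = p_ever_arrested([0.02 + 0.16 * 0.7 ** t for t in range(10)])
```

Under the constant rate a large majority get arrested at some point within the decade; even under fairly quick normalization the ten-year figure stays substantial, which is roughly the shape of the disagreement here.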

Comment author: Romashka 11 August 2015 01:58:18PM 1 point [-]

Do you think that in the future, when such technologies have become widespread, driver training should include at least one grisly crash, simulated and shown in 3-D? Or at least a mild crash?

Comment author: Username 10 August 2015 12:46:36PM *  12 points [-]

The moral imperative for bioethics by Steven Pinker.

Biomedical research, then, promises vast increases in life, health, and flourishing. Just imagine how much happier you would be if a prematurely deceased loved one were alive, or a debilitated one were vigorous — and multiply that good by several billion, in perpetuity. Given this potential bonanza, the primary moral goal for today’s bioethics can be summarized in a single sentence.

Get out of the way.

A truly ethical bioethics should not bog down research in red tape, moratoria, or threats of prosecution based on nebulous but sweeping principles such as “dignity,” “sacredness,” or “social justice.” Nor should it thwart research that has likely benefits now or in the near future by sowing panic about speculative harms in the distant future. These include perverse analogies with nuclear weapons and Nazi atrocities, science-fiction dystopias like “Brave New World’’ and “Gattaca,’’ and freak-show scenarios like armies of cloned Hitlers, people selling their eyeballs on eBay, or warehouses of zombies to supply people with spare organs. Of course, individuals must be protected from identifiable harm, but we already have ample safeguards for the safety and informed consent of patients and research subjects.

Comment author: [deleted] 11 August 2015 01:11:49PM 2 points [-]

I'm all in favor of "social justice" in medicine by its conventional definition, but that's not even a particularly difficult problem. Universal medical systems already exist and function well all across the planet. Likewise, nobody's actually going to vote for Brave New World.

It really does seem like "social justice", in a bioethical context, simply isn't the True Rejection.

Comment author: Gunnar_Zarncke 12 August 2015 11:04:20PM 0 points [-]

These online text comments would belong better in the Media Thread, especially as there are so many of them.

Comment author: Username 10 August 2015 01:13:18PM 11 points [-]

Composing Music With Recurrent Neural Networks

It’s hard not to be blown away by the surprising power of neural networks these days. With enough training, so called “deep neural networks”, with many nodes and hidden layers, can do impressively well on modeling and predicting all kinds of data. (If you don’t know what I’m talking about, I recommend reading about recurrent character-level language models, Google Deep Dream, and neural Turing machines. Very cool stuff!) Now seems like as good a time as ever to experiment with what a neural network can do.

For a while now, I’ve been floating around vague ideas about writing a program to compose music. My original idea was based on a fractal decomposition of time and some sort of repetition mechanism, but after reading more about neural networks, I decided that they would be a better fit. So a few weeks ago, I got to work designing my network. And after training for a while, I am happy to report remarkable success!

Comment author: pianoforte611 11 August 2015 09:48:58PM *  4 points [-]

It's certainly very interesting. It's a slight improvement over Markov chain music. That tends to sound good for any stretch of 5 seconds, but lacks a global structure making it pretty awful to listen to for any longer stretch of time. This music still lacks much of the longer range structures that make music sound like music. It's a lot like stitching together 5 different Chopin compositions. It is stylistically consistent, but the pieces don't fit together.

Having said that, it is very interesting to see what you can get out of a network with respect to consonance, dissonance, local harmonic context and timing. I'm most impressed by the rhythm, it sounds more natural to my ear than the note progression.
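To make the comparison concrete, here's a minimal sketch of the Markov-chain approach being contrasted (my own toy code, not the post's network). Each next note depends only on the current one, which is exactly why the output is locally plausible but has no long-range structure:

```python
import random

def train_markov(melody):
    """Count first-order note-to-note transitions in a training melody."""
    transitions = {}
    for a, b in zip(melody, melody[1:]):
        transitions.setdefault(a, []).append(b)
    return transitions

def generate(transitions, start, length, seed=0):
    """Random-walk the transition table; every local step is 'in style'."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        choices = transitions.get(out[-1])
        if not choices:
            break  # dead end: the training melody never left this note
        out.append(rng.choice(choices))
    return out

melody = ["C", "D", "E", "C", "E", "G", "E", "D", "C", "D", "E", "D", "C"]
model = train_markov(melody)
sample = generate(model, "C", 8)
```

Every adjacent pair in `sample` occurs somewhere in the training melody, so any two-note window sounds fine; nothing constrains the phrase as a whole, which is the stitched-together-Chopin effect described above.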

Comment author: Viliam 12 August 2015 07:50:20AM 2 points [-]

Maybe there are situations where these imperfections wouldn't matter, for example if the music were used as background for a computer game.

Comment author: Houshalter 10 August 2015 01:12:25PM 11 points [-]

Do Artificial Reinforcement-Learning Agents Matter Morally?

I've read this paper and find it fascinating. I think it's very relevant to LessWrong's interests: not just because it's about AI, but also because it asks hard moral and philosophical questions.

There are many interesting excerpts. For example:

The drug midazolam (also known as ‘versed,’ short for ‘versatile sedative’) is often used in procedures like endoscopy and colonoscopy... surveyed doctors in Germany who indicated that during endoscopies using midazolam, patients would ‘moan aloud because of pain’ and sometimes scream. Most of the endoscopists reported ‘fierce defense movements with midazolam or the need to hold the patient down on the examination couch.’ And yet, because midazolam blocks memory formation, most patients didn’t remember this: ‘the potent amnestic effect of midazolam conceals pain actually suffered during the endoscopic procedure’. While midazolam does prevent the hippocampus from forming memories, the patient remains conscious, and dopaminergic reinforcement-learning continues to function as normal.

Comment author: Betawolf 10 August 2015 09:41:29PM 4 points [-]

The author is associated with the Foundational Research Institute, which has a variety of interests highly connected to those of LessWrong, yet some casual searches suggest it hasn't been mentioned here before.

Briefly, they seem to be focused on averting suffering, with various outlooks on that, including effective altruism outreach, animal suffering, and AI risk as a cause of great suffering.

Comment author: Sherincall 12 August 2015 10:35:52AM 9 points [-]

CIA's The Definition of Some Estimative Expressions - what probabilities people assign to words such as "probably" and "unlikely".

The CIA actually has several of these articles, like Biases in Estimating Probabilities. Click around for more.

In hindsight, it seems obvious that they should.

Comment author: gwern 14 August 2015 12:58:56AM 8 points [-]

Modafinil survey: I'm curious about how modafinil users in general use it, get it, their experiences, etc, and I've been working on a survey. I would welcome any comments about missing choices, bad questions, etc on the current draft of the survey: https://docs.google.com/forms/d/1ZNyGHl6vnHD62spZyHIqyvNM_Ts_82GvZQVdAr2LrGs/viewform?fbzx=2867338011413840797

Comment author: btrettel 14 August 2015 01:13:18PM *  3 points [-]

Great idea.

One suggestion: This survey seems to be for people who use modafinil regularly. I might suggest doing something (perhaps creating another survey) to get opinions from people who tried modafinil once or twice and disliked it. My one experience with Nuvigil was quite bad, and I recall Vaniver saying that he thought modafinil did nothing at all for him.

Comment author: ChristianKl 14 August 2015 11:41:29AM *  3 points [-]

In general, do you find brand-name -afinils more effective than generics?

I think that question should offer more than just (yes) and (no) as answers. At least it should have an "I don't know" option.


I would add a question "When was the last time you used modafinil?" to see whether people are on aggregate right about how many days per week they use it. Maybe even "At what time of the day did you use it?"


I would be interested in a question about how many hours the person sleeps on average.

Have you thought about having a question about bodyweight? I would be interested in knowing whether heavier people take a larger dose.

Comment author: gwern 14 August 2015 06:27:35PM 0 points [-]

I've added 'the same' as a third option to the generic vs brand-name question, and 2 questions about average hours of sleep a night & body weight.

I would add a question "When was the last time you used modafinil?" to see whether people are on aggregate right about how many days per week they use it.

What would the response there be, an exact date or n days ago or what?

Comment author: Tem42 14 August 2015 06:44:30PM 2 points [-]

I have no experience with -afinils, but it seems to me that there will surely be cases of people who have tried only brand-name (or, alternatively, only generic) -afinil, and therefore cannot accurately respond to the question

In general, do you find brand-name -afinils more effective than generics?

With yes, no, or the same. The correct answer would be "I don't know". If I were taking this survey, I would skip that question rather than try to guess which answer you wanted in that case. But if I were designing the survey, I would go with ChristianKl's suggestion.

Comment author: ChristianKl 15 August 2015 01:04:11AM 1 point [-]

I've added 'the same' as a third option to the generic vs brand-name question

I would guess that a majority of the respondents haven't tested multiple kinds of modafinil and thus are not equipped to answer the question at all. "I don't know" seems to be the proper answer for them.

Comment author: gwern 06 September 2015 08:43:02PM 1 point [-]

Alright, I've added a don't-know option and added a 'when did you last use' question.

Comment author: ChristianKl 15 August 2015 01:02:01AM 1 point [-]

What would the response there be, an exact date or n days ago or what?

Both would be possible but I think "n days ago" is more standard. It makes the data analysis easier.

Comment author: RichardKennaway 14 August 2015 10:27:07PM *  2 points [-]

A few details:

In the questions about SNPs, 23andMe reports RS4570625 as G or T, not A or G, and RS4633 as C or T, not A or G.

I was surprised to see Vitamin D listed as a nootropic, and Google turns up nothing much on the subject. Fixing a deficiency of anything will likely have a positive effect on mental function, but that is drawing the boundary of "nootropic" rather wide.

Why is nicotine amplified as "gum, patch, lozenge", to the exclusion of tobacco? Cancer is a reason to not smoke tobacco, but I don't think it's a reason not to ask about it. Or are those who smoke not smart enough to be in the target population for the survey? :)

ETA: Also a typo in "SNP status of COMT RS4570625": the subtext mentions rs4680, not rs4570625. I don't know what "Val/Met" and "COMT" mean, but are those specific to RS4680 or correct for all three SNPs?

Comment author: gwern 06 September 2015 08:59:07PM 0 points [-]

In the questions about SNPs, 23andMe reports RS4570625 as G or T, not A or G, and RS4633 as C or T, not A or G.

Oops. Shouldn't've assumed they'd be the same...

but that is drawing the boundary of "nootropic" rather wide.

It is but it's still common and can be useful. The nootropics list is based on Yvain's previous nootropics survey, which I thought might be useful for comparison. (I also stole a bunch of questions from his LW survey too, figuring that they're at least battle-tested at this point.)

Why is nicotine amplified as "gum, patch, lozenge", to the exclusion of tobacco?

I have no interest in tobacco, solely nicotine. Although now that you object to that, I realize I forgot to specify vaping/e-cigs as included.

Comment author: ChristianKl 15 August 2015 04:58:14PM 0 points [-]

Val/Met

Amino acids. Val stands for valine; Met stands for methionine.

COMT

I think COMT is Catechol-O-methyl transferase which is the protein in question.

Comment author: Username 12 August 2015 04:53:11PM *  7 points [-]

A Scientific Look at Bad Science

By one estimate, from 2001 to 2010, the annual rate of retractions by academic journals increased by a factor of 11 (adjusting for increases in published literature, and excluding articles by repeat offenders) [2]. This surge raises an obvious question: Are retractions increasing because errors and other misdeeds are becoming more common, or because research is now scrutinized more closely? Helpfully, some scientists have taken to conducting studies of retracted studies, and their work sheds new light on the situation.

“Retractions are born of many mothers,” write Ivan Oransky and Adam Marcus, the co-founders of the blog Retraction Watch, which has logged thousands of retractions in the past five years. A study in the Proceedings of the National Academy of Sciences reviewed 2,047 retractions of biomedical and life-sciences articles and found that just 21.3 percent stemmed from straightforward error, while 67.4 percent resulted from misconduct, including fraud or suspected fraud (43.4 percent) and plagiarism (9.8 percent) [3].

Surveys of scientists have tried to gauge the extent of undiscovered misconduct. According to a 2009 meta-analysis of these surveys, about 2 percent of scientists admitted to having fabricated, falsified, or modified data or results at least once, and as many as a third confessed “a variety of other questionable research practices including ‘dropping data points based on a gut feeling,’ and ‘changing the design, methodology or results of a study in response to pressures from a funding source’ ” [4].

Comment author: RichardKennaway 14 August 2015 10:05:23PM *  6 points [-]

I just came across this: "You're Not As Smart As You Could Be", about Dr. Samuel Renshaw and the tachistoscope. This is a device used for exposing an image to the human eye for the briefest fraction of a second. In WWII he used it to train navy and artillery personnel to instantly recognise enemy aircraft, apparently with great success. He also used it for speed reading training; this application appears to be somewhat controversial.

I remember the references to Renshaw in some of Heinlein's stories, and I knew he was a real person, but this is the first time I've seen a substantial account of his work.

A few more references:

Wikipedia is rather brief.

Open access review article about work with the tachistoscope, in the Journal of Behavioral Optometry, 2003. This is the closest thing I've found to a modern reference.

An academic paper by Renshaw himself from 1945. Despite its antiquity, it is paywalled. I have not been able to access the full text.

This information is mostly rather old and musty, and there appears to be little modern interest. With current computers, it should be very easy to duplicate the technology, although low-level graphics expertise is likely needed to get very short, precise exposure times.
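As a quick sanity check on that timing claim (my own numbers, assuming an ordinary fixed-refresh display): exposure is quantized to whole refresh frames, so the floor depends entirely on the refresh rate.

```python
def frame_ms(refresh_hz):
    """Duration of one refresh frame in milliseconds."""
    return 1000.0 / refresh_hz

def exposure_ms(refresh_hz, n_frames):
    """On a fixed-refresh display, exposures come in whole-frame steps."""
    return n_frames * frame_ms(refresh_hz)

floor_60hz = exposure_ms(60, 1)    # shortest flash on a 60 Hz monitor
floor_240hz = exposure_ms(240, 1)  # shortest flash on a 240 Hz monitor
```

A 60 Hz monitor bottoms out around 17 ms per flash; anything much shorter needs a high-refresh display or genuinely low-level control of the video output, which is presumably the expertise referred to above.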

Comment author: Sarunas 15 August 2015 12:26:19PM 2 points [-]
Comment author: RichardKennaway 15 August 2015 12:46:29PM 1 point [-]

Thanks.

Comment author: WhyAsk 10 August 2015 07:20:57PM *  6 points [-]

Even if half of what's posted here is at present beyond me or if I am not currently interested in a specific topic, I can learn a lot from this forum.

Comment author: Viliam 11 August 2015 08:10:32AM 2 points [-]

That's how it's meant to be used, I guess. Many people here are probably not interested in all topics.

Comment author: ChristianKl 13 August 2015 05:30:16PM *  5 points [-]

Is there a good book about how to read scientific papers? A book that neither says that papers should never be trusted, nor is oblivious to the real world, where research often doesn't replicate?

One that goes deeper than just the password of "correlation isn't causation"? That doesn't just look at theoretical statistics, but is more empirical about heuristics for judging whether a paper will successfully replicate?

Comment author: iarwain1 13 August 2015 03:29:44PM *  5 points [-]

On the subject of prosociality / wellbeing and religion, a recent article challenges the conventional wisdom by claiming that, depending on the particular situation, atheism might be just as good or even better for prosociality / wellbeing than religion is.

Comment author: Lumifer 12 August 2015 04:27:51PM *  4 points [-]

Augur -- a blockchain general-purpose prediction market running on Ethereum.

Anyone knows anything about it? Gwern..?

Comment author: gwern 12 August 2015 05:32:37PM *  5 points [-]

Yes, I've paid close attention to Truthcoin and it. They are both interesting projects with a chance of success, although it's hard to make any strong predictions or claims before they are up and running, in part because of the running feud between Paul and the Augur guys. (For example, they both seem to agree that the original consensus/clustering algorithm using SVD/PCA will not work in an adversarial setting, but will Augur's new clustering algorithm succeed? It comes with no formal proofs other than that it seems to work in simulations; Paul seems to dislike it but has not, in any of his rants that I've seen, explained why he thinks it will not work or what a better solution would be.)

I will probably buy a bit of the Augur crowdsale so I can try it out myself.

Comment author: Lumifer 11 August 2015 04:44:31PM 4 points [-]

The soon-to-be prisoner's dilemma in real life, no less :-)

Comment author: WalterL 12 August 2015 07:55:02PM 3 points [-]

I mean, it's not like you couldn't already send mail to the sheriff. A stylish flyer is just a reminder that it's possible. Good for them.

Comment author: MrMind 12 August 2015 09:38:27AM 1 point [-]

Uhm, I wonder if they are aware that the prisoner's dilemma is defeated through pre-commitment. They are weeding out the small dealers and strengthening the big ones.

Comment author: Lumifer 12 August 2015 03:36:56PM 1 point [-]

I think the police are mostly playing a PR game and/or amusing themselves. The idea of ratting on a competitor is simple enough to occur to drug dealers "naturally" :-)

Also note that this is not quite a PD where defecting gives you a low-risk slightly positive outcome. Becoming a police informer is... frowned upon on the street and is actually a high-risk move, usually taken to avoid a really bad outcome.

Comment author: Tem42 12 August 2015 04:00:10PM 3 points [-]

I would expect that it is slightly more than a PR stunt; it seems to me that most of the people who will use this 'service' are disgruntled citizens with no direct connection to the drug trade. Anyone who wants to accuse someone of trading in drugs now has an easy, anonymous, officially sanctioned way to do so, and clear instruction as to what information is most useful -- without having to ask!

I suspect that framing it as "drug dealers backstabbing drug dealers" is just a publicly acceptable way to introduce a snitching program that would otherwise be frowned upon by many.

Comment author: Lumifer 12 August 2015 04:13:43PM 2 points [-]

"If you see something, tell us" kind of thing? Maybe, that makes some sense.

I wonder how good that police department is at dealing with false positives X-/

Comment author: Username 10 August 2015 12:55:48PM 9 points [-]

Why Lonely People Stay Lonely

One long-held theory has been that people become socially isolated because of their poor social skills — and, presumably, as they spend more time alone, the few skills they do have start to erode from lack of use. But new research suggests that this is a fundamental misunderstanding of the socially isolated. Lonely people do understand social skills, and often outperform the non-lonely when asked to demonstrate that understanding. It’s just that when they’re in situations when they need those skills the most, they choke.

In a paper recently published in the journal Personality and Social Psychology Bulletin, Franklin & Marshall College professor Megan L. Knowles led four experiments that demonstrated lonely people’s tendency to choke when under social pressure. In one, Knowles and her team tested the social skills of 86 undergraduates, showing them 24 faces on a computer screen and asking them to name the basic human emotion each face was displaying: anger, fear, happiness, or sadness. She told some of the students that she was testing their social skills, and that people who failed at this task tended to have difficulty forming and maintaining friendships. But she framed the test differently for the rest of them, describing it as a this-is-all-theoretical kind of exercise.

Before they started any of that, though, all the students completed surveys that measured how lonely they were. In the end, the lonelier students did worse than the non-lonely students on the emotion-reading task — but only when they were told they were being tested on their social skills. When the lonely were told they were just taking a general knowledge test, they performed better than the non-lonely. Previous research echoes these new results: Past studies have suggested, for example, that the lonelier people are, the better they are at accurately reading facial expressions and decoding tone of voice. As the theory goes, lonely people may be paying closer attention to emotional cues precisely because of their ache to belong somewhere and form interpersonal connections, which results in technically superior social skills.

But like a baseball pitcher with a mean case of the yips or a nervous test-taker sitting down for an exam, being hyperfocused on not screwing up can lead to over-thinking and second-guessing, which, of course, can end up causing the very screwup the person was so bent on avoiding. It’s largely a matter of reducing that performance anxiety, in other words, and Knowles and her colleagues did manage to find one way to do this for their lonely study participants, though, admittedly, it is maybe not exactly applicable outside of a lab. The researchers gave their volunteers an energy-drink-like beverage and told them that any jitters they felt were owing to the caffeine they’d just consumed. (In actuality, the beverage contained no caffeine, but no matter — the study participants believed that it did.) They then did the emotion-reading test, just like in the first experiment. Compared to scores from that first experiment, there was no discernible difference in scores for the non-lonely, but the researchers did see improvement among the lonely participants — even when the task had been framed as a social-skills test.

It may be difficult to trick yourself into believing your nerves are from caffeine and not the fact that you really, really, really want to make a good impression in some social setting, but there are other ways to change your own thinking about anxiety. One of my recent favorites is from Harvard Business School’s Alison Wood Brooks, who found that when she had people reframe their nerves as excitement, they subsequently performed better on some mildly terrifying task, like singing in public. At the very least, this current research presents a fairly new way to think about lonely people. It’s not that they need to brush up on the basics of social skills — that they’ve likely already got down. Instead, lonely people may need to focus more on getting out of their own heads, so they can actually use the skills they’ve got to form friendships and begin to find a way out of their isolation.

Comment author: Viliam 11 August 2015 08:00:05AM 3 points [-]

I imagine such behavior could happen if someone had a bad experience in the past, where they were disproportionately punished in some social situation. The punishment didn't even have to be a predictable logical consequence; maybe they just had bad luck and met some psycho. Or maybe they were bullied at school, etc.

If their social skills are otherwise okay, they may intellectually understand what is usually the best response, but in real life they are overwhelmed by fear and their behavior is dominated by avoiding the thing that "caused" the bad response in the past. For example, if the bad thing happened after saying "hello" to a stranger, they may be unable to speak with strangers, even if they know from observing others that this is a good thing to do.

Then the framing of the test could make students think either about "what is generally the right approach?" or "what would I do?"

Comment author: FrameBenignly 10 August 2015 06:29:50PM 2 points [-]

About 21 people per group (86/4) is not a strong result unless the effect size is large, which I doubt. I would put hardly any faith in this paper. Maybe raise your prior by 3%, but it's hard to be that precise with beliefs.
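For what it's worth, a back-of-the-envelope power calculation makes the sample-size worry concrete. This is only a sketch using the standard normal approximation to a two-sample test; the effect sizes below are illustrative assumptions, not numbers from the paper:

```python
# Rough statistical power for a two-group comparison with ~21 subjects per
# group (86/4), via the normal approximation to the two-sample test.
# The effect sizes are illustrative assumptions, not the paper's.
from math import sqrt, erf

def normal_cdf(x):
    return 0.5 * (1 + erf(x / sqrt(2)))

def approx_power(d, n_per_group, alpha_z=1.96):
    # Noncentrality of the test statistic under effect size d (Cohen's d),
    # compared against the two-sided 5% critical value.
    delta = d * sqrt(n_per_group / 2)
    return 1 - normal_cdf(alpha_z - delta)

for d in (0.2, 0.5, 0.8):  # conventional small / medium / large effects
    print(f"d = {d}: power ~ {approx_power(d, 21):.2f}")
```

With 21 per group, only a large effect gives power much above 50%, which is the point: a significant result from such a small sample is weak evidence unless the true effect is big.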

Comment author: pianoforte611 11 August 2015 09:40:34PM *  6 points [-]

I'd like to see fewer low quality scientific criticisms here. Instead of speculating on effect sizes without reading the paper, and bloviating on sample sizes without doing the relevant power calculations, perhaps try looking at the results section?

With respect to this paper, the results were consistent and significant across three tasks - an eye task, a facial expression task, and a vocal tone task. They did a non-social task (an anagram task) and found no significant effect (though that wasn't the purpose of doing the task; it's a bit more complicated than that). They also did an interesting caffeine experiment to see if they could relieve social anxiety by convincing participants that the anxiety was due to a (fake) caffeinated drink.

Anyways, as with any research in this area, it's too soon to be confident of what the results mean. But armchair uninformed scientific criticism will not advance knowledge.

(In hindsight this is a bit of an overreaction, but I've seen too many poor criticisms of papers and too much speculation particularly on Reddit, but also here and on several blogs, and not nearly enough careful reading)

Comment author: Douglas_Knight 14 August 2015 01:59:39AM 0 points [-]

I would like to see fewer low-quality science papers posted. FB put in way more work than was justified. My new policy is to downvote every psychology paper posted without any discussion of the endemic problems in psychology research and why that paper might not be pure noise.

Comment author: pianoforte611 14 August 2015 02:42:47AM 2 points [-]

Are all psychology papers garbage? And if only some are, how do you tell which is which if you don't read past the first line of the abstract? (which FB didn't, because he was unaware that more than one experiment was conducted).

Comment author: Douglas_Knight 14 August 2015 03:03:21AM 3 points [-]

We have to filter the papers somehow, and the people who do the filtering have to read them. But that doesn't mean that the people doing the filtering should be people on LW. Username relied on a journalist for filtering. This does filter for interesting topics, but not for quality. That Username did not link the actual paper suggests that he did not read it. Thus my prior is that it is of median quality and pure noise. Even if psychology papers were all perfectly accurate, there are way too many that get coverage and it is unlikely that one getting coverage this month is worth reading.

There are standard places to look for filters: review articles and books.

Comment author: pianoforte611 14 August 2015 03:16:50AM 0 points [-]

Okay that's very fair.

Comment author: [deleted] 10 August 2015 10:11:54PM *  1 point [-]

Shouldn't the authors be aware of this? (I think one of them is even fairly well known in psychology circles.)

Comment author: RichardKennaway 12 August 2015 10:36:38AM 1 point [-]

I am sure the authors are more informed about their work than anyone who has not read it.

Comment author: FrameBenignly 11 August 2015 03:51:54PM 1 point [-]

I'm not sure what the correlation is between prominence and paper quality. At any rate, he's a co-author, not the main author. Co-authors can sometimes have very little to do with the actual paper.

Comment author: Username 10 August 2015 01:02:18PM 7 points [-]

Dead enough by Walter Glannon

To honour donors, we should harvest organs that have the best chance of helping others – before, not after, death

Now imagine that before the stroke our hypothetical patient had expressed a wish to donate his organs after his death. If neurologists could determine that the patient had no chance of recovery, then would that patient really be harmed if transplant surgeons removed life-support, such as ventilators and feeding tubes, and took his organs, instead of waiting for death by natural means? Certainly, the organ recipient would gain: waiting too long before declaring a patient dead could allow the disease process to impair organ function by decreasing blood flow to them, making those organs unsuitable for transplant.

But I contend that the donor would gain too: by harvesting his organs when he can contribute most, we would have honoured his wish to save other lives. And chances are high that we would be taking nothing from him of value. This permanently comatose patient will never see, hear, feel or even perceive the world again whether we leave his organs to wither inside him or not.

Comment author: ZankerH 10 August 2015 05:12:51PM *  6 points [-]

This might have the side-effect of putting even more people off signing up for donation. Most people I've talked to about it who are opposed cite horror stories about doctors prematurely "giving up" on donors to get at their organs.

Comment author: DanielLC 10 August 2015 07:49:58PM 3 points [-]

There are reasons why you shouldn't kill someone in a coma who didn't want to be killed while comatose, even if you disagree with them about what makes life have moral value. If they agreed to have the plug pulled when it becomes clear that they won't wake up, then it seems pretty reasonable to take out the organs before pulling the plug. And given what's at stake, given permission, you should be able to take out their organs early and hasten their death by a short time in exchange for making it more likely to save someone else.

And why are you already conjecturing about what we would have wanted? We're not dead yet. Just ask us what we want.

Comment author: WalterL 10 August 2015 06:59:53PM 4 points [-]

Honest question, if you are cool with killing a person in a coma, based on the fact that they will never sense again, how do you feel about a person doing life in solitary? They may sense, but they aren't able to communicate what they sense to any other human.

What exactly makes life worth its organs, in your eyes?

Comment author: DanielLC 10 August 2015 07:44:49PM 2 points [-]

A person in solitary still has experiences. They just don't interact with the outside world. People in a coma are, as far as we can tell, not conscious. There are plenty of animals that people are okay with killing and eating that are more likely to be sentient than someone in a coma.

Comment author: ChristianKl 10 August 2015 08:42:25PM 3 points [-]

There are plenty of animals that people are okay with killing and eating that are more likely to be sentient than someone in a coma.

By that standard how about harvesting the organs of babies?

Comment author: James_Miller 10 August 2015 11:30:30PM *  4 points [-]

By that standard how about harvesting the organs of babies?

Planned Parenthood does this for aborted babies.

Comment author: DanielLC 10 August 2015 11:45:22PM 2 points [-]

I think babies are more person-like than the animals we eat for food. I'm not an expert in that though. They're still above someone in a coma.

Comment author: Lumifer 11 August 2015 12:51:17AM -2 points [-]

I think babies are more person-like than the animals we eat for food. I'm not an expert in that though.

More for the "shit LW people say" collection :-)

Comment author: WalterL 10 August 2015 07:58:19PM 2 points [-]

Yeah, and I'm asking, do those experiences "count"?

If organs are going from comatose humans to better ones, and we've decided that people who aren't sensing don't deserve theirs, how about people who aren't communicating their senses? It seems like this principle can go cool places.

If we butchered some mass murderer we could save the lives of a few taxpayers with families that love them (there will be forms, and an adorableness quotient, and Love Weighting). All that the world would be out is the silent contemplation of the interior of a cell. Clearly a net gain, yeah?

So, are we stopping at "no sensing -> we jack your meats", or can we cook with gas?

Comment author: DanielLC 10 August 2015 11:42:36PM 1 point [-]

It's not about communication. It's not even about sensing. It's about subjective experience. If your mind worked properly but you just couldn't sense anything or do anything, you'd have moral worth. It would probably be negative and it would be a mercy to kill you, but that's another issue entirely. From what I understand, if you're in a coma, your brain isn't entirely inactive. It's doing something. But it's more comparable to what a fish does than a conscious mammal.

Someone in a coma is not a person anymore. In the same sense that someone who is dead is not a person anymore. The problem with killing someone is that they stop being a person. There's nothing wrong with taking them from not a person to a slightly different not a person.

If we butchered some mass murderer we could save the lives of a few taxpayers with families that love them

A mass murderer is still a person. They think and feel like you do, except probably with less empathy or something. The world is better off without them, and getting rid of them is a net gain. But it's not a Pareto improvement. There's still one person that gets the short end of the stick.

Comment author: Tem42 10 August 2015 08:39:36PM 1 point [-]

I can't tell if you have a recommendation. If you have a model to suggest, please share it.

Comment author: Tem42 10 August 2015 07:22:48PM *  1 point [-]

Given that this is suggested to be a voluntary system, it doesn't really matter what Walter Glannon thinks -- it matters what you think.

Personally, I would be more interested in signing up for this if I was assured that the permanent damage was to the grey matter, and would be happy if this included both comas and permanent vegetative states. But YMMV.

It is worth noting here that being in solitary confinement does not necessarily prevent you from writing, receiving visitors, or making telephone calls (it depends on your local jurisdiction). Also, very few people are sentenced to be in solitary confinement until they die. In those places where this sort of sentence is permitted, it is unlikely that prisoners would be allowed any choice in their fate, but it is not obviously bad for a justly imprisoned person to choose suicide (with or without organ donation) in lieu of a life sentence.

EDIT: on re-reading, I see that this was not stated to always be a voluntary procedure; the author goes back and forth between voluntary and involuntary procedures. In involuntary cases, I agree that the simple criteria of "brain functions at a level too low to sustain consciousness but enough to sustain breathing and other critical functions without mechanical support" is too lax. I would still agree with the author in general that DDR is too strong.

Comment author: Tem42 10 August 2015 05:20:51PM 1 point [-]

You can approximate this by writing a living will (and you should write a living will regardless of whether or not you are an organ donor.)

However, I agree there should be more finely grained levels of organ donation, and that this should be a clear option.

Comment author: Romashka 15 August 2015 12:45:27PM *  3 points [-]

When people comment on something here, do they know the answer at once? For example, the valuable advice on statistics I received several times seems to be generated by pattern-recognition (at least at my level of understanding). I myself often have to spend more time framing my comments than actually recognizing what I want to express (not that I always succeed, but there's an internal mean-o-meter which says: this is it). OTOH, much of the material I simply don't understand, not having sufficient prerequisite knowledge; the polls are aimed at the areas with which you personally interact.

I mostly know what I am going to say

The posts to which I don't have an immediate answer

Please add your comments on which topics you have to 'slow down' - anonymously, if you wish.

Edit to add: My answer undergoes changes before I submit it.


Comment author: satt 15 August 2015 01:45:58PM *  2 points [-]

A lot of my comments here are correcting/supplementing/answering someone else's comment. Reflecting on how I think the typical sequence goes, it might be something like

  • as I read a comment, get a sensation of "this seems prima facie wrong" or "that sounds misleading" or whatever
  • finish reading, then re-read to check I'm not misunderstanding (and sometimes it turns out I have misunderstood)
  • translate my gut feeling of wrongness into concrete criticism(s)
  • rephrase & rephrase & rephrase & rephrase what I've written to try to minimize ambiguity and maybe adjust the politeness level

and so it's hard to say how long it takes me to "mostly know what I am going to say". I often have a vague outline of what I ought to say within 10 or 20 seconds of noticing my feeling that Something's Wrong, but it can easily take me 10 or 20 minutes to actually decide what I'm going to say. For instance, when I read this comment, I immediately thought, "I don't think that can be right; Russia's a violent country and some wars are small", but it took me a while (maybe an hour?) to put that into specific words, and decide which sources to link.

Edit to add: I agree that pattern recognition plays an important part in this. A big part of expertise, I reckon, is just planting hundreds & hundreds of pattern-recognition rules into your brain so when you see certain errors or fallacies you intuitively recognize them without conscious effort.

Comment author: Romashka 15 August 2015 02:06:40PM 1 point [-]

I am somewhat afraid, then, that reading about fallacies won't significantly change my ability to recognize them. Perhaps 'rationality training' should really focus on the editing part, not on the recognizing part. I'll add another question.

Comment author: satt 18 August 2015 12:44:33AM 0 points [-]

Depends how your mind works, I guess. I read about fallacies when I was young and I feel like that helped me recognize them, even without much deliberate practice in recognizing them (but I surely had a lot of accidental & semi-accidental practice).

Recognition is probably more important than the editing part, because the editing part isn't much use without having the "Aha! That's probably a fallacy!" recognitions to edit, and because you might be able to do a good job of intuitively recognizing fallacies even if you can't communicate them to other people cleanly & unambiguously.

Comment author: Tem42 14 August 2015 02:55:01AM 3 points [-]

Being a comparatively new user, and thus having limited karma, I can't engage fully with The Irrationality Game. Seeing as it's about 5 years old, is there any interest in playing the game anew? Are there rules on who should/can post such things?

Comment author: Zian 22 August 2015 06:22:18AM 0 points [-]

Looks interesting. Feel free to try.

Comment author: ChristianKl 14 August 2015 11:19:18AM 0 points [-]

Are there rules on who should/can post such things?

No. You are free to start new threads like this in discussion. Karma votes on the new thread will tell you to what extent the community is happy that you started a new thread.

If you find yourself posting threads that get negative karma, try to understand why they get negative karma and don't repeat mistakes.

Comment author: Tem42 14 August 2015 03:52:15PM 0 points [-]

My question was actually a bit more targeted - I should have been more precise.

Will_Newsome posted the original Irrationality Game, and he has left the site (well, he hasn't posted for months; perhaps I need to PM him and ask if he's still around). His original post was really very well written, and while I could re-write it, I would probably not change much. So basically, if I repost the idea of an established user who is no longer around... Is that really okay?

I would have no objection to posting under Username, if that made it 'more okay', and I wouldn't mind at all if someone else posted it rather than I -- I just want to play an active version of the game.

I will also double-check and see if Will_Newsome might still be on-site and interested.

Comment author: btrettel 11 August 2015 07:44:30PM 3 points [-]

Are there any guidelines for making comprehensive predictions?

Calibration is good, as is accuracy. But if you never even thought to predict something important, it doesn't matter if you have perfect calibration and accuracy. For example, Google recently decided to restructure, and I never saw this coming.

I can think of a few things. One is to use a prediction service like PredictionBook that aggregates predictions from many people. I never would have considered half the predictions on the site. Another is to get in the habit of recognizing when you don't think something will change and questioning that. E.g., I never would have thought not wearing socks would become stylish, but it seems to have caught on at least among some people.

Questioning literally everything you can think of might work, but it seems pretty inefficient. I'm interested in predictions which are important in some sense.

Any ideas would be appreciated.

Comment author: Lumifer 11 August 2015 08:22:58PM 2 points [-]

Are there any guidelines for making comprehensive predictions?

Are you asking how to generate a universe of possible outcomes to consider, basically?

Comment author: btrettel 11 August 2015 09:59:36PM 0 points [-]

Yes, that's one way to put it. The main restriction would be to pick "important" predictions, whatever "important" means here.

One other idea I just had would be to make a list of general questions you can ask about anything along with a list of categories to apply these questions to.

Comment author: Lumifer 12 August 2015 03:27:43PM *  1 point [-]

The main restriction would be to pick "important" predictions, whatever "important" means here.

I don't know if there is any useful algorithm here. The space of possibilities is vast, black swans lurk at the outskirts, and Murphy is alive and well :-/

You can try doing something like this:

  • List the important (to you) events or outcomes in some near future
  • List everything that could potentially affect these events or outcomes.

and you get your universe of "events of interest" to assign probabilities to.

I doubt this will be a useful exercise in practice, though.

Comment author: btrettel 12 August 2015 04:01:41PM 0 points [-]

Yes, upon reflection, it seems that something along these lines is probably the best I can do, and you're right that it probably will not be useful.

I'll give it a try and evaluate whether I'd want to try again.

Comment author: Daniel_Burfoot 10 August 2015 11:00:07PM 6 points [-]

If the Efficient Market Hypothesis is true, shouldn't it be almost as hard to lose money on the market as it is to gain money? Let's say you had a strategy S that reliably loses money. Shouldn't you be able to define an inverse strategy S', that buys when S sells and sells when S buys, that reliably earns money? For the sake of argument rule out obvious errors like offering to buy a stock for $1 more than its current price.

Comment author: Vaniver 11 August 2015 12:03:17AM 10 points [-]

shouldn't it be almost as hard to lose money on the market as it is to gain money?

Consider the dynamic version of the EMH: that is, rather than "prices are where they should be," it's "agents who perceive mispricings will pounce on them, making them transient."

Then a person placing a dumb trade is creating a mispricing, which will be consumed by some market agent. There's an asymmetry between "there is no free money left to be picked up" and "if you drop your money, it will not be picked up" that makes the first true (in the static case) and the second false.

Comment author: Lumifer 11 August 2015 01:04:18AM 3 points [-]

Then a person placing a dumb trade is creating a mispricing, which will be consumed by some market agent.

Well, that looks like an "offering to buy a stock for $1 more than its current price" scenario. You can easily lose a lot of money by buying things at the offer and selling them at the bid :-)

But let's imagine a scenario where everything is happening pre-tax, there are no transaction costs, we're operating in risk-adjusted terms and, to make things simple, the risk-free rate is zero. Moreover, the markets are orderly and liquid.

Assuming you can competently express a market view, can you systematically lose money by consistently taking the wrong side under EMH?

Comment author: Salemicus 11 August 2015 11:02:23AM *  2 points [-]

Consider penny stocks. They are a poor investment in terms of expected return (unless you have secret alpha). But they provide a small chance of very high returns, meaning they operate like lottery tickets. This isn't a mispricing - some people like lottery tickets, and so bid up the price until they become a poor investment in terms of expected return (problem for the CAPM, not for the EMH). So you can systematically lose money by taking the "wrong" side, and buying penny stocks.

Does that count as an example, or does that violate your "risk-adjusted terms" assumption? I think we have to be careful about what frictions we throw out. If we are too aggressive in throwing out notions like an "equity premium," or hedging, or options, or market segmentation, or irreducible risk, or different tolerances to risk, we will throw out the stuff that causes financial markets to exist. An infinite frictionless plane is a useful thought experiment, but you can't then complain that a car can't drive on such a plane.

Comment author: Lumifer 11 August 2015 02:53:55PM *  0 points [-]

Yes, we have to be quite careful here.

Let's take penny stocks. First, there is no exception for them in the EMH so if it holds, the penny stocks, like any other security, must not provide a "free" opportunity to make money.

Second, when you say they are "a poor investment in terms of expected return", do you actually mean expected return? Because it's a single number which has nothing to do with risk. A lottery can perfectly well have a positive expected return even if your chance of getting a positive return is very small. The distribution of penny stock returns can be very skewed and heavy-tailed, but the EMH does not demand anything of the return distributions.

So I think you have to pick one of two: either penny stocks provide negative expected return (remember, in our setup the risk-free rate is zero), but then EMH breaks; or the penny stocks provide non-negative expected return (though with an unusual risk profile) in which case EMH holds but you can't consistently lose money.

Does that violate your "risk-adjusted terms" assumption?

My "risk-adjusted terms" were a bit of a handwave over a large patch of quicksand :-/ I mostly meant things like leverage, but you are right in that there is sufficient leeway to stretch it in many directions. Let me try to firm it up: let's say the portfolio which you will use to consistently lose money must have fixed volatility, say, equivalent to the volatility of the underlying market.

Comment author: Salemicus 11 August 2015 07:58:30PM 2 points [-]

Second, when you say they are "a poor investment in terms of expected return", do you actually mean expected return? ... A lottery can perfectly well have a positive expected return even if your chance of getting a positive return is very small.

Yes, I mean expected return. If you hold penny stocks, you can expect to lose money, because the occasional big wins will not make up for the small losses. You are right that we can imagine lotteries with positive expected return, but in the real world lotteries have negative expected return, because the risk-loving are happy to pay for the chance of big winnings.

[If] penny stocks provide negative expected return ... then EMH breaks

Why?

Suppose we have two classes of investors, call them gamblers and normals. Gamblers like risk, and are prepared to pay to take it. In particular, they like asymmetric upside risk ("lottery tickets"). Normals dislike risk, and are prepared to pay to avoid it (insurance, hedging, etc). In particular, they dislike asymmetric downside risk ("catastrophes").

There is an equity instrument, X, which has the following payoff structure:

  • 99% chance: payoff of 0
  • 1% chance: payoff of 1000

Clearly, E(X) is 10. However, gamblers like this form of bet, and are prepared to pay for it. Consequently, they are willing to bid up the price of X to (say) 11.

Y is the instrument formed by shorting X. When X is priced at 11, this has the following payoff structure:

  • 99% chance: payoff of 11
  • 1% chance: payoff of -989

Clearly, E(Y) is 1. In other words, you can make money, in expectation, by shorting X. However, there is a lot of downside risk here, and normals do not want to take it on. They would require E(Y) to be 2 (say) in order to take on a bet of that structure.

So, assuming you have a "normal" attitude to risk, you can lose money here (by buying X), but you can't win it in risk-adjusted terms. This is caused by the market segmentation caused by the different risk profiles. Nothing here is contrary to the EMH, although it is contrary to the CAPM.
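The arithmetic in this toy model is easy to check mechanically; the numbers below are exactly the ones above, nothing more:

```python
# Payoffs in the toy model: X pays 0 with probability 0.99 and 1000 with
# probability 0.01; gamblers bid its price up to 11; Y is a short position in X.
p_win, payoff, price = 0.01, 1000, 11

E_X = (1 - p_win) * 0 + p_win * payoff                # expected payoff of X
E_buy = E_X - price                                   # expected profit from buying X at 11
E_Y = (1 - p_win) * price + p_win * (price - payoff)  # expected profit from shorting X

print(E_X, E_buy, E_Y)
```

So buying X loses 1 in expectation while shorting it gains 1, yet the short side carries the rare -989 blow-up that the "normals" refuse to hold.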

Thoughts:

  1. Penny stocks (and high-beta instruments generally, such as deep out-of-the-money options) display this behaviour in real life.
  2. A more realistic model might include some deep-pocketed funds with a neutral attitude to risk who could afford to short X. But in real life, there is market segmentation and a lack of liquidity. Penny stocks are illiquid and hard to short, and so are many other high-beta instruments.
  3. The logical corollary of this model is that safe, boring equities will outperform stocks with lottery-ticket-like qualities. And it therefore follows that safe, boring equities will outperform the market as a whole. And that also seems true in real life.
  4. There are plausible microfoundations for why there might be a "gambler" class of investor. For example, fund managers are risking their clients' capital, not their own, and are typically paid in a ranking relative to their peers. Their incentives may well lead them to buy lottery tickets.
Comment author: Lumifer 11 August 2015 08:34:06PM 1 point [-]

However, there is a lot of downside risk here, and normals do not want to take it on.

By itself, no. But this is diversifiable risk and so if you short enough penny stocks, the risk becomes acceptable. To use a historical example, realizing this (in the context of junk bonds) is what made Michael Milken rich. For a while, at least.
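A quick simulation illustrates the diversification claim, under the strong assumption that the positions are independent (the numbers are the toy model's, not real market data):

```python
# Shorting one lottery-like instrument (collect 11, pay 1000 with prob. 1%)
# exposes you to a rare -989 blow-up; shorting many *independent* such
# instruments concentrates the average profit near its expectation of +1.
# Independence is the crucial assumption: correlated blow-ups break this.
import random
random.seed(0)

def short_one():
    return 11 - (1000 if random.random() < 0.01 else 0)

single_bets = [short_one() for _ in range(10_000)]
portfolio_avgs = [sum(short_one() for _ in range(1_000)) / 1_000
                  for _ in range(100)]

print("worst single short:", min(single_bets))
print("portfolio averages range:",
      round(min(portfolio_avgs), 2), "to", round(max(portfolio_avgs), 2))
```

The single shorts include catastrophic losses, while the 1,000-position portfolio averages stay in a narrow band around +1, which is the Milken-style trade (and, as noted, it works only until the blow-ups start correlating).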

market segmentation caused by the different risk profiles

This certainly exists, though it's more complicated than just unwillingness to touch skewed and heavy-tailed securities.

Penny stocks (and high-beta instruments generally, such as deep out-of-the-money options) display this behaviour in real life.

In real life shorting penny stocks will run into some transaction-costs and availability-to-borrow difficulties, but options are contracts and you can write whatever options you want. So are you saying that selling deep OOM options is a free lunch?

As for the rest, you are effectively arguing that EMH is wrong :-)

Full disclosure: I am not a fan of EMH.

Comment author: Salemicus 11 August 2015 08:47:09PM *  1 point [-]
  1. Who says this risk is diversifiable? Nothing in the toy model I gave you said the risk was diversifiable. Maybe all the X-like instruments are correlated.
  2. No, I'm not saying that selling deep OOM options is a free lunch, because of the risk profile. And these are definitely not diversifiable.
  3. I am not arguing that EMH is wrong. I have given you a toy model, where a suitably defined investor cannot make money but can lose money. The model is entirely consistent with the EMH, because all prices reflect and incorporate all relevant information.
Comment author: Lumifer 11 August 2015 08:53:18PM 0 points [-]

toy model

Oh, I thought we were talking about reality. EMH claims to describe reality, doesn't it?

As to toy models, if I get to define what classes of investors exist and what they do, I can demonstrate pretty much anything. Of course it's possible to set up a world where "a suitably defined investor cannot make money but can lose money".

And deep OOM options are diversifiable -- there is a great deal of different markets in the world.

Comment author: Salemicus 11 August 2015 09:03:27PM 1 point [-]

Oh, I thought we were talking about reality. EMH claims to describe reality, doesn't it?

Yeah, but you wanted "a scenario where everything is happening pre-tax, there are no transaction costs, we're operating in risk-adjusted terms and, to make things simple, the risk-free rate is zero. Moreover, the markets are orderly and liquid." That doesn't describe reality, so describing events in your scenario necessitates a toy model.

In the real world, it is trivial to show how you can lose money even if the EMH is true: you have to pay tax, transaction costs are non-zero, the ex post risk is not known, etc.

deep OOM options are diversifiable -- there is a great deal of different markets in the world.

There's still a lot of correlation. Selling deep OOM options and then running into unexpected correlation is exactly how LTCM went bust. It's called "picking up pennies in front of a steamroller" for a reason.

Comment author: Davidmanheim 11 August 2015 04:22:39AM 1 point [-]

Yes. Unless you think that all possible market information is reflected in prices now, before it even becomes available, someone makes money when new information emerges and moves the market.

Comment author: Lumifer 11 August 2015 02:29:17PM *  0 points [-]

Yes, you can (theoretically) make money by front-running the market. But I don't think you can systematically lose money that way (and stay within EMH) and that's the question under discussion.

Comment author: ChristianKl 11 August 2015 06:17:55PM 1 point [-]

If someone is making money by front-running the market, another person on the other side of the trade is losing money.

Comment author: Viliam 11 August 2015 08:07:43AM 12 points [-]

I guess the difference is that if you offer to sell a ton of gold for $1, you will find a buyer, but if you offer to buy a ton of gold for $1, you will not find a seller.

The inverse strategy will not produce the inverse result.

Comment author: pcm 11 August 2015 07:01:31PM 2 points [-]

Yes, for strategies with low enough transaction costs (i.e. for most buy-and-hold like strategies, but not day-trading).

It will be somewhat hard for ordinary investors to implement the inverse strategies, since brokers that cater to them restrict which stocks they can sell short (professional investors usually don't face this problem).

The EMH is only a loose approximation to reality, so it's not hard to find strategies that underperform on average by something like 5% per year.

Comment author: VoiceOfRa 11 August 2015 04:46:21AM 3 points [-]

No, because you can't sell what you don't have.

Comment author: Lumifer 11 August 2015 07:16:59PM 3 points [-]

In the financial markets you can, easily enough.

Comment author: VoiceOfRa 12 August 2015 07:20:22AM 2 points [-]

Sort of. You have to pay someone additional money for the right/ability to do so.

Comment author: Lumifer 12 August 2015 03:30:45PM 1 point [-]

You have to pay a broker to sell what you have as well :-P

Comment author: VoiceOfRa 13 August 2015 05:13:42AM 2 points [-]

A lot less.

Also, this further breaks the asymmetry between making and losing money.

Comment author: Lumifer 13 August 2015 02:31:15PM 0 points [-]

A lot less.

I think you're mistaken about that. As an empirical fact, it depends. What you are missing is the mechanism whereby, when you sell a stock short, you don't get to withdraw the cash (for obvious reasons). The broker keeps it until you cover your short and basically pays you interest on the cash deposit. Right now in most of the first world that interest is minuscule because money is very cheap, but that is not the case always or everywhere.

It is perfectly possible to short a stock, cover it at exactly the same price and end up with more money in your account.

Comment author: VoiceOfRa 14 August 2015 04:54:06AM 2 points [-]

I think you're mistaken about that. As an empirical fact, it depends. What you are missing is the mechanism whereby, when you sell a stock short, you don't get to withdraw the cash (for obvious reasons). The broker keeps it until you cover your short and basically pays you interest on the cash deposit. Right now in most of the first world that interest is minuscule because money is very cheap, but that is not the case always or everywhere.

Maybe if you have the right connections, and the broker really trusts you. The issue is: suppose you short a stock, the price goes up, and you can't cover it. Someone has to assume that risk, and of course will want a risk premium for doing so.

Comment author: Lumifer 14 August 2015 02:29:16PM 1 point [-]

Maybe if you have the right connections, and the broker really trust you.

It doesn't have anything to do with connections or broker trust. It's standard operating practice for all broker clients.

The issue is suppose you short a stock, the price goes up and you can't cover it.

If the price goes sufficiently up, you get a margin call. If you can't meet it, the broker buys the stock to cover using the money in your account without waiting for your consent. The broker has some risk if the stock gaps (that is, the price moves discontinuously, it jumps directly from, say, $20 to $40), but that's part of the risk the broker normally takes.
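The margin-call mechanics described here can be put in numbers. The sketch below assumes a 30% maintenance-margin requirement and 50% initial margin; both figures are illustrative assumptions, since actual requirements vary by broker and jurisdiction.

```python
# Toy sketch of short-sale margin mechanics (assumed requirements, for
# illustration only; real brokers set their own numbers).

def equity(cash, shares_short, price):
    # Account equity = cash held (sale proceeds + initial margin deposit)
    # minus the current cost of buying back the shorted shares.
    return cash - shares_short * price

def margin_call(cash, shares_short, price, maintenance=0.30):
    # A call is triggered when equity falls below the maintenance
    # fraction of the short position's current market value.
    return equity(cash, shares_short, price) < maintenance * shares_short * price

# Short 100 shares at $20 with 50% initial margin: cash = 2000 + 1000.
cash = 3000.0
print(margin_call(cash, 100, 20.0))  # False: plenty of equity
print(margin_call(cash, 100, 23.0))  # False: equity 700 vs requirement 690
print(margin_call(cash, 100, 24.0))  # True: equity 600 vs requirement 720
```

If the stock gaps from $20 to $40 overnight, equity goes to -1000 before any call can be met, which is exactly the residual risk the broker carries.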

Comment author: Douglas_Knight 27 August 2015 11:07:18PM 1 point [-]

Actually, when you short a stock, you must pay an interest rate to the person from whom you borrowed the stock. That interest rate varies from stock to stock, but is always above the risk-free rate. Thus, if you short a stock and do nothing interesting with the cash and eventually cover it at the original price, you will lose money.
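Whether a short covered at an unchanged price nets a small gain (Lumifer's claim) or a loss (Douglas_Knight's) comes down to the rebate interest earned on the sale proceeds versus the stock-borrow fee. A minimal sketch with hypothetical rates:

```python
# Hypothetical numbers illustrating the disagreement above: cover a short
# at the same price it was sold at, and the net P&L is just rebate
# interest minus the borrow fee. Both rates are assumptions, not quotes.

def short_pnl(price, shares, days, rebate_rate, borrow_rate):
    """Net P&L of shorting at `price` and covering at the same price.

    rebate_rate: annualized interest the broker pays on the short proceeds.
    borrow_rate: annualized fee paid to the stock lender.
    """
    proceeds = price * shares
    interest_earned = proceeds * rebate_rate * days / 365
    borrow_fee = proceeds * borrow_rate * days / 365
    return interest_earned - borrow_fee

# Lumifer's scenario: meaningful rebate, negligible borrow fee -> small gain.
print(short_pnl(20.0, 100, 30, rebate_rate=0.05, borrow_rate=0.0))   # ~8.22

# Douglas_Knight's scenario: borrow fee exceeds the rebate -> a loss.
print(short_pnl(20.0, 100, 30, rebate_rate=0.01, borrow_rate=0.03))  # ~-3.29
```

So both comments can be right, depending on the prevailing interest rate and how hard the particular stock is to borrow.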

Comment author: James_Miller 10 August 2015 11:36:07PM 3 points [-]

No, because of taxes, transaction costs, and risk/return issues.

Comment author: Good_Burning_Plastic 13 August 2015 09:34:02PM *  1 point [-]

The EMH works because everybody is trying to gain money, so everybody except you trying to gain money while you try to lose money isn't the symmetric situation. The symmetric situation is everybody trying to lose money, in which case it'd indeed be pretty hard to do so. And if everybody except you were trying to lose money and you were trying to gain money, it'd be pretty easy for you to do so. I think this would also be the case in the absence of taxes and transaction costs. IOW I think Viliam nailed it and the other replies are chasing red herrings.

Comment author: Username 10 August 2015 01:08:08PM 6 points [-]

A database of philosophical ideas

Current Total Ideas: 17,046

Comment author: Dahlen 10 August 2015 02:27:29PM 4 points [-]

Meta: How come there have been so many posts recently by the generic Username account? More people wanting to preserve anonymity, or just one person who can't be bothered to make an account / own up to most of what they say?

Comment author: Username 10 August 2015 09:40:17PM 5 points [-]

The similar formatting of the comments suggests that in this thread it's mostly one person with a lot of links to share.

Personally, I just haven't been bothered to make an account, and have been using the username account exclusively for about 5 years. I'd estimate 30-50% of all the posts on the account were made by me over this timeframe, though writing style suggests to me that a good number of people have used it as a one-shot throwaway, and several people have used it many times.

Comment author: Vaniver 10 August 2015 06:08:07PM 5 points [-]

just one person who can't be bothered to make an account

My dominant hypothesis is at least three people who couldn't be bothered to make accounts, and that this has further normalized the usage of Username as a generic lurker account.

Comment author: ChristianKl 10 August 2015 06:30:39PM 2 points [-]

That leaves the question of whether that's okay or whether we should simply disable the account.


Comment author: Dahlen 11 August 2015 09:06:58AM 2 points [-]

Oh, I wasn't suggesting that; I was just hoping that whoever has been exclusively posting from that account can take a hint and consider using LW the typical way. It's confusing to see so many posts at once by that account and not know whether there's one person or several using it.

Comment author: Elo 10 August 2015 10:45:14PM *  1 point [-]

I think it's a reasonable solution for people not wanting to make an account, or for the occasional anonymous post. I have used it once or twice to make separate comments.

But I should add that you can see a list of your nearest meetups if you set your location on your own account.

Edit: holy hell, the person who posted all the OT comments here is really annoying and should make an account and stop link-dropping. If the account is being abused that badly we should shut it down, and I would change my vote in the poll.

Comment author: [deleted] 11 August 2015 04:23:53PM 3 points [-]

From the upvote to downvote ratio it looks like more members think the posts by Username in the open thread are worthwhile - at the time of writing they are mostly among the higher top-level comments on this week's open thread, and several have sparked at least a bit of subsequent discussion in the form of follow up comments.

True, they're only links (with quoted text) but this doesn't particularly strike me as abuse of the Username account.

Comment author: Elo 11 August 2015 05:45:27PM 1 point [-]

I am suspicious of the link-drop attitude to posting anywhere. Even if it looks to have value added this time.

Comment author: iarwain1 12 August 2015 09:27:05PM *  2 points [-]

There's a new article on academia.edu on potential biases amongst philosophers of religion: Irrelevant influences and philosophical practice: a qualitative study.

Abstract:

To what extent do factors such as upbringing and education shape our philosophical views? And if they do, does this cast doubt on the philosophical results we have obtained? This paper investigates irrelevant influences in philosophy through a qualitative survey on the personal beliefs and attitudes of philosophers of religion. In the light of these findings, I address two questions: an empirical one (whether philosophers of religion are influenced by irrelevant factors in forming their philosophical attitudes), and an epistemological one (whether the influence of irrelevant factors on our philosophical views should worry us). The answer to the empirical question is a confident yes, to the epistemological question, a tentative yes.

Comment author: g_pepper 12 August 2015 11:28:08PM 0 points [-]

To what extent do factors such as upbringing and education shape our philosophical views? And if they do, does this cast doubt on the philosophical results we have obtained?

I would expect a person's education to shape his/her philosophical views; if one's philosophy is not shaped by one's education, then one has had a fairly superficial education.

Comment author: iarwain1 13 August 2015 12:27:05AM 0 points [-]

She means that you're biased towards the way you were taught vs. alternatives, regardless of the evidence. The example she gives (from G.A. Cohen) is that most Oxford grads tend to accept the analytic / synthetic distinction while most Harvard grads reject it.

Comment author: g_pepper 13 August 2015 01:16:21AM 0 points [-]

Yes, I got that from reading the paper. However, the wording of the abstract seems quite sloppy; taken at face value it suggests that a person's education, K-postdoc (not to mention informal education) should have no influence on the person's philosophy.

Moreover, the paper's point (illustrated by the Cohen example) is not really surprising; one's views on unanswered questions are apt to be influenced by the school of thought in which one was educated -- were this not the case, the choice of which university to attend and which professor to study under would be somewhat arbitrary.

Further, I don't think that she made a case that philosophers are ignoring the evidence, only that a philosopher's educational background continues to exert an influence throughout his/her career. From a Bayesian standpoint this makes sense: loosely speaking, when the philosopher leaves graduate school, his/her education and life experience to that point constitute his/her priors, which he/she updates as new evidence becomes available. While the philosopher's priors are altered by evidence, they are not necessarily eliminated by it. This is not problematic unless overwhelming evidence one way or the other is available and ignored. The fact that whether or not to accept the analytic / synthetic distinction is still an open question suggests that no such overwhelming evidence exists, so I am not seeing a problem with the fact that Oxford grads and Harvard grads tend (on average) to disagree on this issue.
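The Bayesian point here can be made concrete with a toy Beta-Bernoulli model: two agents with opposed priors who see the same modest evidence both update, yet remain apart; only a much larger body of evidence swamps the priors. All numbers below are illustrative.

```python
# Toy Beta-Bernoulli illustration of "priors altered but not eliminated
# by evidence". The priors and data are made-up numbers.

def posterior_mean(prior_a, prior_b, successes, failures):
    # Beta(prior_a, prior_b) prior updated on Bernoulli observations;
    # posterior mean = (a + s) / (a + b + s + f).
    return (prior_a + successes) / (prior_a + prior_b + successes + failures)

# Strong opposing priors, modest shared evidence (6 successes, 4 failures):
oxford  = posterior_mean(8, 2, 6, 4)   # prior mean 0.8 -> posterior 0.70
harvard = posterior_mean(2, 8, 6, 4)   # prior mean 0.2 -> posterior 0.40

# With 100x the evidence the priors are swamped and the two nearly agree:
oxford_big  = posterior_mean(8, 2, 600, 400)
harvard_big = posterior_mean(2, 8, 600, 400)
```

Both posteriors moved toward the data, but with only ten observations the gap between the two schools persists, which is the situation g_pepper describes for still-open philosophical questions.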

Comment author: AstraSequi 12 August 2015 09:49:20AM 2 points [-]

A question that I noticed I'm confused about. Why should I want to resist changes to my preferences?

I understand that it will reduce the chance of any preference A being fulfilled, but my answer is that if the preference changes from A to B, then at that time I'll be happier with B. If someone told me "tonight we will modify you to want to kill puppies," I'd respond that by my current preferences that's a bad thing, but if my preferences change then I won't think it's a bad thing any more, so I can't say anything against it. If I had a button that could block the modification, I would press it, but I feel like that's only because I have a meta-preference that my preferences tend to maximizing happiness, and the meta-preference has the same problem.

A quicker way to say this is that future-me has a better claim to caring about what the future world is like than present-me does. I still try to work toward a better world, but that's based on my best prediction for my future preferences, which is my current preferences.

Comment author: Squark 12 August 2015 08:11:01PM *  4 points [-]

"I understand that it will reduce the chance of any preference A being fulfilled, but my answer is that if the preference changes from A to B, then at that time I'll be happier with B". You'll be happier with B, so what? Your statement only makes sense of happiness is part of A. Indeed, changing your preferences is a way to achieve happiness (essentially it's wireheading) but it comes on the expense of other preferences in A besides happiness.

"...future-me has a better claim to caring about what the future world is like than present-me does." What is this "claim"? Why would you care about it?

Comment author: AstraSequi 13 August 2015 11:24:05AM 0 points [-]

I don’t understand your first paragraph. For the second, I see my future self as morally equivalent to myself, all else being equal. So I defer to their preferences about how the future world is organized, because they're the one who will live in it and be affected by it. It’s the same reason that my present self doesn’t defer to the preferences of my past self.

Comment author: Squark 13 August 2015 08:10:11PM *  1 point [-]

Your preferences are by definition the things you want to happen. So, you want your future self to be happy iff your future self's happiness is your preference. Your ideas about moral equivalence are your preferences. Et cetera. If you prefer X to happen and your preferences are changed so that you no longer prefer X to happen, the chance X will happen becomes lower. So this change of preferences goes against your preference for X. There might be upsides to the change of preferences which compensate for the loss of X. Or not. Decide on a case-by-case basis, but ceteris paribus you don't want your preferences to change.

Comment author: RichardKennaway 12 August 2015 12:00:54PM 4 points [-]

Why should I want to resist changes to my preferences?

Because that way leads to

  • wireheading

  • indifference to dying (which wipes out your preferences)

  • indifference to killing (because the deceased no longer has preferences for you to care about)

  • readiness to take murder pills

and so on. Greg Egan has a story about that last one: "Axiomatic".

Whereupon I wield my Cudgel of Modus Tollens and conclude that one can and must have preferences about one's preferences.

So much for the destructive critique. What can be built in its place? What are the positive reasons to protect one's preferences? How do you deal with the fact that they are going to change anyway, that everything you do, even if it isn't wireheading, changes who you are? Think of yourself at half your present age — then think of yourself at twice your present age (and for those above the typical LessWrong age, imagined still hale and hearty).

Which changes should be shunned, and which embraced?

An answer is visible in both the accumulated wisdom of the ages[1] and in more recently bottled wine. The latter is concerned with creating FAI, but the ideas largely apply also to the creation of one's future selves. The primary task of your life is to create the person you want to become, while simultaneously developing your idea of what you want to become.

[1] Which is not to say I think that Lewis' treatment is definitive. For example, there is hardly a word there relating to intelligence, rationality, curiosity, "internal" honesty (rather than honesty in dealing with others), vigour, or indeed any of Eliezer's "12 virtues", and I think a substantial number of the ancient list of Roman virtues don't get much of a place either. Lewis has sought the Christian virtues, found them, and looked no further.

Comment author: AstraSequi 13 August 2015 11:53:16AM *  0 points [-]

Because that way leads to wireheading, indifference to dying (which wipes out your preferences), indifference to killing (because the deceased no longer has preferences for you to care about), readiness to take murder pills, and so on. Greg Egan has a story about that last one: "Axiomatic".

Whereupon I wield my Cudgel of Modus Tollens and conclude that one can and must have preferences about one's preferences.

I already have preferences about my preferences, so I wouldn’t self-modify to kill puppies, given the choice. I don’t know about wireheading (which I don’t have a negative emotional reaction toward), but I would resist changes for the others, unless I was modified to no longer care about happiness, which is the meta-preference that causes me to resist. The issue is that I don’t have an “ultimate” preference that any specific preference remain unchanged. I don’t think I should, since that would suggest the preference wasn’t open to reflection, but it means that the only way I can justify resisting a change to my preferences is by appealing to another preference.

What can be built in its place? What are the positive reasons to protect one's preferences? How do you deal with the fact that they are going to change anyway, that everything you do, even if it isn't wireheading, changes who you are? …

An answer is visible in both the accumulated wisdom of the ages[1] and in more recently bottled wine. The latter is concerned with creating FAI, but the ideas largely apply also to the creation of one's future selves. The primary task of your life is to create the person you want to become, while simultaneously developing your idea of what you want to become.

I know about CEV, but I don’t understand how it answers the question. How could I convince my future self that my preferences are better than theirs? I think that’s what I’m doing if I try to prevent my preferences from changing. I only resist because of meta-preferences about what type of preferences I should have, but the problem recurses onto the meta-preferences.

Comment author: RichardKennaway 13 August 2015 03:40:31PM 0 points [-]

The issue is that I don’t have an “ultimate” preference

Do you need one?

If you keep asking "why" or "what if?" or "but suppose!", then eventually you will run out of answers, and it doesn't take very many steps. Inductive nihilism — thinking that if you have no answer at the end of the chain then you have no answer to the previous step, and so on back to the start — is a common response, but to me it's just another mole to whack with Modus Tollens, a clear sign that one's thinking has gone wrong somewhere. I don't have to be able to spot the flaw to be sure there is one.

How could I convince my future self that my preferences are better than theirs?

Your future self is not a person as disconnected from yourself as the people you pass in the street. You are creating all your future yous minute by minute. Your whole life is a single, physically continuous object:

"Suppose we take you as an example. Your name is Rogers, is it not? Very well, Rogers, you are a space-time event having duration four ways. You are not quite six feet tall, you are about twenty inches wide and perhaps ten inches thick. In time, there stretches behind you more of this space-time event, reaching to perhaps nineteen-sixteen, of which we see a cross-section here at right angles to the time axis, and as thick as the present. At the far end is a baby, smelling of sour milk and drooling its breakfast on its bib. At the other end lies, perhaps, an old man someplace in the nineteen-eighties.

"Imagine this space-time event that we call Rogers as a long pink worm, continuous through the years, one end in his mother's womb, and the other at the grave..."

Robert Heinlein, "Life-line"

Do you want your future self to be fit and healthy? Well then, take care of your body now. Do you wish his soul to be as healthy? Then have a care for that also.

Comment author: Viliam 12 August 2015 12:36:59PM 7 points [-]

If I offered you now a pill that would make you (1) look forward to suicide, and (2) immediately kill yourself, feeling extremely happy about the fact that you are killing yourself... would you take it?

Comment author: AstraSequi 13 August 2015 11:26:54AM 0 points [-]

No, but I don’t see this as a challenge to the reasoning. I refuse because of my meta-preference about the total amount of my future-self’s happiness, which will be cut off. A nonzero chance of living forever means the amount of happiness I received from taking the pill would have to be infinite. But if the meta-preference is changed at the same time, I don’t know how I would justify refusing.

Comment author: Tem42 12 August 2015 01:08:03PM *  2 points [-]

As far as I am aware, people only resist changing their preferences because they don't fully understand the basis and value of their preferences and because they often have a confused idea of the relationship between preferences and personality.

Generally you should define your basic goals and change your preference to meet them, if possible. You should also be considering whether all your basic goals are optimal, and be ready to change them.

If someone told me "tonight we will modify you to want to kill puppies," I'd respond that by my current preferences that's a bad thing, but if my preferences change then I won't think it's a bad thing any more.

You may find that you do have a moral system that is more consistent (and hopefully, more good) if you maintain a preference for not-killing puppies. Hopefully this moral system is well enough thought-out that you can defend keeping it. In other words, your preferences won't change without a good reason.

If I had a button that could block the modification, I would press it

This is a bad thing. If you have a good reason to change your preferences (and therefore your actions), and you block that reason, this is a sign that you need to understand your motivations better.

"tonight we will modify you to want to kill puppies,"

I think you may be assuming that the person modifying your preferences is doing so both 'magically' and without reason. Your goal should be to kill this person, and start modifying your preferences based on reason instead. On the other hand, if this person is modifying your preferences through reason, you should make sure you understand the rhetoric and logic used, but as long as you are sure that what e says is reasonable, you should indeed change your preference.

Of course, another issue may be that we are using 'preference' in different ways. You might find the act of killing puppies emotionally distasteful even if you know that it is necessary. It is an interesting question whether we should work to change our preferences to enjoy things like taking out the trash, changing diapers, and killing puppies. Most people find that they do not have to have an emotional preference for dealing with unpleasant tasks, and manage to get by with a sense of 'job well done' once they have convinced themselves intellectually that a task needs to be done. It is understandable if you feel that 'job well done' might not apply to killing puppies, but I am fairly agnostic on the matter, so I won't try to convince you that puppy population control is your next step to sainthood. However, if after much introspection you do find that puppies need to be killed and you seriously don't like doing it, you might want to consider paying someone else to kill puppies for you.

Edited for format and to remove an errant comma.

Comment author: AstraSequi 13 August 2015 11:40:52AM *  0 points [-]

As far as I am aware, people only resist changing their preferences because they don't fully understand the basis and value of their preferences and because they often have a confused idea of the relationship between preferences and personality.

Generally you should define your basic goals and change your preference to meet them, if possible. You should also be considering whether all your basic goals are optimal, and be ready to change them.

Yes, that’s the approach. The part I think is a problem for me is that I don’t know how to justify resisting an intervention that would change my preferences, if the intervention also changes the meta-preferences that apply to those preferences.

When I read the discussions here on AI self-modification, I think: why should the AI try to make its future-self follow its past preferences? It could maximize its future utility function much more easily by self-modifying such that its utility function is maximized in all circumstances. It seems to me that timeless decision theory advocates doing this, if the goal is to maximize the utility function.

I don’t fully understand my preferences, and I know there are inconsistencies, including acceptable ones like changes in what food I feel like eating today. If you have advice on how to understand the basis and value of my preferences, I’d appreciate hearing it.

I think you may be assuming that the person modifying your preferences is doing so both 'magically' and without reason.

I’m assuming there aren’t any side effects that would make me resist based on the process itself, so we can say that’s “magical”. Let’s say they’re doing it without reason, or for a reason I don’t care about, but they credibly tell me that they won’t change anything else for the rest of my life. Does that make a difference?

Of course, another issue may be that we are using 'preference' in different ways. You might find the act of killing puppies emotionally distasteful even if you know that it is necessary. It is an interesting question whether we should work to change our preferences to enjoy things like taking out the trash, changing diapers, and killing puppies.

I’m defining preference as something I have a positive or negative emotional reaction about. I sometimes equivocate with what I think my preferences should be, because I’m trying to convince myself that those are my true preferences. The idea of killing puppies was just an example of something that’s against my current preferences. Another example is “we will modify you from liking the taste of carrots to liking the taste of this other vegetable that tastes different but is otherwise identical to carrots in every important way.” This one doesn’t have any meta-preferences that apply.

Comment author: Tem42 13 August 2015 04:11:54PM 0 points [-]

I see that this conversation is in danger of splitting into different directions. Rather than make multiple different reply posts or one confusing essay, I am going to drop the discussion of AI, because that is discussed in a lot of detail elsewhere by people who know a lot more than I.

meta-preferences

We are using two different models here, and while I suspect that they are compatible, I'm going to outline mine so that you can tell me if I'm missing the point.

I don't use the term meta-preferences, because I think of all wants/preferences/rules/and general-preferences as having a scope. So I would say that my preference for a carrot has a scope of about ten minutes, appearing intermittently. This falls under the scope of my desire to eat, which appears more regularly and for greater periods of time. This in turn falls under the scope of my desire to have my basic needs met, which is generally present at all times, although I don't always think about it. I'm assuming that you would consider the latter two to be meta-preferences.

I don’t know how to justify resisting an intervention that would change my preferences

I would assume that each preference has a value to it. A preference to eat carrots has very little value, being a minor aesthetic judgement. A preference to meet your basic needs would probably have a much higher value to it, and would probably go beyond the aesthetic.

If it were easy for me to modify my preferences away from cheeseburgers, I can find a clear reason (or ten) to do so. I justify it by appealing to my higher-level preferences (I would like to be healthier). My preference to be healthier has more value than a preference to enjoy a single meal -- or even 100 meals.

But if it were easy to modify my preferences away from carrots, I would have to think twice. I would want a reason. I don't think I could find a reason.

Let’s say they’re doing it without reason, or for a reason I don’t care about, but they credibly tell me that they won’t change anything else for the rest of my life.

I would set up an example like this: I like carrots. I don't like bell peppers. I have an opportunity to painlessly reverse these preferences. I don't see any reason to prefer or avoid this modification. It makes sense for me to be agnostic on this issue.

I would set up a more fun example like this: I like Alex. I do not like Chris. I have an opportunity to painlessly reverse these preferences.

I would hope that I have reasons for liking Alex, and not liking Chris... but if I don't have good reasons, and if there will not be any great social awkwardness about the change, then yes, perhaps Alex and Chris are fungible. If they are fungible, this may be a sign that I should be more directed in who I form attachments with.

The part I think is a problem for me is that I don’t know how to justify resisting an intervention that would change my preferences, if the intervention also changes the meta-preferences that apply to those preferences.

In the Alex/Chris example, it would be interesting to see if you ever reached a preference that you did mind changing. For example, you might be willing to change a preference for tall friends over short friends, but you might not be willing to change a preference for friends that kick orphans with friends who help orphans.

If you do find a preference that you aren't willing to change, it is interesting to see what it is based on -- a moral system (if so, how formalized and consistent is it), an aesthetic preference (if so, are you overvaluing it? Undervaluing it?), or social pressures and norms (if so, do you want those norms to have that influence over you?).

It is arguable, but not productive, to say that ultimately no one can justify anything. I can bootstrap up a few guidelines that I base lesser preferences on -- try not to hurt unnecessarily (ethical), avoid bits of dead things (aesthetic), and don't walk around town naked (social). I would not want to switch out these preferences without a very strong reason.

Comment author: Dahlen 11 August 2015 09:38:06AM 2 points [-]

What examples are there of jobs which can make use of high general intelligence, that at the same time don't require rare domain-specific skills?

I have some years of college left before I'll be a certified professional, and I'm good but not world-class awesome at a variety of things, yet judging by encounters with some well and truly employed people, I find myself wondering how come I'm either not employed or duped into working for free while these doofuses have well-paying jobs. The answer tends to be lack of trying on my part, but it would be quite a nasty surprise if I did begin to try and it turned out that my most relied-upon quality wasn't worth much. So, better to ask: how much is intelligence worth for earning money when not supplemented by the relevant pieces of paper or loads of experience?

Comment author: btrettel 11 August 2015 12:40:21PM *  3 points [-]

Programming is a skill, but not a particularly rare one. Beyond a certain level of intelligence, I don't think there's much if any correlation between programming ability and intelligence. Moreover, I think programming is one area where standard credentials don't matter too much. If you have a good project on GitHub, that can be enough.

gwern wrote something related before:

I've often seen it said on Hacker News that programmers could clean up in many other occupations because writing programs would give them a huge advantage. And I believe Michael Vassar has said here that he thought a LWer could take over a random store in SF and likewise clean up.

Personally, I think going off raw intelligence doesn't work so well, especially if you'll be reinventing the wheel because of your lack of domain knowledge. Getting rare skills which are in demand is a smart strategy, and you'd be better off going that route. Here's a good book built on that premise.

Comment author: Lumifer 11 August 2015 03:15:16PM 2 points [-]

What examples are there of jobs which can make use of high general intelligence, that at the same time don't require rare domain-specific skills?

A manager :-) A business manager, a small business owner, a civil servant, a dictator, a leader of the free world :-/

Generally speaking, there is something of a Catch-22 situation. The low-level entry jobs are easy to get into, but they don't really care about your intelligence. But high-level jobs where intelligence matters require demonstration not only of intelligence, but also of the ability to use it which basically means they want to see past achievements and accomplishments.

There are shortcuts, but they are usually called "graduate schools".

Comment author: ChristianKl 11 August 2015 06:10:54PM 0 points [-]

The low-level entry jobs are easy to get into, but they don't really care about your intelligence.

In Germany technical telephone support would be a low-level job where intelligence is useful, but I don't know to what extent that exists in the US, where the language situation is different.

Comment author: VoiceOfRa 18 August 2015 02:48:46AM *  2 points [-]

In the US those jobs tend to be outsourced to other English speaking countries with lower wages, most commonly India.

Comment author: ChristianKl 11 August 2015 10:15:31AM 1 point [-]

There are plenty of people in MENSA who don't have high-paying jobs.

Comment author: shminux 11 August 2015 02:46:36PM 0 points [-]

Apply your general intelligence to figuring out what you are especially good at, then see if there are relevant paid jobs.

Comment author: WalterL 12 August 2015 08:04:58PM 1 point [-]

I think he's trying to do that, by making this post.

@OP: the best place I've seen for lazy smart people to make money is in coding jobs. If 4 year college is out, go to an online code learning place and get some nonsense degree. (App Academy, or whatevs). Then apply a bunch. If you have a friend who is a coder, see if they have a hookup.

Once you have a job, the only way to lose it is to be aggressively inept or to engage in one of the third-rail categories of HR: racism, sexism, or any other ism.

Comment author: VoiceOfRa 18 August 2015 02:49:54AM 4 points [-]

Or for the company you work for to go bust.

Comment author: Username 10 August 2015 01:37:47PM 2 points [-]

An Introverted Writer’s Lament by Meghan Tifft

Whether we’re behind the podium or awaiting our turn, numbing our bottoms on the chill of metal foldout chairs or trying to work some life into our terror-stricken tongues, we introverts feel the pain of the public performance. This is because there are requirements to being a writer. Other than being a writer, I mean. Firstly, there’s the need to become part of the writing “community”, which compels every writer who craves self respect and success to attend community events, help to organize them, buzz over them, and—despite blitzed nerves and staggering bowels—present and perform at them. We get through it. We bully ourselves into it. We dose ourselves with beta blockers. We drink. We become our own worst enemies for a night of validation and participation.

Comment author: Tem42 10 August 2015 06:23:52PM 8 points [-]

This is interesting, but I think that it is using an incorrect definition of introversion. I interpret an introvert as someone who prefers to spend time by themselves or in situations in which they are working on their own, rather than in situations in which they are interacting with other people. This does not mean that they necessarily need to feel extreme stress at public speaking or at parties/social events. They may feel bored, annoyed, frustrated, or indifferent to these events, or they may even like them, but feel the opportunity cost of the time they take is not really worth it.

"our terror-stricken tongues, we introverts feel the pain of the public performance"; "blitzed nerves and staggering bowels"; "We bully ourselves into it. We dose ourselves with beta blockers. We drink. We become our own worst enemies"

This doesn't sound like introversion. This sounds like an anxiety disorder.

Comment author: WalterL 12 August 2015 08:06:48PM 3 points [-]

Hmm, I generally read introvert as "recharges when alone", whereas extrovert "recharges with others". I don't usually associate introvert with being unable to do public speaking. That's a phobia, isn't it?

Comment author: Clarity 10 August 2015 11:25:11AM *  3 points [-]

I would like to see some targeted efforts to make the Sequences and other rationality materials available to less aspirational, curious or intellectual audiences.

Rationality fiction reaches out to curious audiences. Intellectual audiences may stumble upon rationality material while researching their respective fields of interest. Aspirants to rationality may stumble upon it while looking to better themselves and those around them.

Many ordinary people can benefit from the concepts here. And they will likely find their way to it, should there be an evident benefit to them, by contact with the above classes of people who are in direct contact with first-generation rationality materials produced here. I can see this at my local LW group, where it was hard to find someone who actually read LessWrong, based on my one visit. Though, it may be an artifact of the way the group was marketed in the past outside of the community.

Those who find learning difficult and distasteful can also benefit from rationality materials. So, I would like to start a discussion of suggestions by which material here could be adapted for use by a broader audience. I'll start us off by introducing the existing evidence on the subject of evidence-based teaching. This is easy, because a gentleman by the name of John Hattie synthesised 800 meta-analyses in education to figure that entire field out.

Using his own example, I will share a small mnemonic that future posters may like to keep in mind to keep things more accessible to those less cognitively flexible. It may be easier to take this approach, of consciously adopting writing styles that pander to the lowest common denominator, without reducing the sophistication of the content, than to restyle past discussions and sequences for that purpose.

DEFENDS

D ecide on audience, goals, and position

E stimate main ideas and details

F igure best order of main ideas and details

E xpress the position in the opening

N ote each main idea and supporting points

D rive home the message in the last sentence

S earch for errors and correct

Comment author: [deleted] 10 August 2015 01:35:08PM *  3 points [-]

Have a look at the postings by Gleb Tsipursky who has been deeply involved in exactly such an enterprise: trying to bring rationality to much more widespread ("ordinary") audiences through a nonprofit organisation "Intentional Insights".

It is a controversial goal, and he's certainly faced criticism here on LW for the approach he takes. But very close to the main ideas expressed in your comment.

(I am not affiliated with Gleb or Intentional Insights, just thought it was relevant enough to mention in this context)

Comment author: Stephen_Cole 10 August 2015 02:20:29PM 2 points [-]

Has there been discussion of Jack Good's principle of nondogmatism? (see Good Thinking, page 30).

The principle, stated simply in my bastardized version, is to believe no thing with probability 1. It seems to underlie Good's type 2 rationality (to maximize expected utility, within reason).

This is (almost) in accord with Lindley's concept of Cromwell's rule (see Lindley's Understanding Uncertainty or https://en.wikipedia.org/wiki/Cromwell%27s_rule). And seems to be closely related to Jaynes' mind projection fallacy.

Comment author: Tem42 10 August 2015 05:30:06PM 4 points [-]

There have been discussions on this topic, although perhaps not framed as nondogmatism. If you have not read 0 and 1 are not probabilities and infinite certainty, you might find them and related articles interesting.

Comment author: Thomas 10 August 2015 11:05:04AM *  2 points [-]

I see yet another problem with the Singularity. Say that a group of people manages to ignite it. Until the day before, they, the team, were forced to buy their food and everything else. Now, what does the baker or pizza guy have to offer them anymore?

The team has everything to offer to everybody else, but everybody else has nothing to give them back as payment for the services.

The "S team" may decide to give a colossal charity. A bigger one than everything we currently all combined poses. To each. That, if the Singularity is any good, of course.

But, will they really do that?

They might decide not to. What then?

Comment author: RichardKennaway 10 August 2015 12:12:33PM 2 points [-]

What then?

They take over and rule like gods forever, reducing the mehums to mere insects in the cracks of the world.

Comment author: Thomas 10 August 2015 01:05:01PM 0 points [-]

Yes. A farmer does not want to give a bushel of wheat to these "future Singularity inventors" for free. Those guys may starve to death for all he cares, if they don't pay for the said bushel of wheat with good money.

They understand it.

Now, they don't need any wheat anymore. Nor anything else this farmer has to offer. Or anybody else, for that matter. Commerce has stopped here, and they see no reason for giving tremendous gifts around. They have paid for their wheat, vino and meat. Now they are not shopping anymore.

The farmer should understand.

Comment author: RichardKennaway 10 August 2015 01:25:14PM *  3 points [-]

The farmer will never know about these "Singularity inventors".

The inventors themselves may not know. The scenario presumes that the "Singularity inventors" have control of their "invention" and know that it is "the creation of the Singularity". The history of world-changing inventions of the past suggests that no-one will be in control of "the Singularity". No-one at the time will know that that is what it is, and will participate in whatever it looks like according to their own local interests.

The farmer will not know about the Singularity, but he's probably on Facebook.

Comment author: [deleted] 11 August 2015 01:36:57PM 0 points [-]

The inventors themselves may not know. The scenario presumes that the "Singularity inventors" have control of their "invention" and know that it is "the creation of the Singularity". The history of world-changing inventions of the past suggests that no-one will be in control of "the Singularity". No-one at the time will know that that is what it is, and will participate in whatever it looks like according to their own local interests.

Except for all the people on this site, who talk nonstop about deliberately setting off such a thing?

Comment author: RichardKennaway 11 August 2015 03:04:45PM *  1 point [-]

Except for all the people on this site, who talk nonstop about deliberately setting off such a thing?

"Why, so can I, and so can any man
but will they foom when you do conjure them?"

Comment author: Salemicus 11 August 2015 08:37:08PM *  3 points [-]

The annotated RichardKennaway:

This is a quote from Henry IV part I, when Glendower is showing off to the other rebels, claiming to be a sorceror, and Hotspur is having none of it.

Glendower:

I can call spirits from the vasty deep.

Hotspur:

Why, so can I, or so can any man

But will they come when you do call for them?

Comment author: [deleted] 11 August 2015 11:22:05PM 0 points [-]
Comment author: Mac 10 August 2015 03:35:44PM *  1 point [-]

I often see arguments on LessWrong similar to this, and I feel compelled to disagree.

1) The AI you describe is God-like. It can do anything at a lower cost than its competitors, and trade is pointless only if it can do anything at extremely low cost without sacrificing more important goals. Example: Hiring humans to clean its server room is fairly cheap for the AI if it is working on creating Heaven, so it would have to be unbelievably efficient to not find this trade attractive.

2) If the AI is God-like, an extremely small amount of charity is required to dramatically increase humanity’s standard of living. Will the S team give at least 0.0000001% of their resources to charity? Probably.

3) If the AI is God-like, and if the S team is motivated only by self-interest, why would they waste their time dealing with humans? They will inhabit their own paradise, and the rest of us will continue working and trading with each other.

The economic problems associated with AI seem to be relatively minor, and it pains me to see smart people wasting their time on them. Let’s first make sure AI doesn’t paperclip our light cone - can we agree this is the dominant concern?

Comment author: Lumifer 10 August 2015 03:53:27PM 1 point [-]

The team has everything to offer to everybody else

You are assuming that the S team is in full control of their Singularity which is not very likely.

Comment author: WalterL 10 August 2015 08:06:41PM 0 points [-]

It feels pretty likely to me. An AI that grows ever more effective at optimizing its futures will not suddenly begin to question its goals. If so, whoever pulled off the creation of the AI is responsible for the future, based on what it wrote into the "goal list" of the proto-AI.

One part of the "goal list" is going to be some equivalent of "always satisfy Programmer's expressed desires" and "never let communication with Programmer lapse", to allow for fixing the problem if the AI starts turning people into paper clips. Side effect, Programmer is now God, but presumably (s)he will tolerate this crushing burden for the first few thousand years.

Comment author: ChristianKl 10 August 2015 08:48:48PM 3 points [-]

You can mess people up quite easily while still satisfying their expressed desires. The AGI can also talk the programmers into whatever position it considers reasonable.

"never let communication with Programmer lapse"

You just forbade the AGI from allowing the programmer to sleep.

Comment author: Lumifer 10 August 2015 08:17:34PM 1 point [-]

An AI that grows ever more effective at optimizing its futures will not suddenly begin to question its goals.

Oh, great. So MIRI can disband and we can cross one item off the existential-risk list....

some equivalent of "always satisfy Programmer's expressed desires"

Well, that idea has been explored on LW. Quite extensively, in fact.

Comment author: WalterL 10 August 2015 08:25:00PM 1 point [-]

Point of MIRI is making sure the goals are set up right, yeah? Like, the whole "AI is smart enough to fix its defective goals" is something we make fun of. No ghost in the machine, etc.

Whatever the outcome of the perfect goal set is (if MIRI's AI is, in fact, the one that takes over), it will presumably include a human ability to override in case of failure.

Comment author: ChristianKl 10 August 2015 08:47:21PM 2 points [-]

Point of MIRI is making sure the goals are set up right, yeah?

That's not the only point. It's also to keep the goals stable in the face of self modification.

Comment author: cousin_it 14 August 2015 11:03:40AM 1 point [-]

Important question that might affect the chances of humanity's survival:

Why is Bostrom's owl so ugly? I'm not much of a designer, but here's my humble attempt at a redesign :-)

Comment author: ChristianKl 14 August 2015 11:21:07AM 3 points [-]

Your owl looks cute and not scary. Framing AGIs as cute seems to go against the point.

Comment author: cousin_it 14 August 2015 11:46:33AM *  2 points [-]

Aha, that answers my question. I didn't realize that Bostrom's owl represented superintelligence, so I chose mine to be more like a reader stand-in.

If the owl is supposed to look scary and wrong, reminiscent of Langford's basilisk, then I agree that the original owl does the job okay. Though that didn't stop someone on Tumblr from being asked "why are you reading an adult coloring book?", which was the original impetus for me.

Is it possible to find an image that will look scary and wrong, but won't look badly drawn? Does anything here fit the bill?

Comment author: ChristianKl 14 August 2015 12:05:39PM 1 point [-]

There's the parable of sparrows who raise an owl: https://www.youtube.com/watch?v=7rRJ9Ep1Wzs That owl likely made it on the cover.

I don't think the owl has anything to do with the owls in the study hall ;)

Comment author: cousin_it 14 August 2015 03:28:25PM *  4 points [-]

OK, here's my next attempt with a well-drawn owl that looks scary instead of cute. What do you think?

Comment author: g_pepper 14 August 2015 05:30:12PM 4 points [-]

I actually like Bostrom's owl. I've always thought that Superintelligence has a really good cover illustration.

Comment author: cousin_it 14 August 2015 05:57:12PM *  2 points [-]

I like it too, because it has character, which few pictures do. But the asymmetrical distorted face just bugs me. And the ketchup stains on the collar. And the composition problems (lack of space below the subtitle, timid placement of trees, etc.) For some reason my brain didn't see them as creative choices, but as mechanical problems that need fixing. Maybe I'm oversensitive.

Comment author: Lumifer 17 August 2015 08:32:37PM 1 point [-]

But the asymmetrical distorted face just bugs me.

That might be meant as a reminder of the inhumanity.

Comment author: gjm 18 August 2015 02:27:55PM 0 points [-]

Stating the obvious: I don't think those stains are meant to be ketchup. (And maybe "owl with bloodstains" feels scarier than "owl with ketchup stains".)

Comment author: Lumifer 18 August 2015 02:29:26PM 0 points [-]

They don't look like blood either.

Comment author: gjm 18 August 2015 05:30:41PM 0 points [-]

Well, if we're going to be picky, neither does the owl look like an owl. It's not that sort of picture. But I suggest that "blood" is a more likely answer to "what do those red bits indicate?" than anything like "ketchup".

Comment author: jam_brand 17 August 2015 08:27:38PM 0 points [-]

I strongly dislike this. The head seems too ornate and the outline reminds me of so-called "tribal" tattoos, which seems low status. The body being subtly asymmetrical is a slight annoyance as well and with the owl now being centered in the image I think the subtitle should be too.

Comment author: Tem42 15 August 2015 02:01:07PM 0 points [-]

This looks very good. The feet perched on thin air look a little off.

You should probably check with the presumed copyright holder, although I suspect that she plagiarized the head design.

Comment author: ChristianKl 15 August 2015 11:58:00AM 0 points [-]

That looks nice, but I wouldn't trust my aesthetic judgement too much.

Comment author: Houshalter 14 August 2015 03:57:18AM 1 point [-]

Omega places a button in front of you. He promises that each press gives you an extra year of life, plus whatever your discounting factor is. If you walk away, the button is destroyed. Do you press the button forever and never leave?

Comment author: MrMind 14 August 2015 07:04:43AM 4 points [-]

That's a variant of a known problem in any decision theory that admits unbounded utility: there's something inside a box which every minute increases its utility, but it stops when you open the box and you get to enjoy it.
When do you open the box?

Comment author: philh 14 August 2015 03:16:00PM 4 points [-]

A similar problem is: pick a number. Gain that many utilons.

Comment author: g_pepper 14 August 2015 05:24:51PM 2 points [-]

That's when Scott Aaronson's essay Who Can Name the Bigger Number comes in handy!

Comment author: NoSuchPlace 14 August 2015 01:19:37PM 1 point [-]

Since I don't spend all my time inside avoiding every risk while hoping for someone to find the cure to aging, I probably value an infinite life a large but finite number of times more than a year of life. This means that I must discount in such a way that after a finite number of button presses, Omega would need to grant me an infinite life span.

So I perform some Fermi calculations to obtain an upper bound on the number of button presses I need to obtain immortality, press the button that often, then leave.
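This line of reasoning can be sketched numerically. A toy model (my own assumptions, not NoSuchPlace's actual numbers: exponential discounting with factor d per press, and valuing an unboundedly long life at K year-equivalents): the n-th press is worth d^(n-1), the total for n presses is a geometric sum capped at 1/(1-d), and we look for the smallest n whose cumulative value reaches K — which only exists if K is below the cap.

```python
def presses_needed(K, d, max_presses=10**7):
    """Smallest number of button presses whose total discounted value
    reaches K year-equivalents, with discount factor d per press.
    Returns None if the geometric-sum cap 1/(1-d) never reaches K."""
    if K >= 1.0 / (1.0 - d):  # K beyond the achievable discounted total
        return None
    total = 0.0
    for n in range(1, max_presses + 1):
        total += d ** (n - 1)  # discounted value of the n-th press
        if total >= K:
            return n
    return None

# With d = 0.99 the cap is 100 year-equivalents, so K = 50 is reachable
# in a finite (and fairly small) number of presses:
print(presses_needed(50, 0.99))   # → 69
print(presses_needed(101, 0.99))  # → None (above the cap)
```

If K sits below the cap, the answer is finite and you leave after that many presses; if your valuation of immortality exceeds the cap, no finite number of presses suffices, which is exactly the constraint the comment places on the discounting scheme.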

Comment author: shminux 14 August 2015 06:22:20AM 1 point [-]

Assuming those are QALY, not just years, spend a week or so pressing the button non-stop, then use the extra million years to become Omega.

Comment author: Username 12 August 2015 07:21:05PM *  1 point [-]

Tacit Knowledge: A Wittgensteinian Approach by Zhenhua Yu

In the ongoing discussion of tacit knowing/knowledge, the Scandinavian Wittgensteinians are a very active force. In close connection with the Swedish Center for Working Life in Stockholm, their work provides us with a wonderful example of the fruitful collaboration between philosophical reflection and empirical research. In the Wittgensteinian approach to the problem of tacit knowing/knowledge, Kjell S. Johannessen is the leading figure. In addition, philosophers like Harald Grimen, Bengt Molander and Allan Janik also make contributions to the discussion in their own ways. In this paper, I will try to clarify the main points of their contribution to the discussion of tacit knowing/knowledge.

...

Johannessen observes:

It has in fact been recognized in various camps that propositional knowledge, i.e., knowledge expressible by some kind of linguistic means in a propositional form, is not the only type of knowledge that is scientifically relevant. Some have, therefore, even if somewhat reluctantly, accepted that it might be legitimate to talk about knowledge also in cases where it is not possible to articulate it in full measure by proper linguistic means.

Johannessen, using Polanyi’s terminology, calls the kind of knowledge that cannot be fully articulated by verbal means tacit knowledge.

Comment author: Username 10 August 2015 01:11:59PM 1 point [-]

Change your name by Paul Graham

If you have a US startup called X and you don't have x.com, you should probably change your name.

The reason is not just that people can't find you. For companies with mobile apps, especially, having the right domain name is not as critical as it used to be for getting users. The problem with not having the .com of your name is that it signals weakness. Unless you're so big that your reputation precedes you, a marginal domain suggests you're a marginal company. Whereas (as Stripe shows) having x.com signals strength even if it has no relation to what you do.

...

100% of the top 20 YC companies by valuation have the .com of their name. 94% of the top 50 do. But only 66% of companies in the current batch have the .com of their name. Which suggests there are lessons ahead for most of the rest, one way or another.

Comment author: [deleted] 11 August 2015 03:22:17AM 3 points [-]

This seems to me a clear case of reversing (most of) the causation.

Comment author: SolveIt 11 August 2015 04:41:16AM 4 points [-]

Which makes it a good target for signalling. If you want to seem strong, you get the domain.

Comment author: drethelin 11 August 2015 04:01:30AM 4 points [-]

turns out when you're a billion dollar startup you can afford to buy the .com of your name regardless of what it is.

Comment author: [deleted] 11 August 2015 04:02:20AM 1 point [-]

Exactly.

Comment author: skilesare 19 August 2015 06:53:07PM 1 point [-]

Does anyone here have kids in school, and if so, how did you go about picking their school? Where is the best place to get a scientifically based 'rational' education?

I'm in Houston and the public schools are a non-starter. We could move to a better area with better schools, but my mortgage would increase 4x. Instead we send our kids to private school, and most in the area are Christian schools. In a recent visit with my school's principal, we were told in glowing terms about how all their activities this year would be tied back to Egypt and the stories of Egypt in the Old Testament. I thought to myself that I didn't even think Moses was a real person, so this is going to get very interesting.

I wish they'd spend half as much time studying science and psychological concepts as they do studying the Bible... but what are you going to do?

Any ideas?

I should add that I did graduate from this same school although I did not go through grades 1-9 there...only high school, and that education was really top notch...but still an hour a day of bible class.

Comment author: Username 19 August 2015 07:23:47PM *  6 points [-]

My approach was very simple: find the best public school system in my area and move there. "Best" is defined mostly by IQ of high-school seniors proxied by SAT scores. What colleges the school graduates go to mattered as well, but it is highly correlated with the SAT scores.

What I find important is not the school curriculum which will suck regardless. The crucial thing, IMHO, is the attitude of the students. In the school that my kids went to, the attitude was that being stupid was very uncool. Getting good grades was regarded as entirely normal and necessary for high social status (not counting the separate clusters of athletes and kids with very rich parents). The basic idea was "What, are you that dumb you can't even get an A in physics??" and not having a few AP classes was a noticeable negative. This all is still speaking about social prestige among the students and has nothing to do with teachers or parents.

I think that this attitude of "it's uncool to be stupid" is a very very important part of what makes good schools good.

Comment author: Username 11 August 2015 05:32:11PM 1 point [-]

Previously on LW, I have seen the suggestion made that having short hair can be a good idea, and it seems like this can be especially true in professional contexts. For an entry-level male web developer who will be shortly moving to San Francisco, is this still true? I'm not sure if the culture there is different enough that long hair might actually be a plus. What about beards?

(I didn't post in this OT yet).

Comment author: badger 11 August 2015 07:13:45PM 2 points [-]

If a job requires in-person customer/client contact or has a conservative dress code, long hair is a negative for men. I can't think of a job where long hair might be a plus aside from music, arts, or modeling. It's probably neutral for Bay area programmers assuming it's well maintained. If you're inclined towards long hair since it seems low effort, it's easy to buy clippers and keep it cut to a uniform short length yourself.

Beards are mostly neutral--even where long hair would be negative--again assuming they are well maintained. At a minimum, trim it every few weeks and shave your neck regularly.

Comment author: ChristianKl 11 August 2015 06:09:20PM 1 point [-]

Do you want to do freelance web development or be employed at a single company without much consumer contact?

Comment author: Username 11 August 2015 07:04:25PM 0 points [-]

Employment at a single company is the plan.

Comment author: G0W51 16 August 2015 01:22:24AM 0 points [-]

Perhaps the endowment effect evolved because placing high value on an object you own signals to others that the object is valuable, which signals that you are wealthy, which can increase social status, which can increase mating prospects. I have not seen this idea mentioned previously, but I only skimmed parts of the literature.