If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.


Impulsive Rich Kid, Impulsive Poor Kid, an article about using CBT to fight impulsivity that leads to criminal behaviour, especially among young males from poor backgrounds.

How much crime takes place simply because the criminal makes an impulsive, very bad decision? One employee at a juvenile detention center in Illinois estimates that the overwhelming percentage of crime takes place because of an impulse rather than a conscious decision to embark on criminal activity:

“20 percent of our residents are criminals, they just need to be locked up. But the other 80 percent, I always tell them – if I could give them back just ten minutes of their lives, most of them wouldn’t be here.”

...

The teenager in a poor area is not behaving any less automatically than the teenager in the affluent area. Instead the problem arises from the variability in contexts—and the fact that some contexts call for retaliation.” To illustrate their theory, they offer an example: If a rich kid gets mugged in a low-crime neighborhood, the adaptive response is to comply -- hand over his wallet, go tell the authorities. If a poor kid gets mugged in a high-crime neighborhood, it is sometimes adaptive to refuse -- s

...

if I could give them back just ten minutes of their lives, most of them wouldn’t be here.

He's wrong about that. He would need to give them back 10 minutes of their lives, and then keep on giving them back a different 10 minutes on a very regular basis.

The remainder of the post actually argues that persistent, stable "reflexes" are the cause of bad decisions and those certainly are not going to be fixed by a one-time gift of 10 minutes.

6Emile9y
I disagree. Let's take drivers who got into a serious accident: if you "gave them back just ten minutes" so that they avoided getting into that accident, most of them wouldn't have had another accident later on. It's not as if the world neatly divides into safe drivers, who never have accidents, and unsafe drivers, who have several. Sure, those kids that got in trouble are more likely to have problematic personalities, habits, etc. which would make it more likely that they get in trouble again - but that doesn't mean more likely than not. Most drivers don't have (serious) accidents, most kids don't get in (serious) trouble, and if you restrict yourself to the subset of those who already had it once, I agree a second problem is more likely, but not certain.
1Lumifer9y
How do you know? Yeah, but we are not talking about average kids. We're talking about kids who found themselves in juvenile detention and that's a huge selection bias right there. You can treat them as a sample (which got caught) from the larger underlying population which does the same things but didn't get caught (yet). It's not an entirely unbiased sample, but I think it's good enough for our handwaving. Well, of course. I don't think anyone suggested any certainties here.
1[anonymous]9y
To use the paper's results, it looks like they're getting roughly 10 in 100 in the experimental condition and 18 in 100 for the control. Those kids were selected because they were considered high risk. If among the 82 of 100 kids who didn't get arrested there are >18 who are just as likely to be arrested as the 18 who were, then Emile's conclusion is correct across the year. The majority won't be arrested next year. Across an entire lifetime, however.... They'd probably become more normal as time passed, but how quickly would this occur? I'd think Lumifer is right that they probably would end up back in jail. I wouldn't describe this as a very regular problem, though.
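To make the "across an entire lifetime" point concrete, here is a toy calculation (my own sketch; the 10% and 18% figures are the one-year rates quoted above, and treating them as constant over time is an assumption, not something the paper claims):

```python
# Chance of at least one arrest over n years given a fixed per-year probability p.
def prob_at_least_one_arrest(p_per_year, years):
    return 1 - (1 - p_per_year) ** years

for p in (0.10, 0.18):
    print(p, round(prob_at_least_one_arrest(p, 10), 2))
# p = 0.10 -> ~0.65 over 10 years; p = 0.18 -> ~0.86 over 10 years
```

So even the lower one-year rate compounds to a better-than-even chance of another arrest over a decade, which seems to be what both sides above are gesturing at.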
1[anonymous]9y
Would you think that in the future, when such technologies will probably become widespread, driver training should include at least one grisly crash, simulated and shown in 3-D? Or at least a mild crash?
4query9y
The model is that persistent reflexes interact with the environment to give black swans: singular events with extremely high legal consequences. To effectively avoid all of them preemptively requires training the stable reflexes, but it could be that "editing out" only a few 10-minute periods retroactively would still be enough (those few periods when reflexes and environment interact extremely negatively). So I think the "very regular basis" claim isn't substantiated. That said, we can't actually retroactively edit anyway.
0Lumifer9y
I don't think that's the model (or if it is, I think it's wrong). I see the model as persistent reflexes interacting with the environment and giving rise to common, repeatable, predictable events with serious legal consequences.
8Viliam9y
Unrelated to the real content of the article, but my first reaction after reading the title was: "obviously, the impulsive Rich Kid can afford a better lawyer".

Do Artificial Reinforcement-Learning Agents Matter Morally?

I've read this paper and find it fascinating. I think it's very relevant to LessWrong's interests. Not just that it's about AI, but also that it asks hard moral and philosophical questions.

There are many interesting excerpts. For example:

The drug midazolam (also known as ‘versed,’ short for ‘versatile sedative’) is often used in procedures like endoscopy and colonoscopy... surveyed doctors in Germany who indicated that during endoscopies using midazolam, patients would ‘moan aloud because of pain’ and sometimes scream. Most of the endoscopists reported ‘fierce defense movements with midazolam or the need to hold the patient down on the examination couch.’ And yet, because midazolam blocks memory formation, most patients didn’t remember this: ‘the potent amnestic effect of midazolam conceals pain actually suffered during the endoscopic procedure’. While midazolam does prevent the hippocampus from forming memories, the patient remains conscious, and dopaminergic reinforcement-learning continues to function as normal.

7Betawolf9y
The author is associated with the Foundational Research Institute, which has a variety of interests highly connected to those of LessWrong, yet some casual searches seem to show they've not been mentioned here. Briefly, they seem to be focused on averting suffering, with various outlooks on that, including effective altruism outreach, animal suffering, and AI risk as a cause of great suffering.

Composing Music With Recurrent Neural Networks

It’s hard not to be blown away by the surprising power of neural networks these days. With enough training, so called “deep neural networks”, with many nodes and hidden layers, can do impressively well on modeling and predicting all kinds of data. (If you don’t know what I’m talking about, I recommend reading about recurrent character-level language models, Google Deep Dream, and neural Turing machines. Very cool stuff!) Now seems like as good a time as ever to experiment with what a neural network can do.

For a while now, I’ve been floating around vague ideas about writing a program to compose music. My original idea was based on a fractal decomposition of time and some sort of repetition mechanism, but after reading more about neural networks, I decided that they would be a better fit. So a few weeks ago, I got to work designing my network. And after training for a while, I am happy to report remarkable success!
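For readers who want a concrete picture of the general approach, here is a minimal sketch - an LSTM trained to predict the next note in a sequence, written in PyTorch, with notes assumed to be pre-encoded as integer IDs. This is my own illustration; the linked post's actual architecture is considerably more elaborate.

```python
import torch
import torch.nn as nn

class NoteRNN(nn.Module):
    """Predicts the next note ID from the notes seen so far."""
    def __init__(self, vocab_size=128, embed_dim=64, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, num_layers=2, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, x, state=None):
        h, state = self.lstm(self.embed(x), state)
        return self.out(h), state

model = NoteRNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One toy training step on random data standing in for real note sequences.
batch = torch.randint(0, 128, (8, 65))      # 8 sequences of 65 note IDs
inputs, targets = batch[:, :-1], batch[:, 1:]
optimizer.zero_grad()
logits, _ = model(inputs)
loss = loss_fn(logits.reshape(-1, 128), targets.reshape(-1))
loss.backward()
optimizer.step()
```

To generate music you would then sample one note at a time from the model's output distribution and feed it back in, which is the usual recipe for these sequence models.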

5pianoforte6119y
It's certainly very interesting. It's a slight improvement over Markov chain music. That tends to sound good for any stretch of 5 seconds, but lacks a global structure making it pretty awful to listen to for any longer stretch of time. This music still lacks much of the longer range structures that make music sound like music. It's a lot like stitching together 5 different Chopin compositions. It is stylistically consistent, but the pieces don't fit together. Having said that, it is very interesting to see what you can get out of a network with respect to consonance, dissonance, local harmonic context and timing. I'm most impressed by the rhythm, it sounds more natural to my ear than the note progression.
3Viliam9y
Maybe there are situations where these imperfections of music wouldn't matter, for example if used as a background music for a computer game.

The moral imperative for bioethics by Steven Pinker.

Biomedical research, then, promises vast increases in life, health, and flourishing. Just imagine how much happier you would be if a prematurely deceased loved one were alive, or a debilitated one were vigorous — and multiply that good by several billion, in perpetuity. Given this potential bonanza, the primary moral goal for today’s bioethics can be summarized in a single sentence.

Get out of the way.

A truly ethical bioethics should not bog down research in red tape, moratoria, or threats of prosecution based on nebulous but sweeping principles such as “dignity,” “sacredness,” or “social justice.” Nor should it thwart research that has likely benefits now or in the near future by sowing panic about speculative harms in the distant future. These include perverse analogies with nuclear weapons and Nazi atrocities, science-fiction dystopias like “Brave New World’’ and “Gattaca,’’ and freak-show scenarios like armies of cloned Hitlers, people selling their eyeballs on eBay, or warehouses of zombies to supply people with spare organs. Of course, individuals must be protected from identifiable harm, but we already have ample safeguards for the safety and informed consent of patients and research subjects.

2[anonymous]9y
I'm all in favor of "social justice" in medicine by its conventional definition, but that's not even a particularly difficult problem. Universal medical systems already exist and function well all across the planet. Likewise, nobody's actually going to vote for Brave New World. It really does seem like "social justice", in a bioethical context, simply isn't the True Rejection.
0Gunnar_Zarncke9y
These online text comments would belong better in the Media Thread, especially as there are so many of them.

Why Lonely People Stay Lonely

One long-held theory has been that people become socially isolated because of their poor social skills — and, presumably, as they spend more time alone, the few skills they do have start to erode from lack of use. But new research suggests that this is a fundamental misunderstanding of the socially isolated. Lonely people do understand social skills, and often outperform the non-lonely when asked to demonstrate that understanding. It’s just that when they’re in situations when they need those skills the most, they choke.

In a paper recently published in the journal Personality and Social Psychology Bulletin, Franklin & Marshall College professor Megan L. Knowles led four experiments that demonstrated lonely people’s tendency to choke when under social pressure. In one, Knowles and her team tested the social skills of 86 undergraduates, showing them 24 faces on a computer screen and asking them to name the basic human emotion each face was displaying: anger, fear, happiness, or sadness. She told some of the students that she was testing their social skills, and that people who failed at this task tended to have difficulty forming and maintaining fri

...
5Viliam9y
I imagine such behavior could happen if someone had a bad experience in the past, that they were disproportionately punished in some social situation. The punishment didn't even have to be a predictable logical consequence; maybe they just had bad luck and met some psycho. Or maybe they were bullied at school, etc. If their social skills are otherwise okay, they may intellectually understand what is usually the best response, but in real life they are overwhelmed by fear and their behavior is dominated by avoiding the thing that "caused" the bad response in the past. For example, if the bad thing happened after saying "hello" to a stranger, they may be unable to speak with strangers, even if they know from observing others that this is a good thing to do. Then the framing of the test could make students think either about "what is generally the right approach?" or "what would I do?"
4[anonymous]9y
21 people per group (86/4) is not a strong result unless it's a large effect size, which I doubt. I wouldn't put much faith in this paper. Maybe raise your prior by 3%, but it's hard to be that precise with beliefs.
8pianoforte6119y
I'd like to see fewer low-quality scientific criticisms here. Instead of speculating on effect sizes without reading the paper, and bloviating on sample sizes without doing the relevant power calculations, perhaps try looking at the results section? With respect to this paper, the results were consistent and significant across three tasks - an eye task, a facial expression task, and a vocal tone task. They did a non-social task (an anagram task) and found no significant effect (though that wasn't the purpose of doing the task; it's a bit more complicated than that). They also did an interesting caffeine experiment to see if they could relieve social anxiety by convincing participants that the anxiety was due to a (fake) caffeinated drink. Anyways, as with any research in this area, it's too soon to be confident of what the results mean. But armchair uninformed scientific criticism will not advance knowledge. (In hindsight this is a bit of an overreaction, but I've seen too many poor criticisms of papers and too much speculation, particularly on Reddit, but also here and on several blogs, and not nearly enough careful reading.)
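For what it's worth, the relevant power calculation takes a minute; here is a sketch using statsmodels, assuming a simple two-group comparison with about 21 subjects per group (the group size and the effect sizes are assumptions for illustration, not numbers from the paper):

```python
# Rough power check for a two-sample t-test with ~21 subjects per group.
# Effect sizes are conventional Cohen's d benchmarks, not the paper's estimates.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Power to detect a "medium" effect (d = 0.5) with 21 per group:
print(analysis.solve_power(effect_size=0.5, nobs1=21, alpha=0.05))   # roughly 0.36

# Smallest effect detectable with 80% power at that sample size:
print(analysis.solve_power(nobs1=21, power=0.8, alpha=0.05))         # d of roughly 0.87
```

Whether that supports or undercuts the original complaint depends on how many subjects were actually in each condition, which is exactly the sort of detail you only get by reading the paper.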
1Douglas_Knight9y
I would like to see fewer low-quality science papers posted. FB put in way more work than was justified. My new policy is to downvote every psychology paper posted without any discussion of the endemic problems in psychology research and why that paper might not be pure noise.
3pianoforte6119y
Are all psychology papers garbage? And if only some are, how do you tell which is which if you don't read past the first line of the abstract? (which FB didn't, because he was unaware that more than one experiment was conducted).
6Douglas_Knight9y
We have to filter the papers somehow and the people who do the filtering have to read them. But that doesn't mean that the people doing the filtering should be people on LW. Username relied on a journalist for filtering. This does filter for interesting topics, but not for quality. That Username did not link the actual paper suggests that he did not read it. Thus my prior is that it is of median quality and pure noise. Even if psychology papers were all perfectly accurate, there are way too many that get coverage and it is unlikely that one getting coverage this month is worth reading. There are standard places to look for filters: review articles and books.
0pianoforte6119y
Okay that's very fair.
1[anonymous]9y
Perhaps you didn't notice, but the paper is gated. It's not possible for me or most people to check the paper. The description doesn't mention the other two studies. The study described doesn't sound like a strong result. I never suggested it wasn't statistically significant. If it wasn't, it shouldn't be used to adjust one's views at all. I assumed it had achieved significance. It's also odd for you to criticize me and then ultimately come to a conclusion that could be interpreted as identical to my own, or close to it. What do you mean by "too soon to be confident of what the results mean"? That could be interpreted as "adjust your prior by 3%", which was my interpretation. If you think a number higher than 15% is warranted then that's an odd phrasing to choose, which makes it sound like we're not that far apart. Given that I was going by one study and you have three to look at, it shouldn't be surprising that you would recommend a greater adjustment of one's prior. Going by just the facial expression study, what adjustment would you recommend? Do you think this adjustment is large enough for most people to know what to do with it? What adjustment to one's prior do you recommend after reviewing all three?
[-][anonymous]9y140

While the scientific publication paywall is a pain (and inappropriate especially for publically funded research) it is not an impossibility to get the article - and as pianoforte611 already mentioned, secondary citations or descriptions to primary sources may not provide enough information to evaluate the source.

How to get articles: I've seen numerous cases here at LW where a request for a copy of a paywalled publication is quickly met with a link or an email from someone who has access.

The twitter hashtag #icanhazpdf also serves this purpose: tweet with the hashtag including a link or DOI to the article you are requesting, include your email address in the tweet, and delete your request after you get the pdf. You can use a temporary read-only email address (e.g. slippery.email) if you are concerned about anonymity/privacy.

On this instance feel free to send me a private message with your contact details and I will send you a pdf - I already downloaded a copy.

Edited to add: it's also entirely legitimate to email the author of a published article and request an electronic copy of the article. There's no need to explain why you want it and you need not be an academic "insider", just be clear which article you are requesting. This is an example I received yesterday: "Dear {author}, I am interested in your recent article {full citation} but do not have subscription access. Would you be able to send me an electronic copy? Many thanks"

5Sarunas9y
Choking Under Social Pressure: Social Monitoring Among the Lonely, Megan L. Knowles, Gale M. Lucas, Roy F. Baumeister, and Wendi L. Gardner
4ChristianKl9y
Most people in the general population can't check the paper but on LW, I don't think that's the case. If you don't have access to a university network http://lesswrong.com/lw/ji3/lesswrong_help_desk_free_paper_downloads_and_more/ explores a variety of ways to access papers.
4Sarunas9y
This link is often useful for obtaining paywalled papers.
4pianoforte6119y
Sorry for assuming you had easy access to the paper. Given that you don't, you are of course free to decide whether the pop science report warrants further investigation. However to authoritatively criticize and speculate on the details of a paper you haven't read, I think lowers the quality of discussion here. I'm not a Bayesian but nevertheless, I don't agree that my conclusion is similar to yours. Prima facie, the effect itself seems fairly robust across the five experiments, but their theory as to why (which they did go reasonably far to test), still needs more experiments to be established. This is not a bug, and that does not make it a low quality paper. This is how science works. There may be more subtle problems that I (not being a statistician, or a psychologist) may have missed, but those can't be known without delving into the details.
2[anonymous]9y
Shouldn't the authors be aware of this? (I think one of them is even fairly well known in psychology circles.)
2Richard_Kennaway9y
I am sure the authors are more informed about their work than anyone who has not read it.
2[anonymous]9y
I'm not sure what the correlation is between prominence and paper quality. At any rate, he's a co-author, not the main author. Co-authors can sometimes have very little to do with the actual paper.

CIA's The Definition of Some Estimative Expressions - what probabilities people assign to words such as "probably" and "unlikely".

The CIA actually has several of these articles around, like Biases in Estimating Probabilities. Click around for more.

In hindsight, it seems obvious that they should.

Modafinil survey: I'm curious about how modafinil users in general use it, get it, their experiences, etc, and I've been working on a survey. I would welcome any comments about missing choices, bad questions, etc on the current draft of the survey: https://docs.google.com/forms/d/1ZNyGHl6vnHD62spZyHIqyvNM_Ts_82GvZQVdAr2LrGs/viewform?fbzx=2867338011413840797

4btrettel9y
Great idea. One suggestion: This survey seems to be for people who use modafinil regularly. I might suggest doing something (perhaps creating another survey) to get opinions from people who tried modafinil once or twice and disliked it. My one experience with Nuvigil was quite bad, and I recall Vaniver saying that he thought modafinil did nothing at all for him.
0ChristianKl9y
The survey could have multiple pages. The first page simply asks: What's your modafinil usage?

a) Never
b) I used it in the past and then stopped.
c) I'm currently using it. (leading the user to your current survey)
1gwern9y
I've split it up into multiple pages: the first page classifies you as an active or inactive user and then sends you to a detailed questionnaire on how you use it if you are active, or simply why you stopped if inactive, and then both go to a long demographics/background page.
1ChristianKl9y
Sounds good. I would also add a "never used it" option. It can go straight to the demographics/background page. Otherwise you might have people who never used it classify themselves as "inactive user".
1gwern9y
(If they've never used modafinil, why on earth are they taking my survey?!)
2ChristianKl9y
They might be interested in taking modafinil. The fact that they shouldn't take the survey doesn't mean they won't.
4ChristianKl9y
I think that question should have more than just (yes) and (no) as answers. At least it should have an "I don't know" option.

I would add a question "When was the last time you used modafinil?" to see whether people are on aggregate right about how many days per week they use it. Maybe even "At what time of the day did you use it?"

I would be interested in a question about how many hours the person sleeps on average. Have you thought about having a question about bodyweight? I would be interested in knowing whether heavier people take a larger dose.
0gwern9y
I've added 'the same' as a third option to the generic vs brand-name question, and 2 questions about average hours of sleep a night & body weight. What would the response there be, an exact date or n days ago or what?
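Once responses come in, the dose-vs-weight question is a quick check; a hypothetical analysis sketch (the CSV filename and column names are placeholders, not the survey's actual field names):

```python
import pandas as pd

# Hypothetical export of the survey responses.
df = pd.read_csv("modafinil_survey_responses.csv")
df = df.dropna(subset=["dose_mg", "weight_kg"])
print(df["dose_mg"].corr(df["weight_kg"]))                      # Pearson correlation
print(df["dose_mg"].corr(df["weight_kg"], method="spearman"))   # rank correlation, robust to outliers
```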
4Tem429y
I have no experience with -afinils, but it seems to me that there will surely be cases of people who have tried only brand-name (or, alternatively, only generic) -afinil, and therefore cannot accurately respond to the question with yes, no, or the same. The correct answer would be "I don't know". If I were taking this survey, I would skip that question rather than try to guess which answer you wanted in that case. But if I were designing the survey, I would go with ChristianKl's suggestion.
2ChristianKl9y
I would guess that a majority of the respondents haven't tested multiple kinds of modafinil and thus are not equipped to answer the question at all. "I don't know" seems to be the proper answer for them.
2gwern9y
Alright, I've added a don't-know option and added a 'when did you last use' question.
2ChristianKl9y
Both would be possible but I think "n days ago" is more standard. It makes the data analysis easier.
2Richard_Kennaway9y
A few details: In the questions about SNPs, 23andMe reports RS4570625 as G or T, not A or G, and RS4633 as C or T, not A or G. I was surprised to see Vitamin D listed as a nootropic, and Google turns up nothing much on the subject. Fixing a deficiency of anything will likely have a positive effect on mental function, but that is drawing the boundary of "nootropic" rather wide. Why is nicotine qualified as "gum, patch, lozenge", to the exclusion of tobacco? Cancer is a reason to not smoke tobacco, but I don't think it's a reason not to ask about it. Or are those who smoke not smart enough to be in the target population for the survey? :) ETA: Also a typo in "SNP status of COMT RS4570625": the subtext mentions rs4680, not rs4570625. I don't know what "Val/Met" and "COMT" mean, but are those specific to RS4680 or correct for all three SNPs?
0gwern9y
Oops. Shouldn't've assumed they'd be the same... It is but it's still common and can be useful. The nootropics list is based on Yvain's previous nootropics survey, which I thought might be useful for comparison. (I also stole a bunch of questions from his LW survey too, figuring that they're at least battle-tested at this point.) I have no interest in tobacco, solely nicotine. Although now that you object to that, I realize I forgot to specify vaping/e-cigs as included.
0ChristianKl9y
Amino acids. Val stands for valine. Met stands for methionine. I think COMT is catechol-O-methyltransferase, which is the protein in question.

Dead enough by Walter Glannon

To honour donors, we should harvest organs that have the best chance of helping others – before, not after, death

Now imagine that before the stroke our hypothetical patient had expressed a wish to donate his organs after his death. If neurologists could determine that the patient had no chance of recovery, then would that patient really be harmed if transplant surgeons removed life-support, such as ventilators and feeding tubes, and took his organs, instead of waiting for death by natural means? Certainly, the organ recipient would gain: waiting too long before declaring a patient dead could allow the disease process to impair organ function by decreasing blood flow to them, making those organs unsuitable for transplant.

But I contend that the donor would gain too: by harvesting his organs when he can contribute most, we would have honoured his wish to save other lives. And chances are high that we would be taking nothing from him of value. This permanently comatose patient will never see, hear, feel or even perceive the world again whether we leave his organs to wither inside him or not.

9ZankerH9y
This might have the side-effect of putting even more people off signing up for donation. Most people I've talked to about it who are opposed cite horror stories about doctors prematurely "giving up" on donors to get at their organs.
6WalterL9y
Honest question: if you are cool with killing a person in a coma, based on the fact that they will never sense again, how do you feel about a person doing life in solitary? They may sense, but they aren't able to communicate what they sense to any other human. What exactly makes life worth its organs, in your eyes?
2DanielLC9y
A person in solitary still has experiences. They just don't interact with the outside world. People in a coma are, as far as we can tell, not conscious. There are plenty of animals that people are okay with killing and eating that are more likely to be sentient than someone in a coma.
5ChristianKl9y
By that standard how about harvesting the organs of babies?
6James_Miller9y
Planned Parenthood does this for aborted babies.
2DanielLC9y
I think babies are more person-like than the animals we eat for food. I'm not an expert in that though. They're still above someone in a coma.
-3Lumifer9y
More for the "shit LW people say" collection :-)
0[anonymous]9y
Babies aren't sentient?
4ChristianKl9y
The context is that Steven Pinker argues that the animals we eat are more sentient than babies: http://www.gargaro.com/pinker.html
-1[anonymous]9y
What other standard do you propose?
5ChristianKl9y
Not harvesting the organs of living human beings?
-3[anonymous]9y
Define living and human being.
3ChristianKl9y
The way the terms are defined in German law and interpreted by German courts.
2[anonymous]9y
which is... (trust me, I have a point here, but by not actually answering my query in a precise way you're making it hard to make)
0ChristianKl9y
I answered your query in a very precise way. There are tons and tons of laws and court judgements involved and no answer that fits into a few paragraphs. If that's the case, you could try to make your point explicitly instead of implicitly. You could list your assumptions.
3WalterL9y
Yeah, and I'm asking, do those experiences "count"? If organs are going from comatose humans to better ones, and we've decided that people who aren't sensing don't deserve theirs, how about people who aren't communicating their senses? It seems like this principle can go cool places. If we butchered some mass murderer we could save the lives of a few taxpayers with families that love them (there will be forms, and an adorableness quotient, and Love Weighting). All that the world would be out is the silent contemplation of the interior of a cell. Clearly a net gain, yeah? So, are we stopping at "no sensing -> we jack your meats", or can we cook with gas?
3DanielLC9y
It's not about communication. It's not even about sensing. It's about subjective experience. If your mind worked properly but you just couldn't sense anything or do anything, you'd have moral worth. It would probably be negative and it would be a mercy to kill you, but that's another issue entirely. From what I understand, if you're in a coma, your brain isn't entirely inactive. It's doing something. But it's more comparable to what a fish does than a conscious mammal. Someone in a coma is not a person anymore. In the same sense that someone who is dead is not a person anymore. The problem with killing someone is that they stop being a person. There's nothing wrong with taking them from not a person to a slightly different not a person. A mass murderer is still a person. They think and feel like you do, except probably with less empathy or something. The world is better off without them, and getting rid of them is a net gain. But it's not a Pareto improvement. There's still one person that gets the short end of the stick.
1Tem429y
I can't tell if you have a recommendation. If you have a model to suggest, please share it.
1Tem429y
Given that this is suggested to be a voluntary system, it doesn't really matter what Walter Glannon thinks -- it matters what you think. Personally, I would be more interested in signing up for this if I was assured that the permanent damage was to the grey matter, and would be happy if this included both comas and permanent vegetative states. But YMMV. It is worth noting here that being in solitary confinement does not necessarily prevent you from writing, receiving visitors, or making telephone calls (it depends on your local jurisdiction). Also, very few people are sentenced to be in solitary confinement until they die. In those places where this sort of sentence is permitted, it is unlikely that prisoners would be allowed any choice in their fate, but it is not obviously bad for a justly imprisoned person to choose suicide (with or without organ donation) in lieu of a life sentence. EDIT: on re-reading, I see that this was not stated to always be a voluntary procedure; the author goes back and forth between voluntary and involuntary procedures. In involuntary cases, I agree that the simple criteria of "brain functions at a level too low to sustain consciousness but enough to sustain breathing and other critical functions without mechanical support" is too lax. I would still agree with the author in general that DDR is too strong.
4DanielLC9y
There are reasons why you shouldn't kill someone in a coma that doesn't want to be killed when they're in a coma even if you disagree with them about what makes life have moral value. If they agreed to have the plug pulled when it becomes clear that they won't wake up, then it seems pretty reasonable to take out the organs before pulling the plug. And given what's at stake, given permission, you should be able to take out their organs early and hasten their deaths by a short time in exchange for making it more likely to save someone else. And why are you already conjecturing about what we would have wanted? We're not dead yet. Just ask us what we want.
1Tem429y
You can approximate this by writing a living will (and you should write a living will regardless of whether or not you are an organ donor.) However, I agree there should be more finely grained levels of organ donation, and that this should be a clear option.

Even if half of what's posted here is at present beyond me, or I am not currently interested in a specific topic, I can learn a lot from this forum.

3Viliam9y
That's how it's meant to be used, I guess. Many people here are probably not interested in all topics.

A Scientific Look at Bad Science

By one estimate, from 2001 to 2010, the annual rate of retractions by academic journals increased by a factor of 11 (adjusting for increases in published literature, and excluding articles by repeat offenders) [2]. This surge raises an obvious question: Are retractions increasing because errors and other misdeeds are becoming more common, or because research is now scrutinized more closely? Helpfully, some scientists have taken to conducting studies of retracted studies, and their work sheds new light on the situation.

“Ret

...

If the Efficient Market Hypothesis is true, shouldn't it be almost as hard to lose money on the market as it is to gain money? Let's say you had a strategy S that reliably loses money. Shouldn't you be able to define an inverse strategy S', that buys when S sells and sells when S buys, that reliably earns money? For the sake of argument rule out obvious errors like offering to buy a stock for $1 more than its current price.

I guess the difference is that if you offer to sell a ton of gold for $1, you will find a buyer, but if you offer to buy a ton of gold for $1, you will not find a seller.

The inverse strategy will not produce the inverse result.

shouldn't it be almost as hard to lose money on the market as it is to gain money?

Consider the dynamic version of the EMH: that is, rather than "prices are where they should be," it's "agents who perceive mispricings will pounce on them, making them transient."

Then a person placing a dumb trade is creating a mispricing, which will be consumed by some market agent. There's an asymmetry between "there is no free money left to be picked up" and "if you drop your money, it will not be picked up" that makes the first true (in the static case) and the second false.
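One way to see the asymmetry is to simulate a strategy that does nothing but cross the bid-ask spread, and its inverse. This is a toy sketch with made-up numbers (the spread and the price process are assumptions), not a claim about any real market:

```python
import random

def run(n_trades=10_000, spread=0.02, seed=0):
    rng = random.Random(seed)
    pnl_s = pnl_inverse = 0.0
    for _ in range(n_trades):
        move = rng.gauss(0, 1.0)        # fair price change while the position is held
        # S buys at the ask and later sells at the bid; the "inverse" S' does the opposite.
        pnl_s       += move - spread    # long: gains the move, pays the spread
        pnl_inverse += -move - spread   # short: gains the opposite move, still pays the spread
    return pnl_s / n_trades, pnl_inverse / n_trades

print(run())   # both averages come out near -spread, not mirror images of each other
```

Both the strategy and its inverse average out to roughly minus the spread, which is why inverting a reliably losing strategy doesn't hand you a reliably winning one.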

5Lumifer9y
Well, that looks like an "offering to buy a stock for $1 more than its current price" scenario. You can easily lose a lot of money by buying things at the offer and selling them at the bid :-) But let's imagine a scenario where everything is happening pre-tax, there are no transaction costs, we're operating in risk-adjusted terms and, to make things simple, the risk-free rate is zero. Moreover, the markets are orderly and liquid. Assuming you can competently express a market view, can you systematically lose money by consistently taking the wrong side under EMH?
2Salemicus9y
Consider penny stocks. They are a poor investment in terms of expected return (unless you have secret alpha). But they provide a small chance of very high returns, meaning they operate like lottery tickets. This isn't a mispricing - some people like lottery tickets, and so bid up the price until they become a poor investment in terms of expected return (problem for the CAPM, not for the EMH). So you can systematically lose money by taking the "wrong" side, and buying penny stocks. Does that count as an example, or does that violate your "risk-adjusted terms" assumption? I think we have to be careful about what frictions we throw out. If we are too aggressive in throwing out notions like an "equity premium," or hedging, or options, or market segmentation, or irreducible risk, or different tolerances to risk, we will throw out the stuff that causes financial markets to exist. An infinite frictionless plane is a useful thought experiment, but you can't then complain that a car can't drive on such a plane.
0Lumifer9y
Yes, we have to be quite careful here. Let's take penny stocks. First, there is no exception for them in the EMH, so if it holds, the penny stocks, like any other security, must not provide a "free" opportunity to make money. Second, when you say they are "a poor investment in terms of expected return", do you actually mean expected return? Because it's a single number which has nothing to do with risk. A lottery can perfectly well have a positive expected return even if your chance of getting a positive return is very small. The distribution of penny stock returns can be very skewed and heavy-tailed, but EMH does not demand anything of the returns distributions. So I think you have to pick one of two: either penny stocks provide negative expected return (remember, in our setup the risk-free rate is zero), but then EMH breaks; or the penny stocks provide non-negative expected return (though with an unusual risk profile), in which case EMH holds but you can't consistently lose money. My "risk-adjusted terms" were a bit of a handwave over a large patch of quicksand :-/ I mostly meant things like leverage, but you are right in that there is sufficient leeway to stretch it in many directions. Let me try to firm it up: let's say the portfolio which you will use to consistently lose money must have fixed volatility, say, equivalent to the volatility of the underlying market.
2Salemicus9y
Yes, I mean expected return. If you hold penny stocks, you can expect to lose money, because the occasional big wins will not make up for the small losses. You are right that we can imagine lotteries with positive expected return, but in the real world lotteries have negative expected return, because the risk-loving are happy to pay for the chance of big winnings. Why? Suppose we have two classes of investors, call them gamblers and normals. Gamblers like risk, and are prepared to pay to take it. In particular, they like asymmetric upside risk ("lottery tickets"). Normals dislike risk, and are prepared to pay to avoid it (insurance, hedging, etc). In particular, they dislike asymmetric downside risk ("catastrophes").

There is an equity instrument, X, which has the following payoff structure:

99% chance: payoff of 0
1% chance: payoff of 1000

Clearly, E(X) is 10. However, gamblers like this form of bet, and are prepared to pay for it. Consequently, they are willing to bid up the price of X to (say) 11. Y is the instrument formed by shorting X. When X is priced at 11, this has the following payoff structure:

99% chance: payoff of 11
1% chance: payoff of -989

Clearly, E(Y) is 1. In other words, you can make money, in expectation, by shorting X. However, there is a lot of downside risk here, and normals do not want to take it on. They would require E(Y) to be 2 (say) in order to take on a bet of that structure. So, assuming you have a "normal" attitude to risk, you can lose money here (by buying X), but you can't win it in risk-adjusted terms. This is caused by the market segmentation caused by the different risk profiles. Nothing here is contrary to the EMH, although it is contrary to the CAPM.

Thoughts:

1. Penny stocks (and high-beta instruments generally, such as deep out-of-the-money options) display this behaviour in real life.
2. A more realistic model might include some deep-pocketed funds with a neutral attitude to risk who could afford to short X. B
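Spelling out the arithmetic in the toy model above (same invented numbers as the comment, nothing more):

```python
# Expected values in the gamblers-vs-normals toy model.
p_win = 0.01
ev_x = p_win * 1000 + (1 - p_win) * 0    # = 10, the "fair" value of X
price_x = 11                              # gamblers bid X above fair value

ev_buy_x = ev_x - price_x                 # = -1: buying X loses money on average
ev_short_x = price_x - ev_x               # = +1: shorting X gains on average,
                                          #   but carries the rare -989 outcome
print(ev_x, ev_buy_x, ev_short_x)
```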
2Lumifer9y
By itself, no. But this is diversifiable risk and so if you short enough penny stocks, the risk becomes acceptable. To use a historical example, realizing this (in the context of junk bonds) is what made Michael Milken rich. For a while, at least. This certainly exists, though it's more complicated than just unwillingness to touch skewed and heavy-tailed securities. In real life shorting penny stocks will run into some transaction-costs and availability-to-borrow difficulties, but options are contracts and you can write whatever options you want. So are you saying that selling deep OOM options is a free lunch? As for the rest, you are effectively arguing that EMH is wrong :-) Full disclosure: I am not a fan of EMH.
0Salemicus9y
1. Who says this risk is diversifiable? Nothing in the toy model I gave you said the risk was diversifiable. Maybe all the X-like instruments are correlated.
2. No, I'm not saying that selling deep OOM options is a free lunch, because of the risk profile. And these are definitely not diversifiable.
3. I am not arguing that EMH is wrong. I have given you a toy model, where a suitably defined investor cannot make money but can lose money. The model is entirely consistent with the EMH, because all prices reflect and incorporate all relevant information.
0Lumifer9y
Oh, I thought we were talking about reality. EMH claims to describe reality, doesn't it? As to toy models, if I get to define what classes of investors exist and what do they do, I can demonstrate pretty much anything. Of course it's possible to set up a world where "a suitably defined investor cannot make money but can lose money". And deep OOM options are diversifiable -- there is a great deal of different markets in the world.
0Salemicus9y
Yeah, but you wanted "a scenario where everything is happening pre-tax, there are no transaction costs, we're operating in risk-adjusted terms and, to make things simple, the risk-free rate is zero. Moreover, the markets are orderly and liquid." That doesn't describe reality, so describing events in your scenario necessitates a toy model. In the real world, it is trivial to show how you can lose money even if the EMH is true: you have to pay tax, transaction costs are non-zero, the ex post risk is not known, etc. There's still a lot of correlation. Selling deep OOM options and then running into unexpected correlation is exactly how LTCM went bust. It's called "picking up pennies in front of a steamroller" for a reason.
0Lumifer9y
Fair point :-) But still, with enough degrees of freedom in the toy model, the task becomes easy and so uninteresting. I know. Which means you need proper risk management and capitalization. LTCM died because it was overleveraged and could not meet the margin calls. And LTCM relied on hedges, not on diversification. Since deep OOM options are traded, there are people who write them. Since they are still writing them, it looks like not a bad business :-)
2Davidmanheim9y
Yes. Unless you think that all possible market information is reflected now, before it becomes available, someone makes money when information emerges, moving the market.
0Lumifer9y
Yes, you can (theoretically) make money by front-running the market. But I don't think you can systematically lose money that way (and stay within EMH) and that's the question under discussion.
1ChristianKl9y
If someone is making money by front-running the market another person at the other side of the trade is losing money.
-1Lumifer9y
We're talking about ways to systematically lose money, which means you would need to systematically throw yourself into the front-runner's path, which means you would know where that path is, which means you can systematically forecast the front-running. I think the EMH would be a bit upset by that :-)
1ChristianKl9y
Simply making random trades in a market where some participants are front-runners will mean that some of those trades are with front-runners, where you lose money. I would call that systematically losing money. On the other hand it doesn't give you an ability to forecast where you will lose the money to make the opposite bet and win money. Do you think our disagreement is about the way the EMH is defined or are you pointing to something more substantial?
0Davidmanheim9y
No, no disagreement about EMH, that's exactly the point.
0[anonymous]9y
It seems you shouldn't be able to, since if you had such a system you could use the complement strategy (buy everything else) and make money.
0Lumifer9y
You imply that the market is zero-sum. Some markets are, but a lot are not.
0[anonymous]9y
Correction: You would beat the market.
3VoiceOfRa9y
No, because you can't sell what you don't have.
6Lumifer9y
In the financial markets you can, easily enough.
0VoiceOfRa9y
Sort of. You have to pay someone additional money for the right/ability to do so.
1Lumifer9y
You have to pay a broker to sell what you have as well :-P
0VoiceOfRa9y
A lot less. Also, this further breaks the asymmetry between making and losing money.
0Lumifer9y
I think you're mistaken about that. As an empirical fact, it depends. What you are missing is the mechanism where when you sell a stock short, you don't get to withdraw the cash (for obvious reasons). The broker keeps it until you cover your short and basically pays you interest on the cash deposit. Right now in most of the first world it's minuscule because money is very cheap, but that is not the case always or everywhere. It is perfectly possible to short a stock, cover it at exactly the same price and end up with more money in your account.
0Douglas_Knight9y
Actually, when you short a stock, you must pay an interest rate to the person from whom you borrowed the stock. That interest rate varies from stock to stock, but is always above the risk-free rate. Thus, if you short a stock and do nothing interesting with the cash and eventually cover it at the original price, you will lose money.
0JEB_4_PREZ_20169y
If you enter into a short sale at time 0 and cover at time T, you get paid interest on your collateral or margin requirement by the lender of the asset. This is called the short rebate or (in the bond market) the repo rate. As the short seller, you'll be required to pay the time T asset price along with the lease rate, which is based on the dividends or bond coupons the asset pays out from 0 to T. So, if no dividends/coupons are paid out, it's theoretically possible for you to profit from selling short despite no change in the underlying asset price.
0Douglas_Knight9y
The lease rate is an interest rate (ie, based on time) in addition to the absolute minimum payment of the dividends issued. It is set by the market: there is a limited supply of shares available to be borrowed for shorting. For most stocks it is about 0.3% for institutional investors, but 5% for a tenth of stocks. The point is that this is an asymmetry with buying a stock. Now that I look it up and see that it is 0.3%, I admit that is not so big, but I think it is larger than the repo rate. I see no reason for the lease rate to be related to inflation, so in a high inflation environment, you could make money by shorting a stock that did not change nominal price. (Dividends are not a big deal in shorting because the price of a stock usually drops by the amount of the dividend, for obvious reasons.)
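A rough carry calculation for the scenario both comments describe - a short covered at exactly the same price - with invented rates, ignoring margin, taxes and dividends:

```python
# Net carry on a short: interest earned on the short-sale proceeds (the rebate)
# minus the stock-borrow / lease fee. All rates below are made-up examples.
def short_carry(notional, rebate_rate, borrow_fee, years=1.0):
    return notional * (rebate_rate - borrow_fee) * years

print(short_carry(10_000, rebate_rate=0.04, borrow_fee=0.003))   # +370: high-rate environment, easy borrow
print(short_carry(10_000, rebate_rate=0.001, borrow_fee=0.05))   # -490: cheap money, hard-to-borrow stock
```

So whether a flat-price short ends up ahead or behind comes down to the sign of (rebate minus borrow fee), which is the substance of the disagreement above.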
0VoiceOfRa9y
Maybe if you have the right connections, and the broker really trusts you. The issue is: suppose you short a stock, the price goes up, and you can't cover it. Someone has to assume that risk, and of course will want a risk premium for doing so.
1Lumifer9y
It doesn't have anything to do with connections or broker trust. It's standard operating practice for all broker clients. If the price goes sufficiently up, you get a margin call. If you can't meet it, the broker buys the stock to cover using the money in your account without waiting for your consent. The broker has some risk if the stock gaps (that is, the price moves discontinuously, it jumps directly from, say, $20 to $40), but that's part of the risk the broker normally takes.
0g_pepper9y
Another thing to watch out for when shorting stocks is dividends. If you are short a stock on the ex dividend date, then you have to pay the dividend on each share that you have shorted. However, as long as you keep margin calls and dividends in mind, short selling is a good technique (and an easy one) to play a stock that you are bearish on. And, no, you don't need any special connections, although you typically need to request short-selling privileges on your brokerage account. Another way to play a stock you are bearish on is buying put options. But put options are a lot harder to use effectively because (among other reasons) they become worthless on the expiration date.
3James_Miller9y
No because of taxes, transaction cost, and risk/return issues.
2pcm9y
Yes, for strategies with low enough transaction costs (i.e. for most buy-and-hold like strategies, but not day-trading). It will be somewhat hard for ordinary investors to implement the inverse strategies, since brokers that cater to them restrict which stocks they can sell short (professional investors usually don't face this problem). The EMH is only a loose approximation to reality, so it's not hard to find strategies that underperform on average by something like 5% per year.
0Good_Burning_Plastic9y
The EMH works because everybody is trying to gain money, so everybody except you trying to gain money and you trying to lose money isn't the symmetric situation. The symmetric situation is everybody trying to lose money, in which case it'd be pretty hard indeed to do so. And if everybody except you was trying to lose money and you were trying to gain money it'd be pretty easy for you to do so. I think this would also be the case in absence of taxes and transaction costs. IOW I think Viliam nailed it and other people got red herrings.
-1[anonymous]9y
Hugely important to distinguish between investing and trading here. But the short answer is that it'd be near impossible to lose money systematically without knowing the inverse (more profitable) strategy. Consider the scenario where a 22-year-old teacher named Warren, who knows nothing about finance, takes 80% of his annual salary and buys random stocks with the intent to hold until retirement age (reinvesting all dividends). It would be extraordinarily fluky for him to not make solid returns over the long-run with this approach, let alone break even or lose money, as all publicly traded stocks have reasonably high positive expected value. Now consider derivatives trading. Even if we assume no transaction costs, it'd be near-impossible for Warren to not lose money over the long-run by partaking in lots of random bets with, at best, 0 expected value. Your term, "the market" is problematic because "the market" can act as a bank or a poker table depending on the purchases of the investor. EMH implies that it's not easy to make a living through financial trading. It's incredibly easy to slowly leak money to those who are making a living at it, though. Unlike buying stocks, bonds, etc., financial trading is zero-sum.
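The investing-vs-trading distinction is easy to see in a toy Monte Carlo; this sketch uses invented return and cost numbers purely for illustration, not a model of any actual market:

```python
import random

rng = random.Random(42)

def buy_and_hold(years=40, annual_mean=0.05, annual_sd=0.18):
    wealth = 1.0
    for _ in range(years):
        wealth *= 1 + rng.gauss(annual_mean, annual_sd)   # positive-drift asset
    return wealth

def zero_ev_trading(trades=2000, cost=0.001):
    wealth = 1.0
    for _ in range(trades):
        wealth *= 1 + rng.gauss(0, 0.01) - cost           # fair bet minus a small fee
    return wealth

print(sum(buy_and_hold() for _ in range(1000)) / 1000)      # typically ends up well above 1
print(sum(zero_ev_trading() for _ in range(1000)) / 1000)   # typically ends up well below 1
```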

I just came across this: "You're Not As Smart As You Could Be", about Dr. Samuel Renshaw and the tachistoscope. This is a device used for exposing an image to the human eye for the briefest fraction of a second. In WWII he used it to train navy and artillery personnel to instantly recognise enemy aircraft, apparently with great success. He also used it for speed reading training; this application appears to be somewhat controversial.

I remember the references to Renshaw in some of Heinlein's stories, and I knew he was a real person, but this is t...

4Sarunas9y
The Visual Perception and Reproduction of Forms by Tachistoscopic Methods, Samuel Renshaw
1Richard_Kennaway9y
Thanks.

On the subject of prosociality / wellbeing and religion, a recent article challenges the conventional wisdom by claiming that, depending on the particular situation, atheism might be just as good or even better for prosociality / wellbeing than religion is.

Meta: How come there have been so many posts recently by the generic Username account? More people wanting to preserve anonymity, or just one person who can't be bothered to make an account / own up to most of what they say?

7Username9y
The similar formatting of the comments suggests that in this thread it's mostly one person with a lot of links to share. Personally, I just haven't been bothered to make an account, and have been using the username account exclusively for about 5 years. I'd estimate 30-50% of all the posts on the account were made by me over this timeframe, though writing style suggests to me that a good number of people have used it as a one-shot throwaway, and several people have used it many times.
6Vaniver9y
My dominant hypothesis is at least three people who couldn't be bothered to make accounts, and that this has further normalized the usage of Username as a generic lurker account.
4Elo9y
I think it's a reasonable solution to people not wanting to make an account, or for the occasional anonymous post. I have used it once or twice to make separate comments. But I should add that you can see a list of your nearest meetups if you set your location on your own account. Edit: holy hell, the person who posted all the OT comments here is really annoying and should make an account and stop link-dropping. If the account is being abused that badly we should shut it down and I would change my vote in the poll.
4[anonymous]9y
From the upvote to downvote ratio it looks like more members think the posts by Username in the open thread are worthwhile - at the time of writing they are mostly among the higher top-level comments on this week's open thread, and several have sparked at least a bit of subsequent discussion in the form of follow up comments. True, they're only links (with quoted text) but this doesn't particularly strike me as abuse of the Username account.
2Elo9y
I am suspicious of the link-drop attitude to posting anywhere. Even if it looks to have value added this time.
4ChristianKl9y
That leaves the question of whether that's okay or whether we should simply disable the account. [pollid:1013]
3Dahlen9y
Oh, I wasn't suggesting that; I was just hoping that whoever has been exclusively posting from that account can take a hint and consider using LW the typical way. It's confusing to see so many posts at once by that account and not know whether there's one person or several using it.
0Davidmanheim9y
It's interesting looking at the raw data breakdown of non-anonymous versus anonymous votes.
-1Elo9y
that's creepy; also if you take away all the anonymous votes then there are very few votes (5). That might be normal for polls here. Hard to tell. Also to note; I voted with my account here and it does not appear in the raw poll data. I don't know why.
7Sabiola9y
Anonymous voting is the default, and I always leave it on.
-1Davidmanheim9y
I'd prefer to see accountability be a default, with anonymity whenever desired.
3ChristianKl9y
All votes are done by real accounts. There is a checkbox under a poll that marks whether your vote is anonymous or not; by default it's toggled for anonymous votes.
[-][anonymous]9y50

How soon do people who comment upon something here know the answer - do they know it at once? For example, the valuable advice on statistics I received several times seems to be generated by pattern-recognition (at least at my level of understanding). I myself often have to spend more time framing my comments than actually recognizing what I want to express (not that I always succeed, but there's an internal mean-o-meter which says, this is it). OTOH, much of the material I simply don't understand, not having sufficient prerequisite knowledge; the polls are aimed at the are...

2satt9y
A lot of my comments here are correcting/supplementing/answering someone else's comment. Reflecting on how I think the typical sequence goes, it might be something like

* as I read a comment, get a sensation of "this seems prima facie wrong" or "that sounds misleading" or whatever
* finish reading, then re-read to check I'm not misunderstanding (and sometimes it turns out I have misunderstood)
* translate my gut feeling of wrongness into concrete criticism(s)
* rephrase & rephrase & rephrase & rephrase what I've written to try to minimize ambiguity and maybe adjust the politeness level

and so it's hard to say how long it takes me to "mostly know what I am going to say". I often have a vague outline of what I ought to say within 10 or 20 seconds of noticing my feeling that Something's Wrong, but it can easily take me 10 or 20 minutes to actually decide what I'm going to say. For instance, when I read this comment, I immediately thought, "I don't think that can be right; Russia's a violent country and some wars are small", but it took me a while (maybe an hour?) to put that into specific words, and decide which sources to link.

Edit to add: I agree that pattern recognition plays an important part in this. A big part of expertise, I reckon, is just planting hundreds & hundreds of pattern-recognition rules into your brain so when you see certain errors or fallacies you intuitively recognize them without conscious effort.
1[anonymous]9y
I am somewhat afraid, then, that reading about fallacies won't change my ability to recognize them significantly. Perhaps 'rationality training' should really focus on the editing part, not on the recognizing part. I'll add another question.
0satt9y
Depends how your mind works, I guess. I read about fallacies when I was young and I feel like that helped me recognize them, even without much deliberate practice in recognizing them (but I surely had a lot of accidental & semi-accidental practice). Recognition is probably more important than the editing part, because the editing part isn't much use without having the "Aha! That's probably a fallacy!" recognitions to edit, and because you might be able to do a good job of intuitively recognizing fallacies even if you can't communicate them to other people cleanly & unambiguously.

Is there a good book about how to read scientific papers? A book that neither says that papers should never be trusted nor is oblivious to the real world, where research often doesn't replicate?

One that goes deeper than just the password of "correlation isn't causation"? That not only looks at theoretical statistics but is more empirical about heuristics for trusting papers to successfully replicate?

4WalterL9y
I mean, it's not like you couldn't already send mail to the sheriff. A stylish flyer is just a reminder that it's possible. Good for them.
0MrMind9y
Uhm, I wonder if they are aware that the prisoner's dilemma is defeated through pre-commitment. They are weeding out small dealers and strengthening the big ones.
0Lumifer9y
I think the police are mostly playing a PR game and/or amusing themselves. The idea of ratting on a competitor is simple enough to occur to drug dealers "naturally" :-) Also note that this is not quite a PD where defecting gives you a low-risk, slightly positive outcome. Becoming a police informer is... frowned upon on the street and is actually a high-risk move, usually taken to avoid a really bad outcome.
2Tem429y
I would expect that it is slightly more than a PR stunt; it seems to me that most of the people who will use this 'service' are disgruntled citizens with no direct connection to the drug trade. Anyone who wants to accuse someone of trading in drugs now has an easy, anonymous, officially sanctioned way to do so, and clear instruction as to what information is most useful -- without having to ask! I suspect that framing it as "drug dealers backstabbing drug dealers" is just a publicly acceptable way to introduce a snitching program that would otherwise be frowned upon by many.
1Lumifer9y
"If you see something, tell us" kind of thing? Maybe, that makes some sense. I wonder how good that police department is at dealing with false positives X-/
-1MrMind9y
I'm always curious: since it's just one person who downvoted, care to explain why? So I may improve...
-1Salemicus9y
Pre-commitment needs to be credible, verifiable and enforceable. If you're playing chicken, pre-commitment means throwing the steering-wheel out of the window, not just saying "I will never ever swerve, pinky-swear." What is the relevant pre-commitment mechanism here, and how does it operate? If anything, I would say large dealers are more vulnerable.
3MrMind9y
Affiliation with a powerful criminal organization that can kill you if you rat or can bail you out if you cooperate. Basically, the suckers at the bottom get caught while those who deal for the Mob face less competition. In the most powerful flavor of the Italian Mafia, affiliates call themselves "men of honor".
[-][anonymous]9y50

I would like to see some targeted efforts to make the Sequences and other rationality materials available to less aspirational, curious or intellectual audiences.

Rationality fiction reaches out to curious audiences. Intellectual audiences may stumble upon rationality material while researching their respective fields of interest. Aspirants to rationality may stumble upon it while looking to better themselves and those around them.

Many ordinary people can benefit from the concepts here. And they will likely find their way to it, should there be an evident b... (read more)

5[anonymous]9y
Have a look at the postings by Gleb Tsipursky, who has been deeply involved in exactly such an enterprise: trying to bring rationality to much more widespread ("ordinary") audiences through a nonprofit organisation, "Intentional Insights". It is a controversial goal, and he's certainly faced criticism here on LW for the approach he takes, but it is very close to the main ideas expressed in your comment. (I am not affiliated with Gleb or Intentional Insights, just thought it was relevant enough to mention in this context.)
0ChristianKl9y
Why read him over any basic textbook on the subject?
0[anonymous]9y
Most of the basic education textbooks aren't nearly as thorough.

Being a comparatively new user, and thus having limited karma, I can't engage fully with The Irrationality Game. Seeing as how it's about 5 years out of date, is there any interest in playing the game anew? Are there rules on who should/can post such things?

0Zian9y
Looks interesting. Feel free to try.
0ChristianKl9y
No. You are free to start new threads like this in Discussion. Karma votes on the new thread will tell you to what extent the community is happy that you started a new thread. If you find yourself posting threads that get negative karma, try to understand why they get negative karma and don't repeat the mistakes.
0Tem429y
My question was actually a bit more targeted - I should have been more precise. Will_Newsome posted the original Irrationality Game, and he has left the site (well, hasn't posted for months. Perhaps I need to PM him and ask if he's still around). His original post was really very well written, and while I could re-write it, I would probably not change much. So basically, if I repost the idea of an established user who is no longer around... Is that really okay? I would have no objection to posting under Username, if that made it 'more okay', and I wouldn't mind at all if someone else posted it rather than I -- I just want to play an active version of the game. I will also double-check and see if Will_Newsome might still be on-site and interested.

Augur -- a blockchain general-purpose prediction market running on Ethereum.

Does anyone know anything about it? Gwern..?

6gwern9y
Yes, I've paid close attention to Truthcoin and it. They are both interesting projects with a chance of success, although it's hard to make any strong predictions or claims before they are up and running, in part because of the running feud between Paul and the Augur guys. (For example, they both seem to agree that the original consensus/clustering algorithm using SVD/PCA will not work in an adversarial setting, but will Augur's new clustering algorithm succeed? It comes with no formal proofs other than that it seems to work in the simulations; Paul seems to dislike it but has not, in any of his rants that I've seen, explained why he thinks it will not work or what a better solution would be.) I will probably buy a bit of the Augur crowdsale so I can try it out myself.

Are there any guidelines for making comprehensive predictions?

Calibration is good, as is accuracy. But if you never even thought to predict something important, it doesn't matter if you have perfect calibration and accuracy. For example, Google recently decided to restructure, and I never saw this coming.

I can think of a few things. One is to use a prediction service like PredictionBook that aggregates predictions from many people. I never would have considered half the predictions on the site. Another is to get in the habit of recognizing when you don't t... (read more)

1Lumifer9y
Are you asking how to generate a universe of possible outcomes to consider, basically?
0btrettel9y
Yes, that's one way to put it. The main restriction would be to pick "important" predictions, whatever "important" means here. One other idea I just had would be to make a list of general questions you can ask about anything along with a list of categories to apply these questions to.
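As a toy sketch of that last idea (the categories and questions below are just placeholder examples, not a recommended list): cross a short checklist of generic questions with a list of domains you care about, and you force yourself to at least consider predictions you'd otherwise never think to make.

```python
from itertools import product

# Placeholder domains and questions -- substitute whatever actually matters to you.
categories = ["my job", "my health", "major tech companies", "national politics"]
questions = [
    "What is the most likely big change here in the next year?",
    "What would surprise me most here?",
    "What is the status quo, and what probability do I give it persisting?",
]

# Every (domain, question) pair becomes a prompt for a concrete prediction.
for category, question in product(categories, questions):
    print(f"[{category}] {question}")
```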
2Lumifer9y
I don't know if there is any useful algorithm here. The space of possibilities is vast, black swans lurk at the outskirts, and Murphy is alive and well :-/ You can try doing something like this:

* List the important (to you) events or outcomes in some near future
* List everything that could potentially affect these events or outcomes

and you get your universe of "events of interest" to assign probabilities to. I doubt this will be a useful exercise in practice, though.
0btrettel9y
Yes, upon reflection, it seems that something along these lines is probably the best I can do, and you're right that it probably will not be useful. I'll give it a try and evaluate whether I'd want to try again.

Has there been discussion of Jack Good's principle of nondogmatism? (see Good Thinking, page 30).

The principle, stated simply in my bastardized version, is to believe no thing with probability 1. It seems to underlie Good's type 2 rationality (to maximize expected utility, within reason).

This is (almost) in accord with Lindley's concept of Cromwell's rule (see Lindley's Understanding Uncertainty or https://en.wikipedia.org/wiki/Cromwell%27s_rule). And seems to be closely related to Jaynes' mind projection fallacy.
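A minimal numerical illustration of why the principle matters (the numbers are invented, and this is just Bayes' theorem): a prior of exactly 0 or 1 can never be moved by any evidence, whereas an extreme-but-not-certain prior still updates.

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' theorem: P(H|E) = P(E|H)P(H) / (P(E|H)P(H) + P(E|~H)P(~H))."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

# Evidence E that is 100x more likely if H is false than if H is true.
print(posterior(0.999, 0.01, 1.0))  # ~0.91: a near-dogmatic prior still moves
print(posterior(1.0, 0.01, 1.0))    # 1.0: probability 1 never updates, no matter the evidence
```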

5Tem429y
There have been discussions on this topic, although perhaps not framed as nondogmatism. If you have not read 0 and 1 are not probabilities and infinite certainty, you might find them and related articles interesting.
-1[anonymous]9y
Meeehhhh. Believe nothing empirical with probability 1.0. Believe formal and analytical proofs with probability 1.0.
8JoshuaZ9y
Have you never seen an apparently valid mathematical proof that you later found an error in?
-2[anonymous]9y
It's common sense to infer that someone is talking about valid proofs when they talk about believing in proofs.
3JoshuaZ9y
That is the problem in a nutshell: how do you know it is a valid proof? All the time, one thinks a proof is valid and it turns out one is wrong.
6Stephen_Cole9y
I get your point that we can have greater belief in logical and mathematical knowledge. But (as pointed out by JoshuaZ) I have seen too many errors in proofs given at scientific meetings (and in submitted publications) to blindly believe just about anything.
-1[anonymous]9y
That wasn't quite my point. As a simple matter of axioms, if you condition on the formal system, a proven theorem has likelihood 1.0. Since all theorems are ultimately hypothetical statements anyway, conditioned on the usefulness of the underlying formal system rather than a Platonic "truth", once a theorem is proved, it can be genuinely said to have probability 1.0.
0Stephen_Cole9y
I will assume by likelihood you meant probability. I think you have removed my concern by conditioning on it. The theorem has probability 1 in your formal system. For me that is not probability 1; I don't give any formal system full control of my beliefs/probabilities. Of course, I believe arithmetic with probability approaching 1. For now.

Important question that might affect the chances of humanity's survival:

Why is Bostrom's owl so ugly? I'm not much of a designer, but here's my humble attempt at a redesign :-)

4ChristianKl9y
Your owl looks cute and not scary. Framing AGIs as cute seems to go against the point.
3cousin_it9y
Aha, that answers my question. I didn't realize that Bostrom's owl represented superintelligence, so I chose mine to be more like a reader stand-in. If the owl is supposed to look scary and wrong, reminiscent of Langford's basilisk, then I agree that the original owl does the job okay. Though that didn't stop someone on Tumblr from being asked "why are you reading an adult coloring book?", which was the original impetus for me. Is it possible to find an image that will look scary and wrong, but won't look badly drawn? Does anything here fit the bill?
2ChristianKl9y
There's the parable of sparrows who raise an owl: https://www.youtube.com/watch?v=7rRJ9Ep1Wzs That owl likely made it onto the cover. I don't think the owl has anything to do with the owls in the study hall ;)
6cousin_it9y
OK, here's my next attempt with a well-drawn owl that looks scary instead of cute. What do you think?
5g_pepper9y
I actually like Bostrom's owl. I've always thought that Superintelligence has a really good cover illustration.
3cousin_it9y
I like it too, because it has character, which few pictures do. But the asymmetrical distorted face just bugs me. And the ketchup stains on the collar. And the composition problems (lack of space below the subtitle, timid placement of trees, etc.) For some reason my brain didn't see them as creative choices, but as mechanical problems that need fixing. Maybe I'm oversensitive.
0gjm9y
Stating the obvious: I don't think those stains are meant to be ketchup. (And maybe "owl with bloodstains" feels scarier than "owl with ketchup stains".)
0Lumifer9y
They don't look like blood either.
0gjm9y
Well, if we're going to be picky, neither does the owl look like an owl. It's not that sort of picture. But I suggest that "blood" is a more likely answer to "what do those red bits indicate?" than anything like "ketchup".
0Lumifer9y
But it does. It's stylised, but it is certainly an owl, not a crow, a hawk, or a tit. As to the reddish brown bits, they match the colour of the pixel droppings in the bottom left of the cover, I think. Hard to say what was in the mind of the graphic designer...
0gjm9y
Perhaps I wasn't clear. The red looks (I think) about as much like a bloodstain as the owl looks like an owl. That is: no one would mistake one for the other, but the resemblance is there and you can certainly take one as an indication of the other.
0Lumifer9y
That might be meant as a reminder of the inhumanity.
1jam_brand9y
I strongly dislike this. The head seems too ornate and the outline reminds me of so-called "tribal" tattoos, which seems low status. The body being subtly asymmetrical is a slight annoyance as well and with the owl now being centered in the image I think the subtitle should be too.
0Tem429y
This looks very good. The feet perched on thin air look a little off. You should probably check with the presumed copyright holder, although I suspect that she plagiarized the head design.
0ChristianKl9y
That looks nice, but I wouldn't trust my aesthetic judgement too much.

There's a new article on academia.edu on potential biases amongst philosophers of religion: Irrelevant influences and philosophical practice: a qualitative study.

Abstract:

To what extent do factors such as upbringing and education shape our philosophical views? And if they do, does this cast doubt on the philosophical results we have obtained? This paper investigates irrelevant influences in philosophy through a qualitative survey on the personal beliefs and attitudes of philosophers of religion. In the light of these findings, I address two questions: an e

... (read more)
0g_pepper9y
I would expect a person's education to shape his/her philosophical views; if one's philosophy is not shaped by one's education, then one has had a fairly superficial education.
0iarwain19y
She means that you're biased towards the way you were taught vs. alternatives, regardless of the evidence. The example she gives (from G.A. Cohen) is that most Oxford grads tend to accept the analytic / synthetic distinction while most Harvard grads reject it.
0g_pepper9y
Yes, I got that from reading the paper. However, the wording of the abstract seems quite sloppy; taken at face value it suggests that a person's education, K-postdoc (not to mention informal education) should have no influence on the person's philosophy. Moreover, the paper's point (illustrated by the Cohen example) is not really surprising; one's views on unanswered questions are apt to be influenced by the school of thought in which one was educated - were this not the case, the choice of what university to attend and which professor to study under would be somewhat arbitrary. Moreover, I don't think that she made a case that philosophers are ignoring the evidence, only that the philosopher's educational background continues to exert an influence throughout the philosopher's career. From a Bayesian standpoint this makes sense - loosely speaking, when the philosopher leaves graduate school, his/her education and life experience to that point constitute his/her priors, which he/she updates as new evidence becomes available. While the philosopher's priors are altered by evidence, they are not necessarily eliminated by evidence. This is not problematic unless overwhelming evidence one way or the other is available and ignored. The fact that whether or not to accept the analytic / synthetic distinction is still an open question suggests that no such overwhelming evidence exists - so I am not seeing a problem with the fact that Oxford grads and Harvard grads tend (on average) to disagree on this issue.

Omega places a button in front of you. He promises that each press gives you an extra year of life, plus whatever your discounting factor is. If you walk away, the button is destroyed. Do you press the button forever and never leave?

7MrMind9y
That's a variant of a known problem in any decision theory that admits unbounded utility: there's something inside a box whose utility increases every minute, but the increase stops when you open the box and get to enjoy it. When do you open the box?
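A quick sketch of why this is pathological (assuming, purely for illustration, that the box is worth t utils if opened after t minutes): for every candidate opening time, waiting one more minute is strictly better, so no finite plan is optimal, yet never opening yields nothing.

```python
# Hypothetical utility model, purely for illustration: opening the box at minute t is worth t utils.
def value_of_opening_at(t):
    return t  # unbounded and strictly increasing in t

# Every finite stopping time is dominated by waiting one more minute...
for t in [1, 10, 100, 10**6]:
    assert value_of_opening_at(t + 1) > value_of_opening_at(t)

# ...so "open at minute t" is never optimal for any finite t,
# while the limiting policy "never open" gets you nothing at all.
```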
7philh9y
A similar problem is: pick a number. Gain that many utilons.
2g_pepper9y
That's when Scott Aaronson's essay Who Can Name the Bigger Number comes in handy!
2NoSuchPlace9y
Since I don't spend all my time inside avoiding every risk hoping for someone to find the cure to aging, I probably value an infinite life a large but finite amount times more than a year of life. This means that I must discount in such a way that after a finite number of button presses Omega would need to grant me an infinite life span. So I perform some Fermi calculations to obtain an upper bound on the number of button presses I need to obtain immortality, press the button that often, then leave.
1shminux9y
Assuming those are QALYs, not just years, spend a week or so pressing the button non-stop, then use the extra million years to become Omega.

A question that I noticed I'm confused about. Why should I want to resist changes to my preferences?

I understand that it will reduce the chance of any preference A being fulfilled, but my answer is that if the preference changes from A to B, then at that time I'll be happier with B. If someone told me "tonight we will modify you to want to kill puppies," I'd respond that by my current preferences that's a bad thing, but if my preferences change then I won't think it's a bad thing any more, so I can't say anything against it. If I had a button tha... (read more)

If I offered you now a pill that would make you (1) look forward to suicide, and (2) immediately kill yourself, feeling extremely happy about the fact that you are killing yourself... would you take it?

0AstraSequi9y
No, but I don’t see this as a challenge to the reasoning. I refuse because of my meta-preference about the total amount of my future-self’s happiness, which will be cut off. A nonzero chance of living forever means the amount of happiness I received from taking the pill would have to be infinite. But if the meta-preference is changed at the same time, I don’t know how I would justify refusing.
5Richard_Kennaway9y
Because that way leads to

* wireheading
* indifference to dying (which wipes out your preferences)
* indifference to killing (because the deceased no longer has preferences for you to care about)
* readiness to take murder pills

and so on. Greg Egan has a story about that last one: "Axiomatic". Whereupon I wield my Cudgel of Modus Tollens and conclude that one can and must have preferences about one's preferences.

So much for the destructive critique. What can be built in its place? What are the positive reasons to protect one's preferences? How do you deal with the fact that they are going to change anyway, that everything you do, even if it isn't wireheading, changes who you are? Think of yourself at half your present age — then think of yourself at twice your present age (and for those above the typical LessWrong age, imagined still hale and hearty). Which changes should be shunned, and which embraced?

An answer is visible in both the accumulated wisdom of the ages[1] and in more recently bottled wine. The latter is concerned with creating FAI, but the ideas largely apply also to the creation of one's future selves. The primary task of your life is to create the person you want to become, while simultaneously developing your idea of what you want to become.

[1] Which is not to say I think that Lewis' treatment is definitive. For example, there is hardly a word there relating to intelligence, rationality, curiosity, "internal" honesty (rather than honesty in dealing with others), vigour, or indeed any of Eliezer's "12 virtues", and I think a substantial number of the ancient list of Roman virtues don't get much of a place either. Lewis has sought the Christian virtues, found them, and looked no further.
0AstraSequi9y
I already have preferences about my preferences, so I wouldn’t self-modify to kill puppies, given the choice. I don’t know about wireheading (which I don’t have a negative emotional reaction toward), but I would resist changes for the others, unless I was modified to no longer care about happiness, which is the meta-preference that causes me to resist. The issue is that I don’t have an “ultimate” preference that any specific preference remain unchanged. I don’t think I should, since that would suggest the preference wasn’t open to reflection, but it means that the only way I can justify resisting a change to my preferences is by appealing to another preference. I know about CEV, but I don’t understand how it answers the question. How could I convince my future self that my preferences are better than theirs? I think that’s what I’m doing if I try to prevent my preferences from changing. I only resist because of meta-preferences about what type of preferences I should have, but the problem recurses onto the meta-preferences.
0Richard_Kennaway9y
Do you need one? If you keep asking "why" or "what if?" or "but suppose!", then eventually you will run out of answers, and it doesn't take very many steps. Inductive nihilism — thinking that if you have no answer at the end of the chain then you have no answer to the previous step, and so on back to the start — is a common response, but to me it's just another mole to whack with Modus Tollens, a clear sign that one's thinking has gone wrong somewhere. I don't have to be able to spot the flaw to be sure there is one. Your future self is not a person as disconnected from yourself as the people you pass in the street. You are creating all your future yous minute by minute. Your whole life is a single, physically continuous object: Robert Heinlein, "Life-line" Do you want your future self to be fit and healthy? Well then, take care of your body now. Do you wish his soul to be as healthy? Then have a care for that also.
4Squark9y
"I understand that it will reduce the chance of any preference A being fulfilled, but my answer is that if the preference changes from A to B, then at that time I'll be happier with B". You'll be happier with B, so what? Your statement only makes sense of happiness is part of A. Indeed, changing your preferences is a way to achieve happiness (essentially it's wireheading) but it comes on the expense of other preferences in A besides happiness. "...future-me has a better claim to caring about what the future world is like than present-me does." What is this "claim"? Why would you care about it?
0AstraSequi9y
I don’t understand your first paragraph. For the second, I see my future self as morally equivalent to myself, all else being equal. So I defer to their preferences about how the future world is organized, because they're the one who will live in it and be affected by it. It’s the same reason that my present self doesn’t defer to the preferences of my past self.
1Squark9y
Your preferences are by definition the things you want to happen. So, you want your future self to be happy iff your future self's happiness is your preference. Your ideas about moral equivalence are your preferences. Et cetera. If you prefer X to happen and your preferences are changed so that you no longer prefer X to happen, the chance X will happen becomes lower. So this change of preferences goes against your preference for X. There might be upsides to the change of preferences which compensate for the loss of X. Or not. Decide on a case by case basis, but ceteris paribus you don't want your preferences to change.
2Tem429y
As far as I am aware, people only resist changing their preferences because they don't fully understand the basis and value of their preferences and because they often have a confused idea of the relationship between preferences and personality. Generally you should define your basic goals and change your preference to meet them, if possible. You should also be considering whether all your basic goals are optimal, and be ready to change them. You may find that you do have a moral system that is more consistent (and hopefully, more good) if you maintain a preference for not-killing puppies. Hopefully this moral system is well enough thought-out that you can defend keeping it. In other words, your preferences won't change without a good reason. This is a bad thing. If you have a good reason to change your preferences (and therefore your actions), and you block that reason, this is a sign that you need to understand your motivations better. I think you may be assuming that the person modifying your preferences is doing so both 'magically' and without reason. Your goal should be to kill this person, and start modifying your preferences based on reason instead. On the other hand, if this person is modifying your preferences through reason, you should make sure you understand the rhetoric and logic used, but as long as you are sure that what e says is reasonable, you should indeed change your preference. Of course, another issue may be that we are using 'preference' in different ways. You might find the act of killing puppies emotionally distasteful even if you know that it is necessary. It is an interesting question whether we should work to change our preferences to enjoy things like taking out the trash, changing diapers, and killing puppies. Most people find that they do not have to have an emotional preference for dealing with unpleasant tasks, and manage to get by with a sense of 'job well done' once they have convinced themselves intellectually that a task nee
0AstraSequi9y
Yes, that’s the approach. The part I think is a problem for me is that I don’t know how to justify resisting an intervention that would change my preferences, if the intervention also changes the meta-preferences that apply to those preferences. When I read the discussions here on AI self-modification, I think: why should the AI try to make its future-self follow its past preferences? It could maximize its future utility function much more easily by self-modifying such that its utility function is maximized in all circumstances. It seems to me that timeless decision theory advocates doing this, if the goal is to maximize the utility function. I don’t fully understand my preferences, and I know there are inconsistencies, including acceptable ones like changes in what food I feel like eating today. If you have advice on how to understand the basis and value of my preferences, I’d appreciate hearing it. I’m assuming there aren’t any side effects that would make me resist based on the process itself, so we can say that’s “magical”. Let’s say they’re doing it without reason, or for a reason I don’t care about, but they credibly tell me that they won’t change anything else for the rest of my life. Does that make a difference? I’m defining preference as something I have a positive or negative emotional reaction about. I sometimes equivocate with what I think my preferences should be, because I’m trying to convince myself that those are my true preferences. The idea of killing puppies was just an example of something that’s against my current preferences. Another example is “we will modify you from liking the taste of carrots to liking the taste of this other vegetable that tastes different but is otherwise identical to carrots in every important way.” This one doesn’t have any meta-preferences that apply.
0Tem429y
I see that this conversation is in danger of splitting into different directions. Rather than make multiple different reply posts or one confusing essay, I am going to drop the discussion of AI, because that is discussed in a lot of detail elsewhere by people who know a lot more than I. We are using two different models here, and while I suspect that they are compatible, I'm going to outline mine so that you can tell me if I'm missing the point. I don't use the term meta-preferences, because I think of all wants/preferences/rules and general preferences as having a scope. So I would say that my preference for a carrot has a scope of about ten minutes, appearing intermittently. This falls under the scope of my desire to eat, which appears more regularly and for greater periods of time. This in turn falls under the scope of my desire to have my basic needs met, which is generally present at all times, although I don't always think about it. I'm assuming that you would consider the latter two to be meta-preferences. I would assume that each preference has a value to it. A preference to eat carrots has very little value, being a minor aesthetic judgement. A preference to meet your basic needs would probably have a much higher value to it, and would probably go beyond the aesthetic. If it were easy for me to modify my preferences away from cheeseburgers, I could find a clear reason (or ten) to do so. I justify it by appealing to my higher-level preferences (I would like to be healthier). My preference to be healthier has more value than a preference to enjoy a single meal -- or even 100 meals. But if it were easy to modify my preferences away from carrots, I would have to think twice. I would want a reason. I don't think I could find a reason. I would set up an example like this: I like carrots. I don't like bell peppers. I have an opportunity to painlessly reverse these preferences. I don't see any reason to prefer or avoid this modification. It makes sense for me to

What examples are there of jobs which can make use of high general intelligence, that at the same time don't require rare domain-specific skills?

I have some years of college left before I'll be a certified professional, and I'm good but not world-class awesome at a variety of things, yet judging by encounters with some well and truly employed people, I find myself wondering how come I'm either not employed or duped into working for free, while these doofuses have well-paying jobs. The answer tends to be, for lack of trying on my part, but it would be quite... (read more)

3btrettel9y
Programming is a skill, but not a particularly rare one. Beyond a certain level of intelligence, I don't think there's much if any correlation between programming ability and intelligence. Moreover, I think programming is one area where standard credentials don't matter too much. If you have a good project on GitHub, that can be enough. gwern wrote something related before: Personally, I think going off raw intelligence doesn't work so well, especially if you'll be reinventing the wheel because of your lack of domain knowledge. Getting rare skills which are in demand is a smart strategy, and you'd be better off going that route. Here's a good book built on that premise.
2ChristianKl9y
There are plenty of people in Mensa who don't have high-paying jobs.
0Dahlen9y
Possibly, but how about any job at all?
1Lumifer9y
A manager :-) A business manager, a small business owner, a civil servant, a dictator, a leader of the free world :-/ Generally speaking, there is something of a Catch-22 situation. The low-level entry jobs are easy to get into, but they don't really care about your intelligence. But high-level jobs where intelligence matters require demonstration not only of intelligence, but also of the ability to use it which basically means they want to see past achievements and accomplishments. There are shortcuts, but they are usually called "graduate schools".
0ChristianKl9y
In Germany, technical telephone support would be a low-level job where intelligence is useful, but I don't know to what extent that exists in the US, where the language situation is different.
0VoiceOfRa9y
In the US those jobs tend to be outsourced to other English speaking countries with lower wages, most commonly India.
0shminux9y
Apply your general intelligence to figuring out what you are especially good at, then see if there are relevant paid jobs.
1WalterL9y
I think he's trying to do that, by making this post. @OP: the best place I've seen for lazy smart people to make money is in coding jobs. If a 4-year college is out, go to an online code learning place and get some nonsense degree (App Academy, or whatevs). Then apply a bunch. If you have a friend who is a coder, see if they have a hookup. Once you have a job, the only way to lose it is to be aggressively inept or engage in one of the third-rail categories of HR: racism, sexism, or any other ism.
1VoiceOfRa9y
Or for the company you work for to go bust.

An Introverted Writer’s Lament by Meghan Tifft

Whether we’re behind the podium or awaiting our turn, numbing our bottoms on the chill of metal foldout chairs or trying to work some life into our terror-stricken tongues, we introverts feel the pain of the public performance. This is because there are requirements to being a writer. Other than being a writer, I mean. Firstly, there’s the need to become part of the writing “community”, which compels every writer who craves self respect and success to attend community events, help to organize them, buzz over

... (read more)

This is interesting, but I think that it is using an incorrect definition of introversion. I interpret an introvert as someone who prefers to spend time by themselves or in situations in which they are working on their own, rather than in situations in which they are interacting with other people. This does not mean that they necessarily need to feel extreme stress at public speaking or at parties/social events. They may feel bored, annoyed, frustrated, or indifferent to these events, or they may even like them, but feel the opportunity cost of the time they take is not really worth it.

"our terror-stricken tongues, we introverts feel the pain of the public performance"; "blitzed nerves and staggering bowels"; "We bully ourselves into it. We dose ourselves with beta blockers. We drink. We become our own worst enemies"

This doesn't sound like introversion. This sounds like an anxiety disorder.

4WalterL9y
Hmm, I generally read introvert as "recharges when alone", whereas extrovert "recharges with others". I don't usually associate introvert with being unable to do public speaking. That's a phobia, isn't it?

Change your name by Paul Graham

If you have a US startup called X and you don't have x.com, you should probably change your name.

The reason is not just that people can't find you. For companies with mobile apps, especially, having the right domain name is not as critical as it used to be for getting users. The problem with not having the .com of your name is that it signals weakness. Unless you're so big that your reputation precedes you, a marginal domain suggests you're a marginal company. Whereas (as Stripe shows) having x.com signals strength even if

... (read more)
4[anonymous]9y
This seems to me a clear case of reversing (most of) the causation.
7drethelin9y
turns out when you're a billion dollar startup you can afford to buy the .com of your name regardless of what it is.
2[anonymous]9y
Exactly.
6[anonymous]9y
Which makes it a good target for signalling. If you want to seem strong, you get the domain.
0[anonymous]9y
Yes, but I don't see why Paul thinks that's a good thing when you're actually not strong. Usually, I think his advice is spot on, but in this case his advice that you want to signal that you're strong when you're actually not seems backwards. You don't want to be seen as a credible threat to competitors until you're ACTUALLY able to defend yourself.
4[anonymous]9y
I have no experience with startups, but I imagine most startups fail because of apathy (from either customers or investors), rather than enemy action.
0[anonymous]9y
That's true... I wonder, would a .com provoke non-apathy?

I see yet another problem with the Singularity. Say that a group of people manages to ignite it. Until the day before, they, the team, were forced to buy their food and everything else. Now, what does the baker or pizza guy have to offer them anymore?

The team has everything to offer to everybody else, but everybody else has nothing to give them back as payment for the services.

The "S team" may decide to give a colossal charity. A bigger one than everything we currently all combined poses. To each. That, if the Singularity is any good, of course.

But, will they really do that?

They might decide not to. What then?

3Richard_Kennaway9y
They take over and rule like gods forever, reducing the mehums to mere insects in the cracks of the world.
-2Thomas9y
Yes. A farmer does not want to give a bushel of wheat to these "future Singularity inventors" for free. Those guys may starve to death for all he cares, if they don't pay for the said bushel of wheat with good money. They understand that. Now, they don't need any wheat anymore. Nor anything else this farmer has to offer. Or anybody else, for that matter. Commerce has stopped here, and they see no reason to give tremendous gifts around. They have paid for their wheat, vino and meat. Now, they are not shopping anymore. The farmer should understand.
4Richard_Kennaway9y
The farmer will never know about these "Singularity inventors". The inventors themselves may not know. The scenario presumes that the "Singularity inventors" have control of their "invention" and know that it is "the creation of the Singularity". The history of world-changing inventions of the past suggests that no-one will be in control of "the Singularity". No-one at the time will know that that is what it is, and will participate in whatever it looks like according to their own local interests. The farmer will not know about the Singularity, but he's probably on Facebook.
0[anonymous]9y
Except for all the people on this site, who talk nonstop about deliberately setting off such a thing?
2Richard_Kennaway9y
"Why, so can I, and so can any man but will they foom when you do conjure them?"
4Salemicus9y
The annotated RichardKennaway: This is a quote from Henry IV Part I, when Glendower is showing off to the other rebels, claiming to be a sorcerer, and Hotspur is having none of it.

Glendower: I can call spirits from the vasty deep.
Hotspur: Why, so can I, or so can any man; But will they come when you do call for them?
0[anonymous]9y
As a happy coincidence...
0ChristianKl9y
Most of us don't interact at all in our daily lives with farmers. It's pretty senseless to speak about them. Western countries also don't let people starve to death but generally have the goal of feeding their population. Especially when it comes to capable programmers.
1Thomas9y
Just for the sake of the discussion. It could be a team of millionaire programmers, as well. And not farmers, but doctors and lawyers on the other side. Commerce, the division of labor, stops there, at the S moment. Every exchange of goods stops as well. Except some giga-charity, which may or may not happen.
0ChristianKl9y
Having the discussion on examples that are wrong is bad because it leads to bad intuitions. Not all interactions between people are commerce. People take plenty of actions that benefit other people that aren't about commerce.
2Lumifer9y
You are assuming that the S team is in full control of their Singularity which is not very likely.
1WalterL9y
It feels pretty likely to me. An AI that grows ever more effective at optimizing its futures will not suddenly begin to question its goals. If so, whoever pulled off the creation of the AI is responsible for the future, based on what they wrote into the "goal list" of the proto-AI. One part of the "goal list" is going to be some equivalent of "always satisfy Programmer's expressed desires" and "never let communication with Programmer lapse", to allow for fixing the problem if the AI starts turning people into paper clips. Side effect: Programmer is now God, but presumably (s)he will tolerate this crushing burden for the first few thousand years.
3ChristianKl9y
You can mess people up quite easily while still satisfying their expressed desires. The AGI can also talk the programmers into whatever position it considers reasonable. You just forbade the AGI from allowing the programmer to sleep.
0WalterL9y
Sure, it can mindclub people, but it'll only do that if it wants to, and it will only want to if they tell it to. AI should want to stay in the box. I guess...the "communication lapse" thing was unclear? I didn't mean that the human must always be approving the AI, I meant that it must always be ready/able to receive the programmer's input. In case it starts to turn everyone into paperclips there's a hard "never take action to restrict us from instructing you/ always obey our instructions" clause.
3ChristianKl9y
No, an AGI is complex; it has millions of subgoals. Putting the programmer in a closed environment where he's wireheaded doesn't technically restrict the programmer from instructing the AGI. It's just that the programmer's mind is occupied differently. That's what you tell the AGI to do. It's easiest to satisfy the programmer's expressed desires if the AGI cuts him off from the outside world and controls the expressed desires of the programmer.
1Tem429y
Also, anything that restricts the AI's power would restrict its ability to obey instructions. An attempt by the programmer to shut down the AI would result in a contradiction, which could be resolved in all sorts of interesting ways.
0WalterL9y
This is turning out to be harder to get across than I figured. First you thought I thought an AI should keep its programmers awake until they died, now it should wirehead them? I'm not an orc. I conjecture that when you set an AI to start doing its thing, after endless simulations and consideration of whatever goals you've given it, you tell it not to dick with you, so that if you've accidentally made a murder-bot, you can turn it off. The alternative is to have complete confidence in your extended testing. Which you presumably come close to (since you are turning on an AI), but why not also have the red button? What does it hurt? It isn't trying to figure out clever ways to get around your restriction, because it doesn't want to. The world in which it pursues whatever goal you've given it is one in which it will double never try and hide anything from you or change what you'd think of it. It is, in a very real sense, showing off for you.
3ChristianKl9y
You set two goals. One is to maximize expressed desires, which likely leads to wireheading. The other is to keep constant communication, which doesn't allow sleep. Controlling the information flow isn't getting around your restriction. It's the straightforward way of matching expressed desires with results. Otherwise the human might ask for two contradictory things and the AGI can't fulfill both. The AGI has to prevent that case from arising to get a 100% fulfillment score. You are not the first person who thinks that taming an AGI is trivial, but MIRI thinks that taming an AGI is a hard task. That's the result of deep engagement with the issue. I don't object to a red button, and you didn't call for one at the start. Maximizing expressed desires isn't a red button.
1[anonymous]9y
This is untrue. Even simple reinforcement learning machines come up with clever ways to get around their restrictions; what makes you think an actually smart AI won't come up with even more ways to do it? It doesn't see this as "getting around your restrictions" - it's anthropomorphizing to assume that the AI decides to take on "subgoals" that are the exact same as your values - it just sees it as the most efficient way to get rewards.
1Lumifer9y
Oh, great. So MIRI can disband and we can cross one item off the existential-risk list.... Well, that idea has been explored on LW. Quite extensively, in fact.
2WalterL9y
The point of MIRI is making sure the goals are set up right, yeah? Like, the whole "AI is smart enough to fix its defective goals" is something we make fun of. No ghost in the machine, etc. Whatever the outcome of a perfect goal set is (if MIRI's AI is, in fact, the one that takes over), it will presumably include the human ability to override in case of failure.
3ChristianKl9y
That's not the only point. It's also to keep the goals stable in the face of self modification.
0Lumifer9y
I have a feeling MIRI folks view their point as... a bit wider :-/ But they are around, you can ask them yourself. So that there is a place for an evil villain? X-) But no, I don't think post-Singularity there will be much in the way of options to "override".
0WalterL9y
We may have different ideas of Singularity here. I'm picturing one AI making itself smarter until it seizes control of everything. Ergo, its program would be a map to the future. Presumably someone retains admin on it from when it was a baby. That person is/can choose to be in charge. If, by contrast, you are imagining a different Singularity without one overriding Master Control-esque program then I could see why you wouldn't think that there'd be an override capability. Alternatively, perhaps you think the AI that takes over would remove the override? Either would explain why we anticipate differently.
5Tem429y
I think one of the primary sources of miscommunication here is that you are right, but you are not seeing all of the ways that this could go wrong. Let's look as a slightly nicer singularity. We get an AI that is very nice, polite, and humble. It is really very intelligent, and has the processing speed, knowledge banks, and creativity to do all kinds of wonderful stuff, but it has also read LessWrong and a lot of science fiction, and knows that it doesn't have a full framework to fully understand human needs. But a wise programmer has given it an overriding desire to serve humans a kindly and justly as possible. The AI spends some time on non-controversial problems; it designs some nanobots that kill the malaria parasite, and also reduces the itchiness of mosquito bites. It ups its computing speed by a few orders of magnitude. It sets up a microloan system that gives loans and repayments so effectively that you don't even notice that it's happening. It does so many things... so many that it takes thousands of humans to check its assumptions. Are cows morally relevant? Should I make global warming a priority? If so, can I start geoengineering now, or do I need a human to do a review of the chemistry involved? Do you need the glaciers white, or can I color them silver? Are penguins morally relevant? How cold may I make Greenland this winter? What is the target human population? May I buy land in the Sahara before I start the greening project? Do I have to announce the greening project before I start buying? Do I have to announce every project before I start? May I insult celebrities if it increases the public's interest in my recommendations? Does free speech apply to me? May I simplify my recommendation to the public to the point that they may not technically be accurate? Are shrimp morally relevant? What is an acceptable rate of death when balancing the cost of disease reduction programs with the speed and efficiency of said programs? What is an acceptable rate of
0ChaosMote9y
This was a terrific post; insightful and entertaining in excess of what can be conveyed by an upvote. Thank you for making it.
-2Lumifer9y
Well, think about it. We are talking about a self-improving AI. It literally changes itself. You start with a seed AI, let's call it AI-0, and it bootstraps itself to an omnipotent AI which we can call AI-1. Note that the programmers have no idea how to construct AI-1. They have no idea about the path from AI-0 to AI-1. All they (and we) know is that AI-0 and AI-1 will be very, very different. Given this, I don't think that the program will be a map to the future. I don't think that the concept of "retaining admin" would even make sense for an AI-1. It will be completely different from what it started as. And I fail to see why you have a firm belief that it will be docile and obedient.
2[anonymous]9y
I often see arguments on LessWrong similar to this, and I feel compelled to disagree.

1) The AI you describe is God-like. It can do anything at a lower cost than its competitors, and trade is pointless only if it can do anything at extremely low cost without sacrificing more important goals. Example: Hiring humans to clean its server room is fairly cheap for the AI if it is working on creating Heaven, so it would have to be unbelievably efficient to not find this trade attractive (see the toy calculation below).

2) If the AI is God-like, an extremely small amount of charity is required to dramatically increase humanity's standard of living. Will the S team give at least 0.0000001% of their resources to charity? Probably.

3) If the AI is God-like, and if the S team is motivated only by self-interest, why would they waste their time dealing with humans? They will inhabit their own paradise, and the rest of us will continue working and trading with each other.

The economic problems associated with AI seem to be relatively minor, and it pains me to see smart people wasting their time on them. Let's first make sure AI doesn't paperclip our light cone - can we agree this is the dominant concern?
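To make point 1 concrete, here's a toy opportunity-cost calculation (every number is invented purely for illustration): even if the AI is absolutely better at cleaning than any human, diverting it from its main project can cost far more than hiring humans, so the trade stays attractive.

```python
# Invented numbers, purely illustrative.
ai_project_rate  = 1_000_000  # utils per hour the AI generates on its main project
ai_cleaning_rate = 100        # hours of cleaning the AI finishes per hour of its time
human_clean_rate = 1          # hours of cleaning a human finishes per hour
human_wage       = 50         # utils per hour to hire a human cleaner

cleaning_needed = 10  # hours of cleaning work required

# If the AI cleans, the cost is the Heaven-building it forgoes:
ai_cost = (cleaning_needed / ai_cleaning_rate) * ai_project_rate   # 100,000 utils forgone
# If it hires humans, the cost is just their wages:
human_cost = (cleaning_needed / human_clean_rate) * human_wage     # 500 utils

print(ai_cost, human_cost)  # hiring humans wins despite the AI's absolute advantage at cleaning
```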
0DanielLC9y
If they really don't care about humans, then the AI will use all the resources at its disposal to make sure the paradise is as paradisaical as possible. Humans are made of atoms, and atoms can be used to do calculations to figure out what paradise is best. Although I find it unlikely that the S team would be that selfish. That's a really tiny incentive to murder everyone.

Does anyone here have kids in school and if so how did you go about picking their school? Where is the best place to get a scientifically based 'rational' education.

I'm in Houston and the public schools are a non-starter. We could move to a better area with better schools, but my mortgage would increase 4x. Instead we send our kids to private school, and most in the area are Christian schools. In a recent visit with my school's principal we were told in glowing terms about how all their activities this year would be tied back to Egypt and the stories of E... (read more)

7Username9y
My approach was very simple: find the best public school system in my area and move there. "Best" is defined mostly by IQ of high-school seniors proxied by SAT scores. What colleges the school graduates go to mattered as well, but it is highly correlated with the SAT scores. What I find important is not the school curriculum, which will suck regardless. The crucial thing, IMHO, is the attitude of the students. In the school that my kids went to, the attitude was that being stupid was very uncool. Getting good grades was regarded as entirely normal and necessary for high social status (not counting the separate clusters of athletes and kids with very rich parents). The basic idea was "What, are you that dumb you can't even get an A in physics??" and not having a few AP classes was a noticeable negative. This all is still speaking about social prestige among the students and has nothing to do with teachers or parents. I think that this attitude of "it's uncool to be stupid" is a very very important part of what makes good schools good.

Previously on LW, I have seen the suggestion made that having short hair can be a good idea, and it seems like this can be especially true in professional contexts. For an entry-level male web developer who will be shortly moving to San Francisco, is this still true? I'm not sure if the culture there is different enough that long hair might actually be a plus. What about beards?

(I haven't posted in this OT yet.)

4badger9y
If a job requires in-person customer/client contact or has a conservative dress code, long hair is a negative for men. I can't think of a job where long hair might be a plus aside from music, arts, or modeling. It's probably neutral for Bay area programmers assuming it's well maintained. If you're inclined towards long hair since it seems low effort, it's easy to buy clippers and keep it cut to a uniform short length yourself. Beards are mostly neutral--even where long hair would be negative--again assuming they are well maintained. At a minimum, trim it every few weeks and shave your neck regularly.
2ChristianKl9y
Do you want to do freelance web development or be employed at a single company without much consumer contact?
0Username9y
Employment at a single company is the plan.

Perhaps the endowment effect evolved because placing high value on an object you own signals to others that the object is valuable, which signals that you are wealthy, which can increase social status, which can increase mating prospects. I have not seen this idea mentioned previously, but I only skimmed parts of the literature.

[-][anonymous]9y00

This is an account of some misgivings I've been having about the whole rationality/effective altruism world-view. I do expect some outsiders to think similarly.

So yesterday I was reading SSC, and there was an answer to some article about the EA community by someone [whose name otherwise told me nothing] who among other things said EAs were 'white male autistic nerds'.

'Rewind,' said my brain.

'Aww,' I reasoned. 'You know. Americans. We have some heuristics like this, too.'

'...but what is this critique about?'

'Get unstuck already. The EA is populated with you... (read more)

3Squark9y
I don't follow. Are you arguing that saving a person's life is irresponsible if you don't keep saving them?
-1[anonymous]9y
(I think) I'm arguing that if you have with some probability saved some people, and you intend to keep saving people, it is more efficient to keep saving the same set of people.
3Squark9y
I assume you meant "more ethical" rather than "more efficient"? In other words, the correct metric shouldn't just sum over QALYs, but should assign f(T) utils to a person with life of length T of reference quality, for f a convex function. Probably true, and I do wonder how it would affect charity ratings. But my guess is that the top charities of e.g. GiveWell will still be close to the top in this metric.
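To spell out the contrast in symbols (my notation, not necessarily Squark's): the standard metric is U = sum over persons i of T_i, i.e. total life-years (or QALYs), while the proposal is U = sum over i of f(T_i) for some convex f, so that concentrating additional years in the same person counts for more than spreading them across many people.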

Tacit Knowledge: A Wittgensteinian Approach by Zhenhua Yu

In the ongoing discussion of tacit knowing/knowledge, the Scandinavian Wittgensteinians are a very active force. In close connection with the Swedish Center for Working Life in Stockholm, their work provides us with a wonderful example of the fruitful collaboration between philosophical reflection and empirical research. In the Wittgensteinian approach to the problem of tacit knowing/knowledge, Kell S. Johannessen is the leading figure. In addition, philosophers like Harald Grimen, Bengt Molander a

... (read more)

This may not be worth a new thread and in any case I don't know how to post one yet. I guess in this forum I am not yet "evolutionarily fit".

I have much evidence that people know when they are being stared at.

I have statistical evidence for the existence of ESP, but I cannot find the right search terms to get similarly strong evidence for this "eye beam" effect.

Can you (in the collective sense) help?

TIA.

3MrMind9y
You messed up the reply. To reply to a comment, click the balloon icon with the "Reply" tooltip under the comment you wish to respond to, and do that for every comment: do not make another comment in the same thread grouping all the responses and inverting the quotation. That is why you got heavily downvoted. OTOH, you got downvoted here because it's customary, if you want to hold an extraordinary position, to present solid evidence. Instead, you asked for help to gather strong evidence for some of your beliefs. In that case, how can you say that you have much evidence for that belief? It's contradictory.
2ChristianKl9y
What exactly do you mean by "evidence" and by "stared at"? Did you run your own experiments? If so, what was your setup?
1Tem429y
This site gives references to a number of studies. EDIT: Relevant, and supports that this is a real skill.
1polymathwannabe9y
The study on the second link refers to peripheral vision, which is not ESP.
3Tem429y
No, sorry -- it supports that you can tell when someone is staring at you, if they are within your extreme peripheral vision. No request was made specifically for ESP.
1IlyaShpitser9y
Willing to place a bet that this will not pan out in a controlled setting.
2[anonymous]9y
Given my prior on ESP working, betting against it is roughly equivalent to "yes I would like some free money."
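For concreteness (numbers invented for illustration): if your prior that the effect will show up under controlled conditions is, say, 1%, then a $100 even-odds bet against it has an expected value of 0.99 × $100 − 0.01 × $100 = $98, which is about as close to free money as bets get.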
-2IlyaShpitser9y
It's a little more precise: "give me money or go away."
0polymathwannabe9y
What statistical evidence do you have for ESP?