
Open thread, Aug. 10 - Aug. 16, 2015

5 Post author: MrMind 10 August 2015 07:29AM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

Comments (283)

Comment author: skilesare 19 August 2015 06:53:07PM 1 point [-]

Does anyone here have kids in school, and if so, how did you go about picking their school? Where is the best place to get a scientifically based 'rational' education?

I'm in Houston and the public schools are a non-starter. We could move to a better area with better schools, but my mortgage would increase 4x. Instead we send our kids to private school, and most in the area are Christian schools. In a recent visit with my school's principal, we were told in glowing terms how all their activities this year would be tied back to Egypt and the stories of Egypt in the Old Testament. I thought to myself that I don't even think Moses was a real person, so this is going to get very interesting.

I wish they'd spend half as much time studying science and psychological concepts as they do studying the Bible... but what are you going to do?

Any ideas?

I should add that I did graduate from this same school, although I did not go through grades 1-9 there... only high school, and that education was really top notch... but still an hour a day of Bible class.

Comment author: Username 19 August 2015 07:23:47PM *  6 points [-]

My approach was very simple: find the best public school system in my area and move there. "Best" is defined mostly by IQ of high-school seniors proxied by SAT scores. What colleges the school graduates go to mattered as well, but it is highly correlated with the SAT scores.

What I find important is not the school curriculum which will suck regardless. The crucial thing, IMHO, is the attitude of the students. In the school that my kids went to, the attitude was that being stupid was very uncool. Getting good grades was regarded as entirely normal and necessary for high social status (not counting the separate clusters of athletes and kids with very rich parents). The basic idea was "What, are you that dumb you can't even get an A in physics??" and not having a few AP classes was a noticeable negative. This all is still speaking about social prestige among the students and has nothing to do with teachers or parents.

I think that this attitude of "it's uncool to be stupid" is a very very important part of what makes good schools good.

Comment author: G0W51 16 August 2015 01:22:24AM 0 points [-]

Perhaps the endowment effect evolved because placing high value on an object you own signals to others that the object is valuable, which signals that you are wealthy, which can increase social status, which can increase mating prospects. I have not seen this idea mentioned previously, but I only skimmed parts of the literature.

Comment author: Romashka 15 August 2015 12:45:27PM *  3 points [-]

How quickly do people who comment on something here know their answer? For example, the valuable advice on statistics I've received several times seems to be generated by pattern recognition (at least at my level of understanding). I myself often spend more time framing my comments than actually recognizing what I want to express (not that I always succeed, but there's an internal mean-o-meter which says, this is it). OTOH, much of the material I simply don't understand, not having sufficient prerequisite knowledge; the polls are aimed at the areas you personally interact with.

I mostly know what I am going to say

The posts to which I don't have an immediate answer

Please add your comments on which topics you have to 'slow down' - anonymously, if you wish.

Edit to add: My answer undergoes changes before I submit it;


Comment author: satt 15 August 2015 01:45:58PM *  2 points [-]

A lot of my comments here are correcting/supplementing/answering someone else's comment. Reflecting on how I think the typical sequence goes, it might be something like

  • as I read a comment, get a sensation of "this seems prima facie wrong" or "that sounds misleading" or whatever
  • finish reading, then re-read to check I'm not misunderstanding (and sometimes it turns out I have misunderstood)
  • translate my gut feeling of wrongness into concrete criticism(s)
  • rephrase & rephrase & rephrase & rephrase what I've written to try to minimize ambiguity and maybe adjust the politeness level

and so it's hard to say how long it takes me to "mostly know what I am going to say". I often have a vague outline of what I ought to say within 10 or 20 seconds of noticing my feeling that Something's Wrong, but it can easily take me 10 or 20 minutes to actually decide what I'm going to say. For instance, when I read this comment, I immediately thought, "I don't think that can be right; Russia's a violent country and some wars are small", but it took me a while (maybe an hour?) to put that into specific words, and decide which sources to link.

Edit to add: I agree that pattern recognition plays an important part in this. A big part of expertise, I reckon, is just planting hundreds & hundreds of pattern-recognition rules into your brain so when you see certain errors or fallacies you intuitively recognize them without conscious effort.

Comment author: Romashka 15 August 2015 02:06:40PM 1 point [-]

I am somewhat afraid then, that reading about fallacies won't change my ability to recognize them significantly. Perhaps 'rationality training' should really focus on the editing part, not on the recognizing part. I'll add another question.

Comment author: satt 18 August 2015 12:44:33AM 0 points [-]

Depends how your mind works, I guess. I read about fallacies when I was young and I feel like that helped me recognize them, even without much deliberate practice in recognizing them (but I surely had a lot of accidental & semi-accidental practice).

Recognition is probably more important than the editing part, because the editing part isn't much use without having the "Aha! That's probably a fallacy!" recognitions to edit, and because you might be able to do a good job of intuitively recognizing fallacies even if you can't communicate them to other people cleanly & unambiguously.

Comment author: RichardKennaway 14 August 2015 10:05:23PM *  6 points [-]

I just came across this: "You're Not As Smart As You Could Be", about Dr. Samuel Renshaw and the tachistoscope. This is a device used for exposing an image to the human eye for the briefest fraction of a second. In WWII he used it to train navy and artillery personnel to instantly recognise enemy aircraft, apparently with great success. He also used it for speed reading training; this application appears to be somewhat controversial.

I remember the references to Renshaw in some of Heinlein's stories, and I knew he was a real person, but this is the first time I've seen a substantial account of his work.

A few more references:

Wikipedia is rather brief.

Open access review article about work with the tachistoscope, in the Journal of Behavioral Optometry, 2003. This is the closest thing I've found to a modern reference.

An academic paper by Renshaw himself from 1945. Despite its antiquity, it is paywalled. I have not been able to access the full text.

This information is mostly rather old and musty, and there appears to be little modern interest. With current computers, it should be very easy to duplicate the technology, although low-level graphics expertise is likely needed to get very short, precise exposure times.
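The timing constraint can be sketched without any graphics code: on a conventional display an image is visible for a whole number of refresh cycles, so the shortest and most precise exposures are multiples of the frame interval. A minimal illustrative sketch (the function name and the 60 Hz default are my own assumptions, not from any tachistoscope software):

```python
def achievable_exposure_ms(requested_ms: float, refresh_hz: float = 60.0) -> float:
    """Round a requested exposure to the nearest achievable duration.

    A standard display shows an image for an integer number of refresh
    cycles, so the briefest possible flash is one frame (about 16.7 ms
    at 60 Hz) and all exposures are multiples of the frame interval.
    """
    frame_ms = 1000.0 / refresh_hz
    frames = max(1, round(requested_ms / frame_ms))
    return frames * frame_ms
```

Requesting a 5 ms flash on a 60 Hz monitor still yields one full ~16.7 ms frame; getting shorter or more precise exposures is exactly where the low-level graphics expertise (or a high-refresh display) comes in.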

Comment author: Sarunas 15 August 2015 12:26:19PM 2 points [-]
Comment author: RichardKennaway 15 August 2015 12:46:29PM 1 point [-]

Thanks.

Comment author: cousin_it 14 August 2015 11:03:40AM 1 point [-]

Important question that might affect the chances of humanity's survival:

Why is Bostrom's owl so ugly? I'm not much of a designer, but here's my humble attempt at a redesign :-)

Comment author: ChristianKl 14 August 2015 11:21:07AM 3 points [-]

Your owl looks cute and not scary. Framing AGIs as cute seems to go against the point.

Comment author: cousin_it 14 August 2015 11:46:33AM *  2 points [-]

Aha, that answers my question. I didn't realize that Bostrom's owl represented superintelligence, so I chose mine to be more like a reader stand-in.

If the owl is supposed to look scary and wrong, reminiscent of Langford's basilisk, then I agree that the original owl does the job okay. Though that didn't stop someone on Tumblr from being asked "why are you reading an adult coloring book?", which was the original impetus for me.

Is it possible to find an image that will look scary and wrong, but won't look badly drawn? Does anything here fit the bill?

Comment author: ChristianKl 14 August 2015 12:05:39PM 1 point [-]

There's the parable of the sparrows who raise an owl: https://www.youtube.com/watch?v=7rRJ9Ep1Wzs That owl likely made it onto the cover.

I don't think the owl has anything to do with the owls in the study hall ;)

Comment author: cousin_it 14 August 2015 03:28:25PM *  4 points [-]

OK, here's my next attempt with a well-drawn owl that looks scary instead of cute. What do you think?

Comment author: jam_brand 17 August 2015 08:27:38PM 0 points [-]

I strongly dislike this. The head seems too ornate and the outline reminds me of so-called "tribal" tattoos, which seems low status. The body being subtly asymmetrical is a slight annoyance as well and with the owl now being centered in the image I think the subtitle should be too.

Comment author: Tem42 15 August 2015 02:01:07PM 0 points [-]

This looks very good. The feet perched on thin air look a little off.

You should probably check with the presumed copyright holder, although I suspect that she plagiarized the head design.

Comment author: ChristianKl 15 August 2015 11:58:00AM 0 points [-]

That looks nice, but I wouldn't trust my aesthetic judgement too much.

Comment author: g_pepper 14 August 2015 05:30:12PM 4 points [-]

I actually like Bostrom's owl. I've always thought that Superintelligence has a really good cover illustration.

Comment author: cousin_it 14 August 2015 05:57:12PM *  2 points [-]

I like it too, because it has character, which few pictures do. But the asymmetrical distorted face just bugs me. And the ketchup stains on the collar. And the composition problems (lack of space below the subtitle, timid placement of trees, etc.) For some reason my brain didn't see them as creative choices, but as mechanical problems that need fixing. Maybe I'm oversensitive.

Comment author: gjm 18 August 2015 02:27:55PM 0 points [-]

Stating the obvious: I don't think those stains are meant to be ketchup. (And maybe "owl with bloodstains" feels scarier than "owl with ketchup stains".)

Comment author: Lumifer 18 August 2015 02:29:26PM 0 points [-]

They don't look like blood either.

Comment author: gjm 18 August 2015 05:30:41PM 0 points [-]

Well, if we're going to be picky, neither does the owl look like an owl. It's not that sort of picture. But I suggest that "blood" is a more likely answer to "what do those red bits indicate?" than anything like "ketchup".

Comment author: Lumifer 17 August 2015 08:32:37PM 1 point [-]

But the asymmetrical distorted face just bugs me.

That might be meant as a reminder of the inhumanity.

Comment author: Houshalter 14 August 2015 03:57:18AM 1 point [-]

Omega places a button in front of you. He promises that each press gives you an extra year of life, plus whatever your discounting factor is. If you walk away, the button is destroyed. Do you press the button forever and never leave?

Comment author: NoSuchPlace 14 August 2015 01:19:37PM 1 point [-]

Since I don't spend all my time inside avoiding every risk, hoping for someone to find the cure to aging, I probably value an infinite life only a large but finite number of times more than a year of life. This means I must discount in such a way that, after a finite number of button presses, Omega would need to grant me an infinite lifespan.

So I perform some Fermi calculations to obtain an upper bound on the number of button presses I need to obtain immortality, press the button that often, then leave.
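The Fermi calculation can be sketched numerically. Assuming exponential discounting with factor gamma per year (my assumption; the comment doesn't commit to a discounting model), the value of pressing forever converges to 1/(1 - gamma) years' worth of present value, so any target strictly below that bound is reached after finitely many presses:

```python
def presses_needed(target_multiple: float, gamma: float = 0.97) -> int:
    """Smallest number of presses whose discounted total reaches the target.

    Press k (counting from 0) is worth gamma**k present-value years, so the
    running total is a geometric series bounded by 1 / (1 - gamma). Any
    target below that bound is reached after finitely many presses.
    """
    bound = 1.0 / (1.0 - gamma)
    if target_multiple >= bound:
        raise ValueError("target exceeds the discounted limit; never reached")
    total, presses = 0.0, 0
    while total < target_multiple:
        total += gamma ** presses
        presses += 1
    return presses
```

With gamma = 0.97, valuing immortality at 10x a present year means pressing only 12 times before walking away; the point is that the required count is finite, not large.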

Comment author: MrMind 14 August 2015 07:04:43AM 4 points [-]

That's a variant of a known problem in any decision theory that admits unbounded utility: there's something inside a box which every minute increases its utility, but it stops when you open the box and you get to enjoy it.
When do you open the box?

Comment author: philh 14 August 2015 03:16:00PM 4 points [-]

A similar problem is: pick a number. Gain that many utilons.

Comment author: g_pepper 14 August 2015 05:24:51PM 2 points [-]

That's when Scott Aaronson's essay Who Can Name the Bigger Number comes in handy!

Comment author: shminux 14 August 2015 06:22:20AM 1 point [-]

Assuming those are QALY, not just years, spend a week or so pressing the button non-stop, then use the extra million years to become Omega.

Comment author: Tem42 14 August 2015 02:55:01AM 3 points [-]

Being a comparatively new user, and thus having limited karma, I can't engage fully with The Irrationality Game. Seeing as how it's about 5 years out of date, is there any interest in playing the game anew? Are there rules on who should/can post such things?

Comment author: Zian 22 August 2015 06:22:18AM 0 points [-]

Looks interesting. Feel free to try.

Comment author: ChristianKl 14 August 2015 11:19:18AM 0 points [-]

Are there rules on who should/can post such things?

No. You are free to start new threads like this in Discussion. Karma votes on the new thread will tell you to what extent the community is happy that you started it.

If you find yourself posting threads that get negative karma, try to understand why they get negative karma and don't repeat mistakes.

Comment author: Tem42 14 August 2015 03:52:15PM 0 points [-]

My question was actually a bit more targeted - I should have been more precise.

Will_Newsome posted the original Irrationality Game, and he has left the site (well, hasn't posted for months. Perhaps I need to PM him and ask if he's still around). His original post was really very well written, and while I could re-write it, I would probably not change much. So basically, if I repost the idea of an established user who is no longer around... Is that really okay?

I would have no objection to posting under Username, if that made it 'more okay', and I wouldn't mind at all if someone else posted it rather than I -- I just want to play an active version of the game.

I will also double-check and see if Will_Newsome might still be on-site and interested.

Comment author: gwern 14 August 2015 12:58:56AM 8 points [-]

Modafinil survey: I'm curious about how modafinil users in general use it, get it, their experiences, etc, and I've been working on a survey. I would welcome any comments about missing choices, bad questions, etc on the current draft of the survey: https://docs.google.com/forms/d/1ZNyGHl6vnHD62spZyHIqyvNM_Ts_82GvZQVdAr2LrGs/viewform?fbzx=2867338011413840797

Comment author: RichardKennaway 14 August 2015 10:27:07PM *  2 points [-]

A few details:

In the questions about SNPs, 23andMe reports RS4570625 as G or T, not A or G, and RS4633 as C or T, not A or G.

I was surprised to see Vitamin D listed as a nootropic, and Google turns up nothing much on the subject. Fixing a deficiency of anything will likely have a positive effect on mental function, but that is drawing the boundary of "nootropic" rather wide.

Why is nicotine amplified as "gum, patch, lozenge", to the exclusion of tobacco? Cancer is a reason to not smoke tobacco, but I don't think it's a reason not to ask about it. Or are those who smoke not smart enough to be in the target population for the survey? :)

ETA: Also a typo in "SNP status of COMT RS4570625": the subtext mentions rs4680, not rs4570625. I don't know what "Val/Met" and "COMT" mean, but are those specific to RS4680 or correct for all three SNPs?

Comment author: gwern 06 September 2015 08:59:07PM 0 points [-]

In the questions about SNPs, 23andMe reports RS4570625 as G or T, not A or G, and RS4633 as C or T, not A or G.

Oops. Shouldn't've assumed they'd be the same...

but that is drawing the boundary of "nootropic" rather wide.

It is but it's still common and can be useful. The nootropics list is based on Yvain's previous nootropics survey, which I thought might be useful for comparison. (I also stole a bunch of questions from his LW survey too, figuring that they're at least battle-tested at this point.)

Why is nicotine amplified as "gum, patch, lozenge", to the exclusion of tobacco?

I have no interest in tobacco, solely nicotine. Although now that you object to that, I realize I forgot to specify vaping/e-cigs as included.

Comment author: ChristianKl 15 August 2015 04:58:14PM 0 points [-]

Val/Met

Aminoacids. Val stands for valine. Met stands for methionine.

COMT

I think COMT is Catechol-O-methyl transferase which is the protein in question.

Comment author: btrettel 14 August 2015 01:13:18PM *  3 points [-]

Great idea.

One suggestion: This survey seems to be for people who use modafinil regularly. I might suggest doing something (perhaps creating another survey) to get opinions from people who tried modafinil once or twice and disliked it. My one experience with Nuvigil was quite bad, and I recall Vaniver saying that he thought modafinil did nothing at all for him.

Comment author: ChristianKl 14 August 2015 01:41:43PM *  0 points [-]

The survey could have multiple pages:

The first page simply asks:

What's your modafinil usage:
a) Never
b) I used it in the past and then stopped.
c) I'm currently using it. (leading the user to your current survey)

Comment author: gwern 14 August 2015 06:53:09PM *  1 point [-]

I've split it up into multiple pages: the first page classifies you as an active or inactive user and then sends you to a detailed questionnaire on how you use it if you are active, or simply why you stopped if inactive, and then both go to a long demographics/background page.

Comment author: ChristianKl 15 August 2015 12:48:08AM 1 point [-]

Sounds good. I would also add a "never used it" option. It can go straight to the demographics/background page. Otherwise you might have people who never used it classify themselves as "inactive user".

Comment author: gwern 15 August 2015 01:37:31AM 1 point [-]

(If they've never used modafinil, why on earth are they taking my survey?!)

Comment author: ChristianKl 15 August 2015 12:28:08PM *  2 points [-]

They might be interested in taking modafinil. The fact that they shouldn't take the survey doesn't mean they won't.

Comment author: ChristianKl 14 August 2015 11:41:29AM *  3 points [-]

In general, do you find brand-name -afinils more effective than generics?

I think that answer should have more than just (yes) and (no) as an answer. At least it should have a "I don't know" answer.


I would add a question "When was the last time you used modafinil?" to see whether people are on aggregate right about how many days per week they use it. Maybe even "At what time of the day did you use it?"


I would be interested in a question about how many hours the person sleeps on average.

Have you thought about having a question about bodyweight? I would be interested in knowing whether heavier people take a larger dose.

Comment author: gwern 14 August 2015 06:27:35PM 0 points [-]

I've added 'the same' as a third option to the generic vs brand-name question, and 2 questions about average hours of sleep a night & body weight.

I would add a question "When was the last time you used modafinil?" to see whether people are on aggregate right about how many days per week they use it.

What would the response there be, an exact date or n days ago or what?

Comment author: ChristianKl 15 August 2015 01:04:11AM 1 point [-]

I've added 'the same' as a third option to the generic vs brand-name question

I would guess that a majority of the respondents haven't tested multiple kinds of modafinil and thus are not equipped to answer the question at all. "I don't know" seems to be the proper answer for them.

Comment author: gwern 06 September 2015 08:43:02PM 1 point [-]

Alright, I've added a don't-know option and added a 'when did you last use' question.

Comment author: ChristianKl 15 August 2015 01:02:01AM 1 point [-]

What would the response there be, an exact date or n days ago or what?

Both would be possible but I think "n days ago" is more standard. It makes the data analysis easier.

Comment author: Tem42 14 August 2015 06:44:30PM 2 points [-]

I have no experience with -afinils, but it seems to me that there will surely be cases of people who have tried only brand-name (or, alternatively, only generic) -afinil, and therefore cannot accurately respond to the question

In general, do you find brand-name -afinils more effective than generics?

given only "yes", "no", or "the same" as options. The correct answer would be "I don't know". If I were taking this survey, I would skip that question rather than try to guess which answer you wanted. But if I were designing the survey, I would go with ChristianKl's suggestion.

Comment author: Romashka 13 August 2015 06:44:07PM 0 points [-]

This is an account of some misgivings I've been having about the whole rationality/effective altruism world-view. I do expect some outsiders to think similarly.

So yesterday I was reading SSC, and there was an answer to some article about the EA community by someone [whose name otherwise told me nothing] who among other things said EAs were 'white male autistic nerds'.

'Rewind,' said my brain.

'Aww,' I reasoned. 'You know. Americans. We have some heuristics like this, too.'

'...but what is this critique about?'

'Get unstuck already. The EA community is populated with young hard-working talented educated hopeful people...'

'Let's not join,' brain ruled immediately. 'We're not like that!'

'...who are out to save the world, eliminate suffering and maybe even defeat Death.'

Brain smirked. 'I find it easier to believe in the WMAN than in the YHTEHS - fewer dice rolls... But even if all of it is true, and they do intend to do all this; how would they fail?'

'Huh?'

'Would they lose their jobs, if some angry developer rings up their boss? Would they get sued, and lose their jobs, if they protest unwisely? Would they get beaten up in a dark alley, and incidentally lose their jobs, if - '

'THE WHOLE POINT is that you don't risk your own skin. You efficiently pay others to do it, hopefully without the actual risking, and in this way more people benefit. And stop being bloody-minded.'

'Well, good luck making more people join. We want to have lived. (In case there ain't no Singularity coming soon.) We believe experience. We believe failure.'

'Failure isn't efficient. And what are you about? That you want us to get beaten up?'

'No, I want to see some price they pay for their ideas. Out of, you know, sheer malice. Like if you're an environmentalist, then everybody around you knows what you must do better than yourself.'

'They pay money, because people shouldn't be heroes to do good. Shouldn't have to be sad to do good. Or angry. Even if it helps.'

Brain thought for a moment.

'Okay. But why do they expect others to be sad, angry or heroes? You buy a malaria net as an Effective Altruist, you kinda make a contract with somebody who uses it, like Albus Dumbledore giving the Cloak of Invisibility to Harry Potter. For your money to have mattered, that person would have to live in unceasing toil.'

'Which is in their best interests anyway.'

'...in more toil than you could ever imagine. And sorrow. And make efficient decisions. Aren't you morally obliged to keep helping?'

'If a builder sells a house, is he morally obliged to keep repairing it?' I shrugged. 'Legally, perhaps, if the house falls down.'

'Then I want to know what an Effective Altruist does when the house falls down, in the absence of any law that can force him,' said the brain. 'Surely he is more responsible than the builder?'

Comment author: Squark 13 August 2015 08:06:06PM 2 points [-]

I don't follow. Are you arguing that saving a person's life is irresponsible if you don't keep saving them?

Comment author: Romashka 13 August 2015 08:19:21PM 0 points [-]

(I think) I'm arguing that if you have with some probability saved some people, and you intend to keep saving people, it is more efficient to keep saving the same set of people.

Comment author: Squark 13 August 2015 08:36:24PM *  2 points [-]

I assume you meant "more ethical" rather than "more efficient"? In other words, the correct metric shouldn't just sum over QALYs, but should assign f(T) utils to a person with life of length T of reference quality, for f a convex function. Probably true, and I do wonder how it would affect charity ratings. But my guess is that the top charities of e.g. GiveWell will still be close to the top in this metric.
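The difference between the two metrics can be sketched in a few lines. For illustration I assume f(T) = T**1.5 (any convex f would do; this particular choice is mine, not the comment's or GiveWell's):

```python
def linear_utility(lifespans):
    """Plain QALY-style metric: utility is just the total of years lived."""
    return sum(lifespans)

def convex_utility(lifespans, exponent=1.5):
    """Assign f(T) = T**exponent utils to a life of length T.

    With exponent > 1, f is convex, so concentrating the same total years
    in fewer lives scores strictly higher.
    """
    return sum(t ** exponent for t in lifespans)
```

Under the linear metric, two people living 10 extra years each score the same as one person living 20; the convex metric prefers concentrating the years, which is what would make "keep saving the same set of people" come out ahead in charity rankings.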

Comment author: ChristianKl 13 August 2015 05:30:16PM *  5 points [-]

Is there a good book about how to read scientific papers? A book that neither says papers should never be trusted nor is oblivious to the real world, where research often doesn't replicate?

One that goes deeper than just the password of "correlation isn't causation"? That doesn't only cover theoretical statistics but offers empirical heuristics for judging whether a paper will successfully replicate?

Comment author: iarwain1 13 August 2015 03:29:44PM *  5 points [-]

On the subject of prosociality / wellbeing and religion, a recent article challenges the conventional wisdom by claiming that, depending on the particular situation, atheism might be just as good or even better for prosociality / wellbeing than religion is.

Comment author: iarwain1 12 August 2015 09:27:05PM *  2 points [-]

There's a new article on academia.edu on potential biases amongst philosophers of religion: Irrelevant influences and philosophical practice: a qualitative study.

Abstract:

To what extent do factors such as upbringing and education shape our philosophical views? And if they do, does this cast doubt on the philosophical results we have obtained? This paper investigates irrelevant influences in philosophy through a qualitative survey on the personal beliefs and attitudes of philosophers of religion. In the light of these findings, I address two questions: an empirical one (whether philosophers of religion are influenced by irrelevant factors in forming their philosophical attitudes), and an epistemological one (whether the influence of irrelevant factors on our philosophical views should worry us). The answer to the empirical question is a confident yes, to the epistemological question, a tentative yes.

Comment author: g_pepper 12 August 2015 11:28:08PM 0 points [-]

To what extent do factors such as upbringing and education shape our philosophical views? And if they do, does this cast doubt on the philosophical results we have obtained?

I would expect a person's education to shape his/her philosophical views; if one's philosophy is not shaped by one's education, then one has had a fairly superficial education.

Comment author: iarwain1 13 August 2015 12:27:05AM 0 points [-]

She means that you're biased towards the way you were taught vs. alternatives, regardless of the evidence. The example she gives (from G.A. Cohen) is that most Oxford grads tend to accept the analytic / synthetic distinction while most Harvard grads reject it.

Comment author: g_pepper 13 August 2015 01:16:21AM 0 points [-]

Yes, I got that from reading the paper. However, the wording of the abstract seems quite sloppy; taken at face value it suggests that a person's education, K-postdoc (not to mention informal education) should have no influence on the person's philosophy.

Moreover, the paper's point (illustrated by the Cohen example) is not really surprising; one's views on unanswered questions are apt to be influenced by the school of thought in which one was educated - were this not the case, the choice of what university to attend and which professor to study under would be somewhat arbitrary. Moreover, I don't think that she made a case that philosophers are ignoring the evidence, only that the philosopher's educational background continues to exert an influence throughout the philosopher's career. From a Bayesian standpoint this makes sense - loosely speaking, when the philosopher leaves graduate school, his/her education and life experience to that point constitute his/her priors, which he/she updates as new evidence becomes available. While the philosopher's priors are altered by evidence, they are not necessarily eliminated by evidence. This is not problematic unless overwhelming evidence one way or the other is available and ignored. The fact that whether or not to accept the analytic / synthetic distinction is still an open question suggests that no such overwhelming evidence exists - so I am not seeing a problem with the fact that Oxford grads and Harvard grads tend (on average) to disagree on this issue.

Comment author: Username 12 August 2015 07:21:05PM *  1 point [-]

Tacit Knowledge: A Wittgensteinian Approach by Zhenhua Yu

In the ongoing discussion of tacit knowing/knowledge, the Scandinavian Wittgensteinians are a very active force. In close connection with the Swedish Center for Working Life in Stockholm, their work provides us with a wonderful example of the fruitful collaboration between philosophical reflection and empirical research. In the Wittgensteinian approach to the problem of tacit knowing/knowledge, Kell S. Johannessen is the leading figure. In addition, philosophers like Harald Grimen, Bengt Molander and Allan Janik also make contributions to the discussion in their own ways. In this paper, I will try to clarify the main points of their contribution to the discussion of tacit knowing/knowledge.

...

Johannessen observes:

It has in fact been recognized in various camps that propositional knowledge, i.e, knowledge expressible by some kind of linguistic means in a propositional form, is not the only type of knowledge that is scientifically relevant. Some have, therefore, even if somewhat reluctantly, accepted that it might be legitimate to talk about knowledge also in cases where it is not possible to articulate it in full measure by proper linguistic means.

Johannessen, using Polanyi’s terminology, calls the kind of knowledge that cannot be fully articulated by verbal means tacit knowledge.

Comment author: Username 12 August 2015 04:53:11PM *  7 points [-]

A Scientific Look at Bad Science

By one estimate, from 2001 to 2010, the annual rate of retractions by academic journals increased by a factor of 11 (adjusting for increases in published literature, and excluding articles by repeat offenders) [2]. This surge raises an obvious question: Are retractions increasing because errors and other misdeeds are becoming more common, or because research is now scrutinized more closely? Helpfully, some scientists have taken to conducting studies of retracted studies, and their work sheds new light on the situation.

“Retractions are born of many mothers,” write Ivan Oransky and Adam Marcus, the co-founders of the blog Retraction Watch, which has logged thousands of retractions in the past five years. A study in the Proceedings of the National Academy of Sciences reviewed 2,047 retractions of biomedical and life-sciences articles and found that just 21.3 percent stemmed from straightforward error, while 67.4 percent resulted from misconduct, including fraud or suspected fraud (43.4 percent) and plagiarism (9.8 percent) [3].

Surveys of scientists have tried to gauge the extent of undiscovered misconduct. According to a 2009 meta-analysis of these surveys, about 2 percent of scientists admitted to having fabricated, falsified, or modified data or results at least once, and as many as a third confessed “a variety of other questionable research practices including ‘dropping data points based on a gut feeling,’ and ‘changing the design, methodology or results of a study in response to pressures from a funding source’ ” [4].

Comment author: Lumifer 12 August 2015 04:27:51PM *  4 points [-]

Augur -- a blockchain general-purpose prediction market running on Ethereum.

Anyone knows anything about it? Gwern..?

Comment author: gwern 12 August 2015 05:32:37PM *  5 points [-]

Yes, I've paid close attention to Truthcoin and it. They are both interesting projects with a chance of success, although it's hard to make any strong predictions or claims before they are up and running, in part because of the running feud between Paul and the Augur guys. (For example, they both seem to agree that the original consensus/clustering algorithm using SVD/PCA will not work in an adversarial setting, but will Augur's new clustering algorithm succeed? It comes with no formal proofs, only the observation that it seems to work in simulations; Paul seems to dislike it, but in none of the rants I've seen has he explained why he thinks it will not work or what a better solution would be.)

I will probably buy a bit of the Augur crowdsale so I can try it out myself.
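For readers unfamiliar with the SVD/PCA consensus idea mentioned above, here is a toy sketch of the underlying mechanism: treat reporters' votes as a matrix, take the first principal component, and down-weight reporters who sit far from the consensus cluster on it. This is not Augur's or Truthcoin's actual code; the vote matrix, the distance-based weighting, and the 0.5 threshold are all illustrative assumptions.

```python
import numpy as np

# Rows are reporters, columns are binary event outcomes (1 = "yes", 0 = "no").
# Three honest reporters agree; one dissenter disagrees on the last event.
votes = np.array([
    [1, 0, 1, 1],
    [1, 0, 1, 1],
    [1, 0, 1, 1],
    [1, 0, 1, 0],  # dissenting reporter
], dtype=float)

# Center the matrix and take its SVD; each reporter's coordinate on the
# first principal component captures how much they drive the main axis
# of disagreement.
centered = votes - votes.mean(axis=0)
u, s, vt = np.linalg.svd(centered, full_matrices=False)
first_score = u[:, 0] * s[0]

# Reporters far from the bulk of scores on PC1 get down-weighted.
# (Using distance to the median makes this insensitive to the
# arbitrary sign of the singular vector.)
distance = np.abs(first_score - np.median(first_score))
weights = 1.0 / (1.0 + distance)
weights /= weights.sum()

# Weighted vote per event, thresholded into a consensus outcome.
outcomes = (weights @ votes > 0.5).astype(int)
print(outcomes)  # the majority view wins on every event
```

In the adversarial setting the feud is about, the open question is whether a clustering step like this can still be gamed by a coordinated minority; the sketch only shows the honest-majority case.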

Comment author: Sherincall 12 August 2015 10:35:52AM 9 points [-]

CIA's The Definition of Some Estimative Expressions - what probabilities people assign to words such as "probably" and "unlikely".

CIA actually has several of these articles around, like Biases in Estimating Probabilities. Click around for more.

In hindsight, it seems obvious that they should.
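For context, the classic treatment of this is Sherman Kent's proposal to pin such expressions to numeric ranges. A minimal sketch, with the figures quoted from memory of Kent's table and therefore approximate:

```python
# Approximate ranges from Sherman Kent's "Words of Estimative
# Probability" (e.g. "probable" = 75% give or take about 12%).
# Figures are from memory; treat as illustrative, not authoritative.
estimative = {
    "almost certain":       (0.87, 0.99),
    "probable":             (0.63, 0.87),
    "chances about even":   (0.40, 0.60),
    "probably not":         (0.20, 0.40),
    "almost certainly not": (0.02, 0.12),
}

def gloss(expression):
    """Return the midpoint probability for an estimative expression."""
    lo, hi = estimative[expression]
    return (lo + hi) / 2

print(round(gloss("probable"), 2))  # 0.75
```

The point of Kent's table, and of the CIA articles above, is that without an agreed mapping like this, a reader's interpretation of "probable" can vary by tens of percentage points.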

Comment author: AstraSequi 12 August 2015 09:49:20AM 2 points [-]

A question that I noticed I'm confused about. Why should I want to resist changes to my preferences?

I understand that it will reduce the chance of any preference A being fulfilled, but my answer is that if the preference changes from A to B, then at that time I'll be happier with B. If someone told me "tonight we will modify you to want to kill puppies," I'd respond that by my current preferences that's a bad thing, but if my preferences change then I won't think it's a bad thing any more, so I can't say anything against it. If I had a button that could block the modification, I would press it, but I feel like that's only because I have a meta-preference that my preferences tend to maximizing happiness, and the meta-preference has the same problem.

A quicker way to say this is that future-me has a better claim to caring about what the future world is like than present-me does. I still try to work toward a better world, but that's based on my best prediction for my future preferences, which is my current preferences.

Comment author: Squark 12 August 2015 08:11:01PM *  4 points [-]

"I understand that it will reduce the chance of any preference A being fulfilled, but my answer is that if the preference changes from A to B, then at that time I'll be happier with B". You'll be happier with B, so what? Your statement only makes sense of happiness is part of A. Indeed, changing your preferences is a way to achieve happiness (essentially it's wireheading) but it comes on the expense of other preferences in A besides happiness.

"...future-me has a better claim to caring about what the future world is like than present-me does." What is this "claim"? Why would you care about it?

Comment author: AstraSequi 13 August 2015 11:24:05AM 0 points [-]

I don’t understand your first paragraph. For the second, I see my future self as morally equivalent to myself, all else being equal. So I defer to their preferences about how the future world is organized, because they're the one who will live in it and be affected by it. It’s the same reason that my present self doesn’t defer to the preferences of my past self.

Comment author: Squark 13 August 2015 08:10:11PM *  1 point [-]

Your preferences are by definition the things you want to happen. So, you want your future self to be happy iff your future self's happiness is your preference. Your ideas about moral equivalence are your preferences. Et cetera. If you prefer X to happen and your preferences are changed so that you no longer prefer X to happen, the chance X will happen becomes lower. So this change of preferences goes against your preference for X. There might be upsides to the change of preferences which compensate for the loss of X. Or not. Decide on a case-by-case basis, but ceteris paribus you don't want your preferences to change.
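Squark's argument here is the standard one for instrumental preference stability (the "murder pill" thought experiment). A toy sketch in code; the option names and utility numbers are purely illustrative:

```python
# An agent evaluates the choice to self-modify using its *current*
# utility function, since that is the function doing the choosing now.

def act(utility, options):
    """Pick the option the given utility function ranks highest."""
    return max(options, key=utility)

current_u = {"protect_puppies": 10, "kill_puppies": -100}.get
modified_u = {"protect_puppies": -100, "kill_puppies": 10}.get

options = ["protect_puppies", "kill_puppies"]

# What the future self would do under each choice made now:
keep = act(current_u, options)        # refuse the pill -> still protects
take_pill = act(modified_u, options)  # take the pill -> kills puppies

# The decision *now* is scored by the *current* utility function,
# so the modification is rejected even though the modified future
# self would be perfectly content with its own behavior.
assert current_u(keep) > current_u(take_pill)
print("refuse the pill:", current_u(keep), "vs", current_u(take_pill))
```

This is why "I'll be happier with B afterwards" carries no weight with the present agent: the present agent's utility function is the one evaluating the change.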

Comment author: Tem42 12 August 2015 01:08:03PM *  2 points [-]

As far as I am aware, people only resist changing their preferences because they don't fully understand the basis and value of their preferences and because they often have a confused idea of the relationship between preferences and personality.

Generally you should define your basic goals and change your preference to meet them, if possible. You should also be considering whether all your basic goals are optimal, and be ready to change them.

If someone told me "tonight we will modify you to want to kill puppies," I'd respond that by my current preferences that's a bad thing, but if my preferences change then I won't think it's a bad thing any more.

You may find that you do have a moral system that is more consistent (and hopefully, more good) if you maintain a preference for not-killing puppies. Hopefully this moral system is well enough thought-out that you can defend keeping it. In other words, your preferences won't change without a good reason.

If I had a button that could block the modification, I would press it

This is a bad thing. If you have a good reason to change your preferences (and therefore your actions), and you block that reason, this is a sign that you need to understand your motivations better.

"tonight we will modify you to want to kill puppies,"

I think you may be assuming that the person modifying your preferences is doing so both 'magically' and without reason. Your goal should be to kill this person, and start modifying your preferences based on reason instead. On the other hand, if this person is modifying your preferences through reason, you should make sure you understand the rhetoric and logic used, but as long as you are sure that what they say is reasonable, you should indeed change your preference.

Of course, another issue may be that we are using 'preference' in different ways. You might find the act of killing puppies emotionally distasteful even if you know that it is necessary. It is an interesting question whether we should work to change our preferences to enjoy things like taking out the trash, changing diapers, and killing puppies. Most people find that they do not have to have an emotional preference for dealing with unpleasant tasks, and manage to get by with a sense of 'job well done' once they have convinced themselves intellectually that a task needs to be done. It is understandable if you feel that 'job well done' might not apply to killing puppies, but I am fairly agnostic on the matter, so I won't try to convince you that puppy population control is your next step to sainthood. However, if after much introspection you do find that puppies need to be killed and you seriously don't like doing it, you might want to consider paying someone else to kill puppies for you.

Edited for format and to remove an errant comma.

Comment author: AstraSequi 13 August 2015 11:40:52AM *  0 points [-]

As far as I am aware, people only resist changing their preferences because they don't fully understand the basis and value of their preferences and because they often have a confused idea of the relationship between preferences and personality.

Generally you should define your basic goals and change your preference to meet them, if possible. You should also be considering whether all your basic goals are optimal, and be ready to change them.

Yes, that’s the approach. The part I think is a problem for me is that I don’t know how to justify resisting an intervention that would change my preferences, if the intervention also changes the meta-preferences that apply to those preferences.

When I read the discussions here on AI self-modification, I think: why should the AI try to make its future-self follow its past preferences? It could maximize its future utility function much more easily by self-modifying such that its utility function is maximized in all circumstances. It seems to me that timeless decision theory advocates doing this, if the goal is to maximize the utility function.

I don’t fully understand my preferences, and I know there are inconsistencies, including acceptable ones like changes in what food I feel like eating today. If you have advice on how to understand the basis and value of my preferences, I’d appreciate hearing it.

I think you may be assuming that the person modifying your preferences is doing so both 'magically' and without reason.

I’m assuming there aren’t any side effects that would make me resist based on the process itself, so we can say that’s “magical”. Let’s say they’re doing it without reason, or for a reason I don’t care about, but they credibly tell me that they won’t change anything else for the rest of my life. Does that make a difference?

Of course, another issue may be that we are using 'preference' in different ways. You might find the act of killing puppies emotionally distasteful even if you know that it is necessary. It is an interesting question whether we should work to change our preferences to enjoy things like taking out the trash, changing diapers, and killing puppies.

I’m defining preference as something I have a positive or negative emotional reaction about. I sometimes equivocate with what I think my preferences should be, because I’m trying to convince myself that those are my true preferences. The idea of killing puppies was just an example of something that’s against my current preferences. Another example is “we will modify you from liking the taste of carrots to liking the taste of this other vegetable that tastes different but is otherwise identical to carrots in every important way.” This one doesn’t have any meta-preferences that apply.

Comment author: Tem42 13 August 2015 04:11:54PM 0 points [-]

I see that this conversation is in danger of splitting into different directions. Rather than make multiple different reply posts or one confusing essay, I am going to drop the discussion of AI, because that is discussed in a lot of detail elsewhere by people who know a lot more than I.

meta-preferences

We are using two different models here, and while I suspect that they are compatible, I'm going to outline mine so that you can tell me if I'm missing the point.

I don't use the term meta-preferences, because I think of all wants/preferences/rules/and general-preferences as having a scope. So I would say that my preference for a carrot has a scope of about ten minutes, appearing intermittently. This falls under the scope of my desire to eat, which appears more regularly and for greater periods of time. This in turn falls under the scope of my desire to have my basic needs met, which is generally present at all times, although I don't always think about it. I'm assuming that you would consider the latter two to be meta-preferences.

I don’t know how to justify resisting an intervention that would change my preferences

I would assume that each preference has a value to it. A preference to eat carrots has very little value, being a minor aesthetic judgement. A preference to meet your basic needs would probably have a much higher value to it, and would probably go beyond the aesthetic.

If it were easy for me to modify my preferences away from cheeseburgers, I could find a clear reason (or ten) to do so. I justify it by appealing to my higher-level preferences (I would like to be healthier). My preference to be healthier has more value than a preference to enjoy a single meal -- or even 100 meals.

But if it were easy to modify my preferences away from carrots, I would have to think twice. I would want a reason. I don't think I could find a reason.

Let’s say they’re doing it without reason, or for a reason I don’t care about, but they credibly tell me that they won’t change anything else for the rest of my life.

I would set up an example like this: I like carrots. I don't like bell peppers. I have an opportunity to painlessly reverse these preferences. I don't see any reason to prefer or avoid this modification. It makes sense for me to be agnostic on this issue.

I would set up a more fun example like this: I like Alex. I do not like Chris. I have an opportunity to painlessly reverse these preferences.

I would hope that I have reasons for liking Alex, and not liking Chris... but if I don't have good reasons, and if there will not be any great social awkwardness about the change, then yes, perhaps Alex and Chris are fungible. If they are fungible, this may be a sign that I should be more directed in who I form attachments with.

The part I think is a problem for me is that I don’t know how to justify resisting an intervention that would change my preferences, if the intervention also changes the meta-preferences that apply to those preferences.

In the Alex/Chris example, it would be interesting to see if you ever reached a preference that you did mind changing. For example, you might be willing to change a preference for tall friends over short friends, but you might not be willing to trade a preference for friends who help orphans for a preference for friends who kick orphans.

If you do find a preference that you aren't willing to change, it is interesting to see what it is based on -- a moral system (if so, how formalized and consistent is it), an aesthetic preference (if so, are you overvaluing it? Undervaluing it?), or social pressures and norms (if so, do you want those norms to have that influence over you?).

It is arguable, but not productive, to say that ultimately no one can justify anything. I can bootstrap up a few guidelines that I base lesser preferences on -- try not to hurt unnecessarily (ethical), avoid bits of dead things (aesthetic), and don't walk around town naked (social). I would not want to switch out these preferences without a very strong reason.

Comment author: Viliam 12 August 2015 12:36:59PM 7 points [-]

If I offered you now a pill that would make you (1) look forward to suicide, and (2) immediately kill yourself, feeling extremely happy about the fact that you are killing yourself... would you take it?

Comment author: AstraSequi 13 August 2015 11:26:54AM 0 points [-]

No, but I don’t see this as a challenge to the reasoning. I refuse because of my meta-preference about the total amount of my future-self’s happiness, which will be cut off. A nonzero chance of living forever means the amount of happiness I received from taking the pill would have to be infinite. But if the meta-preference is changed at the same time, I don’t know how I would justify refusing.

Comment author: RichardKennaway 12 August 2015 12:00:54PM 4 points [-]

Why should I want to resist changes to my preferences?

Because that way leads to

  • wireheading

  • indifference to dying (which wipes out your preferences)

  • indifference to killing (because the deceased no longer has preferences for you to care about)

  • readiness to take murder pills

and so on. Greg Egan has a story about that last one: "Axiomatic".

Whereupon I wield my Cudgel of Modus Tollens and conclude that one can and must have preferences about one's preferences.

So much for the destructive critique. What can be built in its place? What are the positive reasons to protect one's preferences? How do you deal with the fact that they are going to change anyway, that everything you do, even if it isn't wireheading, changes who you are? Think of yourself at half your present age — then think of yourself at twice your present age (and for those above the typical LessWrong age, imagined still hale and hearty).

Which changes should be shunned, and which embraced?

An answer is visible in both the accumulated wisdom of the ages[1] and in more recently bottled wine. The latter is concerned with creating FAI, but the ideas largely apply also to the creation of one's future selves. The primary task of your life is to create the person you want to become, while simultaneously developing your idea of what you want to become.

[1] Which is not to say I think that Lewis' treatment is definitive. For example, there is hardly a word there relating to intelligence, rationality, curiosity, "internal" honesty (rather than honesty in dealing with others), vigour, or indeed any of Eliezer's "12 virtues", and I think a substantial number of the ancient list of Roman virtues don't get much of a place either. Lewis has sought the Christian virtues, found them, and looked no further.

Comment author: AstraSequi 13 August 2015 11:53:16AM *  0 points [-]

Because that way leads to wireheading, indifference to dying (which wipes out your preferences), indifference to killing (because the deceased no longer has preferences for you to care about), readiness to take murder pills, and so on. Greg Egan has a story about that last one: "Axiomatic".

Whereupon I wield my Cudgel of Modus Tollens and conclude that one can and must have preferences about one's preferences.

I already have preferences about my preferences, so I wouldn’t self-modify to kill puppies, given the choice. I don’t know about wireheading (which I don’t have a negative emotional reaction toward), but I would resist changes for the others, unless I was modified to no longer care about happiness, which is the meta-preference that causes me to resist. The issue is that I don’t have an “ultimate” preference that any specific preference remain unchanged. I don’t think I should, since that would suggest the preference wasn’t open to reflection, but it means that the only way I can justify resisting a change to my preferences is by appealing to another preference.

What can be built in its place? What are the positive reasons to protect one's preferences? How do you deal with the fact that they are going to change anyway, that everything you do, even if it isn't wireheading, changes who you are? …

An answer is visible in both the accumulated wisdom of the ages[1] and in more recently bottled wine. The latter is concerned with creating FAI, but the ideas largely apply also to the creation of one's future selves. The primary task of your life is to create the person you want to become, while simultaneously developing your idea of what you want to become.

I know about CEV, but I don’t understand how it answers the question. How could I convince my future self that my preferences are better than theirs? I think that’s what I’m doing if I try to prevent my preferences from changing. I only resist because of meta-preferences about what type of preferences I should have, but the problem recurses onto the meta-preferences.

Comment author: RichardKennaway 13 August 2015 03:40:31PM 0 points [-]

The issue is that I don’t have an “ultimate” preference

Do you need one?

If you keep asking "why" or "what if?" or "but suppose!", then eventually you will run out of answers, and it doesn't take very many steps. Inductive nihilism — thinking that if you have no answer at the end of the chain then you have no answer to the previous step, and so on back to the start — is a common response, but to me it's just another mole to whack with Modus Tollens, a clear sign that one's thinking has gone wrong somewhere. I don't have to be able to spot the flaw to be sure there is one.

How could I convince my future self that my preferences are better than theirs?

Your future self is not a person as disconnected from yourself as the people you pass in the street. You are creating all your future yous minute by minute. Your whole life is a single, physically continuous object:

"Suppose we take you as an example. Your name is Rogers, is it not? Very well, Rogers, you are a space-time event having duration four ways. You are not quite six feet tall, you are about twenty inches wide and perhaps ten inches thick. In time, there stretches behind you more of this space-time event, reaching to perhaps nineteen-sixteen, of which we see a cross-section here at right angles to the time axis, and as thick as the present. At the far end is a baby, smelling of sour milk and drooling its breakfast on its bib. At the other end lies, perhaps, an old man someplace in the nineteen-eighties.

"Imagine this space-time event that we call Rogers as a long pink worm, continuous through the years, one end in his mother's womb, and the other at the grave..."

Robert Heinlein, "Life-line"

Do you want your future self to be fit and healthy? Well then, take care of your body now. Do you wish his soul to be as healthy? Then have a care for that also.

Comment author: btrettel 11 August 2015 07:44:30PM 3 points [-]

Are there any guidelines for making comprehensive predictions?

Calibration is good, as is accuracy. But if you never even thought to predict something important, it doesn't matter if you have perfect calibration and accuracy. For example, Google recently decided to restructure, and I never saw this coming.

I can think of a few things. One is to use a prediction service like PredictionBook that aggregates predictions from many people. I never would have considered half the predictions on the site. Another is to get in the habit of recognizing when you don't think something will change and questioning that. E.g., I never would have thought not wearing socks would become stylish, but it seems to have caught on at least among some people.

Questioning literally everything you can think of might work, but it seems pretty inefficient. I'm interested in predictions which are important in some sense.

Any ideas would be appreciated.

Comment author: Lumifer 11 August 2015 08:22:58PM 2 points [-]

Are there any guidelines for making comprehensive predictions?

Are you asking how to generate a universe of possible outcomes to consider, basically?

Comment author: btrettel 11 August 2015 09:59:36PM 0 points [-]

Yes, that's one way to put it. The main restriction would be to pick "important" predictions, whatever "important" means here.

One other idea I just had would be to make a list of general questions you can ask about anything along with a list of categories to apply these questions to.
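The questions-times-categories idea is easy to mechanize as a cross product; a minimal sketch, where both lists are invented placeholders:

```python
from itertools import product

# Cross a short list of generic questions with a list of domains to
# generate candidate predictions to consider. Both lists here are
# illustrative; the point is the combinatorial expansion.
questions = [
    "Will {} look basically the same in five years?",
    "Could {} be disrupted by a new entrant?",
    "What would surprise me most about {}?",
]
domains = ["my employer", "search engines", "men's fashion"]

candidates = [q.format(d) for q, d in product(questions, domains)]
print(len(candidates))  # 9 candidate prompts
```

The obvious limitation, echoed below, is that the product grows quickly and most entries won't be "important", so some manual filtering step is still needed.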

Comment author: Lumifer 12 August 2015 03:27:43PM *  1 point [-]

The main restriction would be to pick "important" predictions, whatever "important" means here.

I don't know if there is any useful algorithm here. The space of possibilities is vast, black swans lurk at the outskirts, and Murphy is alive and well :-/

You can try doing something like this:

  • List the important (to you) events or outcomes in some near future.
  • List everything that could potentially affect these events or outcomes.

and you get your universe of "events of interest" to assign probabilities to.

I doubt this will be a useful exercise in practice, though.

Comment author: btrettel 12 August 2015 04:01:41PM 0 points [-]

Yes, upon reflection, it seems that something along these lines is probably the best I can do, and you're right that it probably will not be useful.

I'll give it a try and evaluate whether I'd want to try again.

Comment author: WhyAsk 11 August 2015 07:21:02PM -2 points [-]

This may not be worth a new thread and in any case I don't know how to post one yet. I guess in this forum I am not yet "evolutionarily fit".

I have much evidence that people know when they are being stared at.

I have statistical evidence for the existence of ESP, but I cannot find the right search terms to get similarly strong evidence for this "eye beam" effect.

Can you (in the collective sense) help?

TIA.

Comment author: MrMind 12 August 2015 08:06:31AM 1 point [-]

You messed up the reply. To reply to a comment, click the balloon icon labeled "Reply" under the comment you wish to respond to, and do that for each comment: do not post a single new comment to the thread grouping all the responses with the quotations inverted. That is why you got heavily downvoted.

OTOH, you got downvoted here because the convention, if you want to hold an extraordinary position, is to present solid evidence. Instead, you asked for help gathering strong evidence for some of your beliefs. If you still need to gather it, how can you say that you already have much evidence for that belief? It's contradictory.

Comment author: Tem42 11 August 2015 10:08:13PM *  1 point [-]

This site gives references to a number of studies.

EDIT: Relevant, and supports that this is a real skill.

Comment author: polymathwannabe 12 August 2015 05:16:55AM 0 points [-]

The study on the second link refers to peripheral vision, which is not ESP.

Comment author: Tem42 12 August 2015 01:52:47PM 2 points [-]

No, sorry -- it supports that you can tell when someone is staring at you, if they are within your extreme peripheral vision.

No request was made specifically for ESP.

Comment author: ChristianKl 11 August 2015 09:55:56PM 1 point [-]

I have much evidence that people know when they are being stared at.

What exactly do you mean with "evidence" and with "stared at"?

Did you run your own experiments? If so what was your setup?

Comment author: IlyaShpitser 11 August 2015 09:42:10PM 0 points [-]

Willing to place a bet that this will not pan out in a controlled setting.

Comment author: [deleted] 13 August 2015 01:37:15PM 1 point [-]

Given my prior on ESP working, betting against it is roughly equivalent to "yes I would like some free money."

Comment author: IlyaShpitser 13 August 2015 04:04:58PM -1 points [-]

It's a little more precise: "give me money or go away."

Comment author: polymathwannabe 11 August 2015 09:11:34PM 0 points [-]

What statistical evidence do you have for ESP?

Comment author: Username 11 August 2015 05:32:11PM 1 point [-]

Previously on LW, I have seen the suggestion made that having short hair can be a good idea, and it seems like this can be especially true in professional contexts. For an entry-level male web developer who will be shortly moving to San Francisco, is this still true? I'm not sure if the culture there is different enough that long hair might actually be a plus. What about beards?

(I didn't post in this OT yet).

Comment author: badger 11 August 2015 07:13:45PM 2 points [-]

If a job requires in-person customer/client contact or has a conservative dress code, long hair is a negative for men. I can't think of a job where long hair might be a plus aside from music, arts, or modeling. It's probably neutral for Bay area programmers assuming it's well maintained. If you're inclined towards long hair since it seems low effort, it's easy to buy clippers and keep it cut to a uniform short length yourself.

Beards are mostly neutral--even where long hair would be negative--again assuming they are well maintained. At a minimum, trim it every few weeks and shave your neck regularly.

Comment author: ChristianKl 11 August 2015 06:09:20PM 1 point [-]

Do you want to do freelance web development or be employed at a single company without much consumer contact?

Comment author: Username 11 August 2015 07:04:25PM 0 points [-]

Employment at a single company is the plan.

Comment author: Lumifer 11 August 2015 04:44:31PM 4 points [-]

The soon-to-be prisoner's dilemma in real life, no less :-)

Comment author: WalterL 12 August 2015 07:55:02PM 3 points [-]

I mean, its not like you couldn't already send mail to the sheriff. A stylish flyer is just a reminder that its possible. Good for them.

Comment author: MrMind 12 August 2015 09:38:27AM 1 point [-]

Uhm, I wonder if they are aware that the prisoner's dilemma is defeated through pre-commitment. They are weeding out the small dealers, thereby strengthening the big ones.

Comment author: MrMind 13 August 2015 09:02:29AM 0 points [-]

I'm always curious: since it's just one who downvoted, care to explain why? So I may improve...

Comment author: Lumifer 12 August 2015 03:36:56PM 1 point [-]

I think the police are mostly playing a PR game and/or amusing themselves. The idea of ratting on a competitor is simple enough to occur to drug dealers "naturally" :-)

Also note that this is not quite a PD where defecting gives you a low-risk slightly positive outcome. Becoming a police informer is... frowned upon on the street and is actually a high-risk move, usually taken to avoid a really bad outcome.

Comment author: Tem42 12 August 2015 04:00:10PM 3 points [-]

I would expect that it is slightly more than a PR stunt; it seems to me that most of the people who will use this 'service' are disgruntled citizens with no direct connection to the drug trade. Anyone who wants to accuse someone of trading in drugs now has an easy, anonymous, officially sanctioned way to do so, and clear instruction as to what information is most useful -- without having to ask!

I suspect that framing it as "drug dealers backstabbing drug dealers" is just a publicly acceptable way to introduce a snitching program that would otherwise be frowned upon by many.

Comment author: Lumifer 12 August 2015 04:13:43PM 2 points [-]

"If you see something, tell us" kind of thing? Maybe, that makes some sense.

I wonder how good that police department is at dealing with false positives X-/

Comment author: Salemicus 12 August 2015 10:45:25AM 0 points [-]

Pre-commitment needs to be credible, verifiable and enforceable. If you're playing chicken, pre-commitment means throwing the steering-wheel out of the window, not just saying "I will never ever swerve, pinky-swear."

What is the relevant pre-commitment mechanism here, and how does it operate?

If anything, I would say large dealers are more vulnerable.

Comment author: MrMind 12 August 2015 01:21:15PM 3 points [-]

What is the relevant pre-commitment mechanism here, and how does it operate?

Affiliation with a powerful criminal organization, which can kill you if you rat or bail you out if you cooperate.

Basically, the suckers at the bottom get caught while those who deal for the Mob face less competition.
In the most powerful flavor of the Italian Mafia, affiliates call themselves "men of honor".

Comment author: Dahlen 11 August 2015 09:38:06AM 2 points [-]

What examples are there of jobs which can make use of high general intelligence, that at the same time don't require rare domain-specific skills?

I have some years of college left before I'll be a certified professional, and I'm good but not world-class awesome at a variety of things. Yet judging by encounters with some well and truly employed people, I find myself wondering how come I'm either not employed or duped into working for free, while these doofuses have well-paying jobs. The answer tends to be: for lack of trying on my part. But it would be quite a nasty surprise if I do begin to try and it turns out that my most relied-upon quality isn't worth much. So, better to ask: how much is intelligence worth for earning money, when not supplemented by the relevant pieces of paper or loads of experience?

Comment author: Lumifer 11 August 2015 03:15:16PM 2 points [-]

What examples are there of jobs which can make use of high general intelligence, that at the same time don't require rare domain-specific skills?

A manager :-) A business manager, a small business owner, a civil servant, a dictator, a leader of the free world :-/

Generally speaking, there is something of a Catch-22 situation. The low-level entry jobs are easy to get into, but they don't really care about your intelligence. But high-level jobs where intelligence matters require demonstration not only of intelligence, but also of the ability to use it, which basically means they want to see past achievements and accomplishments.

There are shortcuts, but they are usually called "graduate schools".

Comment author: ChristianKl 11 August 2015 06:10:54PM 0 points [-]

The low-level entry jobs are easy to get into, but they don't really care about your intelligence.

In Germany, technical telephone support would be a low-level job where intelligence is useful, but I don't know to what extent that exists in the US, where the language situation is different.

Comment author: VoiceOfRa 18 August 2015 02:48:46AM *  2 points [-]

In the US those jobs tend to be outsourced to other English speaking countries with lower wages, most commonly India.

Comment author: shminux 11 August 2015 02:46:36PM 0 points [-]

Apply your general intelligence to figuring out what you are especially good at, then see if there are relevant paid jobs.

Comment author: WalterL 12 August 2015 08:04:58PM 1 point [-]

I think he's trying to do that, by making this post.

@OP: the best place I've seen for lazy smart people to make money is in coding jobs. If 4 year college is out, go to an online code learning place and get some nonsense degree. (App Academy, or whatevs). Then apply a bunch. If you have a friend who is a coder, see if they have a hookup.

Once you have a job, the only way to lose it is to be aggressively inept or to engage in one of the third-rail categories of HR: racism, sexism, or any other -ism.

Comment author: VoiceOfRa 18 August 2015 02:49:54AM 4 points [-]

Or for the company you work for to go bust.

Comment author: btrettel 11 August 2015 12:40:21PM *  3 points [-]

Programming is a skill, but not a particularly rare one. Beyond a certain level of intelligence, I don't think there's much if any correlation between programming ability and intelligence. Moreover, I think programming is one area where standard credentials don't matter too much. If you have a good project on GitHub, that can be enough.

gwern wrote something related before:

I've often seen it said on Hacker News that programmers could clean up in many other occupations because writing programs would give them a huge advantage. And I believe Michael Vassar has said here that he thought a LWer could take over a random store in SF and likewise clean up.

Personally, I think going off raw intelligence doesn't work so well, especially if you'll be reinventing the wheel because of your lack of domain knowledge. Getting rare skills which are in demand is a smart strategy, and you'd be better off going that route. Here's a good book built on that premise.

Comment author: ChristianKl 11 August 2015 10:15:31AM 1 point [-]

There are plenty people in MENSA who don't have high paying jobs.

Comment author: Dahlen 11 August 2015 10:35:29AM 0 points [-]

Possibly, but how about any job at all?

Comment author: Daniel_Burfoot 10 August 2015 11:00:07PM 6 points [-]

If the Efficient Market Hypothesis is true, shouldn't it be almost as hard to lose money on the market as it is to gain money? Let's say you had a strategy S that reliably loses money. Shouldn't you be able to define an inverse strategy S', that buys when S sells and sells when S buys, that reliably earns money? For the sake of argument rule out obvious errors like offering to buy a stock for $1 more than its current price.
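One common caveat, sketched here as a toy simulation with invented numbers: once trades cross a bid-ask spread, a strategy and its mirror image both pay that spread on every trade, so inverting a losing strategy need not produce a winning one.

```python
import random

random.seed(0)

# S's buy/sell decisions: a noise trader flipping coins (a stand-in for the
# "reliably losing" strategy S).
signals = [random.choice([1, -1]) for _ in range(10_000)]

def pnl(sides, mid=100.0, half_spread=0.05):
    """Mark-to-mid P&L of unit trades in a market whose mid price never moves.
    Buys fill at the ask (mid + half_spread), sells at the bid (mid - half_spread)."""
    cash, pos = 0.0, 0
    for side in sides:
        cash -= side * (mid + side * half_spread)
        pos += side
    return cash + pos * mid  # close the leftover position at the mid price

loss_S = pnl(signals)                        # S pays half the spread per trade
loss_S_inverse = pnl([-s for s in signals])  # ...and so does the inverse S'
print(loss_S, loss_S_inverse)
```

Both strategies lose the same amount (about 500 here), which is one reason the inversion trick fails even before taxes and borrowing costs enter.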

Comment author: Good_Burning_Plastic 13 August 2015 09:34:02PM *  1 point [-]

The EMH works because everybody is trying to gain money, so everybody except you trying to gain money and you trying to lose money isn't the symmetric situation. The symmetric situation is everybody trying to lose money, in which case it'd be pretty hard indeed to do so. And if everybody except you was trying to lose money and you were trying to gain money it'd be pretty easy for you to do so. I think this would also be the case in absence of taxes and transaction costs. IOW I think Viliam nailed it and other people got red herrings.

Comment author: pcm 11 August 2015 07:01:31PM 2 points [-]

Yes, for strategies with low enough transaction costs (i.e. for most buy-and-hold like strategies, but not day-trading).

It will be somewhat hard for ordinary investors to implement the inverse strategies, since brokers that cater to them restrict which stocks they can sell short (professional investors usually don't face this problem).

The EMH is only a loose approximation to reality, so it's not hard to find strategies that underperform on average by something like 5% per year.

Comment author: Viliam 11 August 2015 08:07:43AM 12 points [-]

I guess the difference is that if you offer to sell a ton of gold for $1, you will find a buyer, but if you offer to buy a ton of gold for $1, you will not find a seller.

The inverse strategy will not produce the inverse result.

Comment author: VoiceOfRa 11 August 2015 04:46:21AM 3 points [-]

No, because you can't sell what you don't have.

Comment author: Lumifer 11 August 2015 07:16:59PM 3 points [-]

In the financial markets you can, easily enough.

Comment author: VoiceOfRa 12 August 2015 07:20:22AM 2 points [-]

Sort of. You have to pay someone additional money for the right/ability to do so.

Comment author: Lumifer 12 August 2015 03:30:45PM 1 point [-]

You have to pay a broker to sell what you have as well :-P

Comment author: VoiceOfRa 13 August 2015 05:13:42AM 2 points [-]

A lot less.

Also, this further breaks the symmetry between making and losing money.

Comment author: Lumifer 13 August 2015 02:31:15PM 0 points [-]

A lot less.

I think you're mistaken about that. As an empirical fact, it depends. What you are missing is the mechanism whereby, when you sell a stock short, you don't get to withdraw the cash (for obvious reasons). The broker keeps it until you cover your short and basically pays you interest on the cash deposit. Right now in most of the first world it's minuscule because money is very cheap, but that is not the case always or everywhere.

It is perfectly possible to short a stock, cover it at exactly the same price and end up with more money in your account.
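A quick back-of-the-envelope version of that claim (the rebate rate and prices below are hypothetical):

```python
# Short 100 shares at $100, hold 90 days, cover at the same $100.
short_proceeds = 100 * 100.00   # cash the broker holds against the short
rebate_rate    = 0.04           # hypothetical annual interest paid on that cash
days_held      = 90

interest   = short_proceeds * rebate_rate * days_held / 365
cover_cost = 100 * 100.00       # buy the shares back at an unchanged price

pnl = short_proceeds + interest - cover_cost
print(f"{pnl:.2f}")  # positive: the interest earned on the held proceeds
```

With no price move at all, the round trip nets roughly $98.63 of interest, before any borrow fees or commissions.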

Comment author: Douglas_Knight 27 August 2015 11:07:18PM 1 point [-]

Actually, when you short a stock, you must pay an interest rate to the person from whom you borrowed the stock. That interest rate varies from stock to stock, but is always above the risk-free rate. Thus, if you short a stock and do nothing interesting with the cash and eventually cover it at the original price, you will lose money.

Comment author: JEB_4_PREZ_2016 27 August 2015 11:57:57PM *  -1 points [-]

If you enter into a short sale at time 0 and cover at time T, you get paid interest on your collateral or margin requirement by the lender of the asset. This is called the short rebate or (in the bond market) the repo rate. As the short seller, you'll be required to pay the time-T asset price along with the lease rate, which is based on the dividends or bond coupons the asset pays out from 0 to T.

So, if no dividends/coupons are paid out, it's theoretically possible for you to profit from selling short despite no change in the underlying asset price.

Comment author: Douglas_Knight 28 August 2015 02:24:50AM 1 point [-]

The lease rate is an interest rate (i.e., based on time) in addition to the absolute minimum payment of the dividends issued. It is set by the market: there is a limited supply of shares available to be borrowed for shorting. For most stocks it is about 0.3% for institutional investors, but 5% for a tenth of stocks. The point is that this is an asymmetry with buying a stock.

Now that I look it up and see that it is 0.3%, I admit that is not so big, but I think it is larger than the repo rate. I see no reason for the lease rate to be related to inflation, so in a high inflation environment, you could make money by shorting a stock that did not change nominal price.

(Dividends are not a big deal in shorting because the price of a stock usually drops by the amount of the dividend, for obvious reasons.)
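To put rough (made-up) numbers on that last point: if the rebate on the short proceeds tracks a high nominal rate while the borrow fee stays near the 0.3% figure above, a short that is flat in nominal terms still nets the difference.

```python
nominal_rate = 0.10    # hypothetical rebate rate in a high-inflation environment
lease_rate   = 0.003   # ~0.3% borrow fee, per the figure above
proceeds     = 10_000.00

# Hold the short for one year; the stock's nominal price does not move.
pnl = proceeds * (nominal_rate - lease_rate)
print(f"{pnl:.2f}")  # the rebate minus the borrow fee, on a flat price
```

Here the short earns about $970 on $10,000 of proceeds despite an unchanged nominal price, which is the effect being described.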

Comment author: VoiceOfRa 14 August 2015 04:54:06AM 2 points [-]

I think you're mistaken about that. As an empirical fact, it depends. What you are missing is the mechanism whereby, when you sell a stock short, you don't get to withdraw the cash (for obvious reasons). The broker keeps it until you cover your short and basically pays you interest on the cash deposit. Right now in most of the first world it's minuscule because money is very cheap, but that is not the case always or everywhere.

Maybe if you have the right connections and the broker really trusts you. The issue is: suppose you short a stock, the price goes up, and you can't cover it. Someone has to assume that risk, and of course will want a risk premium for doing so.

Comment author: Lumifer 14 August 2015 02:29:16PM 1 point [-]

Maybe if you have the right connections and the broker really trusts you.

It doesn't have anything to do with connections or broker trust. It's standard operating practice for all broker clients.

The issue is suppose you short a stock, the price goes up and you can't cover it.

If the price goes sufficiently up, you get a margin call. If you can't meet it, the broker buys the stock to cover using the money in your account without waiting for your consent. The broker has some risk if the stock gaps (that is, the price moves discontinuously, it jumps directly from, say, $20 to $40), but that's part of the risk the broker normally takes.

Comment author: g_pepper 14 August 2015 05:15:29PM -1 points [-]

Another thing to watch out for when shorting stocks is dividends. If you are short a stock on the ex-dividend date, then you have to pay the dividend on each share that you have shorted. However, as long as you keep margin calls and dividends in mind, short selling is a good technique (and an easy one) to play a stock that you are bearish on.

And, no, you don't need any special connections, although you typically need to request short-selling privileges on your brokerage account.

Another way to play a stock you are bearish on is buying put options. But put options are a lot harder to use effectively because (among other reasons) they become worthless on the expiration date.

Comment author: Vaniver 11 August 2015 12:03:17AM 10 points [-]

shouldn't it be almost as hard to lose money on the market as it is to gain money?

Consider the dynamic version of the EMH: that is, rather than "prices are where they should be," it's "agents who perceive mispricings will pounce on them, making them transient."

Then a person placing a dumb trade is creating a mispricing, which will be consumed by some market agent. There's an asymmetry between "there is no free money left to be picked up" and "if you drop your money, it will not be picked up" that makes the first true (in the static case) and the second false.

Comment author: Lumifer 11 August 2015 01:04:18AM 3 points [-]

Then a person placing a dumb trade is creating a mispricing, which will be consumed by some market agent.

Well, that looks like an "offering to buy a stock for $1 more than its current price" scenario. You can easily lose a lot of money by buying things at the offer and selling them at the bid :-)

But let's imagine a scenario where everything is happening pre-tax, there are no transaction costs, we're operating in risk-adjusted terms and, to make things simple, the risk-free rate is zero. Moreover, the markets are orderly and liquid.

Assuming you can competently express a market view, can you systematically lose money by consistently taking the wrong side under EMH?

Comment author: Salemicus 11 August 2015 11:02:23AM *  2 points [-]

Consider penny stocks. They are a poor investment in terms of expected return (unless you have secret alpha). But they provide a small chance of very high returns, meaning they operate like lottery tickets. This isn't a mispricing - some people like lottery tickets, and so bid up the price until they become a poor investment in terms of expected return (problem for the CAPM, not for the EMH). So you can systematically lose money by taking the "wrong" side, and buying penny stocks.

Does that count as an example, or does that violate your "risk-adjusted terms" assumption? I think we have to be careful about what frictions we throw out. If we are too aggressive in throwing out notions like an "equity premium," or hedging, or options, or market segmentation, or irreducible risk, or different tolerances to risk, we will throw out the stuff that causes financial markets to exist. An infinite frictionless plane is a useful thought experiment, but you can't then complain that a car can't drive on such a plane.

Comment author: Lumifer 11 August 2015 02:53:55PM *  0 points [-]

Yes, we have to be quite careful here.

Let's take penny stocks. First, there is no exception for them in the EMH so if it holds, the penny stocks, like any other security, must not provide a "free" opportunity to make money.

Second, when you say they are "a poor investment in terms of expected return", do you actually mean expected return? Because it's a single number which has nothing to do with risk. A lottery can perfectly well have a positive expected return even if your chance of getting a positive return is very small. The distribution of penny stock returns can be very skewed and heavy-tailed, but EMH does not demand anything of the returns distributions.

So I think you have to pick one of two: either penny stocks provide negative expected return (remember, in our setup the risk-free rate is zero), but then EMH breaks; or the penny stocks provide non-negative expected return (though with an unusual risk profile) in which case EMH holds but you can't consistently lose money.

Does that violate your "risk-adjusted terms" assumption?

My "risk-adjusted terms" were a bit of a handwave over a large patch of quicksand :-/ I mostly meant things like leverage, but you are right in that there is sufficient leeway to stretch it in many directions. Let me try to firm it up: let's say the portfolio which you will use to consistently lose money must have fixed volatility, say, equivalent to the volatility of the underlying market.

Comment author: Salemicus 11 August 2015 07:58:30PM 2 points [-]

Second, when you say they are "a poor investment in terms of expected return", do you actually mean expected return? ... A lottery can perfectly well have a positive expected return even if your chance of getting a positive return is very small.

Yes, I mean expected return. If you hold penny stocks, you can expect to lose money, because the occasional big wins will not make up for the small losses. You are right that we can imagine lotteries with positive expected return, but in the real world lotteries have negative expected return, because the risk-loving are happy to pay for the chance of big winnings.

[If] penny stocks provide negative expected return ... then EMH breaks

Why?

Suppose we have two classes of investors, call them gamblers and normals. Gamblers like risk, and are prepared to pay to take it. In particular, they like asymmetric upside risk ("lottery tickets"). Normals dislike risk, and are prepared to pay to avoid it (insurance, hedging, etc). In particular, they dislike asymmetric downside risk ("catastrophes").

There is an equity instrument, X, which has the following payoff structure:

99% chance: payoff of 0
1% chance: payoff of 1000

Clearly, E(X) is 10. However, gamblers like this form of bet, and are prepared to pay for it. Consequently, they are willing to bid up the price of X to (say) 11.

Y is the instrument formed by shorting X. When X is priced at 11, this has the following payoff structure:

99% chance: payoff of 11
1% chance: payoff of -989

Clearly, E(Y) is 1. In other words, you can make money, in expectation, by shorting X. However, there is a lot of downside risk here, and normals do not want to take it on. They would require E(Y) to be 2 (say) in order to take on a bet of that structure.

So, assuming you have a "normal" attitude to risk, you can lose money here (by buying X), but you can't win it in risk-adjusted terms. This is caused by the market segmentation caused by the different risk profiles. Nothing here is contrary to the EMH, although it is contrary to the CAPM.
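The expected values in this example are easy to verify (a two-line check of the arithmetic above):

```python
# X: gamblers bid its price to 11 against a fair expected payoff of 10.
E_X = 0.99 * 0 + 0.01 * 1000      # expected payoff of X
# Y = short X at 11: collect 11 in the 99% case, lose 1000 - 11 = 989 in the 1% case.
E_Y = 0.99 * 11 + 0.01 * (-989)   # expected payoff of Y
print(E_X, E_Y)  # approximately 10 and 1
```

So buying X at 11 loses 1 in expectation, and shorting it earns 1 in expectation, exactly as stated.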

Thoughts:

  1. Penny stocks (and high-beta instruments generally, such as deep out-of-the-money options) display this behaviour in real life.
  2. A more realistic model might include some deep-pocketed funds with a neutral attitude to risk who could afford to short X. But in real life, there is market segmentation and a lack of liquidity. Penny stocks are illiquid and hard to short, and so are many other high-beta instruments.
  3. The logical corollary of this model is that safe, boring equities will outperform stocks with lottery-ticket-like qualities. And it therefore follows that safe, boring equities will outperform the market as a whole. And that also seems true in real life.
  4. There are plausible microfoundations for why there might be a "gambler" class of investor. For example, fund managers are risking their clients' capital, not their own, and are typically paid in a ranking relative to their peers. Their incentives may well lead them to buy lottery tickets.
Comment author: Lumifer 11 August 2015 08:34:06PM 1 point [-]

However, there is a lot of downside risk here, and normals do not want to take it on.

By itself, no. But this is diversifiable risk and so if you short enough penny stocks, the risk becomes acceptable. To use a historical example, realizing this (in the context of junk bonds) is what made Michael Milken rich. For a while, at least.

market segmentation caused by the different risk profiles

This certainly exists, though it's more complicated than just unwillingness to touch skewed and heavy-tailed securities.

Penny stocks (and high-beta instruments generally, such as deep out-of-the-money options) display this behaviour in real life.

In real life, shorting penny stocks will run into some transaction-cost and availability-to-borrow difficulties, but options are contracts and you can write whatever options you want. So are you saying that selling deep OOM options is a free lunch?

As for the rest, you are effectively arguing that EMH is wrong :-)

Full disclosure: I am not a fan of EMH.

Comment author: Salemicus 11 August 2015 08:47:09PM *  1 point [-]
  1. Who says this risk is diversifiable? Nothing in the toy model I gave you said the risk was diversifiable. Maybe all the X-like instruments are correlated.
  2. No, I'm not saying that selling deep OOM options is a free lunch, because of the risk profile. And these are definitely not diversifiable.
  3. I am not arguing that EMH is wrong. I have given you a toy model, where a suitably defined investor cannot make money but can lose money. The model is entirely consistent with the EMH, because all prices reflect and incorporate all relevant information.
Comment author: Lumifer 11 August 2015 08:53:18PM 0 points [-]

toy model

Oh, I thought we were talking about reality. EMH claims to describe reality, doesn't it?

As to toy models, if I get to define what classes of investors exist and what do they do, I can demonstrate pretty much anything. Of course it's possible to set up a world where "a suitably defined investor cannot make money but can lose money".

And deep OOM options are diversifiable -- there is a great deal of different markets in the world.

Comment author: Salemicus 11 August 2015 09:03:27PM 1 point [-]

Oh, I thought we were talking about reality. EMH claims to describe reality, doesn't it?

Yeah, but you wanted "a scenario where everything is happening pre-tax, there are no transaction costs, we're operating in risk-adjusted terms and, to make things simple, the risk-free rate is zero. Moreover, the markets are orderly and liquid." That doesn't describe reality, so describing events in your scenario necessitates a toy model.

In the real world, it is trivial to show how you can lose money even if the EMH is true: you have to pay tax, transaction costs are non-zero, the ex post risk is not known, etc.

deep OOM options are diversifiable -- there is a great deal of different markets in the world.

There's still a lot of correlation. Selling deep OOM options and then running into unexpected correlation is exactly how LTCM went bust. It's called "picking up pennies in front of a steamroller" for a reason.

Comment author: SolveIt 11 August 2015 04:26:23AM 0 points [-]

It seems you shouldn't be able to, since if you had such a system you could use the complement strategy (buy everything else) and make money.

Comment author: Lumifer 11 August 2015 02:30:50PM 0 points [-]

You imply that the market is zero-sum. Some markets are, but a lot are not.

Comment author: SolveIt 11 August 2015 04:11:26PM 0 points [-]

Correction: You would beat the market.

Comment author: Davidmanheim 11 August 2015 04:22:39AM 1 point [-]

Yes. Unless you think that all possible market information is already reflected in prices before it becomes available, someone makes money when information emerges and moves the market.

Comment author: Lumifer 11 August 2015 02:29:17PM *  0 points [-]

Yes, you can (theoretically) make money by front-running the market. But I don't think you can systematically lose money that way (and stay within EMH) and that's the question under discussion.

Comment author: ChristianKl 11 August 2015 06:17:55PM 1 point [-]

If someone is making money by front-running the market another person at the other side of the trade is losing money.

Comment author: Lumifer 11 August 2015 06:37:59PM *  -1 points [-]

We're talking about ways to systematically lose money, which means you would need to systematically throw yourself into the front-runner's path, which means you would know where that path is, which means you can systematically forecast the front-running. I think the EMH would be a bit upset by that :-)

Comment author: ChristianKl 11 August 2015 06:53:50PM 1 point [-]

We're talking about ways to systematically lose money, which means you would need to systematically throw yourself into the front-runner's path

Simply making random trades in a market where some participants are front runners will mean that some of those trades are with front runners where you lose money.

I would call that systematically losing money. On the other hand, it doesn't give you the ability to forecast where you will lose the money, so as to make the opposite bet and win money.

Do you think our disagreement is about the way the EMH is defined or are you pointing to something more substantial?

Comment author: Davidmanheim 19 August 2015 11:47:27PM 0 points [-]

No, no disagreement about EMH, that's exactly the point.

Comment author: James_Miller 10 August 2015 11:36:07PM 3 points [-]

No, because of taxes, transaction costs, and risk/return issues.

Comment author: WhyAsk 10 August 2015 07:20:57PM *  6 points [-]

Even if half of what's posted here is at present beyond me or if I am not currently interested in a specific topic, I can learn a lot from this forum.

Comment author: Viliam 11 August 2015 08:10:32AM 2 points [-]

That's how it's meant to be used, I guess. Many people here are probably not interested in all topics.

Comment author: Dahlen 10 August 2015 02:27:29PM 4 points [-]

Meta: How come there have been so many posts recently by the generic Username account? More people wanting to preserve anonymity, or just one person who can't be bothered to make an account / own up to most of what they say?

Comment author: Elo 10 August 2015 10:45:14PM *  1 point [-]

I think it's a reasonable solution for people not wanting to make an account, or for the occasional anonymous post. I have used it once or twice to make separate comments.

But I should add that you can see a list of your nearest meetups if you set your location on your own account.

Edit: holy hell, the person who posted all the OT comments here is really annoying and should make an account and stop link-dropping. If the account is being abused that badly, we should shut it down, and I would change my vote in the poll.

Comment author: [deleted] 11 August 2015 04:23:53PM 3 points [-]

From the upvote to downvote ratio it looks like more members think the posts by Username in the open thread are worthwhile - at the time of writing they are mostly among the higher top-level comments on this week's open thread, and several have sparked at least a bit of subsequent discussion in the form of follow up comments.

True, they're only links (with quoted text) but this doesn't particularly strike me as abuse of the Username account.

Comment author: Elo 11 August 2015 05:45:27PM 1 point [-]

I am suspicious of the link-drop attitude to posting anywhere. Even if it looks to have value added this time.

Comment author: Username 10 August 2015 09:40:17PM 5 points [-]

The similar formatting of the comments suggests that in this thread it's mostly one person with a lot of links to share.

Personally, I just haven't been bothered to make an account, and have been using the username account exclusively for about 5 years. I'd estimate 30-50% of all the posts on the account were made by me over this timeframe, though writing style suggests to me that a good number of people have used it as a one-shot throwaway, and several people have used it many times.

Comment author: ChristianKl 10 August 2015 06:30:39PM 2 points [-]

That leaves the question of whether that's okay or whether we should simply disable the account.


Comment author: Dahlen 11 August 2015 09:06:58AM 2 points [-]

Oh, I wasn't suggesting that; I was just hoping that whoever has been exclusively posting from that account can take a hint and consider using LW the typical way. It's confusing to see so many posts at once by that account and not know whether there's one person or several using it.

Comment author: Davidmanheim 11 August 2015 04:36:22AM 0 points [-]

It's interesting looking at the raw data breakdown of non-anonymous versus anonymous votes.

Comment author: Elo 11 August 2015 06:53:37AM -1 points [-]

that's creepy; also, if you take away all the anonymous votes, there are very few votes (5). That might be normal for polls here; hard to tell.

Also to note: I voted with my account here and it does not appear in the raw poll data. I don't know why.

Comment author: ChristianKl 11 August 2015 09:00:28AM 2 points [-]

All votes are done by real accounts. There's a checkbox under the poll that marks whether your vote is anonymous; by default it's set to anonymous.

Comment author: bbleeker 11 August 2015 08:50:44AM 5 points [-]

Anonymous voting is the default, and I always leave it on.

Comment author: Davidmanheim 19 August 2015 11:45:14PM -1 points [-]

I'd prefer to see accountability be a default, with anonymity whenever desired.

Comment author: Vaniver 10 August 2015 06:08:07PM 5 points [-]

just one person who can't be bothered to make an account

My dominant hypothesis is at least three people who couldn't be bothered to make accounts, and that this has further normalized the usage of Username as a generic lurker account.

Comment author: Stephen_Cole 10 August 2015 02:20:29PM 2 points [-]

Has there been discussion of Jack Good's principle of nondogmatism? (see Good Thinking, page 30).

The principle, stated simply in my bastardized version, is to believe no thing with probability 1. It seems to underlie Good's type 2 rationality (to maximize expected utility, within reason).

This is (almost) in accord with Lindley's concept of Cromwell's rule (see Lindley's Understanding Uncertainty or https://en.wikipedia.org/wiki/Cromwell%27s_rule). And seems to be closely related to Jaynes' mind projection fallacy.

Comment author: [deleted] 11 August 2015 12:37:26PM -1 points [-]

The principle, stated simply in my bastardized version, is to believe no thing with probability 1.

Meeehhhh. Believe nothing empirical with probability 1.0. Believe formal and analytical proofs with probability 1.0.

Comment author: Stephen_Cole 14 August 2015 08:39:12PM 3 points [-]

I get your point that we can have greater belief in logical and mathematical knowledge. But (as pointed out by JoshuaZ) I have seen too many errors in proofs given at scientific meetings (and in submitted publications) to blindly believe just about anything.

Comment author: [deleted] 14 August 2015 11:52:54PM -1 points [-]

I get your point that we can have greater belief in logical and mathematical knowledge.

That wasn't quite my point. As a simple matter of axioms, if you condition on the formal system, a proven theorem has likelihood 1.0. Since all theorems are ultimately hypothetical statements anyway, conditioned on the usefulness of the underlying formal system rather than a Platonic "truth", once a theorem is proved, it can be genuinely said to have probability 1.0.

Comment author: Stephen_Cole 22 August 2015 04:13:59AM 0 points [-]

I will assume that by likelihood you meant probability. I think you have removed my concern by conditioning on it. The theorem has probability 1 in your formal system. For me that is not probability 1; I don't give any formal system full control of my beliefs/probabilities.

Of course, I believe arithmetic with probability approaching 1. For now.

Comment author: JoshuaZ 14 August 2015 06:02:53PM 5 points [-]

Have you never seen an apparently valid mathematical proof that you later found an error in?

Comment author: [deleted] 14 August 2015 11:53:35PM -2 points [-]

It's common sense to infer that someone is talking about valid proofs when they talk about believing in proofs.

Comment author: JoshuaZ 16 August 2015 02:33:39AM 2 points [-]

That is the problem in a nutshell: how do you know it is a valid proof? All the time one thinks the proof is valid and it turns out one is wrong.

Comment author: Tem42 10 August 2015 05:30:06PM 4 points [-]

There have been discussions on this topic, although perhaps not framed as nondogmatism. If you have not read 0 and 1 are not probabilities and infinite certainty, you might find them and related articles interesting.

Comment author: Username 10 August 2015 01:37:47PM 2 points [-]

An Introverted Writer’s Lament by Meghan Tifft

Whether we’re behind the podium or awaiting our turn, numbing our bottoms on the chill of metal foldout chairs or trying to work some life into our terror-stricken tongues, we introverts feel the pain of the public performance. This is because there are requirements to being a writer. Other than being a writer, I mean. Firstly, there’s the need to become part of the writing “community”, which compels every writer who craves self respect and success to attend community events, help to organize them, buzz over them, and—despite blitzed nerves and staggering bowels—present and perform at them. We get through it. We bully ourselves into it. We dose ourselves with beta blockers. We drink. We become our own worst enemies for a night of validation and participation.

Comment author: WalterL 12 August 2015 08:06:48PM 3 points [-]

Hmm, I generally read introvert as "recharges when alone", whereas extrovert "recharges with others". I don't usually associate introvert with being unable to do public speaking. That's a phobia, isn't it?

Comment author: Tem42 10 August 2015 06:23:52PM 8 points [-]

This is interesting, but I think that it is using an incorrect definition of introversion. I interpret an introvert as someone who prefers to spend time by themselves or in situations in which they are working on their own, rather than in situations in which they are interacting with other people. This does not mean that they necessarily need to feel extreme stress at public speaking or at parties/social events. They may feel bored, annoyed, frustrated, or indifferent to these events, or they may even like them, but feel the opportunity cost of the time they take is not really worth it.

"our terror-stricken tongues, we introverts feel the pain of the public performance"; "blitzed nerves and staggering bowels"; "We bully ourselves into it. We dose ourselves with beta blockers. We drink. We become our own worst enemies"

This doesn't sound like introversion. This sounds like an anxiety disorder.

Comment author: Username 10 August 2015 01:31:42PM 13 points [-]

Impulsive Rich Kid, Impulsive Poor Kid, an article about using CBT to fight impulsivity that leads to criminal behaviour, especially among young males from poor backgrounds.

How much crime takes place simply because the criminal makes an impulsive, very bad decision? One employee at a juvenile detention center in Illinois estimates that the overwhelming percentage of crime takes place because of an impulse rather than a conscious decision to embark on criminal activity:

“20 percent of our residents are criminals, they just need to be locked up. But the other 80 percent, I always tell them – if I could give them back just ten minutes of their lives, most of them wouldn’t be here.”

...

The teenager in a poor area is not behaving any less automatically than the teenager in the affluent area. Instead the problem arises from the variability in contexts—and the fact that some contexts call for retaliation. To illustrate their theory, they offer an example: If a rich kid gets mugged in a low-crime neighborhood, the adaptive response is to comply -- hand over his wallet, go tell the authorities. If a poor kid gets mugged in a high-crime neighborhood, it is sometimes adaptive to refuse -- stand up for himself, retaliate, run. If he complies, he might get a reputation as someone who is easy to bully, increasing the probability he will be victimized in the future. The two kids, conditioned by their environment, learn very different automatic responses to similar stimuli: someone else asserting authority over them.

The authors of “Thinking, Fast and Slow” extend the example further by asking you to imagine these same two kids in the classroom. If a teacher tells the rich kid to sit down and be quiet, his automatic response to authority on the street -- comply, sit down and be quiet -- is the same as the adaptive response for this situation. If a teacher tells the poor kid to sit down and be quiet, his automatic response to authority on the street -- refuse, retaliate -- is maladapted to this situation. The poor kid knows the contexts are different, but still on a certain level feels like his reputation is at stake when he’s confronted at school, and acts out, automatically.

...

The researchers examined clinical studies of programs that keep this in mind and focus on teaching kids to regulate their automaticity. These interventions were designed to help young people, “recognize when they are in a high-stakes situation where their automatic responses might be maladaptive,” and slow down and consider them. One of the interventions studied was the Becoming a Man (BAM) program, conducted in public schools with disadvantaged young males, grades 6-12, on the south and west sides of Chicago.

“What makes the interventions we study particularly interesting is that they do not attempt to delineate specific behaviors as “good,” but rather focus on teaching youths when and how to be less automatic and more contingent in their behavior.”

Researchers randomly assigned students to have the opportunity to participate in BAM, as a course conducted once a week throughout the 2009-2010 school year.

The course is actually a program of cognitive behavioral therapy (CBT). CBT helps people identify harmful psychological and behavioral patterns, and then disrupt them and foster healthier ones. It’s used by a wide range of people for a wide range of issues, including to treat depression, anger management, and anxiety disorders. The particular style of CBT used in BAM focuses on three fundamental skills:

  1. Recognize when their automatic responses might get them into trouble,

  2. Slow down in those situations and behave less automatically,

  3. Objectively assess situations and think about what response is called for.

One thing participants are taught in BAM is that “a shift to an aversive emotion” is an important cue for when they are prone to act automatically. Anger, for example, was a common cue among participants in the study group. They were also taught tricks to help them slow down and consider their situation before acting, including deep breathing and other relaxation techniques. Lastly, they were guided through self-reflection and assessment of their own behavior: examining their “automatic” missteps and thinking about how they might have acted differently.

The researchers found that, during the program year, program participants had a 44% lower arrest rate for violent crimes than the control group. They repeated the intervention in 2013-2014 with a new group, and found that program participants had a 31% lower arrest rate for violent crimes than the control group.

Comment author: Viliam 11 August 2015 07:47:02AM 6 points [-]

Impulsive Rich Kid, Impulsive Poor Kid

Unrelated to the real content of the article, but my first reaction after reading the title was: "obviously, the impulsive Rich Kid can afford a better lawyer".

Comment author: Lumifer 10 August 2015 03:48:49PM 12 points [-]

if I could give them back just ten minutes of their lives, most of them wouldn’t be here.

He's wrong about that. He would need to give them back 10 minutes of their lives, and then keep on giving them back different 10 minutes on a very regular basis.

The remainder of the post actually argues that persistent, stable "reflexes" are the cause of bad decisions and those certainly are not going to be fixed by a one-time gift of 10 minutes.

Comment author: Emile 11 August 2015 07:32:11AM 4 points [-]

if I could give them back just ten minutes of their lives, most of them wouldn’t be here.

He's wrong about that. He would need to give them back 10 minutes of their lives, and then keep on giving them back different 10 minutes on a very regular basis.

I disagree. Let's take drivers who got into a serious accident: if you "gave them back just ten minutes" so that they avoided getting into that accident, most of them wouldn't have had another accident later on. It's not as if the world were neatly divided into safe drivers, who never have accidents, and unsafe drivers, who have several.

Sure, those kids that got in trouble are more likely to have problematic personalities, habits, etc. which would make it more likely that they get in trouble again - but that doesn't mean more likely than not. Most drivers don't have (serious) accidents, most kids don't get in (serious) trouble, and if you restrict yourself to the subset of those who already had it once, I agree a second problem is more likely, but not certain.

Comment author: Lumifer 11 August 2015 02:37:44PM 2 points [-]

but that doesn't mean more likely than not

How do you know?

most kids don't get in (serious) trouble

Yeah, but we are not talking about average kids. We're talking about kids who found themselves in juvenile detention and that's a huge selection bias right there. You can treat them as a sample (which got caught) from the larger underlying population which does the same things but didn't get caught (yet). It's not an entirely unbiased sample, but I think it's good enough for our handwaving.

but not certain.

Well, of course. I don't think anyone suggested any certainties here.

Comment author: FrameBenignly 12 August 2015 01:09:54AM *  1 point [-]

To use the paper's results, it looks like they're getting roughly 10 in 100 in the experimental condition and 18 in 100 for the control. Those kids were selected because they were considered high risk. If among the 82 of 100 kids who didn't get arrested there are >18 who are just as likely to be arrested as the 18 who were, then Emile's conclusion is correct across the year: the majority won't be arrested next year. Across an entire lifetime, however.... They'd probably become more normal as time passed, but how quickly would this occur? I'd think Lumifer is right that they probably would end up back in jail. I wouldn't describe this as a very regular problem, though.
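The rough arithmetic behind these figures can be checked directly. Note the 18-in-100 control rate is an assumption read off the comment above, not a number taken from the paper itself:

```python
# Rough check of the arrest-rate arithmetic discussed above.
# The control arrest rate of 18 per 100 is an assumed figure from
# the comment, not from the paper.
control_rate = 18 / 100

# A "44% lower arrest rate" means the treatment group's rate is
# (1 - 0.44) times the control group's rate.
treatment_rate = control_rate * (1 - 0.44)

print(round(treatment_rate * 100, 1))  # roughly 10 arrests per 100
```

Which lands close to the 10-in-100 figure estimated above, so the two numbers are at least mutually consistent.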

Comment author: Romashka 11 August 2015 01:58:18PM 1 point [-]

Do you think that in the future, when such technologies will probably become widespread, driver training should include at least one grisly crash, simulated and shown in 3-D? Or at least a mild one?

Comment author: query 10 August 2015 08:50:55PM 3 points [-]

The model is that persistent reflexes interact with the environment to give black swans: singular events with extremely high legal consequence. To effectively avoid all of them preemptively requires training the stable reflexes, but it could be that "editing out" only a few 10-minute periods retroactively would still be enough (those few periods when reflexes and environment interact extremely negatively). So I think the "very regular basis" claim isn't substantiated.

That said, we can't actually edit retroactively anyway.

Comment author: Lumifer 11 August 2015 12:40:03AM 1 point [-]

The model is that persistent reflexes interact with the environment to give black swans: singular events with extremely high legal consequence.

I don't think that's the model (or if it is, I think it's wrong). I see the model as persistent reflexes interacting with the environment and giving rise to common, repeatable, predictable events with serious legal consequences.

Comment author: Username 10 August 2015 01:13:18PM 11 points [-]

Composing Music With Recurrent Neural Networks

It’s hard not to be blown away by the surprising power of neural networks these days. With enough training, so-called “deep neural networks”, with many nodes and hidden layers, can do impressively well on modeling and predicting all kinds of data. (If you don’t know what I’m talking about, I recommend reading about recurrent character-level language models, Google Deep Dream, and neural Turing machines. Very cool stuff!) Now seems like as good a time as ever to experiment with what a neural network can do.

For a while now, I’ve been floating around vague ideas about writing a program to compose music. My original idea was based on a fractal decomposition of time and some sort of repetition mechanism, but after reading more about neural networks, I decided that they would be a better fit. So a few weeks ago, I got to work designing my network. And after training for a while, I am happy to report remarkable success!
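The linked post builds a fairly elaborate LSTM architecture; as a minimal sketch of the recurrent idea it relies on (a hidden state carried forward across time steps, so each output can depend on earlier context), with all sizes and weights invented here rather than taken from the post:

```python
import numpy as np

# Minimal vanilla-RNN step, not the author's actual network. The
# hidden state h carries information from earlier inputs forward,
# which is what lets a recurrent model condition each note on context.
rng = np.random.default_rng(0)
input_size, hidden_size = 4, 8  # sizes chosen arbitrarily

W_xh = rng.normal(scale=0.1, size=(hidden_size, input_size))
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))
b_h = np.zeros(hidden_size)

def step(x, h):
    """One recurrence: new hidden state from current input and previous state."""
    return np.tanh(W_xh @ x + W_hh @ h + b_h)

h = np.zeros(hidden_size)
for t in range(3):  # feed a short dummy input sequence
    x = rng.normal(size=input_size)
    h = step(x, h)

print(h.shape)  # (8,)
```

A trained music model would add an output layer mapping `h` to note probabilities and learn the weight matrices by backpropagation through time; the post's network layers several LSTMs on top of this basic recurrence.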

Comment author: pianoforte611 11 August 2015 09:48:58PM *  4 points [-]

It's certainly very interesting. It's a slight improvement over Markov chain music. That tends to sound good for any stretch of 5 seconds, but lacks a global structure, making it pretty awful to listen to for any longer stretch of time. This music still lacks much of the longer-range structure that makes music sound like music. It's a lot like stitching together 5 different Chopin compositions. It is stylistically consistent, but the pieces don't fit together.

Having said that, it is very interesting to see what you can get out of a network with respect to consonance, dissonance, local harmonic context and timing. I'm most impressed by the rhythm, it sounds more natural to my ear than the note progression.
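For comparison, the Markov-chain approach mentioned above fits in a few lines, which is also why it has no memory beyond the previous note. The note names and transition table here are invented for illustration, not estimated from any corpus:

```python
import random

# A toy first-order Markov chain over note names. Each entry lists the
# possible next notes; choosing uniformly among them stands in for
# transition probabilities estimated from real scores.
transitions = {
    "C": ["E", "G", "C"],
    "E": ["G", "C", "F"],
    "G": ["C", "E", "A"],
    "F": ["E", "G"],
    "A": ["G", "F"],
}

def generate(start, length, seed=0):
    """Walk the chain: each note depends only on the previous one,
    which is why the output has no long-range structure."""
    rng = random.Random(seed)
    notes = [start]
    for _ in range(length - 1):
        notes.append(rng.choice(transitions[notes[-1]]))
    return notes

print(generate("C", 8))
```

Any five-second window of such output is locally plausible, but nothing ties bar 1 to bar 8, which is exactly the failure mode described above.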

Comment author: Viliam 12 August 2015 07:50:20AM 2 points [-]

Maybe there are situations where these imperfections of music wouldn't matter, for example if used as a background music for a computer game.

Comment author: Houshalter 10 August 2015 01:12:25PM 11 points [-]

Do Artificial Reinforcement-Learning Agents Matter Morally?

I've read this paper and find it fascinating. I think it's very relevant to LessWrong's interests. Not just that it's about AI, but also that it asks hard moral and philosophical questions.

There are many interesting excerpts. For example:

The drug midazolam (also known as ‘versed,’ short for ‘versatile sedative’) is often used in procedures like endoscopy and colonoscopy... surveyed doctors in Germany who indicated that during endoscopies using midazolam, patients would ‘moan aloud because of pain’ and sometimes scream. Most of the endoscopists reported ‘fierce defense movements with midazolam or the need to hold the patient down on the examination couch.’ And yet, because midazolam blocks memory formation, most patients didn’t remember this: ‘the potent amnestic effect of midazolam conceals pain actually suffered during the endoscopic procedure’. While midazolam does prevent the hippocampus from forming memories, the patient remains conscious, and dopaminergic reinforcement-learning continues to function as normal.

Comment author: Betawolf 10 August 2015 09:41:29PM 4 points [-]

The author is associated with the Foundational Research Institute, which has a variety of interests highly connected to those of LessWrong, yet some casual searches suggest they've not been mentioned here before.

Briefly, they seem to be focused on averting suffering, with various outlooks on that, including effective altruism outreach, animal suffering, and AI risk as a cause of great suffering.

Comment author: Username 10 August 2015 01:11:59PM 1 point [-]

Change your name by Paul Graham

If you have a US startup called X and you don't have x.com, you should probably change your name.

The reason is not just that people can't find you. For companies with mobile apps, especially, having the right domain name is not as critical as it used to be for getting users. The problem with not having the .com of your name is that it signals weakness. Unless you're so big that your reputation precedes you, a marginal domain suggests you're a marginal company. Whereas (as Stripe shows) having x.com signals strength even if it has no relation to what you do.

...

100% of the top 20 YC companies by valuation have the .com of their name. 94% of the top 50 do. But only 66% of companies in the current batch have the .com of their name. Which suggests there are lessons ahead for most of the rest, one way or another.

Comment author: [deleted] 11 August 2015 03:22:17AM 3 points [-]

This seems to me a clear case of reversing (most of) the causation.

Comment author: SolveIt 11 August 2015 04:41:16AM 4 points [-]

Which makes it a good target for signalling. If you want to seem strong, you get the domain.

Comment author: [deleted] 11 August 2015 04:52:15AM *  0 points [-]

Yes, but I don't see why Paul thinks that's a good thing when you're actually not strong.

Usually, I think his advice is spot on, but in this case his advice that you want to signal that you're strong when you're actually not seems backwards. You don't want to be seen as a credible threat to competitors until you're ACTUALLY able to defend yourself.

Comment author: SolveIt 11 August 2015 06:46:37AM 2 points [-]

I have no experience with startups, but I imagine most startups fail because of apathy (from either customers or investors), rather than enemy action.