If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

1Jayson_Virissimo8y
You're welcome.

I will be most interested to find out what it is that requires a sockpuppet but doesn't require it to be secret that it's a sockpuppet or even whose sockpuppet.

I think the point is that when googling his name, the post does not show up, but if LWers know it's the same person, there's no harm.

5gjm8y
Yup, he has confirmed essentially this by PM.
4tut8y
What is your credence that the google of five years in the future won't find things written under pseudonyms when you search for the author's real name? 10 years?
4Vaniver8y
I agree that will likely be available as a subscription service in 5 years or so, but I think it would be somewhat uncharacteristic for Google to launch that for everyone. (As I recall, they had rather good face recognition software ~5 years ago but decided to kill potential features built on that instead of rolling them out, because of privacy and PR concerns.)
0ChristianKl8y
By replying you eliminated his ability to delete the post and thus maybe the point of the effort.
0gjm8y
Can't he still replace it with [deleted] or something? (If so, and if it is helpful, I will happily amend what I wrote to leak less information about what happened.) Anyway: of course it was not my intention to deanonymize anyone, and I regret it if I have.
0ChristianKl8y
I think that only happens when he deletes his own account. I don't think that's the case, but if he wants to create an anonymous account he should likely start over with a new one.
0gjm8y
I meant replacing the content with "[deleted]", not the account name.
0ChristianKl8y
I think from the context of your post the meaning would still have been clear. Apart from that, I don't think he can do it after he retracted the post (the strikethrough).
0philh8y
Nope, I can still edit it.

Lessons from teaching a neural network...

Grandma teaches our baby that a pink toy cat is "meow".
Baby calls the pink cat "meow".
Parents celebrate. (It's her first word!)

Later Barbara notices that the baby also calls another pink toy non-cat "meow".
The celebration stops; the parents are concerned.
Viliam: "We need to teach her that this other pink toy is... uhm... actually, what is this thing? Is that a pig or a pink bear or what? I have no idea. Why do people create such horribly unrealistic toys for the innocent little children?"
Barbara shrugs.
Viliam: "I guess if we don't know, it's okay if the baby doesn't know either. The toys are kinda similar. Let's ignore this, so we neither correct her nor reward her for calling this toy 'meow'."

Barbara: "I noticed that the baby also calls the pink fish 'meow'."
Viliam: "Okay... I think now the problem is obvious... and so is the solution."
Viliam brings a white toy cat and teaches the baby that this toy is also "meow".
Baby initially seems incredulous, but gradually accepts.

A week later, the baby calls every toy and grandma "meow".

[-][anonymous]8y150

So the child was generalizing along the wrong dimension, and you decided the solution was to train an increase in generalization of the word "meow", which is what you got. You need to teach discrimination, not generalization. A method for doing so is to present the pink cat and the pink fish sequentially: reward the "meow" response in the presence of the cat, and reward fish responses to the fish. Eventually "meow" responses to the fish should extinguish.

7[anonymous]8y
Teaching subtraction: 'See, you had five apples, and you ate three. How many apples do you have?' 'Five.' 'No, look here, you only have two left. Okay, you had six apples, and ate four, how many apples do you have now?' 'Five.' 'No, dear, look here... Okay...' Sigh. 'Mom?' 'Yes, dear?' 'And if I have many apples, and I eat many, how many do I have left?..'
5Gunnar_Zarncke8y
Piaget's problem: The child tries to guess what the teacher/parent/questioner wants. I never teach math. At least not in the school way of offering problems and asking questions about them. For example, I 'taught' subtraction the following way: (in the kitchen) Me: "Please give me six potatoes." Him: "1, 2, 3" Me (putting them in the pot): "How many do we still need?" Him: "4, 5, 6" (thinking) "3 more." A specific situation avoids guessing the password.
5Dagon8y
The necessity of negative examples is well-known when training classifiers.
4Viliam8y
In theory I agree. Experimentally, trying to teach her that other toys are connected to different sounds, e.g. that the black-and-white cow is "moo", hasn't produced any reaction so far. And I believe she doesn't understand the meaning of the word "not" yet, so I can't explain that some things are "not meow". I guess this problem will fix itself later... that some day she will also start repeating the sounds for other animals. (But I am not sure what the official sound for a turtle is.)
3moridinamael8y
I am sure this isn't necessary, but, you do realize that she's going to learn language flawlessly without you actively doing anything? Instead of saying "my " my daughter used to exclusively say "the that I use." My son used to append a "t" sound to the end of almost every word. These quirks sorted themselves out without us mentioning them. =)
6Crux8y
The phrase "actively doing anything" is too slippery. What one person does passively another may do actively. People who post on Less Wrong tend to do things consciously more often than the general public. The theories which say that children acquire language without anyone doing anything special are no doubt studying the behavior of normal people. The conclusion is that Viliam is probably simply thinking out loud about things that most people consider only subconsciously and implement in some way but don't know how to articulate. If you try to acquire a foreign language by merely listening to native speakers converse, you will learn very little. Children learn language when adults adapt their speech to their level and attempt to bridge the inferential distance. Most people do this by accident of having the impulses of a human parent.
1Gunnar_Zarncke8y
Not actively, but maybe subconsciously. As I already mentioned, child-directed speech is different. And also yes: most children probably can get by without that either. And also: I'm sure gwern will chime in and cite that parents have no impact on language and concept acquisition at all.
3TimS8y
There's overwhelming data that parenting can prevent language acquisition. But that requires extreme degenerate cases - essentially child abuse on the level of locking the child in the closet and not talking to them at all. For typical parenting, I agree that it is unlikely that variance in parenting style has measurable effect on language acquisition.
0Gunnar_Zarncke8y
And what about the size of the vocabulary?
6gwern8y
I don't see why you would expect that to be affected much either. Vocabulary is a good measure of intelligence because words have a very long tail (Pareto, IIRC) distribution of usage; most words are hardly ever used. I pride myself on my vocabulary and I know it's vastly larger than most English speakers as evidenced by things like a perfect SAT verbal score, but nevertheless if I read through one page of my compact OED, I will run into scores or hundreds of words I've never seen before (and even more meanings of words!). This doesn't bother me since when you reach the point where you're reading the OED to learn new words, the words are now useless for any sort of actual communication... Anyway, since most words are hardly ever used, people will be exposed to them very few times, and so they are a sensitive measure of how quickly a person can learn the meaning of a word. If people are exposed to the word 'perspicacious' only 3 or 4 times in a lifetime on average, then intelligence will heavily affect whether that was enough for them to learn it; and by testing a few dozen words drawn from the critical region of rarity where they're rare enough that most people would not have been exposed enough times but not so rare that no one learns them normally (as determined empirically - think item-response theory curves here), you can get a surprisingly good proxy for intelligence despite the obvious fragility to cheating. (Of course, you can get around it, and the SAT does, by simply having lots of semi-rare words to draw upon. Have you ever looked at comprehensive SAT vocab lists? There's just no way most people could memorize more than a few hundred without, well, being very smart and verbally adept.) Since there are so many potential rare vocabulary words to learn (and which could be sampled on an IQ test) and since parents speak only a small fraction of the number of words a person is exposed to over a lifetime... Even a parent deliberately trying to build vocabulary
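A minimal sketch of the item-response-theory idea mentioned above, assuming a standard two-parameter logistic curve and made-up difficulty and ability values on an arbitrary latent scale:

```python
import math

def p_knows(theta, difficulty, discrimination=1.0):
    """2PL item-response curve: probability that a person with ability `theta`
    knows a word of a given `difficulty` (both on the same latent scale)."""
    return 1.0 / (1.0 + math.exp(-discrimination * (theta - difficulty)))

# A very common word (low difficulty) separates nobody; a semi-rare word
# near the population mean is the informative kind described above;
# an extremely rare word separates nobody either, since almost no one knows it.
for difficulty, label in [(-3, "common"), (0, "semi-rare"), (4, "very rare")]:
    probs = [round(p_knows(theta, difficulty), 2) for theta in (-1, 0, 1)]
    print(f"{label:9s} word, P(known) at ability -1/0/+1: {probs}")
```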
4Lumifer8y
It's also a good measure of how much you read (or how much you read as a child). People who read books -- lots and LOTS of books -- have a very good vocabulary. People who don't, don't. There is certainly a correlation, but vocabulary is still just a proxy for intelligence, and maybe not a good proxy for math people or in the 'net age.
0Gunnar_Zarncke8y
Yes, that is about what I expected you to confirm. And wow, are you omnipresent, or how come you actually noticed the post so quickly (or is there a 'find my name in posts' functionality I overlooked)?
3gwern8y
I just skim http://lesswrong.com/r/discussion/comments/ and occasionally C-f for my name. (My PMs/red-box are so backed up that I haven't dared check them in many months, so this is how I see most replies to my comments...)
0polymathwannabe8y
Then teach her to say turtle. Likewise for other animals; a cat is not a meow.
0Viliam8y
Yes, as soon as she learns to speak polysyllabic words (which in my language also include "cat").
5polymathwannabe8y
My first word was "daddy" (papi in Spanish). It should be possible to start with regular words. Edited to add: I just looked up "turtle" in Slovak. I can't believe how much of a jerk I was in my previous comment.
2Gunnar_Zarncke8y
I'm not convinced that using different names is a really helpful idea. It requires an extra transition later on. Well, no real harm done. But I wonder about the principle behind that: dropping complexity because it is hard? I agree that child-directed speech is different. It is simpler. But it isn't wrong. Couldn't you have said "the cat meows" or "this toy meows" or even "it meows"? That would have placed the verb in the right place in a simple sentence. The baby can now validly repeat the sound/word.
3Emily8y
Hard to come by in normal language acquisition, though. So it probably doesn't quite work like that.
0Unnamed8y
Sounds like she hasn't learned shape bias yet.

I've gotten around to doing a cost-benefit analysis for vitamin D: http://www.gwern.net/Longevity#vitamin-d

3closeness8y
Is it 5000IU per day?
1gwern8y
We don't know. Since you asked, here's the comment from one of the more recent meta-analyses to discuss dose in connection with all-cause mortality, Autier 2014: 1μg=40IU, so 10μg=400IU, 20μg=800IU, and 125μg=5000IU. Personally, I'm not sure I agree. The mechanistic theory and correlations do not predict that 400IU is ideal; it doesn't seem enough to get blood serum levels of 25(OH)D to what seems optimal, and I don't even read Rejnmark the same way: look at the Figure 3 forest plot. To me, it looks like, after correcting for Smith's use of D2 rather than D3 (D2 usually performs worse), there are too few studies using higher doses to make any kind of claim (Table 1; almost all the daily studies use <=20μg), and the studies which we do have tend to point to higher being better within this restricted range of dosages. That said, I cannot prove that 5k IU is equally or more effective, so if anyone is feeling risk-averse or dubious on that score, they should stick with 800IU doses.
1ChristianKl8y
People in the studies presumably don't take it all in the morning. Do you have an estimate of how that affects the total effect? How much bigger would you estimate the effect to be when people take it in the morning?
5gwern8y
I take it in the morning just because I found that taking it late at night harmed my sleep. I have no idea how much people taking it later in the day might reduce benefits by damaging sleep; I would guess that the elderly people usually enrolled in these trials would be taking it as part of their breakfast regimen of pills/prescriptions and so the underestimate of benefits is not that serious.
0Lumifer8y
D is a fat-soluble vitamin that the body can store. It's not like, say, the B vitamins which get washed out of your body pretty quickly. I don't think when you take it makes any difference (though you might want to take it together with food that contains fat for better absorption).
0ChristianKl8y
Multiple people such as gwern and Seth Roberts found that the timing makes a difference for them.
1Lumifer8y
That's true. What I meant is that blood levels of vitamin D are fairly stable, and for the purposes of reduction in mortality it shouldn't matter when in the day you take it. However, side-effects, e.g. affecting sleep, are possible and may be a good reason to take it at particular times.
2ChristianKl8y
I don't think it's clear at all that the reduction in mortality is separate from sleep quality. Vitamin D does different things, but I would estimate that a lot of the reduction in mortality is due to having a better immune system. Sleeping badly means a worse immune system.
0NoSignalNoNoise8y
Thanks for posting that! The key stats: expected life extension: 4 months; optimal starting age: 24.

Why too much evidence can be a bad thing

(Phys.org)—Under ancient Jewish law, if a suspect on trial was unanimously found guilty by all judges, then the suspect was acquitted. This reasoning sounds counterintuitive, but the legislators of the time had noticed that unanimous agreement often indicates the presence of systemic error in the judicial process, even if the exact nature of the error is yet to be discovered. They intuitively reasoned that when something seems too good to be true, most likely a mistake was made.

In a new paper to be published in The Proceedings of The Royal Society A, a team of researchers, Lachlan J. Gunn, et al., from Australia and France has further investigated this idea, which they call the "paradox of unanimity."

"If many independent witnesses unanimously testify to the identity of a suspect of a crime, we assume they cannot all be wrong," coauthor Derek Abbott, a physicist and electronic engineer at The University of Adelaide, Australia, told Phys.org. "Unanimity is often assumed to be reliable. However, it turns out that the probability of a large number of people all agreeing is small, so our confidence in unanimity is ill-fo

... (read more)
8gwern8y
See: * "Probing the Improbable: Methodological Challenges for Risks with Low Probabilities and High Stakes" * http://www.gwern.net/The%20Existential%20Risk%20of%20Mathematical%20Error * Jaynes on the Emperor of China fallacy * Schimmack's incredibility index
0gwern8y
Looks like the paper is now out: http://arxiv.org/pdf/1601.00900v1.pdf
0[anonymous]8y
Thanks Panorama and Gwern, incredibly interesting quote and links
6philh8y
This isn't "more evidence can be bad", but "seemingly-stronger evidence can be weaker". If you do the math right, more evidence will make you more likely to get the right answer. If more evidence lowers your conviction rate, then your conviction rate was too high. Briefly, I think what's going on is that a 'yes' presents N bits of evidence for 'guilty', and M bits of evidence for 'the process is biased', where M>N. The probability of bias is initially low, but lots of yeses make it shoot up. So you have four hypotheses (bias yes/no cross guilty yes/no), the two bias ones dominate, and their relative odds are the same as when you started.
2casebash8y
So, why not stab someone in front of everyone to ensure that they all rule you guilty?
2Slider8y
If you are confident that the method is noisy when it is operating, then a low spread in its output is an indication that it is not actually operating. A TV that shows a static image that flickers when you kick it is more likely receiving an actual feed than one that doesn't flicker when kicked. If you have multiple TVs that all flicker at the same time, it is likely that the cause was the weather rather than the broadcast.
0[anonymous]8y
Can you clarify what you're talking about without using the terms 'method', 'operating', and 'spread'?
3Slider8y
I have a device that displays three numbers when a button is pressed. If any two numbers are different, then one of the numbers is the exact room temperature, but there is no telling which one it is. If all the numbers are the same number, I don't have any reason to think the displayed number is the room temperature. In a way I have two info channels: "did the button press result in a temperature reading?" and "if there was a temperature reading, what does it tell me about the true temperature?". The first of these channels doesn't tell me anything about the temperature, but it tells me about something. Or I could have three temperature meters, one of which is accurate in cold temperatures, one in moderate temperatures, and one in hot temperatures. Suppose that cold and hot don't overlap. If all the temperature gauges show the same number, it would mean that both the cold and hot meters are in fact accurate in the same temperatures. I cannot be more certain about the temperature than about the operating principles of the measuring device, as the temperature reading is based on those principles. The temperature gauges showing different temperatures supports me being right about the operating principles. Them showing the same number is evidence that I am ignorant of how those numbers are formed.
0[anonymous]8y
Very well explained :)
-2IlyaShpitser8y
https://en.wikipedia.org/wiki/Central_limit_theorem
-4Slider8y
That is the case that "+ing" (summing) among many factors should give a Gaussian. If the distribution is too narrow to be Gaussian, it tells against the "+ing" theory. Someone who is adamant that it is just a very narrow Gaussian could never be proven conclusively wrong. However, it places constraints on how random the factors can be. At some point the claim of regularity will become implausible. If you have something that claims that throwing a fair die will always come up with the same number, there is an error lurking about.
2IlyaShpitser8y
The variance of the Gaussian you get isn't arbitrary; it is related to the variance of the variables being combined. So unless you expect people picking folks out of a lineup to be mostly noise-free, a very narrow Gaussian would imply a violation of the assumptions of the CLT. This Jewish law thing is sort of an informal legal version of how frequentist hypothesis testing works: assume everything is fine (the null) and see how surprised we are. If very surprised, reject the assumption that everything is fine.
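A minimal sketch of that frequentist check, assuming independent witnesses with a made-up per-witness accuracy:

```python
def p_unanimous(n_witnesses, p_correct):
    """Under the 'everything is fine' null -- a guilty suspect and witnesses who
    err independently -- the probability of a fully unanimous identification.
    (Unanimous wrong answers are ignored here; with many possible suspects
    they contribute negligibly.)"""
    return p_correct ** n_witnesses

# If this tail probability is tiny, we are 'very surprised' and reject the null.
for n in (3, 10, 30):
    print(n, "witnesses:", round(p_unanimous(n, p_correct=0.8), 6))
```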
-4Slider8y
Thus our knowledge that people are noisy means the mean is ill-defined rather than inaccurate.
0IlyaShpitser8y
Sorry, what?
-4Slider8y
Having unanimous testimony means that the Gaussian is too narrow to be the result of noisy testimonies. So either they gave absolutely accurate testimonies or they did something other than testify. Having them all agree raises more doubt about whether everyone was trying to deliver justice than about their ability to deliver it. If a jury answers a "guilty or not guilty" verdict with "banana", it sure isn't the result of a valid justice process. Too-certain results are effectively as good as "banana" verdicts. If our assumptions about the process hold, they should not happen.
1Viliam8y
I believe I read somewhere on LW about an investment company that had three directors, and when they decided whether to invest in some company, they voted, and invested only if 2 of 3 have agreed. The reasoning behind this policy was that if 3 of 3 agreed, then probably it was just a fad. Unfortunately, I am unable to find the link.
[-][anonymous]8y140

A side note.

My mother is a psychologist, father - an applied physicist, aunt 1 - a former morgue cytologist, aunt 2 - a practicing ultrasound specialist, father-in-law - a general practitioner, husband - a biochemist, my friends (c. 5) are biologists, and most of my immediate coworkers teach either chemistry or biology. (Occasionally I talk to other people, too.) I'm mentioning this to describe the scope of my experience with how they come to terms with the 'animal part' of the human being; when I started reading LW I felt immediately that people here come from different backgrounds. It felt implied that 'rationality' was a culture of either hacking humanity, or patching together the best practices accumulated in the past (or even just adopting the past), because clearly, we are held back by social constraints - if we weren't, we'd be able to fully realize our winning potential. (I'm strawmanning a bit, yes.) For a while I ignored the voice in the back of my mind that kept mumbling 'inferential distances between the dreams of these people and the underlying wetware are too great for you to estimate', or some such, but I don't want to anymore.

To put it simply, there is a marked diff... (read more)

6Viliam8y
I am not sure what exactly you wanted to say. All I got from reading it is: "human anatomy is complicated, non-biologists hugely underestimate this, modifying the anatomy of the human brain would be incredibly difficult". I am not sure what the relation is to the following part (which doesn't speak about modifying the anatomy of the human brain): Are you suggesting that for increasing rationality, using "best practices" will not be enough, and changes in the anatomy of the human brain will be required (and we underestimate how difficult that will be)? Or something else?
4Lumifer8y
I read Romashka as saying that the clean separation between the hardware and the software does not work for humans. Humans are wetware which is both.
4[anonymous]8y
That, and that those changes in the brain might lead to other changes not associated with intelligence at all. Like sleep requirements, haemorrhages or fluctuations in blood pressure in the skull, food cravings, etc. Things that belong to physiology and are freely discussed by a much narrower circle of people, in part because even among biologists many people don't like the organismal level of discussion, and doctors are too concerned with not doing harm to consider radical transformations. Currently, 'rationality' is seen (by me) as a mix of nurturing one's ability to act given the current limitations AND counting on vastly lessened limitations in the future, with some vague hopes of adapting the brain to perform better, but the basis of the hopes seems (to me) unestablished.
0Viliam8y
That's also more or less how I see it. I am not planning to perform a brain surgery on myself in the near future. :D
2Gunnar_Zarncke8y
The XKCD for it: DNA (or "Biology is largely solved"): https://xkcd.com/1605/
0ChristianKl8y
I see three lines of addressing this concern: 1) Anatomy was under strong evolutionary pressure over a long time. Human intelligence is a fairly recent phenomenon of the last 100,000 years. It's a mess that's not as well ordered as anatomy. 2) Individual humans deviate more from textbook anatomy than you would guess by reading the textbook. 3) The brain seems to be built out of basic modules that easily allow it to add an additional color if you edit the DNA in the eye via gene therapy. People with implanted magnets can feel magnetic fields. Its modules allow us to learn complex mental tasks like reading texts, which is very far from what we evolved to do.
3[anonymous]8y
Also, human intelligence has been evolving exactly as long as human anatomy; it simply leaped forward recently in ways we can notice. That doesn't mean it hasn't been under strong evolutionary pressure before. I would say that until humans learned to use tools, the pressure on an individual human had to be stronger.
1ChristianKl8y
I don't think that reflects reality. Our anatomy isn't as different from a chimpanzee's as our minds are. Most people hear voices in their head that say stuff to them. Chimpanzees don't have language to do something similar.
1[anonymous]8y
I'm not saying otherwise! I'm saying that the formulation makes little sense either way. Compare: 'there is little observed variation in anatomy between apes in the broad sense because the evolutionary pressure constraining anatomical changes is too great to allow much viable variation', 'there is little observed variation in anatomy ..., but not in intelligence, because further evolution of intelligence allows for greater success and so younger branches are more intelligent and better at survival', 'only change in anatomy drives change in intelligence, so apparently there was some great hack which translated small changes in anatomy into great changes in intelligence', 'chimpanzees never tell us about the voices they hear'...
0ChristianKl8y
There are millions of years invested in the task of how to move with legs. There are not millions of years invested in the task of how brains best deal with language.
0[anonymous]8y
What do you understand as evolution of the mind, then, and how is it related to that of organs?
0ChristianKl8y
I think adding language produced something like a quantum leap for the mind, and that there's no similar quantum leap for other organs like the human heart. The quantum leap means that other parts have to adapt and optimize for language now being a major factor. You could look at IQ. The mental difference between a human at IQ 70 and a human at IQ 130 is vast. Intelligence is also highly heritable. With a few hundred thousand years and a decent amount of evolutionary pressure toward stronger intelligence, you wouldn't have many low-IQ people anymore.
1[anonymous]8y
And yet textbook anatomy is my best guess about a body when I haven't seen it, and all deviations are describable compared to it. What I object to is the norm of treating phenomenology, such as the observations about magnets and eye color, as more-or-less solid background for predictions about the future. If we discuss, say, artificial new brain modules, that's fine by me as long as I keep in mind the potential problems with cranial pressure fluctuations, the need to establish interconnections with other neurons - in some very ordered fashion, building blood vessels to feed it, changes in glucose consumption, even the possibility of your children choosing to have completely different artificial modules than you, to the point that heritability becomes obsolete, etc. I am not enough of a specialist to talk about it. I have low priors on anybody here pointing me to The Literature were I to ask. I think seeing at least the bones and then trying to gauge the distance to what experimental interference one considers possible would be a good thing to happen.
[-][anonymous]8y130

Would anyone actually be interested if I prepared a post about the recent "correlation explanation" approach to latent-model learning, the "multivariate mutual information"/"total correlation" metric it's all based on, supervenience in analytical philosophy, and implications for cognitive science and AI, including FAI?

Because I promise I didn't write that last sentence by picking buzzwords out of a bag.
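For reference, total correlation is the sum of the marginal entropies minus the joint entropy; a minimal sketch of computing it for a small joint distribution (toy numbers, not from the proposed post):

```python
import numpy as np

def total_correlation(joint):
    """Total correlation C(X1..Xn) = sum_i H(Xi) - H(X1..Xn),
    for a joint probability table given as an n-dimensional array."""
    joint = np.asarray(joint, dtype=float)
    joint = joint / joint.sum()                      # normalize, just in case

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    marginal_entropies = sum(
        entropy(joint.sum(axis=tuple(j for j in range(joint.ndim) if j != i)).ravel())
        for i in range(joint.ndim)
    )
    return marginal_entropies - entropy(joint.ravel())

# Two perfectly correlated bits give C = 1 bit; two independent bits give C = 0.
correlated = np.array([[0.5, 0.0], [0.0, 0.5]])
independent = np.array([[0.25, 0.25], [0.25, 0.25]])
print(total_correlation(correlated), total_correlation(independent))
```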

6IlyaShpitser8y
I might be super mean about this!
0[anonymous]8y
Is "super mean" still a bad thing, or now a good thing?
5gjm8y
I will be very interested to read both your account of correlation explanation and Ilya's super-meanness about it.
4IlyaShpitser8y
In the words of Calvin's dad, it builds character.
0[anonymous]8y
Ah. You mean you'll act as Reviewer 2. Excellent.
0IlyaShpitser8y
There is a relevant quote from Faust by Mephistopheles.
0[anonymous]8y
That being, for those of us too gauche to have read Faust in the original?
1IlyaShpitser8y
Ein Teil von jener Kraft, Die stets das Böse will und stets das Gute schafft. Ich bin der Geist der stets verneint! ---------------------------------------- Part of that power which would Do evil constantly and constantly does good. I am the spirit of perpetual negation
0[anonymous]8y
Anyway, could you PM me your email address? I figure that for a start at being Reviewer 2, I might as well send you the last thing I wrote along these lines, and then start writing the one I've actually just promised.
0[anonymous]8y
I really don't think that Reviewer 2 has anything to do with Lucifer, or with the Catholic view of Lucifer/Satan as self-thwarting.
1gjm8y
I think you are overestimating how literally and seriously Ilya intended his reference to be taken. I don't think the intended parallel goes beyond this: the devil (allegedly) tries to do evil and ends up doing good in spite of that; a highly critical reviewer feels (to the reviewee) like he's doing evil but ends up doing good in spite of that.
0[anonymous]8y
Ah. But of course the reviewer thinks he's good, from his point of view within the system.
1gjm8y
Oh yes, indeed. (For me that's actually part of why the parallel Ilya is drawing is funny.)
6Manfred8y
I'd be interested! I hereby promise to read and comment, unless you've gone totally off the bland end.
3[anonymous]8y
Ok, then, it'll definitely happen Real Soon Now.
3Lumifer8y
Moderately. On the plus side it's forcing people to acknowledge the uncertainty involved in many numbers they use. On the minus side it's treating everything as a normal (Gaussian) distribution. That's a common default assumption, but it's not necessarily a good assumption. To start with an obvious problem, a lot of real-world values are bounded, but the normal distribution is not.
0iarwain18y
It's open source. Right now I only know very basic Python, but I'm taking a CS course this coming semester and I'm going for a minor in CS. How hard do you think it would be to add in other distributions, bounded values, etc.?
0Douglas_Knight8y
As a matter of programming it would be very easy. The difficult part is designing the user interface so that the availability of the options doesn't make the overall product worse.
0[anonymous]8y
The author is on the effective altruism forum; he said his next planned feature is more distributions, and that he specifically architected it to be easy to add new distributions.
0Lumifer8y
How hard it will be to add features depends on the way it's architected, but the real issue is complexity. After you add other distributions, bounds, etc., the user would have to figure out what the right choices are for his specific situation, and that's a set of non-trivial decisions. Besides, one of the reasons people like normal distributions is that they are nicely tractable. If you want to, say, add two, it's easy to do. But once you go to even slightly complicated things like truncated normals, a lot of operations do not have analytical solutions and you need to do stuff numerically, and that becomes... complex and slow.
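A minimal sketch of the numerical route, assuming SciPy's truncated normal and plain Monte Carlo rather than whatever the tool actually does:

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(0)

def truncated_normal_samples(mean, sd, low, high, n=100_000):
    """Draw samples from a normal(mean, sd) truncated to [low, high]."""
    a, b = (low - mean) / sd, (high - mean) / sd   # bounds in standard units
    return truncnorm.rvs(a, b, loc=mean, scale=sd, size=n, random_state=rng)

# Two bounded quantities, e.g. costs that cannot go negative.
x = truncated_normal_samples(mean=10, sd=5, low=0, high=np.inf)
y = truncated_normal_samples(mean=3, sd=2, low=0, high=np.inf)
total = x + y                                      # no closed form needed

print(np.percentile(total, [5, 50, 95]))           # summary of the sum
```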
0Douglas_Knight8y
It is already doing everything numerically.
2moridinamael8y
This is awesome. Awesome awesome awesome. I have been trying to code something like this for a long time but I've never got the hang of UI design.

Why does E. Yudkowsky voice such strong priors, e.g. w.r.t. the laws of physics (the many-worlds interpretation), when much weaker priors seem sufficient for most of his beliefs (e.g. weak computationalism/computational monism) and wouldn't make him so vulnerable? (By "vulnerable" I mean that his work often gets ripped apart as cultish pseudoscience.)

You seem to assume that MWI makes the Sequences more vulnerable; i.e. that there are people who feel okay with the rest of the Sequences, but MWI makes them dismiss it as pseudoscience.

I think there are other things that rub people the wrong way (that EY in general talks about some topics more than appropriate for his status, whether it's about science, philosophy, politics, or religion) and MWI is merely the most convenient point of attack (at least among those people who don't care about religion). Without MWI, something else would be "the most controversial topic which EY should not have added because it antagonizes people for no good reason", and people would speculate about the dark reasons that made EY write about that.

For context, I will quote the part that Yvain quoted from the Sequences:

Everyone should be aware that, even though I’m not going to discuss the issue at first, there is a sizable community of scientists who dispute the realist perspective on QM. Myself, I don’t think it’s worth figuring both ways; I’m a pure realist, for reasons that will become apparent. But if you read my introduction, you are getting my view. It is not only my view. It is probabl

... (read more)

Because he was building a tribe. (He's done now).


edit: This should actually worry people a lot more than it seems to.

3Lumifer8y
Why?
6IlyaShpitser8y
Consider that if stuff someone says resonates with you, that someone is optimizing for that.
4Lumifer8y
There are two quite different scenarios here. In scenario 1 that someone knows me beforehand and optimizes what he says to influence me. In scenario 2 that someone doesn't know who will respond, but is optimizing his message to attract specific kinds of people. The former scenario is a bit worrisome -- it's manipulation. But the latter one looks fairly benign to me -- how else would you attract people with a particular set of features? Of course the message is, in some sense, bait but unless it's poisoned that shouldn't be a big problem.
0Dagon8y
I don't know why scenario 2 should be any less worrisome. The distinction between "optimized for some perception/subset of you" and "optimized for someone like you" is completely meaningless.
0Lumifer8y
Because of degree of focus. It's like the distinction between a black-hat scanning the entire 'net for vulnerabilities and a black-hat scanning specifically your system for vulnerabilities. Are the two equally worrisome?
0Dagon8y
equally worrisome, conditional on me having the vulnerability the blackhat is trying to use. This is equivalent to the original warning being conditional on something resonating with you.
-1IlyaShpitser8y
MIRI survives in part via donations from people who bought the party line on stuff like MWI.
4ChristianKl8y
Are you saying that based on having looked at the data? I think we should have a census that has numbers about donations for MIRI and belief in MWI.
2Vaniver8y
Really, you would want MWI belief delta (to before they found LW) to measure "bought the party line."
1IlyaShpitser8y
I am not trying to emphasize MWI specifically, it's the whole set of tribal markers together.
4bogus8y
If there is a tribal marker, it's not MWI per se; it's choosing an interpretation of QM on grounds of explanatory parsimony. Eliezer clearly believed that MWI is the only interpretation of QM that qualifies on such grounds. However, such a belief is quite simply misguided; it ignores several other formulations, including e.g. relational quantum mechanics, the ensemble interpretation, the transactional interpretation, etc. that are also remarkable for their overall parsimony. Someone who advocated for one of these other approaches would be just as recognizable as a member of the rationalist 'tribe'.
0[anonymous]8y
* contested the strength of the MW claim. Explanatory parsimony doesn't differentiate a strong from a weak claim OP's original claim:
3Lumifer8y
A fair point. Maybe I'm committing the typical mind fallacy and underestimating the general gullibility of people. If someone offers you something, it's obvious to me that you should look for strings, consider the incentives of the giver, and ponder the consequences (including those concerning your mind). If you don't understand why something is given to you, it's probably wise to delay grabbing the cheese (or not touching it) until you understand. And still this all looks to me like a plain-vanilla example of bootstrapping an organization and creating a base of support, financial and otherwise, for it. Unless you think there were lies, misdirections, or particularly egregious sins of omission, that's just how the world operates.
2Richard_Kennaway8y
Also, anyone who succeeds in attracting people to an enterprise, be it by the most impeccable of means, will find the people they have assembled creating tribal markers anyway. The leader doesn't have to give out funny hats. People will invent their own.
1IlyaShpitser8y
People do a lot of things. Have biases, for example. There is quite a bit of our evolutionary legacy it would be wise to deemphasize. Not like there aren't successful examples of people doing good work in common and not being a tribe. ---------------------------------------- edit: I think what's going on is a lot of the rationalist tribe folks are on the spectrum and/or "nerdy", and thus have a more difficult time forming communities, and LW/etc was a great way for them to get something important in their life. They find it valuable and rightly so. They don't want to give it up. I am sympathetic to this, but I think it would be wise to separate the community aspects and rationality itself as a "serious business." Like, I am friends with lots of academics, but the academic part of our relationship has to be kept separate (I would rip into their papers in peer review, etc.) The guru/disciple dynamic I think is super unhealthy.
0[anonymous]8y
Because warning against dark side rationality with dark side rationality to find light side rationalists doesn't look good against the perennial c-word claims against LW...
1knb8y
I think LW is skewed toward believing in MWI because they've all read Yudkowsky. It really doesn't seem likely Yudkowsky just gleaned MWI was already popular and wrote about it to pander to the tribe. In any case I don't really see why MWI would be a salient point for group identity.
4IlyaShpitser8y
That's not what I am saying. People didn't write the Nicene Creed to pander to Christians. (Sorry about the affect side effects of that comparison, that wasn't my intention, just the first example that came to mind). MWI is perfect for group identity -- it's safely beyond falsification, and QM interpretations are a sufficiently obscure topic where folks typically haven't thought a lot about it. So you don't get a lot of noise in the marker. But I am not trying to make MWI into more than it is. I don't think MWI is a centrally important idea, it's mostly an illustration of what I think is going on (also with some other ideas).
0[anonymous]8y
Consequentialist ethic

My model of him has him having an attitude of "if I think that there's a reason to be highly confident of X, then I'm not going to hide what's true just for the sake of playing social games".

3ChristianKl8y
Given the way the internet works, bloggers who don't take strong stances don't get traffic. If Yudkowsky hadn't taken positions confidently, it's likely that he wouldn't have founded LW as we know it. Shying away from strong positions for the sake of not wanting to be vulnerable is no good strategy.
0username28y
I don't agree with this reasoning. Why not write clickbait then if the goal is to drive traffic?
3ChristianKl8y
I don't think the goal is to drive traffic. It's also to have an impact on the person who reads the article. If you want a deeper look at the strategy, look at Nassim Taleb, who is quite explicit about the principle in Antifragile. I don't think that Eliezer's public and private beliefs differ on the issues that RaelwayScot mentioned. A counterfactual world where Eliezer was less vocal about his beliefs wouldn't have ended up with LW as we know it.
2[anonymous]8y
It's a balancing act.
0hairyfigment8y
Actually, I can probably answer this without knowing exactly what you mean: the notion of improved Solomonoff Induction that gets him many-worlds seems like an important concept for his work with MIRI. I don't know where "his work often gets ripped apart" for that reason, but I suspect they'd object to the idea of improved/naturalized SI as well.
4IlyaShpitser8y
His work doesn't get "ripped apart" because he doesn't write or submit for peer review.
0[anonymous]8y
inductive bias
0hairyfigment8y
The Hell do you mean by "computational monism" if you think it could be a "weaker prior"?

So I think I've genuinely finished http://gwern.net/Mail%20delivery now. It should be an interesting read for LWers: it's a fully Bayesian decision-theoretic analysis of when it is optimal to check my mail for deliveries. I learned a tremendous amount working my way through it, from how to much better use JAGS to how to do Bayesian model comparison & averaging to loss functions and EVSI and EVPI for decision theory purposes to even dabbling in reinforcement learning with Thompson sampling/probability-matching.

I thought it was done earlier, but then I r... (read more)
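For anyone new to Thompson sampling/probability matching, a minimal toy sketch of the idea (illustrative arrival probabilities, not the model in the linked analysis): keep a Beta posterior per candidate check time and act on a single posterior draw each day.

```python
import random

# Toy Thompson sampling: pick the hour at which to check the mail.
# Hypothetical arrival probabilities by check time (unknown to the agent).
true_p = {"10:00": 0.2, "11:00": 0.6, "12:00": 0.9}

# Beta(1, 1) prior for each candidate check time.
alpha = {t: 1 for t in true_p}
beta = {t: 1 for t in true_p}

for day in range(1000):
    # Sample one plausible success probability per arm from its posterior...
    draws = {t: random.betavariate(alpha[t], beta[t]) for t in true_p}
    # ...and act greedily on that single draw (probability matching).
    t = max(draws, key=draws.get)
    mail_there = random.random() < true_p[t]        # simulate checking the mail
    alpha[t] += mail_there                          # update the chosen arm's posterior
    beta[t] += 1 - mail_there

print({t: alpha[t] / (alpha[t] + beta[t]) for t in true_p})
```

A real decision analysis would also put a cost on checking late or repeatedly; this only shows the posterior-sampling loop.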

2gwern8y
Related to this, I am trying to get a subreddit going for statistical decision theory links and papers to discuss: https://www.reddit.com/r/DecisionTheory/ Right now it's just me dumping in decision-theory related material like cost-benefit analyses, textbooks, relevant blog posts, etc, but hopefully other people will join in. We have flair and a sidebar now! If anyone wants to be a mod, just ask. (Workload should be negligibly small, this is more so the subreddit doesn't get locked by absence.)
0gwern8y
If anyone with graphics skills would like to help me make a header for the subreddit, I have some ideas and suggested images in https://plus.google.com/103530621949492999968/posts/ZfEtb54aN4Q for visualizing the steps in decision analysis.

Recently my working definition of 'political opinion' became "which parts of reality did the person choose to ignore". At least this is my usual experience when debating with people who have strong political opinions. There usually exists a standard argument that an opposing side would use against them, and the typical responses to this argument are "that's not the most important thing; now let's talk about a completely different topic where my side has the argumentative advantage". (LW calls it an 'ugh field'.) Sometimes the argument... (read more)

I'd say that his critics are annoyed that he's ignoring their motte [ETA: Well, not ignoring, but not treating as the bailey], from which they're basing their assault on Income Inequality. "Come over here and fight, you coward!"

There's not much concession in agreeing that fraud is bad. Look: Fraud is bad. And income inequality is not. Income inequality that promotes or is caused by fraud is bad, but it's bad because fraud is bad, not because income inequality is bad.

It's possible to be ignorant of the portion of the intellectual landscape that includes that motte; to be unaware of fraud. It's possible to be ignorant of the portion of the intellectual landscape that doesn't include the bailey; to be unaware of wealth inequality that isn't hopelessly entangled in fraud. But once you realize that the landscape includes both, you have two conversations you can have: One about income inequality, and one about fraud.

Which is to say, you can address the motte, or you can address the bailey. You don't get to continue to pretend they're the same thing in full intellectual honesty.

7TheAncientGeek8y
"More specifically, after reading the essay Economic Inequality by Paul Graham, I would say that the really simplified version is that there are essentially two different ways how people get rich. (1) By creating value; and today individuals are able to create incredible amounts of value thanks to technology. (2) By taking value from other people, using force or fraud in a wider meaning of the word; sometimes perfectly legally; often using the wealth they already have as a weapon." Which one is inheritance?
4tut8y
I think it would be counted as whichever way was used by whoever you got the inheritance from.
4Viliam8y
Inheritance is how some people randomly get the weapon they can choose to use for (2). I don't have a problem with inheritance per se; I see it as a subset of donation, and I believe people should be free to donate their money. It's just that in a bad system, it will be a multiplier of the badness. If you have a system where evil people can get their money by force/fraud, and where they can use the money to do illegal stuff or to buy lobbists and change laws in ways that allow them to use more force/fraud legally... in such system inheritance gives you people who benefit from crimes of their ancestors, who in their childhood get extreme power over other people without having to do anything productive ever, etc.
0TheAncientGeek8y
Can't an inheritance be used as seed money for some wonderful world-enhancing entrepreneurship?
2ChristianKl8y
Bill Gates argues that it's bad to leave children so much money that they don't have to work: https://www.ted.com/talks/bill_and_melinda_gates_why_giving_away_our_wealth_has_been_the_most_satisfying_thing_we_ve_done I think the world is a better place for Bill Gates thinking that way.
2polymathwannabe8y
I never thought I'd find myself saying this: I don't want to be Bill Gates's kid.
3PipFoweraker8y
Does that not-want take into consideration your changed capacity to influence him if you became his child?
0polymathwannabe8y
How would I have any more influence than his actual child does?
1PipFoweraker8y
I would posit that his actual children have a comfortably non-zero amount of influence over him, and that the rest of us have a non-zero-but-much-closer-to-zero amount of influence over him.
1Viliam8y
Yeah, the idea of "I could have been a part of the legendary 1%, but my parents decided to throw me back among the muggles" could make one rather angry.
1gjm8y
I bet Bill Gates's children will still be comfortably in the 1%. (I found one source saying he plans to leave them $10M each. It didn't look like a super-reliable source.)
0Lumifer8y
/snort In such a case I would probably think that you failed at your child's upbringing, much earlier than deciding to dispossess her.
3Viliam8y
Imagine that your parents were uneducated and homeless as teenagers. They lived many years on the streets, starving and abused. But they never gave up hope, and never stopped trying, so when they were 30, they already had an equivalent of high-school education, were able to get a job, and actually were able to buy a small house. Then you were born. You had a chance to start your life in much better circumstances than your parents had. You could have attended a normal school. You could have a roof above your head every night. You could have the life they only dreamed about when they were your age. But your parents thought like this: "A roof above one's head, and a warm meal every day, that would spoil a child. We didn't have that when we were kids -- and look how far we got! All the misery only made our spirits stronger. What we desire for our children is to have the same opportunity for spiritual growth in life that we had." So they donated all the property to charity, and kicked you out of the house. You can't afford a school anymore. You are lucky to find some work that allows you to eat. Hey, why the sad face? If such a life was good enough for them, how dare you complain that it is not good enough for you? Clearly they failed somewhere at your upbringing, if you believe that you deserve something better than they had. (Explanation: To avoid the status quo bias of being in my social class -- to avoid the feeling that the classes below me have it so bad that it breaks them, but the classes above me have it so good that it weakens their spirits; and therefore my social class, or perhaps the one only slightly above me, just coincidentally happens to be the optimal place in the society -- I sometimes take stories about people, and try to translate them higher or lower in the social ladder and look if they still feel the same.)
4Lumifer8y
It's not a social class thing. It's a human motivation thing. Humans are motivated by needs and if you start with a few $B in the bank, many of your needs are met by lazily waving your hand. That's not a good thing as the rich say they have discovered empirically. That, of course, is not a new idea. A quote attributed to Genghis Khan says and there is an interesting post discussing the historical context. The consequences, by the way, are very real -- when you grow soft, the next batch of tough, lean, and hungry outsiders comes in and kills you. The no-fortune-for-you rich do not aim for their children to suffer (because it ennobles the spirit or any other such crap). They want their children to go out into the world and make their own mark on the world. And I bet that these children still have a LOT of advantages. For one thing, they have a safety net -- I'm pretty sure the parents will pay for medevac from a trek in Nepal, if need be. For another, they have an excellent network and a sympathetic investor close by.
0Viliam8y
Same difference. So there is an optimal amount of wealth to inherit to maximize human motivation, and it happens to be exactly the same amount that Gateses are going to give their children. (The optimal amount depends on the state of global economy or technology, so it was a different amount for Genghis Khan than it is now.) I'd like to see the data supporting this hypothesis. Especially the kind of data that allows you to estimate the optimal amount as a specific number (not merely that the optimal amount is less than infinity). Which cannot be done if you have too much money. But will be much easier to do if you have less money. And you know precisely that e.g. 10^7 USD is okay, but 10^8 USD is too much. Imagine that your goal would be to have your children "make their own mark on the world", and that you really care about that goal (as opposed to just having it as a convenient rationalization for some other goals). As a rational person, would you simply reduce their inheritance to sane levels and more or less stop there? If you would spend five minutes thinking about the problem, couldn't you find a better solution?
2gjm8y
You say this as if it's a silly thing that no one could have good reason to believe. I've no idea whether it's actually true but it's not silly. Here, let me put it differently. "It just happens that the amount some outstandingly smart people with a known interest in world-optimization and effectively unlimited resources have decided to leave their children is the optimal amount." I mean, sure, they may well have got it wrong. But they have obvious incentives to get it right, and should be at least as capable of doing so as anyone else. I doubt they would claim to know precisely. But they have to choose some amount, no? You can't leave your children a probability distribution over inheritances. (You could leave them a randomly chosen inheritance, but that's not the same.) It seems like whatever the Gateses were allegedly planning, you could say "And you know precisely that doing X is okay, but doing similar-other-thing-Y is not" and that would have just the same rhetorical force. I don't know. Could you? Have you? If so, why not argue "If the Gateses really had the goals they say, they would do X instead" rather than "If the Gateses really had the goals they say, they would do something else instead; I'm not saying what, but I bet it would be better than what they are doing."? Again, I'm not claiming that what the Gateses are allegedly planning is anything like optimal; for that matter, I have no good evidence that they are actually planning what they're allegedly planning. But the objections you're raising seem really (and uncharacteristically) weak. But I'm not sure I've grasped what your actual position is. Would you care to make it more explicit?
7Viliam8y
My actual position is that: 1) Gateses had some true reason for donating most of the money -- probably a combination of "want to do a lot of good", "want to become famous", etc. -- and they decided that these goals are more important for them than maximizing the inheritance of their children. I am not criticizing them for making that decision; I think it is a correct one, or at least in a good direction. 2) But the explanation that they want their children to "make their own mark on the world" is most likely a rationalization of the previous paragraph. It's like, where the true version is "saving thousand human lives is more important for me than making my child twice as rich", this explanation is trying to add "...and coincidentally, not making my child twice as rich is actually better for my child, so actually I am optimizing for my child", which in my opinion is clearly false, but obviously socially preferable. 3) What specifically would one do to literally optimize for the chance that their children would "make their own mark on the world"? I am not going into details here, because that would depend on specific talents and interests of the child, but I believe it is a combination of giving them more resources; spending more resources on their teachers or coaches; spending my own time helping them with their own projects. 4) I can imagine being the child, and selfishly resenting that my parents did not optimize for me. 5) However I think that the child still has more money than necessary to have a great life. My whole point is that (2) is a rationalization.
0gjm8y
OK, I understand. Thanks.
0Richard_Kennaway8y
Does this work? I don't know; I have no children.
0Lumifer8y
Who are you arguing against? I saw no one express the position that you're attacking. Huh? Who stopped there? Do you have any reason to believe that the Gates handed their kids a "small" check and told them to get lost?
2Viliam8y
Sure, it can be used for whatever purpose. So now we have an empirical question of what the average usage of inheritance is in real life. Or even better, the average usage of inheritance as a function of how much was inherited, because patterns at different parts of the scale may be dramatically different. I would like to read a data-based answer to this question. (My assumption is that the second generation usually tries to copy what their parents did in the later period of life, only less skillfully because of regression to the mean; and the third generation usually just wastes the money. If this is true, then it's the second generation, especially if they are "criminals, sons of criminals", that I worry about most.)
1TheAncientGeek8y
I don't think it's a question of more research being needed; I think it's an issue of the original two categories being too few and too sharply delineated.
0gjm8y
Yeah, should have been (1) by creating value, (2) by taking value from others by force or fraud, (3) by being given value willingly by benevolently disposed others. Of these #3 is rather rare except for inheritance (broadly understood; parents may give their children a lot of money while still alive). Make it "essentially two different ways how people or families get rich", though, and the remaining cases of #3 are probably rare enough to ignore. Here's another case that isn't so neatly fitted into Viliam's dichotomy. Suppose your culture values some scarce substance such as gold, purely because of its scarcity, and you discover a new source of that substance or a new way of making it. You haven't created much value because the stuff was never particularly valued for its actual use, but it's not like you stole it either. What actually happened: everyone else's gold just got slightly less valuable because gold became less scarce, and what they lost you gained. But for some reason gold mining isn't usually considered a variety of large-scale theft. Of course gold has some value. You can make pretty jewelry out of it, and really good electrical contacts, and a few other things. But most of the benefit you get if you find a tonne of gold comes from its scarcity-value rather than from its practical utility. Printing money has essentially the same effect, but isn't generally used directly to make individuals rich.
1TheAncientGeek8y
"Make it "essentially two different ways how people or families get rich", though, and the remaining cases of #3 are probably rare enough to ignore." I think inheritance is an important case, because lack of inherited wealth, by default, is what leads to some people being excluded from becoming self-made millionaires like Mr Graham; and because inheritance isn't inevitable, it's something that can be adjusted independently of other variables. "Here's another case that isn't so neatly fitted into Viliam's dichotomy. Suppose your culture values some scarce substance such as gold, purely because of its scarcity, and you discover a new source of that substance or a new way of making it. You haven't created much value because the stuff was never particularly valued for its actual use, but it's not like you stole it either. What actually happened: everyone else's gold just got slightly less valuable because gold became less scarce, and what they lost you gained. But for some reason gold mining isn't usually considered a variety of large-scale theft." How natural resources are dealt with is an important point in political philosophy. If you think people are entitled to keep whatever they find, you end up with a conservative philosophy, if you think they should be shared or held in common you end up with a leftish one.
1gjm8y
In case it wasn't clear: So do I. But if we think of families rather than individuals as the holders of wealth, Viliam's two ways of getting rich cover the available options fairly well; that's all I was saying.
6Lumifer8y
Well, but he's writing an essay and has a position to put forward. Not being blind to counter-arguments does not require you to never come to a conclusion. At a crude level, the pro arguments show the benefits and the contra arguments show the costs, but if you do the cost-benefit analysis and decide that it's worth it, you can express a definite position without necessarily ignoring chunks of reality.
0The_Lion8y
So why are you focusing your complaining on Paul Graham's essay rather than on the essays complaining about "economic inequality" without even bothering to make the distinction? What does that say about your "ugh fields"? In fact a remarkable number of the people pursuing strategy (1) are the same people railing against economic inequality. One would almost suspect they're intentionally conflating (1) and (2) to provide a smokescreen for their actions. Also, since strategy (1) requires more social manipulation skills than strategy (2), the people pursuing strategy (1) can usually arrange for anti-inequality policies to mostly target the people in group (2).
1bogus8y
We hold someone like Paul Graham to higher standards than some random nobody trying to score political points. Isn't Graham one of the leading voices in the rationalist/SV-tech/hacker tribe?
-2The_Lion8y
Ok, while we're nitpicking Paul Graham's essay, I should mention the part of it that struck me as least rational when I read it. Namely, the sloppy way he talks about "poverty", conflating relative and absolute poverty. After all, thanks to advances in technology what's considered poverty today was considered unobtainable luxury several centuries ago.
1bogus8y
Advances in technology have certainly improved living standards across the board, but they have not done much for the next layer of human needs - things like social inclusion or safety against adverse events. Indeed, we can assume that, in reasonably developed societies (as opposed to dysfunctional places like North Korea or several African countries) lack of such things is probably the major cause of absolute 'poverty', since primary needs like food or shelter are easily satisfied. It's interesting to speculate about focused interventions that could successfully improve social inclusion; fostering "organic" social institutions (such as quasi-religious groups with a focus on socially-binding rituals and public services) would seem to be an obvious candidate.
5Richard_Kennaway8y
You have redefined "absolute poverty" to mean "absolute poverty on a scale revised to ignore the historic improvements", i.e. relative poverty. The internet has done a great deal for that. Which ones? Disease? Vast progress. Earthquakes and hurricanes? We make better buildings, better safety systems. Of course, we can also build taller buildings, and cities on flood plains, so the technology acts on both sides there. Institutions that require focused interventions to foster them are the opposite of "organic". Besides, "quasi-religious groups with a focus on socially-binding rituals and public services" already exist. Actual religions, for example, and groups such as Freemasons.
0bogus8y
I'm not 'redefining' the scale absolute poverty is measured on, or ignoring the historic improvements in it. These improvements are quite real. They're also less impressive than we might assume by just looking at material living standards, because social dynamics are relevant as well.
0bogus8y
Sure, but does rent-seeking really explain the increase in inequality since, say, the 1950s or so, which is what most folks tend to be worried about and what's discussed in Paul Graham's essay? I don't think it does, except as a minor factor (that is, it could certainly explain increased wealth among congress-critters and other members of the 'Cathedral'); the main factor was technical change favoring skilled people and sometimes conferring exceptional amounts of wealth to random "superstars".
4Viliam8y
I don't know. Seems to me possible that people like Paul Graham (or Eliezer Yudkowsky) may overestimate the impact of technical change on wealth distribution because of the selection bias -- they associate with people who mostly make wealth using the "fair" methods. If instead they would be spending most of their time among African warlords, or Russian oligarchs, or whatever is their more civilized equivalent in USA, maybe they would have very different models of how wealth works. The technological progress explains why the pie is growing, not how the larger pie is divided. There are probably more people who got rich selling homeopathics, than who got rich founding startups. Yet in our social sphere it is a custom to pretend that the former option does not exist, and focus on the latter.
0ChristianKl8y
If you look at the Forbes list there aren't many African warlords on it. Which people do you think became billionaires mainly by selling homeopathics? Homeopathy is a competitive market with no protection from competitors that would allow charging high sums of money in the way startups like Google do by producing a Thielean monopoly.
2gjm8y
It seems possible that African warlords' wealth is greatly underestimated by comparing notional wealth in dollars. E.g., if you want to own a lot of land and houses, that's much cheaper (in dollars) in most of Africa than in most of the US. If you want a lot of people doing your bidding, that's much cheaper (in dollars) in most of Africa than in most of the US.
0ChristianKl8y
On the other hand the African warlord has to invest resources into avoiding getting murdered.
0gjm8y
Yup. It's certainly not clear-cut, and there are after all reasons why the more expensive parts of the world are more expensive.
0Viliam8y
Money has more or less logarithmic utility. So selling homeopathics could still bring higher average utility (although less average money) than startups. For every successful Google there are thousands of homeopaths.
0ChristianKl8y
That depends on your goals. If you want to create social or political impact with money it's not true. Large fortunes get largely made in tech, resources and finance.
0tut8y
I think the generalized concept is 'politicians'. And yeah, that sounds likely. But I would say that the problem is that the ones who make the rules and the ones who explain to everyone else what's what all live in an environment where earning something honestly is weird. That there are some who are not in such a bubble is not the problem.
4Viliam8y
Oligarchs are the level above politicians. You can think about them as the true employers of most politicians. (If I can make an analogy, for a politician the voters are merely a problem to be solved; the oligarch is the person who gave them the job to solve the problem.) Imagine someone who has incredible wealth, owns a lot of press in the country, and is friendly with many important people in police, secret service, et cetera. The person who, if they like you as a wannabe politician, can give you a lot of money and media power to boost your career, in return for some important decisions when you get into the government.
2Lumifer8y
So, can you tell us who employs Frau Merkel? M. Hollande? Mr. Cameron? Mr. Obama? Please be specific.
3Viliam8y
This requires a good investigative journalist with a good understanding of economics. Which I am not. I could tell you some names for Slovakia (J&T, Penta, Brhel, Výboh), which you would probably have no way to verify. (Note that the last one doesn't even have a Wikipedia page. These people in general prefer privacy, they own most of the media, they have a lot of money to sue you if you write something negative about them, and they also own the judges, which means they will win each lawsuit.) I am not even sure if countries other than ex-communist ones use this specific model. (This doesn't mean I believe that the West is completely fair. More likely the methods of "power above politicians" in the West are more sophisticated, while in the East sophistication was never necessary if you had the power -- you usually don't have to go far beyond "the former secret service bosses" and check whether any of them owns a huge economic empire.)
0Lumifer8y
Ah, well, that's a rather important detail. I'm not saying that your model is entirely wrong -- just that it's not universally applicable. By the way, another place where you are likely to find it is in Central and South America. However, I think it's way too crude to be applied to the West. The interaction between money and power is more... nuanced there, and recently state power seems to be ascending.
2Lumifer8y
Except that, well, you know, in Soviet Russia the politician is above the oligarchs :-D
0ChristianKl8y
He does something. He uses niceness as a filter to keep people who aren't nice out of YCombinator. YCombinator has standardized term sheets to prevent bad VCs from ripping off companies by adding opaque terms. I have read that YCombinator works as a sort of union for startup founders, whereby a VC can't abuse one YCombinator company because word would get around within YCombinator and the VC would suffer negative consequences from abusing a founder.
1Viliam8y
Yes. But for a person who is focused on the problem of "people taking a lot of value from others by force and fraud" this is like a drop in the ocean. Okay, PG has created a bubble of niceness around himself, that's very nice of him. What about the remaining 99.9999% of the world? Is this solution scalable? EDIT: Found a nice quote in Mean People Fail:
5ChristianKl8y
If you take a single YCombinator company like AirBnB, I think it affects a lot more than 0.0001% of the world. The solution of standardized term sheets seems to scale pretty well. The politics of standardized term sheets aren't sexy but they matter very strongly. Power in our society is heavily contractualized. As for the norms of YCombinator being scalable, YCombinator itself can scale to be bigger. YCombinator is also a role model for other accelerators, due to the fact that it's the only accelerator that has produced unicorns. Apart from that, the idea that Paul Graham fails because he doesn't single-handedly turn the world towards the good is ridiculous. You criticize him for not signaling that he cares by talking enough about the issue. I think you get the idea of how effective political action looks very wrong. It's not about publicly complaining about evil people and proposing ways to fight evil people. It's about building effective communities with good norms. Think globally but act locally. Make sure that your environment is doing well so that it can grow and become stronger.
4gjm8y
So maybe it's only 99.99% rather than 99.9999%. I don't think this really affects Viliam's point, which is that if a substantial fraction of the world's economic inequality arises from cause 2 (taking by force or fraud) more than from cause 1 (creating value), and Paul Graham writes and acts as if it's almost all cause 1, then maybe Paul Graham is doing the same thing he complains about other people doing and ignoring inconvenient bits of reality. Note that PG could well be doing that even if when working on cause 1 he takes some measures to reduce the impact of cause 2 on it. It's not like PG completely denies that some people get rich by exploiting or robbing others; Viliam's suggesting only that he may be closing his eyes to how much of the world's economic inequality arises that way. If you have a world full of evil then don't you want to do both of (1) fight the evil and (2) build enclaves of not-evil?
6Lumifer8y
That may have been implied, but wasn't stated. Is it actually Viliam's point? I am not sure how true it is -- consider e.g. Soviet Russia. A lot of value was taken by force, but economic inequality was very low. Or consider the massive growth of wealth in China over the last 20 years. Where did this wealth come from -- did the Chinese create it or did they steal it from someone? This is a tricky subject because Marxist-style analysis would claim that capital owners are fleecing the workers who actually create value and so pretty much all wealth resulting from investment is "stolen". If we start to discuss this seriously, we'll need to begin with the basics -- who has the original rights to created value and how are they established?
5Viliam8y
I believe this, at least in the long run; i.e. that even if once in a while some genius creates a lot of wealth and succeeds in capturing a significant amount of it, sooner or later most of that money will pass into the hands of people who are experts at taking value from others. No Marxism here, merely an assumption that people who specialize in X will become good at X, especially when X can be simply measured. Here X is "taking value from others". Nope, that was merely the official propaganda. In fact, high-level Communists were rich. Not only did they have much more money, but perhaps more importantly, they were allowed to use "common property" that the average muggle wasn't allowed to touch. For example, there would be a large villa that nominally belonged to the state, but in fact someone specific lived there. Or there would be a service nominally provided to anyone (chosen by an unspecified algorithm), but in fact only high-level Communists had that service available and average muggles didn't. High-level Communists were also in a much better position to steal things or blackmail people. How is this wealth distributed among specific Chinese people? It can be both true that "China" created the wealth, and that the specific "Chinese" who own it mostly stole it (from other Chinese). My argument is completely unrelated to this. For me the worrying part about rich people is that they can use their wealth to (1) commit crimes more safely, and even (2) change laws so that the things they wanted to do are no longer crimes, while the things that other people wanted to do suddenly become crimes.
0Lumifer8y
I disagree. As I mentioned, they did live better (more comfortably, higher consumption) than the peons, but not to the degree that I would call "rich". I don't believe that critics of communist regimes, both internal and external, called the party bosses "rich" either. For comparison, consider, say, corrupt South/Central American dictatorships. Things have changed, of course. Putin is very rich. You are worried about power, not wealth. It's true that wealth can be converted to power -- sometimes, to some degree, at some conversion rate. But if you actually want power, the straightforward way is attempt to acquire more power directly. There is also the inverse worry: if no individuals have power, who does? Is it good for individuals to have no power, to be cogs/slaves/sheep?
3gjm8y
I'll let Viliam answer that one (while remarking that the bit you quoted certainly isn't what I claimed V's point to be, since you chopped it off after the antecedent). That's not a counterexample; what you want is a case where economic inequality was high without a lot of value being taken by force. Mostly a matter of real growth through technological and commercial advancement, I've always assumed. (Much of it through trade with richer countries -- that comes into category 1 in so far as the trade was genuinely beneficial to both sides.) But I'm far from an expert on China. It seems like one could say that about a very wide variety of issues, and that it's more likely to prevent discussion than to raise its quality in general. As for the actual question with which you close: I am not convinced that moral analysis in terms of rights is ever the right place to begin.
0Lumifer8y
I am not so much asking for moral analysis as for precise definitions for "using force or fraud in a wider meaning of the word; sometimes perfectly legally; often using the wealth they already have as a weapon". That seems like a very malleable part which can be bent into any shape desired.
0gjm8y
Well, that would be for Viliam to clarify rather than for me, should he so choose. It doesn't seem excessively malleable to me, for what it's worth.
1IlyaShpitser8y
I am contesting this.
0Lumifer8y
The first part, or the second, or both?
0IlyaShpitser8y
Second.
0Lumifer8y
To get a bit more concrete I'm talking about the Soviet Russia of the pre-perestroika era, basically Brezhnev times. Do you have something specific in mind? Of course party bosses lived better than village peons, but I don't think that the economic inequality was high. Money wasn't the preferred currency in the USSR -- it was power (and access).
4ChristianKl8y
If a single person solves 0.01% of the world's injustice, that's a huge success. You only need 10,000 people like that to solve all injustice. Startups funded by YC fight powerful enemies day in and day out by disrupting industries. If Paul succeeds in keeping the YC companies nice and successful, he shifts the global balance towards the good. There's no glory in fighting for the sake of fighting. As YC grows it might pick a few fights. You could call supporting DemocracyOS a fight against the established political system, but it's also simply building systems that work better than the established political system. You change things for the better by providing powerful alternatives to the status quo.
0gjm8y
It looks to me as if you just switched from one 99.99% to an almost completely unrelated 99.99%. I see no reason to think that Paul Graham or AirBnB has solved 0.01% of the world's injustice. Even if they had, finding 10k Paul Grahams or 10k AirBnBs is not at all an easy problem. You don't get points for fighting powerful enemies, you get points for doing actual good. No doubt some YC companies are in fact improving the world; good for them; but what does that have to do with the question actually under discussion? Viliam never said that YC is useless or that PG is a bad person. He said only that PG is focusing on one (important) part of reality -- the part where some people add value to the world and get rich in the process -- and may be neglecting another part. Of course. Nor much utility. So the question is: if there's a lot of injustice in the world, is it effective to point it out and try to reduce it? Maybe it is, maybe not, but I don't see that you can just deflect the question by saying "effective political action is a matter of building effective communities with good norms". You're just restating your thesis that in the face of evil one should construct good rather than fighting evil. But sometimes you change things for the better just by saying that the status quo isn't good enough and trying to get it knocked down, or by agitating for other people who are better placed than you are to provide powerful alternatives to do so. Rosa Parks didn't start her own non-racist bus service. She helped to create a climate in which the existing bus service providers couldn't get away with telling black people where to sit.
4ChristianKl8y
Rosa Parks operated as the secretary of the Montgomery chapter of the NAACP. The NAACP was founded in 1909 and slowly built its power base until it was strong enough to allow Rosa Parks to pull off the move in 1955. I think cases like Egypt are an example of how things get messed up when trying to fight the evil status quo without having a good replacement. In my own country I think the Pirate Party got too much power too soon and self-destructed as a result. It failed to build a good foundation. In modern politics people are largely too impatient to build power bases from which to create sustainable change for the better. The direction of our core politics at the moment is largely set by a bunch of foundations who don't try to win short-term fights but act with long time horizons.
0gjm8y
Sure. So she helped to build a political movement -- centred not around creating new non-racist businesses and communities to supplant the old racist ones, but around exposing and fighting racism in the existing businesses and communities. In terms of your dichotomy the NAACP was firmly on the side of publicly complaining about evil people and proposing ways to fight their evil. That may very well be a serious problem. But it's an issue almost perfectly orthogonal to the "fight the evil or build better new communities?" one. (This whole discussion seems to be based, in any case, on a misunderstanding of Viliam's complaint, which is not that Paul Graham is doing the wrong things with his life but that some things he's said amount to trivializing something that shouldn't be trivialized. It's entirely possible for someone to say wrong things while doing right ones, and objecting to that is not the same thing as complaining that he's "not signalling that he cares".)
-19The_Lion8y

While browsing the Intelligence Squared upcoming debates, I noticed two things that may be of interest to LW readers.

The first is a debate titled "Lifespans are long enough", with Aubrey De Grey and Brian Kennedy of the Buck Institute for Research on Aging arguing against Paul Root Wolpe from the Emory Centre for Ethics and another panelist TBA. The debate is taking place in early February.

The second, and of potentially more interest to the LW community, is taking place on March 9th and is titled "Artificial Intelligence: The risks outweigh... (read more)

6Vaniver8y
Brian Kennedy. Note that he's on the "Against" side with Aubrey, as makes sense given the Buck Institute's goal to "extend help towards the problems of the aged."
0PipFoweraker8y
Thanks! Y'know, I actually spotted the doubling up of the pronoun, checked it, thought "Huh, random egotism, naming a centre after yourself" and went ahead and clicked 'Submit'. Cheers, random brainfart! Edited OP for accuracy.
0Vaniver8y
I mean, Buck did name the centre after herself / her husband, so it's not that far off. :P

PSA: I had a hard drive die on me. Recovered all my data with about 25 hours of work all up for two people working together.

Looking back on it, I doubt many things could have convinced me to improve my backup systems; short of working in the cloud, my best possible backups would probably still have lost the last two weeks of work at least.

I am taking suggestions for best practice; but also a shout out to backups, and given it's now a new year, you might want to back up everything before 2016 right now. Then work on a solid backup system.

(Either that or al... (read more)

4Baughn8y
Use a backup system that automatically backs up your data, and then nags at you if the backup fails. Test to make sure that it works. For people who don't want / can't run their own, I've found that Crashplan is a decent one. It's free, if you only back up to other computers you own (or other people's computers); in my case I've got one server in Norway and one in Ireland. There have, however, been some doubts about Crashplan's correctness in the past. There are also about half a dozen other good ones.
1Lumifer8y
Links? I use Crashplan and would be interested in learning about its bugs.
0Baughn8y
Google for 'crashplan data loss', and you'll find a few anecdotes. The plural of which isn't "data", but it's enough to ensure that I wouldn't use it for my own important data if I wasn't running two backup servers of my own for it. Even then, I'm also replicating with Unison to a ZFS filesystem that has auto-snapshots enabled. In fact, my Crashplan backups are on the same ZFS setup (two machines, two different countries), so I should be covered against corruption there as well. Suffice to say, I've been burnt in the past. That seems to be the only way that anyone ever starts spending this much (that is, 'sufficient') effort on backups. E.g. http://jeffreydonenfeld.com/blog/2011/12/crashplan-online-backup-lost-my-entire-backup-archive/ ---------------------------------------- All of that said? I'm paranoid. I wouldn't trust a single backup service, even if it had never had any problems; I'd be wondering what they were covering up, or if they were so small, they'd likely go away. Crashplan is probably fine. Probably.
0Lumifer8y
I'm using Crashplan as the offsite backup, I have another backup in-house. The few anecdotes seem to be from Crashplan's early days. But yeah, maybe I should do a complete dump to an external hard drive once in a while and just keep it offline somewhere...
2iceman8y
Use RAID-Z on ZFS. RAID is not a backup solution, but a proper double-parity RAIDZ2 configuration will protect you against common hard drive failure scenarios. Put all your files on ZFS. I use a dedicated FreeNAS file server for my home storage. Once everything you have is on ZFS, turn on snapshotting. I have my NAS configured to take a snapshot every hour during the day (set to expire in a week), and one snapshot on Monday which lasts 18 months. The short-lived snapshots let me quickly recover from brain snafus like overwriting a file. Long-lived snapshotting is amazing. Once you have filesystem snapshots, incremental backups become trivial. I have two portable hard drives, one onsite and one offsite. I plug in the hard drive, issue one command, and a few minutes later I've copied the incremental snapshot to my offline drive. My backup hard drives become append-only logs of my state. ZFS also lets you configure a dataset so that it stores two copies of your data, so I have that turned on just to protect against the remote chance of random bitflips on the drive. I do this monthly, and it only burns about 10 minutes a month. However, this isn't automated. If you're willing to trust the cloud, you could improve this and make it entirely automated with something like rsync.net's ZFS snapshot support. I think other cloud providers also offer snapshotting now, too.
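Roughly what that "one command" incremental step looks like, sketched here as a small Python wrapper around the standard `zfs snapshot` / `zfs send -i` / `zfs receive` commands (the dataset, pool, and snapshot names are made-up placeholders, not the actual setup described above):

```python
# A sketch, not the exact commands used above: snapshot a dataset, then send
# only the changes since the previous snapshot into a backup pool on an external drive.
# "tank/home", "backup/home" and the snapshot names are placeholder values.
import datetime
import subprocess

def incremental_backup(dataset="tank/home", backup="backup/home", last_snap="2016-01-01"):
    new_snap = datetime.date.today().isoformat()
    # 1. Take a new snapshot of the live dataset.
    subprocess.run(["zfs", "snapshot", f"{dataset}@{new_snap}"], check=True)
    # 2. Stream the delta between the old and new snapshots into the backup dataset.
    send = subprocess.Popen(
        ["zfs", "send", "-i", f"{dataset}@{last_snap}", f"{dataset}@{new_snap}"],
        stdout=subprocess.PIPE,
    )
    subprocess.run(["zfs", "receive", backup], stdin=send.stdout, check=True)
    send.stdout.close()
    send.wait()
```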
1passive_fist8y
I feel that this is too complicated a solution for most people to follow. And it's not a very secure backup system anyway. You can just get an external hard drive and use any of the commonly-available full-drive backup software. Duplicity is a free one and it has GUI frontends that are basically just click-to-backup. You can also set them up to give you weekly reminders, etc.
1Lumifer8y
Generally speaking, the best practice is to have two separate backups, one of them offsite. First, you might want to run some kind of a RAID setup so that a single disk failure doesn't affect much. RAID is not backup, but it's useful. Second, you might want to set up some automated backup/copy of your data to a different machine or to a cloud. The advantage is that it's setup-and-forget. The disadvantage is that if you have data corruption or malware, etc. the corrupted data could overwrite your clean backup before you notice something is wrong. Because of that it would not be a bad idea to occasionally make known-clean copies of data (say, after a disk check and a malware check) on some offline media like a flash drive or an external hard drive. Disk space is really REALLY cheap. It's not rational :-/ to skimp on it.
0ChristianKl8y
I currently consider either Tresorit or Megaupload to be the best way to back up data automatically; both provide client-side encryption. The free version of Megaupload allows for 50GB.

Maybe it's just the particular links I have been following (acausal trade and blackmail, AI boxes you, the Magnum Innominandum) but I keep coming across the idea that the self should care about the well-being (it seems to always come back to torture) of one or of a googleplex of simulated selves. I can't find a single argument or proof of why this should be so. I accept that perfectly simulated sentient beings can be seen as morally equal in value to meat sentient beings (or, if we accept Bostrom's reasoning, that beings in a simulation other than our ow... (read more)

6[anonymous]8y
There is Bostrom's argument - but there's also another take on these types of scenario, which you may be confusing with the Bostrom argument. In those takes, you're not sure whether you're the simulation or the original - and since there are billions of simulations, there's a billion to one chance you'll be the one tortured. Just make sure you're not pattern matching to the first type of argument when it's actually the second.
9Usul8y
I appreciate the reply. I recognize both of those arguments but I am asking something different. If Omega tells me to give him a dollar or he tortures a simulation, a separate being to me, no threat that I might be that simulation (also thinking of the Basilisk here), why should I care if that simulation is one of me as opposed to any other sentient being? I see them as equally valuable. Both are not-me. Identical-to-me is still not-me. If I am a simulation and I meet another simulation of me in Thunderdome (Omega is an evil bastard) I'm going to kill that other guy just the same as if he were someone else. I don't get why sim-self is of greater value than sim-other. Everything I've read here (admittedly not too much) seems to assume this as self-evident but I can't find a basis for it. Is the "it could be you who is tortured" just implied in all of these examples and I'm not up on the convention? I don't see it specified, and in "The AI boxes you" the "It could be you" is a tacked-on threat in addition to the "I will torture simulations of you", implying that the starting threat is enough to give pause.
6solipsist8y
If you love your simulation as you love yourself, they will love you as they love themselves (and if you don't, they won't). You can choose to have enemies or allies with your own actions. You and a thousand simulations of you play a game where pressing a button gives the presser $500 but takes $1 from each of the other players. Do you press the button?
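(To make the stakes concrete, assuming all 1,001 players reason identically: if every copy presses, each gains $500 but loses $1 to each of the 1,000 other pressers, for a net of $500 - $1,000 = -$500; if no copy presses, each ends up with $0.)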
0Usul8y
I don't play; craps is the only sucker bet I enjoy engaging in. But if coerced to play, I press with non-sims. Don't press with sims. But not out of love, out of an intimate knowledge of my opponent's expected actions. Out of my status as a reliable predictor in this unique circumstance.
1Gunnar_Zarncke8y
My take on ethics is that it breaks into two parts: individual ethics and population ethics. Population ethics in the general sense of action toward the greater good for the population under consideration (however large). Action here consequently meaning action by the population, i.e. among the available actions for a population - which must take into account that not all beings of the population are equal or willing to contribute equally. Individual ethics on the other hand are ethics individual beings can potentially be convinced of (by others or themselves). These two interplay. More altruistically minded individuals might (try to) adopt sensible population ethics as their maxim, some individuals might just adopt the ethics of their peers, and others might adopt egocentric or tribal ethics. I do not see either of these as wrong or some as better than others (OK, I admit I do; personally, but not abstractly). People are different and I accept that. Populations have to deal with that. Also note that people err. You might for example (try to) follow a specific population ethics because you don't see a difference between population and individual ethics. This can feel quite natural because many people have a tendency to contribute toward the greater good of their population. This is an important aspect because it allows population ethics to even have a chance to work. It couples population ethics to individual ethics (my math mind kicks in and wonders about a connection coefficient between these two and if and how this could be measured and how it depends on the level of altruism present in a population and how to measure that...). What about my ethics? I admit that some people are more important to me than others. I invest more energy in the well-being of my children, myself and my family. And in an emergency I'd take greater risks to rescue them (and me) than unrelated strangers. I believe there is such a thing as an emotional distance to other people. I a
3Usul8y
Thanks for the reply. I'm not sure if your reasoning (sound as it is) is behind the tendency I think I've identified for LW'ers to overvalue simulated selves in the examples I've cited, though. I suppose by population ethics you should value the more altruistic simulation, whoever that should be. But then, in a simulated universe devoted to nothing but endless torture, I'm not sure how much individual altruism counts. Regarding the "totally tangential point": I believe footnotes do the job best. The fiction of David Foster Wallace is a masterwork of portraying OCD through this technique. I am an idiot at formatting on all media, though, and could offer no specifics as to how to do so.
0Gunnar_Zarncke8y
I think that if people don't make the distinction I proposed, it is easy to choose an ethics that overvalues other selves compared to a mixed model. Thanks for the idea to use footnotes, though; yes, it is difficult with some media.
2polymathwannabe8y
What you're calling population ethics is very similar to what most people call politics; indeed, I see politics as the logical extension of ethics when generalized to groups of people. I'm curious about whether there is some item in your description that would invalidate this comparison.
0username28y
Ethics is a part of philosophy; political philosophy, also being a part of philosophy, would be a better analogy than politics itself, I think.
0Gunnar_Zarncke8y
I did look up population ethics on Wikipedia and considered it to be a match if you generalize by substituting "number of people" with "well-being of people". But I admit that politics also involves choosing among available actions in a group for the benefit of the group. The main difference from what I meant (ahem) is that politics describes the real thing, with unequal power, whereas population ethics prescribes independently of the decision makers' power.

Any LessWrongers in Taipei? I am there for a while, PM me and I will buy you a beer.

[-][anonymous]8y60

Dealing with shame by embracing a vulnerability, fear of vulnerability and letting that shame be

I feel full of shame which I can’t explain. I feel that it is linked to my gender identity, sexuality and/or body.

why

When I asked Google why I feel this shame with search terms linked to the above suspicions, I landed on a page suggesting that shame in adult males is linked to child abuse. The point that really hit home was the comment: ‘’Males are not supposed to feel vulnerable or fearful about sex.’’ Was I sexually abused as a child? I didn’t think so. Thou... (read more)

6Viliam8y
"Abuse" is not a binary thing; it's a scale. Just because you were not at one extreme, does not mean that you were necessarily at the other extreme or near it. Depends on how you are going to react to the label. The healthy aspect is that it may allow you to see causalities in your life that you have previously censored from yourself; and then you can take specific actions to untangle the problems. The unhealthy aspect is if you take it with a "fixed mindset", and start crying about your past ("I am tainted, forever tainted"), or in extreme case if you start building some ideology of revenge against the whole evil society (or parts of the society) responsible for not preventing the bad things from happening to you. Seems like you are choosing generally the good direction. Okay, I wouldn't go that far. ("What doesn't kill you, makes you stronger", Just World Hypothesis, etc.) It is good to react to bad things by deriving useful lessons. However, in a parallel universe you could have good things happen to you, and still derive useful lessons from them. (Or you could derive useful lessons from bad things that happened to other people.) Bad things are simply bad things, no need to excuse them, no cosmic balance that needed to happen to make you a better person. That would mean denying that those things were actually bad. Being able to turn a bad experience into a good lesson, is a good message about you and your abilities. Not about the bad experience per se. A different person could remain broken by the same experience. I'd say: Use the past to extract useful information and move on, not to build a narrative for your life. I'd say: Admit that some people have fucked up, but don't waste your time planning revenge (it is usually not the optimal thing to do with your life). Maybe don't even analyze too much who or how precisely have fucked up, if such analysis would take too much energy. I agree. Depends on context. Feelling vulnerable (in situations where you fe
0ChristianKl8y
That really depends. Authenticity is often more useful than wearing a mask. In present politics Trump is successful while being relatively authentic. There's a lot of power in it.
2TimS8y
Respectfully, Trump is very skilled at sounding authentic. I'm not sure that he is authentic, but some other politician could easily be more authentic while lacking Trump's skills at sounding authentic.
3polymathwannabe8y
Overcoming fear is always healthy, but you should not let social expectations dictate how you have to feel. There's no single way men are supposed to behave. Trying to force masculinity to fit inside a rigid box of allowed behaviors is a recipe for frustration and self-hatred. If you have feelings of vulnerability and fear, rather than denying or repressing them, you can observe and understand them. In cases like this I always recommend the Empty Closets forum. Members are knowledgeable and compassionate.
4Viliam8y
Just a sidenote: there are multiple "boxes" for masculinity, and when someone tells you to get out of the box, they often have an alternative box ready for you. (For example, instead of constant checking whether something you want to do is not "girly" or not "gay", they may offer you to constantly check your "privilege".) Remember that you can avoid those new boxes too.
0ChristianKl8y
That's dangerous territory. Quite a lot of people got talked by their therapists into having false memories of abuse. There are many psychological techniques for overcoming feelings. There's CBT, which includes workbooks like The Feeling Good Handbook, and there's Focusing.
0Usul8y
"That's dangerous territory. Quite a lot of people got talked by their therapist has having false memories of abuse." I would want to have a hell of a lot of evidence showing a clear statistically significant problem along these lines before I attempted to discourage a person from seeking expert help with a self-defined mental health problem.
0ChristianKl8y
Nothing I said is about discouraging Clarity from seeking out an expert for mental health. A well-trained expert should know what creates false memories and be aware of the dangers. From my perspective, the idea that false memories got planted is uncontroversial history taught in mainstream psychology classes.
2Usul8y
"the idea that false meories got planted is uncontroversial history" Certainly, but is this a significant concern for the OP at this time, such that it bears mention in a thread in which he is turning to this community seeking help with a mental health problem. "Dangerous territory" is a strong turn of phrase. I don't know the answer, but I would need evidence that p(damage from discouraging needed help)< p(damage from memory implantation in 2015). Would you mention Tuskigee if he was seeking help for syphilis? Facilitated communication if he was sending an aphasic child to a Speech Language Pathologist? Just my opinion.
0ChristianKl8y
This community is not "expert help" for a mental health problem, in the sense that people here are not trained to deal with the issue in a way that avoids producing false memories. That's not at all what he's doing. In this post he doesn't speak about going to an expert to get help. He instead speaks about acting based on a theory about shame that he read on the internet. Clarity spoke in the past about having seen a psychologist, and I don't argue that he shouldn't.

Is there a formal theory of how a rational actor should bet on prediction markets? If the prediction market says the probability is 70% and the actor thinks it's 60%, is there a formal way to think about to what extent the agent thinks he knows better and should therefore bet against the market?

I'd guess that that falls under the usual paradigms like Savage or Von Neumann-Morgenstern. For example, the Kelly criterion.
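A minimal sketch of how the Kelly criterion would apply to the 70%-market / 60%-belief example above (illustrative numbers only, not investment advice):

```python
# Kelly fraction for buying one side of a binary prediction-market contract.
# Assumes the contract costs `price` and pays out 1 if your side wins.

def kelly_fraction(p_win, price):
    b = (1.0 - price) / price              # net odds: profit per unit staked
    f = (b * p_win - (1.0 - p_win)) / b
    return max(f, 0.0)                     # don't bet if you have no edge

# Market prices the event at 70%; you believe 60%, so you buy the NO side:
# the NO contract costs 0.30 and, by your estimate, wins with probability 0.40.
print(kelly_fraction(p_win=0.40, price=0.30))  # ~0.14 of bankroll
```

With those numbers the rule says to stake roughly 14% of your bankroll on NO; if your estimate matched the market price exactly, the fraction would be zero.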

2Lumifer8y
Sure. Just stop thinking in terms of discrete probabilities and start thinking in terms of full distributions.
0ChristianKl8y
If the Superforecasting people added a group for the next batch of forecasts that used that method, what specific methodology should they use?
0Lumifer8y
What do you mean, methodology? Operations on distributions (e.g. addition) are well-defined, and while there may be no easy analytic solution, nowadays you can always do things numerically.
0ChristianKl8y
If you tell an average person, "Please write down the probability of each event," they can handle what they are asked to write. If you were to change prediction book, what would the new system look like? Would it be that someone writes 40-60% instead of 50%? Would they write something different?
0Lumifer8y
We'll need to talk about the probability of the probability, and I'm not sure that "an average person" is up for it. Maybe one can ask "What do you think the probability of event X is?" and then ask "How sure are you of your answer to the previous question? What are the minimum and maximum reasonable values?" Given this, you can construct some reasonable distribution over the [0..1] segment, e.g. a Beta, and go on from there.
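A minimal sketch of one way to turn such answers into a Beta distribution (an ad-hoc construction for illustration, not a standard elicitation procedure): fix the mean at the best guess and increase the concentration until most of the mass lies inside the stated range.

```python
# Fit a Beta distribution to "best guess 0.6, reasonable range 0.4-0.75".
# Assumes the stated range contains the best guess.
from scipy.stats import beta

def fit_beta(best_guess, low, high, mass=0.90):
    a = b = 1.0
    for concentration in range(2, 2000):
        a = best_guess * concentration
        b = (1.0 - best_guess) * concentration
        if beta.cdf(high, a, b) - beta.cdf(low, a, b) >= mass:
            break
    return a, b

a, b = fit_beta(0.6, 0.4, 0.75)
print(a, b, beta.interval(0.90, a, b))  # parameters and the resulting 90% interval
```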
0ChristianKl8y
Let's say we talk about an average sensible person. I'm not exactly sure about this subject but it feels to me like it would be good to move from what prediction book does to something like this. When reading the IPCC report I would be very interested in something like this to get a better sense of the state of the science.

Sapir-Whorf-related question:

Although I've been an informal reader of philosophy for most of my life, only today did I connect some dots and notice that Chinese philosophers never occupied themselves with the question of Being, which has so obsessed Western philosophers. When I noticed this, my next thought was, "But of course; the Chinese language has no word for 'be.'" Wikipedia didn't provide any confirmation or disconfirmation of this hypothesis, but it does narrate how Muslim philosophers struggled when adapting Greek questions of Being into... (read more)

5gjm8y
Or that Eastern philosophers have spent centuries failing to ask the right questions. If language A makes it easy to ask a certain question and language B makes it hard, it doesn't follow that it's a bad question arising only from quirks of language A; instead it could be a good question hidden by quirks of language B (or revealed by in-this-case-beneficial quirks of language A).
2wizard8y
It seems a stretch to put Buddhism in the category of don't-really-care-about-Being. Rather, it's an important point that there is no being and realizing so brings countless bliss and enlightenment.
1ChristianKl8y
A particularity of English is that "to be" means a lot of different things. It covers three distinct categories in natural semantic metalanguage.
1Viliam8y
Now I am curious whether most of the philosophy of "Being" is merely confusion caused by conflating some of those different meanings.
1CAE_Jones8y
I was under the impression that 是 was Chinese for "to be". The nuance isn't quite the same--you can say 是 in response to "are or aren't you American?", but that's more or less subject-omission--but it seems close enough? But my experience with Chinese includes only two years of Mandarin classes and a few podcasts; I haven't studied the linguistics in so much detail, and that studying ended 5 years ago, so if you're basing this on something I don't know, I'd be glad for the correction.
3polymathwannabe8y
I know much less Chinese than you do. Having said that: The Chinese version of "be" lets you apply a noun predicate to your subject, but not an adjectival predicate: you can use it to say "I am a student" or "I am an American" but not "I am tired" or "I am tall;" that is, it doesn't state the attributes of a noun but an equivalence between two nouns. To say "I am tall," you just say "I tall." All of the other meanings of "be" (the ones relevant to this problem are those related to the essence/existence question) are expressed with various other words in Chinese.
3entirelyuseless8y
If that is the case I consider it pretty unlikely that this has any relevance to Chinese or Western philosophy. Especially since in Greek saying "I am tall" is basically saying "I am [something tall]" which according to your description you could also say in Chinese if you had a word for "something tall."
1CAE_Jones8y
Ah, yeah, that's true. Adjectives exhibit verb-like behavior in several East Asian languages; that they also do this in Chinese kinda slipped my mind.

Iran's blogfather: Facebook, Instagram and Twitter are killing the web

Hossein Derakhshan was imprisoned by the regime for his blogging. On his release, he found the internet stripped of its power to change the world and instead serving up a stream of pointless social trivia

2Lumifer8y
"The street finds its own uses for things." -- William Gibson
[-][anonymous]8y20

How profitable are student club party and ballroom events? I am surprised external companies haven't sprung up to handle the organising of those events on student clubs' behalf for tidy profits, in exchange for access to an attendee base and marketing channels. In return, the student club members get value and their leadership gets extra funds.

5ChristianKl8y
Your average disco is such a company. They put on parties that people can enter by paying money.
2Elo8y
not profitable. companies try, venues for example - regularly email clubs and try to get business from them. source: personal experience.
2[anonymous]8y
I concur, having advised many student organisations over the years in the US and UK. Often such events are supported by organisation funds raised in other ways, rather than as generating income. And many universities have a body of some kind that serves to advise and support student organisations (including administrative and events advice). Finally, in many cases, students actually want to gain experience organising events, sometimes for personal development and other times just for CV fodder. Farming events out to an external company eliminates this possibility.
0OrphanWilde8y
I had a friend who organized these kinds of events. She made okay money for the amount of time invested in the organization of the event itself, but events were sporadic, and once you considered the time invested in getting the event, a retail job paid rather better. If you can achieve the kind of success where people seek you out, it would pay pretty well, but that requires considerable social capital and skill, and there are other opportunities where similar social capital and skill would pay better.

Can anyone help think of a clever name for a quantitative consulting company? LW in-jokes allowed.

Bay Esteem (halfway sounds like Bayes Team, har har).

6Lumifer8y
Conquan. Or AskClippy :-)
5PipFoweraker8y
Replying to clarify that the point I assigned was entirely for AskClippy :-)
3[anonymous]8y
As boring as it might sound, something with the term quantitative or similar might be prudent in the long run.
3LizzardWizzard8y
quanto costa

I am 85% sure The Lion is Eugine Nier.

0NancyLebovitz8y
The_Lion is possibly karma-mining in Quotes, but doesn't seem to be malicious. What's your line of thought?

The Lion started posting "abruptly" with no signs of being a newbie, not very long after VoiceOfRa was banned (much like VoiceOfRa did after Azathoth123 was banned and Azathoth123 did after Eugine Nier was). Also, the first comments of The Lion have been on points that the previous EN incarnations also often made, and their writing styles sound very similar to me.

1gilch8y
I've heard of narrow AIs that can supposedly identify an author from their writings. I'm not certain how accurate they are, or how much material they need, but perhaps we could use such a system here to identify sockpuppets and make ban evasion more difficult.
0Lumifer8y
It's not a "narrow AI", it's a straightforward statistical model. Or, if you prefer, an outcome of machine learning applied to texts of different authors. A voting sockpuppet doesn't post except to get the initial karma. It just up- and down-votes -- there is no text to analyse.
0gjm8y
There's a pattern of voting to analyse, though...
0Lumifer8y
Sure, but that's a much easier problem.
0gjm8y
I'm not convinced it actually is. (To get useful information out of, that is.)
0gilch8y
https://en.wikipedia.org/wiki/Weak_AI It is by that definition. Of course, words are only useful if people understand them. I know LW has some non-standard terminology. Point me to the definitions agreed upon by this community and I'll update accordingly. Sounds like the initial karma threshold is too low. I have various other ideas about how to fix the karma system, but perhaps I should hold off on proposing (more) solutions before we've discussed the problem thoroughly. If that's already been started I should probably continue from there; otherwise, do you think this issue (karma problems) merits a top-level discussion post?
-3Lumifer8y
I still don't think so. "AI" is a very fuzzy term (the meaning of which changes with time, too) but in this case what you have is a fairly plain-vanilla classifier which I see no reason to anoint with the "intelligence" title. Karma has been extensively (and fruitlessly) talked about here. If you want to write a top-level post about your proposals, it might be a good idea to acquaint yourself with the previous discussions here (as well as the experience of other forums, from Slashdot to Reddit).
0Baughn8y
I would be very interested in trying one of those. In particular, I frequently change up my writing style (deliberately), and it might be able to tell me what I'm not changing.
0CronoDAS8y
Hopefully we aren't going to have to implement an IP ban or something...
0username28y
There are VPNs for circumventing that, although getting a VPN is harder than creating a new account.
0Lumifer8y
You don't want to get into an arms race. Especially given that one side fields Trike as front-line troops.

Verdict on Wim Hof and his method?

1NancyLebovitz8y
I've also seen a milder claim from him that exposure to moderate temperature extremes (cold/hot showers, I think) makes one's blood vessels more flexible.
1ChristianKl8y
Isn't being able to withstand extreme cold a pretty useful skill?
1moridinamael8y
I probably should have provided more detail in the post. He claims not only to be able to withstand cold, but to be able to almost fully regulate his immune system and other autonomic systems. He furthermore claims that anyone can learn to do this via his method. For example, he claims to be able to control his inflammation response. This would be very useful to me, at least. There seems to be some science backing up his claims - he was injected with toxins and demonstrated an ability to control his body's cytokine, cortisol, etc. reaction to the toxins. So when I'm asking for a verdict, I'm sort of asking what people think of the quality of this science.
1ChristianKl8y
Nothing in the Wikipedia article sounds surprising to me. The Wikipedia article says nothing about him achieving therapeutically useful effects with it, or claiming to do so. I have two friends who successfully cured allergies via hypnosis. One of them found that it takes motivation on the part of the subject and doesn't work well when the subject doesn't pay for the procedure, so an attempt at a formal scientific trial failed because the recruited subjects, who got the treatment for free, were not motivated in the right way.

Discussion of Bayes' Theorem as expounded by EY 8-/

Fairly active follow-up discussion on HN.

2username28y
Reading that HN discussion... well, I understand that it doesn't necessarily tell me anything, but socially I can't help but notice how idiotic anti-LWers sound in that thread. Fnords upon strawmen upon fnords upon cherry-picking upon fnords upon claims that if anyone on LW ever said that the probability of something is greater than zero that means every single LWers is certain that thing is guaranteed upon mood affiliation upon misspellings of EY's name upon claims that if you don't condemn poster's political enemy you must be supporting it et cetera et cetera.
3Viliam8y
Your comment made me read the debate, but it seems rather boring to me. Okay, there are a few gems there, such as (rephrased, with a link added):

* cult = a system of religious veneration and devotion directed towards a particular figure or object
* the veneration of Yudkowski and others in the LW community is more than a bit "religious"
* therefore by definition LW is a cult

Also, a list of our cult leader's crimes includes "a clear violation of copyright law" -- wanting to monetize HPMoR fanfic. Which by the way is "an introductory religious text". (Today I learned a new argumentative technique: Describe what someone is doing, and keep inserting the word "religious" in random places. Use the scare quotes to prevent possible criticism; yes, you know that the word does not apply literally. However, when you are finished, use the frequency of the "religious" adjective as proof that yes, the group you described is de facto religious. Case closed.) But generally, the discussion seems okay to me. I mean, I expect that most internet discussions contain this kind of argumentation. I take it for granted that someone will link "RationalWiki". When I imagine how that HN discussion would probably have looked five years ago, I am quite satisfied with the outcome. Seeing that the pro and con voices are approximately balanced, that is much more than I expected.
2Lumifer8y
That's just a fnord.
2Viliam8y
I feel like I found a prokaryotic version of fnord, which is almost a different species. Only one word, repeated with no skills or subtlety, and then directly used as a punchline. I think modern-day fnords are supposed to have a larger vocabulary, so they can better merge with the text.
0username28y
Eh, it's the same thing even if we replace some instances of the word 'religion' with synonyms like 'X fetishism', 'worshipping at X altar', 'fundamentalism', 'X-god'.
0Lumifer8y
Interestingly enough, this is what the original ur-fnord was.
0gjm8y
For anyone unfamiliar with the term: Fnord.
0OrphanWilde8y
...so one of the basic writing skills, of adding meaning at a subtextual/connotative level? (Poe in particular was adamant that you should set the tone of the story in the very first sentence.) I'm puzzled by that story. Any halfway decent author can do that with a halfway receptive audience without all the prior-hypnosis baggage, just by utilizing the negative feelings people develop towards words over the course of their lives. Periodic spacing of negative-connotation words throughout an otherwise neutral-connotative work would make most people uneasy or uncomfortable.
0gjm8y
I think the fnords are meant to have more effect than merely making most people uneasy or uncomfortable; they're supposed to function as a means of outright control. But I haven't actually read the Illuminatus! trilogy so I don't guarantee I'm right about that.
0NancyLebovitz8y
In Illuminatus!, children were trained in elementary school to have anxiety reactions to fnords, and then to have no conscious awareness of why they felt anxious. I assume this is allegory-- and also that most of the training doesn't happen in school. I'd say that in the book, fnords are about control, but in a general "the system is out to control you" sort of way rather than instilling specific beliefs or actions. Ads don't have fnords, so people buy things in the hope of relieving anxiety. This is not literally true-- many ads evoke anxiety. Becoming able to see the fnords is a sign of impending enlightenment.

sorry, not sure if this should be posted here, but I haven't yet found a more rational strategy for my problem. If any of you guys know someone who can speak and write Japanese please contact me, I would very much appreciate any help

0Crux8y
What are you trying to do? If a Japanese person with passable English would be able to help, then head on over to a language-exchange website such as InterPals and trade English instruction for whatever you need. If you need someone who speaks high-level English and it's okay if their Japanese is only passable, then go to a forum such as Koohii and make your request to the large population of people learning Japanese. If you need someone high level in both languages, well, you'll have a harder time. Such people are uncommon and not likely to work for free.
2LizzardWizzard8y
I want to get a 'tsuyoku naritai' tattoo and am wondering how it should be written properly in hieroglyphics.
2[anonymous]8y
The irony. Don't mourn any change of decision here. A structured approach to help you think through problems and solutions in a logical way. There are six basic steps. You may find that getting a tattoo with 'tsuyoku naritai' isn't your solution.
2Crux8y
Your response is rather cryptic. What does that content have to do with LizzardWizzard wanting to get a tattoo with a classic Less Wrong phrase on it?
0LizzardWizzard8y
I promise to take care of my rationality skills when the work is done
0LizzardWizzard8y
Mate, thank you for your help; I hope InterPals will work out for me. The thing is, I don't want to go meta on this. It's just a reminder for myself, and that's it; it wasn't designed as a solution for something else.
2polymathwannabe8y
Japanese ideograms are not the same as 'hieroglyphics,' but here is your sentence: 強くなりたい
-1LizzardWizzard8y
Thank you! You are my hero, though I'm not your princess.
-1LizzardWizzard8y
Oops, seems like I misclicked; this was of course a reply to Clarity.

Some political predictions (Edited for formatting):

  • Another stock market slump within the next year: 50% (70% within two years)
  • Cor: Average stock value collapse, given slump, of 70%, +- 10%: 90%
  • Trump to get Republican nomination: 65%
  • Cruz to get Republican nomination: 35%
  • Hillary to get Democratic nomination: 30%
  • Rel: Hillary to be indicted on criminal charges: 50%
  • Sanders to get Democratic nomination: 60%
  • Republicans to win 2016 presidential race, regardless of nomination: 80%
  • Republicans to win moderate majority in both houses in 2016: 80%
  • Republicans
... (read more)
4ChristianKl8y
That suggests zero chance for Marco Rubio. Why so low? Especially with 10% left open for a non-Hillary, non-Sanders candidate.
3iarwain18y
So probability of either Trump or Cruz is 100%?
-2OrphanWilde8y
No, ~83%
0ChristianKl8y
How do you go from Trump to get Republican nomination: 65% and Cruz to get Republican nomination: 35% to 83%?
-2OrphanWilde8y
Rephrase those as the inverse probabilities (Trump's probability of losing is 35%, Cruz's is 65%), and it will make more sense.
4g_pepper8y
It seems to me that if the probability of Trump winning is 65% and the probability of Cruz winning is 35%, then the probability of Trump or Cruz winning is 100% (since the probability of Cruz AND Trump winning is 0%).
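For concreteness, here is a minimal sketch of the arithmetic under discussion, using the 65%/35% figures from the prediction list above; it does not attempt to reconstruct where the ~83% figure comes from.

```python
# Figures from the prediction list above.
p_trump = 0.65
p_cruz = 0.35

# The nominations are mutually exclusive (at most one of them can win),
# so P(Trump or Cruz) = P(Trump) + P(Cruz).
print(p_trump + p_cruz)                  # 1.0 -- no probability left for anyone else

# Treating them (incorrectly) as independent events would instead give
# P(Trump or Cruz) = 1 - (1 - 0.65) * (1 - 0.35) = 0.7725,
# which is neither 100% nor the ~83% quoted above.
print(1 - (1 - p_trump) * (1 - p_cruz))  # 0.7725
```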
2username28y
What's your reasoning behind putting such a low probability on this one? According to this data, this proposition has been true 35 years in a row. The ten years from 1971-1980 were only 0.08 degrees C warmer than the period from 1961-1970, but every ten-year period since then (beginning with 1972-1981) has been more than 0.12 C warmer than the previous ten years.
-2OrphanWilde8y
The choice of starting year has a substantial effect.
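A minimal sketch of the decade-over-decade comparison username2 describes, and of the starting-year sensitivity OrphanWilde raises; the anomaly series below is a made-up placeholder, not the linked dataset.

```python
# Hypothetical annual temperature anomalies (degrees C), indexed by year.
# These are placeholder numbers, not the dataset linked above.
anomalies = {year: 0.02 * (year - 1960) for year in range(1961, 2016)}

def decade_mean(start_year):
    """Mean anomaly over the ten years starting at start_year."""
    return sum(anomalies[y] for y in range(start_year, start_year + 10)) / 10.0

# username2's comparison: each ten-year window vs. the previous ten years.
for start in range(1971, 2006):
    diff = decade_mean(start) - decade_mean(start - 10)
    print(start, round(diff, 3))

# OrphanWilde's point: the result depends on which year the windows start from,
# e.g. using 1971-1980 vs. 1972-1981 as the first window compared.
```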
2Douglas_Knight8y
What are "Cor:" and "Rel:"? Are those conditional predictions? What is the meaning of the global temperature predictions? Are you going to compare the average temperature of 2025 to the average temperature of 2015? The average of one decade to another? Are you going to bet on this? Your stock predictions and 2016 political predictions are pretty far from consensus and easy to bet on.
0OrphanWilde8y
Yes. ETA: Decade. No. Two reasons: First, my internal ethics system puts no value on things I do not feel I have earned by merit of production, so I can literally only lose in the proposition. Second, I'm putting this here so I remember my predictions to calibrate my confidence levels, which is why I didn't put it in the latest Open Thread, where it would be more widely exposed. My last set of predictions, made around eight to ten years ago, are also online, but because I do not want to associate the username under which I made them with this one, I cannot share them; the short summary, however, is that they shared the same political nature as these, and they were accurate to a degree that even surprised me. I didn't attach probabilities at the time; that's a thing I'm borrowing from this community. I suspect my guesses will be correct on average, while my probabilities will be incorrect on average, but that's what this process is for.
2ChristianKl8y
What's a major military conflict?
2Lumifer8y
Couple of questions. What's your definition of a "market slump" and/or an "economic crisis"? Also, what's Health Index and what is the US National Health Database?
0OrphanWilde8y
Let's say, for simplicity, a nationally recognized economic downturn amounting to at least a recession. The Health Index is, I guess, an unofficial name for the WHO's ranking system for national healthcare systems, last performed in 2000. http://www.who.int/whr/2000/media_centre/press_release/en/ The US National Health Database is a theoretical thing that is in the works to provide patient information nationally to any hospital or medical provider that requires it; funding was set aside in the PPACA (Obamacare). It's being implemented at the state level by federal grant, and I believe it is intended to eventually operate as a set of interacting state databases rather than a single database stored somewhere.
4Lumifer8y
You have a 90% probability that this "downturn" will lead to the US stock market losing two thirds of its value which is worse than 2008. That implies a bit more, um, severe event. Ah, I know an expression that fits the situation well...
0OrphanWilde8y
Yes.
0polymathwannabe8y
Why?
0OrphanWilde8y
Ever-increasing dissatisfaction with the government combined with the illusion of change provided by swapping political parties. The presidency has been passed back and forth between the parties for the last few decades as a result; the candidates really don't matter all that much, because voters aren't voting for the candidate, they're voting against what they see as the current status quo. I'm expecting an acceleration, actually, as the current generation, with expectations shaped by the internet era, becomes the dominant voting force; which is to say, within the next twenty years, all presidents will become one-term presidents, and third party presidents will become viable contenders after the collapse of the major parties into infighting, unbolstered by eight-year terms in which the losing party reconsolidates its coalitions.
2polymathwannabe8y
The people's evaluation of Obama's performance appears rather even, and steadily so if you look at previous years in the same page.
0OrphanWilde8y
It's curious that you think this is a counterargument, particularly given that Obama's performance evaluation is historically low for a president.
0Viliam8y
Could you please briefly explain why you give Sanders twice the chance you give Hillary? I am not watching politics too closely, but my impression was that Hillary is "part of the system", and she will also play the gender card; while Sanders is a "cool weirdo" (less so than Ron Paul in 2012, but in a similar direction), which makes him very popular on the internet, but the votes in real life will not follow the internet polls.
3gjm8y
I would guess the answer is in the prediction in between the Clinton and Sanders nomination predictions: "Hillary to be indicted on criminal charges: 50%". Presumably that would hurt her chances of nomination. Some of these figures seem implausible to me. The US presidential predictions are fairly strange, but others are worse. 63% probability of a 60%-80% decline in stock prices within two years? Really? (And, given that, why so little probability attached to a smaller decline? What's the underlying model here?) And what exactly is OrphanWilde's mental model of the WHO's attitude to US healthcare, that predicts such a huge influence of the existence of a US national health database on how the WHO assesses countries' health?
0OrphanWilde8y
This isn't the whole of it, but it contributes, along with personal issues Clinton is struggling with. The bigger issue is that the coalition is fractured. If Sanders weren't playing softball against Hillary, it wouldn't even be a question, but I think he believes playing hard politics against her would damage his chances against Trump by fracturing the Democratic coalition along gendered lines. The Democratic coalition is at its weakest leaving a Democratic presidency, since anything they have achieved results in a less interested coalition member group whose goals are already at least partially achieved, and anything they haven't achieved results in a frustrated coalition member group whose goals were perceived to be passed over. United, the Democrats win; their coalition is larger than the Republican base. Unfortunately, they're at their least united right now, and Sanders can't afford to fracture them any further. Hillary, on the other hand, seems perfectly happy to weaken the coalition in order to win the nomination.
2gjm8y
Seems fairly plausible, but why put this specifically in terms of the Democrats? The same will apply to the Republicans, or any other party anywhere whose support comes from anything other than a perfectly homogeneous group. On the face of it, that should make her more likely to get nominated. Are you suggesting that the Democratic Party's electorate is sufficiently calculating to reason: "She's doing these things to get nominated, they seem likely to piss off Sanders supporters, that will hurt us in the general election, so I won't vote for her in the primary"? Colour me unconvinced.
0OrphanWilde8y
The Republicans are less of a coalition than the Democrats, and more an alliance of two groups: social conservatives and economic liberals. This isn't why Sanders will win, this is why he's still behind. It's a short-term strategy, however, which she started too soon; the primary voters aren't going to vote against Hillary because they don't think they'll win in the general election, they're going to vote against Hillary because she's alienated them to pander to her base.
0gjm8y
So what? If your argument is "if they achieve group G's goals, group G will be unmotivated because they've already got what they need; if they don't, group G will be unmotivated because they'll think they've been neglected", surely this applies whether group G is 10% of the party's support or 50%.
0OrphanWilde8y
Which is easier: Ordering food that ten people need to agree upon, or ordering food that two people need to agree upon?
0gjm8y
I don't see the relevance of the question. The argument wasn't "It's difficult for the Democrats to do things that will please all their supporters, because their supporters are a motley coalition of groups that want different things". It was "Support for the Democrats will be weak in this situation, because each group will be demotivated for one reason if they've got what they want and demotivated for a different reason if they haven't got what they want".
0OrphanWilde8y
It's relevant. A given Democrat is likely a Democrat for their one issue; on all other issues, they tend to revert to the mean (which is why, historically, Democrats tend to rate their party lower on listening to the base than the Republicans do). The Democratic platform is a collection of concessions and compromises between the different coalitions, and is attractive only because of a given coalition's particular interests; the rest of what it offers isn't particularly attractive to its constituents. Republican objectives tend to be more in-line with what its constituents want, since it is only catering to a couple of different factions. It isn't invulnerable, of course, as we see right now with the fight between the conservatives and the pragmatists in the party, but is more resilient to this. The outcome is a Republican base that generally-consistently turns out, and a Democratic base that turns out only when they feel they are losing. The moderates, meanwhile, swing back and forth based on whoever has annoyed them the most recently. Since the party that isn't in power can't do much to annoy them, and things that have happened are more salient than things that might happen, you get elections that swing from party to party each election cycle. With increasing media exposure (both through the traditional media since Watergate, and the Internet more recently), they're increasingly aware of the smallest annoyances, which is accelerating the process.
0gjm8y
This may all be correct, but it seems to me an entirely different argument from the one you made before and on which I was commenting.
0OrphanWilde8y
Systemic overvaluation of stocks relative to risk as a result of tax benefits, combined with overdue bills from the last three economic shocks. The full extent of the drop will only be apparent after taking inflation into account. It's not the WHO's attitude towards US healthcare, it's a difference in attitudes towards national pride between the US and... everybody else. In the US, outward patriotism is combined with criticism of our institutions; representatives of the US are all too happy to say what we should do better, but still insist we're great anyway. Elsewhere, it's dangerous and right-wing (in the European rather than the US sense) to be outwardly patriotic (unless the government is dangerously right-wing already, which is to say, requires patriotism of this form), but that gets combined with a resentment of any implication that there's anything wrong with their country or culture. So ranking systems tend to accentuate the things Europe (which is powerful enough to get its say) does well (such as national health databases, repeatedly, for every category of health) while making sure the US ranks below them, so they can say they're doing better than the central modern superpower (because Asia doesn't count and nobody wants to annoy China).
2gjm8y
What I don't understand is that you can attach a 63% probability to a decline of at least 60%, but at most a 7% probability to a decline of, let's say, 20%-60% (can we agree that a 20% decline would count as a "market slump"?). So, in fact, it is the WHO's attitude towards US healthcare that's relevant here. Anyway, your cynicism is noted but I can't say I find your argument in any way convincing. (In fact, my own cynicism makes me wonder what your motive is for looking for explanations for the US's poor ranking other than the obvious one, namely that the US actually doesn't do healthcare terribly well despite spending so much.) The Wikipedia page on this stuff says that the WHO hasn't been publishing rankings since 2000 (which I think actually makes your prediction pretty meaningless), and that the factors it purports to weigh up are health as measured by disability-adjusted life expectancy, responsiveness as measured by "speed of service, protection of privacy, and quality of amenities", and what people have to pay. I don't see anything in there that cares about national health databases (except in so far as they advance those other very reasonable-sounding goals).
0OrphanWilde8y
They're separate predictions. I went through its ranking criteria about a decade ago, and the database thing came up in every single ranking, dropping even our top-caliber cancer treatment to merely average.
2gjm8y
So what? If you hold that

* Pr(slump) ~= 0.7
* Pr(decline >= 60% | slump) ~= 0.9

then you necessarily think there's at least a ~63% probability of at least a 60% decline and at most a ~7% probability of a decline between 20% and 60%. (And the real weirdness here, actually, comes from the second prediction more or less on its own.) Interesting. Do you have more information?
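Spelled out as a quick calculation, using the numbers stated in the prediction list; this just reproduces the two bounds in gjm's bullets above.

```python
# The two bounds gjm derives, using OrphanWilde's stated numbers.
p_slump = 0.7                  # P(stock market slump within two years)
p_collapse_given_slump = 0.9   # P(decline of roughly 60-80%, given a slump)

# P(slump AND >=60% decline) -- a lower bound on P(>=60% decline).
print(p_slump * p_collapse_given_slump)        # 0.63

# A 20-60% decline can only happen in the "slump but no collapse" branch,
# so its probability is capped at:
print(p_slump * (1 - p_collapse_given_slump))  # ~0.07
```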
0OrphanWilde8y
It's not that weird. Think about predicting the size of the explosion of a factory filled with open barrels of gasoline and oxygen tanks. I think the global economy is filled with the economic equivalent of open barrels of gasoline. Not at the moment. It's been literally years since I've done any serious research on global healthcare. (Working in the health industry tends to make you stop wanting to study it as a hobby.)
0gjm8y
So (if I'm understanding your analogy right) you expect that any drop in the market will almost certainly lead to a huge crash? From the 17th to the 25th of August last year, the S&P 500 dropped by about 11%. This led to ... about a month of generally depressed prices, followed by a month-long rise up to their previous levels. That doesn't sound to me like an economy filled with open barrels of gasoline.
0OrphanWilde8y
Any given spark -could- set it off, which is not the same as any given spark definitely setting it off. If the stock market were responding appropriately to the conditions, then there wouldn't be the equivalent of open barrels of gasoline all over the place. The issue is more structural than that: Interest rates and limited investment opportunities have driven money into the markets, driving prices up, and then keeping them artificially high. Some of this pressure has been relieved by amassing inventory, but that's reached its stopping point, which is starting to cause international trade to falter.
2gjm8y
If Pr(crash|drop) is not quite large, then Pr(smallish decline|any decline) should be reasonably big. Each observation of a drop without a crash is (some) evidence against the antecedent. It sounds as if you think I was assuming that it is or should be, and I'm not sure why. Could you explain?
0OrphanWilde8y
Each time you strike a match without blowing up might be evidence that you're not in a building full of gasoline, but if you see the gasoline, the match-striking evidence doesn't weigh nearly as heavily. I was slightly more specific in another question chain about what I meant by slump, and a 10% drop isn't quite what I had in mind (neither is today's slump, which could rally again), which was more along the lines of a recession. To clarify my prediction: I expect at least a recession. Given a recession, I expect a depression. If we don't get a recession (which would surprise me somewhat), the absence of a depression won't surprise me, but if we do get a recession, the absence of a depression will surprise me a lot.
0Lumifer8y
Since approximately when?

Who buys government bonds at sub-zero rates? Why can't those institutions simply put the money into a bank vault?

2OrphanWilde8y
If I understand correctly, FDIC insurance costs more that way, so whatever you save in negative interest, you'd lose and then some on FDIC insurance.
0Douglas_Knight8y
Maybe the alternative to buying government bonds is putting the money in an account at the central bank, which has interest even more negative? Here is the ECB addressing the question of why a bank would be willing to pay interest to deposit at the central bank, rather than putting paper in a vault: because vaults cost money to build and operate.
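A back-of-the-envelope version of that trade-off; all figures below are invented placeholders, not actual ECB numbers.

```python
# Compare the cost of holding a slightly negative-yielding bond with the cost
# of storing the same amount as physical cash. Rates and amounts are invented.
holdings = 500_000_000          # EUR to be parked somewhere
bond_yield = -0.004             # -0.4% per year on a short government bond
vault_cost_rate = 0.005         # storage, insurance, handling per year

cost_of_bonds = -bond_yield * holdings
cost_of_vault = vault_cost_rate * holdings

print(f"negative yield costs {cost_of_bonds:,.0f} EUR/year")
print(f"vault storage costs  {cost_of_vault:,.0f} EUR/year")
# With these placeholder rates, the mildly negative yield is the cheaper option.
```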
0ChristianKl8y
Aren't the big banks publicly traded and expected to grow by stock market analysts? How does that work when they get negative interest rates?
2Lumifer8y
Banks have a variety of ways of making money besides collecting interest on deposits they make.
0tut8y
They get positive expected real interest from loans they give, but pay negative real interest on deposits they receive.
0ChristianKl8y
If a bank buys a government bond, that is "giving a loan", and I understand that to yield negative interest in certain cases.

What does one call the philosophical position that images have intrinsic meaning, rather than meaning assigned by an external observer?

What can be said about a person giving voice to such a position? (With the purpose of understanding their position and how best one could converse with them, if at all.)

I am asking because I encountered such a person in a social-network discussion about computer vision. They are saying that pattern recognition is not yet knowledge of an image's meaning, and that yes, meaning is intrinsic to the image.

All that comes to my mind is: I am not versed in philosophy, but it looks to me like science is based on the opposite premise, so further discussion is meaningless.

4polymathwannabe8y
To me it sounds like semantic externalism, i.e. the view that meaning doesn't exist in your head but in physical reality.
0ChristianKl8y
Are you sure? I can imagine a dualist who considers that meaning to be mental reality but not physical reality.

Can I edit events that I created on Less Wrong?

It seems I can't. (I ask because I created this event, but when I pasted the details, I neglected to add the city (Melbourne). And now the map is wrong by about 3600 km.)

3Vaniver8y
You should see an "edit meetup" link underneath the map at that link.
2Chriswaterguy8y
Thank you. (In hindsight I should have done a page search for "edit".)

Polls seem to indicate that Trump has a massive lead in the Republican primary, far ahead of Cruz, who is far ahead of Rubio. UK bookmakers put him slightly behind Rubio, and slightly ahead of Cruz. Why the discrepancy?

For that matter - that's the odds of him being the Republican candidate. For the primaries, they put him ahead of Rubio for both the Iowa Caucus and New Hampshire Primary. Does winning the primaries not make him the Republican party candidate? Are there other primaries that aren't being bet on? Do they think his performance in NH and IA is so... (read more)
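One way to make the poll-vs-bookmaker comparison concrete is to convert the bookmakers' prices into implied probabilities. Here is a minimal sketch with invented decimal odds, not the actual prices being discussed.

```python
# Convert each price to an implied probability, then strip out the bookmaker's
# margin (the "overround") so the probabilities sum to 1. Odds are illustrative.
decimal_odds = {"Trump": 2.6, "Rubio": 2.4, "Cruz": 2.8, "other": 9.0}

raw = {name: 1.0 / o for name, o in decimal_odds.items()}   # raw implied probabilities
overround = sum(raw.values())                                # > 1 because of the margin
implied = {name: p / overround for name, p in raw.items()}   # normalised to sum to 1

for name, p in implied.items():
    print(f"{name}: {p:.1%}")
```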

3ChristianKl8y
Winning two primaries does not make him the Republican party candidate. The idea is that while Trump has 40% at the moment, which means he has more than any other candidate, other candidates will drop out as time goes on, and the 60% who don't support Trump at the moment still won't vote for Trump but for somebody else. Apart from that, other people start to understand Trump and how to respond to him as time goes on, and the thinking is that the establishment candidate, Marco Rubio, will benefit from that.
0philh8y
This is plausible, thanks. Follow-up question for the first part: why do bookmakers favor Rubio over Cruz? Cruz has the advantage in polls and IA (a 30× difference between their odds), but a disadvantage in NH (a 4× difference). I could see "non-Trump voters will tend to go to anyone except Trump when their candidate drops out", but "will tend to go to Rubio" seems rather more specific.
2ChristianKl8y
Rubio has more support from the Republican establishment.
2username28y
And the establishment has superdelegates who cast votes at the national conventions, and those 5-10 percent of total delegates would be enough to choose the winner in otherwise close races.
2TimS8y
There are a lot of reasons to think that national polling is not predictive of a primary race:

* First, the relevant decisions are made state by state.
* Second, the sampling issue for primary voters is much harder than for general-election voters. Among other reasons, people are paying a lot less attention, so people who care strongly are probably more over-represented than is typical.

Yes to your second question. Also, winning one or both of the early primaries is not a strong predictor of who will be nominated in contested primaries.
[-][anonymous]8y00

I've created a sock puppet named deprimita_patro with which I intend to create one and only one post (possibly with responses in the comments). I will respond to this comment using that account and would appreciate it if you would upvote that response so that I can make the post. Afterwards, I will delete this comment. Thank you.

[This comment is no longer endorsed by its author]
[-][anonymous]8y-10

edited: this post used to be dumber

gamification for flow experiences.

  • appropriate challenge
  • curiosity
  • controllable
  • creativity
  • feedback
  • self-esteem
3OrphanWilde8y
I'm a Less Wrong boss? And apparently an above-average boss in terms of friendliness, but below-average in terms of intelligence. I'm not sure how I should feel about that. I think I'll go with "Amused".
-1[anonymous]8y
On averageness: I don't understand what I was thinking when I made that. I no longer believe in the aforementioned categorisation. It just seems really weird. On boss: what a weird term to use. I suppose I was trying to get at the idea that "below average" in this context is relative to the LW population, which I already consider above average.
0Viliam8y
It's like a new chapter for "How to Make Friends" -- rate them by how intelligent and friendly they seem to you, and publish the results online. :D :D :D If you insist (which IMHO is not a good idea), perhaps you could at least somewhat taboo the words "intelligence" and "friendliness". Because the words themselves are just labels that different people use differently; and since your definition can differ from mine, your chart is useless to me. Something like "I am impressed by how gwern manages to apply statistical software to anything" would convey information I could agree with.

Academic and anti-transhumanist, anti-libertarian, democratic socialist Dale Carrico is in full flow against Eliezer's essay, Competent Elites. The comments have the new (to me) tidbit that the aforementioned essay and this one on IQ are not present in Rationality: From AI to Zombies (a base motive is, of course, attributed).

3Viliam8y
There are things one could criticize about that article of EY's. Coincidentally, I did so in this Open Thread before reading your comment (what EY observed may be specific to IT elites but unusual for rich people in general). However, the linked critique is... a boring rant. It doesn't contain much more information than "I disagree".
0PipFoweraker8y
If you are not familiar with Carrico's blog and writing style, this is a feature, not a bug.
0Lumifer8y
Looks like a misfeature.
1[anonymous]8y
Too much personal attitude towards Yudkowsky's piece in question makes this full flow hard to take seriously.
1[anonymous]8y
Oh, that blog is sometimes quite fun. In this particular case he's saying many of the things I would love to say about that essay too.
-1knb8y
Welcome back AdvancedAtheist (I guess).
0username28y
Nope. (I have replaced the far-right political jargon with a more mainstream descriptor.) Edit: Someone has downvoted this. It's pretty pointless to take umbrage at this happening to an anonymous account, but I would like to know what exactly the downvoter finds objectionable.
[-][anonymous]8y-20

musings

What does an example super-healthy lifestyle look like? Are there any prescriptions one could model their behaviour changes towards? I imagine it would include things like: x amount of exercise, y diet, not smoking, yada yada. The elements that are surprising for a given person would likely be the really important parts. Ideally, if the prescription is sophisticated enough, some kind of prioritisation of the different elements would be helpful.

*

Is there a hedonistic counterpart to effective altruism? I'd sure like to get involved with that :) Imagine that, a com... (read more)

Two people were lamenting the state of affairs of the world.

A bystander said, "When I become 'King of the World' I will fix things."

One of the two said, "Can I trust you?"

The bystander said, "Of course not."

The retort was, "In that case, I trust you."

Is this a

par·a·dox/ˈperəˌdäks/ noun

1.    a statement or proposition that, despite sound (or apparently sound) reasoning from acceptable premises, leads to a conclusion that seems senseless, logically unacceptable, or self-contradictory.

?

0Viliam8y
I think the word "paradox" is ill-defined, because "seems senseless, logically unacceptable, or self-contradictory" is a "2-place word" -- seems X to whom?
0WhyAsk8y
Thanks, it'll take me some time to digest this link. Can you suggest a better definition and would this anecdote be included or excluded? If excluded, how would you define this odd exchange?
1Viliam8y
I am not a linguist -- maybe there is a more appropriate label for this thing, but I don't know it. The idea of the link is: you shouldn't say things like "the conclusion seems senseless" but rather "the conclusion doesn't make any sense to person X (but it could make sense to some other person Y)". Otherwise you get the implicit assumption that things make or don't make sense equally to all listeners; that is that "not making sense" is an inherent property of the conclusion, instead of a relation between the conclusion and the listener.
[-][anonymous]8y-30

Goals for January

  • Try unprotected sex for the first time
  • Try magic mushrooms
  • Exit your social enterprise: growing too quickly, too much responsibility, inadequate staff, too stressful
1PipFoweraker8y
Combining the first two would likely result in a more-memorable-than-most experience.
0[anonymous]8y
I've reconsidered the first one. It's no longer on my list. Pleasurable as it may be, it sets a precedent that may make me less happy overall. I'm a forward-thinking hedonist :) Another goal I forgot to write down was sperm donation, but my friends helped me realise that isn't what I want to do!
0username28y
What do you mean?
0[anonymous]8y
Unprotected sex will be pleasurable, but until I have a stable partner, it poses a concurrent sexual health threat. If I have infrequent unprotected sex to lower my risk, I would find protected sex less pleasurable. Ignorance is bliss. This is my new "intentional list", rather than a goal list, for January:

* Normative interpersonal relations
* Normative self concept
* Normative gmail and google drive activity
* No more obsessive music listening
* Normative sleep patterns
* Normative note taking
* Normative scheduling and calendars
* Normative emailing and writing
* Normative career
* Normative living situations
* Normative family relations
* Normative linguistic patterns
* Not volunteering
* Not entrepreneurialising
* Not being a sperm donor
* Not buying domain names
* Political moderation
* Heteronormativity
* Neurotypicality
* Psychotypicality
* Not associating with socially a-normative friends

tl;dr: be more normal
0polymathwannabe8y
It's risky to aim at such a strong level of conformity. If you go that way, you'll be letting social pressure mold you into an obedient everyman, with nothing to make you distinctly you.
0polymathwannabe8y
Do you already know what partner you'll have for this? This is literally a life-or-death situation. You can never be too paranoid.
5Viliam8y
There is also a chance of creating life, so... I guess the risks cancel each other out... for some kind of utilitarianism.
1polymathwannabe8y
Is the % risk of death from an STD the same as the % risk of pregnancy? Also, maternal transmission of STDs makes life horrible for the fetus.
2zedzed8y
http://markmanson.net/std-guide
4Lumifer8y
You mean like crossing the street?
0polymathwannabe8y
I'm unsure what the intention of the comparison is. If you want to stretch it all you can, swallowing is a life-or-death situation. But you don't routinely have to teach your kids to practice "safe swallowing," whereas "safe street crossing" lessons for kids do exist.
2Lumifer8y
The intention of the comparison is, basically, "unnecessary dramatisation".
0Good_Burning_Plastic8y
I suspect Clarity was thinking about unprotected sex with somebody they've already been in a stable monogamous relationship with for a while (possibly partly because they want a baby), whereas polymathwannabe was thinking about something more like a one-night stand with a stranger. But if the latter is right, the dramatization ain't that unnecessary, at least in certain geographical locales.
0Lumifer8y
In such geographical locales a lot of things, starting with just being there, tend to be a matter of life and death.
2Gunslinger8y
My take is that he meant a black and white view of risk, which can be visualized using a SAFE | RISK coin rather than a SAFE ------------------ RISK continuum. And to be somewhat on topic, in some areas of the world crossing the street can be either safer or more risky.
0Dagon8y
I'd love to see the correlation across locations between risk of street-crossing and risk of unprotected sex. I suspect it's noticeably positive.
0Lumifer8y
Hm :-) You'll probably find two clusters: the first one will correspond to big cities and the other will correspond to failed states. Though I'm not sure there's that much car traffic in the failed states.
0passive_fist8y
Statistically, withdrawal is just as effective as condoms at preventing pregnancy; STDs are a bigger concern but the risk can be minimized with a checkup. However, condoms are not effective at preventing transmission of many types of STDs either.
-2[anonymous]8y
HIV is the only non-transient or trivial STI. The actual risk is negligible for non-regular heterosexual contact with a given person of unknown status. However, the anxiety will be harmful enough that I'd rather not.

Game theory (Avinash Dixit and Barry Nalebuff) says carrying a gun is a dominant strategy. Does it favor concealed or open carry? TIA.

1Manfred8y
The first big problem is obviously that things are only proven given some starting premises, and in this case those premises are highly questionable. Carrying a gun has plenty of costs that might outweigh the benefits. Obviously it costs money, and people's reactions to you may be a cost, but I think the most interesting, and possibly biggest, cost may be the mortal one. Gun accidents are rare, but they happen, especially if you're going to be carrying your gun around loaded, so in order to check whether it's worth it to carry a gun, one of the things you might want to estimate is the risk of accidents. Even more interesting to me is the risk that if I become temporarily suicidal, having a gun might increase my probability of suicide, and right now I don't want my future self to commit suicide (unless terminally ill, etc.).
0Dagon8y
I only read a synopsis of their book, but it's massively incorrect to take their statements as "game theory says" anything about carrying a gun in the real world. In their incredibly wrong payoff model, gun ownership does dominate. But that payoff model is simply insane.
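For readers who want the term unpacked: a dominant strategy is one that does at least as well as every alternative no matter what the other player does. Below is a minimal sketch with an invented payoff matrix (not the book's actual numbers) in which "carry" dominates, which is the structure Dagon is objecting to.

```python
# A toy 2x2 game illustrating what "dominant strategy" means. Each player
# chooses "carry" or "dont"; payoffs[(my_choice, their_choice)] is my payoff.
# The numbers are invented so that "carry" dominates, mirroring the payoff
# model being criticised -- the whole dispute is whether such numbers are realistic.
payoffs = {
    ("carry", "carry"): -5,
    ("carry", "dont"):   5,
    ("dont",  "carry"): -10,
    ("dont",  "dont"):   0,
}

def dominates(a, b, opponent_choices=("carry", "dont")):
    """True if strategy `a` does at least as well as `b` against every
    opponent choice (weak dominance)."""
    return all(payoffs[(a, opp)] >= payoffs[(b, opp)] for opp in opponent_choices)

print(dominates("carry", "dont"))  # True with these invented payoffs
print(dominates("dont", "carry"))  # False
```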
0WhyAsk8y
What, then, are appropriate payoff models for carrying or not carrying, concealed or open?
-3Elo8y
A simple thought experiment: you are carrying a gun. Someone else decides they want to do something dangerous with a gun (shoot some people, commit a gun crime, etc.). They know they are about to become a target, because everyone else is usually also self-preserving. They decide to shoot anyone with the means to slow them down. That primarily includes everyone else with a gun, anyone else strong enough to overpower them, and anyone able to alert the authorities. Who do they shoot first? Anyone else with a gun. Likely not a safe position in which to carry a gun.
2wizard8y
That's the reason Batman doesn't use guns.
0Lumifer8y
I would recommend making some numerical calculations of the probabilities involved, in particular with respect to finding oneself at the scene of some rampage AND being selected as a target because you have a gun AND not being able to do anything about that (like follow the example of Han Solo).
0WhyAsk8y
The decision tree for this gets complex even after the split for concealed or open carry. Also, shot through the heart, a person has about 10 seconds left to act (to return fire, I hope).
0Elo8y
Given the choice, I'd rather avoid the position of "most likely to get shot first" than gain the utility of "having 10 seconds in which to shoot back right before I die".