Open Thread, January 4-10, 2016
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
Comments (430)
Some political predictions (Edited for formatting):
What's your reasoning behind putting such a low probability on this one? According to this data, this proposition has been true 35 years in a row. The decade 1971-1980 was only 0.08 degrees C warmer than 1961-1970, but every ten-year period since then (beginning with 1972-1981) has been more than 0.12 C warmer than the previous ten years.
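The decade-over-decade comparison described here is easy to make concrete. The sketch below uses a made-up linear warming trend, not the actual temperature record, just to show the computation being claimed:

```python
# Hypothetical annual temperature anomalies with a simple linear trend
# (illustration only; not real data).
anomalies = {year: 0.015 * (year - 1960) for year in range(1951, 2016)}

def decade_mean(end_year):
    """Mean anomaly over the ten years ending in end_year (inclusive)."""
    return sum(anomalies[y] for y in range(end_year - 9, end_year + 1)) / 10.0

# "1971-1980 vs 1961-1970": difference between two adjacent decades
print(decade_mean(1980) - decade_mean(1970))

# Rolling check: each decade vs the decade ten years earlier,
# for 1972-1981 through 2006-2015 (35 overlapping periods)
diffs = [decade_mean(e) - decade_mean(e - 10) for e in range(1981, 2016)]
print(all(d > 0.12 for d in diffs))  # True
```

With any roughly linear trend above 0.012 C/year, every such decade-over-decade difference clears the 0.12 C threshold, which is the shape of the claim being made.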
The choice of starting year has a substantial effect.
What are "Cor:" and "Rel:"? Are those conditional predictions?
What is the meaning of the global temperature predictions? Are you going to compare the average temperature of 2025 to the average temperature of 2015? The average of one decade to another?
Are you going to bet on this? Your stock predictions and 2016 political predictions are pretty far from consensus and easy to bet on.
Yes. ETA: Decade.
No. Two reasons: First, my internal ethics system puts no value on things I do not feel I have earned by merit of production, so I can literally only lose in the proposition. Second, I'm putting this here so I remember my predictions to calibrate my confidence levels, which is why I didn't put it in the latest Open Thread, where it would be more widely exposed.
My last set of predictions, made around eight to ten years ago, are also online, but because I do not want to associate the username under which I made them with this one, I cannot share them; the short summary, however, is that they shared the same political nature as these, and they were accurate to a degree which surprised even me. I didn't attach probabilities at the time; that's a thing I'm borrowing from this community. I suspect my guesses will be correct on average while my probabilities will be off on average, but that's what this process is for.
Why?
Ever-increasing dissatisfaction with the government combined with the illusion of change provided by swapping political parties. The presidency has been passed back and forth between the parties for the last few decades as a result; the candidates really don't matter all that much, because voters aren't voting for the candidate, they're voting against what they see as the current status quo.
I'm expecting an acceleration, actually, as the current generation, with expectations shaped by the internet era, becomes the dominant voting force; which is to say, within the next twenty years, all presidents will become one-term presidents, and third party presidents will become viable contenders after the collapse of the major parties into infighting, unbolstered by eight-year terms in which the losing party reconsolidates its coalitions.
The people's evaluation of Obama's performance appears rather even, and steadily so if you look at previous years in the same page.
It's curious that you think this is a counterargument, particularly given that Obama's performance evaluation is historically low for a president.
Could you please briefly explain why you give Sanders twice the chance of Hillary?
I am not watching politics too closely, but my impression was that Hillary is "part of the system" and will also play the gender card, while Sanders is a "cool weirdo" (less so than Ron Paul in 2012, but in a similar direction), which makes him very popular on the internet; but the votes in real life will not follow the internet polls.
I would guess the answer is in the prediction in between the Clinton and Sanders nomination predictions: "Hillary to be indicted on criminal charges: 50%". Presumably that would hurt her chances of nomination.
Some of these figures seem implausible to me. The US presidential predictions are fairly strange, but others are worse. 63% probability of a 60%-80% decline in stock prices within two years? Really? (And, given that, why so little probability attached to a smaller decline? What's the underlying model here?) And what exactly is OrphanWilde's mental model of the WHO's attitude to US healthcare, that predicts such a huge influence of the existence of a US national health database on how the WHO assesses countries' health?
This isn't the whole of it, but it contributes, along with personal issues Clinton is struggling with. The bigger issue is that the coalition is fractured. If Sanders weren't playing softball against Hillary, it wouldn't even be a question, but I think he believes playing hard politics against her would damage his chances against Trump by fracturing the Democratic coalition along gendered lines. The Democratic coalition is at its weakest leaving a Democratic presidency, since anything they have achieved results in a less interested coalition member group whose goals are already at least partially achieved, and anything they haven't achieved results in a frustrated coalition member group whose goals were perceived to be passed over.
United, the Democrats win; their coalition is larger than the Republican base. Unfortunately, they're at their least united right now, and Sanders can't afford to fracture them any further. Hillary, on the other hand, seems perfectly happy to weaken the coalition in order to win the nomination.
Seems fairly plausible, but why put this specifically in terms of the Democrats? The same will apply to the Republicans, or any other party anywhere whose support comes from anything other than a perfectly homogeneous group.
On the face of it, that should make her more likely to get nominated. Are you suggesting that the Democratic Party's electorate is sufficiently calculating to reason: "She's doing these things to get nominated, they seem likely to piss off Sanders supporters, that will hurt us in the general election, so I won't vote for her in the primary"? Colour me unconvinced.
The Republicans are less of a coalition than the Democrats, and more an alliance of two groups: social conservatives and economic liberals.
This isn't why Sanders will win, this is why he's still behind. It's a short-term strategy, however, which she started too soon; the primary voters aren't going to vote against Hillary because they don't think they'll win in the general election, they're going to vote against Hillary because she's alienated them to pander to her base.
So what? If your argument is "if they achieve group G's goals, group G will be unmotivated because they've already got what they need; if they don't, group G will be unmotivated because they'll think they've been neglected", surely this applies whether group G is 10% of the party's support or 50%.
Which is easier: Ordering food that ten people need to agree upon, or ordering food that two people need to agree upon?
I don't see the relevance of the question. The argument wasn't "It's difficult for the Democrats to do things that will please all their supporters, because their supporters are a motley coalition of groups that want different things". It was "Support for the Democrats will be weak in this situation, because each group will be demotivated for one reason if they've got what they want and demotivated for a different reason if they haven't got what they want".
It's relevant. A given Democrat is likely a Democrat for their one issue; on all other issues, they tend to revert to the mean (which is why, historically, Democrats tend to rate their party lower on listening to the base than the Republicans do). The Democratic platform is a collection of concessions and compromises between the different coalitions, and is attractive only because of a given coalition's particular interests; the rest of what it offers isn't particularly attractive to its constituents.
Republican objectives tend to be more in-line with what its constituents want, since it is only catering to a couple of different factions. It isn't invulnerable, of course, as we see right now with the fight between the conservatives and the pragmatists in the party, but is more resilient to this.
The outcome is a Republican base that turns out fairly consistently, and a Democratic base that turns out only when they feel they are losing.
The moderates, meanwhile, swing back and forth based on whoever has annoyed them the most recently. Since the party that isn't in power can't do much to annoy them, and things that have happened are more salient than things that might happen, you get elections that swing from party to party each election cycle. With increasing media exposure (both through the traditional media since Watergate, and the Internet more recently), they're increasingly aware of the smallest annoyances, which is accelerating the process.
Systemic overvaluation of stocks relative to risk as a result of tax benefits combined with overdue bills from the last three economic shocks. The full extent of the drop will require including inflation in consideration.
It's not the WHO's attitude towards US healthcare, it's a difference in attitudes towards national pride between the US and... everybody else. In the US, outward patriotism is combined with criticism of our institutions; representatives of the US are all-too-happy to say what we should do better, but still insist we're great anyways. Elsewhere, it's dangerous and right-wing (in the European rather than US sense) to be outwardly patriotic (unless the government is dangerously right-wing already, which is to say, requires patriotism of this form), but that gets combined with a resentment of any implication that there's anything wrong with their country or culture.
So ranking systems tend to accentuate the things Europe (which is powerful enough to get its say) does well (such as national health databases, repeatedly, for every category of health) while making sure the US ranks below them so they can say they're doing better than the central modern superpower (because Asia doesn't count and nobody wants to annoy China).
What I don't understand is that you can attach a 63% probability to a decline of at least 60%, but at most a 7% probability to a decline of, let's say, 20%-60% (can we agree that a 20% decline would count as a "market slump"?).
So, in fact, it is the WHO's attitude towards US healthcare that's relevant here. Anyway, your cynicism is noted but I can't say I find your argument in any way convincing. (In fact, my own cynicism makes me wonder what your motive is for looking for explanations for the US's poor ranking other than the obvious one, namely that the US actually doesn't do healthcare terribly well despite spending so much.)
The Wikipedia page on this stuff says that the WHO hasn't been publishing rankings since 2000 (which I think actually makes your prediction pretty meaningless), and that the factors it purports to weigh up are health as measured by disability-adjusted life expectancy, responsiveness as measured by "speed of service, protection of privacy, and quality of amenities", and what people have to pay. I don't see anything in there that cares about national health databases (except in so far as they advance those other very reasonable-sounding goals).
They're separate predictions.
I went through its ranking criteria about a decade ago, and the database thing came up in every single ranking, dropping even our top-caliber cancer treatment to merely average.
So what? If you hold that
then you necessarily think there's at least a ~63% probability of at least 60% decline and at most a ~7% probability of a decline between 20% and 60%.
(And the real weirdness here, actually, comes from the second prediction more or less on its own.)
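For concreteness, here is the arithmetic behind that objection, assuming the two original predictions were P(market slump, i.e. a decline of at least 20%) = 70% and P(60-80% decline) = 63% (the 70% figure is inferred from the "7%" remainder mentioned above):

```python
# Assumed figures from the original prediction list (the 0.70 is inferred):
p_slump = 0.70   # any decline of at least 20%
p_big = 0.63     # decline of 60-80%, which is itself a kind of slump

# Since a 60-80% decline is a subset of "slump", whatever probability
# is left over is the most that can go to moderate (20-60%) declines:
p_moderate_max = p_slump - p_big
print(round(p_moderate_max, 2))  # 0.07
```

That lopsided split, nearly all the decline probability concentrated in the catastrophic range, is exactly the weirdness being pointed out.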
Interesting. Do you have more information?
It's not that weird. Think about predicting the size of the explosion of a factory filled with open barrels of gasoline and oxygen tanks. I think the global economy is filled with the economic equivalent of open barrels of gasoline.
Not at the moment. It's been literally years since I've done any serious research on global healthcare. (Working in the health industry tends to make you stop wanting to study it as a hobby.)
So (if I'm understanding your analogy right) you expect that any drop in the market will almost certainly lead to a huge crash?
From the 17th to the 25th of August last year, the S&P 500 dropped by about 11%. This led to ... about a month of generally depressed prices, followed by a month-long rise up to their previous levels. That doesn't sound to me like an economy filled with open barrels of gasoline.
Any given spark -could- set it off, which is not the same as any given spark definitely setting it off.
If the stock market were responding appropriately to the conditions, then there wouldn't be the equivalent of open barrels of gasoline all over the place. The issue is more structural than that: Interest rates and limited investment opportunities have driven money into the markets, driving prices up, and then keeping them artificially high. Some of this pressure has been relieved by amassing inventory, but that's reached its stopping point, which is starting to cause international trade to falter.
Since approximately when?
What's a major military conflict?
That suggests zero chance for Marco Rubio. Why so low? Especially with 10% left open for a non-Hillary, non-Sanders candidate.
So probability of either Trump or Cruz is 100%?
No, ~83%
How do you go from
"Trump to get Republican nomination: 65%" and "Cruz to get Republican nomination: 35%" to 83%?
Rephrase those as the inverse probabilities (Trump's probability of losing is 35%, Cruz's is 65%), and it will make more sense.
It seems to me that if the probability of Trump winning is 65% and the probability of Cruz winning is 35%, then the probability of Trump or Cruz winning is 100% (since the probability of both Cruz AND Trump winning is 0%).
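The disagreement comes down to how the two numbers combine. A quick sketch of both readings, treating the nominations as mutually exclusive outcomes of one contest versus (incorrectly) as independent events:

```python
p_trump, p_cruz = 0.65, 0.35

# Nominations are mutually exclusive: P(Trump and Cruz) = 0,
# so P(Trump or Cruz) = P(Trump) + P(Cruz)
p_either_exclusive = p_trump + p_cruz
print(p_either_exclusive)  # 1.0

# Treating them (incorrectly) as independent events gives a different
# number, though still not 83%:
p_either_independent = 1 - (1 - p_trump) * (1 - p_cruz)
print(round(p_either_independent, 4))  # 0.7725
```

Under the mutually exclusive reading, which is the right one for a single nomination, the 65% and 35% leave no probability mass for any other Republican candidate.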
Couple of questions. What's your definition of a "market slump" and/or an "economic crisis"? Also, what's Health Index and what is the US National Health Database?
Let's say for simplicity a nationally recognized economic downturn amounting to at least a recession.
I guess an unofficial name for the WHO's ranking system for national healthcare systems, last performed in 2000. http://www.who.int/whr/2000/media_centre/press_release/en/
The US National Health Database is a theoretical thing that is in the works to provide patient information nationally to any hospital or medical provider which requires it, and funding was set aside in the PPACA (Obamacare); it's being implemented at a state level by federal grant, and I believe is intended to eventually operate as a set of interacting state databases rather than a single database stored somewhere.
You have a 90% probability that this "downturn" will lead to the US stock market losing two thirds of its value, which is worse than 2008. That implies a bit more, um, severe an event.
Ah, I know an expression that fits the situation well...
Yes.
Would anyone actually be interested if I prepared a post about the recent "correlation explanation" approach to latent-model learning, the "multivariate mutual information"/"total correlation" metric it's all based on, supervenience in analytical philosophy, and implications for cognitive science and AI, including FAI?
Because I promise I didn't write that last sentence by picking buzzwords out of a bag.
I might be super mean about this!
Is "super mean" still a bad thing, or now a good thing?
In the words of Calvin's dad, it builds character.
Ah. You mean you'll act as Reviewer 2. Excellent.
There is a relevant quote from Faust by Mephistopheles.
That being, for those of us too gauche to have read Faust in the original?
Ein Teil von jener Kraft,
Die stets das Böse will und stets das Gute schafft.
Ich bin der Geist der stets verneint!
Part of that power which would
Do evil constantly and constantly does good.
I am the spirit of perpetual negation
Anyway, could you PM me your email address? I figure that for a start at being Reviewer 2, I might as well send you the last thing I wrote along these lines, and then start writing the one I've actually just promised.
I really don't think that Reviewer 2 has anything to do with Lucifer, or with the Catholic view of Lucifer/Satan as self-thwarting.
I think you are overestimating how literally and seriously Ilya intended his reference to be taken.
I don't think the intended parallel goes beyond this: the devil (allegedly) tries to do evil and ends up doing good in spite of that; a highly critical reviewer feels (to the reviewee) like he's doing evil but ends up doing good in spite of that.
I will be very interested to read both your account of correlation explanation and Ilya's super-meanness about it.
I'd be interested! I hereby promise to read and comment, unless you've gone totally off the bland end.
Ok, then, it'll definitely happen Real Soon Now.
Sapir-Whorf-related question:
Although I've been an informal reader of philosophy for most of my life, only today did I connect some dots and notice that Chinese philosophers never occupied themselves with the question of Being, which has so obsessed Western philosophers. When I noticed this, my next thought was, "But of course; the Chinese language has no word for 'be.'" Wikipedia didn't provide any confirmation or disconfirmation of this hypothesis, but it does narrate how Muslim philosophers struggled when adapting Greek questions of Being into their own words.
Then I asked myself: Wait, did the Chinese never really address this subject? Let's see: Confucianism focused on practical philosophy, Taoism is rather poetry instead of proper ontology, and Buddhism did acknowledge questions about Being, but saw them as the wrong questions. I'm not sure about the pre-Confucian schools.
If it turns out to be the case that the main reason why Chinese philosophers never discussed Being is that Chinese has no word for "be," that would seem to me to be a very strong indication that Western philosophers have spent centuries asking the wrong questions, specifically by falling into the confusion mode of mistaking words for things, a confusion mode that I'm tempted to blame Aristotle for, but I need to reread some Aristotle before I can be sure of such an accusation.
Am I missing something here?
A particularity of English is that "to be" means a lot of different things. It covers three distinct categories in natural semantic metalanguage.
Now I am curious whether most of the philosophy of "Being" is merely a set of confusions caused by conflating some of those different meanings.
Or that Eastern philosophers have spent centuries failing to ask the right questions. If language A makes it easy to ask a certain question and language B makes it hard, it doesn't follow that it's a bad question arising only from quirks of language A; instead it could be a good question hidden by quirks of language B (or revealed by in-this-case-beneficial quirks of language A).
It seems a stretch to put Buddhism in the category of don't-really-care-about-Being. Rather, it's an important point that there is no being and realizing so brings countless bliss and enlightenment.
I was under the impression that 是 was Chinese for "to be". The nuance isn't quite the same--you can say 是 in response to "are or aren't you American?", but that's more or less subject-omission--but it seems close enough?
But my experience with Chinese includes only two years of Mandarin classes and a few podcasts; I haven't studied the linguistics in so much detail, and that studying ended 5 years ago, so if you're basing this on something I don't know, I'd be glad for the correction.
I know much less Chinese than you do. Having said that:
The Chinese version of "be" lets you apply a noun predicate to your subject, but not an adjectival predicate: you can use it to say "I am a student" or "I am an American" but not "I am tired" or "I am tall;" that is, it doesn't state the attributes of a noun but an equivalence between two nouns. To say "I am tall," you just say "I tall." All of the other meanings of "be" (the ones relevant to this problem are those related to the essence/existence question) are expressed with various other words in Chinese.
If that is the case I consider it pretty unlikely that this has any relevance to Chinese or Western philosophy. Especially since in Greek saying "I am tall" is basically saying "I am [something tall]" which according to your description you could also say in Chinese if you had a word for "something tall."
Ah, yeah, that's true. Adjectives exhibit verb-like behavior in several East Asian languages; that they also do this in Chinese kinda slipped my mind.
So I think I've genuinely finished http://gwern.net/Mail%20delivery now. It should be an interesting read for LWers: it's a fully Bayesian decision-theoretic analysis of when it is optimal to check my mail for deliveries. I learned a tremendous amount working my way through it, from how to much better use JAGS to how to do Bayesian model comparison & averaging to loss functions and EVSI and EVPI for decision theory purposes to even dabbling in reinforcement learning with Thompson sampling/probability-matching.
I thought it was done earlier, but then I realized I had messed up my Thompson sampling implementation and also vectorspace alien pointed out that my algorithm for deciding what datapoint to sample for maximizing information gain was incorrect & how to fix it, and I have made a lot of other small improvements like more images.
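For readers unfamiliar with Thompson sampling, here is a minimal Beta-Bernoulli bandit sketch of the general technique mentioned above; it is an illustration with made-up success rates, not the implementation from the linked write-up:

```python
import random

n_arms = 3
true_rates = [0.2, 0.5, 0.7]   # hypothetical success probabilities
successes = [1] * n_arms        # Beta(1, 1) uniform priors
failures = [1] * n_arms

random.seed(0)
for _ in range(2000):
    # Sample a plausible rate for each arm from its posterior,
    # then play the arm whose sample is highest (probability matching)
    samples = [random.betavariate(successes[i], failures[i]) for i in range(n_arms)]
    arm = samples.index(max(samples))
    if random.random() < true_rates[arm]:
        successes[arm] += 1
    else:
        failures[arm] += 1

# The arm with the best posterior mean should be the 0.7 arm by now
best = max(range(n_arms), key=lambda i: successes[i] / (successes[i] + failures[i]))
print(best, successes[best] / (successes[best] + failures[best]))
```

The appeal for a decision problem like mail-checking is that exploration falls out of the posterior sampling automatically, with no separate exploration schedule to tune.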
Related to this, I am trying to get a subreddit going for statistical decision theory links and papers to discuss: https://www.reddit.com/r/DecisionTheory/
Right now it's just me dumping in decision-theory related material like cost-benefit analyses, textbooks, relevant blog posts, etc, but hopefully other people will join in. We have flair and a sidebar now! If anyone wants to be a mod, just ask. (Workload should be negligibly small, this is more so the subreddit doesn't get locked by absence.)
If anyone with graphics skills would like to help me make a header for the subreddit, I have some ideas and suggested images in https://plus.google.com/103530621949492999968/posts/ZfEtb54aN4Q for visualizing the steps in decision analysis.
edited: this post used to be dumber
gamification for flow experiences.
I'm a Less Wrong boss?
And apparently an above-average boss in terms of friendliness, but below-average in terms of intelligence. I'm not sure how I should feel about that. I think I'll go with "Amused".
On averageness: I don't understand how I was thinking when I made that. I no longer believe in the aforementioned categorisation. It just seems really weird.
On boss: What a weird term to use. I suppose I was trying to get at the fact that "below average" in this context is relative to a population, LW, that I already consider above average.
It's like a new chapter for "How to Make Friends" -- rate them by how intelligent and friendly they seem to you, and publish the results online. :D :D :D
If you insist (which IMHO is not a good idea), perhaps you could at least somewhat taboo the words "intelligence" and "friendliness". The words themselves are just labels that different people use differently; and since your definition can differ from mine, your chart is useless to me. Something like "I am impressed by how gwern manages to apply statistical software to anything" would convey information I could agree with.
How profitable are student club party and ballroom events? I am surprised external companies haven't sprung up to handle the organising of those events on student clubs' behalf for a tidy profit, in exchange for access to an attendee base and marketing channels. In return, the student club members get value and their leadership gets extra funds.
I had a friend who organized these kinds of events. She made okay money for the amount of time invested in organizing the event itself, but events were sporadic, and once you counted the time investment in landing the event, a retail job paid rather better. If you can achieve the kind of success where people seek you out, it would pay pretty well, but that requires considerable social capital and skill, and there are other opportunities where similar social capital and skill would pay better.
Your average disco is such a company. They make parties that people can enter by paying money.
Not profitable. Companies try; venues, for example, regularly email clubs and try to get business from them.
source: personal experience.
I concur, having advised many student organisations over the years in the US and UK. Often such events are supported by organisation funds raised in other ways, rather than as generating income. And many universities have a body of some kind that serves to advise and support student organisations (including administrative and events advice).
Finally, in many cases, students actually want to gain experience organising events, sometimes for personal development and other times just for CV fodder. Farming events out to an external company eliminates this possibility.
Who buys government bonds at sub-zero rates? Why can't those institutions simply put the money into a bank vault?
If I understand correctly, FDIC insurance costs more that way, so whatever you save in negative interest, you'd lose and then some on FDIC insurance.
Maybe the alternative to buying government bonds is putting the money in an account at the central bank, which has interest even more negative? Here is the ECB addressing the question of why a bank would be willing to pay interest to deposit at the central bank, rather than putting paper in a vault: because vaults cost money to build and operate.
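The ECB's point can be illustrated with toy numbers (all rates here are made up for illustration): a negative deposit rate can still be the cheaper option once vault costs are counted.

```python
balance = 1_000_000_000   # EUR 1bn of excess reserves
deposit_rate_bp = -40     # -0.40% central-bank deposit rate (illustrative)
vault_cost_bp = 60        # 0.60%/yr for vault space, insurance, transport (illustrative)

# Annual cost of each option, in whole euros (basis points / 10,000)
cost_at_central_bank = balance * -deposit_rate_bp // 10_000
cost_in_vault = balance * vault_cost_bp // 10_000
print(cost_at_central_bank, cost_in_vault)   # 4000000 6000000
print(cost_at_central_bank < cost_in_vault)  # True: pay the negative rate
```

The same logic applies to slightly-negative-yield bonds: they only need to beat the all-in cost of storing physical cash, not zero.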
Aren't the big banks publicly traded and expected to grow by stock market analysts? How does that work when they get negative interest rates?
They get positive expected real interest from loans they give, but pay negative real interest on deposits they receive.
If a bank buys a government bond that's "giving a loan" and I understand that to give negative interest in certain cases.
Banks have a variety of ways of making money besides collecting interest on deposits they make.
Dealing with shame by embracing a vulnerability, fear of vulnerability and letting that shame be
I feel full of shame which I can’t explain. I feel that it is linked to my gender identity, sexuality and/or body.
why
When I asked Google why I feel this shame, with search terms linked to the above suspicions, I landed on a page suggesting that shame in adult males is linked to child abuse. The point that really hit home was the comment: "Males are not supposed to feel vulnerable or fearful about sex." Was I sexually abused as a child? I didn't think so. Though one link on the page, hyperlinked as 'sorting it out for yourself', appealed to my confusion. I clicked on it and reconsidered. There are some circumstances from my childhood that I had not considered child abuse that I can reframe as child abuse. The article disclaims that fussing over labelling is not particularly helpful. But is this a healthy reframe or experience to identify with? That remains unclear to me. Those articles were not so helpful other than to indicate a dead end.
how
Rather than ask why, I reckoned it may be more prudent to ask how: how can I overcome these feelings? My line of questioning was influenced by the memory of a friend who once mused that she is grateful for all the relationships that didn't work out, because there was something good in all of them, something to learn from, and something which helped her grow... or something like that. I supposed that my feelings of inadequacy may relate to my past relationship experiences... and lack thereof. Another Google search yielded neat articles about learning from relationships that didn't work and healing past relationships. I particularly like the way the latter article summarised its key points visually at the start. So, I looked for other articles in the same category on that website and found two articles that I reckon will be useful guides. The first is about surviving bad dates and healing childhood scars that create bad adult relationships. I feel good about what I have seen here. So, I hope it will be useful to y'all.
The key points for me in this research experience are the points given for what not to do in one (but not the other) of the Expert Beacon articles. The what-to-dos are fairly available knowledge; I reckon people are less likely to condemn poor ways of doing things in real life. So, the article was relatively valuable, and invoked a stopping rule by cutting off the reason I was searching for an answer in the first place: the drive to *suppress these feelings of vulnerability while focussing on the negative*.
DON’T
Overcoming fear is always healthy, but you should not let social expectations dictate how you have to feel. There's no single way how men are supposed to behave. Trying to force masculinity to fit inside a rigid box of allowed behaviors is a recipe for frustration and self-hatred. If you have feelings of vulnerability and fear, rather than denying or repressing them, you can observe and understand them.
In cases like this I always recommend the Empty Closets forum. Members are knowledgeable and compassionate.
Just a sidenote: there are multiple "boxes" for masculinity, and when someone tells you to get out of the box, they often have an alternative box ready for you. (For example, instead of constant checking whether something you want to do is not "girly" or not "gay", they may offer you to constantly check your "privilege".) Remember that you can avoid those new boxes too.
"Abuse" is not a binary thing; it's a scale. Just because you were not at one extreme, does not mean that you were necessarily at the other extreme or near it.
Depends on how you are going to react to the label. The healthy aspect is that it may allow you to see causalities in your life that you have previously censored from yourself; and then you can take specific actions to untangle the problems.
The unhealthy aspect is if you take it with a "fixed mindset", and start crying about your past ("I am tainted, forever tainted"), or in extreme case if you start building some ideology of revenge against the whole evil society (or parts of the society) responsible for not preventing the bad things from happening to you.
Seems like you are choosing generally the good direction.
Okay, I wouldn't go that far. ("What doesn't kill you, makes you stronger", Just World Hypothesis, etc.) It is good to react to bad things by deriving useful lessons. However, in a parallel universe you could have good things happen to you, and still derive useful lessons from them. (Or you could derive useful lessons from bad things that happened to other people.) Bad things are simply bad things, no need to excuse them, no cosmic balance that needed to happen to make you a better person. That would mean denying that those things were actually bad.
Being able to turn a bad experience into a good lesson, is a good message about you and your abilities. Not about the bad experience per se. A different person could remain broken by the same experience.
I'd say: Use the past to extract useful information and move on, not to build a narrative for your life.
I'd say: Admit that some people have fucked up, but don't waste your time planning revenge (it is usually not the optimal thing to do with your life). Maybe don't even analyze too much who or how precisely have fucked up, if such analysis would take too much energy.
I agree.
Depends on context. Feelling vulnerable (in situations where you feel safe) is okay. In public, we all wear masks, so it would be inappropriate to e.g. start thinking about your childhood when you are at a job interview.
Keep focused on where you want to get.
That really depends. Authenticity is often more useful than wearing a mask.
In present politics Trump is successful while being relatively authentic. There's a lot of power in it.
Respectfully, Trump is very skilled at sounding authentic. I'm not sure that he is authentic, but some other politician could easily be more authentic while lacking Trump's skills at sounding authentic.
That's dangerous territory. Quite a lot of people got talked by their therapists into having false memories of abuse.
There are many psychological techniques for overcoming feelings. There's CBT, which includes workbooks like The Feeling Good Handbook, and there's Focusing.
"That's dangerous territory. Quite a lot of people got talked by their therapists into having false memories of abuse."
I would want to have a hell of a lot of evidence showing a clear statistically significant problem along these lines before I attempted to discourage a person from seeking expert help with a self-defined mental health problem.
Nothing I said is about discouraging Clarity from seeking out an expert for mental health. A well-trained expert should know what creates false memories and be aware of the dangers.
From my perspective, the idea that false memories got planted is uncontroversial history taught in mainstream psychology classes.
"the idea that false memories got planted is uncontroversial history"
Certainly, but is this a significant concern for the OP at this time, such that it bears mention in a thread in which he is turning to this community seeking help with a mental health problem? "Dangerous territory" is a strong turn of phrase. I don't know the answer, but I would need evidence that p(damage from discouraging needed help) < p(damage from memory implantation in 2015). Would you mention Tuskegee if he was seeking help for syphilis? Facilitated communication if he was sending an aphasic child to a Speech-Language Pathologist? Just my opinion.
This community is not "expert help" for a mental health problem in the sense that people here are trained to deal with the issue in a way that doesn't produce false memories.
That's not at all what he's doing. In this post he doesn't speak about going to an expert to get help. He instead speaks about acting based on reading on the internet of a theory about shame.
Clarity spoke in the past about having seen a psychologist and I don't argue that he shouldn't.
Game Theory (Dixit, Nalebuff) says carrying a gun is a dominant strategy. Does it favor concealed, or open carry? TIA.
A simple thought experiment: You are carrying a gun. Someone else decides they want to do something dangerous with a gun (shoot some people, commit a gun crime, etc.). They know they are about to become a target, because everyone else is usually also self-preserving. They decide to shoot anyone with the means to slow them down. That primarily includes everyone else with a gun, anyone else strong enough to overpower them, and anyone able to alert the authorities.
Who do they shoot first? Anyone else with a gun. Likely not a safe position in which to carry a gun.
That's the reason Batman doesn't use guns.
I would recommend making some numerical calculations of the probabilities involved, in particular with respect to finding oneself at the scene of some rampage AND being selected as a target because you have a gun AND not being able to do anything about it (like follow the example of Han Solo).
The decision tree for this gets complex even after the split for concealed or open carry.
Also, shot through the heart, a person has about 10 seconds left to act (to return fire, I hope).
Given the choice, I'd rather avoid the position of "most likely to get shot first" more than gain the utility of "having 10 seconds in which to shoot back right before I die".
I only read a synopsis of their book, but it's massively incorrect to take their statements as "game theory says" anything about carrying a gun in the real world. In their incredibly wrong payoff model, gun ownership does dominate. But that payoff model is simply insane.
What are then appropriate payoff models for carrying or not carrying, concealed or open?
First big problem is obviously that things are only proven given some starting premises, and in this case those premises are highly questionable. Carrying a gun has plenty of costs that might outweigh the benefits.
Obviously it costs money, and peoples' reactions to you may be a cost, but I think the most interesting, and possibly biggest, cost may be the mortal one. Gun accidents are rare but they happen, especially if you're going to be carrying your gun around loaded, so in order to check whether it's worth it to carry a gun, one of the things you might want to estimate is the risk of accidents. Even more interesting to me is the risk that if I become temporarily suicidal, having a gun might increase my probability of suicide, and right now I don't want my future self to commit suicide (unless terminally ill etc.).
A side note.
My mother is a psychologist, father - an applied physicist, aunt 1 - a former morgue cytologist, aunt 2 - a practicing ultrasound specialist, father-in-law - a general practitioner, husband - a biochemist, my friends (c. 5) are biologists, and most of my immediate coworkers teach either chemistry or biology. (Occasionally I talk to other people, too.) I'm mentioning this to describe the scope of my experience with how they come to terms with the 'animal part' of the human being; when I started reading LW I felt immediately that people here come from different backgrounds. It felt implied that 'rationality' was a culture of either hacking humanity, or patching together the best practices accumulated in the past (or even just adopting the past), because clearly, we are held back by social constraints - if we weren't, we'd be able to fully realize our winning potential. (I'm strawmanning a bit, yes.) For a while I ignored the voice in the back of my mind that kept mumbling 'inferential distances between the dreams of these people and the underlying wetware are too great for you to estimate', or some such, but I don't want to anymore.
To put it simply, there is a marked difference within biologists in how reverently they view the gross (and fine) human anatomy, in how easily they accept that a body is just a thing, composed of matter, with charges and insulation and stuff -just a system of tubes, but still not a car in which you can individually tweak the axles and the windshield (probably). (This is why I think Peter Watts is so popular on LW - the idea that you can just tinker with circuitry and upgrade people.)
Psychologists are the most 'gentle'; they and the doctors have too many 'social responsibilities' baked in to comfortably discuss people as walking meat. Botanists (like me) don't have enough knowledge to do it, but we at least are aware of this. Biochemists are narrow-minded by necessity (too many pathways). Vertebrate zoologists are best (Steinbeck, I think, described it in his book about the Sea of Cortez), in that you can count on them to be brutally consistent. Physicists - at least the one I know - like to talk about 'open systems' and such, but they (he) could just as plausibly speak about some totally contrived aliens.
I know it is dishonest to ask LW-ers to spend time on studying exactly human anatomy, but even a thorough look at some skeleton should give you a vibe of how defined human bodies are. There are ridges on the bones. There are seams. Try to draw them, to internalize the feeling.
I'm sorry for the cavalier assuming of ignorance, but I think at least some of you can benefit from my words.
I am not sure what exactly you wanted to say. All I got from reading it is: "human anatomy is complicated, non-biologists hugely underestimate this, modifying the anatomy of human brain would be incredibly difficult".
I am not sure what the relation is to the following part (which doesn't speak about modifying the anatomy of the human brain):
Are you suggesting that for increasing rationality, using "best practices" will be not enough, changes in anatomy of human brain will be required (and we underestimate how difficult it will be)? Or something else?
I read Romashka as saying that the clean separation between the hardware and the software does not work for humans. Humans are wetware which is both.
That, and that those changes in the brain might lead to other changes not associated with intelligence at all. Like sleep requirements, haemorrhages or fluctuations in blood pressure in the skull, food cravings, etc. Things that belong to physiology and are freely discussed by a much narrower circle of people, in part because even among biologists many people don't like the organismal level of discussion, and doctors are too concerned with not doing harm to consider radical transformations.
Currently, 'rationality' is seen (by me) as a mix of nurturing one's ability to act given the current limitations AND counting on vastly lessened limitations in the future, with some vague hopes of adapting the brain to perform better, but the basis of the hopes seems (to me) unestablished.
That's also more or less how I see it. I am not planning to perform a brain surgery on myself in the near future. :D
I see three lines of addressing this concern:
1) Anatomy was over a long time under strong evolutionary pressure. Human intelligence is a fairly recent phenomenon of the last 100,000 years; it's a mess that's not as well ordered as anatomy.
2) Individual humans deviate more from the textbook anatomy than you would guess by reading the textbook.
3) The brain seems to be built out of basic modules that easily allow it to add an additional color if you edit the DNA in the eye via gene therapy. People with implanted magnets can feel magnetic fields. Its modules allow us to learn complex mental tasks like reading text, which is very far from what we evolved to do.
Also, human intelligence has been evolving exactly as long as human anatomy; it simply leaped forward recently in ways we can notice. That doesn't mean it hasn't been under strong evolutionary pressure before. I would say that until humans learned to use tools, the pressure on an individual human must have been stronger.
I don't think that reflects reality. Our anatomy isn't as different from a chimpanzee's as our minds are. Most people hear voices in their head that say stuff to them. Chimpanzees don't have language to do something similar.
I'm not saying otherwise! I'm saying that the formulation has little sense either way. Compare: 'there is little observed variation in anatomy between apes in broad sense because the evolutionary pressure constraining anatomical changes is too great to allow much viable variation', 'there is little observed variation in anatomy ..., but not in intelligence, because further evolution of intelligence allows for greater success and so younger branches are more intelligent and better at survival', 'only change in anatomy drives change in intelligence, so apparently there was some great hack which translated small changes in anatomy to lead to great changes in intelligence', 'chimpanzees never tell us about the voices they hear'...
There are million of years invested into the task about how to move with legs. There's not millions of years invested into the task of how brains best deal with language.
What do you understand as evolution of the mind, then, and how is it related to that of organs?
I think adding language produced something like a quantum leap for the mind, and that there's no similar quantum leap for other organs like the human heart. The quantum leap means that other parts have to adapt and optimize for language now being a major factor.
You could look at IQ.
The mental difference between a human at IQ 70 and a human at IQ 130 is vast. Intelligence is also highly heritable. With a few hundred thousand years and a decent amount of evolutionary pressure on stronger intelligence you wouldn't have many low IQ people anymore.
And yet textbook anatomy is my best guess about a body when I haven't seen it, and all deviations are describable compared to it. What I object to is the norm of treating phenomenology, such as the observations about magnets and eye color, as a more-or-less solid background for predictions about the future. If we discuss, say, artificial new brain modules, that's fine by me as long as I keep in mind the potential problems with cranial pressure fluctuations, the need to establish interconnections with other neurons in some very ordered fashion, building blood vessels to feed it, changes in glucose consumption, even the possibility of your children choosing to have completely different artificial modules than you, to the point that heritability becomes obsolete, etc. I am not a specialist able to talk about it. I have low priors on anybody here pointing me to The Literature were I to ask.
I think seeing at least the bones and then trying to gauge the distance to what experimental interference one considers possible would be a good thing to happen.
The XKCD for it: DNA (or "Biology is largely solved"): https://xkcd.com/1605/
What does one call a philosophical position that images have intrinsic meaning, rather than one assigned by an external observer?
What can be said about a person giving voice to such a position? (With the purpose of understanding their position and how best one could converse with them, if at all.)
I am asking because I encountered such a person in a social-network discussion about computer vision. They are saying that pattern recognition is not yet knowledge of meaning, and that yes, meaning is intrinsic to the image.
All that comes to my mind is: I am not versed in philosophy, but it looks to me that science is based on the opposite premise and further discussion is meaningless.
To me it sounds like semantic externalism, i.e. the view that meaning doesn't exist in your head but in physical reality.
Are you sure? I can imagine a dualist who considers that meaning to be part of mental reality but not physical reality.
Link: Introducing Guesstimate, a Spreadsheet for Things That Aren’t Certain
How useful do you think this actually is?
This is awesome. Awesome awesome awesome. I have been trying to code something like this for a long time but I've never got the hang of UI design.
Moderately.
On the plus side it's forcing people to acknowledge the uncertainty involved in many numbers they use.
On the minus side it's treating everything as a normal (Gaussian) distribution. That's a common default assumption, but it's not necessarily a good assumption. To start with an obvious problem, a lot of real-world values are bounded, but the normal distribution is not.
It's open source. Right now I only know very basic Python, but I'm taking a CS course this coming semester and I'm going for a minor in CS. How hard do you think it would be to add in other distributions, bounded values, etc.?
As a matter of programming it would be very easy. The difficult part is designing the user interface so that the availability of the options doesn't make the overall product worse.
Author is on the effective altruism forum; he said his next planned feature is more distributions, and that he specifically architected it to be easy to add new distributions.
How hard it will be to add features depends on the way it's architected, but the real issue is complexity. After you add other distributions, bounds, etc., the user would have to figure out what the right choices are for his specific situation, and that's a set of non-trivial decisions.
Besides, one of the reasons people like normal distributions is that they are nicely tractable. If you want to, say, add two it's easy to do. But once you go to even slightly complicated things like truncated normals, a lot of operations do not have analytical solutions and you need to do stuff numerically and that becomes... complex and slow.
It is already doing everything numerically.
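A minimal sketch of what "doing everything numerically" can look like (hypothetical code, not Guesstimate's actual implementation): combining two uncertain quantities by Monte Carlo sampling works the same way whether or not the inputs are Gaussian, which sidesteps the lack of analytical solutions for things like truncated normals.

```python
import random

def sample_sum(dist_a, dist_b, n=100_000):
    """Draw n samples from each distribution and add them pairwise."""
    return [dist_a() + dist_b() for _ in range(n)]

random.seed(0)
normal = lambda: random.gauss(10, 2)                # an unbounded normal input
lognormal = lambda: random.lognormvariate(1, 0.5)   # a bounded-below input

samples = sample_sum(normal, lognormal)
mean = sum(samples) / len(samples)
# The mean of the sum is the sum of the means (about 10 + e^1.125 ~ 13.1),
# even though the sum of a normal and a lognormal has no simple closed form.
```

The cost is exactly what's noted above: sampling is slower than closed-form algebra, and the spread of your estimates shrinks only as the square root of the sample count.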
Can I edit events that I created on Less Wrong?
It seems I can't. (I ask because I created this event, but when I pasted the details, I neglected to add the city (Melbourne). And now the map is wrong by about 3600 km.)
You should see a "edit meetup" link underneath the map at that link.
Thank you. (In hindsight I should have done a page search for "edit".)
Academic and anti-transhumanist, anti-libertarian, democratic socialist Dale Carrico is in full flow against Eliezer's essay, Competent Elites. The comments have the new (to me) tidbit that the aforementioned essay and this one on IQ are not present in Rationality: From AI to Zombies (a base motive is, of course, attributed).
Too much personal animus toward the Yudkowsky piece in question makes this full flow hard to take seriously.
Oh, that blog is sometimes quite fun. In this particular case he's saying many of the things I would love to say about that essay too.
Welcome back AdvancedAtheist (I guess).
Nope. (I have replaced the far-right political jargon with a more mainstream descriptor.)
Edit: Someone has downvoted this. It's pretty pointless to take umbrage at this happening to an anonymous account, but I would like to know what exactly the downvoter finds objectionable.
There are things one could criticize about that EY article. Coincidentally, I did so in this Open Thread before reading your comment (what EY observed may be specific to IT elites but unusual for rich people in general).
However the linked critique is... a boring rant. It doesn't contain much more information than "I disagree".
If you are not familiar with Carrico's blog and writing style, this is a feature, not a bug.
Looks like a misfeature.
musings
What does an example super-healthy lifestyle look like? Are there any prescriptions one could model their behaviour changes towards? I imagine it would include things like: x amount of exercise, y diet, not smoking, yada yada. The elements that are surprising for a given person would likely be the really important parts. Ideally, if the prescription is sophisticated enough, some kind of prioritisation of the different elements would be helpful.
*
Is there a hedonistic counterpart to effective altruism? I'd sure like to get involved with that :) Imagine that, a community of hedonists, complete with a career advisory service, what products/services to spend your money on and such. Haha. I suppose LessWrong is the closest we get.
*
When someone does something that invalidates or fails to validate you, that's a reflection on them first that may then be attributable to you, or could be to someone else! Though, that second part should be a separate, impartial analysis if you want to interpret with less bias.
*
Cognitive dissonance. Ah. In the past, when I tried to narrow my value-action gap, I inadvertently ended up discovering that some things I thought were my values were not. That bridged the gap without my actually having to change much on the action side. There were important permanent changes in my mental life and health behaviours. However, changes to my ways of relating to people were more minor and/or transient, but nonetheless important. Brings to mind the Bruce Lee quote:
*
I don't understand this at all, and instinctively feel like THAT is linked to the reason I'm >insert inadequacy here<
*
Life as a fund manager podcast, by a fund that is exceedingly transparent with acclaimed opinions and impressive performance. And, importantly, they have a neat, simple website and:
Of particular interest in this community:
It goes into the role of fund management vs index investing, which may help break through the cult of simplistic index investing in the investment threads here; whether investing is harder today than it used to be; and whether a consistent strategy can work or you have to tweak strategies constantly.
*
superficially unlikely major future trend predictions:
Iran's blogfather: Facebook, Instagram and Twitter are killing the web
"The street finds its own uses for things." -- William Gibson
Game Theory (Open Access textbook with 165 solved exercises) by Giacomo Bonanno.
Why too much evidence can be a bad thing
This isn't "more evidence can be bad", but "seemingly-stronger evidence can be weaker". If you do the math right, more evidence will make you more likely to get the right answer. If more evidence lowers your conviction rate, then your conviction rate was too high.
Briefly, I think what's going on is that a 'yes' presents N bits of evidence for 'guilty', and M bits of evidence for 'the process is biased', where M>N. The probability of bias is initially low, but lots of yeses make it shoot up. So you have four hypotheses (bias yes/no cross guilty yes/no), the two bias ones dominate, and their relative odds are the same as when you started.
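The four-hypothesis update sketched above can be made concrete with a toy calculation (every prior and likelihood below is an invented illustrative number, not drawn from any real data):

```python
# Hypotheses: (verdict, process). Bias starts out improbable.
priors = {
    ("guilty", "fair"):     0.49,
    ("innocent", "fair"):   0.49,
    ("guilty", "biased"):   0.01,
    ("innocent", "biased"): 0.01,
}
# P(one witness says "yes" | hypothesis): a fair process tracks guilt
# noisily; a biased process says "yes" almost regardless of guilt.
p_yes = {
    ("guilty", "fair"):     0.75,
    ("innocent", "fair"):   0.25,
    ("guilty", "biased"):   0.99,
    ("innocent", "biased"): 0.99,
}

def posterior(n_yes):
    """Posterior over the four hypotheses after n_yes unanimous 'yes' votes."""
    joint = {h: priors[h] * p_yes[h] ** n_yes for h in priors}
    z = sum(joint.values())
    return {h: j / z for h, j in joint.items()}

post = posterior(30)
p_biased = post[("guilty", "biased")] + post[("innocent", "biased")]
# After 30 unanimous yeses, the 'biased' hypotheses dominate, and within
# them guilt vs. innocence keeps its prior 1:1 odds, as argued above.
```

With these numbers, 30 unanimous yeses push the probability of bias above 99%, while the relative odds of guilty vs. innocent under bias stay exactly where the prior put them.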
So, why not stab someone in front of everyone to ensure that they all rule you guilty?
I believe I read somewhere on LW about an investment company that had three directors, and when they decided whether to invest in some company, they voted, and invested only if 2 of 3 have agreed. The reasoning behind this policy was that if 3 of 3 agreed, then probably it was just a fad.
Unfortunately, I am unable to find the link.
If you are more confident that the method is inaccurate when it is operating, then its output having low spread is an indication that it is not operating. A TV that shows a static image that flickers when you kick it is more likely receiving an actual feed than one that doesn't flicker when punched.
If you have multiple TVs that all flicker at the same time, it is likely that the cause was the weather rather than the broadcast.
Can you clarify what you're talking about without using the terms method, operating and spread?
I have a device that displays three numbers when a button is pressed. If any two numbers are different then one of the numbers is the exact room temperature but no telling which one it is.
If all the numbers are the same number I don't have any reason to think the displayed number would be the room temperature. In a way I have two info channels "did the button pressing result in a temperature reading?" and "if there was a temperature reading what it tells me about the true temperature?". The first of these channels doesn't tell me anything about the temperature but it tells me about something.
Or I could have three temperature meters, one of which is accurate in cold temperatures, one in moderate temperatures and one in hot temperatures. Suppose that cold and hot don't overlap. If all the temperature gauges show the same number, it would mean both the cold and hot meters were in fact accurate in the same temperatures. I can not be more certain about the temperature than about the operating principles of the measuring device, as the reading is based on those principles. The temperature gauges showing different temperatures supports me being right about the operating principles. Them being the same is evidence that I am ignorant of how those numbers are formed.
https://en.wikipedia.org/wiki/Central_limit_theorem
That is the case that "+ing" among many factors should be Gaussian. If the distribution is too narrow to be Gaussian, it tells against the "+ing" theory. Someone who is adamant that it is just a very narrow Gaussian could never be proven conclusively wrong. However, it places restraints on how random the factors can be. At some point the claim of regularity will become implausible. If you have something that claims that throwing a fair die will always come up with the same number, there is an error lurking about.
The variance of the Gaussian you get isn't arbitrary; it's related to the variance of the variables being combined. So unless you expect people picking folks out of a lineup to be mostly noise-free, a very narrow Gaussian would imply a violation of the assumptions of the CLT.
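A quick toy calculation of this point (the 90% per-witness accuracy is an assumed number, purely for illustration): under independence, unanimity among many witnesses is itself improbable, so observing it should shift weight toward "the testimonies are correlated" rather than simply stacking up confidence in the verdict.

```python
# Assumed per-witness accuracy (a made-up number for illustration).
p_correct = 0.90

# Probability that n independent witnesses are unanimously correct.
for n in (3, 10, 30):
    p_unanimous = p_correct ** n
    print(n, round(p_unanimous, 3))  # -> 3 0.729, 10 0.349, 30 0.042
```

So if the witnesses really were independent and 90% accurate, 30-for-30 agreement should happen only about 4% of the time; seeing it anyway is evidence against the independence assumption itself.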
This Jewish law thing is sort of an informal law version of how frequentist hypothesis testing works: assume everything is fine (null) and see how surprised we are. If very surprised, reject assumption that everything is fine.
Thus our knowledge that people are noisy means the mean is ill-defined rather than inaccurate.
Sorry, what?
Having unanimous testimony means that the Gaussian is too narrow to be the result of noisy testimonies. So either they gave absolutely accurate testimonies or they did something other than testify. Having them all agree raises more doubt about whether everyone was trying to deliver justice than about their ability to deliver it. If a jury answers a "guilty or not guilty" verdict with "banana", it sure ain't the result of a valid justice process. Too-certain results are effectively as good as "banana" verdicts. If our assumptions about the process hold, they should not happen.
Very well explained :)
See:
Looks like the paper is now out: http://arxiv.org/pdf/1601.00900v1.pdf
Thanks Panorama and Gwern, incredibly interesting quote and links
Verdict on Wim Hof and his method?
I've also seen a milder claim from him that exposure to moderate temperature extremes (cold/hot showers, I think) makes one's blood vessels more flexible.
Isn't being able to withstand extreme cold a pretty useful skill?
I probably should have provided more detail in the post. He claims not only to be able to withstand cold, but to be able to almost fully regulate his immune and other autonomic systems. He furthermore claims that anyone can learn to do this via his method.
For example, he claims to be able to control his inflammation response. This would be very useful to me, at least. There seems to be some science backing up his claims - he was injected with toxins and demonstrated an ability to control his body's cytokine, cortisol, etc. reaction to the toxins. So when I'm asking for a verdict, I'm sort of asking what people think of the quality of this science.
Nothing in the Wikipedia article sounds surprising to me. The Wikipedia article says nothing about him achieving therapeutically useful effects with it or claiming to do so.
I have two friends who successfully cured allergies via hypnosis. One of them found that it takes motivation on the part of the subject and doesn't work well when the subject doesn't pay for the procedure so an attempt of doing a formal scientific trial failed due to the recruited subjects who got the treatment for free being not motivated in the right way.
Why does E. Yudkowsky voice such strong priors e.g. wrt. the laws of physics (many worlds interpretation), when much weaker priors seem sufficient for most of his beliefs (e.g. weak computationalism/computational monism) and wouldn't make him so vulnerable? (With vulnerable I mean that his work often gets ripped apart as cultish pseudoscience.)
My model of him has him having an attitude of "if I think that there's a reason to be highly confident of X, then I'm not going to hide what's true just for the sake of playing social games".
You seem to assume that MWI makes the Sequences more vulnerable; i.e. that there are people who feel okay with the rest of the Sequences, but MWI makes them dismiss it as pseudoscience.
I think there are other things that rub people the wrong way (that EY in general talks about some topics more than appropriate for his status, whether it's about science, philosophy, politics, or religion) and MWI is merely the most convenient point of attack (at least among those people who don't care about religion). Without MWI, something else would be "the most controversial topic which EY should not have added because it antagonizes people for no good reason", and people would speculate about the dark reasons that made EY write about that.
For context, I will quote the part that Yvain quoted from the Sequences:
Everyone please make your own opinion about whether this is how cult leaders usually speak (because that seems to be the undertone of some comments in this thread).
Actually, I can probably answer this without knowing exactly what you mean: the notion of improved Solomonoff Induction that gets him many-worlds seems like an important concept for his work with MIRI.
I don't know where "his work often gets ripped apart" for that reason, but I suspect they'd object to the idea of improved/naturalized SI as well.
inductive bias
His work doesn't get "ripped apart" because he doesn't write or submit for peer review.
The Hell do you mean by "computational monism" if you think it could be a "weaker prior"?
Because he was building a tribe. (He's done now).
edit: This should actually worry people a lot more than it seems to.
Consequentialist ethic
I think LW is skewed toward believing in MWI because they've all read Yudkowsky. It really doesn't seem likely Yudkowsky just gleaned MWI was already popular and wrote about it to pander to the tribe. In any case I don't really see why MWI would be a salient point for group identity.
That's not what I am saying. People didn't write the Nicene Creed to pander to Christians. (Sorry about the affect side effects of that comparison, that wasn't my intention, just the first example that came to mind).
MWI is perfect for group identity -- it's safely beyond falsification, and QM interpretations are a sufficiently obscure topic where folks typically haven't thought a lot about it. So you don't get a lot of noise in the marker.
But I am not trying to make MWI into more than it is. I don't think MWI is a centrally important idea, it's mostly an illustration of what I think is going on (also with some other ideas).
Why?
Because warning against dark side rationality with dark side rationality to find light side rationalists doesn't look good against the perennial c-word claims against LW...
Consider that if stuff someone says resonates with you, that someone is optimizing for that.
There are two quite different scenarios here.
In scenario 1 that someone knows me beforehand and optimizes what he says to influence me.
In scenario 2 that someone doesn't know who will respond, but is optimizing his message to attract specific kinds of people.
The former scenario is a bit worrisome -- it's manipulation. But the latter one looks fairly benign to me -- how else would you attract people with a particular set of features? Of course the message is, in some sense, bait but unless it's poisoned that shouldn't be a big problem.
I don't know why scenario 2 should be any less worrisome. The distinction between "optimized for some perception/subset of you" and "optimized for someone like you" is completely meaningless.
Because of degree of focus. It's like the distinction between a black-hat scanning the entire 'net for vulnerabilities and a black-hat scanning specifically your system for vulnerabilities. Are the two equally worrisome?
Equally worrisome, conditional on me having the vulnerability the black-hat is trying to use. This is equivalent to the original warning being conditional on something resonating with you.
MIRI survives in part via donations from people who bought the party line on stuff like MWI.
Are you saying that based on having looked at the data? I think we should have a census that has numbers about donations for MIRI and belief in MWI.
Really, you would want MWI belief delta (to before they found LW) to measure "bought the party line."
I am not trying to emphasize MWI specifically, it's the whole set of tribal markers together.
If there is a tribal marker, it's not MWI per se; it's choosing an interpretation of QM on grounds of explanatory parsimony. Eliezer clearly believed that MWI is the only interpretation of QM that qualifies on such grounds. However, such a belief is quite simply misguided; it ignores several other formulations, including e.g. relational quantum mechanics, the ensemble interpretation, the transactional interpretation, etc. that are also remarkable for their overall parsimony. Someone who advocated for one of these other approaches would be just as recognizable as a member of the rationalist 'tribe'.
A fair point. Maybe I'm committing the typical mind fallacy and underestimating the general gullibility of people. If someone offers you something, it's obvious to me that you should look for strings, consider the incentives of the giver, and ponder the consequences (including those concerning your mind). If you don't understand why something is given to you, it's probably wise to delay grabbing the cheese (or not touching it) until you understand.
And still this all looks to me like a plain-vanilla example of bootstrapping an organization and creating a base of support, financial and otherwise, for it. Unless you think there were lies, misdirections, or particularly egregious sins of omission, that's just how the world operates.
Also, anyone who succeeds in attracting people to an enterprise, be it by the most impeccable of means, will find the people they have assembled creating tribal markers anyway. The leader doesn't have to give out funny hats. People will invent their own.
People do a lot of things. Have biases, for example.
There is quite a bit of our evolutionary legacy it would be wise to deemphasize. It's not as though there aren't successful examples of people doing good work in common without becoming a tribe.
edit: I think what's going on is a lot of the rationalist tribe folks are on the spectrum and/or "nerdy", and thus have a more difficult time forming communities, and LW/etc was a great way for them to get something important in their life. They find it valuable and rightly so. They don't want to give it up.
I am sympathetic to this, but I think it would be wise to separate the community aspects from rationality itself as a "serious business." Like, I am friends with lots of academics, but the academic part of our relationship has to be kept separate (I would rip into their papers in peer review, etc.). The guru/disciple dynamic, I think, is super unhealthy.
Given the way the internet works, bloggers who don't take strong stances don't get traffic. If Yudkowsky hadn't taken positions confidently, it's likely that he wouldn't have founded LW as we know it.
Shying away from strong positions for the sake of not wanting to be vulnerable is not a good strategy.
I don't agree with this reasoning. Why not write clickbait then if the goal is to drive traffic?
It's a balancing act.
I don't think the goal is only to drive traffic. It's also to have an impact on the person who reads the article. If you want a deeper look at the strategy, Nassim Taleb is quite explicit about the principle in Antifragile.
I don't think that Eliezer's public and private beliefs differ on the issues that RaelwayScot mentioned. A counterfactual world where Eliezer was less vocal about his beliefs wouldn't have ended up with LW as we know it.