Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Comment author: bogus 19 January 2017 12:50:32AM *  0 points [-]

I like the "Black Lives Matter" movement. I also like the "Black Lives Matter" name, as long as it's understood that "Black Lives" is intended as a convenient shorthand for "Whichever Lives Are Most Affected By Police Brutality At The Moment". I don't like that so many adherents of the "Black Lives Matter" movement object to the "All Lives Matter!" meme and call it racist, because this tells me that they're definitely taking the "Black Lives" part the wrong way.

Comment author: NatashaRostova 19 January 2017 04:39:12AM 0 points [-]

Well, different people understand it in different ways. Some are horrible people who understand it in the worst way. Others are great people who understand it in the best way. The entire group is willing to sacrifice clarity and a clear definition in favor of something sufficiently vague to band together a coalition of people who overlap on certain dimensions.

I think for that reason though, trying to debate the definition or how it's understood is pointless. Sadly. I don't blame people who think it's a worthy cause anyway, maybe they are right. I personally can't stand associating with movements where the direction isn't clear, but that's just me.

In response to Project Hufflepuff
Comment author: NatashaRostova 18 January 2017 10:43:02PM 1 point [-]


Comment author: NatashaRostova 18 January 2017 05:44:54AM 5 points [-]


I'm gonna give you sort of an unsatisfying answer. I had a similar interest, which resulted in me getting my MSc and working in research at the Fed for a few years, with the goal of sorting it out in my head (ended up going private sector instead of getting a PhD). As far as I have surveyed, there are different models of money, but it's scientifically an unsolved problem. There seems to be a level of complexity that arises as you increase the number of people on a monetary system, increase industries, increase geographical scale, add new countries and exchanges, and add complex financial systems. As this grows, working out what, exactly, money is and how it interacts with these systems gets very messy.

As an example, during the financial crisis, trillions of dollars 'disappeared.' They disappeared because they only ever existed because we were borrowing from our future selves, then collectively lost faith in our future selves having that money, so the money ceased to exist today. Is that how a commodity behaves? Well, now we are trying to build classifications for what is and isn't a commodity. Of course, you could do the same thing on a gold standard if banks were allowed to issue demand deposits, which combined with fractional reserve banking leads to the same thing.
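The fractional-reserve mechanism alluded to above can be sketched as a toy loop (the reserve ratio and dollar amounts here are made-up illustrative numbers, not data):

```python
# Toy sketch of fractional-reserve money creation (illustrative numbers only).
# Each deposit is partly re-lent, and the loan comes back as a new deposit,
# so broad money grows well beyond the original base money.

def broad_money(base_deposit: float, reserve_ratio: float, rounds: int = 1000) -> float:
    total, deposit = 0.0, base_deposit
    for _ in range(rounds):
        total += deposit
        deposit *= (1 - reserve_ratio)  # fraction re-lent and re-deposited
    return total

# With a 10% reserve ratio, $100 of base money supports roughly $1000 of
# deposits, approaching the textbook multiplier 1 / reserve_ratio.
print(broad_money(100.0, 0.10))
```

Run the process in reverse, with loans called in and deposits withdrawn, and most of that broad money "disappears" again, which is the sense in which trillions vanished in the crisis.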

Monetarism, I firmly believe, isn't something you can reason through intuitively at a casual level. I decided it wasn't something I wanted to devote my life to, and even though I spent a couple years working daily in the field, I don't know that I understand that much (although I do know what I don't know, which definitely counts as real knowledge).

I think monetary economics is sort of a mind-killer, since trying to intuitively reason through monetarism can take you down many very different paths, all of which seemingly arise from an incredibly reasonable set of axioms and inferences. If you ever listen to really clever Austrians or Keynesians discuss their view, it's incredibly compelling. That sets off alarms via my favorite heuristic, underdetermination: multiple models of the world fitting the data equally well. It's super common for blogosphere denizens or naive rationalists to try their hand at monetary economics, convinced they've stumbled upon some key insight that means all econ professors are wrong.
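The underdetermination heuristic can be made concrete with a toy example (the data points and models are invented for illustration): two different models can agree on every observation you have, so the data alone cannot decide between them.

```python
# Toy illustration of underdetermination: two different models fit the same
# observations perfectly, so the data alone cannot pick between them.

data = [(0.0, 0.0), (1.0, 1.0), (2.0, 4.0)]  # made-up observations

def model_a(x):
    """The 'quadratic story'."""
    return x ** 2

def model_b(x):
    """A cubic that agrees with model_a on every observed point."""
    return x ** 2 + 3.0 * x * (x - 1.0) * (x - 2.0)

for x, y in data:
    assert model_a(x) == y and model_b(x) == y  # both fit the data exactly

print(model_a(1.5), model_b(1.5))  # yet they disagree off the data
```

Both models are "incredibly compelling" on the evidence in hand, and only their out-of-sample predictions distinguish them, which is exactly the situation monetary schools find themselves in.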

I will say, while I didn't leave the Fed enamored or anything, a subset of those economists are brilliant and humble. I notice this flawed reasoning so often, where independent researchers, or researchers in another field, construct elaborate arguments against the most uncharitable readings of economists' arguments. Often they won't ever have spoken to a notable economist in person. They never have to present to peers, never have to formalize their arguments mathematically, and never bother engaging with the more advanced formulations of economic arguments that wind up in journals. Anyway, I'm getting off track here...

While as a rule I don't think mathematizing things necessarily makes them clearer, I am convinced it's the right way to proceed in monetary studies. It forces a strict structure, which prevents us from using words to overfit or get lost. The field is so complex, though, and intertwined with historical narratives that aren't easily turned into data sets, which can sometimes make it harder. The math often gets sorta complicated as well.

Of course, the actual monetary economy has real data. Most of which we can't collect. So the theoretical models are our way of trying to imagine what the structure would look like, even though they aren't empirical. Which gets to another problem, which is how confident can one be in theoretical economics? Sometimes the assumptions are incredibly robust, but the systems are often very complex.

One place where I think many economists act contrary to LW-style rationality is in choosing a side, rather than taking the rational view that there are many sides with equally valid claims to truth, and that they should work together to expose what is correct. It has always struck me as mind-killed when people state "Oh, I'm a neo-Keynesian so I believe XYZ; you're a non-Keynesian, so you reject ABC" (or whatever). I mean... maybe the Austrians are all right, and they have this unique perception of reality none of the neo-Keynesian scholars have, because they have some more profoundly true insight into the mesh of reality that is lost on the other econo-plebs... But that doesn't seem like the most likely scenario to me.

Or maybe Paul Krugman really is right about everything, but still... I doubt it. He was once a smart young man who had some crucial insights into the theoretical mathematical structure behind international trade, which earned him an econ Nobel. I don't think he's in tune with empirical realities though. He's just a genius at imagining some elegant mathematical structure that characterizes an economy which might or might not map to reality, and then convincing himself it's actually exactly how reality operates. That's the big mistake, I think.

If you want to take a look down the rabbit hole, I'd suggest reading Milton Friedman's books on monetary history. Even his detractors tend to agree his insight and clarity on money are absolutely incredible. He's also great at explaining things without too much math, while still using ratios and data series in his books when appropriate.

For shorter-term stuff, check out John Cochrane's work; he's my favorite social scientist (http://faculty.chicagobooth.edu/john.cochrane/research/papers/cochrane_policy.pdf, http://johnhcochrane.blogspot.com/search/label/Monetary%20Policy). His blog -- the second link -- is really great.

Comment author: ingive 16 January 2017 11:42:50PM 5 points [-]

Hi, my horizons are oriented towards hardcore Effective Altruism, and to be a successful effective altruist you have to figure out how your brain works, emotional intelligence, QM, and how to condition yourself. I'm very concerned that rational people who have apparently mastered the Way spend their time arguing on irrelevant matters with users here rather than acting in line with their utility function and purpose. So a part of my future research is figuring out how to communicate with high-IQ individuals here to unlock their potential and improve their reasoning.

For now I have to read the Sequences, do some math, read Jaynes and other rationalist material. http://rationality.org/resources/reading-list

I have around 7017 ± 500 pages left to read and understand, which will take around a year. If you have any other suggestions for material to read based on my post history among others, I highly appreciate it. Thanks.

Comment author: NatashaRostova 17 January 2017 01:37:07AM 1 point [-]

Good luck! I'm looking forward to reading your ebook on 5 easy tips on how to unlock my inner high-IQ potential.

Comment author: The_Jaded_One 15 January 2017 12:13:57PM 9 points [-]

"Dangerous speech" could easily become a weapon to attack and surpress views you don't like.

This has already happened with "Hate speech" and "Fake news".

Comment author: NatashaRostova 15 January 2017 11:29:26PM 5 points [-]

I think the most dangerous aspect of 'dangerous speech' is it is a shared meme to disregard certain types of arguments off-hand, regardless of how true or false they are. It becomes most dangerous when someone then, for some reason, decides to investigate further and realizes "Hey, some of this stuff is true! And I can't trust anyone anymore."

Comment author: MrMind 13 January 2017 03:41:08PM 0 points [-]

Maybe Leave won regardless of or even despite my ideas. Maybe I’m fooling myself like Cameron. Some of my arguments below have as good an empirical support as is possible in politics (i.e. not very good objectively) but most of them do not even have that. Also, it is clear that almost nobody agrees with me about some of my general ideas. It is more likely that I am wrong than 99% of people who work in this field professionally.

He himself warns readers not to construe his influence as too great. In this case Scott's caveat applies: elections that are won by a slim margin don't say much of significance.

Comment author: NatashaRostova 13 January 2017 10:37:19PM 0 points [-]

I think it's fair to argue that elections that are won by a slim margin don't say much of significance about discrete narrative changes in the weeks leading up to the election. That could be false though, if for example we view Trump winning the election as a 'treatment' effect, which gives him a new discrete ability to change the narrative.

But more generally, I think an election such as Brexit does give us a significant story, not necessarily for the week leading up to it, but for the changing preferences of a population in the year or two leading up to it and the invocation of the election itself.

Comment author: gwern 09 January 2017 08:50:35PM *  21 points [-]

So apparently the fundamental attribution bias may not really exist: "The actor-observer asymmetry in attribution: a (surprising) meta-analysis", Malle 2006. Nor has Thinking, Fast and Slow held up too well under replication or evaluation (maybe half).

I am really discouraged about how the heuristics & biases literature has held up since ~2008. At this point, it seems like if it was written about in Cialdini's Influence, you can safely assume it's not real.

Comment author: NatashaRostova 10 January 2017 03:56:48AM 3 points [-]

I think there are some serious issues with the methodology and instruments used to measure heuristics & biases, which they didn't fully understand even ten years ago.

Some cognitive biases are robust and well established, like the endowment effect. Then there are the weirder ones, like ego depletion. I think a fundamental challenge with biases is clever researchers first notice them by observing other humans, as well as observing the way that they think, and then they need to try and measure it formally. The endowment effect, or priming, maps pretty well to a lab. On the other hand, ego depletion is hard to measure in a lab (in any sufficiently extendable way).

I think a lot of people experience, or think they experience, something like ego depletion. Maybe it's insufficiently described, or a broad classification, or too hard to pin down. So the original researcher noticed it in their experience, and formed a contrived experiment to 'prove' it. Everyone agreed with it, not because the statistics were compelling or it was a great research design, but because they all experience, or think they experience, ego depletion.

Then someone replicates it, and it doesn't replicate, because it's really hard to measure robustly. I think ego depletion doesn't work well in a lab, or without some sort of control or intervention, but those are hard things to set up for such a broad and expansive argument. And I guess you could build a survey, but that sucks too.

On the fundamental attribution error, I think that meta-analysis is great, in that it shows that these studies suck statistically. They only work if you come to them with the strong prior evidence that "Hey, this seems like something I do to other people, and in the fake examples of attribution error I can think of lots of scenarios where I have done that." Of course, our memory sucks, so that is a questionable prior, but how questionable is it? In the end I don't know if it's real, or only real for some people, or too generalized to be meaningful, or true in some situations but not others, or how other people's brains work. Probably the original thesis was too nice and tidy: here is a bias, here is the effect size. Maybe the reality is: here is a name for a ton of strange correlated tiny biases, which together we classify as 'fundamental attribution', but which is incredibly challenging to measure statistically over a sample population in a contrived setting, as the best information to support it seems inextricably tangled up in the recesses of our brains.

(also most heuristics and biases probably do suck, and lack of replication shows the authors were charlatans)

Comment author: NatashaRostova 09 January 2017 08:43:10PM *  2 points [-]

Once that happened, I’d no longer be able to eat chickens. I could apply the same process to all animals, and so by induction I would be unwilling to eat any animal.

This is an interesting way to look at using induction, but I see it more as a willing reprogramming of your brain. In your case, you were able to simulate a case where eating chicken would disgust you (eating a pet) and that gave you impetus to stop eating chicken.

I am a big meat eater. I predict there is a 30-60% chance I would drastically reduce my meat eating if I was forced to run a slaughterhouse for my food, and see the suffering and kill the animals. Every time I wanted meat I'd need to take on the moral burden of killing an animal. If I may try my hand at some pop-historical analysis, I bet this is why past societies frequently held a reverence, often spiritual, for the killing of animals for food.

...And yet, I still eat lots of meat. Probably if someone took me on a tour of kids with malaria in Africa I'd donate more to those charities. Or if I was walked through a Russian sex trafficking brothel, I would support organizations to end those practices. Or if someone made a three hour movie on the tragedy of the homeless person who sleeps by my apartment, documenting their misfortune, I would go out and buy a coat and food and try to help them because it would unlock and develop emotions I don't currently have.

I sort of know if I went through these simulations it would change my outlook and behavior in life. These are also obviously topics I already am familiar with, but there are surely lots of topics I'm unfamiliar with that would change my view of the world. Of course I can't have all these experiences, and I'm not sure how I should try to adjust my behavior today on the expectations of how my behaviors would change if I were to have experiences that I'm not going to have, but plausibly could have.

Is it rational for me to eat less meat now, even though I enjoy it and don't feel guilty, because a plausible counter-factual me who had some experiences I don't have would tell me to? Or is it rational for me to eat meat because there is no counter-factual me who exists, and as it stands now I enjoy it and don't feel guilty?

Comment author: NatashaRostova 08 January 2017 11:46:10PM 0 points [-]

I've had a similar experience at [large tech firm]. It was becoming clear that an intersecting project with two teams wasn't working. The challenge though was it was stuck in a rotten equilibrium. Each team's true incentive was distinct and contrary to the other team. Yet the mandate was 'thou shalt have the same incentives.' Everyone kept publicly claiming we had aligned incentives, which you shouldn't have to publicly explain if it's actually true.

A lot of social choice theory guys tried to explain this in the context of voting, and the stability of outcomes. Arrow's impossibility theorem can be resolved if you have a dictator. In the end a strong and smart leader can solve so many issues of indecisiveness, and can take ownership of directing the deliberation, and adjusting for variables no one else owns (e.g. the cost of time in making a decision).
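The voting instability behind Arrow's theorem can be shown with a minimal Condorcet-cycle sketch (the three voters and their made-up preferences are purely illustrative): every option loses some pairwise majority vote, so group preferences cycle and no stable collective choice exists unless someone breaks the tie.

```python
# Toy Condorcet cycle: three voters, three options, and majority preference
# cycles (A beats B, B beats C, C beats A), so there is no stable group
# choice without a single decision-maker breaking the cycle.

voters = [  # each voter's ranking, best first (made-up preferences)
    ["A", "B", "C"],
    ["B", "C", "A"],
    ["C", "A", "B"],
]

def majority_prefers(x: str, y: str) -> bool:
    """True if a majority of voters rank x above y."""
    wins = sum(1 for ranking in voters if ranking.index(x) < ranking.index(y))
    return wins > len(voters) / 2

# A > B, B > C, and yet C > A: the cycle.
print(majority_prefers("A", "B"), majority_prefers("B", "C"), majority_prefers("C", "A"))
```

Designating any one voter as "sovereign" collapses the cycle instantly, which is the sense in which a dictator (or strong leader) resolves the indecisiveness.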

Still, I remember thinking about this all while intersecting teams were making obviously bad choices, and thinking that the best way out would just be to make someone sovereign and let them choose.

Leadership gets weird though. Despite all this theory and analysis over group choice dynamics, there is this transformative property of a great leader that seems to inspire people to buy into their vision and work incredibly hard. I don't understand how that works though, other than some handwaving and 'psychology.'

Comment author: The_Jaded_One 08 January 2017 11:06:24PM 2 points [-]

Thanks, that's an interesting perspective!

You know it occurs to me that it would be nice to have some kind of guide to all the "forbidden knowledge" that's out there - West Hunter, HBDChick, Infoproc.

Comment author: NatashaRostova 08 January 2017 11:16:15PM *  4 points [-]

I think that's what most people who were or want to be part of the rationalist community want to work on now. That's what Scott Alexander does full time with SSC and his comments. Even on LW, despite the weird and dated rules, everyone wants to discuss this stuff and work on slowly figuring it out. I don't think anyone really cares how a 22 year old has reinterpreted EY's post on cognitive biases or some new version of AI risk (and I say that having put all my faith in 22 year old engineering kids saving the world).

I'll probably just post on it more now here, and see what happens.
