FiftyTwo

Comments

Was rereading a bunch of early 2010s LW recently, prompted by getting a reply on one of my old comments, and it's definitely weird. But the flavor of weird feels different somehow? A lot more earnest and direct, with people more willing to make silly jokes and tangents.

There were also more top-level posts along the lines of "Here's this new rationality technique I've been trying, what do people think?" It feels less high-context, I guess? A lot of current discussion is people immersed in some wider meta-debate with long-established sides and real-world stakes.

I imagine that kind of posting wouldn't work particularly well these days given that the environment around it has changed. 

I feel like you're conflating two different levels: the discourse in wider global society and the discourse within a specific community.

I doubt you'd find anyone here who would disagree that actions by big companies that obscure the truth are bad. But those companies aren't the ones arguing on these forums or reading this post. Vegans have a significant presence in EA spaces, so they should be contributing to them productively and promoting good epistemic norms. What the lobbying team of Big Meat Co. does has no impact on that.

Also, in general, I'm leery of any argument of the form "the other side does things as bad or worse, so it's okay for us to do the same", given history.

I somewhat agree with this, but I think it's an uncharitable framing of the point, since "virtue signalling" generally implies insincerity. My impression is that the vegans I've spoken with are mostly acting sincerely on their moral premises, though those are not premises I share. If you sincerely believe that a vast atrocity is taking place and society is ignoring it, then a strident emotional reaction is understandable.

I've definitely noticed a shift in the time I've been involved with or aware of EA. In the early 2010s it was mostly focused on global poverty and the general idea of evidence-based charity, and veganism was peripheral. Now it seems like a lot of groups are mainly about veganism, and very resistant to people who think otherwise. And since veganism is a minority position, that is going to put off people who would otherwise be interested in EA.

You still run into the alignment problem of ensuring that the upgraded version of you aligns with your values, or some extension of them. If my uploaded transhuman self decides to turn the world into paperclips, that's just as bad as if a non-human AGI does.

Never really got anywhere. It's long enough ago that I don't really remember why, but I think I generally found it unengaging. Have periodically tried to teach myself programming through different methods since then, but none have stuck. This probably speaks to the difficulty of learning new skills when you have limited time/energy and no specific motivation, more than anything else. (Have had similar difficulties with language learning, but got past them thanks to short-term practical benefits and by devoting specific time to the task.)

It mixes the personal and professional levels.

Possibly reflective of a wider issue in EA/rationalist spaces where the two are often not very clearly delineated. In that sense EA is more like hobby/fandom communities than professional ones. 

Saying that people would be better off taking more risks under a particular model elides the question of why they don't take those risks to begin with, and how we could change that, if it's desirable to do so.

The psychological impact of a loss of a given size is generally higher than that of a gain of the same size. So if I know I will feel worse about losing $10 than I will feel good about gaining $100, then it's entirely rational under my utility function not to take a 50/50 bet between those two outcomes. Maybe I would be better off overall if I didn't overweight losses, but utility functions aren't easily rewritable by humans. The closest you could come is some kind of exposure therapy for losses.
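To make the arithmetic concrete, here's a minimal sketch in Python of how a loss-aversion multiplier can flip the sign of a bet's subjective value. The multiplier of 12 is a hypothetical value chosen to match the premise above (losing $10 feels worse than gaining $100 feels good), not an empirical estimate.

```python
def subjective_value(outcome: float, loss_weight: float) -> float:
    """Weight losses more heavily than gains of the same size."""
    return outcome if outcome >= 0 else loss_weight * outcome

def expected_subjective_value(p_win: float, gain: float, loss: float,
                              loss_weight: float) -> float:
    """Expected subjective value of a bet: win `gain` with
    probability p_win, otherwise lose `loss`."""
    return (p_win * subjective_value(gain, loss_weight)
            + (1 - p_win) * subjective_value(-loss, loss_weight))

# The dollar expectation of the 50/50 bet is 0.5*100 - 0.5*10 = +$45,
# but with a loss weight above gain/loss (here, above 10) the bet's
# subjective value goes negative, so declining it is utility-maximizing.
print(expected_subjective_value(0.5, gain=100, loss=10, loss_weight=12))  # -10.0
```

Under this framing, refusing the bet isn't a mistake relative to the loss-averse utility function; it only looks like one relative to raw dollar expectation.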

Also, we have a huge amount of mental architecture devoted to understanding and remembering the spatial relationships of objects (for obvious evolutionary reasons). Using space as a metaphor for purely abstract things lets us take advantage of that architecture to make other tasks easier.

A very structured version of this would be something like a memory palace, where you assign ideas to specific locations in a place. But I think we often do the same thing informally when we talk about ideas in spatial terms and build loose mental models of them as existing in spatial relationships to one another (or at least I do).

I think the core thing here is same-sidedness.

The converse of this is that the maximally charitable approach can be harmful when the interlocutor is fundamentally not on the same side as you in trying to honestly discuss a topic and arrive at the truth. I've seen people tie themselves in knots trying to apply the principle of charity when the most parsimonious explanation is that the other side is not engaging in good faith, and shouldn't be treated as if it were.

It's taken me a long time to internalise this, because my instinct is to take what people say at face value. But it's important to remember that sometimes there isn't anything complex or nuanced going on; people can just lie.
