How do you determine whether a seat belt cutter/window breaker is a good one? Should you test it on an old rag or something?
I'm afraid I don't know. You might have better luck making this question a top-level post.
What contingencies should I be planning for in day to day life? HPMOR was big on the whole "be prepared" theme, and while I encounter very few dark wizards and ominous prophecies in my life, it still seems like a good lesson to take to heart. I'd bet there's some low-hanging fruit that I'm missing out on in terms of preparedness. Any suggestions? They don't have to be big things - people always seem to jump to emergencies when talking about being prepared, which I think is both good and bad. Obviously certain emergencies are common enough that the average person is likely to face one at some point in their life, and being prepared for it can have a very high payoff in that case. But there's also a failure mode that people fall into of focusing only on preparing for sexy-but-extremely-low-probability events (I recall a reddit thread that discussed how to survive in case an airplane that you're on breaks up, which...struck me as not the best use of one's planning time). So I'd be just as interested in mundane, everyday tips.
(Note: my motivation for this is almost exclusively "I want to look like a genius in front of my friends when some contingency I planned for comes to pass", which is maybe not the best motivation for doing this kind of thing. But when I find myself with a dumb-sounding motive for doing something I rationally endorse anyway, I try to take advantage of the motive, dumb-sounding or not.)
I am by no means an expert, but here are a couple of options that come to mind. I came up with most of these by asking "what kinds of emergencies are you reasonably likely to run into at some point, and what can you do to mitigate them?"
Learn some measure of first aid, or at least the Heimlich maneuver and CPR.
Keep a seat belt cutter and window breaker in your glove compartment. And on that subject, there are a bunch of other things that you may want to keep in your car as well.
Have an emergency kit at home, and have a plan for dealing with natural disasters (fire, storms, etc). If you live with anyone, make sure that everyone is on the same page about this.
On the financial side, have an emergency fund. This might not impress your friends, but given how likely financial emergencies (e.g. unexpectedly losing a job) are relative to other emergencies, this is a good thing to plan for nonetheless. I think the standard advice is to have something on the order of 3-6 months of income tucked away for a rainy day.
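If it helps to make the 3-6 month arithmetic concrete, here's a trivial sketch; the $3,000/month figure is just a placeholder, not a recommendation:

```python
def emergency_fund_range(monthly_amount, low_months=3, high_months=6):
    """Return the (low, high) emergency fund targets for a given
    monthly income/expense figure, using the standard 3-6 month rule."""
    return (monthly_amount * low_months, monthly_amount * high_months)

# Hypothetical example: $3,000/month
low, high = emergency_fund_range(3000)
print(low, high)  # 9000 18000
```

Whether you base the figure on income or on expenses changes the target quite a bit; the conservative version uses income, as above.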
I think these concerns are valid if we expect the director(s) (/ the process of determining LessWrong's agenda) to not be especially good. If we do expect the director(s) to be good, then they should be able to take your concerns into account -- include plenty of community feedback, deliberately err on the side of making goals inclusive, etc. -- and still produce better results, I think.
If you (as an individual or as a community) don't have coherent goals, then exclusionary behavior will still emerge by accident; and it's harder to learn from emergent mistakes ('each individual in our group did things that would be good in some contexts, or good from their perspective, but the aggregate behavior ended up having bad effects in some vague fashion') than from more 'agenty' mistakes ('we tried to work together to achieve an explicitly specified goal, and the goal didn't end up achieved').
If you do have written-out goals, then you can more easily discuss whether those goals are the right ones -- you can even make one of your goals 'spend a lot of time questioning these goals, and experiment with pursuing alternative goals' -- and you can, if you want, deliberately optimize for inclusiveness (or for some deeper problem closer to people's True Rejections). That creates some accountability when you aren't sufficiently inclusive, makes it easier to operationalize exactly what we mean by 'let's be more inclusive', and makes it clearer to outside observers that at least we want to be doing the right thing.
(This is all just an example of why I think having explicit common goals at all is a good idea; I don't know how much we do want to become more inclusive on various axes.)
You make a good point, and I am very tempted to agree with you. You are certainly correct in that even a completely non-centralized community with no stated goals can be exclusionary. And I can see "community goals" serving a positive role, guiding collective behavior towards communal improvement, whether that comes in the form of non-exclusiveness or other values.
With that said, I find myself strangely disquieted by the idea of Less Wrong being actively directed, especially by a singular individual. I'm not sure what my intuition is stuck on, but I do feel that it might be important. My best interpretation right now is that having an actively directed community may lend itself to catastrophic failure (in the same way that having a dictatorship lends itself to catastrophic failure).
If there is a single person or group of people directing the community, I can imagine them making decisions which anger the rest of the community, making people take sides or split from the group. I've seen that happen in forums where the moderators did something controversial, leading to considerable (albeit usually localized) disruption. If the community is directed democratically, I again see people being partisan and taking sides, leading to (potentially vicious) internal politics; and politics is both a mind killer and a major driver of divisiveness (which is typically bad for the community).
Now, to be entirely fair, these are somewhat "worst case" scenarios, and I don't know how likely they are. However, I am having trouble thinking of any successful online communities which have taken this route. That may just be a failure of imagination, or it could be that something like this simply hasn't been tried yet, but it is somewhat alarming. That is largely why I urge caution in this instance.
I loved that post. I commented on it originally, but I'll comment here too.
People probably use bicameral reasoning for a lot of things, but I doubt that too many people actually use it for thinking about politics.
I remember sitting around a campfire with my mom and grandpa in like kindergarten and listening to them discuss politics. My grandpa was talking about all these issues, and my mom was admitting to not being well-informed or having strong opinions about any of them. She said, “As long as democrats approve of abortion, nothing will convince me to vote for them.” My grandpa sighed and told her she was trapped in a religious bubble, that there was a whole world out there she was ignorant of.
But I remember thinking, “Wow! Mom is actually smart.” If democrats are murdering tons and tons of babies every year, and we only hope they go to heaven but God doesn’t actually say, who cares about money or guns or school or any of that other stuff? Even global warming was insignificant, since we believed the earth was going to end anyway. Maybe global warming would just be His way of destroying it. So based on abortion alone, I considered myself Republican until I deconverted from Christianity.
Maybe a week after deconverting, I thought, “well, I guess I should think about voting democrat now.” This was based solely on environmental concerns. There was no longer any guarantee that the earth would end anytime soon, and I’d quite rather it didn’t. All the other issues were fun to think about if I could find the time to inform myself, but I wasn’t terribly concerned about them.
So ultimately, maybe a lack of scope insensitivity could be the root cause of the strong correlation between political party and religious affiliation. Everyone blames herd mentality, which is a huge part of it, but even people who bother thinking for themselves are likely to arrive at the same conclusion as their peers... which is a nice counter for people who think that accurate beliefs aren't too important as long as people's beliefs make them happy.
While your family's situation is explained by a lack of scope insensitivity, I'd like to put forward an alternative. I think the behavior you described also fits with rationalization. If your family had already made up their minds about supporting the Republican party, they could easily justify it to themselves (and to you) by citing a particular close-to-the-heart issue as an iron-clad reason.
Rationalization also explains why "even people who bother thinking for themselves are likely to arrive at the same conclusion as their peers" - it just means that said people are engaging in motivated cognition to come up with reasonable-sounding arguments to support the same conclusions as their peers.
Interesting point! It seems obvious in hindsight that if you reward people for making predictions that correspond to reality, they can benefit both by fitting their predictions to reality and by fitting reality to their predictions. Certainly, it is an issue that comes up even in real life, in the context of sports betting. That said, this particular spin on things hadn't occurred to me, so thanks for sharing!
I think the issue you are seeing is that Less Wrong is fundamentally an online community / forum, not a movement or even a self-help group. "Having direction" is not a typical feature of such a medium, nor would I say that it would necessarily be a positive feature.
Think about it this way. The majority of the few (N < 10) times I've seen explicit criticism of Less Wrong, one of the main points cited was that Less Wrong had a direction, and that said direction was annoying. This usually referred to Less Wrong focusing on the FAI question and X-risk, though I believe I've seen the EA component of Less Wrong challenged as well. By its nature, having direction is exclusionary - people who disagree with you stop feeling welcome in the community.
With that said, I strongly caution against trying to change Less Wrong by imparting direction to the community as a whole (e.g. by having an official "CEO"). On the other hand, organizing a sub-movement within Less Wrong for that sort of thing carries much less risk of alienating people. I think that would be the healthiest direction to take it, plus it allows you to grow organically (since people can easily join/leave your movement, and you don't need to mobilize the entire community to get started).
I consider philosophy to be a study of human intuitions. Philosophy examines different ways to think about a variety of deep issues (morality, existence, etc.) and tries to resolve results that "feel wrong".
On the other hand, I have very rarely heard it phrased this way. Often, philosophy is said to be reasoning directly about said issues (morality, existence, etc.), albeit with the help of human intuitions. This actually seems to be an underlying assumption of most philosophy discussions I've heard. I actually find that mildly disconcerting, given that I would expect it to confuse everyone involved with substantial frequency.
If anyone knows of a good argument for the assumption above, I would really like to hear it. I've only seen it assumed, never argued.
There is a not-necessarily-large but definitely significant chance that developing machine intelligence compatible with human values may very well be the single most important thing that humans have ever done or will ever do, and it seems very likely that economic forces will make strong machine intelligence happen soon, even if we're not ready for it.
So I have two questions about this. Firstly (and this is probably my youthful inexperience talking, which is a big part of why I'm posting this here): I see so many rationalists do so much awesome work on things like social justice, social work, medicine, and all kinds of poverty-focused effective altruism. But how can it be that the ultimate fate of humanity, to either thrive beyond imagination or perish utterly, may rest on our actions in this century, and yet people who recognize this possibility don't do everything they can to make it go the way we need it to? This segues into my second question: what is the most any person, and more specifically I, can do for FAI? I'm still in high school, so there really isn't that much keeping me from devoting my life to helping the cause of making sure AI is friendly. What would that look like? I'm a village idiot by LW standards, and especially bad at math, so I don't think I'd be very useful on the "front lines", so to speak, but perhaps I could try to make a lot of money and do FAI-focused EA? I might be more socially oriented/socially capable than many here; perhaps I could try to raise awareness or lobby for legislation?
To address your first question: this has to do with scope insensitivity, hyperbolic discounting, and other related biases. To put it bluntly, most humans are actually pretty bad at maximizing expected utility. For example, when I first heard about x-risk, my thought process was definitely not "humanity might be wiped out - that's IMPORTANT. I need to devote energy to this." It was more along the lines of "Huh, that's interesting. Tragic, even. Oh well; moving on..."
Basically, we don't care much about what happens in the distant future, especially if it isn't guaranteed to happen. We also don't care much more about humanity than we do about ourselves plus our close ones. Plus we don't really care about things that don't feel immediate. And so on. The end result is that most people's immediate problems are more important to them than x-risk, even if the latter might be by far the more essential according to utilitarian ethics.
Good point. May I ask: is "explicit utility function" standard terminology, and if so, is there a good reference somewhere that explains it? It took me a long time to realize the interesting difference between humans, who engage in moral philosophy and often can't tell you what their goals are, and my model of paperclippers. I also think that not understanding this difference is a big reason why people don't understand the orthogonality thesis.
3-6 months? Don't people go on piling up savings indefinitely? How else do you retire? I mean, there is a state pension in the country I live in, but I would not count on it not going bust in 30 years, so I always assumed I will have what I save, and then maybe the state pays a bonus.
You are of course entirely correct in saying that this is far too little to retire on. However, it is possible to save without being able to liquidate said savings; for example, by paying down debt. The emergency fund advice is that you should make a point of having enough liquid savings tucked away to tide you over in a financial emergency before you direct your discretionary income anywhere else.