Open thread, Dec. 1 - Dec. 7, 2014
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
Comments (346)
Court OKs Barring High IQs for Cops
An aspiring cop got rejected for scoring too high on an IQ test.
I cannot begin to understand why they would do that.
It may be worth mentioning that the article appears to be from 14 years ago. (Or it may not; for all I know the same policy is still in place.)
I went into the article thinking the guy would have a freakishly high IQ (160+), where I could maybe see the point, but instead it was 125. The judges most likely scored higher than that - don't they feel even slightly belittled at the suggestion that they'd be ineligible for law enforcement work because they'd find it too boring?
The weird part is that after being rejected as a police officer he goes off to work as a prison guard. The latter is way more boring. If he's able to put up with that, he should be able to cope with the boredom of law enforcement.
Their explanation is that he would get bored and leave. I'm not surprised - I've been rejected for jobs more than once due to being too smart. (I'm not just boasting, it does seem relevant)
What cause would an NRx EA donate to?
Sarah Hoyt isn't quite NRx, but her recent (re)post here seems relevant.
In particular, the old distinction between deserving and undeserving poor.
NRx's are generally not utilitarians.
What ethical system do you follow?
I'm a virtue ethicist.
I've met at least one claiming he is.
The Austrian "Iron Ring" party. Restore the Hapsburg Empire!
Yes, I am aware that there are things to understand about the crazy straw design world. :)
The most coherent proposal I've heard so far is applying being TRS at the polling place to charity: The principle of optimising your donations for cultural-marxist outrage.
Depends on what kind of NRx. There isn't a single value system shared among them.
The popular trichotomy is "Techno-commercialist / Theonomist / Ethno-nationalist" - I don't know about the first two, but the ethnonationalists would probably disagree with a lot of GiveWell's suggestions.
Not uniformly, I think - Japan is an Ethno-nationalist state, and also used to be the world's largest supplier of foreign aid.
Ethno-nationalists certainly have no problem with geopolitics or mutually-beneficial investment, and foreign aid can be useful there.
LessWrong in Alameda:
I've posted elsewhere that I'm applying for work in Alameda, CA. At the moment, I'm not at the top of the list to get the job but I'm still in the running, so, before any further interviews occur, I decided I'd ask this.
Do any LWers live or work in Alameda? Given our strong connection to the Bay Area, I'm assuming at least a few people are from the island. I'd especially be interested in talking to anyone working at the library(ies) there. I'd like to get an idea of Alameda from an LWer's perspective: whether it's a good place to live and work.
Also, if things end up going well, I'm hoping to find a roommate. There are few places I'd rather look than here. For now, though, I'm just hoping to get an idea of Alameda from the LessWrong view.
I am trying to design a competition for students focused on Bayesian understanding of ecology. Could I ask here for some pointers?
I will have data on 2 sets of maybe physiologically linked parameters (from some research I plan to do next summer), and will then ask the students to review qualitative descriptions of the link between them, like 'Some plants having mycorrhiza have higher biomass than the average - 3 populations out of 5' (see L. Carroll, The Game of Logic). There will be other variables that might correlate with mycorrhiza more or less strongly than total biomass, and the students will have to make an educated guess as to which variable(s) have better predictive value (that is somewhat like what we start from in 'real life', if the subject has not been rigorously researched before). For this task the participants will have a month.
Then on some set day I will call those of them who have offered the more substantial explanations and give them exact probabilities of, say, (plant having mycorrhiza)/(plant having a higher than average biomass) etc., so they can compute the posterior probability of a GIVEN plant having mycorrhiza. Bonus points if they note that the variable(s) they have chosen work worse than some other one(s). For this task they will have several hours.
What is perhaps more interesting to me is not the number they will give, but the way they arrive at it. I would gladly share the experimental data with anybody interested in a similar experiment. Is there some advice you can give me? Thank you.
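A sketch of the final computation step the students would face (the posterior for a given plant), using Bayes' theorem. All the numbers below are made-up placeholders; the real prior and conditional probabilities would come from the summer's field data:

```python
# Bayes' theorem for P(mycorrhiza | higher-than-average biomass).
# Every number here is a hypothetical placeholder, not real field data.
p_myco = 0.4                 # prior: fraction of plants with mycorrhiza
p_high_given_myco = 0.8      # P(high biomass | mycorrhiza)
p_high_given_no_myco = 0.3   # P(high biomass | no mycorrhiza)

# Total probability of observing higher-than-average biomass
p_high = p_high_given_myco * p_myco + p_high_given_no_myco * (1 - p_myco)

# Posterior: probability that a GIVEN high-biomass plant has mycorrhiza
p_myco_given_high = p_high_given_myco * p_myco / p_high
print(round(p_myco_given_high, 3))  # 0.32 / 0.50 = 0.64
```

A variable with better predictive value is one where the two conditional probabilities differ more sharply, which is what pushes the posterior away from the prior.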
Say I have have a desktop with a monitor, a laptop, a tablet and a smart phone. I am looking for creative ideas on how to use them simultaneously, for example when programming to use the tablet for displaying documentation and having multiple screens via desktop computer and laptop, while the smart phone displays some tertiary information.
The biggest hangup I've found in using multiple computers simultaneously is copy-pasting long strings. I can chat them to myself, but it's still slightly more awkward than I'd like.
Otherwise, Sherincall is pretty on point.
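One low-friction workaround for shuttling a long string between devices, assuming they share a LAN, is a throwaway one-request HTTP server; the function name and port below are just an illustrative sketch, not an established tool:

```python
# Share a long string with another device on the same LAN: call serve_once()
# on the machine that has the text, then open http://<its-LAN-IP>:8000/
# in any browser on the other device and copy the text from there.
from http.server import BaseHTTPRequestHandler, HTTPServer

def serve_once(text: str, port: int = 8000) -> None:
    """Serve `text` as plain text for exactly one HTTP request, then return."""
    class OneShot(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            self.send_header("Content-Type", "text/plain; charset=utf-8")
            self.end_headers()
            self.wfile.write(text.encode("utf-8"))

        def log_message(self, *args):  # keep the terminal quiet
            pass

    HTTPServer(("0.0.0.0", port), OneShot).handle_request()
```

For example, `serve_once("some-very-long-string")` on the desktop, then browse to it from the tablet or phone. The server exits after one request, so nothing is left listening.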
Unplug the desktop monitor and plug it in the laptop. Open some docs on the tablet. Keep your todo list on the phone.
Or just get another monitor or two and use that. In my experience, you never need more than 3 monitors at once (for one computer, of course).
Is there a better place than LW to put a big post on causal information, anthropics, being a person as an event in the probability-theory sense, and decision theory?
I'm somewhat concerned that such things are a pollutant in the LW ecosystem, but I don't know of a good alternative.
Manfred, I think your posts on Sleeping Beauty, etc. are fine, people just may not be able to follow you or have anything to contribute.
Thanks. So would you recommend that for the new stuff I use those sorts of 3/4-baked stream of consciousness posts?
The way I do the equivalent of what you are doing is write up something in various stages of "less than fully baked" and send to someone I know is interested/I respect in private, and have a chat about it. What's nice about that is it exploits the threat of embarrassment of outputting nonsense to force me to at least "bake" it sufficiently to have a meaningful conversation about it. It's very easy to output nonsense.
But I am skeptical regarding the wiki model of generating good novel stuff -- there's too much noise.
Why would it be a pollutant in the LW ecosystem? This sounds pretty central in the space of things LW people are interested in; what am I missing? (Are you concerned that it would be too elementary for LW? that it might be full of mistakes and annoy or mislead people? that its topic isn't of interest to LW readers? ...)
What's the intended audience? What's it for? (Introducing ideas to people who don't know them? Cutting-edge research? Thinking aloud to get your ideas in order? ...)
I feel like it increases barrier to entry for new people.
Intended audience is me from three years ago, I guess cutting-edge-adjacent.
Nah. Both Discussion and Main fairly consistently have a mix of intimidating technicality, fun (e.g., "Rationality Quotes"), lifehackery, ethics, random discussion, etc., etc., etc. One more bit of intimidating technicality isn't going to scare anyone away who wasn't going to be scared away anyhow.
Sounds like fun. Go for it, say I.
(Important note: I have not done anything remotely resembling research into the thought processes of potential new LW readers, and my model of them may be badly wrong. Don't trust the above much. It's just one random person's opinion.)
Barrier to entry shouldn't be your main criterion. High-quality posts draw intelligent people.
My feeling was that SSC is getting close to LW in terms of popularity, but Alexa says otherwise: SSC hasn't yet cracked the top 100k sites (LW is ranked 63,755) and has ~600 links to it vs. ~2000 for LW. Still very impressive for a part-time hobby of one overworked doctor. Sadly, 20% of searches leading to SSC are for heartiste.
My suspicion is that SSC would get a lot more traffic if its lousy WP comment system was better, but then Scott is apparently not motivated by traffic, so there is no incentive for him to improve it.
Why do you think that's the case? Are there any cases of a blogger getting much more popular after switching to a different comment system?
And what comment system would you advocate?
It's a good question, maybe it does not, I am not aware of any A/B testing done on that. I simply go by the trivial inconveniences.
Scott is against reddit-style karma system, so I'd go for Scott marking comments he finds interesting, at a minimum.
Additionally, comment formatting and presentation which improves nesting and visibility would be nice. Reddit/LW is an OK compromise, userfriendly.org is better in terms of seeing more threads at a glance.
There are many reasons against using the reddit code base. While it's open source in theory it's not structured in a way that allows easy updating.
Is there any solution that would be plug&play for a wordpress blog that you would favor Scott implementing?
Coding something himself would be more than a trivial inconvenience.
I also think you underrate the time cost of comment moderation. Want to be a blogger and wanting to moderate a forum are two different goals.
Scott uses WP, and it has plenty of comment ranking plugins. Here is one popular blog with a simple open voting system: http://www.preposterousuniverse.com/blog . It is probably not good enough for SSC, but many other versions are available. As I said, Scott is not interested in improving the commenting system, and probably is not interested in taking any steps beyond great writing toward improving the blog's popularity, either.
That has voting but it doesn't seem to have threaded comments. That means switching to that plugin would break all the existing comment threads.
I would guess that the main issue is that he doesn't want to do work to improve it.
Arguing what's an improvement also isn't easy.
If I look at the blogs of influential people who do put effort into it, I don't see that they all use a comment solution that Scott refuses to use.
The amount of comments can be rather overwhelming as it is. Do you want a larger SSC community, for the ideas to get a wider audience, or what?
It is overwhelming because it is poorly formatted and presented, not because of the volume. There are plenty of forums with better comment formatting, like reddit, userfriendly.org, or slashdot. Lack of comment ranking does not help readability, either.
I find that Bakkot's ~new~ marker on new comments and the dropdown list of new comments are enough to get by with; for me, the quantity really is the overwhelming aspect on the more popular posts.
Other forums have lots more comments, yet are easier to navigate through.
SSC getting a lot more traffic might change it and not necessarily for the better.
Self Help Books
I'm looking to buy a couple audiobooks from Amazon. Any good recommendations?
This is a filter rather than a recommendation, but read the reviews to find out whether people used the book rather than just finding it a pleasant read.
What are you hoping to improve about your life?
Right now I think my two weakest points are:
Either I'm put down as "crazy" or put on a pedestal as a "genius", but I'm always put aside, and have very few friends. My love life is similarly disastrous, but I don't think there's a book for people who fall in love too hard, too soon, and too easily.
Tentative suggestion: Maybe you need to live somewhere where you have more access to smart people.
They're a bit hard to come by, and, let's face it, we can be hard to live with even among ourselves.
What exactly causes a person to stalk other people? Is there research that investigates the question when people start to stalk and when they don't?
To what extent is getting a stalker a risk worth thinking about before it's too late?
No research, just my personal opinion: borderline personality disorder.
First the stalker is obsessed by the person because the target is the most awesome person in the universe. Imagine a person who could give you infinitely many utilons, if they wanted to. Learning all about them and trying to befriend them would be the most important thing in the world. But at some moment, there is an inevitable disappointment.
Scenario A: The target decides to avoid the stalker. At the beginning the stalker believes it is merely a misunderstanding that can be explained, that perhaps they can prove their loyalty by persistence or something. But later they give up hope, or receive a sufficiently harsh refusal.
Scenario B: The stalker succeeds in befriending the target. But they are still not getting the infinite utilons, which they believe they should be getting. So they try to increase the intensity of the relationship to impossible levels, as if trying to become literally one person. At some moment the target refuses to cooperate, or is simply unable to cooperate in the way the stalker wants them to, but to the stalker even this seems like a spiteful refusal.
In both scenarios, the stalker now feels hurt and cheated, and wants revenge. Projecting their false beliefs on the target, they believe the target has lied to them about the infinite utilons; they blame the target for starting this whole process, and for destroying the stalker's life. (In the next mood swing, the stalker may offer forgiveness to the target, if the target agrees to give them the infinite utilons now. Then they become angry again, etc.)
But maybe there are more possible mechanisms than this one. Also, my model does not explain why the stalker targets one specific person instead of multiple people, or everyone.
I think it is worth thinking about, but I am not sure what specific advice to offer except for (a) avoiding everyone "weird", which seems like an overkill, and (b) using a pseudonym and other methods of protecting your privacy if you want to become even a bit famous.
I would certainly recommend to everyone who wants to become famous (as a blogger, singer, actor, etc.) to choose a pseudonym, stick to it, and never reveal anything personal. (Probably not even the city you live in; I would imagine that the idea that you are geographically distant would discourage most possible stalkers.)
The only anonymous celebrity I can think of is Bansky.
Staying anonymous is not compatible with becoming famous.
*Banksy
He's so anonymous I don't even know how to spell his (or maybe her) name! :-)
I would guess most people become famous before they realize the advantage of anonymity, and then it's too late to start with a fresh name.
But it's also possible that it's simply not worth the effort, because when you become famous enough, someone will dox you anyway.
It could be interesting to know how much of an advantage (a trivial inconvenience for wannabe stalkers) a pseudonym provides when your real name can easily be found on Wikipedia, e.g. "Madonna". Or how big an emotional difference it makes for a potential stalker whether a famous blogger displays their photo on their blog or not.
My favorite anonymous person is B. Traven.
Satoshi Nakamoto is also famous and pseudonymous, but this conjunction is very rare IMO.
Aha, thank you, a second example. Though I don't know if he's known by name in the general population.
I'm at the moment quite unsure how to handle a girl who seems to have bipolar depression and wants to have a relationship with me.
Four years ago I think she was in a quite stable mental state (I'm more perceptive today than I was back then). At the time she turned me down. I haven't seen her for a while, and now she seems to be pretty broken as a result of mobbing in an environment that she has since left.
On the one hand, there is a desire in me to try to fix her. Having a physical relationship with her also has its appeal. On the other hand, I can't see myself being personally open with her as long as she is in that messed-up mental state.
I've had a 3 year relationship with a woman I thought I could fix. She said she'd try hard to change, I said I'd help her, I tried to help her and was extremely supportive for a long time. It was emotionally draining because behind each new climbed mountain there was another problem, and another, and another. Every week a new thing that was bad or terrible about the world. I eventually grew tired of the constant stream of disasters, most stemming from normal situations interpreted weirdly then obsessed over until she broke down in tears. It became clear that things were not likely to ever get better so I left.
There were a great number of fantastic things about this woman; we were both breakdancers and rock climbers, we both enjoyed anime and films, we shared a love for spicy food and liked cuddling, we both had good bodies. We had similar mindsets about a lot of things.
I say all this so that you understand exactly how much of a downside an unstable mental state can be. So that you know that all of these great things about her were, in the end, not enough. Understand what I mean when I tell you it was not worth it for me and that I recommend against it. I lost 3 years of time I could have spent making progress, and came out of it with no energy. If you do plan to go for it anyway, set a time limit on how long you will try to fix her before letting go, some period of time less than half a year. I'll answer any questions that might seem useful.
Trying hard to change is not useful for changing. It keeps someone in place. Someone who has emotional issues because they obsess too much doesn't get a benefit from trying harder. Accepting such a frame is not the kind of mistake I would make.
If a person breaks down crying, I'm not dissociating and going into a low-energy state. It rather draws me into the situation and makes me more present. But I'm not sure whether it brings me into a position where I consider the other person an agent rather than a Rubik's cube to be solved.
Yes, well, I wasn't a rationalist at the time, nor did I know enough about psychology to know the right way to help a person whose father... Well, I cannot say the exact thing, but suffice to say that if I ever meet the man, at least one of us is going to the hospital. I'm rather non-violent at all other times. There wasn't exactly a how-to guide I could read on the subject.
I am also the kind of person who would be drawn out and try to help a person who breaks down crying. You use your energy to help with their problems, and have less left for yourself. It starts to wear on you when you get into the third year of it happening every second week like clockwork, over such charming subjects as a thoughtless word by a professional acquaintance or having taken the wrong bins out. Bonus points for taking the wrong bins out being a personal insult that means I hate her.
Anyway, that really isn't the point. Telling me how to solve my Rubik's cube, which I am no longer in contact with, is not very helpful. The point is, I've been there and I want to help you make the right decision, whatever that may be for you.
As far as I see it, you basically were faced with a situation without having any tools to deal with it. That makes your situation quite different.
When sitting at the hospital bed of my father, who was talking confusedly because of morphine, my instinctual response was to do a nonverbal trance induction that had him in a silent state within half a minute.
Not because I read some how-to guide of how to deal with the situation but because NLP tools like that are instinctual behavior for me.
I'm very far from normal and so a lot of lessons that might be drawn from your experience for people that might be similar as you are, aren't applicable to me.
While reading a how-to guide doesn't give you any skills, there is psychological literature on how to help people with most problems.
You may be right about my lack of tools, and I can't honestly say I used the "try harder" in the proper manner, seeing as I hadn't been introduced to it at the time. I played the role of the supportive boyfriend and tried (unsuccessfully) to convince her to go to a therapist who was actually qualified in that sort of thing. I am suspicious, however, that you took pains to separate yourself into a new reference class before actually knowing it one way or the other. Unless of course you have a track record of taking massive psychological issues and successfully fixing them in other people... and are we really doing this? I mean, come on. A person offers to help and you immediately go for the throat, picking apart mistakes made in an attempt to help a person, then using rather personal things in a subtly judgemental manner. Do you foresee that kind of approach ending well? Is that really the way you want this sort of conversation to play out? I like to think we can do better.
I have information. Do you want it or not?
Are you sharing your feelings or asking for advice?
It's context for the question I asked earlier.
There's a lot of information that goes into decision making that I won't be open about publically, so I'm not really asking on specific advice.
That is a difficult situation, but the last sentence suggests that the correct answer is "no". :(
This sounds eerily close to the mystical varieties of theistic religions.
PubMed and Wikipedia give this:
The three predominant stalker typologies currently in use include Zona’s stalker–victim types, Mullen’s stalker typology, and the RECON stalker typology.
Most stalkers are lonely and socially incompetent, but all have the capacity to frighten and distress their victims.
While all subjects exhibited some similarities in stalking behaviors and demographic variables, including childhood attachment disruptions, no single profile of a "stalker" emerged.
... in less forensically focused samples of stalkers, rates of borderline personality are likely to be substantially higher, but confirmatory data is lacking
... celebrity worshippers may exhibit narcissistic features, dissociation, addictive tendencies, stalking behavior, and compulsive buying.
Apropos the "asking personally important questions of LW" posts, I have a question. I'm 30 and wondering what the best way is to swing a mid-career transition to computer science. Some considerations:
I already have some peripheral coding knowledge. I took two years of C back in high school, but have probably forgotten most of it by now. I often do coding-ish stuff like SQL queries or scripting batch files to automate tasks. Most code makes sense to me, and I can write a basic FizzBuzz-type algorithm if I look up the syntax.
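For reference, the "FizzBuzz type algorithm" mentioned above is the classic screening exercise: list the numbers 1 to n, substituting "Fizz" for multiples of 3, "Buzz" for multiples of 5, and "FizzBuzz" for multiples of both. A minimal Python version might look like:

```python
# Classic FizzBuzz: 1..n with multiples of 3 -> "Fizz", multiples of 5
# -> "Buzz", and multiples of both -> "FizzBuzz".
def fizzbuzz(n: int) -> list[str]:
    out = []
    for i in range(1, n + 1):
        if i % 15 == 0:       # divisible by both 3 and 5
            out.append("FizzBuzz")
        elif i % 3 == 0:
            out.append("Fizz")
        elif i % 5 == 0:
            out.append("Buzz")
        else:
            out.append(str(i))
    return out

print(" ".join(fizzbuzz(15)))
```

The point of the exercise is not difficulty but weeding out candidates who can't write a loop with branching at all; being able to produce it with a syntax lookup is already past that bar.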
I don't self-motivate very well. While I could probably teach myself a fair amount of code, without some sort of structure or project deadline, I would likely fail. If I tried to do this part-time, I would probably fail. (Also, I'm looking for a "clean break," such as it is, with my current, toxic job situation.) So I would think that I could either go to a bootcamp or go back to school.
Advantages to school: could defer my remaining loans and work part-time, degree would open more doors within my field (law) as well as outside it. Disadvantages: costs more in the long run, takes longer. Unknowns: post-bacc or MS? I can probably do well on the GRE, but my GPA was unimpressive, and light on math besides. It would have to be an MS program that worked with non-majors.
Advantages to bootcamp: much cheaper in the short run, over in a few months. Disadvantages: my savings would be drained by the tuition and interim living expenses; I would need to be damn sure of a job by the time I exited. Unknowns: which bootcamps are worthwhile? My city only has two: Coder Camps and Iron Yard. They appear to teach more or less totally different platforms.
Does anyone here have experience jumping the tracks to programming later in life? Did you take either of the above strategies, or neither? How did it work out, and what would you have done differently?
Some salient questions:
1) What's your motivation for wanting to do this?
2) What's your current background/skill set?
3) Where in the world are you?
I work on lots of large cases with complex subject matter (often source code itself) with reams of electronic haystacks that need to be sorted for needles. The closer my job is to coding, the more I enjoy it. I get satisfaction out of scripting mundane tasks. I like building and maintaining databases and coming up with absurdly specific queries to get what I need. I remember enjoying and being good at what programming I did do in high school. I am starting to get the creeping feeling that I took a wrong turn eight years ago.
I also feel somewhat stuck in my current position in patent law. Ordinarily step one would be to try a different environment to ensure it's not the workplace as opposed to the work. But most positions advertised in patent law demand an EE/CE/CS background, and I have a peripheral life science degree I use so little as to be irrelevant. I described my skill set as best I could in the parent post but right now it's just a cut above "extremely computer literate." I've dipped my toes but never found the time or motivation to dive (12 hour days kill the initiative).
Houston.
Consider writing a simple Android or iOS app, such as Tetris, from scratch. This should not take very long and has intrinsic rewards built in, like seeing your progress on your phone and showing it off to your friends or prospective employers. You can also work on it during the small chunks of time available, since a project like that can be easily partitioned. Figure out which parts of getting it from the spec to publishing on the Play/App store you like and which you hate. Record your experiences and share them here once done.
Several weeks ago I wrote a heavily upvoted post called Don't Be Afraid of Asking Personally Important Questions on LessWrong. I've been thinking about a couple of things since I wrote that post.
What makes LessWrong a useful website for asking questions which matter to you personally is that there are lots of insightful people here with a wide knowledge base. However, for some questions, LessWrong might be too much, or the wrong kind of, monoculture to provide the best answers. Thus, for weird, unusual, or highly specific questions, there might be better discussion boards, or online communities, to query. In general, Quora might be best for some questions. Stack Overflow might be best for programming questions, Math Overflow might be best for math questions, and some subreddits best for asking questions on very specific topics. What I would like to do is generate a repository of all the best websites for asking specialized or unusual questions across a variety of deep topics. I would turn this into another Discussion post on LessWrong.
Robin Hanson wrote:
Luke Muehlhauser goes over similar material on his own blog here. In short, the rationalist community can't or won't become expert in every subject it wants to extract information from, so it makes sense for it to defer to experts. If rationalists can't become experts themselves, identifying experts seems like the best strategy. This could be broken down into the skills of knowing how or where to find experts, and knowing how to identify which experts are the best or most trustworthy. Developing skills or heuristics like these could make great additions to LessWrong. I'd be willing to be part of this project, but I don't believe I'm competent enough to do it alone. However, an initial post on decent sources for getting answers to questions we can't get answered on LessWrong could be a springboard for such a discussion.
What are your thoughts on these topics?
Several weeks ago I wrote a heavily upvoted post called Don't Be Afraid of Asking Personally Important Questions on LessWrong. I figured it would only be due diligence to track cases where a user received advice from LessWrong and it backfired. In other words, to avoid bias in the record, we should notice what LessWrong as a community is bad at giving advice about. So, I'm seeking feedback. If you have anecdotes or data on how a plan or advice taken directly from LessWrong backfired, failed, or didn't lead to satisfaction, please share below. If you would like to keep the details private, feel free to send me a private message.
If the ensuing thread doesn't get enough feedback, I'll try asking this question as a Discussion post in its own right. If for some reason you think this whole endeavor isn't necessary, critical feedback about that is also welcome.
New research suggests that life may be hard to come by on certain classes of planets even if they are in the habitable zone, since they will lose their water early on. See here. This is noteworthy in that in the last few years almost all other research has pointed towards astronomical considerations not being a major part of the Great Filter, and this is a suggestion that slightly more of the Filter may be in our past.
Is there a way to sign up for cryonics and sign up to be an organ donor?
I know that some people opt to cryo-preserve only their brain. Is there a way to preserve the whole body, with the exception of the harvested organs? Is there any reason to? Does the time spent harvesting make a difference to how thoroughly the body is preserved?
No, because the folks responsible for each process need custody of the body in the same time frame after legal death.
But all the organ donor people need is for the body to be kept cold. I get that there's a legal conflict, but couldn't you leave your body to Alcor with instructions for them to hand it over to the organ donor people after they remove the head?
I believe it doesn't work like this; you need the circulatory system in order to perfuse the head, and in doing so the other organs are compromised. This could probably be avoided, but not without more surgical expertise/equipment than today's perfusion teams have, I think.
Oh, because the cryoprotectant is toxic. I forgot about that. I suppose other internal organs apart from the heart could be removed before perfusion starts, but the Alcor people are not qualified to officially do this. All in all, it seems like the sort of problem that would be solved if cryonics ever became big enough to create a sufficient shortage of organs that hospitals actually dedicated some resources to solving it.
I am not skilled at storytelling in casual conversation (telling personal anecdotes). How can I improve this? In particular, what is a good environment to practice while limiting the social cost of telling lame stories?
To learn storytelling, read a writing book. A good story has to have a setting, a character, situations that are ironic, funny, or heartfelt, and then a transformation. Sometimes it can be short, and other times longer and more like an epic. If you have all the elements, then learn how to keep an audience's attention with good language.
I'm considered pretty good in this respect. I think the #1 thing that helps is just paying attention to things a lot and having a high degree of situational awareness, which causes you to observe more interesting things and thus have more good stories to share. Reading quickly also helps.
When it comes to actually telling the stories, the most important thing is probably to pay attention to people's faces and see what sorts of reactions they're having. If people seem bored, pick up the pace (or simply withdraw). If they seem overexcited, calm it down.
One good environment to practice the skill of telling stories is tabletop role-playing games, especially as the DM/storyteller/whatever. In general, I think standards in this field are usually fairly low and you get a good amount of time to practice telling (very unusual) stories in any given session.
Although I consider myself average at storytelling, I'd like to be better. I've always been curious how one can improve this skill, rather than just leaving one's talent at it to the whims of social fortune, or whatever. As such, I've outsourced this question to my social media networks. If I haven't returned with some sort of response within a few days, feel free to remind me with either a reply to this comment or a private message.
Ping :)
I didn't get direct answers to your query, but I got some suggestions for dealing with the problem.
One person told me to defuse an awkward situation if a story isn't well-received with a joke:
Another friend suggested it's all about practice, and bearing through it:
That particular friend is a rationalist. By 'metacognition', I believe he meant 'notice you're practicing the right skills'. Basically, in your head, or on a piece of paper, break down the aspect(s) of storytelling you want to acquire as skills, and only spend time training those.
For example, you probably want to get into the habit of telling stories so that the important details that make the story pop come out, rather than getting into the habit of qualifying your points with background details that listeners won't care about. This is a problem I myself have with storytelling. In our own minds, we're the only ones who remember the causal details that led up to that extraordinary sequence of events that day on vacation. Our listeners don't know the details, because they weren't there, so leave them out and assume you haven't made any glaring omissions until someone asks.
Also, try starting small, I guess. Tell shorter anecdotes, and work up to bigger ones. Also, I don't believe it's disingenuous to mentally rehearse a short story beforehand. I used to believe it was, because good storytellers I know, like my uncle, always seem to tell stories off the cuff. Having a good memory, and not using too much jargon, helps. However, I wouldn't be surprised if good storytellers look back on their life experiences and think to themselves, 'my encounter today would make a great story'.
Here are some suggestions for generating environments limiting the social costs of telling lame stories.
Another friend of mine thought I was the one asking how to limit the social cost of telling lame stories, so he suggested I tell him stories of mine I haven't told him before, and he won't mind if they're bad. This isn't a bad suggestion. You yourself could go on social media and ask your friends if they want to get together to share stories. If you don't want to go on social media, try texting or calling a friend about it.
If being so direct still seems too awkward, invite a friend or two for coffee, or to the bar, under the pretext of hanging out, and specifically tell stories. Let your friend know that you think you've got a good story, but you might be awkward at telling it, so you hope they don't mind. If they're already your friend, I expect they'll be genuine and patient enough. However, I recommend ensuring that whoever you're telling a story to is in a neutral or good mood when you start. It's no good to practice storytelling on a friend who just went through a breakup, or lost their job yesterday, or whatever.
I believe this is a good idea for a meetup. At the CFAR alumni reunion this last summer, one alumna hosted a storytelling session. She had a whiteboard out with story suggestions, we passed around a stick to ensure everyone knew only the designated speaker was supposed to be talking, and there was a short period for questions after each story had finished. Who got to tell a story after the first volunteer was decided spontaneously, as more people eagerly volunteered because the stories were fun and hearing the last story jogged memories of their own experiences. However, you don't need all that stuff for a storytelling session to be worthwhile.
The room was jam-packed with nearly 30 people at first, and never had fewer than 10, and the storytelling session went on for several hours rather than only the one hour originally intended. To me, this is a testament to how much nerds, or folk from this cluster in person-space, want an environment to try these things in. If you attend a rationality or LessWrong meetup near where you live, try hosting or suggesting a session; otherwise try it with another group of friends you know, like a meetup for a different topic, or some group of gamers you're a part of. If that doesn't bear out, try again with someone else, or try starting smaller.
Has anyone come across research on parents' attitudes towards their sons when they can see that girls don't find their teen boys sexually attractive? If you saw that happening to your son, that has to affect how you feel about him compared to how you would feel if you saw that your son had sexual opportunities.
This relates to my puzzlement about the idea that the "sexual debut" happens as an organic developmental stage with a median age of 17, compared with the fact that quite a few straight young men miss this window and become the targets of social derision and contempt.
Reference:
Who is the 40-year-old virgin and where did he/she come from? Data from the National Survey of Family Growth.
http://www.ncbi.nlm.nih.gov/pubmed/19493289
I think parents want their children to be successful with their peers, particularly if they were themselves. I helped raise my cousins, and the youngest one was the last to really attract men; we felt really sorry for her because she was missing out, and she was depressed because her sisters were always attached and she was not. It's a social thing, but it doesn't really hurt you as a person. I do think, however, that your attractiveness level when you're young affects your perception of your attractiveness for the rest of your life. Evolutionarily, when we only lived to 40, it was important to keep the species going. Now, I think it is a matter of fitting in and finding one's place in society. Knowing at a young age that you are attractive helps keep you going as life goes along, whereas if you don't feel attractive, you internalize that idea and it can be very hard to break.
Now I'm puzzled by this too. Does the median age for young males making their "sexual debut" vary by culture?
I'd say it's more of a pity than derision and contempt, but then it probably depends on one's social circles.
Good futurology is different from storytelling in that it tries to make as few assumptions as possible. How many assumptions do we need to allow cryonics to work? Well, a lot.
The true point of no return has to be indeed much later than we believe it to be now. (Besides does it even exist at all? Maybe a super-advanced civilization can collect enough information to backtrack every single process in the universe down to the point of one's death. Or maybe not)
Our vitrification technology is not a secure erase procedure. Pharaohs also thought that their mummification technology was not a secure erase procedure. Even though we have orders of magnitude more evidence to believe we're not mistaken this time, ultimately, it's the experiment that judges.
Timeless identity is correct, and it's you rather than your copy that wakes up.
We will figure out brain scanning.
We will figure out brain simulation.
Alternatively, we will figure out nanites, and a way to make them work through the ice.
We will figure all of that out sooner than the expected time of the brain being destroyed by: slow crystal formation, power outages, earthquakes, terrorist attacks, meteor strikes, going bankrupt, economic collapse, nuclear war, unfriendly AI, etc. That's similar to the longevity escape velocity, although slower: to survive, you don't just have to advance technologies, you have to advance them fast enough.
All that combined, the probability of it all working out is really darn low. Yes, it is much better than zero, but still low. If I were playing Russian roulette, I would be happy to learn that instead of six bullets I'm playing with five. However, this relief would not stop me from being extremely motivated to remove even more bullets from the cylinder.
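To sketch why a long list of conjunctive assumptions drives the estimate down, here is the multiplication spelled out. Every probability below is a made-up placeholder for illustration, not a value anyone in this thread has endorsed:

```python
# Illustrative only: each probability is a placeholder, not an estimate
# from the thread. The point is just that conjunctive assumptions multiply.
assumptions = {
    "point of no return is later than currently believed": 0.8,
    "vitrification is not a secure erase":                 0.5,
    "timeless identity holds / revival counts as you":     0.5,
    "scanning + simulation (or nanites through ice) work": 0.3,
    "tech arrives before the stored brain is destroyed":   0.3,
}

p_total = 1.0
for name, p in assumptions.items():
    p_total *= p

# Even with each assumption fairly likely, the product is small.
print(f"combined probability: {p_total:.3f}")  # → 0.018
```

Five individually-plausible assumptions, none below 30%, still multiply out to under 2%, which is the shape of the argument above regardless of the exact inputs.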
The reason why the belief in an afterlife is not just neutral but harmful for modern people is that it demotivates them from doing immortality research. Dying is sure scary, but we won't truly die, so problem solved; let's do something else. And I'm worried about cryonics becoming this kind of comforting story for transhumanists. Yes, actually removing one bullet from the cylinder is much, much better than hoping that Superman will appear at the last moment and stop the bullet. But stopping after removing just one bullet isn't a good idea either. Some amount of resources is devoted to conventional longevity research, but as far as I understand, we're not hoping to achieve longevity escape velocity for currently living people, especially adults. Cryonics appears to be our only chance to avoid death, and I would be extremely motivated to make that chance as high as we possibly can. And I don't think we're trying hard.
I think trying to stop death is a rather pointless endeavour from the start, but I agree that the fact that most everyone has accepted it, and that we have some noble myths to paper it over, certainly keeps resources from being devoted to living forever. But then, why should we live forever?
Who is "we", and what do "we" believe about the point of no return? Surely you're not talking about ordinary doctors pronouncing medical death, because that's just irrelevant (pronouncements of medical death are assertions about what current medicine can repair, not about information-theoretic death). But I don't know what other consensus you could be referring to.
Surely I do. The hypothesis that after a certain period of hypoxia under the normal body temperature the brain sustains enough damage so that it cannot be recovered even if you manage to get the heart and other internal organs working is rather arbitrary, but it's backed up by a lot of data. The hypothesis that with the machinery for direct manipulation of molecules, which doesn't contradict our current understanding of physics, we could fix a lot beyond the self-recovery capabilities of the brain is perfectly sensible, but it's just a hypothesis without the data to back it up.
This, of course, may remind you of the skepticism towards heavier-than-air flying machines in the 19th century. And I do believe that some skepticism was a totally valid position to take, given the evidence they had. There are various degrees of establishing the truth, and "it doesn't seem to follow from our fundamental physics that it's theoretically impossible" is not the highest of them.
You missed a few:
About half of your list is actually an OR statement (timeless identity AND brain scanning AND simulation) OR (nanites through ice), and that doesn't even exhaustively cover the possibilities since at least it needs a term for unknown unknowns we haven't hypothesized yet. It's probably easiest to cover all of them with something like "it's actually possible to turn what we're storing when we vitrify a cryonics patient back into that person, in some form or another".
And the vast majority of cryonicists, or at least those in Less Wrong circles whom your post is likely to reach, already accept that the probability of cryonics working is low. But exactly how low they think the probability is, after considering the four assumptions your list reduces to, is something they've definitely already considered and would probably disagree with you on, if you actually gave a number for what "very low" means so we could see whether we even disagree (note: if it's above around 1%, consider how many assumptions there are in trying to achieve "longevity escape velocity", and maybe spread your bets).
And, as others have already pointed out, belief in cryonics doesn't really funge against longevity research. If anything, I expect the two are very strongly correlated together. At least as far as belief in them being desirable or possible goes, it's quite apparent that they're both ideas that are shared by a few communities such as our own and rejected by other communities including "society at large". How much we spend on each is probably affected by e.g. cryonics being a thing you can buy for yourself right now but longevity being a public project suffering from commons problems, so the correlation might be less strong and even inverse if you check it (I would be very surprised if it actually turned out to be inverse), but if so that wouldn't necessarily be because of the reasons you suggest.
I would say it's probably no higher than 0.1%.
But I'm by no means arguing against cryonics. I'm arguing for spending more resources on improving it. All sorts of biologists are working on longevity, but very few seem to work on improving vitrification. And I have a strong suspicion that it's not because nothing can be done about it: most of the time I talked to biologists about it, we were able to pinpoint non-trivial research questions in this field.
I think LW looks favorably on the work of the Brain Preservation Foundation and multiple people even donated.
While mainstream belief in an afterlife is probably a contributing factor in why we aren't doing enough longevity/immortality research, I doubt it's a primary cause.
Firstly, because very few people alieve in an afterlife, i.e. actually anticipate waking up in an afterlife when they die. (Nor, for that matter, do most people who believe in a Heaven/Hell sort of afterlife, actually behave in a way consistent with their belief that they may be eternally rewarded or punished for their behavior.)
Secondly, because the people who are in a position to do such research are less likely than the general population to believe in an afterlife.
And finally, because even without belief in an afterlife, people would still probably have a strong sense of learned helplessness around fighting death, so instead of a "Dying is sure scary, we won't truly die, so problem solved, let's do something else." attitude, we'd have a "Dying is sure scary, but we can't really do anything about it, let's do something else." attitude (I have a hunch the former is really the latter dressed up a bit.).
On this particular point, I would say that people who are in a position to allocate funds for research programs are probably about as likely as the general population to believe in the belief in afterlife.
Generally, I agree - it's definitely not the only problem. The USSR, where people were at least supposed to not believe in afterlife, didn't have longevity research as its top priority. But it's definitely one of the cognitive stop signs, that prevents people from thinking about death hard enough.
How about putting numbers on it? Without doing so, your argument is quite vague.
Have you actually looked at the relevant LW census numbers for what "we are hoping"?
I would estimate the cumulative probability as the ballpark of 0.1%
I was actually referring to the apparent consensus I see among researchers, but it's indeed vague. I should look up the numbers if they exist.
Most researchers don't do cryonics. I think a good majority of LW agrees that anti-aging research is underfunded. I don't buy the thesis that people who do cryonics are investing less effort into other ways of fighting aging.
The 2013 LW census asked the questions: "P(Anti-Agathics) What is the probability that at least one person living at this moment will reach an age of one thousand years, conditional on no global catastrophe destroying civilization in that time?"
"P(Cryonics) What is the probability that an average person cryonically frozen today will be successfully restored to life at some future time, conditional on no global catastrophe destroying civilization before then?"
And "Are you signed up for cryonics?"
The general takeaway is that even among people signed up for cryonics, the majority doesn't think its chances of working are bigger than 50%. But they do believe it's bigger than 0.1%.
I'd like to recommend a fun little piece called The Schizophrenia of Modern Ethical Theories (PDF), which points out that popular moral theories look very strange when actually applied as grounds for action in real-life situations. Minimally, the author argues that certain reasons for actions are incompatible with certain motives, and that this becomes incoherent if we suppose that these motives were (at least partially) the motivation we had to adopt that set of reasons in the first place.
For example, if you tend to your sick friend, but explain to them that you are (really only) doing so on utilitarian grounds, or on egoistic grounds, or because you are obligated to do so, etc., well... doesn't that seem off? And don't those reasons for action, presumably a generalization of a great deal of specific situations of this sort, seem incompatible with the original motivation that we felt was morally good?
If I tell my friend that I am visiting him on egoistic grounds, it suggests that being around him and/or promoting his well-being gives me pleasure or something like that, which doesn't sound off - it sounds correct. I should hope that my friends enjoy spending time around me and take pleasure in my well-being.
... no? I mean, maybe it will sound weird if you actually say it, because that's not a norm in our culture, but apart from that, it doesn't seem morally bad or off to me.
ETA: well, I suppose only helping someone on egoistic grounds sounds off, but the utilitarian/moral obligation motivations still seem fine to me.
I'm not sure even that does, when it's put in an appropriate way. "I'm doing this because I care about you, I don't like to see you in trouble, and I'll be much happier once I see you sorted out."
There are varieties of egoism that can't honestly be expressed in such terms, and those might be harder to put in terms that make them sound moral. But I think their advocates would generally not claim to be moral in the first place.
I think Stocker (the author of the paper) is making the following mistake. Utilitarianism, for instance, says something like this:
But Stocker's argument is against the following quite different proposition:
And one problem with this (from a utilitarian perspective) is that such a restructuring of our minds would greatly reduce their ability to experience happiness.
We have to distinguish between normative ethics and specific moral recommendations. Utilitarianism falls into the class of normative ethical theories. It tells you what constitutes a good decision given particular facts; but it does not tell you that you possess those facts, or how to acquire them, or how to optimally search for that good decision. Normative ethical theories tell you what sorts of moral reasoning are admissible and what goals are credible; they don't give you the answers.
For instance, believing in divine command theory (that moral rules come from God's will) does not tell you what God's will is. It doesn't tell you whether to follow the Holy Bible or the Guru Granth Sahib or the Liber AL vel Legis or the voices in your head.
And similarly, utilitarianism does not tell you "Sleep with your cute neighbor!" or "Don't sleep with your cute neighbor!" The theory hasn't pre-calculated the outcome of a particular action. Rather, it tells you, "If sleeping with your cute neighbor maximizes utility, then it is good."
The idea that the best action we can take is to self-modify to become better utilitarian reasoners (and not, say, self-modify to be better experiencers of happiness) doesn't seem like it follows.
It looks like we're in violent agreement. I mention this only because it's not clear to me whether you were intending to disagree with me; if so, then I think at least one of us has misunderstood the other.
No, I was intending to expand on your argument. :)
(warning brain dump most of which probably not new to the thinking on LW) I think most people who take the Tegmark level 4 universe seriously (or any of the preexisting similar ideas) get there by something like the following argument: Suppose we had a complete mathematical description of the universe, then exactly what more could there be to make the thing real (Hawking's fire into the equations).
Here is the line of thinking that got me to buy into it. If we ran a computer simulation, watched the results on a monitor, and saw a person behaving just like us, then it would be easy for me to interpret their world and their mind etc. as real (even if I could never experience it viscerally, living outside the simulation). However, if we are willing to call one simulation real, then we get into a slippery-slope problem, which I have no idea how to avoid, whereby any physical phenomenon implementing any program, from the perspective of some universal Turing machine, must really exist. So it seems to me that if we believe some simulation is real, there is no obvious barrier to believing every (computable) universe exists. As for whether we stop at computable universes or include more of mathematics, I am not sure anything we would call conscious could tell the difference, so perhaps it makes no difference.
(Resulting beliefs + aside on decision theory) I believe in a Tegmark Level 4 multiverse with no reality-fluid measure (as I have yet to see a convincing argument for one) a la http://lesswrong.com/r/discussion/lw/jn2/preferences_without_existence/ . Moreover, I don't think there is any "correct" decision theory that captures what we should be doing. All we can do is pick the one that feels right with regard to our biological programming. Which future entities are us, how many copies of us there will be, who we should care about, etc. are all flaky concepts at best. Of course, my brain won't buy the idea that I should jump off a bridge or touch a hot stove, but I think it is implausible that this will follow from any objective optimization principle. Nature didn't need a decision theory to decide whether it is a good idea to walk into a teleporter machine if two of us walk out the other side. We have our built-in shabby biological decision theory, and we can innovate on it theoretically, but there is no objective sense in which some particular decision theory will be the right one for us.
My approach is that everything is equally real, just not everything is equally useful. In a meta level, talking about what's more real is not useful outside a specific setting. Unicorns are real in MLP, cars are real in the world we perceive, electrons are real in Quantum Electrodynamics, virtual particles are real in Feynman diagrams, agents are real in decision theories, etc.
Can you expand on this a bit?
Elon Musk often advocates looking at problems from a first-principles calculation rather than by analogy. My question is what this kind of thinking implies for cryonics. Currently, the cost of full-body preservation is around $80k. What could be done in principle with scale?
Ralph Merkle put out a plan (although lacking in details) for cryopreservation at around 4k. This doesn't seem to account for paying the staff or transportation. The basic idea is that one can reduce the marginal cost by preserving a huge number of people in one vat. There is some discussion of this going on at Longecity, but the details are still lacking.
I've seen extremely low plastination estimates due to the lack of maintenance costs. Very speculative obviously, and the main component of the cost is still the procedure itself (though there are apparently some savings here as well).
Currently the main cost in cryonics is getting you frozen, not keeping you frozen. For example, Alcor gives these costs for neuropreservation:
The CMS fund is what covers the Alcor team being ready to stabilize you as soon as you die, and transporting you to their facility. Then your cryopreservation fee covers filling you with cryoprotectants and slowly cooling you. Then the PCT covers your long term care. So 69% of your money goes to getting you frozen, and 31% goes to keeping you like that.
(Additionally I don't think it's likely that current freezing procedures are sufficient to preserve what makes you be you, and that better procedures would be more expensive, once we knew what they were.)
EDIT: To be fair, CMS would be much cheaper if it were something every hospital offered, because you're not paying for people to be on deathbed standby.
So, for how long will that $25K keep you frozen? Any estimates?
I believe the intention is "unlimitedly long", which is reasonable if (1) we're happy to assume something roughly resembling historical performance of investments and (2) the ongoing cost per cryopreservee is on the order of $600/year.
The question is whether the cryofund can tolerate the volatility.
Aha, that's the number I was looking for, thank you.
Note that it's just a guess on my part (on the basis that a conservative estimate is that if you have capital X then you can take 2.5% of it out every year and be pretty damn confident that in the long run you won't run out barring worldshaking financial upheavals). I have no idea what calculations Alcor, CI, etc., may have done; they may be more optimistic or more pessimistic than me. And I haven't made any attempt at estimating the actual cost of keeping cryopreservees suitably chilled.
Didn't you say it's on the order of $600/year?
It sounds as if I wasn't clear, so let me be more explicit.
(Why 2.5%? Because I've heard figures more like 3-4% bandied around in a personal-finance context, and I reckon an institution like Alcor should be extra-cautious. A really conservative figure would of course be zero.)
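For concreteness, here is the arithmetic behind that guess. The $25K figure and the 2.5% withdrawal rate are the numbers from this thread; whether they match Alcor's actual planning is an open question:

```python
# Safe-withdrawal sketch: spend a fixed fraction of the fund each year
# and rely on investment returns to keep the principal roughly intact.
pct_fund = 25_000        # long-term care amount discussed above, USD
withdrawal_rate = 0.025  # conservative rate from the parent comment

annual_budget = pct_fund * withdrawal_rate
print(f"sustainable maintenance budget: ${annual_budget:.0f}/year")  # → $625/year
```

That is where the "on the order of $600/year" figure comes from; at the more aggressive 3-4% rates used in personal finance, the same fund would support $750-1000/year.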
Ah, I see. I think I misread how the parentheses nest in your post :-)
So you have no information on the actual maintenance cost of cryopreservation and are just working backwards from what Alcor charges.
I'm having doubts about this number, but that's not a finance thread. And anyway, in this context what matters is not reality, but Alcor's estimates.
That's debatable -- inflation can decimate your wealth easily enough. Currently inflation-adjusted Treasury bonds (TIPS) trade at negative yields.
Correct.
I did try to make it as clear as I could that I do too...
Well, I defined it as the maximum amount you can take out without running out of money. I agree that if instead you define it as the maximum net outflow that (with some probability close to 1) leaves your fortune increasing rather than decreasing in both long and short terms, it could be negative in times of economic stagnation.
No, ve said that "unlimitedly long" is reasonable if that's the cost. Ve didn't say that that was the cost.
How do you track and control your spending? Disregarding financial privacy, I started paying by card for everything, which allows me to track where I spend my money, but not really on what. I find that I generally spend less than I earn, because spending money somehow hurts.
I did a rough estimation of my normal monthly costs of living and then added a small amount for fun. The rest of my monthly paycheck gets semi-automatically invested in ETFs and can't be used without transaction costs. I have a small buffer account that I can use for unexpected expenses and if this happens I'll be aware of it the next month and try to spend less and grow the buffer account.
MoneyDashboard.com - links directly to my credit cards and bank accounts. I hear that the US equivalent is mint.com
My income is variable and hasn't been great lately. As a result, several months ago I flipped the "I'm poor!" switch that's been lingering in my brain since I was a student, and so I avoid almost all unnecessary spending (a small recreation budget is allowed, for sanity, but otherwise it's necessities and business expenses only). Every few months I review spending to see if there are any excessive categories, but my intuition has been pretty good.
And yeah, everything on plastic. Not even because of tracking, mostly because Visa gives me 1% cash back, which is a better bribe than anyone else offers.
Some cards will give you 1.5% back and I think I've seen an ad for a Citibank card that gives you 1% on purchase plus another 1% on payment.
Most of those have annual fees, though; I've done the math, and my spending isn't high enough to justify them. My 1% card is free. Also, I have my credit card number memorized, so changing it would impose a fairly high annoyance burden on me. But it's worth noting for those who have higher spending patterns than I do (~$1000-1500/month on credit).
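The break-even math here is simple: a fee card only wins once the extra cash back covers the fee. A sketch, where the ~$1000-1500/month spend is from the parent comment but the 2% rate and $120 fee are hypothetical placeholders:

```python
# Break-even spend for a hypothetical fee card vs a free 1% card.
free_rate = 0.01    # free card: 1% back
fee_rate = 0.02     # hypothetical fee card: 2% back
annual_fee = 120.0  # hypothetical annual fee, USD

# The fee card earns an extra (fee_rate - free_rate) per dollar spent,
# so it breaks even at annual_fee / (fee_rate - free_rate) of spend.
break_even = annual_fee / (fee_rate - free_rate)
print(f"break-even spend: ${break_even:.0f}/year")  # → $12,000/year

# Near the top of the ~$1000-1500/month range ($15,000/year), the net gain is:
net_gain = 15_000 * (fee_rate - free_rate) - annual_fee
print(f"net gain at $15k/year of spend: ${net_gain:.0f}")  # → $30
```

So at that spending level a fee card is roughly a wash, which matches the parent comment's conclusion.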
Nope.
Here is the 1.5% Capital One card.
Here is the 2% Citi card.
For clarity, I'm in Canada. All the card offers I've seen up here that are meaningfully better than 1% have fees. Americans can take note of those, though.
Ah. Sorry for my presumption.
Once you have a few thousand socked away, remember to start investing and picking up your free money.
I have a spreadsheet in which I record every financial transaction, and enter all future transactions, estimated as necessary, out to a year ahead. Whenever I get a bank statement, credit card statement, or the like, I compare everything in it with the spreadsheet and correct things as necessary. I don't try to keep track of cash spent out of my pocket. I tried that once, but found it wasn't practical. The numbers would never add up and there would be no independent record to check them against.
One row of the spreadsheet computes my total financial assets, which I observe ticking upwards month by month.
I don't record in detail what I buy, only the money spent and where (which is a partial clue to what I bought). I'm sufficiently well off that I don't need to plan any of my expenditure in detail, only consider from time to time whether I want to direct X amount of my resources in the way I observe myself doing.
I spend less than I earn, because it seems to me that that is simply what one does, if one can, in a sensibly ordered life.
Congratulations, you're about a million times more organized than most people. Even my girlfriend isn't that particular - she records transactions, and has a dozen budget categories, but she doesn't predict a year out.
Animal Charity Evaluators have updated their top charity recommendations, adding Animal Equality to The Humane League and Mercy for Animals. Also, their donation-doubling drive is nearly over.
Why would an effective altruist (or anyone wanting their donations to have a genuine beneficial effect) consider donating to animal charities? Isn't the whole premise of EA that everyone should donate to the highest utilon/$ charities, all of which happen to be directed at helping humans?
Just curiosity from someone uninterested in altruism. Why even bring this up here?
We don't all agree on what a utilon is. I think a year of human suffering is very bad, while a year of animal suffering is nearly irrelevant by comparison, so I think charities aimed at helping humans are where we get the most utility for our money. Other people's sense of the relative weight of humans and animals is different, however, and some value animals about the same as humans or only somewhat below.
To take a toy example, imagine there are two charities: one that averts a year of human suffering for $200 and one that averts a year of chicken suffering for $2. If I think human suffering is 1000x as bad as chicken suffering and you think human suffering is only 10x as bad, then even though we both agree on the facts of what will happen in response to our donations, we'll give to different charities because of our disagreement over values.
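Spelling the toy example out in code, using the hypothetical dollar figures and moral weights from the comment above:

```python
# Cost to avert one year of suffering, per the toy example above (USD).
cost_human_year = 200.0
cost_chicken_year = 2.0

def best_charity(human_to_chicken_weight):
    """Pick the charity with more utilons per dollar, measuring utilons
    in chicken-suffering-year equivalents."""
    human_utilons_per_dollar = human_to_chicken_weight / cost_human_year
    chicken_utilons_per_dollar = 1.0 / cost_chicken_year
    if human_utilons_per_dollar > chicken_utilons_per_dollar:
        return "human charity"
    return "chicken charity"

print(best_charity(1000))  # weighting humans 1000x: 5.0 vs 0.5 utilons/$
print(best_charity(10))    # weighting humans 10x: 0.05 vs 0.5 utilons/$
```

Same facts, same arithmetic, opposite donations: the crossover sits at a weight of 100x (where $200/100 = $2), so which side of it your values fall on fully determines the choice.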
In reality, however, it's more complicated. The facts of what will happen in response to a donation are uncertain even in the best of times, but because a lot of people care about humans the various ways of helping them are much better researched. GiveWell's recommendations are all human-helping charities because of a combination of "they think humans matter more" and "the research on helping humans is better". Figuring out how to effectively help animals is hard, and while ACE has good people working on it, they're a small organization with limited funding and their recommendations are still much less robust than GiveWell's.
I may write a full discussion thread on this at some point, but I've been thinking a lot about undergraduate core curriculum lately. What should it include? I have no idea why history has persisted in virtually every curriculum I know of for so long. Do many college professors still believe history has transfer-of-learning value in terms of critical thinking skills? Why? The transfer of learning thread touches on this issue somewhat, but I feel like most people there are overvaluing their own field, hence computational science is overrepresented and social science, humanities, and business are underrepresented. Any thoughts?
Here is Eliezer's post on the subject.
Scott Alexander from Slate Star Codex has the idea that if the humanities are going to be taught as part of a core curriculum, it might be better to teach the history of them backwards.
When I was in high school, I discussed this very idea with my Philosophy teacher. She said that (at least here in Italy) curricula for humanities are still caught in the Hegelian idea that history unfolds in logical structures, so that it's easier to understand them in chronological order.
I reasoned instead that contemporary subjects are more relevant, more interesting and we have much more data about them, so they would appeal much better to first year students.
History, as it's taught, seems to aim at producing critical thinking in a different sense than what LessWrong typically tries to optimize for. I figure LessWrong optimizes for the critical thinking of the individual, which benefits from an education in logic, computer science, and mathematics, along with a general knowledge of the natural sciences. I'm not sure how much history would contribute to that sort of skill, and others in this thread seem skeptical of its value.
However, learning history seems like it improves how critically groups and societies can think together, across a few domains key to society. A general education in history as part of the core curriculum could be a heuristic for circumventing group irrationality, and mob rule, in a way that critical thinking skills designed only for the individual might not. Understanding the history of one's own nation in a democracy gives the electorate knowledge of what's worked in the past, what's different about the nation in the present compared to the past, and the context in which policy platforms and cultural and political divides were forged. This extends to the less grand history of the geographical location in which one resides, or was raised, within one's own nation. An understanding of the history of other nations, and the world, gives one the context in which international relations have formed over centuries.
Here's an example of how knowledge of world history and international relations might be useful. If the executive branch of the United States federal government wants to declare war on a country, to intervene against a predator country on behalf of a victimized one, it makes sense to understand the context of that conflict. If the history of those faraway regions is known, then the electorate can check the narrative the government puts forward against what they learned in school. Even very recent history could be useful knowledge in this regard. If the electorate of the United States had been aware of the hundreds of years of colonial or ideological conflict, and how intractably stupid the whole thing is and has been, they might have been warier of condoning invasions of Iraq, Vietnam, the former Yugoslavia, etc. Knowing the background of such regions in the future, by having better access to options for learning about these regions in undergraduate education, might make whole generations less likely to vote for parties or presidents who will sink the United States into costly and drawn-out wars that are negative-sum games for all sides.
Groupthink and other pitfalls of group psychology that aren't circumvented by merely knowing science might be avoided by everyone knowing more history. In writing this, I'm realizing that the value of history would be in having enough information as a baseline to not make mistakes of ignorance, the same way that knowing biology or psychology might. This decreases the chances that a society at large will make mistakes, like supporting a stupid war, or rallying behind an anti-vaccination movement. However, it doesn't seem to fall into the more valuable category of subjects which (presumably) directly improve reasoning ability for individuals, such as math and computer science.
My above illustration is a hypothesis or thought experiment for how an education in history might be valuable for critical thinking skills. If it's mostly valuable for having a better democracy with better politics, then perhaps the question can't be divorced from what other education makes for a better democratic polity. That leads us to opening the Pandora's Box of producing better thinking on politics, which is its own behemoth of a problem.
I just want to point out for the record that if we're discussing a core curriculum for undergraduate education, I figure it would be even better to get such a core curriculum into the regular, secondary schooling system that almost everyone goes through. Of course, in practice, implementing that would require an overhaul of the secondary schooling system, which seems much more difficult than changing post-secondary education. The reason is probably that changing the curriculum for post-secondary education, or at least for one post-secondary institution, is easier: there is less bureaucratic deadweight, a greater variety of choice, and nimbler mechanisms in place for instigating change. So, I understand where you're coming from in your original comment above.
History illuminates the present. A lot of people care about it, a lot of feuds stem from it, and a lot of situations echo it. You can't understand the Ukrainian adventures Putin is going on without a) knowing about the collapse of the Soviet Union to understand why the Russians want it, b) knowing about the Holodomor to understand why the Ukrainians aren't such big fans of Russian domination, and arguably c) knowing about the mistakes the west made with Hitler, to get a sense of what we should do about it.
History gives you a chance to learn from mistakes without needing to make them yourself.
History is basically a collection of the coolest stories in human history. How can you not love that?
Given how hard it is to establish causality, history - where you don't have a lot of the relevant information and there is a lot of motivated reasoning going on - is often a bad source for learning.
Which is better - weak evidence, or none?
Often none.
For example, if a piece of evidence E is such that:
- I ought to, in response to it, update my confidence in some belief B by some amount A, but
- I in fact update my confidence in B by A2,
and updating by A2 gets me further from justified confidence than I started out, then to the extent that I value justified confidence in propositions I was better off without E.
Incidentally, this is also what I understood RowanE to be referring to as well.
But it's only bad because you made the mistake of updating by A2. I often notice a different problem: people who always argue A=0 and then present an alternative belief C with no evidence. On some issues we can't get a great A, but if the best evidence available points to B, we should still assume it's B.
Agreed.
Agreed.
Yes, I notice that too, and I agree both that it's a problem, and that it's a different problem.
An interesting question. Let me offer a different angle.
You don't have weak evidence. You have data. The difference is that "evidence" implies a particular hypothesis that the data is evidence for or against.
One problem with being in love with Bayes is that the very important step of generating hypotheses is underappreciated. Notably, if you don't have the right hypothesis in the set of hypotheses that you are considering, all the data and/or evidence in the world is not going to help you.
To give a medical example: if you are trying to figure out what causes ulcers and you are looking at whether the evidence points at diet, stress, or genetic predisposition, you are likely to find lots of weak evidence (and people actually did). Unfortunately, ulcers turned out to be a bacterial disease, and all that evidence actually meant nothing.
Another problem with weak evidence is that "weak" can be defined as evidence that doesn't move you away from your prior. And if you don't move away from your prior, well, nothing much changed, has it?
"Weak" means that it doesn't change your beliefs very much - if the prior probability is 50%, and the posterior probability is 51%, calling it weak evidence seems pretty natural. But it still helps improve your estimates.
Only if it's actually good evidence and you interpret it correctly. Another plausible interpretation of "weak" is "uncertain".
Consider a situation where you unknowingly decided to treat some noise as evidence. It's weak and it only changed your 50% prior to a 51% posterior, but it did not improve your estimate.
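A minimal Bayes-update sketch of the difference (the 0.51/0.49 likelihoods are made-up numbers for illustration): genuinely weak evidence nudges the posterior a little, while noise warrants no update at all, so treating it as evidence just moves you away from justified confidence.

```python
def bayes_update(prior, p_e_given_b, p_e_given_not_b):
    """Posterior P(B|E) from prior P(B) and the likelihoods P(E|B), P(E|~B)."""
    num = prior * p_e_given_b
    return num / (num + (1 - prior) * p_e_given_not_b)

# Genuinely weak evidence: the observation is only slightly more likely if B holds.
print(bayes_update(0.50, 0.51, 0.49))  # 0.51 -- a 50% prior barely moves

# Noise: the observation is equally likely either way, so the correct update
# is no update at all (the posterior equals the prior).
print(bayes_update(0.50, 0.50, 0.50))  # 0.5
# Treating that noise as 0.51-vs-0.49 evidence anyway shifts your estimate
# with no connection to the truth -- the failure mode described above.
```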
Overconfidence is a huge problem. Knowing that you don't understand how the world works is important. To the extent that people believe they can learn significant things from history, "weak evidence" can often produce problems.
If you look at the West's Ukraine policy, it didn't make a treaty accepting the Russian annexation of Crimea in return for stability in the rest of Ukraine. That might have prevented the mess we have at the moment.
In general political decisions in cases like this should be made by doing scenario planning.
It's one thing to say that Britain and France should have declared war on Germany earlier. It's quite another to argue that the West should take military action against Russia.
Accept an annexation in return for promises of stability? Hmm, reminds me of something...
That's partly the point, we didn't go that route and now have the mess we have at the moment.
And what happened the last time we DID go that route?
Making decisions based on a single data point is not good policy.
Also the alternative to the Munich agreements would have been to start WWII earlier. That might have had advantages but it would still have been very messy.
Might have, but my money isn't on it. You think Putin cares about treaties? He's a raw-power sort of guy.
And yes, the scenarios are not identical - if nothing else, Russia has many more ICBMs than Hitler did. Still, there are ways to take action that are likely to de-escalate the situation - security guarantees, repositioning military assets, joint exercises, and other ways of drawing a clear line in the sand. We can't kick him out, but we can tell him where the limits are.
(Agreed on your broader point, though - we should ensure we don't draw too many conclusions.)
Putin does care about the fact that Ukraine might join NATO or the EU free trade zone. He probably did feel threatened by what he perceived as a color revolution with a resulting pro-Western Ukrainian government.
At the end of the day Putin doesn't want the crisis to drag on indefinitely, so sooner or later it's in Russia's interest to have a settlement. Russia relies on selling its gas to Europe.
Having Crimea under embargo is quite bad for Russia. It means it's costly to prop up Crimea's economy so that its population doesn't feel Crimea has decayed under Russian rule, which would produce unrest.
On the other hand, it's not quite clear that US foreign policy has a problem with dragging out the crisis. It keeps NATO together even though Europeans are annoyed at being spied on by the US. It makes it defensible to have foreign military bases inside Germany that spy on Germans.
Do you really think joint exercises contribute to deescalation?
As far as repositioning military assets goes, placing NATO assets inside Ukraine is the opposite of deescalation.
The only real way to de-escalate is a diplomatic solution, and there probably isn't one without affirming Crimea as part of Russia.
There's a certain type of leader, over-represented among strongmen, that will push as far as they think they can and stop when they can't any more. They don't care about diplomacy or treaties, they care about what they can get away with. I think Putin is one of those - weak in most meaningful ways, but strong in will and very willing to exploit our weakness in same. The way to stop someone like that is with strength. Russia simply can't throw down, so if we tell them that they'd have to do so to get anywhere, they'd back off.
Of course, we need to be sure we don't push too far - they can still destroy the world, after all - but Putin is sane, and doesn't have any desire to do anything nearly so dramatic.
Putin gains domestic political strength from the conflict.
That assumes that you can simply change from being weak to being strong. In poker you can do this by bluffing. In chess you can't; you actually have to calculate your moves.
Holding joint military exercises isn't strength if you aren't willing to use the military to fight.
Bailing out European countries is expensive enough. There's not really the money to additionally prop up Ukraine.
Only as long as he's winning.
NATO is, far and away, the strongest military alliance that has ever existed. They have the ability to be strong. When the missing element is willpower, "Man up, already!" is perfectly viable strategic advice.
Sometimes none, if the source of the evidence is biased and you're a mere human.
There are unbiased sources of evidence now?
That question doesn't have anything to do with the claim that you can make someone less informed by giving them biased evidence.
Some sources of evidence are less biased than others. Some sources of evidence will contain biases which are more problematic than others for the problem at hand.
Of course. But Rowan seemed to be arguing a much stronger claim.
How useful is knowing about Ukraine to the average person? What percentage of History class will cover things which are relevant? Which useful mistakes to avoid does a typical History class teach you about?
1) Depends how political you are. I'm of the opinion that education should at least give people the tools to be active in democracy, even if they don't use them, so I consider at least a broad context for the big issues to be important.
2) Hard to say - I'm a history buff, so most of my knowledge is self-taught. I'd have to go back and look at notes.
3) Depends on the class. I tend to prefer the big-picture stuff, which is actually shockingly relevant to my life (not because I'm a national leader, but because I'm a strategy gamer), but there's more than enough historians who are happy to teach you about cultural dynamics and popular movements. You think popular music history might help someone who's fiddling with a bass guitar?
tl;dr: having a set of courses for everyone to take is probably a bad idea. People are different and any given course is going to, at best, waste the time of some class of people.
A while ago, I decided that it would be a good thing for gender equality to have everyone take a class on bondage that consisted of opposite-gender pairs tying each other up. Done right, it would train students "it's okay for the opposite gender to have power, nothing bad will happen!" and "don't abuse the power you have over people." In my social circle, which is disproportionately interested in BDSM, this kinda makes sense. It may even help (although my experience is that by the time anyone's ready to do BDSM maturely, they've pretty much mastered not treating people poorly based on gender.) It would also be a miraculously bad idea to implement.
In general, I think it's a mistake to have a "core curriculum" for everyone. Within 5 people I know, I could go through the course catalog of, say, MIT, and find one person for whom nobody would benefit from them taking the course. (This is easier than it seems at first; me taking social science or literature courses makes nobody better off (the last social science course I took made me start questioning whether freedom of religion was a good thing. I still think it's a very good thing, but presenting me with a highly-compressed history of every inconvenience it's produced in America's history doesn't convince my system 1). Similarly, there exist a bunch of math/science courses that I would benefit greatly from taking, but would just make the social science or literature people sad. Also, I know a lot of musicians, for whom there's no benefit from academic classes; they just need to practice a lot.)
Having a typical LWer take a token literature class generally means they're going to spend ~200 hours learning stuff they'll forget exponentially. (This could be remedied by Anki, but there's a better-than-even chance the deck gets deleted the moment the final's over.) Going the other way, forcing writers to take calculus probably won't produce any tangible benefits, but it will make them pissed off and write things with science is bad plotlines. (Yes, most of us probably wish writers would get scientifically literate, but until we can figure out a way to make that happen, forcing them to take math and science courses is just going to have predictable effects on what they write and do you really think it helps to have a group of people who substantially influence culture to hate math and science?)
For the typical LWer, I'd go heavy on the math and CS with enough science (physics through psych) to counteract Dunning-Kruger, and some specialization, the idea being that math and CS are tools that let you take something you already know and find out something you didn't know for free, the sciences are there to reduce inferential gaps and eliminate illusory competence, and the specialization gets you a job. This would be very good for people-who-are-central-examples-of-LWers (although I'm sure there many are people here who this would be very bad for), but I have trouble imagining that this would work for more than a few percent of the population. In fact, for everyone going into a field that doesn't need a lot of technical knowledge, I'd look for the most efficient way to measure intelligence and conscientiousness (preferably separately), which looks very little like an undergraduate curriculum.
As a writer, I agree with you. I am horrible at math. In my life 2x3=5 most of the time. If I had to suffer and fail at Calculus when I can't multiply some days I would certainly start writing books about evil scientists abusing a village for its resources and then have the village revolt against its scientific masters with pitchforks. Throw in a great protagonist and a love interest and I have a bestseller with possible international movie rights.
If a field doesn't require a lot of technical knowledge, why bother with college in the first place? I'm not so sure how useful your examples are since most creative writers and musicians will eventually fail and be forced to switch to a different career path. Even related fields like journalism or band manager require some technical skills.
Obligatory SMBC comic. :)
Signalling, AKA why my friend majoring in liberal arts at Harvard can get a high-paying job even though college has taught him almost no relevant job skills.
Undergraduate core curriculum where, for whom, and for what purposes?
I think the idea of a core curriculum that contains things such as history is awful. Diversity is pretty useful.
Business in general is useful, but little of the relevant skills are well learned via lectures. Being able to negotiate is a useful business skill.
Diversity courses strike me as an odd combination of sociology, anthropology, and history, but since you specifically criticized history courses, I'm a bit confused as to why you like diversity courses. Are culturally-focused history courses such as history of hip-hop, Latin American culture, or women in American history better than standard history courses? Is there a certain category of business courses that does a better job than others? Are there any skills that can be easily taught in a lecture format? I have a friend who felt communications courses were very good at teaching negotiation strategies.
I like diversity in course offerings. That's not the same thing as liking courses that supposedly teach diversity.
I don't want a world in which every college student learns the same thing. As such I reject the idea of a core curriculum.
Probably courses that don't use textbooks but that do exercises with strong emotional engagement.
I was at personal development seminars where at the end of the day some people lie on the floor because of emotional exhaustion. I think doing a lot of deep inner work brings higher returns than learning intellectual theory.
How did he come to that conclusion? Has the amount he pays for the average thing he buys gone down because he has become much better at negotiating?
I only took one class in communications so I don't understand the field too well. The class itself seemed useful, but there was no mention of negotiation strategies. It would seem more likely that better negotiation leads to more offers than that better negotiation leads to a better offer. A smart businessman is going to know how to value the deal, and it's going to be hard to significantly change his price.
What practical effect did it have that make you consider it to be useful?
If you buy a car, in many cases a person with good negotiating skills can achieve a better price.
I think you have misinterpreted "Diversity is pretty useful" as "Diversity courses are pretty useful". My reading of ChristianKI's comment is that he meant "having different people take different courses is useful" and I would be rather surprised if he thought diversity courses as such were much use.
If I were designing a core curriculum off the top of my head, it might look something like this:
First year: Statistics, pure math if necessary, foundational biology, literature and history of a time and place far removed from your native culture. Classics is the traditional solution to the latter and I think it's still a pretty good one, but now that we can't assume knowledge of Greek or Latin, any other culture at a comparable remove would probably work as well. The point of this year is to lay foundations, to expose students to some things they probably haven't seen before, and to put some cognitive distance between the student and their K-12 education. Skill at reading and writing should be built through the history curriculum.
Second year: Data science, more math if necessary, evolutionary biology (perhaps with an emphasis on hominid evolution), basic philosophy (focusing on general theory rather than specific viewpoints), more literature and history. We're building on the subjects introduced in the first year, but still staying mostly theoretical.
Third year: Economics, cognitive science, philosophy (at this level, students start reading primary sources), more literature and history. At this point you'd start learning the literature and history of your native language. You're starting to specialize, and to lay the groundwork for engaging with contemporary culture on an educated level.
Fourth year: More economics, political science, recent history, cultural studies (e.g. film, contemporary literature, religion).
Um, the reason for studying Greek and Latin is not just because they're a far-removed culture. It's also because they're the cultures which are the memetic ancestors of the memes that we consider the highest achievements of our culture, e.g., science, modern political forms.
Also this suffers from the problem of attempting to go from theoretical to practical, which is the opposite of how humans actually learn. Humans learn from examples, not from abstract theories.
What do you mean with those terms?
Understanding the principle of evolution is useful but I don't see why it needs a whole semester.
1st year: 5 / 2nd year: 7 / 3rd year: 5 / 4th year: 4 That's over half their classes. I also counted 14 of those 21 classes are in the social sciences or humanities which seems rather strange after you denigrated the fields. Now the big question: how much weight do you put on the accuracy of this first draft?
It's pretty simple. I think the subjects are important; I'm just not too thrilled about how they're taught right now. Since there's no chance of this ever being influential in any way, I may as well go with the fields I wish I had rather than the ones I have.
As to accuracy: not much.
Fifth year: spent unemployed and depressed because of all the student debt and no marketable skills.
This is a curriculum for future philosopher-kings who never have to worry about such mundane things as money.
That was basically my education (I took 5 years of Latin, 2 of ancient greek, philosophy, literature, art) and the only reason I didn't end up homeless camping out in Lumifer's yard was because I learned how to do marketing and branding. I think having practical skills is a good idea. Trade and Technical schools are a great idea.
"Core curriculum" generally means "what you do that isn't your major". Marketable skills go there, not here; it does no one any good to produce a crop of students all of whom have taken two classes each in physics, comp sci, business, etc.
What counts as a 'marketable skill', or even what would be the baseline assumption of skill for becoming a fully and generally competent adult in twenty-first century society, might be very different from what was considered skill and competence in society 50 years ago. Rather than merely updating a liberal education as conceived in the Post-War era, might it make sense to redesign the liberal education from scratch? Like, does a Liberal Education 2.0 make sense?
What skills or competencies aren't taught much in universities yet, but are ones everyone should learn?
Perhaps we need to re-think what jobs and employment look like in the 21st century and build from there?
That seems like a decent starting point. I don't know my U.S. history too well, as I'm a young Canadian. However, a cursory glance at the Wikipedia page for the G.I. Bill in the U.S. reveals that it, among other benefits, effectively lowered the cost of college not only for veterans after World War II, but also for their dependents. The G.I. Bill was still used through 1973, by Vietnam War veterans, so that's millions more than I expected. As attending post-secondary school became normalized, it shifted toward being the status quo for getting better jobs. In favor of equality, people of color and women also demanded equal opportunity to such education by having discriminatory acceptance policies and the like scrapped. This was successful to the extent that several million more Americans attended university.
So, a liberal education that was originally intended for upper(-middle) class individuals came to be seen as a rite of passage, for status, and then as a way to stay competitive, for the 'average American'. This trend has extrapolated to the present. It doesn't seem to me the typical baccalaureate is optimized for what the economy needed in the 20th century, nor for what would maximize individuals' chances of employment success. I don't believe this is true for some STEM degrees, of course. Nonetheless, if there are jobs for the 21st century that don't yet exist, we're not well-equipped for those either, because we're not even equipped for the education needed for the jobs of the present.
I hope the history overview wasn't redundant, but I wanted an awareness of design flaws of the current education system before thinking about a new one. Not that we're designing anything for real here, but it's interesting to spitball ideas.
If not already in high school, universities might mandate a course on coding, or at least on how to navigate information and data better, the same way almost all degrees mandate a course in English or communications in the first year. It seems ludicrous this isn't already standard, and careers will only involve more understanding of computing in the future. There needs to be a way to make the basics of information science intelligible for everyone, like literacy, and pre-calculus.
There's an unsettled debate about whether studying the humanities increases critical thinking skills or not. Maybe the debate is settled, but I can't tell the signal from the noise in that regard. To be cautious, rather than removing the humanities entirely, maybe a class can be generated that gets students thinking rhetorically and analytically with words, but is broader or more topical than the goings-on of Ancient Greece.
These are obvious and weak suggestions I've made. I don't believe I can predict the future well, because I don't know where to start researching what the careers and jobs of the 21st century will be like.
Persuasive writing and speaking. Alternatively, interesting writing and speaking.
If you count the courses you suggest, there isn't much room left for a major.
I think a fruitful avenue of thought here would be to consider higher (note the word) education in its historical context. Universities are very traditional places and historically they provided the education for the elite. Until historically recently education did not involve any marketable skills at all -- its point was, as you said, "engaging with contemporary culture on an educated level".
Four to six classes a year, out of about twelve in total? That doesn't sound too bad to me. I took about that many non-major classes when I was in school, although they didn't build on each other like the curriculum I proposed.
It may amuse you to note that I was basically designing that as a modernized liberal arts curriculum, with more emphasis on stats and econ and with some stuff (languages, music) stripped out to accommodate major courses. Obviously there's some tension between the vocational and the liberal aims here, but I know enough people who e.g. got jobs at Google with philosophy degrees that I think there's enough room for some of the latter.
I studied at two state universities. At both of them, classes were measured in "credit hours" corresponding to an hour of lecture per week. A regular class was three credit hours and semester loads at both universities were capped at eighteen credits, corresponding to six regular classes per semester and twelve regular classes per year (excluding summers). Few students took this maximal load, however. The minimum semester load for full-time students was twelve credit hours and sample degree plans tended to assume semester loads of fifteen credit hours, both of which were far more typical.
Sure, but that's evidence that they are unusually smart people. That's not evidence that four years of college were useful for them.
As you probably know, there is a school of thought that treats college education as mostly signaling. Companies are willing to hire people from, say, the Ivies, because these people proved that they are sufficiently smart (by getting into an Ivy) and sufficiently conscientious (by graduating). What they learned during these four years is largely irrelevant.
Is four years of a "modernized liberal arts curriculum" the best use of four years of one's life and a couple of hundred thousand dollars?