If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

Open thread, Dec. 1 - Dec. 7, 2014

GiveWell's top charities updated today. Compared to previous recommendations, they have put Against Malaria Foundation back on the top charities list (partial explanation here), and they have also added an "Other Standout Charities" section.

8tog
Here's GiveWell's detailed announcement post. I've posted a summary of the new (and old) charities to LW Discussion.
6Metus
Also note that there is information on tax-deductibility of donations outside of the U.S. on that site. If you are paying a lot of income tax, you might be able to get some money back, donate even more, or some combination of the two.
9tog
Even more easily, you can visit this interactive tool I made and it'll tell you which charities are tax-deductible or tax-efficient in your country, and give you the best links to them. It also has a dropdown covering 18 countries, including some in which tax-efficient routes are far from obvious.
5Metus
Thank you. It is a bit of a shame that it is so complicated to donate tax-efficiently from one EU country to another. I can understand complications going from the US to the EU member states and vice versa but this is plenty strange.

Several weeks ago I wrote a heavily upvoted post called Don't Be Afraid of Asking Personally Important Questions on LessWrong. I thought it would only be due diligence to also track cases where users followed advice from LessWrong and it backfired. In other words, to avoid bias in the record, we should notice what LessWrong as a community is bad at giving advice about. So, I'm seeking feedback. If you have anecdotes or data of how a plan or advice directly from LessWrong backfired, failed, or didn't lead to satisfaction, please share below. If you would like to keep the details private, feel free to send me a private message.

If the ensuing thread doesn't get enough feedback, I'll try asking this question as a Discussion post in its own right. If for some reason you think this whole endeavor isn't necessary, critical feedback about that is also welcome.

[-]Shmi110

What cause would an NRx EA donate to?

8bramflakes
Depends on what kind of NRx. There isn't a single value system shared among them. The popular trichotomy is "Techno-commercialist / Theonomist / Ethno-nationalist" - I don't know about the first two, but the ethno-nationalists would probably disagree with a lot of GiveWell's suggestions.
3skeptical_lurker
Not uniformly, I think - Japan is an Ethno-nationalist state, and also used to be the world's largest supplier of foreign aid.
3[anonymous]
Ethno-nationalists certainly have no problem with geopolitics or mutually-beneficial investment, and foreign aid can be useful there.
3ZankerH
The most coherent proposal I've heard so far is applying being TRS at the polling place to charity: The principle of optimising your donations for cultural-marxist outrage.
1Azathoth123
Sarah Hoyt isn't quite NRx, but her recent (re)post here seems relevant. In particular, the old distinction between deserving and undeserving poor.
1IlyaShpitser
The Austrian "Iron Ring" party. Restore the Hapsburg Empire!

Yes, I am aware that there are things to understand about the crazy straw design world. :)
-1Azathoth123
NRx's are generally not utilitarians.
4Shmi
I've met at least one claiming he is.
0skeptical_lurker
What ethical system do you follow?
-1Azathoth123
I'm a virtue ethicist.
-4Lumifer
Ha. Good question. Subverting the Cathedral, maybe?
[-]Shmi110

My feeling was that SSC is getting close to LW in terms of popularity, but Alexa says otherwise: SSC hasn't yet cracked the top 100k sites (LW is ranked 63,755) and has ~600 links to it vs ~2000 for LW. Still very impressive for a part-time hobby of one overworked doctor. Sadly, 20% of searches leading to SSC are for heartiste.

My suspicion is that SSC would get a lot more traffic if its lousy WP comment system were better, but then Scott is apparently not motivated by traffic, so there is no incentive for him to improve it.

SSC would get a lot more traffic

SSC getting a lot more traffic might change it and not necessarily for the better.

4ChristianKl
Why do you think that's the case? Are there any cases of a blogger getting much more popular after switching to a different comment system? And what comment system would you advocate?
2Shmi
It's a good question; maybe it wouldn't, I am not aware of any A/B testing done on that. I simply go by trivial inconveniences. Scott is against a reddit-style karma system, so I'd go for Scott marking comments he finds interesting, at a minimum. Additionally, comment formatting and presentation which improves nesting and visibility would be nice. Reddit/LW is an OK compromise; userfriendly.org is better in terms of seeing more threads at a glance.
-2ChristianKl
There are many reasons against using the reddit code base. While it's open source in theory, it's not structured in a way that allows easy updating. Is there any solution that would be plug&play for a wordpress blog that you would favor Scott implementing? Coding something himself would be more than a trivial inconvenience. I also think you underrate the time cost of comment moderation. Wanting to be a blogger and wanting to moderate a forum are two different goals.
0Shmi
Scott uses WP, and it has plenty of comment ranking plugins. Here is one popular blog with a simple open voting system: http://www.preposterousuniverse.com/blog . It is probably not good enough for SSC, but many other versions are available. As I said, Scott is not interested in improving the commenting system, and probably is not interested in taking any steps beyond great writing toward improving the blog's popularity, either.
0ChristianKl
That has voting, but it doesn't seem to have threaded comments. That means switching to that plugin would break all the existing comment threads. I would guess that the main issue is that he doesn't want to do the work to improve it. Arguing about what counts as an improvement also isn't easy. If I look at the blogs of influential people who do put effort into it, I don't see that they all use a comment solution that Scott refuses to use.
2NancyLebovitz
The amount of comments can be rather overwhelming as it is. Do you want a larger SSC community, for the ideas to get a wider audience, or what?
5Shmi
It is overwhelming because it is poorly formatted and presented, not because of the volume. There are plenty of forums with better comment formatting, like reddit, userfriendly.org, or slashdot. Lack of comment ranking does not help readability, either.
1NancyLebovitz
I find that the blog's ~new~ on new comments and the dropdown list of new comments is enough to get by with -- for me, the quantity really is the overwhelming aspect on the more popular posts.
2Shmi
Other forums have lots more comments, yet are easier to navigate through.

Good futurology is different from storytelling in that it tries to make as few assumptions as possible. How many assumptions do we need to allow cryonics to work? Well, a lot.

  • The true point of no return has to be indeed much later than we believe it to be now. (Besides, does it even exist at all? Maybe a super-advanced civilization can collect enough information to backtrack every single process in the universe down to the point of one's death. Or maybe not.)

  • Our vitrification technology is not a secure erase procedure. Pharaohs also thought that their mummification technology was not a secure erase procedure. Even though we have orders of magnitude more evidence to believe we're not mistaken this time, ultimately, it's the experiment that judges.

  • Timeless identity is correct, and it's you rather than your copy that wakes up.

  • We will figure out brain scanning.

  • We will figure out brain simulation.

  • Alternatively, we will figure out nanites, and a way to make them work through the ice.

  • We will figure out all that sooner than the expected time of the brain being destroyed by: slow crystal formation; power outages; earthquakes; terrorist attacks; meteor strikes; going bankrupt; economic collapse; n

... (read more)
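
To make the structure of this argument explicit, here is a toy sketch in Python. Every probability below is a made-up placeholder for illustration, not an estimate from this thread; the point is only that the conjunctive assumptions multiply, while the two revival paths (scan-and-simulate vs. nanites) combine as alternatives.

```python
# Toy model of the conjunctive argument above. All numbers are
# illustrative placeholders, NOT estimates from the thread.
# Revival is a disjunction (scan-and-simulate OR nanites);
# everything else must hold simultaneously.

p_late_point_of_no_return = 0.5   # info-theoretic death comes later than believed
p_vitrification_preserves = 0.3   # vitrification is not a "secure erase"
p_brain_survives_storage  = 0.3   # no crystal damage, outages, bankruptcy, etc.

p_scanning   = 0.2                # we figure out brain scanning
p_simulation = 0.2                # we figure out brain simulation
p_identity   = 0.5                # timeless identity: the woken copy counts as you
p_nanites    = 0.05               # alternative path: nanites working through the ice

# P(at least one revival path works), treating the paths as independent.
p_revival = 1 - (1 - p_scanning * p_simulation * p_identity) * (1 - p_nanites)

p_success = (p_late_point_of_no_return * p_vitrification_preserves
             * p_brain_survives_storage * p_revival)

print(f"P(revival tech): {p_revival:.3f}")   # ~0.069
print(f"P(success):      {p_success:.4%}")   # ~0.31% with these placeholders
```

With these placeholder inputs the conjunction lands in the sub-1% range, which is roughly where maxikov's own 0.1% ballpark (below) sits; RowanE's reply makes the same AND/OR point about the list's structure.
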
9Gondolinian
While mainstream belief in an afterlife is probably a contributing factor in why we aren't doing enough longevity/immortality research, I doubt it's a primary cause. Firstly, because very few people alieve in an afterlife, i.e. actually anticipate waking up in an afterlife when they die. (Nor, for that matter, do most people who believe in a Heaven/Hell sort of afterlife actually behave in a way consistent with their belief that they may be eternally rewarded or punished for their behavior.) Secondly, because the people who are in a position to do such research are less likely than the general population to believe in an afterlife. And finally, because even without belief in an afterlife, people would still probably have a strong sense of learned helplessness around fighting death; so instead of a "Dying is sure scary, we won't truly die, so problem solved, let's do something else." attitude, we'd have a "Dying is sure scary, but we can't really do anything about it, let's do something else." attitude. (I have a hunch the former is really the latter dressed up a bit.)
2maxikov
On this particular point, I would say that people who are in a position to allocate funds for research programs are probably about as likely as the general population to believe in an afterlife. Generally, I agree - it's definitely not the only problem. The USSR, where people were at least supposed to not believe in an afterlife, didn't have longevity research as its top priority. But it's definitely one of the cognitive stop signs that prevent people from thinking about death hard enough.
2RowanE
About half of your list is actually an OR statement: (timeless identity AND brain scanning AND simulation) OR (nanites through ice), and that doesn't even exhaustively cover the possibilities, since at the least it needs a term for unknown unknowns we haven't hypothesized yet. It's probably easiest to cover all of them with something like "it's actually possible to turn what we're storing when we vitrify a cryonics patient back into that person, in some form or another".

And the vast majority of cryonicists, or at least those in Less Wrong circles whom your post is likely to reach, already accept that the probability of cryonics working is low. But exactly how low they think the probability is, after considering the four assumptions your list reduces to, is something they've definitely already considered and probably would disagree with you on, if you actually gave a number for what "very low" means so we could see whether we even disagree. (Note: if it's above around 1%, consider how many assumptions there are in trying to achieve "longevity escape velocity", and maybe spread your bets.)

And, as others have already pointed out, belief in cryonics doesn't really funge against longevity research. If anything, I expect the two are very strongly correlated. At least as far as belief in them being desirable or possible goes, it's quite apparent that they're both ideas shared by a few communities such as our own and rejected by other communities including "society at large". How much we spend on each is probably affected by e.g. cryonics being a thing you can buy for yourself right now but longevity being a public project suffering from commons problems, so the correlation might be less strong, and even inverse if you check it (I would be very surprised if it actually turned out to be inverse), but if so that wouldn't necessarily be because of the reasons you suggest.
3maxikov
I would say it's probably no higher than 0.1%. But by no means am I arguing against cryonics. I'm arguing for spending more resources on improving it. All sorts of biologists are working on longevity, but very few seem to work on improving vitrification. And I have a strong suspicion that it's not because nothing can be done about it - most of the time I talked to biologists about it, we were able to pinpoint non-trivial research questions in this field.
6ChristianKl
I think LW looks favorably on the work of the Brain Preservation Foundation and multiple people even donated.
1ChristianKl
How about putting numbers on it? Without doing so, your argument is quite vague. Have you actually looked at the relevant LW census numbers for what "we are hoping"?
0maxikov
I would estimate the cumulative probability as being in the ballpark of 0.1%. I was actually referring to the apparent consensus I see among researchers, but it's indeed vague. I should look up the numbers if they exist.
1ChristianKl
Most researchers don't do cryonics. I think a good majority of LW agrees that anti-aging research is underfunded. I don't buy the thesis that people who do cryonics are investing less effort into other ways of fighting aging. The 2013 LW census asked these questions: "P(Anti-Agathics) What is the probability that at least one person living at this moment will reach an age of one thousand years, conditional on no global catastrophe destroying civilization in that time?", "P(Cryonics) What is the probability that an average person cryonically frozen today will be successfully restored to life at some future time, conditional on no global catastrophe destroying civilization before then?", and "Are you signed up for cryonics?" The general takeaway is that even among people signed up for cryonics, the majority doesn't think that its chances of working are bigger than 50%. But they do believe that they're bigger than 0.1%.
0pengvado
Who is "we", and what do "we" believe about the point of no return? Surely you're not talking about ordinary doctors pronouncing medical death, because that's just irrelevant (pronouncements of medical death are assertions about what current medicine can repair, not about information-theoretic death). But I don't know what other consensus you could be referring to.
0maxikov
Surely I do. The hypothesis that after a certain period of hypoxia at normal body temperature the brain sustains enough damage that it cannot be recovered even if you manage to get the heart and other internal organs working is rather arbitrary, but it's backed up by a lot of data. The hypothesis that with machinery for direct manipulation of molecules, which doesn't contradict our current understanding of physics, we could fix a lot beyond the self-recovery capabilities of the brain is perfectly sensible, but it's just a hypothesis without the data to back it up. This, of course, may remind you of the skepticism towards heavier-than-air flying machines in the 19th century. And I do believe that some skepticism was a totally valid position to take, given the evidence that they had. There are various degrees of establishing the truth, and "it doesn't seem to follow from our fundamental physics that it's theoretically impossible" is not the highest of them.
0gothgirl420666
You missed a few:
* you will die in a way that leaves your brain intact
* people will care enough in the future to revive frozen people
* the companies that provide these services will stick around for a long time
-1cameroncowan
I think trying to stop death is a rather pointless endeavour from the start, but I agree that the fact that most everyone has accepted it, and that we have some noble myths to paper it over, certainly keeps resources from being devoted to living forever. But then, why should we live forever?

New research suggests that life may be hard to come by on certain classes of planets even if they are in the habitable zone, since they will lose their water early on. See here. This is noteworthy in that in the last few years almost all other research has pointed towards astronomical considerations not being a major part of the Great Filter, and this is a suggestion that slightly more of the Filter may be in our past.

How do people who sign up for cryonics, or want to sign up for cryonics, get over the fact that if they died, there would no longer be a mind there to care about being revived at a later date? I don't know how much of it is morbid rationalisation on my part just because signing up for cryonics in the UK seems not quite as reliable/easy as in the US somehow, but it still seems like a real issue to me.

Obviously, when I'm awake, I enjoy life, and want to keep enjoying life. I make plans for tomorrow, and want to be alive tomorrow, despite the fact that in betwee... (read more)

[-]jefftk220

Say you're undergoing surgery, and as part of this they use a kind of sedation where your mind completely stops. Not just stops getting input from the outside world, no brain activity whatsoever. Once you're sedated, is there any moral reason to finish the surgery?

Say we can run people on computers, we can start and stop them at any moment, but available power fluctuates. So we come up with a system where when power drops we pause some of the people, and restore them once there's power again. Once we've stopped someone, is there a moral reason to start them again?

My resolution to both of these cases is that I apparently care about people getting the experience of living. People dying matters in that they lose the potential for future enjoyment of living, their friends lose the enjoyment of their company, and expectation of death makes people enjoy life less. This makes death different from brain-stopping surgery, emulation pausing, and also cryonics.

(But I'm not signed up for cryonics because I don't think the information would be preserved.)

0MockTurtle
Thinking about it this way also makes me realise how weird it feels to have different preferences for myself as opposed to other people. It feels obvious to me that I would prefer to have other humans not cease to exist in the ways you described. And yet for myself, because of the lack of a personal utility function when I'm unconscious, it seems like the answer could be different - if I cease to exist, others might care, but I won't (at the time!). Maybe one way to think about it more realistically is not to focus on what my preferences will be then (since I won't exist), but on what my preferences are now, and somehow extend that into the future regardless of the existence of a personal utility function at that future time... Thanks for the help!
9CBHacking
Short version: I adjusted my sense of "self" until it included all my potential future selves. At that point, it becomes literally a matter of saving my life, rather than of being re-awakened one day.

It didn't actually take much for me to take that leap when it came to cryonics. The trigger for me was "you don't die and then get cryopreserved, you get cryopreserved as the last-ditch effort before you die". I'm not suicidal; if you ask any hypothetical instance of me if they want to live, the answer is yes. By extending my sense of continuity into the not-quite-really-dead-yet instance of me, I can answer questions for that cryopreserved self: "Yes, of course I want you to perform the last-ditch operation to save my life!"

If you're curious: My default self-view for a long time was basically "the continuity that led to me is me, and any forks or future copies/simulations aren't me", which tended toward a somewhat selfish view where I always viewed the hypothetical most in-control version (call it "CBH Alpha") as myself. If a copy of me was created, "I" was simply whichever one I wanted to be (generally, the one responsible for choosing to create the new instance or doing the thing that the pre-fork copy wanted to be doing). It took me a while to realize how much sense that didn't make; I always am the continuity that led to me, and am therefore whatever instance of CBH that you can hypothesize, and therefore I can't pick and choose for myself. If anything that identifies itself as CBH can exist after any discontinuity from CBH Alpha, I am (and need to optimize for) all those selves.

This doesn't mean I'm not OK with the idea of something like a transporter that causes me to cease to exist at one point and begin again at another point; the new instance still identifies as me, and therefore is me and I need to optimize for him. The old instance no longer exists and doesn't need to be optimized for. On the other hand, this does mean I'm not OK with the idea of a mac
0MockTurtle
I remember going through a similar change in my sense of self after reading through particular sections of the sequences - specifically thinking that logically, I have to identify with spatially (or temporally) separated 'copies' of me. Unfortunately it doesn't seem to help me in quite the same way it helps you deal with this dilemma. To me, it seems that if I am willing to press a button that will destroy me here and recreate me at my desired destination (which I believe I would be willing to do), the question of 'what if the teleporter malfunctions and you don't get recreated at your destination? Is that a bad thing?' is almost without meaning, as there would no longer be a 'me' to evaluate the utility of such an event. I guess the core confusion is that I find it hard to evaluate states of the universe where I am not conscious. As pointed out by Richard, this is probably even more absurd than I realise, as I am not 'conscious' of all my desires at all times, and thus I cannot go down this road of 'if I do not currently care about something, does it matter?'. I have to reflect on this some more and see if I can internalise a more useful sense of what matters and when. Thanks a lot for the fiction examples, I hope to read them and see if the ideas therein cause me to have one of those 'click' moments...
0CBHacking
The first is a short story that is basically a "garden path" toward this whole idea, and was a real jolt for me; you wonder why the narrator would be worried about this experiment going wrong, because she won't be harmed regardless. That world-view gets turned on its ear at the end of the story. The second is longer, but still a pretty short story; I didn't see a version of it online independent of the novel-length collection it's published in. It explores the Star Trek transporter idea, in greater detail and more rationally than Star Trek ever dared to do. The third is a huuuuuuge comic archive (totally worth reading anyhow, but it's been updating every single day for almost 15 years); the story arc in question is The Teraport Wars ( http://www.schlockmercenary.com/2002-04-15 ), and the specific part starts about here: http://www.schlockmercenary.com/2002-06-20 . Less "thinky" but funnier / more approachable than the others.
0RowanE
Although with your example in particular it's probably justified by starting off with very confused beliefs on the subject and noticing the mess they were in, at least as far as suggesting it to other people goes, I don't understand how or why you'd want to change a sense of self like that. If identity is even a meaningful thing to talk about, then there's a true answer to the question "which beings can accurately be labelled 'me'?", and having the wrong belief about the answer to that question can mean you step on a transporter pad and are obliterated. If I believe that transporters are murder-and-clone machines, then I also believe that self-modifying to believe otherwise is suicidal.
2Richard_Kennaway
Perhaps that is not so obvious. While you are awake, do you actually have that want while it is not in your attention? Which is surely most of the time. If you are puzzled about where the want goes while you are asleep, should you also be puzzled about where it is while you are awake and oblivious to it? Or looking at it the other way, if the latter does not puzzle you, should the former? And if the former does not, should the Long Sleep of cryonics? Perhaps this is a tree-falls-in-forest-does-it-make-a-sound question. There is (1) your experience of a want while you are contemplating it, and (2) the thing that you are contemplating at such moments. Both are blurred together by the word "want". (1) is something that comes and goes even during wakefulness; (2) would seem to be a more enduring sort of thing that still exists while your attention is not on it, including during sleep, temporarily "dying" on an operating table, or, if cryonics works, being frozen.
1MockTurtle
I think you've helped me see that I'm even more confused than I realised! It's true that I can't go down the road of 'if I do not currently care about something, does it matter?' since this applies when I am awake as well. I'm still not sure how to resolve this, though. Do I say to myself 'the thing I care about persists to exist/potentially exist even when I do not actively care about it, and I should therefore act right now as if I will still care about it even when I stop due to inattention/unconsciousness'? I think that seems like a pretty solid thing to think, and is useful, but when I say it to myself right now, it doesn't feel quite right. For now I'll meditate on it and see if I can internalise that message. Thanks for the help!
-9advancedatheist

What exactly causes a person to stalk other people? Is there research that investigates when people start to stalk and when they don't?

To what extent is getting a stalker a risk worth thinking about before it's too late?

No research, just my personal opinion: borderline personality disorder.

alternating between high positive regard and great disappointment

First, the stalker is obsessed with the person because the target is the most awesome person in the universe. Imagine a person who could give you infinitely many utilons, if they wanted to. Learning all about them and trying to befriend them would be the most important thing in the world. But at some moment, there is an inevitable disappointment.

Scenario A: The target decides to avoid the stalker. At the beginning the stalker believes it is merely a misunderstanding that can be explained, that perhaps they can prove their loyalty by persistence or something. But later they give up hope, or receive a sufficiently harsh refusal.

Scenario B: The stalker succeeds in befriending the target. But they are still not getting the infinite utilons which they believe they should be getting. So they try to increase the intensity of the relationship to impossible levels, as if trying to become literally one person. At some moment the target refuses to cooperate, or is simply unable to cooperate in the way the stalker wants them to, but to the stalker even this... (read more)

5polymathwannabe
This sounds eerily close to the mystical varieties of theistic religions.
2Lumifer
The only anonymous celebrity I can think of is Bansky. Staying anonymous is not compatible with becoming famous.
8Jayson_Virissimo
Satoshi Nakamoto is also famous and pseudonymous, but this conjunction is very rare IMO.
0Lumifer
Aha, thank you, a second example. Though I don't know if he's known by name in the general population.
3Viliam_Bur
I would guess most people become famous before they realize the advantage of anonymity, and then it's too late to start with a fresh name. But it's also possible that it's simply not worth the effort, because when you become famous enough, someone will dox you anyway. It could be interesting to know how much of an advantage (a trivial inconvenience for wannabe stalkers) a pseudonym provides when your real name can easily be found on Wikipedia, e.g. "Madonna". Or how big an emotional difference it makes to a potential stalker whether a famous blogger displays their photo on their blog or not. My favorite anonymous person is B. Traven.
1Gondolinian
*Banksy
5Lumifer
He's so anonymous I don't even know how to spell his (or maybe her) name! :-)
0ChristianKl
I'm at the moment quite unsure how to handle a girl who seems to have bipolar depression and wants to have a relationship with me. Four years ago I think she was in a quite stable mental state (I'm more perceptive today than I was back then). At the time she turned me down. I haven't seen her for a while, and now she seems to be pretty broken as a result of mobbing in an environment that she has now left. On the one hand there is the desire in me to try to fix her. Having a physical relationship with her also has its appeal. On the other hand I can't see myself being open personally with her as long as she is in that messed-up mental state.
4Viliam_Bur
That is a difficult situation, but the last sentence suggests that the correct answer is "no". :(
2Jackercrack
I've had a 3-year relationship with a woman I thought I could fix. She said she'd try hard to change, I said I'd help her, and I tried to help her and was extremely supportive for a long time. It was emotionally draining, because behind each newly climbed mountain there was another problem, and another, and another. Every week a new thing that was bad or terrible about the world. I eventually grew tired of the constant stream of disasters, most stemming from normal situations interpreted weirdly then obsessed over until she broke down in tears. It became clear that things were not likely to ever get better, so I left. There were a great number of fantastic things about this woman: we were both breakdancers and rock climbers, we both enjoyed anime and films, we shared a love for spicy food and liked cuddling, we both had good bodies. We had similar mindsets about a lot of things. I say all this so that you understand exactly how much of a downside an unstable mental state can be. So that you know that all of these great things about her were in the end not enough. Understand what I mean when I tell you it was not worth it for me and that I recommend against it. That I lost 3 years of time, with no energy left, that I could have spent making progress. If you do plan to go for it anyway, set a time limit on how long you will try to fix her before letting go, some period of time less than half a year. I'll answer any questions that might seem useful.
0ChristianKl
Trying hard to change is not useful for changing. It keeps someone in place. Someone who has emotional issues because they obsess too much doesn't get a benefit from trying harder. Accepting such a frame is not the kind of mistake I would make. If a person breaks down crying, I'm not dissociating and going into a low-energy state. It rather draws me into the situation and makes me more present. But I'm not sure whether it brings me into a position where I consider the other person an agent rather than a Rubik's cube to be solved.
1Jackercrack
Yes, well, I wasn't a rationalist at the time, nor did I know enough about psychology to say what the right thing was to do to help a person whose father... Well, I cannot say the exact thing, but suffice to say that if I ever meet the man, at least one of us is going to the hospital. I'm rather non-violent at all other times. There wasn't exactly a how-to guide I could read on the subject. I am also the kind of person that would be drawn out and try to help a person who breaks down crying. You use your energy to help their problems, and have less left for yourself. It starts to wear on you when you get into the third year of it happening every second week like clockwork, over such charming subjects as a thoughtless word by a professional acquaintance or having taken the wrong bins out. Bonus points for taking the wrong bins out being a personal insult that means I hate her. Anyway, that really isn't the point. Telling me how to solve my Rubik's cube, which I am no longer in contact with, is not very helpful. The point is, I've been there and I want to help you make the right decision, whatever that may be for you.
-1ChristianKl
As far as I see it, you were basically faced with a situation without having any tools to deal with it. That makes your situation quite different. When sitting in front of the hospital bed of my father, who was speaking confusedly because of morphine, my instinctual response was to do a nonverbal trance induction to get him into a silent state in half a minute. Not because I read some how-to guide of how to deal with the situation, but because NLP tools like that are instinctual behavior for me. I'm very far from normal, and so a lot of lessons that might be drawn from your experience for people similar to you aren't applicable to me. While reading a how-to guide doesn't give you any skills, there is psychological literature on how to help people with most problems.
1Jackercrack
You may be right about my lack of tools, and I can't honestly say I used "try harder" in the proper manner, seeing as I hadn't been introduced to it at the time. I played the role of the supportive boyfriend and tried (unsuccessfully) to convince her to go to a therapist who was actually qualified at that sort of thing. I am suspicious, however, that you took pains to separate yourself into a new reference class before actually knowing one way or the other. Unless of course you have a track record of taking massive psychological issues and successfully fixing them in other people. And are we really doing this? I mean, come on. A person offers to help and you immediately go for the throat, picking apart mistakes made in an attempt to help a person, then using rather personal things in a subtly judgemental manner. Do you foresee that kind of approach ending well? Is that really the way you want this sort of conversation to play out? I like to think we can do better. I have information. Do you want it or not?
0chaosmage
Are you sharing your feelings or asking for advice?
0ChristianKl
It's context for the question I asked earlier. There's a lot of information that goes into the decision-making that I won't be open about publicly, so I'm not really asking for specific advice.

Elon Musk often advocates looking at problems via first-principles calculation rather than by analogy. My question is what this kind of thinking implies for cryonics. Currently, the cost of full-body preservation is around $80k. What could be done in principle with scale?

Ralph Merkle put out a plan (although lacking in details) for cryopreservation at around $4k. This doesn't seem to account for paying the staff or transportation. The basic idea is that one can reduce the marginal cost by preserving a huge number of people in one vat. There is some discussion of this going on at Longecity, but the details are still lacking.

5jefftk
Currently the main cost in cryonics is getting you frozen, not keeping you frozen. For example, Alcor gives these costs for neuropreservation:
* $25k -- Comprehensive Member Standby (CMS) Fund
* $30k -- Cryopreservation
* $25k -- Patient Care Trust (PCT)
* $80k -- Total
The CMS fund is what covers the Alcor team being ready to stabilize you as soon as you die, and transporting you to their facility. Then your cryopreservation fee covers filling you with cryoprotectants and slowly cooling you. Then the PCT covers your long-term care. So 69% of your money goes to getting you frozen, and 31% goes to keeping you like that. (Additionally I don't think it's likely that current freezing procedures are sufficient to preserve what makes you be you, and that better procedures would be more expensive, once we knew what they were.) EDIT: To be fair, CMS would be much cheaper if it were something every hospital offered, because you're not paying for people to be on deathbed standby.
0Lumifer
So, for how long will that $25K keep you frozen? Any estimates?
3gjm
I believe the intention is "unlimitedly long", which is reasonable if (1) we're happy to assume something roughly resembling historical performance of investments and (2) the ongoing cost per cryopreservee is on the order of $600/year.
3Lumifer
The question is whether the cryofund can tolerate the volatility. Aha, that's the number I was looking for, thank you.
1gjm
Note that it's just a guess on my part (on the basis that a conservative estimate is that if you have capital X then you can take 2.5% of it out every year and be pretty damn confident that in the long run you won't run out barring worldshaking financial upheavals). I have no idea what calculations Alcor, CI, etc., may have done; they may be more optimistic or more pessimistic than me. And I haven't made any attempt at estimating the actual cost of keeping cryopreservees suitably chilled.
0Lumifer
Didn't you say it's on the order of $600/year?
3gjm
It sounds as if I wasn't clear, so let me be more explicit.
* I believe the intention is to be able to keep people cryopreserved for an unlimited period.
* For this to be so, the alleged one-off cost of keeping them cryopreserved should be such as to sustain that ongoing cost for an unlimited period.
* A conservative estimate is that with a given investment you can take 2.5% of it out every year and, if your investments' future performance isn't tragically bad in comparison with historical records, be reasonably confident of never running out.
* This suggests that Alcor's estimate of the annual cost of keeping someone cryopreserved is (as a very crude estimate) somewhere around $600/year.
* This is my only basis for the $600/year estimate; in particular, I haven't made any attempt to estimate (e.g.) the cost of the electricity required to keep their coolers running, or the cost of employing people to watch for trouble and fix things that go wrong.
(Why 2.5%? Because I've heard figures more like 3-4% bandied around in a personal-finance context, and I reckon an institution like Alcor should be extra-cautious. A really conservative figure would of course be zero.)
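
A minimal sketch of the perpetual-withdrawal arithmetic gjm describes (not Alcor's actual model, which isn't public in this thread); it assumes the $25k Patient Care Trust figure from jefftk's cost breakdown above:

```python
# Back out the implied annual maintenance budget from the one-off fund
# and a conservative safe-withdrawal rate, per gjm's reasoning.

pct_fund = 25_000             # one-off Patient Care Trust amount, per patient
safe_withdrawal_rate = 0.025  # gjm's conservative rate (vs. 3-4% in personal finance)

implied_annual_budget = pct_fund * safe_withdrawal_rate
print(implied_annual_budget)  # 625.0, i.e. "on the order of $600/year"
```
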
2Lumifer
Ah, I see. I think I misread how the parentheses nest in your post :-) So you have no information on the actual maintenance cost of cryopreservation and are just working backwards from what Alcor charges. I'm having doubts about this number, but that's not a finance thread. And anyway, in this context what matters is not reality, but Alcor's estimates. That's debatable -- inflation can decimate your wealth easily enough. Currently inflation-adjusted Treasury bonds (TIPS) trade at negative yields.
0gjm
Correct. I did try to make it as clear as I could that I do too... Well, I defined it as the maximum amount you can take out without running out of money. I agree that if instead you define it as the maximum net outflow that (with some probability close to 1) leaves your fortune increasing rather than decreasing in both long and short terms, it could be negative in times of economic stagnation.
2philh
No, ve said that "unlimitedly long" is reasonable if that's the cost. Ve didn't say that that was the cost.
4RomeoStevens
I've seen extremely low plastination estimates due to the lack of maintenance costs. Very speculative obviously, and the main component of cost is still the procedure itself (though there are apparently some savings here as well).

I'm going to narrate a Mutants and Masterminds roleplaying campaign for my friends, and I'm planning that the final big villain behind all the plots will be... Clippy.

Any story suggestions?

[-]RowanE110

Sabotage of a big company's IT systems, or of an IT company that maintains those systems, to force people to use paperclip-needing physical documents while the systems are down. Can have the paperclips be made mention of, but as what seems to the players like just fluff describing how this (rival company/terrorist/whatever) attack has disrupted things.

7ilzolende
It depends on how familiar your friends are with uFAI tropes, so you may want to tone these up or down to keep foreshadowing at the right level. If they're highly familiar, you may want to switch paperclips with staples.
* Monsters attack a factory, which happens to manufacture binder clips.
* An infectious disease spreads across [home city], causing photosensitive epilepsy. Careful observers will note that seizures occur most often when lights strobe at computer monitor refresh rates.
* Corporate executives experience a wave of meningitis (nanotechnology-induced). When they return to work, they cancel all paperless-office initiatives.
* Population of [distant area] missing. Buried underground: lots of paperclips. (If needed, have the paperclips test positive for some hallucinogen as a red herring.)
* Iron mines report massive thefts; a magnetism-related supervillain denies all responsibility and is actually innocent. Alternatively, if any heroes have metal-related powers, frame one of them and present false evidence to the players that a supervillain did it.
* Biotechnology companies seem to be colluding about something. The secret: somebody or something has been producing genetic material with their equipment, and they need to find out who, ideally without causing a panic. Maybe some superheroes could investigate for them?
If you do run this, please share your notes with us.
Edit: Now I want to run this sort of campaign. Thanks!
2polymathwannabe
Good ideas. My friends don't know anything about uFAI topics; if I drop the name "Clippy," they'll think of the MS Office assistant.

Several weeks ago I wrote a heavily upvoted post called Don't Be Afraid of Asking Personally Important Questions on LessWrong. I've been thinking about a couple of things since I wrote that post.

  • What makes LessWrong a useful website for asking questions which matter to you personally is that there are lots of insightful people here with a wide knowledge base. However, for some questions, LessWrong might be too much, or the wrong kind of, monoculture to provide the best answers. Thus, for weird, unusual, or highly specific questions, there might be better d

... (read more)

Animal Charity Evaluators have updated their top charity recommendations, adding Animal Equality to The Humane League and Mercy for Animals. Also, their donation-doubling drive is nearly over.

6ZankerH
Why would an effective altruist (or anyone wanting their donations to have a genuine beneficial effect) consider donating to animal charities? Isn't the whole premise of EA that everyone should donate to the highest utilon/$ charities, all of which happen to be directed at helping humans? Just curiosity from someone uninterested in altruism. Why even bring this up here?
[-]jefftk170

We don't all agree on what a utilon is. I think a year of human suffering is very bad, while a year of animal suffering is nearly irrelevant by comparison, so I think charities aimed at helping humans are where we get the most utility for our money. Other people's sense of the relative weight of humans and animals is different, however, and some value animals about the same as humans or only somewhat below.

To take a toy example, imagine there are two charities: one that averts a year of human suffering for $200 and one that averts a year of chicken suffering for $2. If I think human suffering is 1000x as bad as chicken suffering and you think human suffering is only 10x as bad, then even though we both agree on the facts of what will happen in response to our donations, we'll give to different charities because of our disagreement over values.
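
Worked out explicitly, as a small sketch (the only inputs are jefftk's two costs and the moral weight; the function name and units are mine, chosen for illustration):

```python
# The toy example above, computed: both donors agree on the facts and
# differ only in the moral weight they give a human-year vs a chicken-year.

COST_HUMAN_YEAR = 200    # $ to avert one year of human suffering
COST_CHICKEN_YEAR = 2    # $ to avert one year of chicken suffering

def best_charity(human_weight_in_chickens):
    """Compare suffering averted per dollar, in chicken-year equivalents."""
    human_charity = human_weight_in_chickens / COST_HUMAN_YEAR  # equivalents per $
    chicken_charity = 1 / COST_CHICKEN_YEAR                     # chicken-years per $
    return "human charity" if human_charity > chicken_charity else "chicken charity"

print(best_charity(1000))  # human charity   (5.0 vs 0.5 equivalents per dollar)
print(best_charity(10))    # chicken charity (0.05 vs 0.5 equivalents per dollar)
```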

In reality, however, it's more complicated. The facts of what will happen in response to a donation are uncertain even in the best of times, but because a lot of people care about humans the various ways of helping them are much better researched. GiveWell's recommendations are all human-helping charities because of a combination of "... (read more)

[-][anonymous]70

I may write a full discussion thread on this at some point, but I've been thinking a lot about undergraduate core curricula lately. What should one include? I have no idea why history has persisted in virtually every curriculum I know of for so long. Do many college professors still believe history has transfer-of-learning value in terms of critical thinking skills? Why? The transfer of learning thread touches on this issue somewhat, but I feel like most people on there are overvaluing their own field, hence computational science is overrepresented and social science, humanities, and business are underrepresented. Any thoughts?

The first question is what goals should undergraduate education have.

There is a wide spectrum of possible answers ranging from "make someone employable" to "create a smart, well-rounded, decent human being".

There is also the "provide four years of cruise-ship fun experience" version...

5[anonymous]
Check out page 40 of this survey. In order of importance:
* To be able to get a better job: 86%
* To learn more about things that interest me: 82%
* To get training for a specific career: 77%
* To be able to make more money: 73%
* To gain a general education and appreciation of ideas: 70%
* To prepare myself for graduate or professional school: 61%
* To make me a more cultured person: 46%

First, undergrad freshmen are probably not the right source for wisdom about what a college should be.

Second, I notice a disturbing lack of such goals as "go to awesome parties" and "get laid a lot" which, empirically speaking, are quite important to a lot of 18-year-olds.

0RowanE
In systems like the US, where undergraduate freshmen are basically customers paying a fee, I expect their input on what they want the product they're purchasing to be like should be extremely relevant.
0polymathwannabe
Indeed, customers are usually expected to be informed about what they're buying. But in the case of education, where what the "customer" is buying is precisely knowledge, a freshman's opinion on what education should contain may be less well informed than, for example, a grad student's opinion.
0Lumifer
Yes, that is the "provide four years of cruise-ship fun experience" version mentioned. The idea that it's freshmen who are purchasing college education also needs a LOT of caveats.
0[anonymous]
Exactly which courses do you imagine do the most to help students go to the most awesome parties and get laid a lot?
6Alsadius
Ones with very little homework and a good gender ratio.
5Lumifer
The point is not that they need courses to help them with that. The point is that if you are accepting freshman desires as your basis for shaping college education, you need to recognize that surveys like the one you linked to present a very incomplete picture of what freshmen want.
2[anonymous]
If the desires you named are irrelevant to the discussion at hand, then can you please name the desires that you think are relevant which are not encapsulated by the survey and explain how they are relevant to what classes students are taking? Also, who is the right source of wisdom about what a college should be?
0Lumifer
For the bit of mental doodling that this thread is, the right source is you -- your values, your preferences, your prejudices, your ideals.
6Metus
Nerds tend to undervalue anything that is not math-heavy or easily quantifiable.
3Evan_Gaensbauer
Scott Alexander from Slate Star Codex has the idea that if the humanities are going to be taught as part of a core curriculum, it might be better to teach the history of them backwards.
2MrMind
When I was in high school, I discussed this very idea with my Philosophy teacher. She said that (at least here in Italy) curricula for humanities are still caught in the Hegelian idea that history unfolds in logical structures, so that it's easier to understand them in chronological order. I reasoned instead that contemporary subjects are more relevant, more interesting and we have much more data about them, so they would appeal much better to first year students.
3Nornagest
If I were designing a core curriculum off the top of my head, it might look something like this:

First year: Statistics, pure math if necessary, foundational biology, literature and history of a time and place far removed from your native culture. Classics is the traditional solution to the latter and I think it's still a pretty good one, but now that we can't assume knowledge of Greek or Latin, any other culture at a comparable remove would probably work as well. The point of this year is to lay foundations, to expose students to some things they probably haven't seen before, and to put some cognitive distance between the student and their K-12 education. Skill at reading and writing should be built through the history curriculum.

Second year: Data science, more math if necessary, evolutionary biology (perhaps with an emphasis on hominid evolution), basic philosophy (focusing on general theory rather than specific viewpoints), more literature and history. We're building on the subjects introduced in the first year, but still staying mostly theoretical.

Third year: Economics, cognitive science, philosophy (at this level, students start reading primary sources), more literature and history. At this point you'd start learning the literature and history of your native language. You're starting to specialize, and to lay the groundwork for engaging with contemporary culture on an educated level.

Fourth year: More economics, political science, recent history, cultural studies (e.g. film, contemporary literature, religion).
6Lumifer
Fifth year: spent unemployed and depressed because of all the student debt and no marketable skills. This is a curriculum for future philosopher-kings who never have to worry about such mundane things as money.
2Nornagest
"Core curriculum" generally means "what you do that isn't your major". Marketable skills go there, not here; it does no one any good to produce a crop of students all of whom have taken two classes each in physics, comp sci, business, etc.
3Lumifer
If you count the courses you suggest, there isn't much room left for a major. I think a fruitful avenue of thought here would be to consider higher (note the word) education in its historical context. Universities are very traditional places and historically they provided the education for the elite. Until historically recently education did not involve any marketable skills at all -- its point was, as you said, "engaging with contemporary culture on an educated level".
0Nornagest
Four to six classes a year, out of about twelve in total? That doesn't sound too bad to me. I took about that many non-major classes when I was in school, although they didn't build on each other like the curriculum I proposed. It may amuse you to note that I was basically designing that as a modernized liberal arts curriculum, with more emphasis on stats and econ and with some stuff (languages, music) stripped out to accommodate major courses. Obviously there's some tension between the vocational and the liberal aims here, but I know enough people who e.g. got jobs at Google with philosophy degrees that I think there's enough room for some of the latter.
2jaime2000
I studied at two state universities. At both of them, classes were measured in "credit hours" corresponding to an hour of lecture per week. A regular class was three credit hours and semester loads at both universities were capped at eighteen credits, corresponding to six regular classes per semester and twelve regular classes per year (excluding summers). Few students took this maximal load, however. The minimum semester load for full-time students was twelve credit hours and sample degree plans tended to assume semester loads of fifteen credit hours, both of which were far more typical.
1Lumifer
Sure, but that's evidence that they are unusually smart people. That's not evidence that four years of college were useful for them. As you probably know, there is a school of thought that treats college education as mostly signaling. Companies are willing to hire people from, say, the Ivies, because these people proved that they are sufficiently smart (by getting into an Ivy) and sufficiently conscientious (by graduating). What they learned during these four years is largely irrelevant. Is four years of a "modernized liberal arts curriculum" the best use of four years of one's life and a couple of hundred thousand dollars?
1Evan_Gaensbauer
What counts as a 'marketable skill', or even what would be the baseline assumption of skill for becoming a fully and generally competent adult in twenty-first century society, might be very different from what was considered skill and competence in society 50 years ago. Rather than merely updating a liberal education as conceived in the Post-War era, might it make sense to redesign the liberal education from scratch? Like, does a Liberal Education 2.0 make sense? What skills or competencies aren't taught much in universities yet, but are ones everyone should learn?
1cameroncowan
Perhaps we need to re-think what jobs and employment look like in the 21st century and build from there?
0Evan_Gaensbauer
That seems like a decent starting point. I don't know my U.S. history too well, as I'm a young Canadian. However, a cursory glance at the Wikipedia page for the G.I. Bill in the U.S. reveals that it, among other benefits, effectively lowered the cost of education not only for veterans after World War II, but also for their dependents. The G.I. Bill was still used through 1973, by Vietnam War veterans, so that's millions more than I expected.

As attending post-secondary school became normalized, it shifted toward the status quo for getting better jobs. In favor of equality, people of color and women also demanded equal opportunity to such education by having discriminatory acceptance policies and whatnot scrapped. This was successful to the extent that several million more Americans attended university. So, a liberal education that was originally intended for upper(-middle) class individuals was seen as a rite of passage, for status, and then to stay competitive, for the 'average American'. This trend extrapolated until the present.

It doesn't seem to me the typical baccalaureate is optimized for what the economy needed for the 20th century, nor for what would maximize the chances of employment success for individuals. I don't believe this is true for some STEM degrees, of course. Nonetheless, if there are jobs for the 21st century that don't yet exist, we're not well-equipped for those either, because we're not even equipped for the education needed for the jobs of the present.

I hope the history overview wasn't redundant, but I wanted an awareness of the design flaws of the current education system before thinking about a new one. Not that we're designing anything for real here, but it's interesting to spitball ideas.

* If not already in high school, universities might mandate a course on coding, or at least how to navigate information and data better, the same way almost all degrees mandate a course in English or communications in the first year. It seems ludicrous this isn't already s
1NancyLebovitz
Persuasive writing and speaking. Alternatively, interesting writing and speaking.
0cameroncowan
That was basically my education (I took 5 years of Latin, 2 of ancient greek, philosophy, literature, art) and the only reason I didn't end up homeless camping out in Lumifer's yard was because I learned how to do marketing and branding. I think having practical skills is a good idea. Trade and Technical schools are a great idea.
2[anonymous]
1st year: 5 / 2nd year: 7 / 3rd year: 5 / 4th year: 4. That's over half their classes. I also counted that 14 of those 21 classes are in the social sciences or humanities, which seems rather strange after you denigrated those fields. Now the big question: how much weight do you put on the accuracy of this first draft?
0Nornagest
It's pretty simple. I think the subjects are important; I'm just not too thrilled about how they're taught right now. Since there's no chance of this ever being influential in any way, I may as well go with the fields I wish I had rather than the ones I have. As to accuracy: not much.
1ChristianKl
What do you mean by those terms? Understanding the principle of evolution is useful, but I don't see why it needs a whole semester.
0Azathoth123
Um, the reason for studying Greek and Latin is not just because they're a far-removed culture. It's also because they're the cultures which are the memetic ancestors of the memes that we consider the highest achievements of our culture, e.g., science, modern political forms. Also this suffers from the problem of attempting to go from theoretical to practical, which is the opposite of how humans actually learn. Humans learn from examples, not from abstract theories.
2Evan_Gaensbauer
I just want to point out for the record that if we're discussing a core curriculum for undergraduate education, I figure it would be even better to get such a core curriculum into the regular, secondary schooling system that almost everyone goes through. Of course, in practice, implementing that would require an overhaul of the secondary schooling system, which seems much more difficult than changing post-secondary education. The reason would probably be that changing the curriculum for post-secondary education, or at least for one post-secondary institution, is easier, because there is less bureaucratic deadweight, a greater variety of choice, and nimbler mechanisms in place for instigating change. So, I understand where you're coming from in your original comment above.
2zedzed
tl;dr: having a set of courses for everyone to take is probably a bad idea. People are different, and any given course is going to, at best, waste the time of some class of people.

A while ago, I decided that it would be a good thing for gender equality to have everyone take a class on bondage that consisted of opposite-gender pairs tying each other up. Done right, it would train students that "it's okay for the opposite gender to have power, nothing bad will happen!" and "don't abuse the power you have over people." In my social circle, which is disproportionately interested in BDSM, this kinda makes sense. It may even help (although my experience is that by the time anyone's ready to do BDSM maturely, they've pretty much mastered not treating people poorly based on gender). It would also be a miraculously bad idea to implement.

In general, I think it's a mistake to have a "core curriculum" for everyone. For any given course in the catalog of, say, MIT, I could find someone within 5 people I know for whom nobody would benefit from them taking it. (This is easier than it seems at first: me taking social science or literature courses makes nobody better off. The last social science course I took made me start questioning whether freedom of religion was a good thing; I still think it's a very good thing, but presenting me with a highly-compressed history of every inconvenience it's produced in America's history doesn't convince my system 1. Similarly, there exist a bunch of math/science courses that I would benefit greatly from taking, but that would just make the social science or literature people sad. Also, I know a lot of musicians, for whom there's no benefit from academic classes; they just need to practice a lot.)

Having a typical LWer take a token literature class generally means they're going to spend ~200 hours learning stuff they'll forget exponentially. (This could be remedied by Anki, but there's a better-than-even chance the deck gets deleted the moment the course ends.)
0cameroncowan
As a writer, I agree with you. I am horrible at math. In my life, 2x3=5 most of the time. If I had to suffer and fail at Calculus when I can't multiply some days, I would certainly start writing books about evil scientists abusing a village for its resources, and then have the village revolt against its scientific masters with pitchforks. Throw in a great protagonist and a love interest, and I have a bestseller with possible international movie rights.
0[anonymous]
If a field doesn't require a lot of technical knowledge, why bother with college in the first place? I'm not so sure how useful your examples are, since most creative writers and musicians will eventually fail and be forced to switch to a different career path. Even related fields like journalism or band management require some technical skills.
0Gondolinian
Obligatory SMBC comic. :)
0zedzed
Signalling, AKA why my friend majoring in liberal arts at Harvard can get a high-paying job even though college has taught him almost no relevant job skills.
1Alsadius
* History illuminates the present. A lot of people care about it, a lot of feuds stem from it, and a lot of situations echo it. You can't understand the Ukrainian adventures Putin is going on without a) knowing about the collapse of the Soviet Union, to understand why the Russians want it, b) knowing about the Holodomor, to understand why the Ukrainians aren't such big fans of Russian domination, and arguably c) knowing about the mistakes the West made with Hitler, to get a sense of what we should do about it.
* History gives you a chance to learn from mistakes without needing to make them yourself.
* History is basically a collection of humanity's coolest stories. How can you not love that?
6[anonymous]
How useful is knowing about Ukraine to the average person? What percentage of a History class will cover things which are relevant? Which useful mistakes does a typical History class teach you to avoid?
1Alsadius
1) Depends how political you are. I'm of the opinion that education should at least give people the tools to be active in democracy, even if they don't use them, so I consider at least a broad context for the big issues to be important.
2) Hard to say - I'm a history buff, so most of my knowledge is self-taught. I'd have to go back and look at notes.
3) Depends on the class. I tend to prefer the big-picture stuff, which is actually shockingly relevant to my life (not because I'm a national leader, but because I'm a strategy gamer), but there are more than enough historians who are happy to teach you about cultural dynamics and popular movements. You think popular music history might help someone who's fiddling with a bass guitar?
1ChristianKl
Given how hard it is to establish causality, history, where you don't have a lot of the relevant information and where there's a lot of motivated reasoning going on, is often a bad source for learning.
0Alsadius
Which is better - weak evidence, or none?
2Lumifer
An interesting question. Let me offer a different angle. You don't have weak evidence. You have data. The difference is that "evidence" implies a particular hypothesis that the data is evidence for or against. One problem with being in love with Bayes is that the very important step of generating hypotheses is underappreciated. Notably, if you don't have the right hypothesis in the set of hypotheses that you are considering, all the data and/or evidence in the world is not going to help you. To give a medical example, if you are trying to figure out what causes ulcers and you are looking at whether the evidence points at diet, stress, or genetic predisposition, well, you are likely to find lots of weak evidence (and people actually did). Unfortunately, ulcers turned out to be a bacterial disease, and all that evidence actually meant nothing. Another problem with weak evidence is that "weak" can be defined as evidence that doesn't move you away from your prior. And if you don't move away from your prior, well, nothing much has changed, has it?
0Alsadius
"Weak" means that it doesn't change your beliefs very much - if the prior probability is 50%, and the posterior probability is 51%, calling it weak evidence seems pretty natural. But it still helps improve your estimates.
0Lumifer
Only if it's actually good evidence and you interpret it correctly. Another plausible interpretation of "weak" is "uncertain". Consider a situation where you unknowingly decided to treat some noise as evidence. It's weak and it only changed your 50% prior to a 51% posterior, but it did not improve your estimate.
0TheOtherDave
Often none. For example, if a piece of evidence E is such that: * I ought to, in response to it, update my confidence in some belief B by some amount A, but * I in fact update my confidence in B by A2, and updating by A2 gets me further from justified confidence than I started out, then to the extent that I value justified confidence in propositions I was better off without E. Incidentally, this is also what I understood RowanE to be referring to as well.
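A toy numeric version of that failure mode, with entirely made-up numbers: suppose E justifies moving your credence in B from 0.50 to 0.60 (the amount A), but you actually jump to 0.90 (the amount A2):

```python
# Made-up numbers: over-updating on E can leave you further from the
# justified confidence in B than you were before seeing E at all.
justified = 0.60   # where E should leave you
before    = 0.50   # credence in B before seeing E
after     = 0.90   # where the over-update (A2) actually leaves you

print(abs(before - justified))  # 0.10: distance from justified confidence without E
print(abs(after - justified))   # 0.30: distance after over-updating; E made things worse
```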
2[anonymous]
But it's only bad because you made the mistake of updating by A2. I often notice a different problem: people always arguing A=0 and then presenting an alternative belief C with no evidence. On some issues we can't get a great A, but if the best evidence available points to B, we should still assume it's B.
0TheOtherDave
Agreed. Agreed. Yes, I notice that too, and I agree both that it's a problem, and that it's a different problem.
0ChristianKl
Overconfidence is a huge problem. Knowing that you don't understand how the world works is important. To the extent that people believe they can learn significant things from history, "weak evidence" can often produce problems. If you look at the West's Ukraine policy, they didn't make a treaty accepting the Russian annexation of Crimea in return for stability in the rest of Ukraine. That might have prevented the mess we have at the moment. In general, political decisions in cases like this should be made by doing scenario planning. It's one thing to say that Britain and France should have declared war on Germany earlier. It's quite another thing to argue that the West should take military action against Russia.
6Alsadius
Might have, but my money isn't on it. You think Putin cares about treaties? He's a raw-power sort of guy. And yes, the scenarios are not identical - if nothing else, Russia has many more ICBMs than Hitler did. Still, there's ways to take action that are likely to de-escalate the situation - security guarantees, repositioning military assets, joint exercises, and other ways of drawing a clear line in the sand. We can't kick him out, but we can tell him where the limits are. (Agreed on your broader point, though - we should ensure we don't draw too many conclusions).
1ChristianKl
Putin does care about the fact that Ukraine might join NATO or the EU free trade zone. He probably did feel threatened by what he perceived as a color revolution with a resulting pro-Western Ukrainian government. At the end of the day, Putin doesn't want the crisis to drag on indefinitely, so sooner or later it's in Russia's interest to have a settlement. Russia relies on selling its gas to Europe. Having Crimea under embargo is quite bad for Russia. It means it's costly to keep up Crimea's economy well enough that its population doesn't conclude Crimea has decayed under Russian rule, with unrest as the result. On the other hand, it's not quite clear that US foreign policy has a problem with dragging out the crisis. It keeps NATO together even though Europeans are annoyed at getting spied on by the US. It makes it defensible to have foreign military bases inside Germany that spy on Germans. Do you really think joint exercises contribute to de-escalation? As far as repositioning military assets goes, placing NATO assets inside Ukraine is the opposite of de-escalation. The only real way to de-escalate is a diplomatic solution, and there probably isn't one without affirming Crimea as part of Russia.
2Alsadius
There's a certain type of leader, over-represented among strongmen, that will push as far as they think they can and stop when they can't any more. They don't care about diplomacy or treaties, they care about what they can get away with. I think Putin is one of those - weak in most meaningful ways, but strong in will and very willing to exploit our weakness in same. The way to stop someone like that is with strength. Russia simply can't throw down, so if we tell them that they'd have to do so to get anywhere, they'd back off. Of course, we need to be sure we don't push too far - they can still destroy the world, after all - but Putin is sane, and doesn't have any desire to do anything nearly so dramatic.
0ChristianKl
Putin gains domestic political strength from the conflict. That assumes that you can simply change from being weak to being strong. In poker you can do this by bluffing. In chess you can't; you actually have to calculate your moves. Holding joint military exercises isn't strength if you aren't willing to use the military to fight. Bailing out European countries is expensive enough. There's not really the money to additionally prop up Ukraine.
2Alsadius
Only as long as he's winning. NATO is, far and away, the strongest military alliance that has ever existed. They have the ability to be strong. When the missing element is willpower, "Man up, already!" is perfectly viable strategic advice.
-7ChristianKl
0Lumifer
Accept an annexation in return for promises of stability? Hmm, reminds me of something...
0ChristianKl
That's partly the point: we didn't go that route, and now we have the mess we're in at the moment.
0Lumifer
And what happened the last time we DID go that route?
0ChristianKl
Making decisions based on a single data point is not good policy. Also, the alternative to the Munich agreement would have been to start WWII earlier. That might have had advantages, but it would still have been very messy.
0RowanE
Sometimes none, if the source of the evidence is biased and you're a mere human.
0Alsadius
There are unbiased sources of evidence now?
0ChristianKl
That question doesn't have anything to do with the claim that you can make someone less informed by giving them biased evidence.
0Evan_Gaensbauer
Some sources of evidence are less biased than others. Some sources of evidence will contain biases which are more problematic than others for the problem at hand.
0Alsadius
Of course. But Rowan seemed to be arguing a much stronger claim.
1[anonymous]
Undergraduate core curriculum where, for whom, and for what purposes?
1Punoxysm
I think history and the softer social sciences / humanities can, if taught well, definitely improve your ability to understand and analyze present-day media and politics. They can improve your qualitative appreciation of works of art, help you understand journalistic works on their own terms and in their context instead of taking them at face value, and help you read and write better. They can also provide specific cultural literacy, which is useful for your own qualitative appreciation as well as for some status things. I had a pretty shallow understanding of a lot of political ideas until I took a hybrid history/philosophy course that was really excellently taught. It allowed me to read a lot of political articles more deeply and understand their motivations, their context, and the core academic ideas they were built around. That last part, seeing theses implicitly referenced in popular works, is pretty neat.
7Nornagest
I think this is true... but also that "taught well" is a difficult and ideologically fraught criterion. The humanities and most (but not all; linguistics, for example, is a major exception) of the social sciences are not generally taught in a value-neutral way, and subjective quality judgments often have as much to do with finding a curriculum amenable to your values as with the actual quality of the curriculum. Unfortunately, the fields most relevant to present-day media and politics are also the most value-loaded.
0Punoxysm
Well, when talking about history or the humanities, the impossibility of neutrality, except in the most mundane recitation of events, is a pretty vital lesson to understand. The best way to approach this is to present viewpoints then counterpoints, present a thesis then a criticism. I have had one non-core course that was pretty much purely one perspective (the left-radical tradition), but this is still a tradition opposed to and critical of even mainstream-leftist history and politics. What I mean to say is that I don't think it was a great class, but I still learned plenty when I thought critically about it on my own time. If you have a certain amount of foundation (which I got through a much more responsibly-taught class pretty much following the traditional Western philosophical canon), in other words, you should still learn plenty from a curriculum that is not amenable to your values, if you put in the effort. But I think most core history and philosophy courses at liberal arts colleges stick to a pretty mainstream view and present a decent range of criticisms, achieving the ends I talked about. If you really want far-left or right-wing or classical liberal views, there are certainly colleges built around those.
1Nornagest
The thing that bothers me is that (at least at my university, which was to be fair a school that leaned pretty far to the left) neutrality seems to have been thrown out not only as a practical objective but also as an optimization objective. You're never going to manage to produce a perfectly unbiased narrative of events; we're not wired that way. But narratives are grounded in something; some renditions are more biased than others; and that's a fact that was not emphasized. In a good class (though I didn't take many good classes) you'll be exposed to more than one perspective, yes. But the classes I took, even the good ones, were rather poor at grounding these views in anything outside themselves or at providing value-neutral tools for discriminating between them. (Emphasis on "value-neutral": we were certainly taught critical tools, but the ones we were taught tended to have ideology baked into them. If you asked one of my professors they'd likely tell you that this is true of all critical tools, but I don't really buy that.)
0Punoxysm
Of course bias can vary, but I think most of the professors you ask would say they are being unbiased, or that they are calibrating their bias to counteract their typical student's previous educational bias. After all, you were taught history through high school, but in a state-approved curriculum taught by overworked teachers. As far as critical tools, which ones are you thinking of? Traditionally-leftist tools like investigations into power relationships? What do you think of as a value-neutral critical tool? You seem to have an idea of what differentiated the good classes from the bad. I'm not disagreeing that some classes are bad; I'm focusing on the value the good ones can give. A bad engineering class, by analogy, teaches a subject of little practical interest AND teaches it at a slow pace. Bad classes happen across disciplines. And I admit I am probably speaking with a lot of hindsight. I took a couple of good classes in college, and since then have read a ton of academics' blogs and semi-popular articles, and it has taken a while for things to "click" and for me to be able to say I can clearly analyze/criticize an editorial about history at a direct and meta-level the way I'm saying this education helps one do. You're right, for instance, that in college you probably won't get an aggressive defense of imperialism to contrast with its criticisms, even though that might be useful to understanding it. But that's because an overwhelming majority of academics consider it so clearly wretched, even evil, that they see no value in teaching it. It's just like how we rarely see a serious analysis of abolition vs. slavery, because, come on, right? On slavery, academia and the mainstream are clearly in sync. On imperialism? Maybe not as much, especially given the blurry question of "what is modern imperialism?" (is it the IMF; is it NAFTA; is it Iraq?). But many academics are striving to make their classes the antidote to a naive narrative of American history.
0Nornagest
I mentioned critical theory elsewhere in these comments. There's also gender theory, Marxian theory, postcolonial theory... basically, if it comes out of the social sciences and has "theory" in its name, it's probably value-loaded. These are frameworks rather than critical tools per se, but that's really what I was getting at: in the social sciences, you generally don't get the tools outside an ideological framework, and academics of a given camp generally stick to their own camp's tools and expect you to do the same in the work you submit to them. Pointing to value-neutral critical tools is harder for the same reason, but like I said earlier, I think linguistics does an outstanding job with its methodology, so that could be a good place to start looking. Data science in general could be one, but in the social sciences it tends to get used in a supporting rather than a foundational role. Ditto cognitive science, except that that hardly ever gets used at all. This in itself is a problem. If you start with a group of students that have been exposed to a biased perspective, you don't make them less biased by exposing them to a perspective that's equally biased but in another direction. We've all read the cog-sci paper measuring strength of identification through that sort of situation, but I expect this sort of thing is especially acute for your average college freshman: that's an age when distrust of authority and the fear of being bullshitted are particularly strong. (The naive narrative wasn't taught in my high school, incidentally, but I'm Californian. I expect a Texan would say something different.)
-1Punoxysm
But these frameworks/theories are pretty damn established, as far as academics are concerned. Postcolonial theory and gender theory make a hell of a lot of sense. They're crowning accomplishments of their fields, or define fields. They're worth having a class about. Most academics would also say that they consider distinctly right-wing theories intellectually weak, or simply invalid; they'd no more teach them than a bio professor would teach creationism. If you strongly feel all of mainstream academia is biased, then pick a school known for being right-wing. Academia's culture is an issue worthy of discussion, but well outside the scope of "should history be in core curriculums". Maybe things like game-theoretic explanations of power dynamics, or discussions of the sociology of in-groups and out-groups when covering nationalism, are neglected in these classes. If you think that, I wouldn't disagree. I guess most professors would probably say "leave the sociology to the sociologists; my class on the industrial revolution doesn't have room to teach the thermodynamics of steam engines either". I don't know much about linguistics, except that Chomsky is a linguist and that some people like him and some people don't. I do know it is on the harder end of the social sciences. The softer social sciences and humanities simply won't be able to use a lot of nice, rigorous tools. I think good teachers, even ones with a strong perspective, approach things so that the student will feel engaged in a dialogue. They will make the student feel challenged, not defensive. More of my teachers achieved this than otherwise. Bad teachers and teaching practices that fail to do this should be pushed against, but I don't think the academic frameworks are the main culprit.
6ChristianKl
If left-wing academia is low quality that in no way implies that right-wing academia is high quality. Seeing everything as left vs. right might even be part of the deeper problem plaguing the subject.
0gjm
On the other hand, if (in someone's opinion) academia as a whole is of low quality on account of a leftward political bias then it seems reasonable for that person to take a look at more right-leaning academic institutions.
-2ChristianKl
Nobody here said that it's primarily a leftward bias. A while ago someone tried to understand who controls the majority of companies and found that a few institutions control most of the economy. Did they publish in an economics journal? Probably too political. Instead they published in PLOS ONE. I have a German book that argues that the old German accounting standards are much nicer than the Anglo-American ones. The politics that made Anglo-American accounting standards the global default isn't well explored by either left-wing or right-wing academic institutions. Substantial debates about the political implications of accounting standards just aren't a topic that a lot of political academics who focus on left vs. right care about. A lot of right-wing political academia is also funded via think tanks that exist to back certain policies.
0gjm
True, but the things Nornagest was complaining about could all be at-least-kinda-credibly claimed to have a leftward bias, and could not be at all credibly claimed to have a rightward bias. Of course, as you say, there's a lot more to politics (and putative biases in academia) than left versus right, but it's a useful approximation. Lest I be misunderstood, I will add that I too have a leftward bias, and I do not in fact think anyone would get a better education, or find better researchers, by choosing a right-leaning place (except that there are some places that happen both to be good and to have a rightward slant, I think largely by coincidence, and if you pick one of them then you win). And I share (what I take to be) your disapproval of attempts to manipulate public opinion by funding academics with a particular political bent.
4Nornagest
Though I suspect I have a rather dimmer view of the social sciences' "crowning achievements" than you do, I'm not objecting directly to their political content there. I was mainly trying to point to their structure: each one consists of a set of critical tools and attached narrative and ideology that's relatively self-contained and internally consistent relative to those tools. Soft academia's culture, to me, seems highly concerned with crafting and extending those narratives and distinctly unconcerned with grounding or verifying them; an anthropologist friend of mine, for example, has told me outright that her field's about telling stories as opposed to doing research in the sense that I'm familiar with, STEMlord that I am. The subtext is that anything goes as long as it doesn't vindicate what you've called the naive view of culture. That's a broader topic than "should history be in core curriculums?", but the relevance should be obvious. The precise form it takes, and the preferred models, do vary by school, but picking a right-wing school would simply replace one narrative with another. (I'd probably also like the students less.) They don't. That doesn't mean they can't. There's plenty of rigorous analysis of issues involved in social science out there; it's just that most of it doesn't come from social scientists. Some of the best sociology I've ever seen was done by statisticians. (Chomsky, incidentally, was a brilliant linguist -- if not always one vindicated by later research -- but he's now so well known for his [mostly unrelated] radical politics that focusing on him is likely to give the wrong impression of the field.)
-1Punoxysm
I think this is a problem, BUT it wouldn't be a problem if we had more people willing to pick up the ball and take these narratives as hypotheses and test/ground them. I think there IS a broad but slow movement towards this. I think these narrative-building cultures are fantastic at generating hypotheses, and I am also sympathetic in that it is pretty hard to test many of these hypotheses concretely. That said, constant criticism and analysis is a (sub-optimal) form of testing. Historians tend to be as concrete as they can, even if it's non-quantitatively. If an art historian says one artist influenced another, they will demonstrate stylistic similarities and a possible or verified method of contact between the two artists. That's pretty concrete. It can rely on more abstract theories about what counts as a "stylistic similarity", though, but that's inevitable. I also think that the broadest and best theories are the ones you see taught at an undergrad level. The problems you point out are all more pernicious at the higher levels. Surely true. But I think (from personal discussions with academics) there is a big movement towards the quantitative and empirical in the social sciences (particularly political science and history), and the qualitative style is still great for hypothesis generation. I also think our discussion is getting a bit unclear because we've lumped the humanities and social sciences together. That's literally millions of researchers using a vast array of methodologies. Some departments are incredibly focused on being quantitative; some are allergic to numbers.
3Lumifer
I would call that "damning with faint praise" :-D
0Punoxysm
It's praise sincerely intended. What strikes you as inadequate about, say, feminist theory and related ideas?
0Lumifer
Can we do postcolonial theory instead? What kind of falsifiable (in the Popperian sense) claims does it make? Any predictions?
-2Punoxysm
First I'll do a couple of examples from feminism, since it is often tarred as academic wankery, and I feel more knowledgeable about it:
* Feminist theories say that movies underrepresent women, or represent them only in relation to men. A simple count of the number of movies that pass the Bechdel Test vs. its male inverse shows this to be plainly true. In fact, the gap is breathtaking. Not only that, but this gap continues with movies released today, supporting the idea that only direct and conscious intervention can fix the gap and related inequities in the portrayal of men and women in media.
* Feminist theory predicts that issues like female reproductive autonomy, education, and various categories of violence against women are strongly correlated. Statistics appear to show this is true (not indisputably; reporting and confounding factors exist).
As for postcolonialism, I'll give it a shot, though I'm not the best to speak on it:
* Postcolonial theory states that most of the institutions of former colonial nations (their media, the World Bank, etc.) fetishize the strong nationalist state and a capitalist economy with all the trappings (central banks, urbanization, progression from agrarian to industrial to service economy) that western nations have developed over the past two centuries, and will attempt to impose such states where they can. Many argue that Western intervention in the Balkans and in Somalia bears this out.
* Postcolonial theory makes many other statements about development, like that postcolonial nations shouldn't try to emulate western paths of development (because doing so will result in poorer economic growth). Some of them are hotly disputed. However, they are empirical.
* More broadly, postcolonialism says that for any intervention in a non-western nation, basing this intervention on methodology for western nations will yield worse results than building the approach up based on the ethnographic characteristics of that nation, despite the fact that in
1Lumifer
That's merely an empirical observation. That's a normative statement about what should be. Can you be a bit more precise about these relationships? Also, does feminist theory predict this, or does it just say that's what it sees? Off the top of my head, I'd say I have at least two issues with feminism. The first is that it loves to tell other people what they should think, feel, and value. Science is not normative and feminism is -- that makes it closer to preaching than to science. The second is that I am not sure why feminism (as an academic discipline) exists. I understand that historically there was the movement of "these not-quite-yet-dead white men in the social studies departments don't understand us and don't do things we find important, so fuck'em -- we'll set up our own department". That's fine, but first, that's not true any more, and second, that's an office-politics argument for the administrative structure of a university, not a reason for a whole new science to come into existence. What exactly is feminism doing that's not covered by sociology + political studies + cultural studies? Again, this is a post-factum empirical observation. And it doesn't seem to be quite true. Most newly independent countries love state power and often played with some variety of socialism, "third way", etc. Given the context of the Cold War, their political economy generally reflected which superpower they aligned with. Who will? Impose on whom? I don't quite understand what you mean here. An interesting point. The problem with it is that nations which did NOT try to "emulate western paths of development" experienced even poorer economic growth. It is, in fact, an empirical observation that economic growth in the developing world was, by and large, quite poor. However, the conclusion that this is the result of transplanting Western practices to alien soil and that home-grown solutions are much better does not seem to be empirically supported. And another curious statem
0Punoxysm
I made a mistake trying to defend postcolonial theory here; it's just not my area of expertise. Whether it's valid or not, I can't defend it well. But we do seem to be on the same page that it's falsifiable. However, I do have a substantial beef with your beefs with feminism. Come on... Things falling to the ground is an empirical observation; gravity is the theory. No, it's a prediction. If the gender representation gap spontaneously solved itself without any evident adoption of feminist attitudes, that would be a strike against feminism as a theory. Predicts: it observed it, and then it continued to be true, so it's not overfitting. It has a normative and an empirical element. An organization like GiveWell empirically assesses charities, then makes normative recommendations based on a particular version of utilitarianism. Feminism assesses institutions and makes recommendations. Most of what feminism does is influence other fields. Gender studies departments exist in some places and not others, but its influence is pervasive in academia. I think this is a misinformed criticism.
0Lumifer
In another post you called feminism "a project dedicated to changing certain policies and cultural attitudes". I like this definition; it makes a lot of sense to me. However, the implication is that feminism is neither a science nor even a field of study. Recall that the original question was feminism (gender studies) in academia. You said I'm fine with treating feminism as a socio-cultural movement based on a certain set of values. But then it's not an academic theory which is a crowning accomplishment of a field of study.
-1Punoxysm
It's both a scholarly field and a social movement, and the scholars involved in it may be involved in one or both elements. Feminism is a HUGE tent. It provides a framework for everyone from economists studying what factors drive labor participation rates among women, to judges ruling on a case of sexual harassment, to a film critic analyzing a character. There are probably tens of thousands of academics alone (forget lawyers, legislators, lobbyists, and journalists) who would say feminism influences their work. This includes many who are very quantitative and empirical.
1Lumifer
What does this "scholarly field" study that is not covered by the usual social sciences? And, given that we are on LW, how prevalent do you think is motivated cognition in this field of study? What covers everything covers nothing. How would you define feminism -- in a useful way, specifying what kind of a thing is it and how it's different from other similar things?
-4Punoxysm
This is getting very Socratic. I don't know what your assumptions are or what would satisfy you as a definition, and it is beginning to get frustrating to figure out, but I think these two links are pretty good. http://en.wikipedia.org/wiki/Gender_studies http://en.wikipedia.org/wiki/Feminist_theory As for motivated cognition, of course it's present, as it is virtually everywhere in life and academia. Do you have a more specific case? Remember that though the humanities and softer social sciences have all sorts of flaws that are easy to make fun of, they don't submit grants for $100 million construction projects with stated goals they know to be totally unachievable (I'm looking at you, local university particle accelerator). Don't condemn the field just by its sins.
5Lumifer
Don't you think that being both a field of study and a social movement aiming to change prevalent values and social structures offers especially rich opportunities for motivated cognition? Compared to the baseline average of life and academia? That's peanuts. When social scientists fuck things up, the cost is in millions of human lives. Exhibit A: Karl Marx. Well, the problem is that I don't think it's a field of study at all. I think it is, as you said, a project to change society.
0[anonymous]
I can see your point about the social sciences, but I would think this doesn't apply to most of the humanities. How is a creative writing, theatre, or communications course ideologically fraught?
5Nornagest
In a word: theory. I didn't take as many of those classes in college as I did social science, so I'm speaking with a little less authority here, but the impression I got is that the framework underpinning creative writing etc. draws heavily on critical theory, which is about as value-loaded as it gets in academia. The implementation part, of course, isn't nearly as much so.
1ChristianKl
How do you know that you understand the motivations of political articles better? Are you able to predict anything politically relevant that you couldn't have predicted beforehand?
0Punoxysm
Concretely, I can often tell if the article writer is coming from a particular school of thought or referencing a specific thesis, then interpret jargon, fill in unstated assumptions, see where they're deviating or conforming to that overarching school of thought. This directly enhances my ability to extrapolate to what other political views they might have and understand what they are attempting to write, and who their intended audience is. As far as predicting the real world, that's tough. These frameworks of thought are in constant competition with one another. They are more about making normative judgments than predictive ones. The political theories that I believe have the most concrete usefulness are probably those that analyze world affairs in terms of neocolonialism, in part because those theories directly influence a ton of intellectuals but also in part because they provide a coherent explanation of how the US has managed its global influence in the past and (I predict) how it will do so in the future. I can also do things like more fully analyze the factors behind US police and African-American relations, or how a film will influence a young girl.
0ChristianKl
That reminds me of the Marxist who can explain everything with the struggle of the workers against the capitalists. That last sentence makes it look like your study did damage. You shouldn't come out of learning about politics believing that you can fully understand the factors behind anything political.
2gjm
I think the difference I highlighted is an important one.
0Punoxysm
I am referring to the normative parts of frameworks. For instance, feminism makes many normative statements. It is a project dedicated to changing certain policies and cultural attitudes. The eventual influence of these frameworks is based largely on their acceptance.
3ChristianKl
People make statements. Abstract intellectual labels don't. People have all sorts of personal goals. If one sees everything as the battle of certain frameworks, then a lot of what matters in dealing with individual people is lost. Additionally, you can also miss when new thoughts come along that don't fit into your existing scheme. A lot of people coming from the humanities, for example, have very little understanding of the discourse of geeks. The political effects of getting people to meditate and be in touch with their bodies are also unknown unknowns for a lot of people trained in the standard political ways of thinking.
-2Punoxysm
I don't have much to comment on this, except that many academics in the humanities level charges of dehumanization and ignoring individual agency against a lot of work in economics or quantitative sociology and political science (e.g., they might criticize an economics paper that attributes civil unrest to food shortages without discussing how it might originate in individual dissatisfaction with oppression and corruption). So it's ironic if I've done the same disservice to those academics. I don't really know what you're referring to. But if you're talking about LW-style memes, I think that it is generally true that futurism isn't of much interest to many in the humanities. And to a great degree it is orthogonal to what they do. A scenario like the singularity may not be, in that it's not orthogonal to anyone or anything, but I haven't had many conversations about it with those in the humanities. What are you thinking of? But I am sure there are academics who can readily discuss the effects of the decline of physically demanding labor, the effect of physical rigors on those in the military, or the interaction of all flavors of Buddhism with politics.
4ChristianKl
Dissatisfaction with oppression and corruption in itself doesn't have much to do with individual people being actors. Standard feminist theory suggests that social groups are oppressed. As far as LW ideas go, prediction markets do have political implications. X-risk prevention does have political implications. CFAR's mission also mentions that they want to change how we decide which legislation to pass. A bunch of geeks are working on getting liquid democracy to work. Wikileaks and its actions do have political effects. Sweden recently changed their freedom-of-press laws to make it clear that having a server in Sweden is not enough to profit from Swedish press protections, because Julian Assange's Wikileaks tried to use Swedish press protection to threaten people who try to uncover Wikileaks' sources. In Germany, a professor of sociology recently wrote a book arguing that Quantified Self is driven by the belief that it's possible to know everything. It isn't. The kind of geeky New Atheists who want everything to be evidence-based and who believe that they can know everything generally reject QS for not doing blinded and controlled trials. He simply treated all geeks the same way and therefore missed the heart of the issue. How much have political scientists written about the Crypto Wars and, in Cory Doctorow's words, the recent war on general-purpose computing? Estonia had to be defended against cyber war by a loose collection of actors in which the stronger players likely weren't government-associated. It's also quite likely that we live in a time where a nongovernmental force is strong enough to start such a war. The NSA is geeky enough that its chief, Gen. Keith Alexander, modeled his office after the Star Trek bridge. Jeff Bezos bought the Washington Post. Pierre Omidyar, who made his money with eBay, sponsored First Look Media. Those are signs that more and more political power is going to geeks. I'm just pointing to a political idea to which you probably aren't exposed. M
0[anonymous]
How much of this course was history? How similar was it to other history courses you've taken? A course syllabus might be useful, but I understand if you'd prefer privacy. I could see this happening with a course on general scientific principles that used history to develop practice problems, but then what you learned would be scientific principles, not really history. Was it just that the professor was better at his/her job than other history professors, or is there something that's readily replicable to other history courses?
1Punoxysm
It was a history of philosophy, focused on reading major works chronologically with a good dose of historical context and background for each (e.g. biblical authorship theories, prevailing attitudes that works were responding to, historical events like wars that would have influenced the authors, etc.). Work included twice-weekly journal entries on our readings, occasional quizzes, and essays tying several works together. A partial list of the curriculum, which we read in this (chronological) order, was:
* Plato
* Aristotle
* Aurelius
* Bible
* St. Augustine
* Koran
* Avicenna
* Aquinas
* Hobbes
* Locke
* Rousseau
* Burke
* Kant
* Hegel
* Marx
* Nietzsche
* W.E.B. Dubois
* Simone De Beauvoir
* Foucault
* Peter Singer
The other hybrid philosophy/history course, the radical one, did have a couple of excellent, very historically-oriented readings. One was The Black Jacobins, about the Haitian revolution; others were about the French Revolution, the Paris Commune, and a left-radical rebellion against the Bolsheviks in the early USSR (which I have unfortunately forgotten the name of, but it does demonstrate the pluralism of pre-Bolshevik socialism). Detailed historical explorations were the stronger part of that course, and served to show how clear investigation into the facts could dispel or nuance a caricatured view of history.
0Azathoth123
Here is Eliezer's post on the subject.
0Evan_Gaensbauer
History seems to me a subject whose teaching aims to produce critical thinking in a sense different from what LessWrong typically tries to optimize for. I figure LessWrong optimizes for the critical thinking of the individual, which benefits from an education in logic, computer science, and mathematics, along with a general knowledge of the natural sciences. I'm not sure how much history would contribute to that sort of skill, and others in this thread seem skeptical of its value. However, learning history seems like it improves how critically groups and societies can think together, across a few domains key to society. A general education in history as part of a core curriculum could be a heuristic for circumventing group irrationality, and mob rule, in a way that critical thinking skills designed only for the individual might not.

Understanding the history of one's own nation in a democracy gives the electorate knowledge of what's worked in the past, what's different in the nation in the present compared to the past, and the context in which policy platforms and cultural and political divides were forged. This extends to the less grand history of the geographical location in which one resides, or was raised, within one's own nation. An understanding of the history of other nations, and the world, gives one the context in which international relations have formed over centuries.

Here's an example of how knowledge of world history and international relations might be useful. If the executive branch of the United States federal government wants to declare war on a country, to intervene against a predator country on behalf of one victimized, it makes sense to understand the context of that conflict. If the history of those faraway regions is known, then the electorate can check the narrative the government puts forward against what they learned in schooling. Even very recent history could be useful knowledge in this regard. If the electorate of the United
0ChristianKl
I think the idea of a core curriculum that contains things such as history is awful. Diversity is pretty useful. Business in general is useful, but few of the relevant skills are well learned via lectures. Being able to negotiate is a useful business skill.
0[anonymous]
Diversity courses strike me as an odd combination of sociology, anthropology, and history, but since you specifically criticized history courses, I'm a bit confused as to why you like diversity courses. Are culturally-focused history courses such as the history of hip-hop, Latin American culture, or women in American history better than standard history courses? Is there a certain category of business courses that does a better job than others? Are there any skills that can be easily taught in a lecture format? I have a friend who felt communications courses were very good at teaching negotiation strategies.
2gjm
I think you have misinterpreted "Diversity is pretty useful" as "Diversity courses are pretty useful". My reading of ChristianKl's comment is that he meant "having different people take different courses is useful", and I would be rather surprised if he thought diversity courses as such were much use.
0ChristianKl
I like diversity in course offerings. That's not the same thing as liking courses that supposedly teach diversity. I don't want a world in which every college student learns the same thing. As such, I reject the idea of a core curriculum. Probably courses that don't use textbooks but that do exercises with strong emotional engagement. I was at personal development seminars where, at the end of the day, some people were lying on the floor from emotional exhaustion. I think doing a lot of deep inner work brings higher returns than learning intellectual theory. How did he come to that conclusion? Has the amount that the person pays for the average thing he buys gone down because he has become much better at negotiating?
0[anonymous]
I only took one class in communications so I don't understand the field too well. The class itself seemed useful, but there was no mention of negotiation strategies. It would seem more likely that better negotiation leads to more offers than that better negotiation leads to a better offer. A smart businessman is going to know how to value the deal, and it's going to be hard to significantly change his price.
0ChristianKl
What practical effect did it have that makes you consider it useful? If you buy a car, in many cases a person with good negotiating skills can achieve a better price.

Apropos the "asking personally important questions of LW" posts, I have a question. I'm 30 and wondering what the best way is to swing a mid-career transition to computer science. Some considerations:

  • I already have some peripheral coding knowledge. I took two years of C back in high school, but probably forgot most of it by now. I do coding-ish stuff often, like SQL queries or scripting batch files to automate tasks. Most code makes sense to me and I can write a basic FizzBuzz type algorithm if I look up the syntax (roughly the level of the sketch after this list).

  • I don't self-motivate very well

... (read more)
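For concreteness, here is a minimal sketch of the "basic FizzBuzz type algorithm" level mentioned in the list above (Python is an arbitrary choice here; the point about looking up syntax applies to any language):

```python
# FizzBuzz: print 1..100, substituting "Fizz" for multiples of 3,
# "Buzz" for multiples of 5, and "FizzBuzz" for multiples of both.
for i in range(1, 101):
    word = ("Fizz" if i % 3 == 0 else "") + ("Buzz" if i % 5 == 0 else "")
    print(word or i)
```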
6sixes_and_sevens
Some salient questions: 1) What's your motivation for wanting to do this? 2) What's your current background/skill set? 3) Where in the world are you?
2ioshva
I work on lots of large cases with complex subject matter (often source code itself) with reams of electronic haystacks that need to be sorted for needles. The closer my job is to coding, the more I enjoy it. I get satisfaction out of scripting mundane tasks. I like building and maintaining databases and coming up with absurdly specific queries to get what I need. I remember enjoying and being good at what programming I did do in high school. I am starting to get the creeping feeling that I took a wrong turn eight years ago. I also feel somewhat stuck in my current position in patent law. Ordinarily step one would be to try a different environment, to rule out that it's the workplace rather than the work itself. But most positions advertised in patent law demand an EE/CE/CS background, and I have a peripheral life science degree I use so little as to be irrelevant. I described my skill set as best I could in the parent post, but right now it's just a cut above "extremely computer literate." I've dipped my toes in but never found the time or motivation to dive (12-hour days kill the initiative). Houston.
9Shmi
Consider writing a simple Android or iOS app, such as Tetris, from scratch. This should not take very long and has intrinsic rewards built in, like seeing your progress on your phone and showing it off to your friends or prospective employers. You can also work on it during the small chunks of time available, since a project like that can be easily partitioned. Figure out which parts of getting it from the spec to publishing on the Play/App store you like and which you hate. Record your experiences and share them here once done.

Has anyone come across research on parents' attitudes towards their teenage sons when they can see that girls don't find them sexually attractive? If you saw that happening to your son, that has to affect how you feel about him, compared to how you would feel if you saw that your son had sexual opportunities.

This relates to my puzzlement about the idea that the "sexual debut" happens as an organic developmental stage with a median age of 17, compared with the fact that quite a few straight young men miss this window and become the targets of s... (read more)

3Shmi
I'd say it's more of a pity than derision and contempt, but then it probably depends on one's social circles.
2cameroncowan
I think parents want their children to be successful with their peers, particularly if they themselves were. I helped raise my cousins, and the youngest one was the last to really attract men. We felt really sorry for her because she was missing out, and she was depressed because her sisters were always attached and she was not. It's a social thing, but it doesn't really hurt you as a person. I do think, however, that your attractiveness level when you're young does have an effect on your perception of your attractiveness for the rest of your life. Evolutionarily, when we only lived to 40, it was important to keep the species going. Now, I think it is a matter of fitting in and finding one's place in society. Knowing at a young age that you are attractive helps keep you going as life goes along. Whereas if you don't feel attractive, then that idea takes hold, and it can be very hard to break.
1Evan_Gaensbauer
Now I'm puzzled by this too. Does the median age for young males making their "sexual debut" vary by culture?
[anonymous]50

I'd like to recommend a fun little piece called The Schizophrenia of Modern Ethical Theories (PDF), which points out that popular moral theories look very strange when actually applied as grounds for action in real-life situations. Minimally, the author argues that certain reasons for action are incompatible with certain motives, and that this becomes incoherent if we suppose that these motives were (at least partially) the motivation we had to adopt that set of reasons in the first place.

For example, if you tend to your sick friend, but explain to... (read more)

4DanielFilan
... no? I mean, maybe it will sound weird if you actually say it, because that's not a norm in our culture, but apart from that, it doesn't seem morally bad or off to me. ETA: well, I suppose only helping someone on egoistic grounds sounds off, but the utilitarian/moral obligation motivations still seem fine to me.
7gjm
I'm not sure even that does, when it's put in an appropriate way. "I'm doing this because I care about you, I don't like to see you in trouble, and I'll be much happier once I see you sorted out." There are varieties of egoism that can't honestly be expressed in such terms, and those might be harder to put in terms that make them sound moral. But I think their advocates would generally not claim to be moral in the first place. I think Stocker (the author of the paper) is making the following mistake. Utilitarianism, for instance, says something like this:
* The morally best actions are the ones that lead to maximum overall happiness.
But Stocker's argument is against the following quite different proposition:
* We should restructure our minds so that all we do is to calculate maximum overall happiness.
And one problem with this (from a utilitarian perspective) is that such a restructuring of our minds would greatly reduce their ability to experience happiness.
3fubarobfusco
We have to distinguish between normative ethics and specific moral recommendations. Utilitarianism falls into the class of normative ethical theories. It tells you what constitutes a good decision given particular facts; but it does not tell you that you possess those facts, or how to acquire them, or how to optimally search for that good decision. Normative ethical theories tell you what sorts of moral reasoning are admissible and what goals are credible; they don't give you the answers. For instance, believing in divine command theory (that moral rules come from God's will) does not tell you what God's will is. It doesn't tell you whether to follow the Holy Bible or the Guru Granth Sahib or the Liber AL vel Legis or the voices in your head. And similarly, utilitarianism does not tell you "Sleep with your cute neighbor!" or "Don't sleep with your cute neighbor!" The theory hasn't pre-calculated the outcome of a particular action. Rather, it tells you, "If sleeping with your cute neighbor maximizes utility, then it is good." The idea that the best action we can take is to self-modify to become better utilitarian reasoners (and not, say, self-modify to be better experiencers of happiness) doesn't seem like it follows.
0gjm
It looks like we're in violent agreement. I mention this only because it's not clear to me whether you were intending to disagree with me; if so, then I think at least one of us has misunderstood the other.
1fubarobfusco
No, I was intending to expand on your argument. :)
2blacktrance
If I tell my friend that I am visiting him on egoistic grounds, it suggests that being around him and/or promoting his well-being gives me pleasure or something like that, which doesn't sound off - it sounds correct. I should hope that my friends enjoy spending time around me and take pleasure in my well-being.

(Warning: brain dump, most of which is probably not new to thinking on LW.) I think most people who take the Tegmark level 4 universe seriously (or any of the preexisting similar ideas) get there by something like the following argument: suppose we had a complete mathematical description of the universe; then exactly what more could there be to make the thing real (Hawking's "what breathes fire into the equations")?

Here is the line of thinking that got me to buy into it. If we ran a computer simulation, watched the results on a monitor, and saw a person behaving just like ... (read more)

3[anonymous]
Can you expand on this a bit?
1Shmi
My approach is that everything is equally real, just not everything is equally useful. On a meta level, talking about what's more real is not useful outside a specific setting. Unicorns are real in MLP, cars are real in the world we perceive, electrons are real in Quantum Electrodynamics, virtual particles are real in Feynman diagrams, agents are real in decision theories, etc.

I wonder why people like us who talk about wanting to "live forever" don't think more seriously about what that could mean in terms of overturning our current assumptions and background conditions, if our lives stretch into centuries and then into millennia.

I started to think about this based on something Mike Darwin wrote on his blog a few years back:

http://chronopause.com/index.php/2011/04/19/cryonics-nanotechnology-and-transhumanism-utopia-then-and-now/index.html

Many years ago, when I was a teenager, Curtis Henderson was driv

... (read more)

I would expect social arrangements to appear that we aren't even beginning to imagine, much more than anything especially neo-reactionary.

About that quote: If life is not worth living for 1000 years, then why is it worth living for 80? And if it's worth living for 80, why not 1000? If you don't want to live 1000 years, why not kill yourself now?

Is there some utility function that is positive up to 80 years but starts to become negative after that? (independent of level of health, since we're implicitly assuming that if you lived for 1000 years you'd be reasonably healthy during most of that time). If so, what is it?
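One toy answer, just to make the question concrete (my own illustration, not anything proposed in this thread): suppose instantaneous utility declines linearly as novelty is used up,

$$u(t) = a - b\,t,$$

which is positive exactly while t < a/b; set a/b ≈ 80 years and each additional year is a gain up to 80 and a net loss afterwards. The replies below are essentially arguing over whether any such declining term (boredom, outliving everyone you know) actually exists once health is held fixed.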

3Evan_Gaensbauer
I'm jumping on this bandwagon. User advancedatheist wrote: If the relative (dis)value of gains or losses for society at large regresses to a mean over time, why wouldn't this trend extend to what happens to us personally? Why wouldn't everything we observe or experience come to matter less? In a lifetime of centuries, if I see everything I now love degrade or disappear, I may also have the opportunity to grow a more nuanced love for things or persons that are more robust over time. The sting of pain at losing something loved in our first century of living may fade as it's dwarfed by how deeply we feel the loss or gain of love for something greater, something that can only be appreciated in a lifetime spanning centuries.
0Alsadius
Boredom.
4passive_fist
Why is the threshold for boredom 80 years?
1Alsadius
Empirically, it seems to be nearly identical to the age of retirement as things stand. Lots of 70-year-olds are just punching the clock most of the time (though there are certainly exceptions). I don't claim that we've extended life as long as our attention spans can allow. I think we could live longer and be okay. But current human psychology and culture are not designed for extremely long lives, even if we solved the issues of physiology.
3passive_fist
It's the age of retirement because physical and mental health decrease, but I explicitly said to assume reasonable health.
0Alsadius
I know, but that's not the biggest reason for retirement. Remember, a lot of people despise their jobs - they're looking to get out as soon as they can. A lot more don't really hate it, but wait for the time when they can afford to quit working (due to pensions, etc.), and leave as soon as they can, because retirement is seen as more fun. Those aren't dependent on aging.
2RowanE
A lot of people who say they're looking to get to retirement as soon as they can are optimising for it very poorly, as the early retirement community will argue and in many cases demonstrate. If you're in a sufficiently high-earning job OR are sufficiently frugal that you can save two thirds of what you earn or more, and still enjoy your life with expenses at that level, you can retire in about ten to fifteen years. Social effects dominate - if you earn three times the median salary or more, probably most of your peers earn comparable amounts and spend ~90% of what they earn, so trying to live on what's actually a perfectly normal amount to live on seems like extreme deprivation. And what the social effects do is keep the age of retirement at the age it was set at by governments enacting the first pension schemes a hundred years ago, when everyone was a factory worker. And that age was decided upon based on health deteriorating.
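For concreteness, here's the arithmetic behind the "ten to fifteen years" figure (my sketch, not RowanE's; it assumes the common 25x-annual-spending / 4%-withdrawal rule of thumb and, conservatively, ignores investment returns):

```python
# Back-of-envelope sketch: years of saving needed before retiring, as a
# function of the fraction of income saved. Assumes a nest egg of 25x
# annual spending suffices (the "4% rule") and zero investment returns.

def years_to_retire(savings_rate: float) -> float:
    spending = 1.0 - savings_rate      # fraction of income spent per year
    nest_egg = 25.0 * spending         # target: 25x annual spending
    return nest_egg / savings_rate     # years of saving to accumulate it

print(years_to_retire(2 / 3))   # 12.5 years, matching "ten to fifteen"
print(years_to_retire(0.10))    # 225.0 years at a more typical savings rate
```

With investment returns included, the two-thirds case drops closer to ten years, which is presumably where the low end of the range comes from.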
0Alsadius
No arguments. My comment isn't that all people are perfectly rational, it's that many people dislike their jobs.
0Evan_Gaensbauer
I concur that if the issues of physiology are solved, the layout of culture and psychology would have to change to accommodate, lest quality of life decrease. Just because humans might start living for several more centuries or millennia into the future doesn't mean we can assume there's a hunky-dory post-scarcity wonderland waiting for us there. For one, a longer lifespan means more time spent working to save for a future of greater needs and wants, to sustain that longer lifespan. Maybe humans would despair that several more centuries of living means several more centuries of work. Alternatively, as how humans experience time changes as a function of how long they live, they may be more willing to work longer if there's a greater diversity of work, and more opportunity for novel experiences and projects than a shorter lifespan could afford.
2NancyLebovitz
The boredom, memory issues, etc. discussion is both about the psychological effects of aging at present tech levels and about what long lives might be like if there were no physiological aging.
-1cameroncowan
I think life after 80 goes downhill not just because of health but because the people and things you are familiar with start going away. Things change so quickly that the world starts to become unfamiliar to you. It's like living on an alien planet. I think living to 1000 years would require one to leave the world, do some adjusting/re-education/reworking, and then re-engage with the world again. It would be like going back to college and starting again every 100 years. New friends, new music, new everything, so that one could keep going.
7IlyaShpitser
Have you read R. Scott Bakker's fiction? You might enjoy it, he deals with some issues that arise with living forever. I am surprised more LW folks aren't into Bakker. It's sort of Tolkien by way of Herbert with heavy rationalist overtones, e.g.: "This trilogy details the emergence of Anasûrimbor Kellhus, a brilliant monastic warrior, as he takes control of a holy war and the hearts and minds of its leaders. Kellhus exhibits incredible powers of prediction and persuasion, which are derived from deep knowledge of rationality, cognitive biases, and causality, as discovered by the Dûnyain, a secret monastic sect. "
2Richard_Kennaway
I read the first book in the series (after seeing it mentioned here some years back), and got some way into the second, but once I put it down I couldn't pick it up again. There are six books (so far). Are they worth it?

I started wondering who the books were about, and if different readers would have different answers to that question. To someone interested in rationality, Kellhus is the obvious protagonist, at least in the first volume - perhaps especially if introduced to the books through a mention on LessWrong. In the second, that theme is not prominent, as far as I recall, and the whole arc of Kellhus waging jihad across the world seems to be merely background -- but to what? Other readers might consider the relationship between Achamian and Esmenet to be the focus of the story. Others, the power struggles amongst the various factions. Others, the nature of the dark force of past ages that is emerging into the world again, which is mentioned but hardly appears on stage. What are these books about?

HT to the Prince of Nothing wiki for refreshing my memory of character names. Maybe it would be quicker to read the wiki than labour through the books.
0IlyaShpitser
One heuristic I heard is that if you didn't like the Silmarillion, you probably wouldn't like Bakker's stuff. ---------------------------------------- These things are a matter of taste, I suppose. I was not very interested in Kellhus the rationalist Mary Sue so much, but I found it interesting to ponder the ontological puzzle of the "No-God."
6Richard_Kennaway
Global warming; Islam; a cure for ageing; practical use of space; AI; something else. To know anything about the next 300 years one would have to know how all of these pan out.
0Evan_Gaensbauer
Are there any lesser cultural or technological trends that, if they panned out, you expect might have just as great an impact as the greater trends you've just mentioned above? I'm interested in your thoughts in this regard.
5NancyLebovitz
Getting more information out of less data, with the notable current example being discovering exoplanets from tiny variations in starlight and the history of humanity and other lifeforms from DNA. I'm expecting this trend to continue, but I have no idea what else it will apply to.
1Richard_Kennaway
I'm sure there must be. That was just a list of what immediately came to mind, with a catchall at the end. The Great Stagnation? Dysgenic breeding patterns? Malthusian limitations closing in? Genetically modified humans? Rationalism going viral? ("There is no God but Bayes and Eliezer is His prophet.") Rationalism going viral differently? ("Neuroscience proves that you do not exist. I do not exist. Even Eliezer does not exist. Room 101 exists.")
5ChristianKl
Why exactly Neoreactionary? Why don't you talk about the chance of fundamentalist Muslims dominating? Our social ideology has changed a lot in the last 300 years; the idea that it hasn't is one of the more central misconceptions of Neoreactionary thought. Even in the last 200 years we went from homosexuality being legal, to it being illegal because of puritans, to it being legal again, and now to gay marriage. It's just ridiculous to say that the puritans who got homosexuality banned have roughly the same ideology as today's diversity advocates.
6Vaniver
Right - and even if you take the more reasonable view and claim that the Puritans have the same genes or personalities or social roles or so on as today's diversity advocates, that means that we need to explain future social change in terms of those genes and personalities. If there will always be Mrs. Grundy, what will the future Mrs. Grundy oppress?
5Azathoth123
Citation please.
5Richard_Kennaway
Sounds good to me. (Disclaimer, or something: I am not signed up for cryonics.)
4Evan_Gaensbauer
In Asia, there are ideologies, philosophies, and/or religions two or three times older than Christianity, and they still have hundreds of millions of followers - or, at least, more than 'only a few academic specialists' who know about them. In particular, thought from the ancient Chinese and Indian civilizations still has great impact on the modern incarnations of those civilizations. Also, how evidence is gathered and stored is so much better than it was two thousand years ago. If ever a point comes that future civilizations look at ours as ancient, they will have much better information (in quality and quantity) on our histories and cultures than we have of, e.g., the origins of Hinduism, Judaism, or Mesopotamia.
2cameroncowan
I like this Curtis Henderson guy. My great-grandmother lived to be 96 and one of her complaints was that everyone she knew, loved, and cared about had died and she hardly had anyone left.
2Richard_Kennaway
"Our"? From other comments of yours I gather that you expect your own assumptions to be upheld, it was only everyone else's (outside the NRsphere) who were due for a come-uppance.
1RowanE
Christians believe that a god exists and was interventionist enough to start a religion that taught the truth about him, so why wouldn't they expect him to at least also be interventionist enough to prevent that same religion from disappearing? And I'm not sure how, given that someone already believes in an omnipotent interventionist god who's revealed his will to mankind, also believing that he'll perform a particular intervention in the future is "ridiculous" - do you have a theological argument, or one based on the Bible, for why only an idiot would think God plans to make the rapture happen?
0Lalartu
By modern standards it will be worse than the present. This is how social change works.

This is for the people versed in international and tax law.

By a ruling of the ECJ, all tax-payers in the EU can deduct charitable donations to any organisation within the EU from their taxes. In Germany, at least, this means that the charitable status has to be certified by the German authorities. The usual process here is that a legal entity wishing to accept tax-deductible donations has to document how its funds are used; it can then issue certificates to donors, which document the tax-deductibility of their donations.

Which leads to a couple of ideas and ... (read more)

Court OKs Barring High IQs for Cops

An aspiring cop got rejected for scoring too high on an IQ test.

I cannot begin to understand why they would do that.

0gjm
It may be worth mentioning that the article appears to be from 14 years ago. (Or it may not; for all I know the same policy is still in place.)
0bramflakes
I went into the article thinking the guy would have a freakishly high IQ (160+), where I could maybe see the point, but instead it was 125. The judges most likely scored higher than that - aren't they feeling even slightly belittled at the suggestion that they'd be ineligible for law enforcement work because they'd find it too boring?
0MathiasZaman
The weird part is that after being rejected as a police officer he goes off to work as a prison guard. The latter is way more boring. If he's able to put up with that, he should be able to cope with the boredom of law enforcement.
0skeptical_lurker
Their explanation is that he would get bored and leave. I'm not surprised - I've been rejected for jobs more than once due to being too smart. (I'm not just boasting, it does seem relevant)

Is there a better place than LW to put a big post on causal information, anthropics, being a person as an event in the probability-theory sense, and decision theory?

I'm somewhat concerned that such things are a pollutant in the LW ecosystem, but I don't know of a good alternative.

[-]gjm100

Why would it be a pollutant in the LW ecosystem? This sounds pretty central in the space of things LW people are interested in; what am I missing? (Are you concerned that it would be too elementary for LW? that it might be full of mistakes and annoy or mislead people? that its topic isn't of interest to LW readers? ...)

What's the intended audience? What's it for? (Introducing ideas to people who don't know them? Cutting-edge research? Thinking aloud to get your ideas in order? ...)

0Manfred
I feel like it increases the barrier to entry for new people. The intended audience is me from three years ago; I guess it's cutting-edge-adjacent.
9ChristianKl
Barrier to entry shouldn't be your main criterion. High-quality posts draw intelligent people.
3gjm
Nah. Both Discussion and Main fairly consistently have a mix of intimidating technicality, fun (e.g., "Rationality Quotes"), lifehackery, ethics, random discussion, etc., etc., etc. One more bit of intimidating technicality isn't going to scare anyone away who wasn't going to be scared away anyhow. Sounds like fun. Go for it, say I. (Important note: I have not done anything remotely resembling research into the thought processes of potential new LW readers, and my model of them may be badly wrong. Don't trust the above much. It's just one random person's opinion.)
2IlyaShpitser
Manfred, I think your posts on Sleeping Beauty, etc. are fine, people just may not be able to follow you or have anything to contribute.
0Manfred
Thanks. So would you recommend that for the new stuff I use those sorts of 3/4-baked stream of consciousness posts?
2IlyaShpitser
The way I do the equivalent of what you are doing is to write up something in various stages of "less than fully baked" and send it privately to someone I know is interested and whom I respect, and have a chat about it. What's nice about that is it exploits the threat of the embarrassment of outputting nonsense to force me to at least "bake" it sufficiently to have a meaningful conversation about it. It's very easy to output nonsense. But I am skeptical regarding the wiki model of generating good novel stuff -- there's too much noise.
[-]rkdj40

Do you or would you secretly invade your child's privacy for their own protection?

TL;DR because this turned into a lot of looking back on my relationship with my parents: I'd make sure they knew I had the capability, and then, if I saw a need to use it, I would. I wouldn't give an expectation of privacy and then violate it.

First, let me state that I'm in my late 20s, and have no children.

Secretly? No. Or rather, I would never hide that I have the capability, though I wouldn't necessarily tell them when I was using it. If I had reason to suspect them of hiding things from me, I might even hide the mechanism, but I'd let them know that I could check. The goal would be to indicate that whatever it is I'm concerned about is REALLY IMPORTANT (i.e. more important than privacy), and that I expect that to act as a deterrent.

On the other hand, I can't think of many scenarios that would call for such action. I would make it clear, for example, that a diary is private unless I expect the kid to be in danger, but the scenarios that actually come to mind for when I would go through it all involve things like "E left without telling anybody where e'd be, can't be reached by any means, and has been gone since yesterday" or similar; if it was a suspicion of something like... (read more)

I don't have children. But my answer is that, potentially, I would, but it would depend on the situation.

Firstly, I think the level of privacy that a child can reasonably expect to have from his parents is age and context-dependent. A thirty-year-old who has left the home has a far greater legitimate expectation of privacy than a fifteen-year-old living at home, who in turn can legitimately expect far more than a 5-year-old. I don't think most people have any problem with, say, using a baby-monitor on a young child, even though this could be viewed as a gross invasion of privacy if done on someone older.

Secondly, it is better to be honest and open where possible, as otherwise when your actions are discovered (and they likely will be), it could be seen as a breach of trust. However, if your child is lying to you, then it could be appropriate. For example, suppose my teenage child kept receiving suspicious parcels through the mail, and gave implausible accounts for the contents. I would then try to sit the child down and say right, we're going to open this together, and see what's inside. But if that wasn't possible, then yes, I might secretly open one of the parcels, to ascertain whether the child is doing something illegal, dangerous, or otherwise inappropriate.

3someonewrongonthenet
Corollary - would you secretly invade an adult's privacy for their own protection? I have more trouble answering that one. The answer to the "children" question begins somewhere on "estimate what they might think about the situation as adults"...now if I only knew where the line could be drawn for an adult, this would be simple...
2imuli
No. Aside from the don't-do-things-to-other-people-without-their-consent angle (which is hard with a child), my two-year-old is ascribing motives to my actions, and once privacy comes into being I doubt I'll be able to use any information that I do acquire without them noticing.
2Anatoly_Vorobey
Yes, I would. (Have two small children, haven't needed to).
2passive_fist
Depends on age. If they were teenagers, not secretly. For the simple reason that it could backfire (they find out their privacy has been invaded, then in the future hide things even more strongly). I would, however, expect them to tell me whatever information about their lives I wanted to know. For full disclosure: I'm in my late 20's and have no children.
0James_Miller
For a young child, of course it's not even close since the kid probably doesn't even value privacy.

Is there a way to sign up for cryonics and sign up to be an organ donor?

I know that some people opt to cryo-preserve only their brain. Is there a way to preserve the whole body, with the exception of the harvested organs? Is there any reason to? Does the time spent harvesting make a difference to how thoroughly the body is preserved?

7fubarobfusco
No, because the folks responsible for each process need custody of the body in the same time frame after legal death.
0skeptical_lurker
But all the organ donor people need is for the body to be kept cold. I get that there's a legal conflict, but couldn't you leave your body to Alcor with instructions for them to hand it over to the organ donor people after they remove the head?
6Leonhart
I believe it doesn't work like this; you need the circulatory system in order to perfuse the head, and in doing so the other organs are compromised. This could probably be avoided, but not without more surgical expertise/equipment than today's perfusion teams have, I think.
0skeptical_lurker
Oh, because the cryoprotectant is toxic. I forgot about that. I suppose other internal organs apart from the heart could be removed before perfusion starts, but the Alcor people are not qualified to officially do this. All in all it seems like the sort of problem that would be solved if cryonics ever became big enough that it created a sufficient shortage of organs that hospitals actually dedicated some resources to solving the problem.

I am not skilled at storytelling in casual conversation (telling personal anecdotes). How can I improve this? In particular, what is a good environment to practice while limiting the social cost of telling lame stories?

3katydee
I'm considered pretty good in this respect. I think the #1 thing that helps is just paying attention to things a lot and having a high degree of situational awareness, which causes you to observe more interesting things and thus have more good stories to share. Reading quickly also helps.

When it comes to actually telling the stories, the most important thing is probably to pay attention to people's faces and see what sorts of reactions they're having. If people seem bored, pick up the pace (or simply withdraw). If they seem overexcited, calm it down.

One good environment to practice the skill of telling stories is tabletop role-playing games, especially as the DM/storyteller/whatever. In general, I think standards in this field are usually fairly low and you get a good amount of time to practice telling (very unusual) stories in any given session.
3Evan_Gaensbauer
Although I consider myself of only average storytelling ability, I'd like to be better. Additionally, I've always been curious how one can improve this skill, rather than just leaving one's talent in it to the whims of social fortune, or whatever. As such, I've outsourced this question to my social media networks. If I haven't returned with some sort of response within a few days, feel free to remind me with either a reply to this comment or a private message.
2cmdXNmwH
Ping :)
2Evan_Gaensbauer
I didn't get direct answers to your query, but I got some suggestions for dealing with the problem. One person told me to defuse an awkward situation, if a story isn't well-received, with a joke:

Another friend suggested it's all about practice, and bearing through it:

That particular friend is a rationalist. By 'metacognition', I believe he meant 'notice you're practicing the right skills'. Basically, in your head, or on a piece of paper, break down the aspect(s) of storytelling you want to acquire as skills, and only spend time training those. For example, you probably want to get into the habit of telling stories so that the important details that make the story pop come out, rather than getting into the habit of qualifying the points with background details that listeners won't care about. This is a problem I myself have with storytelling. In each of our own minds, we're the only one who remembers the causal details that led to that extraordinary sequence of events that day on vacation. Our listeners don't know the details, because they weren't there, so they'll assume you didn't make any glaring omissions unless someone asks about it.

Also, try starting small, I guess. Like, tell shorter anecdotes, and work up to bigger ones. Also, I don't believe it's disingenuous to mentally rehearse a short story you might tell beforehand. I used to believe it was, because good storytellers I know, like my uncle, always seem to tell stories off the cuff. Having a good memory, and not using too much jargon, helps. However, I wouldn't be surprised if good storytellers think back on their life experiences and think to themselves 'my encounter today would make a great story'.

Here are some suggestions for generating environments limiting the social costs of telling lame stories. Another friend of mine thought I was the one asking how to limit the social cost of telling lame stories, so he suggested I tell him stories of mine I haven't told him before, and he won't mind if they're
0cameroncowan
Learn storytelling; read a writing book. A good story has to have a setting, a character, situations that are ironic, funny or heartfelt, and then a transformation. Sometimes it can be short and other times it can be longer and more like an epic. If you have all the elements, then learn how to keep an audience's attention with good language.

Sweden launches a price comparison site, moneyfromsweden.se, for transferring money abroad. The site has information in many languages spoken by different immigrant communities in Sweden.

People living in Sweden send large sums of money to family and friends abroad. But it can be expensive. Fees and currency exchange costs differ between money transfer operators.

On average it costs about 15 percent (150 SEK) to send 1 000 SEK. In extreme cases the cost can be as high as 48 percent.

Therefore, the Swedish Consumer Agency has, at the Government's request, d

... (read more)

Say I have a desktop with a monitor, a laptop, a tablet and a smart phone. I am looking for creative ideas on how to use them simultaneously - for example, when programming, using the tablet to display documentation and having multiple screens via the desktop computer and laptop, while the smart phone displays some tertiary information.

4Sherincall
Unplug the desktop monitor and plug it into the laptop. Open some docs on the tablet. Keep your todo list on the phone. Or just get another monitor or two and use that. In my experience, you never need more than 3 monitors at once (for one computer, of course).
2eeuuah
The biggest hangup I've found in using multiple computers simultaneously is copy-pasting long strings. I can chat them to myself, but it's still slightly more awkward than I'd like. Otherwise, Sherincall is pretty on point.
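One low-tech workaround for the copy-paste problem (a minimal sketch of my own, not something eeuuah describes): run a tiny HTTP "shared clipboard" on the desktop and push/pull snippets from any device on the LAN with curl or a browser. The port number is arbitrary:

```python
# Minimal shared-clipboard server: POST a snippet from one device,
# GET it from another. No persistence, no auth -- a LAN toy only.
from http.server import BaseHTTPRequestHandler, HTTPServer

clipboard = b""  # last snippet pushed by any device

class ClipHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Return the current snippet as plain text.
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; charset=utf-8")
        self.end_headers()
        self.wfile.write(clipboard)

    def do_POST(self):
        # Replace the snippet with the request body.
        global clipboard
        length = int(self.headers.get("Content-Length", 0))
        clipboard = self.rfile.read(length)
        self.send_response(204)
        self.end_headers()

if __name__ == "__main__":
    # Bind to all interfaces so the laptop/tablet/phone can reach it.
    HTTPServer(("", 8765), ClipHandler).serve_forever()
```

Push with `curl -d 'some long string' desktop:8765`, pull with `curl desktop:8765` (or just open that URL on the tablet).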

How do you track and control your spending? Disregarding financial privacy, I started paying by card for everything, which allows me to track where I spend my money but not really on what. I find that in general I spend less than I earn, because spending money somehow hurts.

4Richard_Kennaway
I have a spreadsheet in which I record every financial transaction, and enter all future transactions, estimated as necessary, out to a year ahead. Whenever I get a bank statement, credit card statement, or the like, I compare everything in it with the spreadsheet and correct things as necessary.

I don't try to keep track of cash spent out of my pocket. I tried that once, but found it wasn't practical. The numbers would never add up and there would be no independent record to check them against.

One row of the spreadsheet computes my total financial assets, which I observe ticking upwards month by month. I don't record in detail what I buy, only the money spent and where (which is a partial clue to what I bought). I'm sufficiently well off that I don't need to plan any of my expenditure in detail, only consider from time to time whether I want to direct X amount of my resources in the way I observe myself doing.

I spend less than I earn, because it seems to me that that is simply what one does, if one can, in a sensibly ordered life.
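For anyone who wants the same mechanics without a spreadsheet, a minimal sketch (my own reconstruction of the scheme described above, with made-up numbers): one ledger of dated transactions, future ones entered as estimates, and total assets read off at any date.

```python
# Toy version of the "record everything, project a year ahead" spreadsheet.
# All entries and the opening balance are invented for illustration.
from datetime import date

# (date, description, amount): negative = spending; future rows are estimates.
ledger = [
    (date(2014, 12, 1),  "salary",           2500.00),
    (date(2014, 12, 3),  "rent",             -800.00),
    (date(2014, 12, 20), "credit card bill", -430.50),
    (date(2015, 1, 1),   "salary (est.)",    2500.00),
]

def assets_on(day: date, opening: float = 1000.00) -> float:
    """Total assets on `day`: opening balance plus all entries up to it."""
    return opening + sum(amount for d, _, amount in ledger if d <= day)

print(assets_on(date(2014, 12, 31)))  # 2269.5
print(assets_on(date(2015, 1, 31)))   # 4769.5 -- ticking upwards month by month
```

Reconciling against a bank statement then amounts to diffing the statement's rows against the ledger's.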
0Alsadius
Congratulations, you're about a million times more organized than most people. Even my girlfriend isn't that particular - she records transactions, and has a dozen budget categories, but she doesn't predict a year out.
3Robin_Hartell
MoneyDashboard.com - links directly to my credit cards and bank accounts. I hear that the US equivalent is mint.com
2Alsadius
My income is variable and hasn't been great lately. As a result, several months ago I flipped the "I'm poor!" switch that's been lingering in my brain since I was a student, and so I avoid almost all unnecessary spending (a small recreation budget is allowed, for sanity, but otherwise it's necessities and business expenses only). Every few months I review spending to see if there are any excessive categories, but my intuition has been pretty good. And yeah, everything on plastic. Not even because of tracking, mostly because Visa gives me 1% cash back, which is a better bribe than anyone else offers.
2Lumifer
Some cards will give you 1.5% back and I think I've seen an ad for a Citibank card that gives you 1% on purchase plus another 1% on payment.
0Alsadius
Most of those have annual fees, though - I've done the math, and my spending isn't high enough to justify them. My 1% card is free. Also, I have my credit card number memorized, so changing it would impose a fairly high annoyance burden on me. But it's worth noting for those who have higher spending patterns than I do (~$1000-1500/month on credit).
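The break-even arithmetic is quick to check (the figures below are assumed for illustration - the fee especially):

```python
# Does a 2% card with an annual fee beat a free 1% card? Assumed figures:
monthly_spend = 1250          # midpoint of the ~$1000-1500/month above
free_rate, fee_rate = 0.01, 0.02
annual_fee = 120              # hypothetical fee for the richer card

print(12 * monthly_spend * free_rate)               # 150.0/year, free card
print(12 * monthly_spend * fee_rate - annual_fee)   # 180.0/year after the fee

# Monthly spend at which the fee card merely breaks even:
print(annual_fee / (fee_rate - free_rate) / 12)     # 1000.0
```

So under these assumptions the fee card is only $30/year ahead at the top of that spending range - hardly worth the annoyance of switching.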
2Lumifer
Nope. Here is the 1.5% Capital One card. Here is the 2% Citi card.
0Alsadius
For clarity, I'm in Canada. All the card offers I've seen up here that are meaningfully better than 1% have fees. Americans can take note of those, though.
0Lumifer
Ah. Sorry for my presumption.
0hyporational
I did a rough estimation of my normal monthly costs of living and then added a small amount for fun. The rest of my monthly paycheck gets semi-automatically invested in ETFs and can't be used without transaction costs. I have a small buffer account that I can use for unexpected expenses and if this happens I'll be aware of it the next month and try to spend less and grow the buffer account.
0Manfred
Once you have a few thousand socked away, remember to start investing and picking up your free money.
[-][anonymous]00

LessWrong in Alameda:

I've posted elsewhere that I'm applying for work in Alameda, CA. At the moment, I'm not at the top of the list to get the job but I'm still in the running, so, before any further interviews occur, I decided I'd ask this.

Do any LWers live or work in Alameda? Given our strong connection to the Bay Area, I'm assuming at least a few people are from the island. I'd especially be interested in talking to anyone working at the library(ies) there. I'd like to get an idea of Alameda from an LWer's perspective. If it's a good place to live, work... (read more)

[-][anonymous]00

I am trying to design a competition for students focused on Bayesian understanding of ecology. Could I ask here for some pointers? I will have data on 2 sets of maybe physiologically linked parameters (from some research I plan to do next summer), and will then ask the students to review qualitative descriptions of the link between them, like 'Some plants having mycorrhiza have higher biomass than the average - 3 populations out of 5' (see L. Carroll, The Game of Logic). There will be other variables that might correlate with mycorrhiza more or less strongly ... (read more)

Self Help Books

I'm looking to buy a couple audiobooks from Amazon. Any good recommendations?

3NancyLebovitz
This is a filter rather than a recommendation, but read the reviews to find out whether people used the book rather than just finding it a pleasant read. What are you hoping to improve about your life?
6Ritalin
Right now I think my two weakest points are:

* Akrasia: I have a lot of trouble keeping a proper sleeping schedule and not slipping into a night owl lifestyle, going to the gym as often as I should, and keeping my diet, and, depending on the circumstances, I have a lot of trouble keeping myself motivated, organized, and productive.

* Relatively poor social skills. They're not nearly as bad as they once were, but I still find myself somewhat clumsy and awkward, in the way high-IQ people tend to be. Out of synch. Having different priorities than the folks around me. Coming up with stuff out of left field. Spacing out, being prompted to explain, retelling a train of thought that to them seems convoluted and to me seems natural. Having a terrible time maintaining proper etiquette, especially table manners. Either I'm put down as "crazy" or put on a pedestal as a "genius", but I'm always put aside, and have very few friends. Love life is similarly disastrous, but I don't think there's a book for people who fall in love too hard, too soon, and too easily.
2NancyLebovitz
Tentative suggestion: Maybe you need to live somewhere where you have more access to smart people.
2Ritalin
They're a bit hard to come by, and, let's face it, we can be hard to live with even among ourselves.