
Open thread, Dec. 1 - Dec. 7, 2014

3 Post author: MrMind 01 December 2014 08:29AM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

Comments (346)

Comment author: advancedatheist 01 December 2014 04:05:01PM *  6 points [-]

I wonder why people like us who talk about wanting to "live forever" don't think more seriously about what that could mean in terms of overturning our current assumptions and background conditions, if our lives stretch into centuries and then into millennia.

I started to think about this based on something Mike Darwin wrote on his blog a few years back:

http://chronopause.com/index.php/2011/04/19/cryonics-nanotechnology-and-transhumanism-utopia-then-and-now/

Many years ago, when I was a teenager, Curtis Henderson was driving us out of Sayville to go to the Cryo-Span facility, and I said something that irritated him – really set him off on a tear. Beverly (Gillian Cumings) had just died, and it had become clear that she was not going to get frozen, and I was moaning about it, crying about it in fact, and this is what he said to me: “You wanna live forever kid? You really wanna live forever?! Well, then you better be ready to go through a lot more of this – ’cause this ain’t nothin. Ever been burned all over, or had your hand squashed in a machine? Well get prepared for it, because you’re gonna experience that, and a lot more that’s worse than either you or me can imagine. Ever lost your girlfriend or your wife, or your mother or your father, or your best friend? Well, you’re gonna lose ‘em, and if you live long enough, really, really long enough, you’re gonna lose everybody; and then you’ll lose ‘em over and over again. Even if they don’t die, you’ll lose ‘em, so be prepared. You see all this here; them boats, this street, that ocean, that sun in that sky? You’re gonna lose ‘em all! The more you go on, the more you’ll leave behind, so I’m telling you here and now, you’d best be damn certain about this living forever thing, because it’s gonna be every bit as much Hell as it is Heaven.”

So, for example, I've started to question the assumption that the social ideology we've inherited from the Enlightenment - a recent intellectual movement only 300 years old - has gotten locked in as a permanent part of the human condition. Now I wouldn't assume anything of the sort, and I can see the likelihood of Neoreactionary future societies. Even if we don't get that way because of the inherent weaknesses of the Enlightenment Project itself, we could stumble into them regardless through a drunkard's walk.

I also like to ask Christians why their religion can't disappear eventually, and I don't mean through that ridiculous rapture belief some simple-minded evangelicals hold. From the perspective of people living ten thousand years from now, assuming humans survive, their dominant world religion might have started sometime between now and then, and if knowledge of Christianity still exists then, only a few academic specialists would know anything about it from fragmentary evidence.

In practical terms, this perspective helps me to disengage from current events that don't matter much in the long run. At my current age (55), for example, American Presidents come and go subjectively quickly, so I tend to ignore them as much as possible compared with longer-term trends like the demographic social engineering in the U.S. that bloggers like Steve Sailer write about. I also tend to ignore geek fads that will allegedly "change everything," like Bitcoin, 3D printing and seasteading, until the beta testers beat the hell out of these innovations and we can get a more realistic view of what they can do despite what the hype and propaganda say.

So what do you think about the conditions of human life over, say, the next 300 years?

Comment author: ChristianKl 02 December 2014 12:32:35AM 2 points [-]

Why exactly Neoreactionary? Why don't you talk about the chance of fundamentalist Muslims dominating?

I've started to question the assumption that the social ideology we've inherited from the Enlightenment

Our social ideology has changed a lot in the last 300 years. The notion that it hasn't is one of the more central misconceptions of Neoreactionary thought.

Even in 200 years we went from homosexuality being legal, to it being illegal because of the Puritans, to it being legal again, and now to gay marriage.

It's just ridiculous to say that the Puritans who got homosexuality banned have roughly the same ideology as today's diversity advocates.

Comment author: Azathoth123 03 December 2014 02:11:00AM *  4 points [-]

Even in 200 years we went from homosexuality being legal

Citation please.

Comment author: Vaniver 02 December 2014 06:35:19PM 4 points [-]

It's just ridiculous to say that the Puritans who got homosexuality banned have roughly the same ideology as today's diversity advocates.

Right- and even if you take the more reasonable view and claim that the Puritans have the same genes or personalities or social roles or so on as today's diversity advocates, that means that we need to explain future social change in terms of those genes and personalities. If there will always be Mrs. Grundy, what will the future Mrs. Grundy oppress?

Comment author: RowanE 02 December 2014 06:25:53PM 1 point [-]

Christians believe that a god exists and was interventionist enough to start a religion that taught the truth about him, so why wouldn't they expect him to at least also be interventionist enough to prevent that same religion from disappearing? And I'm not sure how, given that someone already believes in an omnipotent interventionist god who has revealed his will to mankind, also believing that he'll perform a particular intervention in the future is "ridiculous" - do you have a theological argument, or one based on the Bible, for why only an idiot would think God plans to make the rapture happen?

Comment author: RichardKennaway 02 December 2014 10:29:51AM 0 points [-]

what that could mean in terms of overturning our current assumptions and background conditions

"Our"? From other comments of yours I gather that you expect your own assumptions to be upheld; it is only everyone else's (outside the NRsphere) that are due for a come-uppance.

Comment author: passive_fist 01 December 2014 11:49:10PM 6 points [-]

About that quote: If life is not worth living for 1000 years, then why is it worth living for 80? And if it's worth living for 80, why not 1000? If you don't want to live 1000 years, why not kill yourself now?

Is there some utility function that is positive up to 80 years but starts to become negative after that? (independent of level of health, since we're implicitly assuming that if you lived for 1000 years you'd be reasonably healthy during most of that time). If so, what is it?

Comment author: cameroncowan 07 December 2014 09:25:29AM 0 points [-]

I think life after 80 goes downhill not just because of health but because the people and things you are familiar with start going away. Things change so quickly that the world starts to become unfamiliar to you. It's like living on an alien planet. I think living to 1000 years would require one to periodically leave the world, do some adjusting/re-education/reworking, and then re-engage with the world again. It would be like going back to college every 100 years and starting again: new friends, new music, new everything, so that one could keep going.

Comment author: Alsadius 02 December 2014 05:51:14AM 0 points [-]

Boredom.

Comment author: NancyLebovitz 02 December 2014 05:59:52AM 1 point [-]
Comment author: passive_fist 02 December 2014 05:57:24AM 3 points [-]

Why is the threshold for boredom 80 years?

Comment author: Alsadius 02 December 2014 06:14:59AM 1 point [-]

Empirically, it seems to be nearly identical to the age of retirement as things stand. Lots of 70-year-olds are just punching the clock most of the time (though there are certainly exceptions).

I don't claim that we've extended life as long as our attention spans can allow. I think we could live longer and be okay. But current human psychology and culture are not designed for extremely long lives, even if we solved the issues of physiology.

Comment author: passive_fist 02 December 2014 07:42:42AM 2 points [-]

It's the age of retirement because physical and mental health decline, but I explicitly said to assume reasonable health.

Comment author: Alsadius 02 December 2014 08:29:36AM 1 point [-]

I know, but that's not the biggest reason for retirement. Remember, a lot of people despise their jobs - they're looking to get out as soon as they can. A lot more don't really hate it, but wait for the time when they can afford to quit working (due to pensions, etc.), and leave as soon as they can, because retirement is seen as more fun. Those aren't dependent on aging.

Comment author: RowanE 02 December 2014 06:17:44PM 1 point [-]

A lot of people who say they want to retire as soon as they can are optimising for it very poorly, as the early retirement community will argue and in many cases demonstrate. If you're in a sufficiently high-earning job, or are sufficiently frugal that you can save two-thirds of what you earn or more and still enjoy your life with expenses at that level, you can retire in about ten to fifteen years. Social effects dominate: if you earn three times the median salary or more, probably most of your peers earn comparable amounts and spend ~90% of what they earn, so trying to live on what's actually a perfectly normal amount seems like extreme deprivation. And what the social effects do is keep the age of retirement at the age it was set at by the governments that enacted the first pension schemes a hundred years ago, when everyone was a factory worker. And that age was decided upon based on health deteriorating.
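The "ten to fifteen years" figure can be sanity-checked with a small sketch (the function name and the parameter values below are illustrative assumptions, not anything from the comment): normalize income to 1 per year, assume a constant real return of 5%, and treat retirement as the point where a 4% annual withdrawal from savings covers current spending.

```python
def years_to_retirement(savings_rate, real_return=0.05, withdrawal_rate=0.04):
    """Years of saving until withdrawals can cover current spending.

    Income is normalized to 1 per year; spending is whatever isn't saved.
    Retirement is declared once withdrawal_rate * nest_egg >= spending.
    """
    spending = 1.0 - savings_rate
    target = spending / withdrawal_rate  # nest egg needed to sustain spending
    nest_egg, years = 0.0, 0
    while nest_egg < target:
        nest_egg = nest_egg * (1 + real_return) + savings_rate
        years += 1
    return years

# Saving two-thirds of income: retirement in about a decade.
print(years_to_retirement(2 / 3))  # 10 under these assumptions
```

Under these assumed parameters a two-thirds savings rate reaches retirement in about ten years, and even a 50% savings rate gets there in under twenty, which is roughly the range the comment cites.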

Comment author: Evan_Gaensbauer 03 December 2014 04:25:11AM 2 points [-]

I'm jumping on this bandwagon.

User advancedatheist wrote:

In practical terms, this perspective helps me to disengage from current events that don't matter much in the long run. At my current age (55), for example, American Presidents come and go subjectively quickly, so I tend to ignore them as much as possible compared with longer-term trends like the demographic social engineering in the U.S. that bloggers like Steve Sailer write about. I also tend to ignore geek fads that will allegedly "change everything," like Bitcoin, 3D printing and seasteading, until the beta testers beat the hell out of these innovations and we can get a more realistic view of what they can do despite what the hype and propaganda say.

If the relative (dis)value of gains or losses for society at large regresses to a mean over time, why wouldn't this trend extend to what happens to us personally? Why wouldn't everything we observe or experience also come to matter less? In a lifetime of centuries, if I see everything I now love degrade or disappear, I may also have the opportunity to grow a more nuanced love for things or persons that are more robust over time. The sting of pain at losing something loved in our first century of living may fade as it's dwarfed by how deeply we feel the loss or gain of love for something greater, something that can only be appreciated in a lifetime spanning centuries.

Comment author: NancyLebovitz 01 December 2014 06:51:27PM 13 points [-]

I would expect social arrangements to appear that we aren't even beginning to imagine, much more than anything especially neo-reactionary.

Comment author: cameroncowan 07 December 2014 09:22:45AM 1 point [-]

I like this Curtis Henderson guy. My great-grandmother lived to be 96 and one of her complaints was that everyone she knew, loved, and cared about had died and she hardly had anyone left.

Comment author: Evan_Gaensbauer 03 December 2014 04:17:23AM 3 points [-]

if knowledge of Christianity still exists then, only a few academic specialists would know anything about it from fragmentary evidence.

In Asia, there are ideologies, philosophies, and/or religions two or three times older than Christianity, and they still have hundreds of millions of followers, or at least more than 'only a few academic specialists' who know about them. In particular, thought from ancient Chinese and Indian civilizations still has great impact on the modern incarnations of those civilizations. Also, how evidence is gathered and stored is so much better than it was two thousand years ago. If a point ever comes at which future civilizations look at ours as ancient, they will have information on our histories and cultures much better (in quality and quantity) than what we have of, e.g., the origins of Hinduism, Judaism, or Mesopotamia.

Comment author: RichardKennaway 01 December 2014 11:35:00PM 5 points [-]

So what do you think about the conditions of human life over, say, the next 300 years?

Global warming; Islam; a cure for ageing; practical use of space; AI; something else. To know anything about the next 300 years one would have to know how all of these pan out.

Comment author: RichardKennaway 01 December 2014 11:13:52PM 3 points [-]

You see all this here; them boats, this street, that ocean, that sun in that sky? You’re gonna lose ‘em all! The more you go on, the more you’ll leave behind, so I’m telling you here and now, you’d best be damn certain about this living forever thing, because it’s gonna be every bit as much Hell as it is Heaven.”

Sounds good to me.

(Disclaimer, or something: I am not signed up for cryonics.)

Comment author: IlyaShpitser 01 December 2014 04:11:35PM *  5 points [-]

Have you read R. Scott Bakker's fiction? You might enjoy it, he deals with some issues that arise with living forever. I am surprised more LW folks aren't into Bakker. It's sort of Tolkien by way of Herbert with heavy rationalist overtones, e.g.:

"This trilogy details the emergence of Anasûrimbor Kellhus, a brilliant monastic warrior, as he takes control of a holy war and the hearts and minds of its leaders. Kellhus exhibits incredible powers of prediction and persuasion, which are derived from deep knowledge of rationality, cognitive biases, and causality, as discovered by the Dûnyain, a secret monastic sect. "

Comment author: RichardKennaway 01 December 2014 11:26:10PM *  1 point [-]

I read the first book in the series (after seeing it mentioned here some years back), and got some way into the second, but once I put it down I couldn't pick it up again. There are six books (so far). Are they worth it?

I started wondering who the books were about, and if different readers would have different answers to that question. To someone interested in rationality, Kellhus is the obvious protagonist, at least in the first volume, or perhaps just to someone introduced to the books through a mention on LessWrong. In the second, that theme is not prominent, as far as I recall, and the whole arc of Kellhus waging jihad across the world seems to be merely background -- but to what? Other readers might consider the relationship between Achamian and Esmenet to be the focus of the story. Others, the power struggles amongst the various factions. Others, the nature of the dark force of past ages that is emerging into the world again, which is mentioned but hardly appears on stage.

What are these books about?

HT to the Prince of Nothing wiki for refreshing my memory of character names. Maybe it would be quicker to read the wiki than labour through the books.

Comment author: Capla 02 December 2014 11:55:59PM *  1 point [-]

Is there a way to sign up for cryonics and sign up to be an organ donor?

I know that some people opt to cryo-preserve only their brain. Is there a way to preserve the whole body, with the exception of the harvested organs? Is there any reason to? Does the time spent harvesting make a difference to how thoroughly the body is preserved?

Comment author: fubarobfusco 03 December 2014 05:44:48AM 3 points [-]

Is there a way to sign up for cryonics and sign up to be an organ donor?

No, because the folks responsible for each process need custody of the body in the same time frame after legal death.

Comment author: advancedatheist 02 December 2014 05:20:54PM 4 points [-]

Has anyone come across research on parents' attitudes towards their teenage sons when they can see that girls don't find them sexually attractive? If you saw that happening to your son, that has to affect how you feel about him, compared with how you would feel if you saw that your son had sexual opportunities.

This relates to my puzzlement about the idea that the "sexual debut" happens as an organic developmental stage with a median age of 17, compared with the fact that quite a few straight young men miss this window and become the targets of social derision and contempt.

Reference:

Who is the 40-year-old virgin and where did he/she come from? Data from the National Survey of Family Growth.

http://www.ncbi.nlm.nih.gov/pubmed/19493289

Comment author: shminux 02 December 2014 07:38:50PM *  3 points [-]

quite a few straight young men miss this window and become the targets of social derision and contempt.

I'd say it's more of a pity than derision and contempt, but then it probably depends on one's social circles.

Comment author: cameroncowan 07 December 2014 09:30:26AM 1 point [-]

I think parents want their children to be successful with their peers, particularly if they themselves were. I helped raise my cousins, and the youngest one was the last to really attract men; we felt really sorry for her because she was missing out, and she was depressed because her sisters were always attached and she was not. It's a social thing, but it doesn't really hurt you as a person. I do think, however, that your attractiveness level when you're young affects your perception of your attractiveness for the rest of your life. Evolutionarily, when we only lived to 40, it was important to keep the species going. Now, I think it is a matter of fitting in and finding one's place in society. Knowing at a young age that you are attractive helps keep you going as life goes along, whereas if you don't feel attractive then you internalize that idea, and it can be very hard to break.

Comment author: Evan_Gaensbauer 03 December 2014 04:50:08AM 1 point [-]

This relates to my puzzlement about the idea that the "sexual debut" happens as an organic developmental stage with a median age of 17, compared with the fact that quite a few straight young men miss this window and become the targets of social derision and contempt.

Now I'm puzzled by this too. Does the median age for young males making their "sexual debut" vary by culture?

Comment author: polymathwannabe 01 December 2014 12:17:37PM 5 points [-]

I'm going to narrate a Mutants and Masterminds roleplaying campaign for my friends, and I'm planning that the final big villain behind all the plots will be... Clippy.

Any story suggestions?

Comment author: RowanE 01 December 2014 01:29:39PM 6 points [-]

Sabotage of a big company's IT systems, or of an IT company that maintains those systems, to force people to use paperclip-needing physical documents while the systems are down. The paperclips can be mentioned, but as what seems to the players like just fluff describing how this (rival company/terrorist/whatever) attack has disrupted things.

Comment author: ilzolende 02 December 2014 05:46:02AM *  5 points [-]

It depends on how familiar your friends are with uFAI tropes, so you may want to tone these up or down to keep foreshadowing at the right level. If they're highly familiar, you may want to switch paperclips with staples.

  • Monsters attack a factory, which happens to manufacture binder clips.
  • An infectious disease spreads across [home city], causing photosensitive epilepsy. Careful observers will note that seizures occur most often when lights strobe at computer monitor refresh rates.
  • Corporate executives experience wave of meningitis (nanotechnology-induced). When they return to work, they cancel all paperless-office initiatives.
  • Population of [distant area] missing. Buried underground: lots of paperclips. (If needed, have the paperclips test positive for some hallucinogen as a red herring).
  • Iron mines report massive thefts, magnetism-related supervillain denies all responsibility and is actually innocent. Alternatively, if any heroes have metal-related powers, frame one of them and present false evidence to the players that a supervillain did it.
  • Biotechnology companies seem to be colluding about something. The secret: somebody or something has been producing genetic material with their equipment, and they need to find out who, ideally without causing a panic. Maybe some superheroes could investigate for them?

If you do run this, please share your notes with us.

Edit: Now I want to run this sort of campaign. Thanks!

Comment author: polymathwannabe 02 December 2014 01:00:40PM 1 point [-]

Good ideas. My friends don't know anything about uFAI topics; if I drop the name "Clippy," they'll think of the MS Office assistant.

Comment author: FrameBenignly 01 December 2014 07:08:06PM 3 points [-]

I may write a full discussion thread on this at some point, but I've been thinking a lot about undergraduate core curriculum lately. What should it include? I have no idea why history has persisted in virtually every curriculum I know of for so long. Do many college professors still believe history has transfer-of-learning value in terms of critical thinking skills? Why? The transfer of learning thread touches on this issue somewhat, but I feel like most people there are overvaluing their own field, hence computational science is overrepresented and social science, humanities, and business are underrepresented. Any thoughts?

Comment author: Evan_Gaensbauer 03 December 2014 03:50:12AM 1 point [-]

Scott Alexander from Slate Star Codex has the idea that if the humanities are going to be taught as part of a core curriculum, it might be better to teach the history of them backwards.

Comment author: MrMind 03 December 2014 08:36:54AM 0 points [-]

When I was in high school, I discussed this very idea with my Philosophy teacher. She said that (at least here in Italy) curricula for humanities are still caught in the Hegelian idea that history unfolds in logical structures, so that it's easier to understand them in chronological order.
I reasoned instead that contemporary subjects are more relevant, more interesting and we have much more data about them, so they would appeal much better to first year students.

Comment author: zedzed 02 December 2014 05:44:54AM *  0 points [-]

tl;dr: having a set of courses for everyone to take is probably a bad idea. People are different and any given course is going to, at best, waste the time of some class of people.

A while ago, I decided that it would be a good thing for gender equality to have everyone take a class on bondage that consisted of opposite-gender pairs tying each other up. Done right, it would train students "it's okay for the opposite gender to have power, nothing bad will happen!" and "don't abuse the power you have over people." In my social circle, which is disproportionately interested in BDSM, this kinda makes sense. It may even help (although my experience is that by the time anyone's ready to do BDSM maturely, they've pretty much mastered not treating people poorly based on gender.) It would also be a miraculously bad idea to implement.

In general, I think it's a mistake to have a "core curriculum" for everyone. For any five people I know, I could go through the course catalog of, say, MIT and find a course that at least one of them would get nothing out of. (This is easier than it seems at first. Me taking social science or literature courses makes nobody better off; the last social science course I took made me start questioning whether freedom of religion was a good thing. I still think it's a very good thing, but presenting me with a highly-compressed history of every inconvenience it's produced in America's history doesn't convince my system 1. Similarly, there exist a bunch of math/science courses that I would benefit greatly from taking, but that would just make the social science or literature people sad. Also, I know a lot of musicians, for whom there's no benefit from academic classes; they just need to practice a lot.)

Having a typical LWer take a token literature class generally means they're going to spend ~200 hours learning stuff they'll forget exponentially. (This could be remedied by Anki, but there's a better-than-even chance the deck gets deleted the moment the final's over.) Going the other way, forcing writers to take calculus probably won't produce any tangible benefits, but it will make them pissed off and prone to writing things with science-is-bad plotlines. (Yes, most of us probably wish writers would get scientifically literate, but until we can figure out a way to make that happen, forcing them to take math and science courses is just going to have predictable effects on what they write. And do you really think it helps to have a group of people who substantially influence culture hate math and science?)

For the typical LWer, I'd go heavy on the math and CS, with enough science (physics through psych) to counteract Dunning-Kruger, and some specialization. The idea is that math and CS are tools that let you take something you already know and find out something you didn't know for free, the sciences are there to reduce inferential gaps and eliminate illusory competence, and the specialization gets you a job. This would be very good for people-who-are-central-examples-of-LWers (although I'm sure there are many people here for whom this would be very bad), but I have trouble imagining that this would work for more than a few percent of the population. In fact, for everyone going into a field that doesn't need a lot of technical knowledge, I'd look for the most efficient way to measure intelligence and conscientiousness (preferably separately), which looks very little like an undergraduate curriculum.

Comment author: FrameBenignly 02 December 2014 06:21:33AM 1 point [-]

If a field doesn't require a lot of technical knowledge, why bother with college in the first place? I'm not so sure how useful your examples are since most creative writers and musicians will eventually fail and be forced to switch to a different career path. Even related fields like journalism or band manager require some technical skills.

Comment author: zedzed 02 December 2014 08:55:02AM -1 points [-]

Signalling, AKA why my friend majoring in liberal arts at Harvard can get a high-paying job even though college has taught him almost no relevant job skills.

Comment author: Nornagest 02 December 2014 12:34:54AM *  2 points [-]

If I were designing a core curriculum off the top of my head, it might look something like this:

First year: Statistics, pure math if necessary, foundational biology, literature and history of a time and place far removed from your native culture. Classics is the traditional solution to the latter and I think it's still a pretty good one, but now that we can't assume knowledge of Greek or Latin, any other culture at a comparable remove would probably work as well. The point of this year is to lay foundations, to expose students to some things they probably haven't seen before, and to put some cognitive distance between the student and their K-12 education. Skill at reading and writing should be built through the history curriculum.

Second year: Data science, more math if necessary, evolutionary biology (perhaps with an emphasis on hominid evolution), basic philosophy (focusing on general theory rather than specific viewpoints), more literature and history. We're building on the subjects introduced in the first year, but still staying mostly theoretical.

Third year: Economics, cognitive science, philosophy (at this level, students start reading primary sources), more literature and history. At this point you'd start learning the literature and history of your native language. You're starting to specialize, and to lay the groundwork for engaging with contemporary culture on an educated level.

Fourth year: More economics, political science, recent history, cultural studies (e.g. film, contemporary literature, religion).

Comment author: Lumifer 02 December 2014 01:04:08AM 6 points [-]

Fifth year: spent unemployed and depressed because of all the student debt and no marketable skills.

This is a curriculum for future philosopher-kings who never have to worry about such mundane things as money.

Comment author: Nornagest 02 December 2014 01:04:58AM *  1 point [-]

"Core curriculum" generally means "what you do that isn't your major". Marketable skills go there, not here; it does no one any good to produce a crop of students all of whom have taken two classes each in physics, comp sci, business, etc.

Comment author: Evan_Gaensbauer 03 December 2014 03:10:05AM 1 point [-]

What counts as a 'marketable skill', or even what would be the baseline assumption of skill for becoming a fully and generally competent adult in twenty-first century society, might be very different from what was considered skill and competence in society 50 years ago. Rather than merely updating a liberal education as conceived in the Post-War era, might it make sense to redesign the liberal education from scratch? Like, does a Liberal Education 2.0 make sense?

What skills or competencies aren't taught much in universities yet, but are ones everyone should learn?

Comment author: cameroncowan 07 December 2014 09:18:55AM 1 point [-]

Perhaps we need to re-think what jobs and employment look like in the 21st century and build from there?

Comment author: NancyLebovitz 03 December 2014 12:49:29PM *  1 point [-]

Persuasive writing and speaking. Alternatively, interesting writing and speaking.

Comment author: Lumifer 02 December 2014 01:11:35AM 3 points [-]

If you count the courses you suggest, there isn't much room left for a major.

I think a fruitful avenue of thought here would be to consider higher (note the word) education in its historical context. Universities are very traditional places and historically they provided the education for the elite. Until historically recently education did not involve any marketable skills at all -- its point was, as you said, "engaging with contemporary culture on an educated level".

Comment author: ChristianKl 02 December 2014 03:34:54PM 1 point [-]

foundational biology; evolutionary biology

What do you mean with those terms?

Understanding the principle of evolution is useful but I don't see why it needs a whole semester.

Comment author: FrameBenignly 02 December 2014 01:40:23AM 2 points [-]

1st year: 5 / 2nd year: 7 / 3rd year: 5 / 4th year: 4. That's over half their classes. I also count that 14 of those 21 classes are in the social sciences or humanities, which seems rather strange after you denigrated those fields. Now the big question: how much weight do you put on the accuracy of this first draft?

Comment author: Azathoth123 08 December 2014 03:37:09AM *  0 points [-]

Classics is the traditional solution to the latter and I think it's still a pretty good one, but now that we can't assume knowledge of Greek or Latin, any other culture at a comparable remove would probably work as well.

Um, the reason for studying Greek and Latin is not just because they're a far-removed culture. It's also because they're the cultures which are the memetic ancestors of the memes that we consider the highest achievements of our culture, e.g., science, modern political forms.

Also this suffers from the problem of attempting to go from theoretical to practical, which is the opposite of how humans actually learn. Humans learn from examples, not from abstract theories.

Comment author: Azathoth123 08 December 2014 03:48:37AM 1 point [-]

Here is Eliezer's post on the subject.

Comment author: Evan_Gaensbauer 03 December 2014 02:59:32AM *  1 point [-]

I've been thinking a lot about undergraduate core curriculum lately. What should it include?

I just want to point out for the record that if we're discussing a core curriculum for undergraduate education, I figure it would be even better to get such a core curriculum into the regular, secondary schooling system that almost everyone goes through. Of course, in practice, implementing that would require an overhaul of the secondary schooling system, which seems much more difficult than changing post-secondary education. The reason is probably that changing the curriculum for post-secondary education, or at least for one post-secondary institution, is easier: there is less bureaucratic deadweight, a greater variety of choice, and nimbler mechanisms in place for instigating change. So, I understand where you're coming from in your original comment above.

Comment author: Alsadius 02 December 2014 05:56:33AM 2 points [-]
  • History illuminates the present. A lot of people care about it, a lot of feuds stem from it, and a lot of situations echo it. You can't understand the Ukrainian adventures Putin is going on without a) knowing about the collapse of the Soviet Union to understand why the Russians want it, b) knowing about the Holodomor to understand why the Ukrainians aren't such big fans of Russian domination, and arguably c) knowing about the mistakes the west made with Hitler, to get a sense of what we should do about it.

  • History gives you a chance to learn from mistakes without needing to make them yourself.

  • History is basically a collection of humanity's coolest stories. How can you not love that?

Comment author: ChristianKl 02 December 2014 01:03:12PM 0 points [-]

History gives you a chance to learn from mistakes without needing to make them yourself.

Given how hard it is to establish causality, history, where you don't have much of the relevant information and there is a lot of motivated reasoning going on, is often a bad source for learning.

Comment author: Alsadius 02 December 2014 03:51:19PM 1 point [-]

Which is better - weak evidence, or none?

Comment author: RowanE 02 December 2014 05:01:40PM -2 points [-]

Sometimes none, if the source of the evidence is biased and you're a mere human.

Comment author: Alsadius 02 December 2014 08:23:20PM 1 point [-]

There are unbiased sources of evidence now?

Comment author: ChristianKl 03 December 2014 01:25:03PM -1 points [-]

That question doesn't have anything to do with the claim that you can make someone less informed by giving them biased evidence.

Comment author: Lumifer 03 December 2014 04:01:17PM 1 point [-]

An interesting question. Let me offer a different angle.

You don't have weak evidence. You have data. The difference is that "evidence" implies a particular hypothesis that the data is evidence for or against.

One problem with being in love with Bayes is that the very important step of generating hypotheses is underappreciated. Notably, if you don't have the right hypothesis in the set of hypotheses that you are considering, all the data and/or evidence in the world is not going to help you.

To give a medical example: if you are trying to figure out what causes ulcers and you are looking at whether the evidence points at diet, stress, or genetic predisposition, well, you are likely to find lots of weak evidence (and people actually did). Unfortunately, ulcers turned out to be a bacterial disease, and all that evidence actually meant nothing.

Another problem with weak evidence is that "weak" can be defined as evidence that doesn't move you away from your prior. And if you don't move away from your prior, well, not much has changed, has it?

Comment author: Alsadius 04 December 2014 05:41:54PM 1 point [-]

"Weak" means that it doesn't change your beliefs very much - if the prior probability is 50%, and the posterior probability is 51%, calling it weak evidence seems pretty natural. But it still helps improve your estimates.

Comment author: FrameBenignly 02 December 2014 06:10:40AM 5 points [-]

How useful is knowing about Ukraine to the average person? What percentage of History class will cover things which are relevant? Which useful mistakes to avoid does a typical History class teach you about?

Comment author: Alsadius 02 December 2014 06:22:58AM 2 points [-]

1) Depends how political you are. I'm of the opinion that education should at least give people the tools to be active in democracy, even if they don't use them, so I consider at least a broad context for the big issues to be important.

2) Hard to say - I'm a history buff, so most of my knowledge is self-taught. I'd have to go back and look at notes.

3) Depends on the class. I tend to prefer the big-picture stuff, which is actually shockingly relevant to my life (not because I'm a national leader, but because I'm a strategy gamer), but there are more than enough historians who are happy to teach you about cultural dynamics and popular movements. You think popular music history might help someone who's fiddling with a bass guitar?

Comment author: [deleted] 02 December 2014 04:36:52AM 2 points [-]

Undergraduate core curriculum where, for whom, and for what purposes?

Comment author: ChristianKl 02 December 2014 12:53:17AM -1 points [-]

I think the idea of a core curriculum that contains things such as history is awful. Diversity is pretty useful.

Business in general is useful, but few of the relevant skills are well learned via lectures. Being able to negotiate is a useful business skill.

Comment author: Punoxysm 01 December 2014 07:37:31PM 1 point [-]

I think history and the softer social sciences / humanities can, if taught well, definitely improve your ability to understand and analyze present-day media and politics. They can improve your qualitative appreciation of works of art, help you understand journalistic works on their own terms and in their context instead of taking them at face value, and help you read and write better.

They can also provide specific cultural literacy, which is useful for your own qualitative appreciation as well as some status things.

I had a pretty shallow understanding of a lot of political ideas until I took a hybrid history/philosophy course that was really excellently taught. It allowed me to read a lot of political articles more deeply and understand their motivations and context and the core academic ideas they were built around.

That last part, seeing theses implicitly referenced in popular works, is pretty neat.

Comment author: ChristianKl 02 December 2014 12:52:28AM 1 point [-]

It allowed me to read a lot of political articles more deeply and understand their motivations and context

How do you know that you understand motivations of political articles better? Are you able to predict anything politically relevant that you couldn't have predicted beforehand?

Comment author: Punoxysm 02 December 2014 01:53:27AM *  -1 points [-]

How do you know that you understand motivations of political articles better? Are you able to predict anything politically relevant that you couldn't have predicted beforehand?

Concretely, I can often tell if the article writer is coming from a particular school of thought or referencing a specific thesis, then interpret jargon, fill in unstated assumptions, see where they're deviating or conforming to that overarching school of thought. This directly enhances my ability to extrapolate to what other political views they might have and understand what they are attempting to write, and who their intended audience is.

As far as predicting the real world, that's tough. These frameworks of thought are in constant competition with one another. They are more about making normative judgments than predictive ones. The political theories that I believe have the most concrete usefulness are probably those that analyze world affairs in terms of neocolonialism, in part because those theories directly influence a ton of intellectuals but also in part because they provide a coherent explanation of how the US has managed its global influence in the past and (I predict) how it will do so in the future.

I can also do things like more fully analyze the factors behind US police and African-American relations, or how a film will influence a young girl.

Comment author: ChristianKl 02 December 2014 09:51:25AM 1 point [-]

These frameworks of thought are in constant competition with one another.

That reminds me of the Marxist who can explain everything with the struggle of the workers against the capitalists.

I can also do things like more fully analyze the factors behind US police and African-American relations, or how a film will influence a young girl.

That sentence looks like your study did damage. You shouldn't come out of learning about politics believing that you can fully understand the factors behind anything political.

Comment author: Punoxysm 02 December 2014 05:39:53PM -1 points [-]

That reminds me of the Marxist who can explain everything with the struggle of the workers against the capitalists.

I am referring to the normative parts of the frameworks. For instance, feminism makes many normative statements. It is a project dedicated to changing certain policies and cultural attitudes. The eventual influence of these frameworks is based largely on their acceptance.

Comment author: ChristianKl 02 December 2014 10:20:32PM 3 points [-]

People make statements. Abstract intellectual labels don't. People have all sorts of personal goals. If one sees everything as a battle of frameworks, then a lot about dealing with individual people is lost.

Additionally you can also miss when new thoughts come along that don't fit into your existing scheme. A lot of people coming from the humanities for example have very little understanding of the discourse of geeks.

The political effects of getting people to meditate and be in touch with their bodies are also unknown unknowns for a lot of people trained in the standard political ways of thinking.

Comment author: Punoxysm 03 December 2014 07:17:00AM -2 points [-]

People make statements. Abstract intellectual labels don't. People have all sorts of personal goals. If one sees everything as a battle of frameworks, then a lot about dealing with individual people is lost.

I don't have much to comment on this except that many academics in the humanities level charges of dehumanization and ignoring individual agency against a lot of works in economics or quantitative sociology and political science (ex. they might criticize an economics paper that attributes civil unrest to food shortages without discussing how it might originate in individual dissatisfaction with oppression and corruption). So it's ironic if I've done the same disservice to those academics.

Additionally you can also miss when new thoughts come along that don't fit into your existing scheme. A lot of people coming from the humanities for example have very little understanding of the discourse of geeks.

I don't really know what you're referring to. But if you're talking LW-style memes, I think that it is generally true that futurism isn't of much interest to many in the humanities. And to a great degree it is orthogonal to what they do. A scenario like the singularity may not be, in that it's not orthogonal to anyone or anything, but I haven't had many conversations about it with those in the humanities.

The political effects of getting people to meditate and be in touch with their bodies are also unknown unknowns for a lot of people trained in the standard political ways of thinking.

What are you thinking of?

But I am sure there are academics who can readily discuss the effects of the fall of physically demanding labor, the effect of physical rigors on those in the military, or the interaction of all flavors of Buddhism with politics.

Comment author: ChristianKl 03 December 2014 12:36:41PM *  2 points [-]

(ex. they might criticize an economics paper that attributes civil unrest to food shortages without discussing how it might originate in individual dissatisfaction with oppression and corruption).

Dissatisfaction with oppression and corruption in itself doesn't have much to do with individual people being actors. Standard feminist theory suggests that social groups are oppressed.

And to a great degree it is orthogonal to what they do. A scenario like the singularity may not be, in that it's not orthogonal to anyone or anything, but I haven't had many conversations about it with those in the humanities.

As far as LW ideas go, prediction markets do have political implications. X-risk prevention does have political implications.

CFAR mission also mentions that they want to change how we decide which legislation to pass.

A bunch of geeks are working on getting liquid democracy to work.

Wikileaks and its actions do have political effects.

Sweden recently changed their Freedom of the Press laws to make it clear that merely having a server in Sweden is not enough to benefit from Swedish press protections, because Julian Assange's Wikileaks tried to use Swedish press protection to threaten people who tried to uncover Wikileaks' sources.

In Germany, a professor of sociology recently wrote a book arguing that Quantified Self is driven by the belief that it's possible to know everything. It isn't: the kind of geeks, like New Atheists, who want everything to be evidence-based and who believe they can know everything generally reject QS for not doing blinded and controlled trials. He simply treated all geeks the same way and therefore missed the heart of the issue.

How much have political scientists written about the Crypto Wars and, in Cory Doctorow's words, the recent war on general computing?

Estonia had to be defended against cyber war by a loose collection of actors, where the stronger players likely weren't government-associated. It's also quite likely that we live in a time when a nongovernmental force is strong enough to start such a war.

The NSA is geeky enough that its chief, Gen. Keith Alexander, modeled his office after the Star Trek bridge.

Jeff Bezos bought the Washington Post. Pierre Omidyar, who made his money with eBay, sponsored First Look Media. Those are signs that more and more political power is going to geeks.

What are you thinking of?

I'm just pointing to a political idea to which you probably aren't exposed.

the effect of physical rigors on those in the military

Military training is not supposed to build empathy but the opposite. Soldiers are trained to ignore bodily feelings.

Comment author: gjm 02 December 2014 02:35:33PM 2 points [-]

I can also do things like more fully analyze [...]

You shouldn't come out [...] believing that you can fully understand [...]

I think the difference I highlighted is an important one.

Comment author: Nornagest 01 December 2014 07:54:59PM 5 points [-]

I think this is true... but also that "taught well" is a difficult and ideologically fraught criterion. The humanities and most (but not all; linguistics, for example, is a major exception) of the social sciences are not generally taught in a value-neutral way, and subjective quality judgments often have as much to do with finding a curriculum amenable to your values as with the actual quality of the curriculum.

Unfortunately, the fields most relevant to present-day media and politics are also the most value-loaded.

Comment author: Metus 01 December 2014 07:34:15PM 5 points [-]

Nerds tend to undervalue anything that is not math-heavy or easily quantifiable.

Comment author: Lumifer 01 December 2014 07:18:38PM 11 points [-]

The first question is what goals should undergraduate education have.

There is a wide spectrum of possible answers ranging from "make someone employable" to "create a smart, well-rounded, decent human being".

There is also the "provide four years of cruise-ship fun experience" version, too...

Comment author: FrameBenignly 01 December 2014 07:45:09PM *  2 points [-]

Check out page 40 of this survey.

In order of importance: To be able to get a better job 86% / To learn more about things that interest me 82% / To get training for a specific career 77% / To be able to make more money 73% / To gain a general education and appreciation of ideas 70% / To prepare myself for graduate or professional school 61% / To make me a more cultured person 46%

Comment author: Lumifer 01 December 2014 08:08:25PM *  8 points [-]

First, undergrad freshmen are probably not the right source for wisdom about what a college should be.

Second, I notice a disturbing lack of such goals as "go to awesome parties" and "get laid a lot" which, empirically speaking, are quite important to a lot of 18-year-olds.

Comment author: RowanE 02 December 2014 05:23:45PM -1 points [-]

In systems like the US, where undergraduate freshmen are basically customers paying a fee, I expect their input on what they want and expect the product they're purchasing to be like should be extremely relevant.

Comment author: polymathwannabe 03 December 2014 03:07:58PM -1 points [-]

Indeed, customers are usually expected to be informed about what they're buying. But in the case of education, where what the "customer" is buying is precisely knowledge, a freshman's opinion on what education should contain may be less well informed than, for example, a grad student's opinion.

Comment author: Lumifer 02 December 2014 05:43:26PM 2 points [-]

Yes, that is the "provide four years of cruise-ship fun experience" version I mentioned. The idea that it's freshmen who are purchasing college education also needs a LOT of caveats.

Comment author: FrameBenignly 01 December 2014 08:49:43PM -1 points [-]

Exactly which courses do you imagine do the most to help students go to the most awesome parties and get laid a lot?

Comment author: Alsadius 02 December 2014 05:57:11AM 5 points [-]

Ones with very little homework and a good gender ratio.

Comment author: Lumifer 01 December 2014 09:07:53PM 4 points [-]

The point is not that they need courses to help them with that. The point is that if you are accepting freshman desires as your basis for shaping college education, you need to recognize that surveys like the one you linked to present a very incomplete picture of what freshmen want.

Comment author: FrameBenignly 01 December 2014 09:18:24PM 0 points [-]

If the desires you named are irrelevant to the discussion at hand, then can you please name the desires that you think are relevant which are not encapsulated by the survey and explain how they are relevant to what classes students are taking? Also, who is the right source of wisdom about what a college should be?

Comment author: Lumifer 01 December 2014 09:30:12PM 1 point [-]

Also, who is the right source of wisdom about what a college should be?

For the bit of mental doodling that this thread is, the right source is you -- your values, your preferences, your prejudices, your ideals.

Comment author: MockTurtle 01 December 2014 11:09:03AM 6 points [-]

How do people who sign up for cryonics, or want to sign up for cryonics, get over the fact that if they died, there would no longer be a mind there to care about being revived at a later date? I don't know how much of it is morbid rationalisation on my part just because signing up for cryonics in the UK seems not quite as reliable or easy as in the US, but it still seems like a real issue to me.

Obviously, when I'm awake, I enjoy life, and want to keep enjoying life. I make plans for tomorrow, and want to be alive tomorrow, despite the fact that in between, there will be a time (during sleep) when I will no longer care about being alive tomorrow. But if I were killed in my sleep, at no point would I be upset: I would be unaware of it beforehand, and my mind would no longer be active to care about anything afterwards.

I'm definitely confused about this. I think the central confusion is something like: why should I be willing to spend effort and money at time A to ensure I am alive at time C, when I know that I will not care at all about this at an intermediate time B?

I'm pretty sure I'd be willing to pay a certain amount of money every evening to lower some artificial probability of being killed while I slept. So why am I not similarly willing to pay a certain amount to increase the chance I will awaken from the Dreamless Sleep? Does anyone else think about this before signing up for cryonics?

Comment author: CBHacking 01 December 2014 12:09:59PM *  6 points [-]

Short version: I adjusted my sense of "self" until it included all my potential future selves. At that point, it becomes literally a matter of saving my life, rather than of being re-awakened one day.

It didn't actually take much for me to take that leap when it came to cryonics. The trigger for me was "you don't die and then get cryopreserved, you get cryopreserved as the last-ditch effort before you die". I'm not suicidal; if you ask any hypothetical instance of me if they want to live, the answer is yes. By extending my sense of continuity into the not-quite-really-dead-yet instance of me, I can answer questions for that cryopreserved self: "Yes, of course I want you to perform the last-ditch operation to save my life!"

If you're curious: My default self-view for a long time was basically "the continuity that led to me is me, and any forks or future copies/simulations aren't me", which tended toward a somewhat selfish view where I always viewed the hypothetical most in-control version (call it "CBH Alpha") as myself. If a copy of me was created, "I" was simply whichever one I wanted to be (generally, the one responsible for choosing to create the new instance or doing the thing that the pre-fork copy wanted to be doing). It took me a while to realize how little sense that made; I always am the continuity that led to me, and am therefore whatever instance of CBH that you can hypothesize, and therefore I can't pick and choose for myself. If anything that identifies itself as CBH can exist after any discontinuity from CBH Alpha, I am (and need to optimize for) all those selves.

This doesn't mean I'm not OK with the idea of something like a transporter that causes me to cease to exist at one point and begin again at another point; the new instance still identifies as me, and therefore is me and I need to optimize for him. The old instance no longer exists and doesn't need to be optimized for. On the other hand, this does mean I'm not OK with the idea of a machine that duplicates myself for the purpose of the duplicate dying, unless it's literally a matter of saving any instance of myself; I would optimize for the benefit of all of me, not just for the one who pushed the button.

I'm not yet sure how I'd feel about a "transporter" which offered the option of destroying the original, but didn't have to. The utility of such a thing is obviously so high I would use it, and I'd probably default to destroying the original just because I don't feel like I'm such a wonderful benefit to the world that there needs to be more of me (so long as there's at least one). But when I reframe the question from "why would I want to not be transported (i.e. to go on experiencing life here instead of wherever I was being sent)?" to "why would I want to have fewer experiences than I could (i.e. only experience the destination of the transporter, instead of simultaneously experiencing both)?", I feel like I'd want to keep the original. If we alter the scenario just slightly, such that the duplicate is created as a fork and the fork is then optionally destroyed, I don't think I would ever choose destruction, except in a scenario along the lines of "painless disintegration or death by torture" where the torture wasn't going to last long (no rescue opportunity) but I'd still experience a lot of pain.

These ideas largely came about from various fiction I've read in the last few years. Some examples that come to mind:

Comment author: MockTurtle 02 December 2014 10:12:30AM -1 points [-]

I remember going through a similar change in my sense of self after reading through particular sections of the Sequences - specifically, thinking that logically, I have to identify with spatially (or temporally) separated 'copies' of me. Unfortunately it doesn't seem to help me deal with this dilemma in quite the same way it helps you. To me, it seems that if I am willing to press a button that will destroy me here and recreate me at my desired destination (which I believe I would be willing to do), the question 'what if the teleporter malfunctions and you don't get recreated at your destination? Is that a bad thing?' is almost without meaning, as there would no longer be a 'me' to evaluate the utility of such an event. I guess the core confusion is that I find it hard to evaluate states of the universe where I am not conscious.

As pointed out by Richard, this is probably even more absurd than I realised, as I am not 'conscious' of all my desires at all times, and thus I cannot go down this road of 'if I do not currently care about something, does it matter?'. I have to reflect on this some more and see if I can internalise a more useful sense of what matters and when.

Thanks a lot for the fiction examples, I hope to read them and see if the ideas therein cause me to have one of those 'click' moments...

Comment author: RowanE 01 December 2014 06:06:57PM -1 points [-]

Although in your case in particular it's probably justified by starting off with very confused beliefs on the subject and noticing the mess they were in, at least as far as suggesting it to other people goes, I don't understand how or why you'd want to change a sense of self like that. If identity is even a meaningful thing to talk about, then there's a true answer to the question "which beings can accurately be labelled 'me'?", and having the wrong belief about the answer can mean you step on a transporter pad and are obliterated. If I believe that transporters are murder-and-clone machines, then I also believe that self-modifying to believe otherwise is suicidal.

Comment author: jkaufman 01 December 2014 12:05:28PM *  17 points [-]

Say you're undergoing surgery, and as part of this they use a kind of sedation where your mind completely stops. Not just stops getting input from the outside world, no brain activity whatsoever. Once you're sedated, is there any moral reason to finish the surgery?

Say we can run people on computers, we can start and stop them at any moment, but available power fluctuates. So we come up with a system where when power drops we pause some of the people, and restore them once there's power again. Once we've stopped someone, is there a moral reason to start them again?

My resolution to both of these cases is that I apparently care about people getting the experience of living. People dying matters in that they lose the potential for future enjoyment of living, their friends lose the enjoyment of their company, and expectation of death makes people enjoy life less. This makes death different from brain-stopping surgery, emulation pausing, and also cryonics.

(But I'm not signed up for cryonics because I don't think the information would be preserved.)

Comment author: MockTurtle 02 December 2014 10:31:29AM -1 points [-]

Thinking about it this way also makes me realise how weird it feels to have different preferences for myself as opposed to other people. It feels obvious to me that I would prefer to have other humans not cease to exist in the ways you described. And yet for myself, because of the lack of a personal utility function when I'm unconscious, it seems like the answer could be different - if I cease to exist, others might care, but I won't (at the time!).

Maybe one way to think about it more realistically is not to focus on what my preferences will be then (since I won't exist), but on what my preferences are now, and somehow extend that into the future regardless of the existence of a personal utility function at that future time...

Thanks for the help!

Comment author: RichardKennaway 01 December 2014 12:00:40PM 2 points [-]

Obviously, when I'm awake, I enjoy life, and want to keep enjoying life.

Perhaps that is not so obvious. While you are awake, do you actually have that want while it is not in your attention? Which is surely most of the time.

If you are puzzled about where the want goes while you are asleep, should you also be puzzled about where it is while you are awake and oblivious to it? Or looking at it the other way, if the latter does not puzzle you, should the former? And if the former does not, should the Long Sleep of cryonics?

Perhaps this is a tree-falls-in-forest-does-it-make-a-sound question. There is (1) your experience of a want while you are contemplating it, and (2) the thing that you are contemplating at such moments. Both are blurred together by the word "want". (1) is something that comes and goes even during wakefulness; (2) would seem to be a more enduring sort of thing that still exists while your attention is not on it, including during sleep, temporarily "dying" on an operating table, or, if cryonics works, being frozen.

Comment author: MockTurtle 02 December 2014 10:22:57AM 0 points [-]

I think you've helped me see that I'm even more confused than I realised! It's true that I can't go down the road of 'if I do not currently care about something, does it matter?' since this applies when I am awake as well. I'm still not sure how to resolve this, though. Do I say to myself 'the thing I care about continues to exist, or potentially exist, even when I do not actively care about it, and I should therefore act right now as if I will still care about it even when I stop due to inattention or unconsciousness'?

I think that seems like a pretty solid thing to think, and is useful, but when I say it to myself right now, it doesn't feel quite right. For now I'll meditate on it and see if I can internalise that message. Thanks for the help!

Comment author: DanielFilan 01 December 2014 09:10:06PM *  4 points [-]

Animal Charity Evaluators have updated their top charity recommendations, adding Animal Equality to The Humane League and Mercy for Animals. Also, their donation-doubling drive is nearly over.

Comment author: ZankerH 01 December 2014 10:28:46PM 6 points [-]

Why would an effective altruist (or anyone wanting their donations to have a genuine beneficial effect) consider donating to animal charities? Isn't the whole premise of EA that everyone should donate to the highest utilon/$ charities, all of which happen to be directed at helping humans?

Just curiosity from someone uninterested in altruism. Why even bring this up here?

Comment author: jkaufman 01 December 2014 11:05:00PM *  12 points [-]

We don't all agree on what a utilon is. I think a year of human suffering is very bad, while a year of animal suffering is nearly irrelevant by comparison, so I think charities aimed at helping humans are where we get the most utility for our money. Other people's sense of the relative weight of humans and animals is different, however, and some value animals about the same as humans or only somewhat below.

To take a toy example, imagine there are two charities: one that averts a year of human suffering for $200 and one that averts a year of chicken suffering for $2. If I think human suffering is 1000x as bad as chicken suffering and you think human suffering is only 10x as bad, then even though we both agree on the facts of what will happen in response to our donations, we'll give to different charities because of our disagreement over values.
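The toy example's arithmetic can be sketched directly. A minimal illustration using only the hypothetical numbers above ($200 and $2 per suffering-year averted, with chicken suffering as the unit weight):

```python
def best_charity(human_weight, cost_human=200, cost_chicken=2):
    """Compare weighted suffering-years averted per dollar (chicken weight = 1)."""
    human_per_dollar = human_weight / cost_human
    chicken_per_dollar = 1 / cost_chicken
    return "human" if human_per_dollar > chicken_per_dollar else "chicken"

print(best_charity(1000))  # the 1000x weighting favors the human charity
print(best_charity(10))    # the 10x weighting favors the chicken charity
```

With these costs the crossover sits at a weight of 100: agree on every empirical fact, and the donation still flips entirely on where your values put that ratio.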

In reality, however, it's more complicated. The facts of what will happen in response to a donation are uncertain even in the best of times, but because a lot of people care about humans the various ways of helping them are much better researched. GiveWell's recommendations are all human-helping charities because of a combination of "they think humans matter more" and "the research on helping humans is better". Figuring out how to effectively help animals is hard, and while ACE has good people working on it, they're a small organization with limited funding and their recommendations are still much less robust than GiveWell's.

Comment author: shminux 04 December 2014 08:25:00PM 6 points [-]

What cause would an NRx EA donate to?

Comment author: Azathoth123 08 December 2014 04:18:47AM 1 point [-]

Sarah Hoyt isn't quite NRx, but her recent (re)post here seems relevant.

In particular, the old distinction between deserving and undeserving poor.

Comment author: Azathoth123 06 December 2014 11:11:29PM 0 points [-]

NRx's are generally not utilitarians.

Comment author: shminux 07 December 2014 03:08:58AM 2 points [-]

I've met at least one claiming he is.

Comment author: IlyaShpitser 06 December 2014 04:41:41PM *  1 point [-]

The Austrian "Iron Ring" party. Restore the Hapsburg Empire!


Yes, I am aware that there are things to understand about the crazy straw design world. :)

Comment author: ZankerH 04 December 2014 09:50:50PM 3 points [-]

The most coherent proposal I've heard so far is applying being TRS at the polling place to charity: The principle of optimising your donations for cultural-marxist outrage.

Comment author: bramflakes 04 December 2014 08:29:20PM *  6 points [-]

Depends on what kind of NRx. There isn't a single value system shared among them.

The popular trichotomy is "Techno-commercialist / Theonomist / Ethno-nationalist" - I don't know about the first two, but the ethno-nationalists would probably disagree with a lot of GiveWell's suggestions.

Comment author: skeptical_lurker 05 December 2014 05:01:28PM 3 points [-]

ethnonationalists would probably disagree with a lot of Givewell's suggestions.

Not uniformly, I think - Japan is an Ethno-nationalist state, and also used to be the world's largest supplier of foreign aid.

Comment author: [deleted] 06 December 2014 07:48:02PM 3 points [-]

Ethno-nationalists certainly have no problem with geopolitics or mutually-beneficial investment, and foreign aid can be useful there.

Comment author: JoshuaZ 03 December 2014 12:16:58AM 6 points [-]

New research suggests that life may be hard to come by on certain classes of planets even if they are in the habitable zone, since they will lose their water early on. See here. This is noteworthy in that in the last few years almost all other research has pointed towards astronomical considerations not being a major part of the Great Filter, and this is a suggestion that slightly more of the Filter may be in our past.

Comment author: Ritalin 07 December 2014 06:10:14PM 3 points [-]

Court OKs Barring High IQs for Cops

An aspiring cop got rejected for scoring too high on an IQ test.

I cannot begin to understand why they would do that.

Comment author: Metus 04 December 2014 01:11:01AM 1 point [-]

Say I have have a desktop with a monitor, a laptop, a tablet and a smart phone. I am looking for creative ideas on how to use them simultaneously, for example when programming to use the tablet for displaying documentation and having multiple screens via desktop computer and laptop, while the smart phone displays some tertiary information.

Comment author: eeuuah 10 December 2014 07:25:35AM 1 point [-]

The biggest hangup I've found in using multiple computers simultaneously is copy-pasting long strings. I can chat them to myself, but it's still slightly more awkward than I'd like.

Otherwise, Sherincall is pretty on point.

Comment author: Sherincall 04 December 2014 08:07:57PM 3 points [-]

Unplug the desktop monitor and plug it in the laptop. Open some docs on the tablet. Keep your todo list on the phone.

Or just get another monitor or two and use that. In my experience, you never need more than 3 monitors at once (for one computer, of course).

Comment author: Manfred 04 December 2014 12:52:17AM *  3 points [-]

Is there a better place than LW to put a big post on causal information, anthropics, being a person as an event in the probability-theory sense, and decision theory?

I'm somewhat concerned that such things are a pollutant in the LW ecosystem, but I don't know of a good alternative.

Comment author: IlyaShpitser 05 December 2014 09:41:04PM *  2 points [-]

Manfred, I think your posts on Sleeping Beauty, etc. are fine, people just may not be able to follow you or have anything to contribute.

Comment author: gjm 04 December 2014 10:36:03AM 7 points [-]

Why would it be a pollutant in the LW ecosystem? This sounds pretty central in the space of things LW people are interested in; what am I missing? (Are you concerned that it would be too elementary for LW? that it might be full of mistakes and annoy or mislead people? that its topic isn't of interest to LW readers? ...)

What's the intended audience? What's it for? (Introducing ideas to people who don't know them? Cutting-edge research? Thinking aloud to get your ideas in order? ...)

Comment author: shminux 03 December 2014 06:22:54PM *  8 points [-]

My feeling was that SSC is getting close to LW in terms of popularity, but Alexa says otherwise: SSC hasn't yet cracked the top 100k sites (LW is ranked 63,755) and has ~600 links to it vs ~2000 for LW. Still very impressive for the part-time hobby of one overworked doctor. Sadly, 20% of searches leading to SSC are for heartiste.

My suspicion is that SSC would get a lot more traffic if its lousy WP comment system was better, but then Scott is apparently not motivated by traffic, so there is no incentive for him to improve it.

Comment author: ChristianKl 03 December 2014 07:53:58PM 2 points [-]

My suspicion is that SSC would get a lot more traffic if its lousy WP comment system was better, but then Scott is apparently not motivated by traffic, so there is no incentive for him to improve it.

Why do you think that's the case? Are there any cases of a blogger getting much more popular after switching to a different comment system?

And what comment system would you advocate?

Comment author: shminux 03 December 2014 09:18:50PM 1 point [-]

Why do you think that's the case? Are there any cases of blogger getting much more popular after switching to a different comment system?

It's a good question; maybe it would not. I am not aware of any A/B testing done on that. I simply go by the principle of trivial inconveniences.

And what comment system would you advocate?

Scott is against reddit-style karma system, so I'd go for Scott marking comments he finds interesting, at a minimum.

Additionally, comment formatting and presentation which improves nesting and visibility would be nice. Reddit/LW is an OK compromise, userfriendly.org is better in terms of seeing more threads at a glance.

Comment author: ChristianKl 03 December 2014 11:31:34PM *  -1 points [-]

Scott is against reddit-style karma system

There are many reasons against using the reddit code base. While it's open source in theory it's not structured in a way that allows easy updating.

Is there any solution that would be plug&play for a wordpress blog that you would favor Scott implementing?

Coding something himself would be more than a trivial inconvenience.

I also think you underrate the time cost of comment moderation. Wanting to be a blogger and wanting to moderate a forum are two different goals.

Comment author: NancyLebovitz 03 December 2014 07:20:38PM 1 point [-]

The amount of comments can be rather overwhelming as it is. Do you want a larger SSC community, for the ideas to get a wider audience, or what?

Comment author: shminux 03 December 2014 08:14:11PM 4 points [-]

It is overwhelming because it is poorly formatted and presented, not because of the volume. There are plenty of forums with better comment formatting, like reddit, userfriendly.org, or slashdot. Lack of comment ranking does not help readability, either.

Comment author: NancyLebovitz 03 December 2014 08:18:07PM *  1 point [-]

I find that the ~new~ marker on new comments and the dropdown list of new comments are enough to get by with-- for me, the quantity really is the overwhelming aspect on the more popular posts.

Comment author: shminux 04 December 2014 04:29:31AM 2 points [-]

Other forums have lots more comments, yet are easier to navigate through.

Comment author: Lumifer 03 December 2014 06:29:28PM 9 points [-]

SSC would get a lot more traffic

SSC getting a lot more traffic might change it and not necessarily for the better.

Comment author: Ritalin 03 December 2014 04:14:41PM -1 points [-]

Self Help Books

I'm looking to buy a couple audiobooks from Amazon. Any good recommendations?

Comment author: NancyLebovitz 03 December 2014 07:18:09PM 2 points [-]

This is a filter rather than a recommendation, but read the reviews to find out whether people used the book rather than just finding it a pleasant read.

What are you hoping to improve about your life?

Comment author: Ritalin 04 December 2014 11:31:15AM *  2 points [-]

Right now I think my two weakest points are:

  • Akrasia: I have a lot of trouble keeping a proper sleeping schedule and not slipping into a night-owl lifestyle, going to the gym as often as I should, and keeping my diet; depending on the circumstances, I also have a lot of trouble keeping myself motivated, organized, and productive.
  • Relatively poor social skills. They're not nearly as bad as they once were, but I still find myself somewhat clumsy and awkward, in the way high IQ people tend to be. Out of synch. Having different priorities than the folks around me. Coming up with stuff out of left field. Spacing out, being prompted to explain, retelling a train of thought that to them seems convoluted and to me seems natural. Having a terrible time maintaining proper etiquette, especially table manners.

Either I'm put down as "crazy" or put on a pedestal as a "genius", but I'm always put aside, and have very few friends. Love life is similarly disastrous, but I don't think there's a book for people who fall in love too hard, too soon, and too easily.

Comment author: NancyLebovitz 04 December 2014 02:11:28PM 1 point [-]

Tentative suggestion: Maybe you need to live somewhere where you have more access to smart people.

Comment author: Ritalin 04 December 2014 02:59:08PM 0 points [-]

They're a bit hard to come by, and, let's face it, we can be hard to live with even among ourselves.

Comment author: ChristianKl 03 December 2014 03:28:47PM 5 points [-]

What exactly causes a person to stalk other people? Is there research that investigates the question when people start to stalk and when they don't?

To what extent is getting a stalker a risk worth thinking about before it's too late?

Comment author: Viliam_Bur 03 December 2014 04:49:09PM *  9 points [-]

No research, just my personal opinion: borderline personality disorder.

alternating between high positive regard and great disappointment

First the stalker is obsessed by the person because the target is the most awesome person in the universe. Imagine a person who could give you infinitely many utilons, if they wanted to. Learning all about them and trying to befriend them would be the most important thing in the world. But at some moment, there is an inevitable disappointment.

Scenario A: The target decides to avoid the stalker. At the beginning the stalker believes it is merely a misunderstanding that can be explained, that perhaps they can prove their loyalty by persistence or something. But later they give up hope, or receive a sufficiently harsh refusal.

Scenario B: The stalker succeeds in befriending the target. But they are still not getting the infinite utilons, which they believe they should be getting. So they try to increase the intensity of the relationship to impossible levels, as if trying to become literally one person. At some moment the target refuses to cooperate, or is simply unable to cooperate in the way the stalker wants them to, but to the stalker even this seems like a spiteful refusal.

In both scenarios, the stalker now feels hurt and cheated, and wants revenge. Projecting their false beliefs onto the target, they believe the target has lied to them about the infinite utilons; they blame the target for starting this whole process, and for destroying the stalker's life. (In the next mood swing, the stalker may offer forgiveness to the target, if the target agrees to give them the infinite utilons now. Then they become angry again, etc.)

But maybe there are more possible mechanisms than this one. Also, my model does not explain why the stalker is targeting one specific person instead of multiple people, or everyone.

To what extent is getting a stalker a risk worth thinking about before it's too late?

I think it is worth thinking about, but I am not sure what specific advice to offer except for (a) avoiding everyone "weird", which seems like an overkill, and (b) using a pseudonym and other methods of protecting your privacy if you want to become even a bit famous.

I would certainly recommend to everyone who wants to become famous (as a blogger, singer, actor, etc.) to choose a pseudonym, stick to it, and never reveal anything personal. (Probably not even the city you live in; I would imagine that the idea that you are geographically distant would discourage most potential stalkers.)

Comment author: polymathwannabe 03 December 2014 04:58:41PM 2 points [-]

Imagine a person who could give you infinitely many utilons

they try to increase the intensity of the relationship to impossible levels, as if trying to become literally one person

This sounds eerily close to the mystical varieties of theistic religions.

Comment author: Lumifer 04 December 2014 04:31:50PM *  2 points [-]

I would certainly recommend to everyone who wants to become famous (as a bloger, singer, actor, etc.) to choose a pseudonym, stick to it, and never reveal anything personal.

The only anonymous celebrity I can think of is Bansky.

Staying anonymous is not compatible with becoming famous.

Comment author: Gondolinian 05 December 2014 04:02:36PM 1 point [-]

The only anonymous celebrity I can think of is Bansky.

*Banksy

Comment author: Lumifer 05 December 2014 04:07:16PM 2 points [-]

He's so anonymous I don't even know how to spell his (or maybe her) name! :-)

Comment author: Viliam_Bur 05 December 2014 12:01:29PM *  2 points [-]

I would guess most people become famous before they realize the advantage of anonymity, and then it's too late to start with a fresh name.

But it's also possible that it's simply not worth the effort, because when you become famous enough, someone will dox you anyway.

It could be interesting to know how much of an advantage (a trivial inconvenience for wannabe stalkers) a pseudonym provides when your real name can easily be found on Wikipedia, e.g. "Madonna". Or how big an emotional difference it makes to a potential stalker whether a famous blogger displays their photo on their blog or not.

My favorite anonymous person is B. Traven.

Comment author: Jayson_Virissimo 05 December 2014 03:27:53AM 7 points [-]

Staying anonymous is not compatible with becoming famous.

Satoshi Nakamoto is also famous and pseudonymous, but this conjunction is very rare IMO.

Comment author: ioshva 03 December 2014 08:40:36AM 4 points [-]

Apropos the "asking personally important questions of LW" posts, I have a question. I'm 30 and wondering what the best way is to swing a mid-career transition to computer science. Some considerations:

  • I already have some peripheral coding knowledge. I took two years of C back in high school, but probably forgot most of it by now. I do coding-ish stuff often like SQL queries or scripting batch files to automate tasks. Most code makes sense to me and I can write a basic FizzBuzz type algorithm if I look up the syntax.

  • I don't self-motivate very well. While I could probably teach myself a fair amount of code, without some sort of structure or project deadline, I would likely fail. If I tried to do this part-time, I would probably fail. (Also, I'm looking for a "clean break," such as it is, with my current, toxic job situation.) So I would think that I could either go to a bootcamp or go back to school.

  • Advantages to school: could defer my remaining loans and work part-time, degree would open more doors within my field (law) as well as outside it. Disadvantages: costs more in the long run, takes longer. Unknowns: post-bacc or MS? I can probably do well on the GRE, but my GPA was unimpressive, and light on math besides. It would have to be an MS program that worked with non-majors.

  • Advantages to bootcamp: much cheaper in the short run, over in a few months. Disadvantages: my savings would be drained by the tuition and interim living expenses; I would need to be damn sure of a job by the time I exited. Unknowns: which bootcamps are worthwhile? My city only has two: Coder Camps and Iron Yard. They appear to teach more or less totally different platforms.

Does anyone here have experience jumping the tracks to programming later in life? Did you take either of the above strategies, or neither? How did it work out, and what would you have done differently?

Comment author: sixes_and_sevens 03 December 2014 12:27:52PM 5 points [-]

Some salient questions:

1) What's your motivation for wanting to do this?

2) What's your current background/skill set?

3) Where in the world are you?

Comment author: ioshva 03 December 2014 04:13:37PM *  1 point [-]

I work on lots of large cases with complex subject matter (often source code itself) with reams of electronic haystacks that need to be sorted for needles. The closer my job is to coding, the more I enjoy it. I get satisfaction out of scripting mundane tasks. I like building and maintaining databases and coming up with absurdly specific queries to get what I need. I remember enjoying and being good at what programming I did do in high school. I am starting to get the creeping feeling that I took a wrong turn eight years ago.

I also feel somewhat stuck in my current position in patent law. Ordinarily step one would be to try a different environment to ensure it's not the workplace as opposed to the work. But most positions advertised in patent law demand an EE/CE/CS background, and I have a peripheral life science degree I use so little as to be irrelevant. I described my skill set as best I could in the parent post but right now it's just a cut above "extremely computer literate." I've dipped my toes but never found the time or motivation to dive (12 hour days kill the initiative).

Houston.

Comment author: shminux 03 December 2014 06:36:13PM 5 points [-]

Consider writing a simple Android or iOS app, such as Tetris, from scratch. This should not take very long and has intrinsic rewards built in, like seeing your progress on your phone and showing it off to your friends or prospective employers. You can also work on it during the small chunks of time available, since a project like that can be easily partitioned. Figure out which parts of getting it from the spec to publishing on the Play/App store you like and which you hate. Record your experiences and share them here once done.

Comment author: Evan_Gaensbauer 03 December 2014 02:29:00AM *  4 points [-]

Several weeks ago I wrote a heavily upvoted post called Don't Be Afraid of Asking Personally Important Questions on LessWrong. I've been thinking about a couple of things since I wrote that post.

  • What makes LessWrong a useful website for asking questions which matter to you personally is that there are lots of insightful people here with a wide knowledge base. However, for some questions, LessWrong might be too much, or the wrong kind of, monoculture to provide the best answers. Thus, for weird, unusual, or highly specific questions, there might be better discussion boards, or online communities, to query. In general, Quora might be best for some questions. Stack Overflow might be best for programming questions, Math Overflow might be best for math questions, and some subreddits best for asking questions on very specific topics. What I would like to do is generate a repository of the best websites for asking specialized or unusual questions across a variety of deep topics. I would turn this into another Discussion post on LessWrong.

  • Robin Hanson wrote:

    It seems obvious to me that almost no humans are able to force themselves to see honestly and without substantial bias on all topics. Even for the best of us, the biasing forces in and around us are often much stronger than our will to avoid bias. Because it takes effort to overcome these forces, we must choose our battles, i.e., we must choose where to focus our efforts to attend carefully to avoiding possible biases.

    He follows this up with the problems of being a 'rationalist', and suggestions for alternatives, in Don't Be A Rationalist.

Luke Muehlhauser goes over similar material on his own blog here. In short, the rationalist community can't or won't become expert in every subject it wants to extract information from, so it makes sense for it to defer to experts. If rationalists can't become experts themselves, identifying experts seems like the best strategy. This could be broken down into the skills of knowing how or where to find experts, and knowing how to identify which experts are the best or most trustworthy. Developing skills or heuristics like these could make great additions to LessWrong. I'd be willing to be part of this project, but I don't believe I'm competent enough to do it alone. However, an initial post on decent sources for getting answers to questions we can't answer on LessWrong could be a springboard for such a discussion.

What are your thoughts on these topics?

Comment author: Evan_Gaensbauer 03 December 2014 01:57:49AM *  9 points [-]

Several weeks ago I wrote a heavily upvoted post called Don't Be Afraid of Asking Personally Important Questions on LessWrong. I thought it would only be due diligence to follow up with users who have received advice here, and find out when it's backfired. In other words, to avoid bias in the record, we should notice what LessWrong as a community is bad at giving advice about. So, I'm seeking feedback. If you have anecdotes or data about how a plan or advice taken directly from LessWrong backfired, failed, or didn't lead to satisfaction, please share below. If you would like to keep the details private, feel free to send me a private message.

If the ensuing thread doesn't get enough feedback, I'll try asking this question as a Discussion post in its own right. If for some reason you think this whole endeavor isn't necessary, critical feedback about that is also welcome.

Comment author: cmdXNmwH 02 December 2014 10:11:44PM 2 points [-]

I am not skilled at storytelling in casual conversation (telling personal anecdotes). How can I improve this? In particular, what is a good environment to practice while limiting the social cost of telling lame stories?

Comment author: katydee 03 December 2014 05:00:03AM *  2 points [-]

I'm considered pretty good in this respect. I think the #1 thing that helps is just paying attention to things a lot and having a high degree of situational awareness, which causes you to observe more interesting things and thus have more good stories to share. Reading quickly also helps.

When it comes to actually telling the stories, the most important thing is probably to pay attention to people's faces and see what sorts of reactions they're having. If people seem bored, pick up the pace (or simply withdraw). If they seem overexcited, calm it down.

One good environment to practice the skill of telling stories is tabletop role-playing games, especially as the DM/storyteller/whatever. In general, I think standards in this field are usually fairly low and you get a good amount of time to practice telling (very unusual) stories in any given session.

Comment author: Evan_Gaensbauer 03 December 2014 04:55:10AM 2 points [-]

Although I consider myself average in good storytelling abilities, I'd like to be better. Additionally, it's always been curious to me how one can improve this skill, rather than just leaving one's talent in it to the whims of social fortune, or whatever. As such, I've outsourced this question to my social media networks. If I haven't returned with some sort of response within a few days, feel free to remind me in a few days with either a reply to this comment, or a private message.

Comment author: cmdXNmwH 13 December 2014 12:06:32AM 1 point [-]

Ping :)

Comment author: Evan_Gaensbauer 15 December 2014 05:17:14PM 1 point [-]

I didn't get direct answers to your query, but I got some suggestions for dealing with the problem.

One person told me to defuse an awkward situation if a story isn't well-received with a joke:

I find it helps to jokingly acknowledge when a story fell flat. "...and then I found $5"

Also when I make a joke that doesn't get a laugh, I'll look at the time on my phone and say "9:35. I'm calling it." or whatever. That gets a laugh usually and helps defuse the situation.

Another friend suggested it's all about practice, and bearing through it:

Volume, brah. Volume of social interactions coupled with metacognition and intelligence.

That particular friend is a rationalist. By 'metacognition', I believe he meant 'notice you're practicing the right skills'. Basically, in your head, or on a piece of paper, break down the aspect(s) of storytelling you want to acquire as skills, and only spend time training those.

For example, you probably want to get into the habit of telling stories so that the important details that make the story pop come out, rather than the habit of qualifying your points with background details that listeners won't care about. This is a problem I myself have with storytelling. In each of our own minds, we're the only one who remembers the causal details that led to that extraordinary sequence of events that day on vacation. Our listeners don't know the details, because they weren't there, so assume you haven't made any glaring omissions until someone asks about them.

Also, try starting small, I guess. Like, tell shorter anecdotes, and work up to bigger ones. Also, I don't believe it's disingenuous to mentally rehearse a short story you might tell beforehand. I used to believe this, because good storytellers I know, like my uncle, always seem to tell stories off the cuff. Having a good memory, and not using too much jargon, helps. However, I wouldn't be surprised if good storytellers think back on their life experiences and think to themselves 'my encounter today would make a great story'.

Here are some suggestions for creating environments that limit the social cost of telling lame stories.

Another friend of mine thought I was the one asking how to limit the social cost of telling lame stories, so he suggested I tell him stories of mine I haven't told him before, and he won't mind if they're bad. This isn't a bad suggestion. You yourself could go on social media, and ask your friends if they want to get together to share stories. If you don't want to go on social media, try texting or calling a friend about it.

If being so direct still seems too awkward, invite a friend or two for coffee, or to the bar, under a pretext of hanging out, and specifically tell stories. Let your friend know that you think you've got a good story, but you might be awkward at telling it, so you hope they don't mind. If they're already your friend, I expect they'll be genuine and patient enough. However, I recommend ensuring that whoever you're telling a story to is in a neutral or good mood when you start. It's no good to practice storytelling on a friend who just went through a breakup, or lost their job yesterday, or whatever.

I believe this is a good idea for a meetup. At the CFAR alumni reunion this last summer, one alumna hosted a storytelling session. She had a whiteboard out with story suggestions, and we passed around a stick to ensure everyone knew only the designated speaker was supposed to be talking, and there was a short period for questions after each story had finished. Who got to tell a story after the first volunteer was decided spontaneously, as more people eagerly volunteered because the stories were fun, and hearing the last story jogged memories of their own experiences. However, you don't need all that stuff for a storytelling session to be worthwhile.

The room was jam-packed with nearly 30 people at first, and never held fewer than 10, and the storytelling session went on for several hours rather than the single hour originally intended. To me, this is a testament to how much nerds, or folk from this cluster in person-space, want an environment to try these things in. If you attend a rationality or LessWrong meetup near where you live, try hosting one, or suggest a session to another group of friends you know, like a meetup on a different topic, or some group of gamers you're a part of. If that doesn't bear out, try again with someone else, or try starting smaller.

Comment author: maxikov 02 December 2014 08:02:29AM 8 points [-]

Good futurology is different from storytelling in that it tries to make as few assumptions as possible. How many assumptions do we need to allow cryonics to work? Well, a lot.

  • The true point of no return has to be indeed much later than we believe it to be now. (Besides, does it even exist at all? Maybe a super-advanced civilization can collect enough information to backtrack every single process in the universe down to the point of one's death. Or maybe not.)

  • Our vitrification technology is not a secure erase procedure. Pharaohs also thought that their mummification technology was not a secure erase procedure. Even though we have orders of magnitude more evidence to believe we're not mistaken this time, ultimately, it's the experiment that judges.

  • Timeless identity is correct, and it's you rather than your copy that wakes up.

  • We will figure out brain scanning.

  • We will figure out brain simulation.

  • Alternatively, we will figure out nanites, and a way to make them work through the ice.

  • We will figure all that out sooner than the expected time of the brain being destroyed by: slow crystal formation; power outages; earthquakes; terrorist attacks; meteor strikes; going bankrupt; economic collapse; nuclear war; unfriendly AI; etc. That's similar to longevity escape velocity, although slower: to survive, you don't just have to advance technologies, you have to advance them fast enough.

All that combined, the probability of it all working out is really darn low. Yes, it is much better than zero, but still low. If I were to play Russian roulette, I would be happy to learn that instead of six bullets I'm playing with five. However, this relief would not stop me from being extremely motivated to remove even more bullets from the cylinder.
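To make "really darn low" concrete, here is a toy calculation with made-up probabilities for the assumptions listed above, naively treated as independent (they aren't, quite; the point is the multiplication, not the specific numbers):

```python
# Purely illustrative probabilities for the assumptions in the list above.
assumptions = {
    "point of no return is later than we now believe": 0.8,
    "vitrification is not a secure erase": 0.5,
    "scanning/simulation or nanites work, and identity survives": 0.4,
    "revival tech arrives before the brain is otherwise destroyed": 0.3,
}

p_joint = 1.0
for p in assumptions.values():
    p_joint *= p  # conjunction of (assumed) independent events

# Each step is individually plausible, yet p_joint comes out under 0.05:
# a chain of reasonable-looking assumptions multiplies down fast.
```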

The reason why the belief in an afterlife is not just neutral but harmful for modern people is that it demotivates them from doing immortality research. Dying is sure scary, but we won't truly die, so problem solved, let's do something else. And I'm worried about cryonics becoming this kind of comforting story for transhumanists. Yes, actually removing one bullet from the cylinder is much, much better than hoping that Superman will appear at the last moment and stop the bullet. But stopping after removing just one bullet isn't a good idea either. Some amount of resources is devoted to conventional longevity research, but as far as I understand, we're not hoping to achieve longevity escape velocity for currently living people, especially adults. Cryonics appears to be our only chance to avoid death, and I would be extremely motivated to try to make that chance as high as we can possibly make it. And I don't think we're trying hard.

Comment author: RowanE 02 December 2014 04:42:53PM 1 point [-]

About half of your list is actually an OR statement: (timeless identity AND brain scanning AND brain simulation) OR (nanites through ice), and that doesn't even exhaustively cover the possibilities, since it at least needs a term for unknown unknowns we haven't hypothesized yet. It's probably easiest to cover all of them with something like "it's actually possible to turn what we're storing when we vitrify a cryonics patient back into that person, in some form or another".

And the vast majority of cryonicists, or at least those in Less Wrong circles whom your post is likely to reach, already accept that the probability of cryonics working is low. But exactly how low it is, after considering the four assumptions your list reduces to, is something they've definitely already considered and would probably disagree with you on, if you actually gave a number for what "very low" means so we could see whether we even disagree (note: if it's above around 1%, consider how many assumptions there are in trying to achieve "longevity escape velocity", and maybe spread your bets).

And, as others have already pointed out, belief in cryonics doesn't really funge against longevity research. If anything, I expect the two are very strongly correlated together. At least as far as belief in them being desirable or possible goes, it's quite apparent that they're both ideas that are shared by a few communities such as our own and rejected by other communities including "society at large". How much we spend on each is probably affected by e.g. cryonics being a thing you can buy for yourself right now but longevity being a public project suffering from commons problems, so the correlation might be less strong and even inverse if you check it (I would be very surprised if it actually turned out to be inverse), but if so that wouldn't necessarily be because of the reasons you suggest.

Comment author: maxikov 02 December 2014 08:07:58PM 2 points [-]

I would say it's probably no higher than 0.1%.

But by no means am I arguing against cryonics. I'm arguing for spending more resources on improving it. All sorts of biologists are working on longevity, but very few seem to work on improving vitrification. And I have a strong suspicion that it's not because nothing can be done about it: most of the time I talked to biologists about it, we were able to pinpoint non-trivial research questions in this field.

Comment author: ChristianKl 02 December 2014 08:58:18PM 5 points [-]

I think LW looks favorably on the work of the Brain Preservation Foundation and multiple people even donated.

Comment author: ChristianKl 02 December 2014 11:37:30AM *  1 point [-]

All that combined, the probability of it all working out is really darn low.

How about putting numbers on it? Without doing so, your argument is quite vague.

Some amount of resources is devoted to conventional longevity research, but as far as I understand, we're not hoping to achieve longevity escape velocity for currently living people, especially adults.

Have you actually looked at the relevant LW census numbers for what "we are hoping"?

Comment author: cameroncowan 07 December 2014 09:09:07AM 0 points [-]

I think trying to stop death is a rather pointless endeavour from the start, but I agree that the fact that almost everyone has accepted it, and that we have some noble myths to paper it over, certainly keeps resources from being devoted to living forever. But then, why should we live forever?

Comment author: Gondolinian 02 December 2014 03:05:47PM *  8 points [-]

The reason why belief in an afterlife is not just neutral but harmful for modern people is that it demotivates them from doing immortality research.

While mainstream belief in an afterlife is probably a contributing factor in why we aren't doing enough longevity/immortality research, I doubt it's a primary cause.

Firstly, because very few people alieve in an afterlife, i.e. actually anticipate waking up in an afterlife when they die. (Nor, for that matter, do most people who believe in a Heaven/Hell sort of afterlife actually behave in a way consistent with their belief that they may be eternally rewarded or punished for their behavior.)

Secondly, because the people who are in a position to do such research are less likely than the general population to believe in an afterlife.

And finally, because even without belief in an afterlife, people would still probably have a strong sense of learned helplessness around fighting death. So instead of a "Dying is sure scary, we won't truly die, so problem solved, let's do something else" attitude, we'd have a "Dying is sure scary, but we can't really do anything about it, let's do something else" attitude. (I have a hunch the former is really the latter dressed up a bit.)

Comment author: maxikov 02 December 2014 07:34:08PM 2 points [-]

Secondly, because the people who are in a position to do such research are less likely than the general population to believe in an afterlife.

On this particular point, I would say that people who are in a position to allocate funds for research programs are probably about as likely as the general population to believe in the belief in afterlife.

Generally, I agree: it's definitely not the only problem. The USSR, where people were at least supposed to not believe in an afterlife, didn't have longevity research as its top priority. But it's definitely one of the cognitive stop signs that prevent people from thinking about death hard enough.

Comment author: [deleted] 02 December 2014 05:52:24AM *  3 points [-]

I'd like to recommend a fun little piece called The Schizophrenia of Modern Ethical Theories (PDF), which points out that popular moral theories look very strange when actually applied as grounds for action in real-life situations. Minimally, the author argues that certain reasons for action are incompatible with certain motives, and that this becomes incoherent if we suppose that those motives were (at least partially) the motivation we had to adopt that set of reasons in the first place.

For example, if you tend to your sick friend, but explain to them that you are (really only) doing so on utilitarian grounds, or on egoistic grounds, or because you are obligated to do so, etc., well... doesn't that seem off? And don't those reasons for action, presumably a generalization of a great many specific situations of this sort, seem incompatible with the original motivation that we felt was morally good?

Comment author: DanielFilan 02 December 2014 07:13:39AM *  3 points [-]

For example, if you tend to your sick friend, but explain to them that you are (really only) doing so on utilitarian grounds, or on egoistic grounds, or because you are obligated to do so, etc., well... doesn't that seem off?

... no? I mean, maybe it will sound weird if you actually say it, because that's not a norm in our culture, but apart from that, it doesn't seem morally bad or off to me.

ETA: well, I suppose only helping someone on egoistic grounds sounds off, but the utilitarian/moral obligation motivations still seem fine to me.

Comment author: gjm 02 December 2014 02:03:18PM 5 points [-]

I suppose only helping someone on egoistic grounds sounds off

I'm not sure even that does, when it's put in an appropriate way. "I'm doing this because I care about you, I don't like to see you in trouble, and I'll be much happier once I see you sorted out."

There are varieties of egoism that can't honestly be expressed in such terms, and those might be harder to put in terms that make them sound moral. But I think their advocates would generally not claim to be moral in the first place.

I think Stocker (the author of the paper) is making the following mistake. Utilitarianism, for instance, says something like this:

  • The morally best actions are the ones that lead to maximum overall happiness.

But Stocker's argument is against the following quite different proposition:

  • We should restructure our minds so that all we do is to calculate maximum overall happiness.

And one problem with this (from a utilitarian perspective) is that such a restructuring of our minds would greatly reduce their ability to experience happiness.

Comment author: fubarobfusco 02 December 2014 03:47:34PM *  1 point [-]

We have to distinguish between normative ethics and specific moral recommendations. Utilitarianism falls into the class of normative ethical theories. It tells you what constitutes a good decision given particular facts; but it does not tell you that you possess those facts, or how to acquire them, or how to optimally search for that good decision. Normative ethical theories tell you what sorts of moral reasoning are admissible and what goals are credible; they don't give you the answers.

For instance, believing in divine command theory (that moral rules come from God's will) does not tell you what God's will is. It doesn't tell you whether to follow the Holy Bible or the Guru Granth Sahib or the Liber AL vel Legis or the voices in your head.

And similarly, utilitarianism does not tell you "Sleep with your cute neighbor!" or "Don't sleep with your cute neighbor!" The theory hasn't pre-calculated the outcome of a particular action. Rather, it tells you, "If sleeping with your cute neighbor maximizes utility, then it is good."

The idea that the best action we can take is to self-modify to become better utilitarian reasoners (and not, say, self-modify to be better experiencers of happiness) doesn't seem like it follows.

Comment author: blacktrance 03 December 2014 05:41:56AM *  1 point [-]

If I tell my friend that I am visiting him on egoistic grounds, it suggests that being around him and/or promoting his well-being gives me pleasure or something like that, which doesn't sound off - it sounds correct. I should hope that my friends enjoy spending time around me and take pleasure in my well-being.

Comment author: SodaPopinski 02 December 2014 03:39:00AM *  4 points [-]

(Warning: brain dump, most of which is probably not new to the thinking on LW.) I think most people who take the Tegmark Level 4 universe seriously (or any of the preexisting similar ideas) get there by something like the following argument: suppose we had a complete mathematical description of the universe; then exactly what more could there be to make the thing real (Hawking's "what breathes fire into the equations")?

Here is the line of thinking that got me to buy into it. If we ran a computer simulation, watched the results on a monitor, and saw a person behaving just like us, then it would be easy for me to interpret their world and their mind etc. as real (even if I could never experience it viscerally, living outside the simulation). However, if we are willing to call one simulation real, then we get into a slippery slope problem, which I have no idea how to avoid, whereby any physical phenomenon implementing any program, from the perspective of any universal Turing machine, must really exist. So it seems to me that if we believe some simulation is real, there is no obvious barrier to believing every (computable) universe exists. As for whether we stop at computable universes or include more of mathematics, I am not sure anything we would call conscious could tell the difference, so perhaps it makes no difference.

(Resulting beliefs + aside on decision theory) I believe in a Tegmark Level 4 universe with no reality-fluid measure (as I have yet to see a convincing argument for one), a la http://lesswrong.com/r/discussion/lw/jn2/preferences_without_existence/ . Moreover, I don't think there is any "correct" decision theory that captures what we should be doing. All we can do is pick the one that feels right with regard to our biological programming. Which future entities are us, how many copies of us there will be, who I should care about, etc. are all flaky concepts at best. Of course, my brain won't buy into the idea that I should jump off a bridge or touch a hot stove, but I think it is implausible that this will follow from any objective optimization principle. Nature didn't need a decision theory to decide whether it is a good idea to walk into a teleporter machine if two of us walk out the other side. We have our built-in shabby biological decision theory; we can innovate on it theoretically, but there is no objective sense in which some particular decision theory will be the right one for us.

Comment author: shminux 03 December 2014 06:47:55PM 1 point [-]

My approach is that everything is equally real, just not everything is equally useful. On a meta level, talking about what's more real is not useful outside a specific setting. Unicorns are real in MLP, cars are real in the world we perceive, electrons are real in Quantum Electrodynamics, virtual particles are real in Feynman diagrams, agents are real in decision theories, etc.

Comment author: [deleted] 02 December 2014 04:55:07AM 2 points [-]

However, if we are willing to call one simulation real, then we get into a slippery slope problem, which I have no idea how to avoid, whereby any physical phenomenon implementing any program, from the perspective of any universal Turing machine, must really exist.

Can you expand on this a bit?

Comment author: SodaPopinski 01 December 2014 10:25:59PM 5 points [-]

Elon Musk often advocates looking at problems from a first-principles calculation rather than by analogy. My question is what this kind of thinking implies for cryonics. Currently, the cost of full-body preservation is around $80k. What could be done in principle with scale?

Ralph Merkle put out a plan (although lacking in details) for cryopreservation at around $4k. This doesn't seem to account for paying the staff or for transportation. The basic idea is that one can reduce the marginal cost by preserving a huge number of people in one vat. There is some discussion of this going on at Longecity, but the details are still lacking.

Comment author: jkaufman 01 December 2014 11:18:44PM *  4 points [-]

The basic idea is that one can reduce the marginal cost by preserving a huge number of people in one vat.

Currently the main cost in cryonics is getting you frozen, not keeping you frozen. For example, Alcor gives these costs for neuropreservation:

  • $25k -- Comprehensive Member Standby (CMS) Fund
  • $30k -- Cryopreservation
  • $25k -- Patient Care Trust (PCT)
  • $80k -- Total

The CMS fund is what covers the Alcor team being ready to stabilize you as soon as you die, and transporting you to their facility. Then your cryopreservation fee covers filling you with cryoprotectants and slowly cooling you. Then the PCT covers your long term care. So 69% of your money goes to getting you frozen, and 31% goes to keeping you like that.
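As a sanity check on those proportions (a quick sketch; the dollar amounts are Alcor's quoted neuropreservation figures from the list above, and the rest is just arithmetic):

```python
# Alcor neuropreservation funding split, using the figures quoted above
cms_fund = 25_000            # Comprehensive Member Standby (CMS)
cryopreservation = 30_000    # perfusion and cooldown
patient_care_trust = 25_000  # Patient Care Trust (PCT), long-term care

total = cms_fund + cryopreservation + patient_care_trust
getting_frozen = cms_fund + cryopreservation  # standby + cryopreservation

print(f"total:          ${total:,}")                        # $80,000
print(f"getting frozen: {getting_frozen / total:.0%}")      # 69%
print(f"staying frozen: {patient_care_trust / total:.0%}")  # 31%
```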

(Additionally I don't think it's likely that current freezing procedures are sufficient to preserve what makes you be you, and that better procedures would be more expensive, once we knew what they were.)

EDIT: To be fair, CMS would be much cheaper if it were something every hospital offered, because you're not paying for people to be on deathbed standby.

Comment author: Lumifer 02 December 2014 12:54:51AM 1 point [-]

So, for how long will that $25K keep you frozen? Any estimates?

Comment author: gjm 02 December 2014 02:06:56PM 2 points [-]

I believe the intention is "unlimitedly long", which is reasonable if (1) we're happy to assume something roughly resembling historical performance of investments and (2) the ongoing cost per cryopreservee is on the order of $600/year.

Comment author: Lumifer 02 December 2014 03:40:51PM 3 points [-]

we're happy to assume something roughly resembling historical performance of investments

The question is whether the cryofund can tolerate the volatility.

the ongoing cost per cryopreservee is on the order of $600/year.

Aha, that's the number I was looking for, thank you.

Comment author: gjm 02 December 2014 04:33:45PM 1 point [-]

Note that it's just a guess on my part (on the basis that a conservative estimate is that if you have capital X then you can take 2.5% of it out every year and be pretty damn confident that in the long run you won't run out barring worldshaking financial upheavals). I have no idea what calculations Alcor, CI, etc., may have done; they may be more optimistic or more pessimistic than me. And I haven't made any attempt at estimating the actual cost of keeping cryopreservees suitably chilled.

Comment author: Lumifer 02 December 2014 04:43:41PM 1 point [-]

And I haven't made any attempt at estimating the actual cost of keeping cryopreservees suitably chilled

Didn't you say it's on the order of $600/year?

Comment author: philh 02 December 2014 04:48:48PM -1 points [-]

No, ve said that "unlimitedly long" is reasonable if that's the cost. Ve didn't say that that was the cost.

Comment author: gjm 02 December 2014 05:10:55PM 2 points [-]

It sounds as if I wasn't clear, so let me be more explicit.

  • I believe the intention is to be able to keep people cryopreserved for an unlimited period.
  • For this to be so, the alleged one-off cost of keeping them cryopreserved should be such as to sustain that ongoing cost for an unlimited period.
  • A conservative estimate is that with a given investment you can take 2.5% of it out every year and, if your investments' future performance isn't tragically bad in comparison with historical records, be reasonably confident of never running out.
  • This suggests that Alcor's estimate of the annual cost of keeping someone cryopreserved is (as a very crude estimate) somewhere around $600/year.
  • This is my only basis for the $600/year estimate; in particular, I haven't made any attempt to estimate (e.g.) the cost of the electricity required to keep their coolers running, or the cost of employing people to watch for trouble and fix things that go wrong.

(Why 2.5%? Because I've heard figures more like 3-4% bandied around in a personal-finance context, and I reckon an institution like Alcor should be extra-cautious. A really conservative figure would of course be zero.)
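To make the back-of-envelope arithmetic explicit (a sketch of my own guesswork, not Alcor's actual numbers; the $25k is the PCT allocation quoted upthread, and the withdrawal rates are the assumptions discussed above):

```python
# Sustainable annual withdrawal from the long-term care fund at a few
# assumed safe withdrawal rates (guesses, not Alcor's published figures)
principal = 25_000  # Patient Care Trust allocation, USD

for rate in (0.025, 0.03, 0.04):  # extra-cautious vs. commonly cited rates
    annual = principal * rate
    print(f"{rate:.1%} of ${principal:,} -> ${annual:,.0f}/year")
```

At the conservative 2.5%, this gives $625/year, which is where the "on the order of $600/year" figure comes from.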

Comment author: Lumifer 02 December 2014 05:41:13PM 2 points [-]

Ah, I see. I think I misread how the parentheses nest in your post :-)

So you have no information on the actual maintenance cost of cryopreservation and are just working backwards from what Alcor charges.

2.5%

I'm having doubts about this number, but that's not a finance thread. And anyway, in this context what matters is not reality, but Alcor's estimates.

A really conservative figure would of course be zero

That's debatable -- inflation can decimate your wealth easily enough. Currently inflation-adjusted Treasury bonds (TIPS) trade at negative yields.

Comment author: RomeoStevens 02 December 2014 12:58:03AM 3 points [-]

I've seen extremely low plastination estimates due to the lack of maintenance costs. Very speculative, obviously, and the main component of cost is still the procedure itself (though there are apparently some savings here as well).

Comment author: Metus 01 December 2014 10:03:55PM 1 point [-]

How do you track and control your spending? Disregarding financial privacy, I started paying by card for everything, which allows me to track where I spend my money but not really on what. I find that in general I spend less than I earn, because spending money somehow hurts.

Comment author: Robin_Hartell 02 December 2014 09:08:22PM 2 points [-]

MoneyDashboard.com - links directly to my credit cards and bank accounts. I hear that the US equivalent is mint.com

Comment author: Alsadius 02 December 2014 06:08:09AM 2 points [-]

My income is variable and hasn't been great lately. As a result, several months ago I flipped the "I'm poor!" switch that's been lingering in my brain since I was a student, and so I avoid almost all unnecessary spending (a small recreation budget is allowed, for sanity, but otherwise it's necessities and business expenses only). Every few months I review spending to see if there are any excessive categories, but my intuition has been pretty good.

And yeah, everything on plastic. Not even because of tracking, mostly because Visa gives me 1% cash back, which is a better bribe than anyone else offers.

Comment author: Lumifer 02 December 2014 04:37:28PM *  2 points [-]

which is a better bribe than anyone else offers

Some cards will give you 1.5% back and I think I've seen an ad for a Citibank card that gives you 1% on purchase plus another 1% on payment.

Comment author: RichardKennaway 01 December 2014 11:41:37PM 3 points [-]

I have a spreadsheet in which I record every financial transaction, and enter all future transactions, estimated as necessary, out to a year ahead. Whenever I get a bank statement, credit card statement, or the like, I compare everything in it with the spreadsheet and correct things as necessary. I don't try to keep track of cash spent out of my pocket. I tried that once, but found it wasn't practical. The numbers would never add up and there would be no independent record to check them against.

One row of the spreadsheet computes my total financial assets, which I observe ticking upwards month by month.

I don't record in detail what I buy, only the money spent and where (which is a partial clue to what I bought). I'm sufficiently well off that I don't need to plan any of my expenditure in detail, only consider from time to time whether I want to direct X amount of my resources in the way I observe myself doing.

I spend less than I earn, because it seems to me that that is simply what one does, if one can, in a sensibly ordered life.

Comment author: Artaxerxes 01 December 2014 04:29:53PM *  14 points [-]

GiveWell's top charities updated today. Compared to previous recommendations, they have put Against Malaria Foundation back on the top charities list (partial explanation here), and they have also added an "Other Standout Charities" section.

Comment author: tog 01 December 2014 09:45:37PM 5 points [-]
Comment author: Metus 01 December 2014 04:38:33PM 4 points [-]

Also note that there is information on tax-deductibility of donations outside of the U.S. on that site. If you are paying a lot of income tax you might be able to get some money back, donate even more or some combination of those two.

Comment author: tog 01 December 2014 09:56:35PM 5 points [-]

Even more easily, you can visit this interactive tool I made and it'll tell you which charities are tax-deductible or tax-efficient in your country, and give you the best links to them. It also has a dropdown covering 18 countries, including some in which tax-efficient routes are far from obvious.

Comment author: Metus 01 December 2014 10:01:15PM 3 points [-]

Thank you. It is a bit of a shame that it is so complicated to donate tax-efficiently from one EU country to another. I can understand complications going from the US to the EU member states and vice versa but this is plenty strange.

Comment author: Stefan_Schubert 01 December 2014 02:23:31PM *  2 points [-]

Sweden launches a price comparison site, moneyfromsweden.se, for transferring money abroad. The site has information in many languages spoken by different immigrant communities in Sweden.

People living in Sweden send large sums of money to family and friends abroad. But it can be expensive. Fees and currency exchange costs differ between money transfer operators.

On average it costs about 15 percent (150 SEK) to send 1,000 SEK. In extreme cases the cost can be as high as 48 percent.

Therefore, the Swedish Consumer Agency has, at the Government's request, developed an independent price comparison service.

  • It is important that we can help consumers find information in this complex market. The service will make it easier for those who send money abroad. The goal is to ensure that as much money as possible reaches the receiver, says Gunnar Larsson, Director General of the Swedish Consumer Agency.

In my view a very good initiative, since it gives people objective information on which they can make rational decisions. It should be of interest to rationalists and effective altruists (since remittances have a big impact on developing countries).

Comment author: Metus 01 December 2014 01:55:55PM 3 points [-]

This is for the people versed in international and tax law.

By a ruling of the ECJ, all taxpayers in the EU can deduct charitable donations to any organisation within the EU from their taxes. In Germany, at least, this means that the charitable status has to be certified by the German authorities. The usual process here is that a legal entity wishing to accept tax-deductible donations has to document how its funds are used, and can then issue certificates to donors which document the tax-deductibility of their donations.

Which leads to a couple of ideas and questions:

  1. It is obvious then that to receive tax-deductible donations, one in principle needs exactly one legal seat in any of the EU member states. This opens up donations from potentially 500 million people in the largest economic zone in the world. Does charitable status have to be proven in every one of the currently 28 member states, or is there an easier way to do this, e.g. by some sort of transitivity? Is it possible to get the usual certification at all, or does charitable status have to be proven for each and every one of the donations?

  2. Is this process specific to Germany, or do other EU member states, such as Ireland and the United Kingdom, recognise charitable organisations from other member states by default? How do medium-sized organisations solve the problem of receiving tax-deductible donations - do they just found a bunch of subsidiaries?

  3. Assuming that the situation is maximally bleak with regard to tax deductibility, would it be useful to lobby for reform at the EU level to make it much easier to donate from any member state to any legal entity in any other member state? Or is the marginal unit of money better spent elsewhere?

Considering the tax load in the EU and the potential wealth available for donations, a lot of thought should be given to these kinds of things.

I have a couple more thoughts on the whole matter of extracting more donations, if anyone is interested.

Comment author: rkdj 01 December 2014 09:25:26AM 4 points [-]

Do you or would you secretly invade your child's privacy for their own protection?

Comment author: CBHacking 01 December 2014 11:31:26AM 8 points [-]

TL;DR because this turned into a lot of looking back on my relationship with my parents: I'd make sure they knew I had the capability, and then, if I saw a need to use it, I would. I wouldn't give an expectation of privacy and then violate it.

First, let me state that I'm in my late 20s, and have no children.

Secretly? No. Or rather, I would never hide that I have the capability, though I wouldn't necessarily tell them when I was using it. If I had reason to suspect them of hiding things from me, I might even hide the mechanism, but I'd let them know that I could check. The goal would be to indicate that whatever it is I'm concerned about is REALLY IMPORTANT (i.e. more important than privacy), and that I expect that to act as a deterrent.

On the other hand, I can't think of many scenarios that would call for such action. I would make it clear, for example, that a diary is private unless I expect the kid to be in danger, but the scenarios that actually come to mind for when I would go through it all involve things like "E left without telling anybody where e'd be, can't be reached in any way, and has been gone since yesterday" or similar; if it was a suspicion of something like drug abuse, my inclination is simply to talk about it, not even necessarily asking anything specifically. If you can show your kid(s) the utility of giving a positive weight to your views on a subject, then you can often avoid needing to do anything so drastic as violating their privacy (in any scenario where they reasonably expect to have it).

With all that said, I don't really have a good view of how adversarial parent/child relationships function (or rather, dysfunction); I certainly didn't always get along with my parents, and didn't even always respect them very much (and oh damn but did my dad blow up when I told him that, circa age 12) but they never violated my trust on big things. It was never a case of me-vs.-him (most of my problems were with my father), but rather of my utility functions vs. his appreciation for my utility functions. He could make me incredibly angry by promising some treat and then simply failing to follow through (for what never seemed, to me, to be a valid reason to break your word) but that was because I valued a verbal promise of something trivial far more strongly than it warranted, not because he was inherently untrustworthy; to me, the breaking of a promise was a much greater betrayal than the loss of the treat. Once I learned to understand him better, I simply discounted any promise he according to how (un)important he thought it was (not how important I thought it was; I didn't get so far back then as to think about "how important he thinks it is to me"). The only times he came close to breaking the big ones I could usually argue him around.

Took me a long while to work out the details there, though. Might be good to help the kid(s) in question understand where you're coming from, and how much you value something like your implicit (or explicit) promise of their privacy. Of course, if you already have given an expectation of the child's privacy being sacrosanct, I don't know what I'd do in your place. If you've already been caught violating such expectations, my only recommendation would be to immediately explain why you fucked up because, if the kid's worldview is anything like mine at that age (which it totally may not be, and I'm no psychologist) you sure as hell have. Not by the snooping itself, but by simultaneously creating a scenario where the kid had reason to believe emselves private and yet one where you felt it was justified to violate that.

Comment author: James_Miller 07 December 2014 04:35:28AM 1 point [-]

For a young child, of course it's not even close since the kid probably doesn't even value privacy.

Comment author: imuli 03 December 2014 02:52:34PM 1 point [-]

No.

Aside from the don't-do-things-to-other-people-without-their-consent angle (which is hard with a child), my two-year-old is already ascribing motives to my actions, and when privacy comes into being, I doubt I'll be able to use any information that I do acquire without them noticing.

Comment author: someonewrongonthenet 02 December 2014 08:38:54AM *  3 points [-]

Corollary - would you secretly invade an adult's privacy for their own protection?

I have more trouble answering that one. The answer to the "children" question begins somewhere on "estimate what they might think about the situation as adults"...now if I only knew where the line could be drawn for an adult, this would be simple...

Comment author: Anatoly_Vorobey 02 December 2014 04:57:06AM 2 points [-]

Yes, I would.

(Have two small children, haven't needed to).

Comment author: passive_fist 01 December 2014 11:38:05PM 2 points [-]

Depends on age. If they were teenagers, not secretly, for the simple reason that it could backfire (they find out their privacy has been invaded, then in the future hide things even more strongly). I would, however, expect them to tell me whatever information about their lives I wanted to know.

For full disclosure: I'm in my late 20's and have no children.

Comment author: Salemicus 01 December 2014 10:37:23AM 9 points [-]

I don't have children. But my answer is that, potentially, I would, but it would depend on the situation.

Firstly, I think the level of privacy that a child can reasonably expect to have from his parents is age and context-dependent. A thirty-year-old who has left the home has a far greater legitimate expectation of privacy than a fifteen-year-old living at home, who in turn can legitimately expect far more than a 5-year-old. I don't think most people have any problem with, say, using a baby-monitor on a young child, even though this could be viewed as a gross invasion of privacy if done on someone older.

Secondly, it is better to be honest and open where possible, as otherwise when your actions are discovered (and they likely will be), it could be seen as a breach of trust. However, if your child is lying to you, then it could be appropriate. For example, suppose my teenage child kept receiving suspicious parcels through the mail and gave implausible accounts of the contents. I would then try to sit the child down and say: right, we're going to open this together and see what's inside. But if that wasn't possible, then yes, I might secretly open one of the parcels to ascertain whether the child was doing something illegal, dangerous, or otherwise inappropriate.