If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

Open thread, Oct. 27 - Nov. 2, 2014

Not all of the MIRI blog posts get cross-posted to LessWrong. Examples include the recent post AGI outcomes and civilisational competence and most of the conversations posts. Since it doesn't seem like the comment section on the MIRI site gets used much, if at all, perhaps these posts would receive more visibility and some more discussion would occur if they were linked to or cross-posted on LW?

Re: "civilizational incompetence". I've noticed "civilizational incomptence" being used as a curiosity stopper. It seems like people who use the phrase typically don't do much to delve in to the specific failure modes civilization is falling prey to in the scenario they're analyzing. Heaven forbid that we try to come up with a precise description of a problem, much less actually attempt to solve it.

(See also: http://celandine13.livejournal.com/33599.html)

4Artaxerxes
I, too, have seen it used too early or in contexts where it probably shouldn't have been used. As long as people use it not as an explanation for something, but rather as a description or judgement, its use as a curiosity stopper is avoidable. So I suppose there is a difference between saying "bad thing x happens because of civilisational incompetence" and "bad thing x happens, which is evidence that there is civilisational incompetence." Separate from this concern, it also has a slight LessWrong-exceptionalism, 'peering at the world from above the sanity waterline' vibe to it as well. But that's no biggie.
1Curiouskid
I had the same thought when I read Hayworth's recent interview. It's really good.

Is the recommended courses page on MIRI's website up to date with regards to what textbooks they recommend for each topic? Should I be taking the recommendations fairly seriously, or more with a grain of salt? I know the original author is no longer working at MIRI, so I'm feeling a bit unsure.

I remember lukeprog used to recommend Bermudez's Cognitive Science over many others. But then So8res reviewed it and didn't like it much, and now the current recommendation is for The Oxford Handbook of Thinking and Reasoning, which I haven't really seen anyone say much about.

There are a few other things like this; for example, So8res apparently read Heuristics and Biases as part of his review of books on the course list, but it doesn't seem to appear on the course list anymore, and under the heuristics and biases section Thinking and Deciding is recommended (once reviewed by Vaniver).

[-]So8res120

No, it's not up to date. (It's on my list of things to fix, but I don't have many spare cycles right now.) I'd start with a short set theory book (such as Naive Set Theory), follow it up with Computability and Logic (by Boolos), and then (or if those are too easy) drop me a PM for more suggestions. (Or read the first four chapters of Jaynes on Probability Theory and the first two chapters of Model Theory by Chang and Keisler.)

Edit: I have now updated the course list (or, rather, turned it into a research guide), which is fairly up-to-date (if unpolished) as of 6 Nov 14.

2Strangeattractor
I have some suggestions for books related to the topics you mentioned. There's a pretty good section on cognitive ergonomics in Wickens' Introduction to Human Factors Engineering that is a clear introduction to the topic, and mentions some examples of design issues that can arise from human beings' cognitive limitations and biases. Also, Chris Eliasmith's book Neural Engineering: Computation, Representation, and Dynamics in Neurobiological Systems shows some of the technical approaches people have taken to modelling what happens in the brain. I'm not sure if either of those is what you're looking for, but I found them interesting.
1[anonymous]
I think Understanding Machine Learning (out this year) is better than Bishop's book (which is, frankly, insufferably obscurantist), and that instead of model-checking you ought to be learning a proof assistant (I learned Coq from Benjamin Pierce's Software Foundations).
3Artaxerxes
The book the page recommends is Kevin Murphy's Machine Learning: A Probabilistic Perspective. I don't see any of Chris Bishop's books on the MIRI list right now; was Pattern Recognition and Machine Learning there at some point? Or am I missing something you're saying?
3[anonymous]
Oh, well all right then. I was under the mistaken impression Bishop's book was listed. My bad!

Luke's IAMA on reddit's r/futurology in 2012 was pretty great. I think it would be cool if he did another; a lot has changed in 2+ years. Maybe to coincide with the December fundraising drive?

5[anonymous]
If he could not repeat the claim that UFAI is so easily compressible it could "spread across the world in seconds" through the internet, that would be quite helpful, actually. Even in the rich world, with broadband, transferring an intelligent agent all across the world will take whole hours, especially given the time necessary for the bugger to crack into and take control of the relevant systems (packaging itself as a trojan horse and uploading itself to 4chan in a "self-extracting zip" of pornography will take even longer).
3Evan_Gaensbauer
I just sent a message to Luke. Hopefully he will notice it.

The outside view.... (The whole link is quoted.)

Yesterday, before I got here, my dad was trying to fix an invisible machine. By all accounts, he began working on the phantom device quite intently, but as his repairs began to involve the hospice bed and the tubes attached to his body, he was gently sedated, and he had to leave it, unresolved.

This was out-of-character for my father, who I presumed had never encountered a machine he couldn’t fix. He built model aeroplanes in rural New Zealand, won a scholarship to go to university, and ended up as an aeronautical engineer for Air New Zealand, fixing engines twice his size. More scholarships followed and I first remember him completing his PhD in thermodynamics, or ‘what heat does’, as he used to describe it, to his six-year-old son.

When he was first admitted to the hospice, more than a week ago, he was quite lucid – chatting, talking, bemoaning the slow pace of dying. “Takes too long,” he said, “who designed this?” But now he is mostly unconscious.

Occasionally though, moments of lucidity dodge between the sleep and the confusion. “When did you arrive?” he asked me in the early hours of this morning, having woken up wanting water. Onc

... (read more)

Today I had an aha moment when discussing coalition politics (I didn't call it that, but it was) with elementary schoolers, 3rd grade.

As a context: I offer an interdisciplinary course in school (voluntary, one hour per week). It gives a small group of pupils a glimpse of how things really work. Call it rationality training if you want.

Today the topic was pairs and triples. I used analogies from relationships: couples, parents, friendships. What changes in a relationship when a new element appears? Why do relationships form in the first place? And this revealed differences in how friendships work among boys and among girls. And that in this class, at this moment at least, the girl friendships were largely coalition politics: "If you do this you are my best friend," or "No, we can't be best friends if she is your best friend." For the boys it appears to be at least quantitatively different. But maybe just the surface differs.

In the end I represented this as graphs (kind of) on the board. And the children were delighted to draw their own coalition diagrams, even abbreviating names by single letters. You wouldn't have guessed that these diagrams were from 3rd grade.

6MrMind
I wonder what would happen if we trained monkeys to reveal this kind of detail to us.
[-]Emile160

You may be interested in "Chimpanzee Politics" by Frans de Waal, which is about exactly that (observing a group of chimps in a zoo, and how their politics and alliances evolve, with a couple of coups).

1MrMind
Great! Added to my Amazon wishlist ;)
4Gunnar_Zarncke
But maybe we could. Considering the tricky setups scientists use to compare the intelligence of mice and rats, I'd think that it should be possible to devise an experiment which teaches monkeys to reveal their clan structure. I'm thinking along the lines of first training association of buttons with clan members (photos) and then allowing them to select groups which should or should not get a treat.
4ChristianKl
How did you deal with the prospect of one of the kids being emotionally hurt by the whole process of being explicit about relationships?
1Gunnar_Zarncke
I of course keep an eye on the emotional wellbeing of the children. But I'm not really clear what kind of emotional hurt you mean. Being exposed as, e.g., the loner, possibly? I probably wouldn't try it in this relatively direct way if the group weren't this small (4 children), where I can keep the discourse inspirational and playful at all times.
4ChristianKl
Yes. Getting children to openly state "We can't be best friends because you are best friends with X" seems to ask for trouble, but if you have enough presence in the room to keep the discourse inspirational and playful it might be fine.
4Gunnar_Zarncke
Ah yes. "We can't be best friends because you are best friends with X" wasn't literally said with respect to someone in the room. Something like that was quoted by a girl as an example, so it wasn't personal in that moment, but I assume that it is a real statement too.
[-][anonymous]150

Recently, I started a writing wager with a friend to encourage us both to produce a novel. At the same time, I have been improving my job hunting by narrowing my focus on what I want out of my next job and how I want it. While doing these two activities, I began to think about what I was adding to the world. More specifically, I began to ask myself what good I wanted to make.

I realized that my wanting to write a novel did not come from a desire to add a good to the world (I don't want to write a world-changing book); it was just something enjoyable. So, I looked at my job. I realized that it was much the same. I'm not driven to libraries specifically by a desire to improve the world's intellectual resources; that's just a side effect. I'm driven to them out of enjoyment of the work.

So, if I'm not producing good from the two major productions of my life, I thought about what else I could produce or if I should at all. But I couldn't think of any concrete examples of good I could add to the world outside of effective altruism. I'm not an inventor nor am I a culture-shifting artist. But I wanted to find something I could add to the world to improve it, if only for my own vanity.

I decided, for the time b... (read more)

Yes, take the Invisible Hand approach to altruism: by pursuing your own productive wellbeing you will generate wellbeing in the worlds of others. Trickle-down altruism is a feasible moral policy. Come to the Dark Side and bask in Moral Libertarianism.

8ChristianKl
Important insights usually happen to sound simple but the insight still takes years to achieve.
5[anonymous]
Link/source?

How Communities Work, and What Wrecks Them

One of the first things I learned when I began researching discussion platforms two years ago is the importance of empathy as the fundamental basis of all stable long term communities. The goal of discussion software shouldn't be to teach you how to click the reply button, and how to make bold text, but how to engage in civilized online discussion with other human beings without that discussion inevitably breaking down into the collective howling of wolves.

Behavior patterns that grind communities down: endless contrarianism, axe-grinding, persistent negativity, ranting, and grudges.

3Nornagest
I agree about all of that except for contrarianism (and yes, I'm aware of the irony). You want to have some amount of contrarianism in your ecosystem, because people sometimes aren't satisfied with the hivemind and they need a place to go when that happens. Sometimes they need solutions that work where the mainstream answers wouldn't, because they fall into a weird corner case or because they're invisible to the mainstream for some other reason. Sometimes they just want emotional support. And sometimes they want an argument, and there's a place for that too. What you don't want is for the community's default response to be "find the soft bits of this statement, and then go after them like a pack of starving hyenas tearing into a piñata made entirely of ham". There need to be safe topics and safe stances, or people will just stop engaging -- no one's always in the mood for an argument. On the other hand, too much agreeableness leads to another kind of failure mode -- and IMO a more sinister one.
3Adele_L
The article talked about endless contrarianism, where people disagree as a default reaction, instead of because of a pre-existing difference in models. I think that is a problem in the LW community.
4TrE
On the contrary, from my experience it isn't. Sorry, I could not resist the opportunity. But seriously, I don't often see people disagreeing for the sake of disagreeing. More often, they'll point out different aspects, or their own perspective on a topic. To be honest, support and affirmation are perhaps a bit rarer than they should be, but I've rarely perceived disagreement to be hostile, as opposed to misunderstanding, or legitimate and resolvable via further discussion. More datapoints, anyone?
3ChristianKl
If other people disagree with what I write they usually do it for the sake of disagreeing. However if I disagree... ;)

I posted a link to the 2014 survey in the 'Less Wrong' Facebook group, and some people commented they filled it out. Another friend of mine started a Less Wrong account to comment that she did the survey, and got her first karma. Now I'm curious how many lurkers become survey participants, and are then incentivized to start accounts to get the promised karma by commenting that they completed it. If it's a lot, that's cool, because having one's first comment upvoted right after registering an account on Less Wrong seems like a way of overcoming the psychological barrier of 'oh, I wouldn't fit in as an active participant on Less Wrong...'

If you, or someone you know, got active on Less Wrong for the first time because of the survey, please reply as a data point. If you're a regular user who has a hypothesis about this, please share. Either way, I'm curious to discover how strong an effect this is, or is not.

4[anonymous]
My first comment was after I completed the 2014 survey. I've only been lurking for about a month, and this was the first survey I've participated in.
4Sjcs
I have been an on-and-off lurker for ~15 months, and only recently created an account (not because of the survey though). I have participated in both 2013 and 2014's surveys.

Someone has created a fake Singularity Summit website.

(Link is to MIRI blog post claiming they are not responsible for the site.)

MIRI is collaborating with Singularity University to have the website taken down. If you have information about who is responsible for this, please contact luke@intelligence.org.

[-]Omid130

What chores do I need to learn how to do in order to keep a clean house?

[-]Emily190

Laundry (plus ironing, if you have clothes that require that - I try not to), washing up (I think this is called doing the dishes in America), mopping, hoovering (vacuuming), dusting, cleaning bathroom and kitchen surfaces, cleaning toilets, cleaning windows and mirrors. That might cover the obvious ones? Seems like most of them don't involve much learning but do take a bit of getting round to, if you're anything like me.

I'd add, not leaving clutter lying around. It both collects dust, and makes cleaning more of an effort. Keep it packed away in boxes and cupboards. (Getting rid of clutter entirely is a whole separate subject.)

2Omid
Thank you. How many hours a week do you spend doing these things?

It's really hard to estimate that accurately, because for me something like 90% of cleanliness is developing habits that couple it with the tasks that necessitate it: always and automatically washing dishes after cooking, putting away used clothes and other sources of clutter, etc. Habits don't take mental effort, but for the same reason it's almost impossible to quantify the time or physical effort that goes into them, at least if you don't have someone standing over you with a stopwatch.

For periodic rather than habitual tasks, though, I spend maybe half an hour a week on laundry (this would take longer if I didn't have a washer and dryer in my house, though, and there are opportunity costs involved), and another half hour to an hour on things like vacuuming, mopping, and cleaning porcelain and such.

2Emily
My timelog tells me that over the last ~7 weeks I've spent an average of 22 mins/day doing things with the tag "chores". That time period does include a two week holiday during which I spent a lot less time than usual on that stuff, so it's probably an underestimate. Agree with Nornagest below about the importance of small everyday habits! (Personally I am good at some of these, terrible at others.)
2Emily
I should add that I live with another person, who does his share of the chores, so this time would probably increase if I wanted the same level of clean/tidy while living alone. I'm not sure how time per person scales with changes in the number of people though... probably not linearly, but it must depend on all sorts of things like how exactly you share out the chores, what the overhead sort of times are like for doing a task once regardless of how much task there is, and how size of living space changes with respect to number of people living in it. Also, if you add actively non-useful people like babies, I expect all hell breaks loose.
4Manfred
Adding on to Emily:

* Having a particular hamper or even corner of your room where you put dirty laundry, so that it isn't all over your floor. When this hamper/corner is full, do your laundry.
* Analogous organized or occasionally-organized places for paperwork or whatever else is being clutter-y.
* If you have ancient carpet and it's dirty and stinky, learn how to rent a Rug Doctor-type steam cleaner from a nearby supermarket.
* If you have a bunch of broken or dirty/stinky stuff in your house, learn how to get the trash people to haul it away, and learn where to buy cheap used furniture / cheap online kitchen supplies / whatever to replace your old junk.
* Having tools handy to tidy up nails / tighten loose screws etc. when you notice them.
* Keeping a brush and plunger near your toilet. If your sink has clogged any time in the past 6 months, also consider having chemical unclogger / a long skinny "snake" (that's what it's actually called) that you shove down the drain and wiggle around to bust clogs.
* Figure out where all the places that are hard to clean are. These are the places that will have 50 years of accumulated nasty dirt that will make the whole house smell better when you get rid of it.
2Risto_Saarelma
Learn to notice things that need cleaning. Know a good way to get rid of everything you possess when you no longer need it (bookcrossing, electronic waste recycling or just a trash bag). Learn to notice when you have things cluttering up the place that you no longer need.
0hyporational
If you've got the money and a simple enough apartment layout, I recommend a vacuum cleaning robot. My crawling saucer collects a ridiculous amount of dust from the floor every day, and this seems to keep other surfaces and the air dustless too. There's no way I could clean up that much dust myself, and I'd do the cleaning so rarely that the dust would get all over the place.
0Richard_Kennaway
Avoid these and you'll be off to a good start. :)

Assume that Jar S contains just silver balls, whereas Jar R contains ninety percent silver balls and ten percent red balls.

Someone secretly and randomly picks a jar, with an equal chance of choosing either. This picker then takes N randomly selected balls from his chosen jar with replacement. If a ball is silver he keeps silent, whereas if a ball is red he says “red.”

You hear nothing. You make the straightforward calculation using Bayes’ rule to determine the new probability that the picker was drawing from Jar S.

But then you learn something. The red balls are bombs and if one had been picked it would have instantly exploded and killed you. Should learning that red balls are bombs influence your estimate of the probability that the picker was drawing from Jar S?

I’m currently writing a paper on how the Fermi paradox should cause us to update our beliefs about optimal existential risk strategies. This hypothetical is attempting to get at whether it matters if we assume that aliens would spread at the speed of light killing everything in their path.
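For concreteness, the "straightforward calculation using Bayes' rule" mentioned above can be written as a minimal Python sketch. The prior, the red-ball fraction, and the with-replacement draws come from the problem statement; the particular values of N are arbitrary:

```python
# Posterior probability that the picker drew from Jar S,
# given that you heard nothing after N draws with replacement.

def posterior_jar_s(n_draws, prior_s=0.5, p_red_in_r=0.1):
    p_silence_given_s = 1.0                          # Jar S holds only silver balls
    p_silence_given_r = (1 - p_red_in_r) ** n_draws  # every draw from Jar R was silver
    num = p_silence_given_s * prior_s
    evidence = num + p_silence_given_r * (1 - prior_s)
    return num / evidence

for n in (1, 5, 10, 20):
    print(n, round(posterior_jar_s(n), 4))
# 1 0.5263, 5 0.6287, 10 0.7415, 20 0.8916
```

The question in the thread is then whether learning that red = bomb licenses any update beyond this one.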

I had a conversation with another person regarding this Leslie's-firing-squad-type stuff. Basically, I came up with a caveman analogy, with the cavemen facing lethal threats. It's pretty clear - from the outside - that the cavemen who do probability correctly and don't do anthropic reasoning with regard to tigers in the field will do better at mapping lethal dangers in their environment.

2James_Miller
Thanks for letting me know about "Leslie's firing squad[s]".
5private_messaging
You're welcome. So what's your actual take on the issue? I've never seen a coherent explanation of why bombs must make a difference. I've seen appeals to "but you wouldn't be thinking anything if it was red", which ought to perfectly cancel out if you apply that to the urn choice as well. edit: i.e. this anthropics, to me, is sort of like calculating the forces in a mechanical system but making an error somewhere, which yields an apparent perpetuum mobile, as the forces on your wheel with water and magnets fail to cancel out. Likewise, you evaluate the impacts of some irrelevant information, you make an error somewhere, and the irrelevant information makes a difference.
1James_Miller
To a first approximation I don't think it makes a difference, but it does add some logical uncertainty. Also, intuitively I want to be able to use anthropic reasoning to say "there is only a tiny chance that the universe would have condition X, but I'm not surprised by X because without X observers such as us wouldn't exist", but I think doing this implies I have to give a different estimate if red = bomb.
5private_messaging
Hmm, that's an interesting angle on the issue; I didn't quite realize that was the motivation here. I would be surprised by our existence if that were the case, and not further surprised by the observation of X (because I already observed X by way of perceiving my existence). Let's say I remember that there was a strange, surprising sign painted on the wall, and I go by the wall, and I see that sign; I am surprised that there's that sign on the wall at all, but I am not surprised that I am seeing it (because I can perform an operation in my head that implies the existence of the sign - my memory tells me I've seen it before). Same with existence: I am surprised we exist at all, but I am not surprised when I observe something necessary for my existence, because I could have derived it from prior observations.
2jnarx
I think this particular example doesn't really exemplify what I think you're trying to demonstrate here. A simpler example would be: You draw one ball out of a jar containing 99% red balls and 1% silver balls (randomly mixed). The ball is silver. Is this surprising? Yes. What if you instead draw a ball in a dark room so you can't see the color of the ball (same probability distribution)? After drawing the ball, you are informed that the red balls contain a high explosive, and if you had drawn a red ball from the jar it would have instantly exploded, killing you. The lights go on. You see that you're holding a silver ball. Does this surprise you?
3private_messaging
Well, being alive would surprise me, but not the colour of the ball. Essentially what happens is that the internal senses (e.g. perceiving one's own internal monologue) end up sensing the ball colour (by way of the high explosive).
5jefftk
This is related to the Sleeping Beauty Problem, and in general the answer depends on what you're trying to do with "probability". For lots and lots more, Bostrom's PhD thesis is very detailed: Anthropic Bias: Observation Selection Effects in Science and Philosophy. Bostrom's Observation Selection Effects and Human Extinction Risks paper is less philosophical and sounds like it's more relevant to the paper you're working on.
5polymathwannabe
Before I actually do the math, "you hear nothing" appears to affect my estimate exactly in the same way as "you're still alive."
0Kindly
This seems like the obvious answer to me as well. What am I missing?
0polymathwannabe
Now that I see this problem again, my thoughts on it are slightly different. In the version with no bombs, there's a possible scenario where the picker draws a red ball but lies to you by keeping silent. So, there's a viable way for "you hear nothing" AND "a red ball was drawn" to happen. But in the version with bombs, the scenario with "you are alive" AND "a red ball was drawn" can never happen. So, being alive in the with-bomb version is stronger evidence for Jar S than hearing nothing is in the no-bomb version.
0Kindly
Okay, sure. The picker could be lying or speaking quietly; the bomb could be malfunctioning or have a timer that hasn't gone off yet. (Note to self: put down the ball as soon as you find out that it could be a bomb.) These things don't seem like they should be the point of a thought experiment.
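One way to see the point made in the two comments above quantitatively: model both leaks (a picker who sometimes keeps silent about red, and a bomb that sometimes fails to detonate) as a single probability that a red draw stays "quiet". The parameter values below are hypothetical, purely for illustration:

```python
# How strongly "all quiet after N draws" favors Jar S, as a function
# of the chance that a red draw stays quiet (a lying picker or a dud
# bomb). p_quiet_red = 0 is the strict bomb version, where being
# alive rules out any red draw having occurred.

def posterior_s(n_draws, p_quiet_red, prior_s=0.5, p_red=0.1):
    # a single draw from Jar R stays quiet if it's silver,
    # or if it's red but suppressed (a lie / a dud bomb)
    p_quiet_given_r = ((1 - p_red) + p_red * p_quiet_red) ** n_draws
    num = 1.0 * prior_s  # Jar S is always quiet
    return num / (num + p_quiet_given_r * (1 - prior_s))

print(posterior_s(10, p_quiet_red=0.5))  # ~0.6255: leaky, weaker evidence for S
print(posterior_s(10, p_quiet_red=0.0))  # ~0.7415: strict bomb, stronger evidence
```

On this model, the bomb changes the answer only insofar as it changes the probability that a red draw could have gone unnoticed.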
4Lumifer
A side note: under the cherry bomb scenario the probability of you hearing the word "red" is zero.
0Manfred
If the two jar scenarios start with equal anthropic measure (i.e. looking in from the outside), then you really are less likely to have jar R if you're not dead.

I have a question for anyone who spends a fair amount of their time thinking about math: how exactly do you do it, and why?

To specify, I've tried thinking about math in two rather distinct ways. One is verbal and involves stating terms, definitions, and the logical steps of inference I'm making in my head or out loud, as I frequently talk to myself during this process. This type of thinking is slow, but it tends to work better for actually writing proofs and when I don't yet have an intuitive understanding of the concepts involved.

The other is nonverbal and based on understanding terms, definitions, theorems, and the ways they connect to each other on an intuitive level (note: this takes a while to achieve, and I haven't always managed it) and letting my mind think it out, making logical steps of inference in my head, somewhat less consciously. This type of thinking is much faster, though it has a tendency to get derailed or stuck and produces good results less reliably.

Which of those, if any, sounds closer to the way you think about math? (Note: most of the people I've talked to about this don't polarize it quite so much and tend to do a bit of both, i.e. thinking through a pro... (read more)

4RowanE
I'm only a not-very-studious undergraduate (in physics), and I don't spend an awful lot of time thinking about maths outside of that, but I pretty much only think about maths in the nonverbal way - I can understand an idea when it's verbally explained to me, but I have to "translate it" into nonverbal maths to get use out of it.
3Luke_A_Somers
I don't tend to do a lot of proofs anymore. When I think of math, I find it most important to be able to flip back and forth between symbol and referent freely - look at an equation and visualize the solutions, or (to take one example of the reverse) see a curve and think of ways of representing it as an equation. Since actual numbers will often not be available when visualizing, I tend to think of properties of a Taylor or Fourier series for that graph. I do a visual derivative and integral. That way, the visual part tells me where to go with the symbolic part. Things grind to a halt when I have trouble piecing that visualization together.
1ruelian
This appears to be a useful skill that I haven't practiced enough, especially for non-proof-related thinking. I'll get right on that.
3Strangeattractor
I usually think about math nonverbally. I am not usually doing such thinking to come up with proofs. My background is in engineering, so my education gave me a different sort of approach to math than the people in the math faculty at the university I attended got. Sometimes I do go through a problem step by step, but usually not verbally. I sometimes make notes to help me remember things as I go along - constraints, assumptions, design goals, etc. Explicitly stating these, which I usually do by writing them on paper rather than speaking them aloud if I'm working by myself on a problem, can help. But sometimes I am not working by myself and would say them out loud to discuss them with other people. Also, there is often more than one way to visualize or approach a problem, and I will do all of them that come to mind. I would suggest, to spend more time thinking about math, find something that you find really beautiful about math and start there, and learn more about it. Appreciate it, and be playful with it. Also, find a community where you can bounce ideas around and get other people's thoughts and ideas about the math you are thinking about. Some of this stuff can be tough to learn alone. I'm not sure how well this advice might work; your mileage may vary. When I am really understanding the math, it seems like it goes directly from equations on the paper right into my brain as images and feelings and relations between concepts. No verbal part of it. I dream about math that way too.
2ruelian
I only got to a nonverbal level of understanding of advanced math fairly recently, and the first time I experienced it I think it might have permanently changed my life. But if you dream about math...well, that means I still have a long way to go and deeper levels of understanding to discover. Yay! Follow-up question (just because I'm curious): how do you approach math problems differently when working on them from the angle of engineering, as opposed to pure math?
5Strangeattractor
It seemed to me that the people I knew who were studying pure math spent a lot of time on proofs, and that math was taught to them with very little context for how the math might be used in the real world, and without a view as to which parts were more important than others. In engineering classes we proved things too, but that was usually only a first step to using the concepts to work on some other problem. There was more time spent on some types of math than on others. Some things were considered to be more useful and important than others. Usually some sort of approximations or assumptions would be used, in order to make a problem simpler and able to be solved, and techniques from different branches of math were combined together whenever useful, often making for some overlap in the notation that had to be dealt with. There was also the idea that any kind of math is only an approximate model of the true situation. Any model is going to fail at some point. Every bridge that has been built has been built using approximations and assumptions, and yet most bridges stay up. Learning when one can trust the approximations and assumptions is vital. People can die if you get it wrong. Learning the habit of writing down explicitly what the assumptions and approximations are, and to have a sense for where they are valid and where they are not, is a skill that I value, and have carried over into other aspects of my life. Another thing is that math is usually in service of some other goal. There are design constraints and criteria, and whatever math you can bring in to get it done is welcome, and other math is extraneous. The beauty of math can be admired, but a kludgy theory that is accurate to real world conditions gets more respect than a pretty theory that is less accurate. In fact, sometimes engineers end up making kludgy theory that solves engineering problems into some sophisticated mathematics that looks more formal and has some interesting properties, and then it
3wadavis
As someone employed doing mid-level math (structural design), I'm much like most others you've talked to. The entirely non-verbal intuitive method is fast, and it tends to be highly correct if not accurate. The verbal method is a lot slower, but it lends itself nicely to being put to paper and is great for getting highly accurate if not correct answers. So everything that matters gets done twice, for accurate, correct results. Of course, because it is fast, the intuitive method is preferred for brainstorming; the verbal method then verifies any promising brainstorms.
2ruelian
Could you please explain what you mean by "correct" and "accurate" in this case? I have a general idea, but I'm not quite sure I get it.
1wadavis
Correct and Precise may have been better terms. By correct I mean a result that I have very high confidence in, but that is not precise enough to be usable. By accurate I mean a result that is very precise but with far less confidence that it is correct. As an example, consider a damped oscillation word problem from first year. Just by looking at it, you are very confident that as time approaches infinity the displacement will approach a value, but you don't know that value. Now when you crunch the numbers (the verbal process in the extreme) you get a very specific value that the function approaches, but you have less confidence that that value is correct; you could have made any of a number of mistakes. In this example the classic wrong result is a displacement in the opposite direction from the applied force. This is a very simple example, so it may be hard to separate the non-verbal process from the verbal, but there are many cases where you know what the result should look like, but deriving the equations and relations can turn into a black box.
2ruelian
Right, that makes much more sense now, thanks. One of my current problems is that I don't understand my brain well enough for nonverbal thinking not to turn into a black box. I think this might be a matter of inexperience, as I only recently managed intuitive, nonverbal understanding of math concepts, so I'm not always entirely sure what my brain is doing. (Anecdotally, my intuitive understanding of a problem produces good results more often than not, but any time my evidence is anecdotal there's this voice in my head that yells "don't update on that, it's not statistically relevant!") Does experience in nonverbal reasoning on math actually lend itself to better understanding of said reasoning, or is that just a cached thought of mine?
1wadavis
Doing everything both ways, nonverbal and verbal, has lent itself to better understanding of the reasoning. Which touches on the anecdote problem: if you test every nonverbal result, you get something statistically relevant. If your nonverbal results are right more often than not, testing every result and digging for the mistakes will increase your understanding (disclaimer: this is hard work).
2ruelian
So, essentially, there isn't actually any way of getting around the hard work. (I think I already knew that and just decided to go on not acting on it for a while longer.) Oh well, the hard work part is also fun.
1Richard_Kennaway
Each serves its own purpose. It is like the technical and artistic sides of musical performance: the technique serves the artistry. In a sense the former is subordinate to the latter, but only in the sense that the foundation of a building is subordinate to its superstructure. To perform well enough that someone else would want to listen, you need both. This may be useful reading, and the essays here (from which the former is linked).
1ruelian
reads the first essay and bookmarks the page with the rest Thanks for that, it made for enjoyable and thought-provoking reading.
1Bundle_Gerbe
As someone with a Ph.D. in math, I tend to think verbally in as much as I have words attached to the concepts I'm thinking about, but I never go so far as to internally vocalize the steps of the logic I'm following until I'm at the point of actually writing something down. I think there is another much stronger distinction in mathematical thinking, which is formal vs. informal. This isn't the same distinction as verbal vs. nonverbal, for instance, formal thinking can involve manipulation of symbols and equations in addition to definitions and theorems, and I often do informal thinking by coming up with pretty explicitly verbal stories for what a theorem or definition means (though pictures are helpful too). I personally lean heavily towards informal thinking, and I'd say that trying to come up with a story or picture for what each theorem or definition means as you are reading will help you a lot. This can be very hard sometimes. If you open a book or paper and aren't able to get anywhere when you try do this to the first chapter, it's a good sign that you are reading something too difficult for your current understanding of that particular field. At a high level of mastery of a particular subject, you can turn informal thinking into proofs and theorems, but the first step is to be able to create stories and pictures out of the theorems, proofs, and definitions you are reading.
1Fhyve
I'm a math undergrad, and I definitely spend more time in the second sort of style. I find that my intuition is rather reliable, so maybe that's why I'm so successful at math. This might be hitting into the "two cultures of mathematics", where I am definitely on the theory builder/algebraist side. I study category theory and other abstract nonsense, and I am rather bad (relative to my peers) at Putnam style problems.
1Gunnar_Zarncke
I don't see a clear verbal vs. non-verbal dichotomy - or at least the non-verbal side has lots of variants. To gain an intuitive non-verbal understanding can involve:

* visual aids (from precise to vague): graphs, diagrams, patterns (esp. repetitions), pictures, vivid imagination (esp. for memorizing)
* acoustic aids: rhythms (works with muscle memory too), patterns in the spoken form, creating sounds for elements
* abstract thinking (from precise to vague): logical inference, semantic relationships (is-a, exists, always), vague relationships (discovering that the more of this seems to imply the more of that)

Note: Logical inference seems to be the verbal part you mean, but I don't think symbolic thinking is always verbal. Its conscious derivation may be, though. And I hear that the verbal side, despite lending itself to more symbolic thinking, can nonetheless work its grammar magic on an intuitive level too (though not for me). Personally, if I really want to solve a mathematical problem I immerse myself in it. I try lots of attack angles from the list above (not systematically, but as seems fit). I'm an abstract thinker and don't rely on verbal, acoustic or motor cues a lot. Even visual aids don't play a large role, though I do a lot of sketching, listing/enumerating combinations, drawing relations/trees, tabulating values/items. If I suspect a repeating pattern I may tap along to sound it out. If there is lengthy logical inference involved that I haven't internalized, I speak the rule repeatedly to use the acoustic loop as a memory aid. I play around with it during the day, visualizing relationships or following steps, sometimes until in the evening everything blurs and I fall asleep.
1TsviBT
Personally, the nonverbal thing is the proper content of math---drawing (possibly mental) pictures to represent objects and their interactions. If I get stuck, I try doing simpler examples. If I'm still stuck, then I start writing things down verbally, mainly as a way to track down where I'm confused or where exactly I need to figure something out.
1lmm
I don't really draw that distinction. I'd say that my thinking about mathematics is just as verbal as any other thinking. In fact, a good indication that I'm picking up a field is when I start thinking in the language of the field (i.e. I will actually think "homology group" and that will be a term that means something, rather than "the group formed by these actions...")
1ruelian
Just to clarify, because this will help me categorize information: do you not do the nonverbal kind of thinking at all, or is it all just mixed together?
1lmm
I'm not really conscious of the distinction, unless you're talking about outright auditory things like rehearsing a speech in my head. The overwhelming majority of my thinking is in a format where I'm thinking in terms of concepts that I have a word for, but probably not consciously using the word until I start thinking about what I'm thinking about. Do you have a precise definition of "verbal"? But whether you call it verbal or not, it feels like it's all the same thing.
1ruelian
I don't really have good definitions at this point, but in my head the distinction between verbal and nonverbal thinking is a matter of order. When I'm thinking nonverbally, my brain addresses the concepts I'm thinking about and the way they relate to each other, then puts them to words. When I'm thinking verbally, my brain comes up with the relevant word first, then pulls up the concept. It's not binary; I tend to put it on a spectrum, but one that has a definite tipping point. Kinda like a number line: it's ordered and continuous, but at some point you cross zero and switch from positive to negative. Does that even make sense?
1lmm
It makes sense but it doesn't match my subjective experience.
1ruelian
Alright, that works too. We're allowed to think differently. Now I'm curious, could you define your way of thinking more precisely? I'm not quite sure I grok it.
1lmm
So, I'd say there are three modes of thinking I can identify:

* Normal thinking, what I'm doing the vast majority of the time. I'm thinking by manipulating concepts, which are just, well, things.
* Introspective thinking, where I'm doing the first kind of thinking, and thinking about it. Because the map can't be the territory, when I'm thinking about thinking the concepts I'm thinking about are represented by something simpler than themselves - if you're thinking about thinking about sheep, then the sheep you're thinking about thinking about can't be as complex as the sheep you're thinking about. In fact they're represented either by words, or by something isomorphic to words - labels for concepts. So when I'm thinking about thinking, the thinking-about-thinking is verbal - but the thinking isn't (although there's a light-in-the-fridge effect that might make one think it was).
* Auditory thinking, where I'm thinking in words in my head, planning a speech (or more likely a piece of writing - and most of the time I never actually write or say it). This is the only kind of thinking I'm conscious of doing that really feels verbal, but it feels sensory rather than like thinking in words; I'm hearing a voice in my cartesian theater.

A good semi-rant by Ken White of Popehat on GamerGate. I recommend it as an excellent example of applied rationality and of sorting through the hysterics.

-8Azathoth123
1Vulture
This could be a big deal for the bestiality debate (although conducting the necessary training without falling afoul of the original ethical concerns would probably be a trick).
5NancyLebovitz
A general training in "do want / don't want" for ordinary things like blankets and types of food could go a long way toward solving the problem.
5fubarobfusco
Warning: this comment is a ramble without a conclusion. Horses participating in tell culture? Cool. Preferences and consent are complicated. This line of thinking seems to lead to some interesting places about the idea of consent. I'm increasingly of the opinion that the whole notion of "consent" is socially constructed (that is, learned) — that it is desirable but cannot be assumed to be natural or inherent. People have to learn, not only to ask others' consent, but to recognize when their consent is being asked: not only to ask "Do you want this?" but to know when someone wants them to have and express a preference. Indeed, the idea of developing preferences of one's own has to be learned. (Possibly the whole notion of having an identity, too.) People raised in very controlling households seem to have trouble with this — with formulating and communicating preferences and seeking consent, rather than just ① going ahead and doing things that affect others and then seeing how those others react, or ② expecting others to do the reciprocal. They expect interactions to be, not necessarily forced, but certainly not negotiated. "Better to ask forgiveness than seek permission" is one thing as a maxim for decision-making in a bureaucratic office, but quite another thing in personal relationships! This leads to communications problems between these folks and people who have been taught to exchange consent. For instance, "Would you like to do thus-and-so with me?" for one person can mean "I expect you to do thus-and-so with me and will be disappointed or angry if you don't" whereas for another it can mean "I actually don't know if thus-and-so would be worth doing for us; what do you think?" Previously I thought that this difference was that (to put it overly strongly) people from controlling households had had their free will beaten out of them — that they had been abused or neglected in a way that made them alieve that people would not respect their preferences or diss
1ChristianKl
Consent is really tricky. Imagine a woman sitting at a bar. The woman knows what she's doing and knows that when she smiles in a certain way at a man there's a 90% chance that the man will approach her; however, only in 10% of cases does the man have any idea that the woman did something to make him approach. If the woman initiates an interaction like that, does she have informed consent? Is there some ethical imperative for her to inform the man that she initiated the interaction? To frame the question another way: if all you are doing is triggering the System 1 of the other person to get them to engage in certain actions, but you never ask a question to give System 2 the opportunity to reflect, do you have consent?
1fubarobfusco
Guess cultures are really tricky! If it is indeed the case that everyone knows for certain what the signals mean, then they can be very specific communications of intent and consent: there is not actually any guessing going on! But if the point of using facial expressions and gestures rather than words is that the former are deniable, then it probably can't be the case that everyone knows for certain: deniability relies on ambiguity. If two people have slightly different interpretations of what the signals mean, then they can end up with extremely divergent interpretations of what happened in a particular exchange. For that matter, if everyone in the bar grew up in the same town and went to the same schools, that's a pretty different situation from if the bar is an assemblage of people from wildly different backgrounds who happen to have landed in the same location. (I may be computing from stereotypes in saying this ... but I expect that guess cultures prize uniformity, and fear diversity as a source of confusion; whereas tell cultures may consider uniformity boring, and prize diversity as a source of novelty.) Sexually, it seems to me that if all you are doing is triggering the System 1 of the other person and neither person is waiting around for System 2 to engage and reflect, that may be very hot indeed — Erica Jong's "zipless fuck" — but the failure modes are correspondingly huge.
1ChristianKl
It's possible to send signal A and have the other person not understand what the signal means and do nothing. But it's also possible that they don't understand the signal, yet the signal causes them to feel a certain emotion, and that emotion leads them to engage in an action without their having any idea of the causal chain. The more I learn about how humans work, the more I run into these practical ethical dilemmas. Even worse, to really know what I'm doing I have to experiment, and I'm curious ;)
1Vulture
That seems like a huge leap in terms of capability, though, to add the free parameter of "condition to be started/stopped" somehow.

I've recently started a tumblr dedicated to teaching people what amounts to Rationality 101. This post isn't about advertising that blog, since the sort of people that actually read Less Wrong are unlikely to be the target audience. Rather, I'd like to ask the community for input on what are the most important concepts I could put on that blog.

(For those that would like to follow this endeavor, but don't like tumblr, I've got a parallel blog on wordpress)

Admitting you are wrong.

Highly related: When you even might be wrong, get curious about that possibility rather than scared of it.

6jkadlubo
Exercises in small rational behaviours. E.g. people generally are very reluctant to apologize about anything, even if the case means little to them and a lot to the other person. Maybe it's "if I apologize, that will mean I was a bad person in the first place" thinking, maybe something else. It's a nice exercise: if somebody seems to want something from you or apparently is angry with you when you did nothing wrong, stop for a moment and think: how much will it cost me to just say "I'm sorry, I didn't mean to offend you"? After all, those are just words. You don't have to "win" every confrontation and convince the other person that you are right and their requirements are ridiculous. And if you apologize, in fact you both will have a better day - the other person will feel appreciated and you will be proud you did something right. (A common situation from my experience is that somebody pushes me in a queue, I say "excuse me, but please don't stand so close to me/don't look over my arm when I'm entering the PIN code etc." and then the pusher often starts arguing about how my behaviour is out of line - making both of us and the cashier upset.) Come to think of it, it's a lot like Quirrell's second lesson in HPMoR...
3dthunt
Noticing confusion is the first skill I tried to train up last year, and it's definitely a big one: knowing what your models predict and noticing when they fail is a very valuable feedback loop, and you can't learn from it if you can't even notice it. Picturing what sort of evidence would unconvince you of something you actively believe is a good exercise to pair with picturing what sort of evidence would convince you of something that seems super unlikely. Noticing unfairness there is a big one. Realizing when you are trying to "win" at truthfinding, which is... ugh.
2Manfred
Taking stock of what information you have, and what might be good sources for information, well in advance of making a decision.
1ruelian
Map and territory - why is rationality important in the first place?

I'd like to ask LessWrong's advice. I want to benefit from CFAR's knowledge on improving one's instrumental rationality, but being a poor graduate I do not have several thousand in disposable income nor a quick way to acquire it. I've read >90% of the sequences, but despite having read lukeprog's and Alicorn's sequences I am aware that I do not know what I do not know about motivation and akrasia. How can I best improve my instrumental rationality on the cheap?

Edit: I should clarify, I am asking for information sources: blogs, book recommendations, particularly practice exercises and other areas of high quality content. I also have a good deal of interest in the science behind motivation, cognitive rewiring and reinforcement. I've searched myself and I have a number of things on my reading list, but I wanted to ask the advice of people who have already done, read or vetted said techniques so I can find and focus on the good stuff and ignore the pseudoscience.

[-]cursed160

I've been to several of CFAR's classes throughout the last 2 years (some test classes and some more 'official' ones) and I feel like it wasn't a good use of my time. Spend your money elsewhere.

6hyporational
What made it poor use of your time?
[-]cursed250

I didn't learn anything useful. They taught, among other things, "here's what you should do to gain better habits". Tried it and didn't work on me. YMMV.

One thing that really irked me was the use of cognitive 'science' to justify their lessons 'scientifically'. They did this by using big scientific words that felt like an attempt to impress us with their knowledge. (I'm not sure what the correct phrase is - the words weren't constraining beliefs? didn't pay rent? They could have made up scientific-sounding words and it would have had the same effect.)

Also, they had a giant 1-2 page listing of citations that they used to back up their lessons. I asked some extremely basic questions about papers and articles I've previously read on the list and they had absolutely no idea what I was talking about.

ETA: I might go to another class in a year or two to see if they've improved. Not convinced that they're worth donating money towards at this moment.

(This is Dan from CFAR again)

We have a fair amount of data on the experiences of people who have been to CFAR workshops.

First, systematic quantitative data. We send out a feedback survey a few days after the workshop which includes the question "0 to 10, are you glad you came?" The average response to that question is 9.3. We also sent out a survey earlier this year to 20 randomly selected alumni who had attended workshops in the previous 3-18 months, and asked them the same question. 18 of the 20 filled out the survey, and their average response to that question was 9.6.

Less systematically but in more fleshed-out detail, there are several reviews that people who have attended a CFAR workshop have posted to their blogs (A, B+pt2, C+pt2) or to LW (1, 2, 3). Ben Kuhn's (also linked above under "C") seems particularly relevant here, because he went into the workshop assigning a 50% probability to the hypothesis that "The workshop is a standard derpy self-improvement technique: really good at making people feel like they’re getting better at things, but has no actual effect."

In-person conversations that I've had with alumni (including some interviews that ... (read more)

4MTGandP
I've seen CFAR talk about this before, and I don't view it as strong evidence that CFAR is valuable.

* If people pay a lot of money for something that's not worth it, we'd expect them to rate it as valuable by the principle of cognitive dissonance.
* If people rate something as valuable, is it because it improved their lives, or because it made them feel good?

For these ratings to be meaningful, I'd like to see something like a control workshop where CFAR asks people to pay $3900 and then teaches them a bunch of techniques that are known to be useless but still sound cool, and then asks them to rate their experience. Obviously this is both unethical and impractical, so I don't suggest actually doing this. Perhaps "derpy self-improvement" workshops can serve as a control?
0cursed
Hey Dan, thanks for responding. I wanted to ask a few questions:

You noted the non-response rate for the 20 randomly selected alumni. What about the non-response rate for the feedback survey?

"0 to 10, are you glad you came?" is a biased question, because it presupposes that the person is glad. A similar negative question might ask "0 to 10, are you dissatisfied that you came?" Would it be possible to anonymize and post the survey questions and data?

It's great that you're following up with people long after the workshops end. Why not survey all alumni? You have their emails.

I've read most of the blog posts about CFAR workshops that you linked to - they were one of my main motivations for attending a workshop. I notice that all the reviews are from people who had already participated in LessWrong and related communities (all refer to some prior CFAR, EA, and rationality-related topics before they attended camp). Also, it seems like in-person conversations are subject to availability bias, as the people who attended workshops, know people who work at MIRI/CFAR, or are involved in LW meetups in Berkeley and surrounding areas would contribute to the positivity of these conversations. The evaporative cooling effect may also play a role, in that people who weren't satisfied with the workshop would leave the group. Are there reviews from people who are not already familiar with LW/CFAR staff?

Also, I agree with MTGandP. It would be nice if CFAR could write a blog post or paper on how effective their teachings are, compared to a control group. Perhaps two one-day events, with subjects randomized across both days, would work well as a starting point.

(Dan from CFAR here)

Hi cursed - glad to hear your feedback, though I'm obviously not glad that you didn't have a good experience at the CFAR events you went to.

I want to share a bit of information from my point of view (as a researcher at CFAR) on 1) the role of the cognitive science literature in CFAR's curriculum and 2) the typical experience of the people who come to a CFAR workshop. This comment is about the science; I'll leave a separate comment about thing 2.

Some of the techniques that CFAR teaches are based pretty directly on things from the academic literature (e.g., implementation intentions come straight from Peter Gollwitzer's research). Some of our techniques are not from the academic literature (e.g., the technique that we call "propagating urges" started out in 2011 as something that CFAR co-founder Andrew Critch did).

The not-from-the-literature techniques have been through a process of iteration, where we theorize about how we think the technique works, then (with the aid of our best current model) we try to teach people to use the technique, and then we get feedback on how it goes for them. Then repeat. The "theorizing" step of this process inclu... (read more)

4Jackercrack
Do you think it was unhelpful because you already had a high level of knowledge on the topics they were teaching and thus didn't have much to learn or because the actual techniques were not effective? Do you think your experience was typical? How useful do you think it would be to an average person? An average rationalist?
[-]cursed120

Do you think it was unhelpful because you already had a high level of knowledge on the topics they were teaching and thus didn't have much to learn or because the actual techniques were not effective?

I don't believe I had a high level of knowledge on the specific topics they were teaching (behavior change, and the like). I did study some cognitive science in my undergraduate years, and I take issue with the 'science'.

Do you think your experience was typical?

I believe that the majority of people don't get much, if anything, from CFAR's rationality lessons. However, after the lesson, people may be slightly more motivated to accomplish whatever they want to in the short term, simply because they've paid money towards a course to increase their motivation.

How useful do you think it would be to an average person?

There was one average person at one of the workshops I attended - i.e., someone who had never read LessWrong or other rationality material. He fell asleep a few hours into the lesson; I don't think he gained much from attending. I'm hesitant to extrapolate, because I'm not exactly sure what an average person entails.

An average rationalist?

I haven't met many rationalists, but I believe they wouldn't benefit much, if at all.

2Jackercrack
Well, that's a bit dispiriting, though I suppose looking back my view of CFAR was a bit unrealistic. Downregulating the chance that CFAR is some kind of panacea.
0[anonymous]
Well, that's a bit dispiriting, but thanks for responding anyway. Was this recently or when they were just starting up?
7gjm
(Apologies for the slight thread hijack here.)

It occurs to me that CFAR's model of expensive workshops and generous grants to the impoverished (note: I am guessing about the generosity) is likely to produce rather odd demographics: there's probably a really big gap between (1) the level of wealth/income at which you could afford to go, and (2) the level of wealth/income at which you would feel comfortable going, especially as -- see e.g. cursed's comments in this thread -- it's reasonable to have a lot of doubt about whether they're worth the cost. (The offer of a refund mitigates that a bit.)

Super-handwavy quantification of the above: I would be really surprised if a typical person whose annual income is $30k or more were eligible for CFAR financial aid. I would be really surprised if a typical person whose income is $150k or less were willing to blow $4k on a CFAR workshop. (NB: "typical". It's easy to imagine exceptions.)

Accordingly, I would guess that a typical CFAR workshop is attended mostly by people in three categories:

  • impoverished grad students, etc., who are getting big discounts;
  • people on six-figure salaries, many of them quite substantial six-figure salaries; and
  • True Believers who are exceptionally convinced of the value of CFAR-style rationality, and willing to make a hefty sacrifice to attend.

I'm not suggesting that there's anything wrong with that. In fact, it strikes me as a pretty good recipe for getting an interesting mix of people. But it does mean there's something of a demographic "hole".
2Jackercrack
I rather think there may be demand for a cheaper, less time-dependent method of attending. It may be several seasons before they end up back in my country, for example. Streaming or recording the whole thing and selling the video package seems like it could still get a lot of the benefits across. Their current strategy only really makes sense to me if they're still in the testing and refining stage.
1ChristianKl
I think they are. If everything goes well, they will have published papers that prove their stuff works by the time they move out of the testing and refining stage.
1Jackercrack
Any idea how long that will be (months, years, decades)?
0dthunt
You can always shoot someone an email and ask about the financial aid thing, and plan a trip stateside around a workshop if, with financial aid, it looks doable - and if, after talking to someone, it looks like the workshop would predictably have enough value that you should do it now rather than when you have more time and money.
7RomeoStevens
CFAR has financial aid. Also, attending LW meetups and asking about organizing meetups based on instrumental rationality material is cheap and fun.
3Jackercrack
Somehow I doubt the financial aid will stretch to the full amount, and my student debt is already somewhat fearsome. I'm on the LW meetups already, as it happens. I'm currently attempting to have my local one include more instrumental rationality, but I lack a decent guide to what methods work, what techniques to try, or what games are fun and useful. For that matter, I don't know what games there are at all beyond a post or two I stumbled upon.
7Vaniver
You could ask Metus how much they covered for them, or someone at CFAR how much they'd be willing to cover. The costs for asking are small, and you won't get anything you don't ask for.
4Jackercrack
Fair point, done. On a related note, I wonder how I can practice convincing my brain that failure does not mean death like it did in the old ancestral environment.
[-]Metus100

Exposure therapy: Fail on small things, then larger ones, where it is obvious that failure doesn't mean death. First remember past experiences where you failed and did not die, then go into new situations.

5ChristianKl
CFAR suggests doing exercises to extend your comfort zone for that purpose.
4NancyLebovitz
Even in the ancestral environment, not all failures (I suspect a fairly small proportion of them) meant death.

Seeking LWist Caricatures

I've written a cult-like "Bayesian Conspiracy" of mostly rebellious post-apocalypse teens into existence - and now I'm looking for individuals to populate it with. What I /want/ to do is come up with as many ways as possible that someone who's part of the LW/HPMOR/Sequences/Yudkowsky-ite/etc memeplex could go wrong - ways that tend not to happen to members of the regular skeptical community. Someone who's focused on a Basilisk, someone on Pascal's Mugging, someone focused on dividing up an infinity of timelines into unequal groups...

Put another way, I've been trying to think of the various ways that people outside the memeplex see those inside it as weirdos.

(My narrative goal: For my protagonist to experience trying to be a teacher. I'd be ecstatic if I could have at least one of the cultists be able to teach her a thing or two in return, but since I've based her knowledge of the memeplex on mine, that's kind of tricky to arrange.)

I can't guarantee that I'll end up spending more than a couple of sentences on any of this - but I figure that the more ideas I have to try building with, the more likely I will.

(Also asked on Reddit at https://www.reddit.com/r/rational/comments/2kopgx/qbst_seeking_lwist_caricatures/ .)

[-]philh100

The person who uses ev psych to justify their romantic preferences to potential and current partners. (There's a generalisation of this that I'm not sure how to describe, but I've fallen into it when talking with friends about the game-theoretical value of friendship.)

4fubarobfusco
One possible generalization: Being insecure about personal preferences, and so seeking to show that one's personal likes are rooted directly in something universal — something outside one's own personal history, culture, subculture, upbringing, etc.
2skeptical_lurker
If the problem is that you shouldn't have to justify your romantic preferences, then I can see where you are coming from; but if you do want a justification, what is wrong with evo psych?
2NancyLebovitz
Evo psych tends to be too general and too unproven.
0skeptical_lurker
I dunno if that's true, but regardless, it's a general argument against evo psych, rather than an argument against using evo psych to justify romantic preferences.
8fubarobfusco
The person who airs fringe supremacist (or even eliminationist) views ... then is surprised and offended when members of the targeted groups shun him or her instead of arguing the points as if they were a matter of abstract intellectual interest. No, wait, that's probably not LW-specific enough.
0[anonymous]
I dunno, it seems to be happening here a disturbingly large amount lately.
8ChristianKl
Calculating Bayes' rule for everything can seem quite weird to a lot of people. I remember a case where someone found it weird that another person asked on LW how to do a Bayesian calculation for the likelihood that a specific girl liked him. Calculating probabilities for many everyday issues is hugely weird to many people; you might even have to take care to make it sound believable when you describe a real-world character. I remember an anecdote about a person doing a utility calculation that suggested having sex without a condom, and thus being exposed to the chance of getting AIDS, was quite okay. Another of those things CFAR preaches that can be seen as pretty weird is purposeful comfort zone extension. It's the kind of topic where you also have to worry about believability if you just tell real-world stories.
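For concreteness, a minimal sketch of the kind of everyday Bayes calculation described above; every number in it is invented for illustration:

```python
# Every probability here is invented for illustration.
prior = 0.3             # P(H): she likes him, before any evidence
p_e_given_h = 0.8       # P(E|H): she laughs at his jokes, given she does
p_e_given_not_h = 0.4   # P(E|~H): she laughs at his jokes anyway

# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)
p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
posterior = p_e_given_h * prior / p_e
print(f"P(likes him | laughs at his jokes) = {posterior:.2f}")  # 0.46
```

The arithmetic is trivial; the weirdness people react to is applying it to this subject matter at all.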
3Lumifer
And rightly so. The great majority of people are badly calibrated, can't estimate priors properly, etc. If they tried to calculate probabilities for "many everyday issues" I would bet most of them would land straight in the valley of bad rationality.
0Azathoth123
Heck, many people here can't do it right. I'm thinking in particular of the recent thread about computing the probability of UFOs or aliens.
6[anonymous]
Someone who applies useful effective behaviors towards the achievement of a ridiculous or reprehensible end goal.
1DataPacRat
I think I have this one covered; my character entry is simply "I wanna be a pony!". (And, now that I think about it, my protagonist has said that if they don't have any other end goals they can think of, they're going to act as if their end goal is to "read comics".)
6IlyaShpitser
When judging how weird a community is, people often approximate a kind of "weirdness PageRank" by looking at the people the community holds in high esteem. I think Yudkowsky can come across as weird and offputting to some folks (not in person, but online. This is a bit of a tangent, but I think it is very interesting to think about the systematic ways our online and offline personas differ, and why they do so). If people perceive that, their alarms immediately go off and they conclude folks are brainwashed, since they are not seeing the weirdness themselves.
2DataPacRat
This can add some useful background detail. My protagonist is acting as a pseudo-Yudkowsky to the group, and has already been called the "Mad Queen" at least once.
4Sjcs
The lurker, who may not be gaining as much utility as they would if they participated. However, they are still perceived the same way (or to some degree the same way) by those outside the memeplex, due to their association with the group. These outside perceptions may be either good or bad.

Hey, does anyone else struggle with feelings of loneliness?

What strategies have you found for either dealing with the negative feelings, or addressing the cause of loneliness, and have they worked?

Do you feel lonely because you spend your time alone, or because you feel you don't connect with the people with whom you spend your time?

Two separate problems.

4dthunt
Not feeling connected with people, or, increasingly, feeling less connection with people. I actively socialize myself, and this helps, but the other thing suggests to me I may be doing something wrong. (Edit: to clarify, my empathy thingy works as well as (maybe better than) it ever has; I just feel like the things I crave from social interactions are getting harder to acquire. Like people "getting" you, or having enough things in common that you can effectively talk about the stuff that interests you. So, obviously, one of the solutions there is to hang out with more bright-and-happy CFAR-ish/LW-ish/EA-ish people.)
1Ben_LandauTaylor
I found the Nonviolent Communication method extremely helpful for feeling more connected to my friends.
0ChristianKl
www.meetup.com can be a good place to find groups of like-minded people.
8cousin_it
In my experience, "dealing with the negative feelings" is useless, because if you deal with them today and you're still lonely tomorrow, the feelings will just come back. It's better to find people who are interested in the same things as you, and hang out with them.
6Manfred
Joining clubs is good - especially if you're willing to put in enough work for it to be implicitly joining a social scene (unfortunately, this bit has plenty of caveats, but trial and error sometimes works fine). Do you make music? There are scenes for that. Dance, ditto. Playing card games, ditto. LW is almost big enough to work for this, actually - certainly if one lives in a big city.
2IlyaShpitser
Sometimes negative emotions are just bad weather -- you have to get stuff done anyway. I also agree with and second the sensible advice below on dealing with causes.
0MrMind
On one side, a feeling of loneliness is a signal that I should socialize and connect more. Other times, though, decisions and actions taken under that emotion turned out to be pretty bad: it would have been better to just be and feel alone. I have thus filled up my week but have left slots of time to be alone, and I know that any feeling of loneliness I get then is just a withdrawal symptom. I've filled my social life with dancing classes, founding a local go club, a teaching class, and time to go out with my generic friends. On the other side, when I still feel alone I just take some minutes to sit quietly and imagine being in a pleasant social or sexual situation, trying to focus on every detail. This is usually more than enough to clear me of any negative state of mind.

Bayesianism and Causality, or, Why I am only a Half-Bayesian (Judea Pearl)

“The bulk of human knowledge is organized around causal, not probabilistic relationships, and the grammar of probability calculus is insufficient for capturing those relationships.”

[-]Dias80

Suppose I was an unusually moral, unusually insightful used-car saleswoman. I have studied the dishonest sales techniques my colleagues use and, because I am unusually wise, worked out the general principles behind them. I think it is plausible that this analysis is new, though I guess it could already exist in an obscure journal.

Is it moral of me to publish this research, or should I practice the virtue of silence?

  • It might help people resist such techniques.
  • It might help salesmen employ these immoral techniques better.
  • Salesmen are more likely to already
... (read more)
6gjm
Robert Cialdini did something a bit like this in researching his book "Influence", and so far as I can tell pretty much everyone agrees it's a good thing he wrote it. I suspect attitudes to your doing this would depend on what your publication looked like. You could write

  • a book called "Secrets of Successful Second-hand Sales", aimed at used car salespeople, advising them on how to manipulate their customers;
  • a book called "Secrets of the Sinister Second-hand Sellers", aimed at used car buyers, advising them on what sort of things they should expect to be done to them and how to see through the bullshit and resist the manipulation;
  • a book called "A Scientific Study of Second-hand Sales Strategies", aimed at psychologists and other interested parties, presenting the information neutrally for whatever use anyone wants to make.

(As an unusually moral person you probably wouldn't actually want to write the first of those books. But some others in a similar situation might.) My gut reaction to the first would be "ewww", to the second would be "oh, someone trying to drum up sales by attention-grabbing hype", and to the third would be "hey, that's interesting". Other people's guts may well differ from mine. Cialdini's book is mostly the third, with a little touch of the second.
8ChristianKl
And read by people who want to read the first ;)
1gjm
And also who want to read the second or the third. But yes, of course, writing for one audience won't stop others taking advantage.
6Douglas_Knight
I estimate that 95% of readers of Cialdini read it for business.
2ChristianKl
I think it depends very much on the case. There are things in the social-skill space that I discovered via experimentation that I don't openly share. Salesmen aren't the only people who care about getting people to make decisions. In medicine, compliance is pretty important, and choice engineering as a field isn't completely evil. Understanding our decision making can also give us insight into issues like akrasia.

There have been discussions here in the past about whether "extreme", lesswrong-style rationality is actually useful, and why we don't have many extremely successful people as members of the community.

I've noticed that Ramit Sethi often uses concepts we talk about here, but under different names. I'm not sure if he's as high a level as we're looking for as evidence, but he appears to be extremely successful as a businessman. I think he started out in life/career coaching, and then switched to selling online courses when he got popular. His stuff ... (read more)

Side point: I've found material like his ("concepts we talk about here, but under different names") extremely useful when I want to explain the idea of rationality to someone without having to work around the lesswrong lingo and try to have a conversation while tabooing all the lesswrong phrases and cached thoughts.

6IlyaShpitser
Yes! In my opinion, it's a great habit to be on the lookout for things under a different name. This is the "academic coordination problem:" things are often rediscovered again and again, because people have incentives to write but not to read.
2hyporational
I'm not sure the community has been around long enough for this to be a useful kind of measurement. Success doesn't happen in an instant, and there's a lot of turnover. People who are already successful don't have much pressure to join in.
4RowanE
Additionally, "extreme success" is usually defined in zero-sum terms that make it definitionally extremely rare, and chance strongly influences whether one achieves success in most fields. So a community as small as ours with "not many extremely successful people" may still be completely worthwhile, and may have a high rate of extreme success per capita compared to most groups.
2wadavis
Fully agree that he uses concepts from less wrong, under different names. And I've seen him referenced frequently on less wrong as somewhere to look for rational financial/career advice. I follow his free material; it has provided me with inspiration, direction, and confidence to aggressively pursue increased compensation, successfully. I've been tempted to purchase his material before, but am always discouraged at the last second by the smell of snake oil.
3mare-of-night
I've been doing the same thing for a while. I also get turned off a bit by the snake oil, and I've been following some of the mailing lists long enough that the content starts to feel repetitive. I might still buy, if he ever put out anything inexpensive (doesn't seem likely, but Jeff Walker did a while ago even though his business has a similar strategy, so it might happen...). I wonder if everyone gets that slight snake-oil feeling from him? And in particular, whether the kinds of marketing he's using still work when the reader recognizes what tactic is being used.
0wadavis
The question kept coming up: if I can smell snake oil, am I the target audience? Even if it is legit and honest (I think it is), it kept reminding me of Nigerian phishers using poor language to discourage all but the most gullible from wasting their time.

Those who are currently using Anki on a mostly daily or weekly basis: what are you studying/ankifying?

To start: I'm working on memorizing programming languages and frameworks because I have trouble remembering parameters and method names.

6Emile
These days, most of my time on Anki is on Japanese (which I'm learning for fun) and Chinese (which I already know, but I'm brushing up on tones and characters). Looking through my decks, I also have decks on:

  • Algorithms and data structures (from a couple of books I read on that)
  • Communication (misc. tips on storytelling, giving talks, etc.)
  • Game Design (insights and concepts that seemed valuable)
  • German
  • Git and Unix command line commands
  • Haskell
  • Insight (misc. stuff that seemed interesting/important)
  • Mnemonics
  • Productivity (notes from Lukeprog's posts and various other sources)
  • Psychology and neuroscience
  • Rationality Habits (one of the few decks I have that came ready-made, from Anna Salamon I think, though I also added some stuff and deleted other bits)
  • Statistics
  • Web Technologies (some stuff on Angular JS and CSS that I got tired of looking up all the time)

(also a few minor decks with very few cards) I review those pretty much every day (I sometimes leave a few unfinished, depending on how much idle time I have in queues, transport, etc.)
3cursed
That's fantastic. How many cards total do you have, and how many minutes a day do you study?
3Emile
Apparently I have 6887 cards (though that includes those I suspended because they're boring, useless, too difficult, duplicated, or possibly wrong; I tend to suspend cards instead of deleting them). Of those, around 3000 are Chinese pinyin cards I automatically created with a Python script (I set them up to get between 1 and 5 new ones per day, depending on how busy I tend to be), 1000 are Japanese (the biggest deck of manually-entered cards), and the remaining decks rarely go over 300 cards. I study probably between 20 and 40 minutes per day, usually on public transit or during "downtime" (waiting in line, carrying the baby around the house hoping for him to sleep, in the restroom, the elevator...). The time depends on how many new cards I entered recently.
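The script itself isn't shown in the thread; as a minimal sketch of the general approach, one could generate a tab-separated file, which Anki can import as front/back cards. The word list and romanizations below are placeholders:

```python
# Hypothetical sketch: write a tab-separated file that Anki can import
# as front/back cards. The word list here is a placeholder.
words = {
    "你好": "ni3 hao3",
    "谢谢": "xie4 xie",
    "朋友": "peng2 you5",
}

with open("pinyin_cards.txt", "w", encoding="utf-8") as f:
    for hanzi, pinyin in words.items():
        # One note per line: front <TAB> back
        f.write(f"{hanzi}\t{pinyin}\n")
```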
4philh
Geography: "what direction [relative to central london] is this tube stop in?", English counties (locations), U.S. states (locations, capitals), Canadian territories and provinces (locations and capitals), countries (locations, capitals, and at some point I'll add flags). (Most of these came from ankiweb originally, but I had to add reverse cards.)

Bayes: conversions between odds, probabilities and decibels (specific numbers and, more recently, the general formulas).

Miscellaneous: the NATO phonetic alphabet, logs (base 2 of 1.25, 1.5, 1.75, and base 10 of 2 through 9), some words I can never remember how to spell (this turns out not to help), some computer stuff (the order of the arguments in python's datetime.strptime, and the difference between a left join and a right join), some definitions in machine learning, some historical dates (e.g. wars, first moon landing, introduction of the Model T), some historical inflation rates, some astronomical facts. Also a deck based on the twelve virtues of rationality essay. (This one and most of the Bayes one I found through LW.)

I'm not sure most of this is useful, but most of it hasn't cost me significant effort either.
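For reference, the conversions in that Bayes deck follow from the usual definitions (odds = p/(1-p), decibels = 10*log10(odds)); a minimal sketch:

```python
import math

# Odds/probability/decibel conversions, using the usual definitions.

def prob_to_odds(p):
    return p / (1 - p)

def odds_to_prob(o):
    return o / (1 + o)

def odds_to_decibels(o):
    return 10 * math.log10(o)

def decibels_to_odds(db):
    return 10 ** (db / 10)

# 75% probability = 3:1 odds ~ 4.77 decibels of evidence
print(prob_to_odds(0.75))                    # 3.0
print(odds_to_decibels(prob_to_odds(0.75)))  # ~4.77
```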
5Scott Garrabrant
If you memorize logs, I recommend memorizing natural logs of primes. This is all you need to quickly calculate natural log, log_2, and log_10 of any integer. You get ln of any number by adding together the natural logs of its prime factors, and you get log_m of n by the formula log_m(n) = ln(n)/ln(m). (Maybe memorize ln(10) too, to make the calculation a little easier.)
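A minimal sketch of the technique, with the prime logs rounded to two decimals as one might memorize them:

```python
import math

# Memorized natural logs of small primes, rounded to two decimals.
LN_PRIME = {2: 0.69, 3: 1.10, 5: 1.61, 7: 1.95}

def ln_from_primes(n):
    """Approximate ln(n) by summing the memorized logs of n's prime factors."""
    total = 0.0
    for p, ln_p in LN_PRIME.items():
        while n % p == 0:
            total += ln_p
            n //= p
    assert n == 1, "n has a prime factor that isn't memorized"
    return total

# log_m(n) = ln(n) / ln(m)
print(ln_from_primes(12))                       # 2.48  (true ln(12): 2.4849)
print(ln_from_primes(100) / ln_from_primes(2))  # 6.67  (true log2(100): 6.64)
```

With four memorized constants this covers any integer whose prime factors are at most 7; memorizing a few more primes extends the range.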
3philh
I can't do real division in my head, but if I wanted to maximise my logarithm-ability while minimizing my number of cards, I would go for logs base (probably 10) of primes, plus 1/log(e) and 1/log(2). But I'm not too fussed about minimizing cards, or about natural logs. Learning more primes might be helpful, but I can get them approximately. E.g. I don't have log_10(11) memorized, but I know it's between log_10(10) and log_10(2*6), which are 1 and 1.08, and that it would be closer to the latter (my calculator says 1.041, which is slightly lower than I would have guessed, but if I put it in Anki I'd only go to 1.04 anyway).

I've seen a few discussions recently where people seem to argue past one another because they're using different senses of the terms "subjective" and "objective".

Some things are called "subjective" because they are parametrized by subject. For instance, everyone who can see has a field of vision, but no two people have the same field of vision (because two people can't stand in the same spot at the same time). However, we can reason and calculate accurately about someone else's field of vision.

Other things are called "sub... (read more)

0ChristianKl
I think various people are better at mood detection via reading body language than brain scans are. Both brain scans and reading body language are cases where you have partial information and use it to do pattern matching. I have had multiple experiences of meeting people who could perceive my own mood better than I could myself. There are many times when I get a better idea of someone's mood by hugging that person than by asking them verbally and having them tell me how they feel.

NY Times on the wrongness of political party-related discrimination.

4fubarobfusco
I doubt this generalizes very well. There have clearly been cases in the history of the world where one party made it clear that they really did intend to hurt or kill their perceived opponents. And then, after acceding to power, went on to do just that. I've seen remarks here on LW from at least one person in a central European country that he or she felt increasingly personally unsafe due to particular political factions in that country producing increasingly violent rhetoric. I would not tell that person that he or she would be wrong to shun people who advocated political violence against him or her. Here in the U.S., it sure seems that political eliminationist rhetoric (of the "All the Other Party should be killed as traitors" sort) is produced largely as a form of commercial entertainment, not serious political advocacy. But I say that from a position of relative security and privilege ....
0Lumifer
David Brooks is more or less correct about the US, where the two mainstream parties are not very distinguishable. He is entirely wrong about many other places in the world. There are enough countries where someone's political views are "a marker for basic decency". P.S. I am amused by a piece of incidental research he cites: that is called blatant racism, and in the case of s/black/white/ it would be cause for much hand-wringing, soul-searching, and probably obligatory "diversity training" for everyone.

Where are you right, while most others are wrong? Including people on LW!

My thoughts on the following are rather disorganized and I've been meaning to collate them into a post for quite some time but here goes:

Discussions of morality and ethics in the LW-sphere overwhelmingly tend to short-circuit to naive harm-based consequentialist morality. When pressed, I think most will state a far-mode meta-ethical version that acknowledges other facets of human morality (disgust, purity, fairness, etc.) that would get wrapped up into a standardized utilon currency (I believe CEV is meant to do this?), but when it comes to actual policy (EA) there is too much focus on optimizing what we can measure (lives saved in Africa) instead of what would actually satisfy people. The drunken moral philosopher looking under the lamppost for his keys because that's where the light is. I also think there's a more-or-less unstated assumption that considerations other than Harm are low-status.

4Azathoth123
Ah, yes. The standard problem with measurement-based incentives: you start optimizing for what's easy to measure.
3Larks
Do you have any thoughts on how to do EA on the other aspects of morality? I think about this a fair bit, but run into the same problem you mentioned. I have had a few ideas but do not wish to prime you. Feel free to PM me.

It is extremely important to find out how to have a successful community without sociopaths.

(In far mode, most people would probably agree with this. But when the first sociopath comes, most people would be like "oh, we can't send this person away just because of X; they also have so many good traits" or "I don't agree with everything they do, but right now we are in a conflict with the enemy tribe, and this person can help us win; they may be an asshole, but they are our asshole". I believe that avoiding these - and maybe many other - failure modes is critical if we ever want to have a Friendly society.)

It is extremely important to find out how to have a successful community without sociopaths.

It seems to me there may be more value in finding out how to have a successful community with sociopaths. So long as the incentives are set up so that they behave properly, who cares what their internal experience is?

(The analogy to Friendly AI is worth considering, though.)

0Azathoth123
Ok, so start by examining the suspected sociopath's source code. Wait, we have a problem.
8ChristianKl
What do you mean by the phrase "sociopath"? A person who's very low on empathy and follows intellectual utility calculations might very well donate money to effective charities and do things that are good for this community, even when the same person fits the profile of what gets clinically diagnosed as sociopathy. I think this community should be open to non-neurotypical people with low empathy scores, provided those people are willing to act decently.
8Viliam_Bur
I'd rather avoid going too deeply into definitions here. Sometimes I feel that if a group of rationalists were in a house that is on fire, they would refuse to leave the house until someone gave them a very precise definition of what exactly "fire" means, and how it differs on the quantum level from the usual everyday interaction of molecules. Just because I cannot give you a bulletproof definition in a LW comment, it does not mean the topic is completely meaningless.

Specifically, I am concerned about the type of people who are very low on empathy and whose utility function does not include other people. (So I am not speaking about e.g. people with alexithymia or similar.) Think: professor Quirrell, in real life. Such people do exist.

(I once had a boss like this for a short time, and... well, it's like an experience from a different planet. If I tried to describe it using words, you would probably just round it to the nearest neurotypical behavior, which would completely miss the point. Imagine a superintelligent paperclip maximizer in a human body, and you will probably have a better approximation. Yeah, I can imagine how untrustworthy this sounds. Unfortunately, that also is a part of a typical experience with a sociopath: first, you start doubting even your own senses, because nothing seems to make sense anymore, and you usually need a lot of time afterwards to sort it out, by which point it is already too late to do something about it; second, you realize that if you try to describe it to someone else, there is no chance they will believe you unless they have already had this type of experience.)

I'd like to agree with the spirit of this. But there is the problem that the sociopath would optimize their "indecent" behavior to make it difficult to prove.
7ChristianKl
I'm not saying that the topic is meaningless. I'm saying that if you call for discrimination against people with a certain psychological illness, you should know what you are talking about. The base rate for clinical psychopathy is sometimes cited as 5%.

In this community there are plenty of people who don't have a properly working empathy module, probably more than average in society. When Eliezer says that, based on typical-mind issues, he feels that everyone who says "I feel your pain" has to be lying, that suggests a lack of a working empathy module. If you read back the first April article, you find wording about "finding willing victims for BDSM". The desire for causing other people pain is there. Eliezer also ticks other boxes, such as a high belief in his own importance for the fate of the world, that are typical for clinical psychopathy. Promiscuous sexual behavior is on the checklist for psychopathy, and Eliezer is poly.

I'm not saying that Eliezer clearly falls under the label of clinical psychopathy; I have never interacted with him face to face and I'm no psychologist. But part of being rational is that you don't ignore patterns that are there. I don't think this community would benefit overall from kicking out people who tick multiple marks on that checklist. Yvain is smart enough not to gather data on the number of LW members diagnosed with psychopathy when he asks about mental illnesses. I think it's good that way.

If you actually want to do more than just signal that you like people to be friendly and get applause, then it makes a lot of sense to specify which kind of people you want to remove from the community.
6Viliam_Bur
I am not an expert on this, but I think the kind of person I have in mind would not bother to look for willing BDSM victims. From their point of view, there are humans all around, and their consent is absolutely irrelevant, so they would optimize for some other criteria instead. This feels to me like worrying about a vegetarian who eats "soy meat" because it exposes their unconscious meat-eating desire, while there are real carnivores out there.

I am not even sure if "removing a kind of people" is the correct approach. (Fictional evidence says no.) My best guess at this moment would be to create a community where people are more open with each other, so that when some person harms another person, it is easily detected, especially if there is a pattern. That in turn has a possible problem with false reporting, which maybe could also be solved by noticing patterns.

Speaking about society in general, experience shows that sociopaths are likely to gain power in many kinds of organizations. It would be naive to expect that rationalist communities would somehow be immune to this, especially if we start "winning" in the real world. Sociopaths have an additional natural advantage: they have more experience dealing with neurotypicals than neurotypicals have dealing with sociopaths. I think someone should at least try to solve this problem, instead of pretending it doesn't exist or couldn't happen to us. Because it's just a question of time.
4ChristianKl
Human beings frequently like to think of people they don't like and understand as evil. There are various very bad mental habits associated with that.

Academic psychology is a thing. It actually describes how certain people act. It describes how psychopaths act. They aren't just evil; their emotional processes are skewed in systematic ways.

Translated into everyday language, that's: "Rationalists should gossip more about each other." Whether we should follow that maxim is a quite complex topic on its own, and if you think it's important, write an article about it and actually address the reasons why people don't like to gossip.

You are not really addressing what I said. It's very likely that we have people in this community who fulfill the criteria of clinical psychopathy. I also remember an account of a person who said they trusted another person from a LW meetup - a self-declared egoist - too much and ended up with a bad interaction, because they didn't take at face value the openness of a person who said they only care about themselves.

Given your moderator position, do you think that you want to do something to garden but lack the power at the moment? Especially for dealing with the obvious case? If so, that's a real concern. Probably worth addressing more directly.

Unfortunately, I don't feel qualified enough to write an article about this, nor to analyze the optimal form of gossip. I don't think I have a solution. I just noticed a danger, and a general unwillingness to debate it.

Probably the best thing I can do right now is to recommend good books on this topic. That would be:

  • The Mask of Sanity by Hervey M. Cleckley; specifically the 15 examples provided; and
  • People of the Lie by M. Scott Peck; this book is not scientific, but is much easier to read

I admit I do have some problems with moderating (specifically, the reddit database is pure horror, so it takes a lot of time to find anything), but my motivation for writing in this thread comes completely from offline life.

As a leader of my local rationalist community, I was wondering about the things that could happen if the community becomes larger and more successful. Like, if something bad happened within the community, I would feel personally responsible for the people I had invited there with visions of rationality and "winning". (And "something bad" offline can be much worse than mere systematic downvoting.) Especially if we would achieve some kind of power in real li... (read more)

4Lumifer
Can you express what you want to protect against while tabooing words like "bad", "evil", and "abuse"?
1ChristianKl
In an ideal world we could fully trust all people in our tribe to do nothing bad: simply because we had known a person for years, we could trust them to do good. That's not a rational heuristic. Our world is not structured in a way where the amount of time we have known a person is a good heuristic for the amount of trust we can give that person. There are a bunch of people I've met through personal development whom I trust very easily, because I know the heuristics those people use.

If you have someone in your local LW group who tells you that his utility function is to maximize his own utility, and who doesn't have empathy that would make him feel bad when he abuses others, the rational thing is to not trust that person very much. But if you use that as a criterion for kicking people out, people won't be open about their own beliefs anymore.

In general, giving a lot of trust to people who tick half of the criteria that constitute clinical psychopathy isn't a good idea. On the other hand, LW is by default inclusive and not structured in a way where it's a good idea to kick out people on such a basis.
5Nornagest
Intelligent sociopaths generally don't go around telling people that they're sociopaths (or words to that effect), because that would put others on their guard and make them harder to get things out of. I have heard people saying similar things before, but they've generally been confused teenagers, Internet Tough Guys, and a few people who're just really bad at recognizing their own emotions -- who also aren't the best people to trust, granted, but for different reasons. I'd be more worried about people who habitually underestimate the empathy of others and don't have obviously poor self-image or other issues to explain it. Most of the sociopaths I've met have had a habit of assuming those they interact with share, to some extent, their own lack of empathy: probably typical-mind fallacy in action.
1ChristianKl
They usually won't say it in a way that they predict will put other people on guard. On the other hand, that doesn't mean they don't say it at all. I can't find the link at the moment, but a while ago someone posted on LW that he shouldn't have trusted another person from a LW meetup who openly said those things and then acted accordingly. Categorising Internet Tough Guys is hard. Base rates for psychopathy aren't that low, but you are right that not everyone who says those things is a psychopath. Even so, it's a signal for not giving full trust to that person.
2Lumifer
(a) What exactly is the problem? I don't really see a sociopath getting enough power in the community to take over LW as a realistic scenario. (b) What kind of possible solutions do you think exist?
0Azathoth123
What do you mean by "harm"? I have to ask because there is a movement (commonly called SJW) pushing an insanely broad definition of "harm". For example, if you've shattered someone's worldview, have you "harmed" him?
3Viliam_Bur
Not per se, although there could be some harm in the execution. For example, if I decide to follow someone home from work every day screaming at them "Jesus is not real", the problem is with me following them every day, not with the message. Or, if they are at the funeral of their mother and the priest is saying "let's hope we will meet our beloved Jane in heaven with Jesus", that would not be a proper moment to jump up and scream "Jesus is not real".
2Vaniver
Steve Sailer's description of Michael Milken: Is that the sort of description you have in mind?

I really doubt it's possible to convey this in mere words. I had previous experience with abusive people, I studied psychology, I heard stories from other people... and yet all this left me completely unprepared, and I was confused and helpless like a small child. My only luck was the ability to run away.

If I tried to estimate a sociopathy scale from 0 to 10, in my life I have personally met one person who scores 10, two people somewhere around 2, and most nasty people were somewhere between 0 and 1, usually closer to 0. If I hadn't met that one specific person, I would believe today that the scale only goes from 0 to 2; and if someone tried to describe to me what a 10 looks like, I would say "yeah, yeah, I know exactly what you mean" while having a model of 2 in my mind. (And who knows; maybe the real scale goes up to 20, or 100. I have no idea.)

Imagine a person who does gaslighting as easily as you breathe; probably after decades of everyday practice. A person able to look into your eyes and say "2 + 2 = 5" so convincingly they will make you doubt your previous experience and believe you just misunderstood or misremembered something. Then you go aw... (read more)

1Azathoth123
Not a person, but I've had similar experiences dealing with Cthulhu and certain political factions.
2Viliam_Bur
Sure, human terms are usually applied to humans. Groups are not humans, and using human terms for them would at best be a metaphor.
1Azathoth123
On the other hand, for your purpose (keeping LW a successful community), groups that collectively act like a sociopath are just as dangerous as individual sociopaths.
0NancyLebovitz
Narcissist Characteristics. I was wondering if this sounds like your abusive boss - it's mostly a bunch of social habits which could be identified rather quickly.
5lmm
I think the other half is the more important one: to have a successful community, you need to be willing to be arbitrary and unfair, because you need to kick out some people and cannot afford to wait for a watertight justification before you do.
2Jiro
The best ruler for a community is an incorruptible, bias-free dictator. All you need to do to implement this is to find an incorruptible, bias-free dictator. Then you don't need a watertight justification, because those are used to avoid corruption and bias, and you know you don't have any of that anyway.
5Lumifer
There is also that kinda-important bit about shared values...
2lmm
I'm not being utopian, I'm giving pragmatic advice based on empirical experience. I think online communities like this one fail more often by allowing bad people to continue being bad (because they feel the need to be scrupulously fair and transparent) than they do by being too authoritarian.
6Viliam_Bur
I think I know what you mean. The situations like: "there is 90% probability that something bad happened, but 10% probability that I am just imagining things; should I act now and possibly abuse the power given to me, or should I spend a few more months (how many? I have absolutely no idea) collecting data?"
4Azathoth123
The thing is, from what I've heard, the problem isn't so much sociopaths as ideological entryists.
3Risto_Saarelma
How do you even reliably detect sociopaths to begin with? Particularly in online communities, where long-game false social signaling is easy. The obviously-a-sociopath cases are probably among the more incompetent or obviously damaged, and less likely to end up doing long-term damage. And for any potential social apparatus for detecting and shunning sociopaths you might come up with, how will you keep it from ending up being run by successful long-game-signaling sociopaths, who will enjoy both maneuvering themselves into a position of political power and passing judgment and ostracism on others? The problem of sociopaths in corporate settings is a recurring theme in Michael O. Church's writings, but there's also like a million pages of that stuff, so I'm not going to try to pick examples.
1Viliam_Bur
All cheap detection methods could be fooled easily. It's like the old meme "if someone is lying to you, they will subconsciously avoid looking into your eyes": everyone has already heard it, so of course today every liar will look into your eyes. I see two possible angles of attack:

a) Make a correct model of sociopathy. Don't imagine sociopaths to be "like everyone else, only much smarter". They probably have some specific weakness. Design a test they cannot pass, just like a colorblind person cannot pass a color blindness test even if they know exactly how the test works. Require passing the test for all positions of power in your organization.

b) If there is a typical way sociopaths work, design an environment so that this becomes impossible. For example, if it is critical for manipulating people to prevent their communication with each other, create an environment that somehow encourages communication between people who would normally avoid each other.

(Yeah, this sounds like reversing stupidity. Needs to be tested.)
1drethelin
I think it's extremely likely that any system for identifying and exiling psychopaths can be co-opted for evil, by psychopaths. I think rules and norms that act against specific behaviors are a lot more robust, and also are less likely to fail or be co-opted by psychopaths, unless the community is extremely small. This is why in cities we rely on laws against murder, rather than laws against psychopathy. Even psychopaths (usually) respond to incentives.
1pianoforte611
Are you directing this at LW? Ie. is there a sociopath that you think is bad for our community?
2Viliam_Bur
Well, I suspect Eugine Nier may have been one, to give the most obvious example. (Of course there is no way to prove it, there are always alternative explanations, et cetera, et cetera, I know.) Now, that was online behavior. Imagine the same kind of person in real life. I believe it's just a question of time. Using my limited experience to make predictions: such a person would be rather popular, at least at the beginning, because they would keep using the right words that are tested to evoke a positive response from many lesswrongers.
5IlyaShpitser
A "sociopath" is not an alternative label for [someone I don't like.] I am not sure what a concise explanation for the sociopath symptom cluster is, but it might be someone who has trouble modeling other agents as "player characters", for whatever reason. A monster, basically. I think it's a bad habit to go around calling people monsters.
9Viliam_Bur
I know; I know; I know. This is exactly what makes this topic so frustratingly difficult to explain, and so convenient to ignore.

The thing I am trying to say is that if a real monster came to this community, sufficiently intelligent and saying the right keywords, we would spend all our energy inventing alternative explanations. Although in far mode we admit that the prior probability of a monster is nonzero (I think the base rate is somewhere around 1-4%), in near mode we would always treat it like zero, and any evidence would be explained away. We would congratulate ourselves for being nice, but in reality we are just scared to risk being wrong when we don't have convincing-sounding verbal arguments on our side. (See Geek Social Fallacy #1, but instead of "unpleasant" imagine "hurting people, but only as much as is safe in a given situation".)

The only way to notice the existence of the monster is probably if the monster decides to bite you personally in the foot. Then you will realize with horror that now all the other people are going to invent alternative explanations for why that probably didn't happen, because they don't want to risk being wrong in a way that would feel morally wrong to them.

I don't have a good solution here. I am not saying that vigilantism is a good solution, because the only thing the monster needs to do to draw attention away is to accuse someone else of being a monster, and it is quite likely that the monster will sound more convincing. (Reversed stupidity is not intelligence.) Actually, I believe this happens rather frequently: whenever there is some kind of "league against monsters", it is probably a safe bet that there is a monster somewhere at the top. (I am sure there is a TV Tropes page or two about this.)

So, we have a real danger here, but we have no good solution for it. Humans typically cope with such situations by pretending that the danger doesn't exist. I wish we had a better solution.
2NancyLebovitz
I can believe that 1% - 4% of people have little or no empathy and possibly some malice in addition. However, I expect that the vast majority of them don't have the intelligence/social skills/energy to become the sort of highly destructive person you describe below.
5Viliam_Bur
That's right. The kind of person I described seems like a combination of sociopathy + high intelligence + maybe something else. So it is much less than 1% of the population. (However, their potential ratio in the rationalist community is probably greater than in the general population, because our community already selects for high intelligence. So, if high intelligence were the only additional factor - which I don't know whether it's true or not - it could again be 1-4% among the wannabe rationalists.)
3Lumifer
I would describe that person as a charismatic manipulator. I don't think it requires being a sociopath, though being one helps.
1NancyLebovitz
The kind of person you described has extraordinary social skills as well as being highly (?) intelligent, so I think we're relatively safe. :-) I can hope that people in a rationalist community would be better than average at eventually noticing they're in a mind-warping confusion-and-charisma field, but I'm really hoping we don't get tested on that one.
3Viliam_Bur
Returning to the original question ("Where are you right, while most others are wrong? Including people on LW!"), this is exactly the point where my opinion differs from the LW consensus. For a sufficiently high value of "eventually", I agree. I am worried about what would happen until then. I'm hoping that this is not the best answer we have. :-(
5NancyLebovitz
To what extent is that sort of sociopath dependent on in-person contact? Thinking about the problem for probably less than five minutes, it seems to me that the challenge is having enough people in the group who are resistant to charisma. Does CFAR or anyone else teach resistance to charisma? Would noticing when one is confused and writing the details down help?
6Viliam_Bur
In addition to what I wrote in the other comment, a critical skill is to imagine the possibility that someone close to you may be manipulating you. I am not saying that you must suspect all people all the time. But when strange things happen and you notice that you are confused, you should assign a nonzero value to this hypothesis. You should alieve that this is possible.

If I may use the fictional evidence here, the important thing for Rational!Harry is to realize that someone close to him may be Voldemort. Then it becomes a question of paying attention, good bookkeeping, gathering information, and perhaps making a clever experiment. As long as Harry alieves that Voldemort is far away, he is likely to see all people around him as either NPCs or his party members. He doesn't expect strategic activity from the NPCs, and he believes that his party members share the same values even if they have a few wrong beliefs which make cooperation difficult. (For example, he is frustrated that Minerva doesn't trust him more, or that Dumbledore is okay with the idea of death, but he wouldn't expect either of them to try to hurt him. And the list of nice people also includes Quirrell, who is the most awesome of them all.) He alieves that he lives in a relatively safe bubble, that Voldemort is somewhere outside of the bubble, and that if Voldemort tried to enter the bubble, it would be an obviously extraordinary event that he would notice. (Note: This is no longer true in the recent chapters.)
1NancyLebovitz
Harry also just doesn't want to believe that Quirrell might be very bad news. (Does he consider the possibility that Quirrell is inimical, but not Voldemort?) Harry is very attached to the only person who can understand him reliably.
0NancyLebovitz
This was unclear - I meant that Quirrell could be inimical without being Voldemort. The idea of Voldemort not being a bad guy (without being dead) - he's reformed, or maybe he's developed other hobbies - would be an interesting shift. Voldemort as a gigantic force for good operating in secret would be the kind of shift I'd expect from HPMOR, but I don't know of any evidence for it in the text.
4Viliam_Bur
Perhaps we should taboo "resistance to charisma" first. What specifically are we trying to resist? Looking at an awesome person and thinking "this is an awesome person" is not harmful per se. Not even if the person uses some tricks to appear even more awesome than they are. It would be nice to measure someone's awesomeness properly, but that's not the point. A sociopath may have some truly awesome traits, for example genuinely high intelligence.

So maybe the thing we are trying to resist is the halo effect. An awesome person tells me X, and I accept it as true because it would be emotionally painful to imagine that an awesome person would lie to me. The correct response is not to deny the awesomeness, but to realize that I still don't have any evidence for X other than one person saying it is so. And awesomeness alone is not expertise.

But I think there is more to a sociopath than mere charisma. Specifically, the ability to lie and harm people without providing any of the nonverbal cues that would probably betray a neurotypical person trying to do the same thing. (I suspect this is what makes the typical heuristics fail.)

Yes, I believe so. If you already have a suspicion that something is wrong, you should start writing a diary. A very important part is, for every piece of information you have, to write down who gave it to you. Don't report your conclusions; report the raw data you have received. This will make it easier to see your notes later from a different angle, e.g. when you start suspecting someone you find perfectly credible today. Don't write "X", write "Joe said: X", even if you perfectly believe him at the moment. If Joe says "A" and Jane says "B", write "Joe said A. Jane said B" regardless of which one of them makes sense and which one doesn't. If Joe says that Jane said X, write "Joe said that Jane said X", not "Jane said X". Also, don't edit the past. If you wrote "X" yesterday, but today Joe corrected you that he actually said "Y" yesterday
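A minimal sketch of that bookkeeping rule, in Python; the record structure and function name are my own invention, not anything canonical:

```python
import datetime

# Append-only journal: past entries are never edited.
journal = []

def record(source, statement):
    journal.append({
        "date": datetime.date.today().isoformat(),
        "source": source,        # always keep the attribution
        "statement": statement,  # the raw claim, not your conclusion
    })

record("Joe", "A")
record("Jane", "B")
record("Joe", "Jane said X")  # not: record("Jane", "X")
# Corrections become new entries; yesterday's entry stays as written.
record("Joe", "Correction: I actually said Y yesterday, not X")
```

The point of the append-only design is that later reinterpretation works from raw attributions, not from conclusions that may already have been shaped by the manipulator.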
1ChristianKl
I don't think "no nonverbal cues" is accurate. A psychopath shows no signs of emotional distress when he lies. On the other hand if they say something that should go along with a emotion if a normal person says it, you can detect that something doesn't fit. In the LW community however, there are a bunch of people with autism that show strange nonverbals and don't show emotions when you would expect a neurotypical person to show emotions. I think that's a strawman. Not having long-term goals is a feature of psychopaths. The don't have a single purpose according to which they organize things. The are impulsive.
1Viliam_Bur
That seems correct according to what I know (but I am not an expert). They are not like "I have to maximize the number of paperclips in the universe in the long term" but rather "I must produce some paperclips, soon". Given a sufficiently long time interval, they would probably fail the Marshmallow test. Then I suspect the difference between a successful and an unsuccessful one is whether their impulses, executed with their skills, are compatible with what society allows. If the impulse is "must get drunk and fight with people", such a person will sooner or later end up in prison. If the impulse is "must lie to people and steal from them", with some luck and skill, such a person could become rich, if they can recognize the situations where it is safe to lie and steal. But I'm speculating here.
0ChristianKl
Human behavior is more complex than that. Rather than thinking "I must steal", the impulse is more likely to be "I want to have X", combined with a lack of inhibition against stealing. Psychopaths usually don't optimize for being evil.
1NancyLebovitz
Are you suggesting journaling about all your interactions where someone gives you information? That sounds exhausting and unnecessary. It might make sense to do for short periods as memory training. Another possibility would be to record all your interactions -- this isn't legal in all jurisdictions unless you get permission from the other people being recorded, but I don't think you're likely to be caught if you're just using the information for yourself. Journaling when you have reason to be suspicious of someone is another matter, and becoming miserable and confused for no obvious reason is grounds for suspicion. (The children of such manipulators are up against a much more serious problem.) It does seem to me that this isn't exactly an individual problem if what you need is group resistance to extremely skilled manipulators. http://www.ribbonfarm.com/the-gervais-principle/ -- some detailed analysis of sociopathy in offices.
1Viliam_Bur
Ironically, now I will be the one complaining that this definition of a "sociopath" seems to include too many people to be technically correct. (Not every top manager is a sociopath. And many sociopaths don't make it into corporate positions of power.)

I agree that keeping detailed journals is probably not practical in real life. Maybe some mental habits would make it easier. For example, you could practice the habit of remembering the source of information, at least until you get home to write your diary. You could start with shorter time intervals: have a training session where people tell you various pieces of information, and at the end take an exam where you have to write down both the answers and the names of the people who told you each one. If keeping the diary itself turns out to be good for a rationalist, this additional skill of remembering sources could be relatively easy to add, and then you will have records you can examine later.
3Lumifer
Since we are talking about LW, let me point out that charisma in meatspace is much MUCH more effective than charisma on the 'net, especially in almost-purely-text forums.
2Azathoth123
Well, consider who started CFAR (and LW for that matter) and how he managed to accomplish most of what he has.
0IlyaShpitser
Ex-cult members seem to have fairly general antibodies against "charisma." Perhaps studying cults without being directly involved might help a little as well; it would be a shame if there were no substitute for the "school of hard knocks" that actual cult membership would be. Incidentally, cults are a bit of a hobby of mine :).
1arromdee
https://allthetropes.orain.org/wiki/Hired_to_Hunt_Yourself
2Lumifer
Why do you suspect so? Gaming ill-defined social rules of an internet forum doesn't look like a symptom of sociopathy to me. You seem to be stretching the definition too far.
7Viliam_Bur
Abusing rules to hurt people is at least weak evidence. Doing it persistently for years, even more so.
0drethelin
Why is this important?
9Viliam_Bur
My goal is to create a rationalist community. A place to meet other people with similar values and "win" together. I want to optimize my life (not just my online quantum physics debating experience). I am thinking strategically about an offline experience here.

Eliezer wrote about how a rationalist community might need to defend itself from an attack of barbarians. In my opinion, sociopaths are an even greater danger, because they are more difficult to detect, and nerds have a lot of blind spots here.

We focus on dealing with forces of nature. But in the social world, we must also deal with people, and this is our archetypal weakness. The typical nerd strategy for solving conflicts is to run away and hide, and create a community of social outcasts where everything is tolerated, and the whole group is safe more or less because it has such low status that typical bullies rather avoid it. But the moment we start "winning", this protective shield is gone, and we do not have any other coping strategy.

Just like being rich makes you an attractive target for thieves, being successful (and I hope rationalist groups will become successful in the near future) makes your community a target for people who love to exploit people and gain power. And all they need to get inside is to be intelligent and memorize a few LW keywords. Once your group becomes successful, I believe it's just a question of time. (Even a partial success, which for you is merely a first step along a very long way, can already do this.) That will happen much sooner than any "barbarians" would consider you a serious danger.

(I don't want to speak about politics here, but I believe that many political conflicts are so bad because most of the sides have sociopaths as their leaders. It's not just the "affective death spirals", although they also play a large role. But there are people in important positions who don't think about "how to make the world a better place for humans", but rather "how could I most benefit
0ChristianKl
How do you come to that conclusion? Simply because you don't agree with their actions? Or are there trained psychologists who argue that position in detail and try to determine how politicians score on the Hare scale?
2Viliam_Bur
Uhm, no. Allow me to quote from my other comment; I hope it illustrates that my mental model has separate buckets for "people I suspect to be sociopaths" and "people I disagree with".
0ChristianKl
Diagnosing mental illness based on the kind of second-hand information you have about politicians isn't a trivial effort, especially if you lack a background in psychology.
[-]RowanE110

I think this could be better put as "what do you believe, that most others don't?" - being wrong is, from the inside, indistinguishable from being right, and a rationalist should know this. I think there have actually been several threads about beliefs that most of LW would disagree with.

5ChristianKl
I think you are wrong. Identifying a belief as wrong is not enough to remove it. If someone has low self-esteem and you give him a sound intellectual argument that he wants to believe, that's frequently not enough to change the fundamental belief behind the low self-esteem. Scott Alexander wrote a blog post about how asking a schizophrenic for weird beliefs makes the schizophrenic tell the doctor about the faulty beliefs. If you ask a question differently, you get people reacting differently. If you want to get a broad spectrum of answers, then it makes sense to ask the question in a bunch of different ways. I'm intelligent enough to know that my own beliefs about the social status I hold within a group could very well be off, even if those beliefs feel very real to me. If you ask me "Do you think X is really true and everyone who disagrees is wrong?", you trigger slightly different heuristics in me than if you ask "Do you believe X?". It's probably pretty straightforward to demonstrate this, and some cognitive psychologist may already have done the work.
3Thomas
Very well. But do you have such a belief, one that others will see as wrong? (Last time this was asked, the majority of contrarian views were presented by me.)
9RowanE
The most contra-LW belief I have, if you can call it that, is my not being convinced of the pattern theory of identity - EY's arguments about there being no "same" or "different" atoms not affecting me, because my intuitions already say that being obliterated and rebuilt from the same atoms would be fatal. I think I need the physical continuity of the object my consciousness runs on. But I realise I haven't got much support besides my intuitions for believing that that would end my experience and going to sleep tonight won't, and by now I've become almost agnostic on the issue.
1ZankerH
* Technological progress and social/political progress are loosely correlated at best
* Compared to technological progress, there has been little or no social/political progress since the mid-18th century - if anything, there has been a regression
* There is no such thing as moral progress, only people in charge of enforcing present moral norms selectively evaluating past moral norms as wrong because they disagree with present moral norms
8Metus
I think I found the neoreactionary.
3gjm
The neoreactionary? There are quite a number of neoreactionaries on LW; ZankerH isn't by any means the only one.
3Metus
Apparently LW is a bad place to make jokes.
[-]gjm140

The LW crowd is really tough: jokes actually have to be funny here.

3Lumifer
That's not LW, that's the internet. The implied context in your head is not the implied context in other heads.
4Nate_Gabriel
Regression? Since the 1750s? I realize Europe may be unusually bad here (at least, I hope so), but it took until 1829 for England to abolish the husband's right to punish his wife however he wanted.
2RowanE
I think that progress is specifically what he's on about in his third point. It's standard neoreactionary stuff, there's a reason they're commonly regarded as horribly misogynist.
2Capla
I want to discuss it, and be shown wrong if I'm being unfair, but saying "It's standard [blank] stuff" seems dismissive. Suppose I was talking with someone about friendly AI or the singularity, and a third person comes around and says "Oh, that's just standard Less Wrong stuff." It may or may not be the case, but it feels like that third person is categorizing the idea and dismissing it, instead of dealing with my arguments outright. That is not conducive to communication.
3RowanE
I was trying to say "you should not expect someone who thinks no social, political or moral progress has been made since the 18th century to consider women's rights a big step forward" in a way that wasn't insulting to Nate_Gabriel - being casually dismissive of an idea makes "you seem to be ignorant about [idea]" less harsh.
2Lumifer
This comment could be (though is not necessarily) valid with the meaning of "Your arguments are part of a well-established set of arguments and counter-arguments, so there is no point in going through them once again. Either go meta or produce a novel argument.".
2fubarobfusco
How do you square your beliefs with (for instance) the decline in murder in the Western world — see, e.g. Eisner, Long-Term Historical Trends in Violent Crime?
2Richard_Kennaway
What do you mean by social progress, given that you distinguish it from technological progress ("loosely correlated at best") and moral progress ("no such thing")?
-1ZankerH
Re: social progress: see http://www.moreright.net/social-technology-and-anarcho-tyranny/ As for moral progress, see whig history. Essentially, I view the notion of moral progress as fundamentally a misinterpretation of history. Related fallacy: using a number as an argument (as in, "how is this still a thing in 2014?"). Progress in terms of technology can be readily demonstrated, as can regression in terms of social technology. The notion of moral progress, however, is so meaningless as to be not even wrong.
2Toggle
That use of 'technology' seems to be unusual, and possibly even misleading. Classical technology is more than a third way that increases net good; 'techne' implies a mastery of the technique and the capacity for replication. Gaining utility from a device is all well and good, but unless you can make a new one then you might as well be using a magic artifact. It does not seem to be the case that we have ever known how to make new societies that do the things we want. The narrative of a 'regression' in social progress implies that there was a kind of knowledge that we no longer have- but it is the social institutions themselves that are breaking down, not our ability to craft them. Cultures are still built primarily by poorly-understood aggregate interactions, not consciously designed, and they decay in much the same way. A stronger analogy here might be biological adaptation, rather than technological advancement, and in evolutionary theory the notion of 'progress' is deeply suspect.
0Lumifer
The fact that I can't make a new computer from scratch doesn't mean I'm using one as "a magical artifact". What contemporary pieces of technology can you make? You might be more familiar with this set of knowledge if we call it by its usual name -- "politics".
1Toggle
I was speaking in the plural. As a civilization, we are more than capable of creating many computers with established qualities and creating new ones to very exacting specifications. I don't believe there was ever a point in history where you could draw up a set of parameters for a culture you wanted, go to a group of knowledgeable experts, and watch as they built such a society with replicable precision. You can do this for governments, of course- but notably, we haven't lost any information here. We are still perfectly capable of writing constitutions, or even founding monarchies if there were a consensus to do so. The 'regression' that Zanker believes in is (assuming the most common NRx beliefs) a matter of convention, social fabrics, and shared values, and not a regression in our knowledge of political structures per se.
1Lumifer
That's not self-evident to me. There are legal and ethical barriers, but my guess is that given the same level of control that we have in, say, engineering, we could (or quickly could learn to) build societies with custom characteristics. Given the ability to select people, shape their laws and regulations, observe and intervene, I don't see why you couldn't produce a particular kind of a society. Of course you can't build any kind of society you wish just like you can't build any kind of a computer you wish -- you're limited by laws of nature (and of sociology, etc.), by available resources, by your level of knowledge and skill, etc. Shaping a society is a common desire (look at e.g. communists) and a common activity (of governments and politicians). Certainly it doesn't have the precision and replicability of mass-producing machine screws, but I don't see why you can't describe it as a "technology".
0Toggle
Human cultures are material objects that operate within physical law like anything else- so I agree that there's no obvious reason to think that the domain is intractable. Given a long enough lever and a place to stand, you could run the necessary experiments and make some real progress. But a problem that can be solved in principle is not the same thing as a problem that has already been mastered- let alone mastered and then lost again. One of the consequences of the more traditional sorts of technology is that it is a force towards consensus. There is no reasonable person who disagrees about the function of transistors or the narrow domains of physics on which transistor designs depend; once you use a few billion of the things reliably, it's hard to dispute their basic functionality. But to my knowledge, there was never any historical period in which consensus about the mechanisms of culture appeared, from which we might have fallen ignominiously. Hobbes and Machiavelli still haven't convinced everybody; Plato and Aristotle have been polarizing people about the nature of human society for millennia. Proponents of one culture or another never really had an elaborate set of assumptions that they could share with their rivals.
0Lumifer
Let me point out that you continue to argue against ZankerH's position that the social technology has regressed. That is not my position. My objection was to your claim that the whole concept of social technology is nonsense and that the word "technology" in this context is misleading. I said that social technology certainly exists and is usually called politics -- but I never said anything about regression or past golden ages.
8lmm
* Arguing on the internet is much like a drug, and bad for you
* Progress is real
* Some people are worth more than others
* You can correlate this with membership in most groups you care to name
* Solipsism is true
6NancyLebovitz
Are these consistent with each other? Should it at least be "Some "people" are worth more than others"?
0lmm
Words are just labels for empirical clusters. I'm not going to scare-quote people when it has the usual referent used in normal conversation.
0Mitchell_Porter
What do you mean by solipsism?
0lmm
My own existence is more real than this universe. Humans and our objective reality are map, not territory.
0Mitchell_Porter
What does it mean for one thing to be more real than another thing? Also, when you say something is "map not territory", what do you mean? That the thing in question does not exist, but it resembles something else which does exist? Presumably a map must at least resemble the territory it represents.
2lmm
Maybe "more fundamental" is clearer. In the same way that friction is less real than electromagnetism.
0Mitchell_Porter
More fundamental, in what sense? e.g. do you consider yourself to be the cause of other people?
2lmm
To the extent that there is a cause, yes. Other people are a surface phenomenon.
0Mitchell_Porter
What do you mean by surface? Do you mean people exist as your perceptions but not otherwise? And is there anything 'beneath' this 'surface', whatever it is?
1Evan_Gaensbauer
What do you mean by 'progress'? There is more than one conceivable type of progress: political, philosophical, technological, scientific, moral, social, etc. What's interesting is there is someone else in this thread who believes they are right about something most others are wrong about. ZankerH believes there hasn't been much political or social progress, and that moral progress doesn't exist. So, if that's the sort of progress you are meaning, and also believe that you're right about this when most others aren't, then this thread contains some claims that would contradict each other. Alas, I agree with you that arguing on the Internet is bad, so I'm not encouraging you to debate ZankerH. I'm just noting something I find interesting.
6James_Miller
I've signed up for cryonics, invest in stocks through index funds, and recognize that the Fermi paradox means mankind is probably doomed.
4Ixiel
Inequality is a good thing, to a point. I believe in a world where it is possible to get rich, and not necessarily through hard work or being a better person. One person owning the world, with the rest of us owning nothing, would be bad. Everybody having identical shares of everything would be bad (even ignoring practicalities). I don't know exactly where the optimal level is, but it is closer to the first situation than the second, even if assigned by lottery. I'm treating this as basically another contrarian views thread without the voting rules. And full disclosure: I'm too biased for anybody to take my word for it, but I'd enjoy reading counterarguments.
5Viliam_Bur
My intuition would be that inequality per se is not a problem, it only becomes a problem when it allows abuse. But that's not necessarily a function of inequality itself; it also depends on society. I can imagine a society which would allow a lot of inequality and yet would prevent abuse (for example if some Friendly AI would regulate how you are allowed to spend your money).
2Nate_Gabriel
Do you think we currently need more inequality, or less?
1Ixiel
In the US I would say more-ish. I support a guaranteed basic income, and any benefit to one person or group (benefiting the bottom without costing the top would decrease inequality but would still be good), but I think there should be a smaller middle class. I don't know enough about global issues to comment on them.
0lmm
If we're stipulating that the allocation is by lottery, I think equality is optimal due to simple diminishing returns. And also our instinctive feelings of fairness. This tends to be intuitively obvious in a small group; if you have 12 cupcakes and 4 people, no-one would even think about assigning them at random; 3 each is the obviously correct thing to do. It's only when dealing with groups larger than our Dunbar number that we start to get confused.
0Ixiel
Assuming that cupcakes are tradable, that seems intuitively false to me. Is it just your intuition, or is there also reason? Not denying intuitions' values, they are just not as easy to explain to one who does not share them.
0lmm
If cupcakes are tradeable for brownies, then I'd distribute both evenly to start and allow people to trade at prices that seemed fair to them, but I assume that's not what you're talking about. And yeah, it's primarily an intuition, and one that I'm genuinely quite surprised to find isn't universal, but I'd probably try to justify it in terms of diminishing returns: that two people with 3 cupcakes each have higher overall happiness than one person with 2 and one with 4.
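A quick numeric check of that claim, assuming some concave utility function -- sqrt is an arbitrary stand-in here, but any concave choice gives the same ordering:

```python
import math

# Diminishing returns: total utility of an even vs. uneven cupcake split.
def total_utility(allocation):
    return sum(math.sqrt(cupcakes) for cupcakes in allocation)

print(total_utility([3, 3]))  # ~3.46: even split
print(total_utility([2, 4]))  # ~3.41: uneven split scores lower
```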
4gattsuru
General:

* There are absolutely vital lies that everyone can and should believe, even knowing that they aren't true or can not be true.
* /Everyone/ today has their own personal army, including the parts of the army no one really likes, such as the iffy command structure and the sociopath that we're desperately trying to Section Eight.
* Systems that aim to optimize a goal /almost always/ instead optimize the pretense of the goal, followed by reproduction pressures, followed by the actual goal itself.

Political:

* Network Neutrality desires a good thing, but the underlying rule structure necessary to implement it makes the task either fundamentally impossible or practically undesirable.
* Privacy policies focused on preventing collection of identifiable data are ultimately doomed.

LessWrong-specific:

* "Karma" is a terrible system for any site that lacks extreme monofocus. A point of Karma means the same thing on a top-level post that breaks into new levels of philosophy as on a sufficiently entertaining pun. It might be the least bad system available, but in a community nearly defined by tech and data-analysis it's disappointing.
* The risks and costs of "raising the sanity waterline" are heavily underinvestigated. We recognize that there is an individual valley of bad rationality, but haven't really looked at what this would mean on a national scale. "Nuclear Winter" as argued by Sagan was a very, very overt Pascal's Wager: this Very High Value event can be avoided, so we must avoid it at any cost. It /also/ certainly gave valuable political cover to anti-nuclear war folk, may have affected or effected Russian and US and Cuban nuclear policy, and could (although not necessarily would) be supported from a utilitarian perspective... several hundred pages of reading later.
* "Rationality" is an overloaded word in the exact sort of ways that make it a terrible thing to turn into an identity. When you're competing with RationalWiki, the universe is
6Nornagest
Isn't this basically Goodhart's law?
2gattsuru
It's related. Goodhart's Law says that using a measure for policy will decouple it from any pre-existing relationship with economic activity, but doesn't predict how that decoupling will occur. The common story of Goodhart's Law tells us how the Soviet Union measured factory output in pounds of machinery, and got heavier but less efficient machinery. Formalizing the patterns tells us more about how this would change if, say, there had not been very strict and severe punishments for falsifying machinery weight production reports. Sometimes this is a good thing: it's why, for one example, companies don't instantly implode into profit-maximizers just because we look at stock values (or at least take years to do so). But it does mean that following a good statistic well tends to cause worse outcomes than following a poor statistic weakly. That said, while I'm convinced that's the pattern, it's not the only one or even the most obvious one, and most people seem to have different formalizations, and I can't find the evidence to demonstrate it.
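A toy sketch of the machinery-weight story; the 10-unit effort budget and the 3x gaming multiplier are invented assumptions for illustration, not anything from the comment above:

```python
# A factory splits effort between real output and "gaming" a weight-based
# metric; stronger enforcement shifts effort toward gaming.
def measured_metric(real_output, gaming):
    # The metric rewards weight, which gaming inflates cheaply.
    return real_output + 3 * gaming

for pressure in (0.0, 0.5, 1.0):  # how strongly the metric is enforced
    gaming = 10 * pressure        # effort diverted to gaming the metric
    real_output = 10 - gaming     # effort left for genuinely useful machinery
    print(f"pressure={pressure}: metric={measured_metric(real_output, gaming):.0f}, "
          f"true output={real_output:.0f}")
```

As enforcement pressure rises, the measured metric climbs from 10 to 30 while true output falls to zero -- the decoupling Goodhart's Law predicts, made concrete.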
2polymathwannabe
Desirability issues aside, "believing X" and "knowing X is not true" cannot happen in the same head.
4Lumifer
This is known as doublethink. Its connotations are mostly negative, but Scott Fitzgerald did say that "The test of a first rate intelligence is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function" -- a bon mot I find insightful.
2polymathwannabe
Example of that being useful?
7gattsuru
(Basilisk Warning: may not be good information to read if you suffer depression or anxiety and do not want to separate beliefs from evidence.)

Having an internalized locus of control strongly correlates with a wide variety of psychological and physiological health benefits. There's some evidence that this link is causative for at least some characteristics. It's not a completely unblemished good characteristic -- it correlates with lower compliance with medical orders, and probably isn't good for some anxiety disorders in extreme cases -- but it seems more helpful than not.

It's also almost certainly a lie. Indeed, it's obvious that such a thing can't exist under any useful model of reality. There are mountains of evidence for either the nature or nurture side of the debate, to the point where we really hope that bad choices are caused by as external an event as possible, because /that/, at least, we might be able to fix. At a more basic level, there's a whole lot more universe that isn't you than there is you to start with. On the upside, if your locus of control is external, at least it's not worth worrying about. You couldn't do much to change it, after all.

Psychology has a few other traits where this sort of thing pops up, most hilariously during placebo studies, though that's perhaps too easy an example. It's not the only one, though: useful lies are core to a lot of current solutions to social problems, all the way down to using normal decision theory to cooperate in an iterated prisoner's dilemma. It's possible (even plausible) that this represents a valley of rationality -- like the earlier example of Pascal's Wagers that hold decent Utilitarian tradeoffs underneath -- but I'm not sure it's falsifiable, and it's certainly not obvious right now.
8Evan_Gaensbauer
As an afflicted individual, I appreciate the content warning. I'm responding without having read the rest of the comment. This is a note of gratitude to you, and a data point that for yourself and others that such content warnings are appreciated.
3Vulture
I second Evan that the warning was a good idea, but I do wonder whether it would be better to just say "content warning"; "Basilisk" sounds culty, might point confused people towards dangerous or distressing ideas, and is a word which we should probably be not using more than necessary around here for the simple PR reason of not looking like idiots.
2gattsuru
Yeah, other terminology is probably a better idea. I'd avoided 'trigger' because it isn't likely to actually trigger anything, but there's no reason to use new terms when perfectly good existing ones are available. "Content warning" isn't quite right, but it's close enough, and enough people are unaware of the original meaning that it's probably preferable to use.
-1Lumifer
Mostly in the analysis of complex phenomena with multiple incompatible (or barely compatible) frameworks for looking at them. A photon is a wave. A photon is a particle. Love is temporary insanity. Love is the most beautiful feeling you can have. Etc., etc.
4RowanE
It's possible to use particle models or wave models to make predictions about photons, but believing a photon is both of those things is a separate matter, and is neither useful nor true - a photon is actually neither. Truth is not beauty, so there's no contradiction there, and even the impression of one disappears if the statements are made less poetic and oversimplified.
1Evan_Gaensbauer
I agree, and it's something I could, maybe should, help with instead of just complaining about. What's stopping you from doing this? If you knew someone else was actively doing the same, and they could keep you committed to the goal in some way, would that help? And if that didn't work, what would be stopping us?
4gattsuru
In organized form, I've joined the Youtopia page, and the current efforts appear to be either busywork or best completed by a native speaker of a different language; there's no obvious organization regarding generalized goals, and no news updates at all. I'm not sure if this is because MIRI is using a different format to organize volunteers, because MIRI doesn't promote the Youtopia group that seriously, because MIRI doesn't have any current long-term projects that can be easily presented to volunteers, or for some other reason.

For individual-oriented work, I'm not sure what to do, and I'm not confident I'm the best person to do it. There are also three separate issues, with no obvious interrelation. Improving the Sequences and the accessibility of the Sequences is the most immediate and obvious thing, and I can think of a couple different ways to go about this:

* The obvious first step is to make /any/ eBook, which is why a number of people have done just that. This isn't much more comprehensible than just linking to the Sequences page on the Wiki, and in some cases may be less useful, and most of the other projects seem better-designed than I can offer.
* Improve indexing of the Sequences for online access. This does seem like low-hanging fruit, possibly because people are waiting for a canonical order, and the current ordering is terrible. However, I don't think it's a good idea to just randomly edit the Sequences Wiki page, and Discussion and Main aren't really well-formatted for a long-term version-heavy discussion. (And it seems not Wise for my first Discussion or Main post to be "shake up the local textbook!") I have started working on a dependency web, but this effort doesn't seem to produce marginal benefits until large sections are completed.
* The Sequences themselves are written as short bite-sized pieces for a generalized audience in a specific context, which may not be optimal for long-form reading in a general context. In some cases, compo
4Viliam_Bur
This is one of the things that keep me puzzled. How can proofreading a book by a group of volunteers take more time than translating the whole book by a single person? Is it because people don't volunteer enough for the work because proofreading seems low status? Is it a bystander effect, where everyone assumes that someone else is already working on it? Are all people just reading LW for fun, but unwilling to do any real work to help? Is it a communication problem, where MIRI has a lack of volunteers, but the potential volunteers are not aware of it?

Just print the whole fucking thing on paper, each chapter separately. Bring the papers to an LW meetup, and ask people to spend 30 minutes proofreading some chapter. Assuming many of them haven't read the whole Sequences, they can just pick a chapter they haven't read yet and simply read it, while marking the errors they find on the paper. Put a signature at the end of the chapter, so it is known how many people have seen it.
6kalium
I used to work as a proofreader for MIRI, and was sometimes given documents with volunteers' comments to help me out. In most cases, the quality of the comments was poor enough that in the time it took me to review the comments, decide which ones were valid, and apply the changes, I could have just read the whole thing and caught the same errors (or at least an equivalent number thereof) myself. There's also the fact that many errors are only such because they're inconsistent with the overall style. It's presumably not practical to get all your volunteers to read the Chicago Manual of Style and agree on what gets a hyphen and such before doing anything.
5lmm
I'm just reading LW for fun and unwilling to do any real work to help, FWIW.
3gattsuru
It's the 'norm-palatable' part more than the proofreading aspect, unfortunately, and I'm not sure that can readily be made volunteer work.

As far as I can tell, the proofreading part began in late 2013 and involved over two thousand pages of content to proofread through Youtopia. As far as I can tell, the only Sequence-related volunteer work on the Youtopia site involves translation into non-English languages, so the public volunteer proofreading is done and likely has been done for a while (wild guess, probably somewhere in mid-summer 2014?). MIRI is likely focusing on layout and similar publishing-level issues, and as far as I've been able to tell, they're looking for a release at the end of the year, which strongly suggests that they've finished the proofreading aspect. That said, I may have outdated information: the Sequence eBook has been renamed several times in progress for a variety of good reasons, I'm not sure Youtopia is the current place most of this is going on, and AlexVermeer may or may not be lead on this project and may or may not be more active elsewhere than these forums. There are some public project attempts to make an eReader-compatible version, though these don't seem much stronger from a reading-order perspective.

In fairness, doing /good/ layout and ePublishing does take more specialized skills and some significant time, and MIRI may be rewriting portions of the work to better handle the limitations of a book format -- where links are less powerful tools, where a large portion of viewer devices support only grayscale, and where certain media presentation formats aren't possible. At least from what I've seen in technical writing and pen-and-paper RPGs, this is not a helpfully parallel task: everyone must use the same toolset and design rules, or all of their work is wasted. There was also a large amount of internal MIRI rewriting involved, as even the early version made available to volunteer proofreaders was significantly edited. Les
1Evan_Gaensbauer
Thanks for the suggestion. I'll plan some meetups around this. Not the whole thing, mind you. I'll just get anyone willing at the weekly Vancouver meetup to do exactly that: take a mild amount of time reviewing a chapter/post, and provide feedback on it or whatever.
2pianoforte611
Diet and exercise generally do not cause substantial long-term weight loss. Failure rates are high, and successful cases keep off about 7% of their original body weight after 5 years. I strongly suspect that this effect does not scale: you won't lose another 7% after another 5 years. It might be instrumentally useful, though, for people to believe that they can lose weight via diet and exercise, since a healthy diet and exercise are good for other reasons.
6Lumifer
There is a pretty serious selection bias in that study. I know some people who lost a noticeable amount of weight and kept it off. These people did NOT go to any structured programs. They just did it themselves. I suspect that those who are capable of losing weight (and keeping it off) by themselves just do it and do not show up in the statistics of the programs analyzed in the meta-study linked to. These structured programs select for people who have difficulty in maintaining their weight and so are not representative of the general population.
1ChristianKl
"Healthy diet" and dieting are often two different things. Healthy diet might mean increasing the amount of vegetables in your diet. That's simply good. Reducing your calorie consumption for a few months and then increasing it in what's commonly called the jo-jo effect on the other hand is not healthy.
0RomeoStevens
Why is this surprising? You give someone a major context switch, put them in a structured environment where experts are telling them what to do and doing the hard parts for them (calculating caloric needs, setting up diet and exercise plans), they lose weight. You send them back to their normal lives and they regain the weight. These claims are always based upon acute weight loss programs. Actual habit changes are rare and harder to study. I would expect CBT to be an actually effective acute intervention rather than acute diet and exercise.
0pianoforte611
I hadn't thought of CBT; it does work in a very loose sense of the term, although I wouldn't call weight loss of 4 kg that plateaus after a few months much of a success. I maintain that no non-surgical intervention (that I know of) results in significant long-term weight loss. I would be very excited to hear about one that does.
0RomeoStevens
I would bet that there are no one-time interventions that don't have a regression to pre-treatment levels (except surgery).
1summerstay
It would be a lot harder to make a machine that actually is conscious (phenomenally conscious, meaning it has qualia) than it would be to make one that just acts as if it is conscious (in that sense). It is my impression that most LW commenters think any future machine that acts conscious probably is conscious.
2hyporational
I haven't gotten that impression. The p-zombie problem those other guys talk about is a bit different, since human beings aren't made with a purpose in mind and you'd have to explain why evolution would lead to brains that only mimic conscious behavior. However, if human beings make robots for some purpose, it seems reasonable to program them to behave in a way that mimics the behavior that would be caused by consciousness in humans. This is especially likely since we have hugely popular memes like the Turing test floating about. I tend to believe that much simpler processes than we traditionally attribute consciousness to could be conscious in some rudimentary way. There might even be several conscious processes in my brain working in parallel and overlapping. If this is the case, looking for human-like traits in machines becomes a moot point.
1Capla
I often wonder if my subconscious is actually conscious, just a different consciousness than me.
1hyporational
I actually arrived at this supposedly old idea on my own when I was reading about the incredibly complex enteric nervous system in med school. For some reason it struck me that the brain of my gastrointestinal system might be conscious. But then, thinking about it further, it didn't seem very consistent that only certain bigger neural networks confined by arbitrary anatomical boundaries would be conscious, so I proceeded a bit further from there.
2polymathwannabe
EY has declared that P-zombies are nonsense, but I've had trouble understanding his explanation. Is there any consensus on this?
[-]RowanE100

Summary of my understanding of it: P-zombies require that there be no causal connection between consciousness and, well, anything, including things p-zombie philosophers say about consciousness. If this is the case, then a non-p-zombie philosopher talking about consciousness also isn't doing so for reasons causally connected to the fact that they are conscious. To effectively say "I am conscious, but this is not the cause of my saying so, and I would still say so if I wasn't conscious" is absurd.

1Sabiola
How would you tell the difference? I act like I'm conscious too, how do you know I am?
0satt
A friend I was chatting to dropped a potential example in my lap yesterday. Intuitively, they don't find the idea of humanity being eliminated and replaced by AI necessarily horrifying or even bad. As far as they're concerned, it'd be good for intelligent life to persist in the universe, but why ought it be human, or even human-emulating? (I don't agree with that position normatively but it seems impregnable intellectually.)
4Viliam_Bur
Just to make sure, could this be because you assume that "intelligent life" will automatically be similar to humans in some other aspects? Imagine a galaxy full of intelligent spiders, who only use their intelligence for travelling the space and destroying potentially competing species, but nothing else. A galaxy full of smart torturers who mostly spend their days keeping their prey alive while the acid dissolves the prey's body, so they can enjoy the delicious juice. Only some specialists among them also spend some time doing science and building space rockets. Only this, multiplied by infinity, forever (or as long as the laws of physics permit).
0satt
It could be because they assume that. More likely, I'd guess, they think that some forms of human-displacing intelligence (like your spacefaring smart torturers) would indeed be ghastly and/or utterly unrecognizable to humans — but others need not be.
-1Daniel_Burfoot
Residing in the US and taking part in US society (eg by pursuing a career) is deeply problematic from an ethical point of view. Altruists should seriously consider either migrating or scaling back their career ambitions significantly.
4Lumifer
Interesting. This is in contrast to which societies? To where should altruists emigrate?
8Evan_Gaensbauer
If anyone cares, the effective altruism community has started pondering this question as a group. This might work out for those doing direct work, such as research or advocacy: if they're doing it mostly virtually, what they need the most is Internet access. If a lot of the people they'd be (net)working with as part of their work were also at the same place, it would be even less of a problem. It doesn't seem like this plan would work for those earning to give, as the best ways of earning to give often depend on geography-specific constraints, i.e., working in developed countries. Note that if you perceive this as a bad idea, please share your thoughts, as I'm only aware of its proponents claiming it might be a good idea. It hasn't been criticized, so it's an idea worthy of detractors if criticism is indeed to be had.
3drethelin
Fundamentally, the biggest reason to have a hub, and the biggest barrier to creating a new one, is coordination. Existing hubs are valuable because a lot of the coordination work is done FOR you. People who are effective, smart, and wealthy are already sorted into living in places like NYC and SF for lots of other reasons. You don't have to directly convince or incentivize these people to live there for EA. This is very similar to why MIRI theoretically benefits from being in the Bay Area: they don't have to pay the insanely high cost of attracting people to their area at all, versus attracting them to hang out with and work with MIRI as opposed to Google or whoever. I think it's highly unlikely that, even for the kind of people who are into EA, they could make a new place sufficiently attractive to potential EAs to climb over the mountains of non-coordinated reasons people have to live in existing hubs.
2DanielLC
If I scale back my career ambitions, I won't make as much money, which means that I can't donate as much. This is not a small cost. How can my career do more damage than that opportunity cost?
1ChristianKl
Do you follow some kind of utilitarian framework where you could quantify that problem? Roughly how much money donated to effective charities would make up for the harm caused by participating in US society?
-3Daniel_Burfoot
Thanks for asking; here's an attempt at an answer. I'm going to compare the US (tax rate 40%) to Singapore (tax rate 18%). Since SG has better health care, education, and infrastructure than the US, and also doesn't invade other countries or spy massively on its own citizens, I think it's fair to say that the extra 22% of GDP that the US taxes its citizens is simply squandered.

Let I be income, D be charitable donations, R be the tax rate (0.4 vs 0.18), U be money used in support of lifestyle, and T be taxes paid. Roughly, U = I - T - D, and T = R(I - D). A bit of algebra produces the equation D = I - U/(1-R).

Consider a good programmer-altruist making I = 150K. In the first model, the programmer decides she needs U = 70K to support her lifestyle; the rest she will donate. Then in the US, she will donate D = 33K, and pay T = 47K in taxes. In SG, she will donate D = 64K and pay T = 16K in taxes to achieve the same U.

In the second model, the altruist targets a donation level of D = 60K, and adjusts U so she can meet the target. In the US, she pays T = 36K in taxes and has a lifestyle of U = 54K. In SG, she pays T = 16K in taxes and lives on U = 74K.

So, to answer your question, the programmer living in the US would have to reduce her lifestyle by about $20K/year to achieve the same level of contribution as the programmer in SG. Most other developed countries have tax rates comparable to or higher than the US's, but it's more plausible that in other countries the money goes to things that actually help people.
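The arithmetic is easy to check; here is a minimal sketch of both models, using the comment's own equations and numbers:

```python
# U = I - T - D and T = R * (I - D) rearrange to D = I - U / (1 - R).

def donation_for_lifestyle(income, lifestyle, tax_rate):
    """Model 1: fix lifestyle spending U, donate the rest."""
    return income - lifestyle / (1 - tax_rate)

def lifestyle_for_donation(income, donation, tax_rate):
    """Model 2: fix the donation D, live on what remains after taxes."""
    taxes = tax_rate * (income - donation)
    return income - taxes - donation

income = 150_000
for country, rate in (("US", 0.40), ("SG", 0.18)):
    d = donation_for_lifestyle(income, 70_000, rate)
    u = lifestyle_for_donation(income, 60_000, rate)
    print(f"{country}: fixed U=70K -> D={d:,.0f}; fixed D=60K -> U={u:,.0f}")
```

Running it reproduces the figures above: D of about 33K (US) vs 64K (SG) at a fixed 70K lifestyle, and U of 54K (US) vs about 74K (SG) at a fixed 60K donation.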
7bramflakes
this is the point where alarm bells should start ringing
2Daniel_Burfoot
The comparison is valid for the argument I'm trying to make, which is that by emigrating to SG a person can enhance his or her altruistic contribution while keeping other things like take-home income constant.
4[anonymous]
This is just plain wrong. Mostly because Singapore and the US are different countries in different circumstances. Just to name one, Singapore is tiny. Things are a lot cheaper when you're small. Small countries are sustainable because international trade means you don't have to be self-sufficient, and because alliances with larger countries let you get away with having a weak military. The existence of large countries is pretty important for this dynamic. Now, I'm not saying the US is doing a better job than Singapore. In fact, I think Singapore is probably using its money better, albeit for unrelated reasons. I'm just saying that your analysis is far too simple to be at all useful except perhaps by accident.
3fubarobfusco
Things are a lot cheaper when you're large. It's called "economy of scale".
2[anonymous]
Yes, both effects exist and they apply to different extents in different situations. A good analysis would take both (and a host of other factors) into account and figure out which effect dominates. My point is that this analysis doesn't do that.
4ChristianKl
I think given the same skill level the programmer-altruist making 150K while living in Silicon Valley might very well make 20K less living in Germany, Japan or Singapore.
7Nornagest
I don't know what opportunities in Europe or Asia look like, but here on the US West Coast, you can expect a salary hit of $20K or more if you're a programmer and you move from the Silicon Valley even to a lesser tech hub like Portland. Of course, cost of living will also be a lot lower.
0Capla
I'm not sure what you mean. Can you elaborate, with the other available options perhaps? What should I do instead? To be more specific, what's morally problematic about wanting to be a more successful writer or researcher or therapist?
4Lumifer
The issue is blanket moral condemnation of the whole society. Would you want to become a "more successful writer" in Nazi Germany? “The simple step of a courageous individual is not to take part in the lie." -- Alexander Solzhenitsyn
2faul_sname
...yes? I wouldn't want to write Nazi propaganda, but if I was a romance novel writer and my writing would not significantly affect, for example, the Nazi war effort, I don't see how being a writer in Nazi Germany would be any worse than being a writer anywhere else. In this context, "the lie" of Nazi Germany was not the mere existence of the society, it was specific things people within that society were doing. Romance novels, even very good romance novels, are not a part of that lie by reasonable definitions. ETA: There are certainly better things a person in Nazi Germany could do than writing romance novels. If you accept the mindset that anything that isn't optimally good is bad, then yes, being a writer in Nazi Germany is probably bad. But in that event, moving to Sweden and continuing to write romance novels is no better.
2Lumifer
The key word is "successful". To become a successful romance writer in Nazi Germany would probably require you pay careful attention to certain things. For example, making sure no one who could be construed to be a Jew is ever a hero in your novels. Likely you will have to have a public position on the racial purity of marriages. Would a nice Aryan Fräulein ever be able to find happiness with a non-Aryan? You can't become successful in a dirty society while staying spotlessly clean.
2faul_sname
So? Who said my goal was to stay spotlessly clean? I think more highly of Bill Gates than of Richard Stallman, because as much as Gates was a ruthless and sometimes dishonest businessman, and as much as Stallman does stick to his principles, Gates, overall, has probably improved the human condition far more than Stallman.
1Lumifer
The question was whether "being a writer in Nazi Germany would be any worse than being a writer anywhere else". If you would be happy to wallow in mud, be my guest. The question of how much morality could one maintain while being successful in an oppressive society is an old and very complex one. Ask Russian intelligentsia for details :-/
1NancyLebovitz
Lack of representation isn't the worst thing in the world. If you could write romance novels in Nazi Germany (did they have romance novels?) and the novels are about temporarily and engagingly frustrated love between Aryans, with no nasty stereotypes of non-Aryans, I don't think it's especially awful.
2Douglas_Knight
What a great question! I went to Wikipedia, which paraphrased a quote from the NYT suggesting that romance novels are a recent development. Maybe there was a huge market for Georgette Heyer, but little production in Germany. One thing that is great about Wikipedia is the link to corresponding articles in other languages. "Romance Novel" in English links to an article entitled "Love- and Family-Novels." That suggests that the genres were different, at least at some point in time. That article mentions Hedwig Courths-Mahler as a prolific author who was a supporter of the SS and, I think, registered for censorship. But she rejected the specific censorship, so she published nothing after 1935 and her old books gradually fell out of print. But I'm not sure she really was a romance author, because of the discrepancy of genres.
-1Azathoth123
What do your lovers find attractive about each other? It better be their Aryan traits.
0Nornagest
Well, there is the inconvenient possibility of getting bombed flat in zero to twelve years, depending on what we're calling Nazi Germany.
0RowanE
Considering the example of Nazi Germany is being used as an analogy for the United States, a country not actually at war, taking Allied bombing raids into account amounts to fighting the hypothetical.
1Nornagest
Is it? I was mainly joking -- but there's an underlying point, and that's that economic and political instability tends to correlate with ethical failures. This isn't always going to manifest as winding up on the business end of a major strategic bombing campaign, of course, but perpetrating serious breaches of ethics usually implies that you feel you're dealing with issues serious enough to justify being a little unethical, or that someone's getting correspondingly hacked off at you for them, or both. Either way there are consequences.
0NancyLebovitz
It's a lot safer to abuse people inside your borders than to make a habit of invading other countries. The risk from ethical failure has a lot to do with whether you're hurting people who can fight back.
0Daniel_Burfoot
I'm not sure I want to make blanket moral condemnations. I think Americans are trapped in a badly broken political system, and the more power, prestige, and influence that system has, the more damage it does. Emigration or socioeconomic nonparticipation reduces the power the system has and therefore reduces the damage it does.
0Lumifer
It seems to me you do, first of all by your call to emigrate. Blanket condemnations of societies do not extend to each individual, obviously, and the difference between "condemning the system" and "condemning the society" doesn't look all that big.
0Daniel_Burfoot
I would suggest ANZAC, Germany, Japan, or Singapore. I realized after making this list that those countries have an important property in common, which is that they are run by relatively young political systems. Scandinavia is also good. Most countries are probably ethically better than the US, simply because they are inert: they get an ethical score of zero while the US gets a negative score. (This is supposed to be a response to Lumifer's question below).
4Lumifer
That's a very curious list, notable for absences as well as for inclusions. I am a bit stumped, for I cannot figure out by which criteria was it constructed. Would you care to elaborate why do these countries look to you as the most ethical on the planet?
1Daniel_Burfoot
I don't claim that the list is exhaustive or that the countries I mentioned are ethically great. I just claim that they're ethically better than the US.
0Lumifer
Hmm... Is any Western European country ethically worse than the USA from your point of view? Would Canada make the list? Does any poor country qualify?
0Daniel_Burfoot
In my view Western Europe is mostly inert, so it gets an ethics score of 0, which is better than the US. Some poor countries are probably okay, I wouldn't want to make sweeping claims about them. The problem with most poor countries is that their governments are too corrupt. Canada does make the list, I thought ANZAC stood for Australia, New Zealand And Canada.
1Metus
Modern countries with developed economies, lacking a military force involved in and/or capable of military intervention outside their territory. Maybe his gripe is with the US military, so I just went with that.
5Azathoth123
Which is to say they engage in a lot of free riding on the US military.
2DanielFilan
For reference, ANZAC stands for the "Australia and New Zealand Army Corps" that fought in WWI. If you mean "Australia and New Zealand", then I don't think there's a shorter way of saying that than just listing the two countries.
3Douglas_Knight
"the Antipodes"
-4ChristianKl
The importance of somatics is currently likely the most significant.
1RowanE
I don't know what this sentence means. At least one other person is similarly confused, since you've been downvoted - can you clarify?

Can anyone recommend any good books/resources on dyspraxia?

Ideally something suitable for adults with a reasonable background understanding of psychology. Most of the stuff I've been able to find has been aimed at teachers/parents.

I keep finding the statistic that "one pint of donated blood can save up to 3 lives!" But I can't find the average number of lives saved from donating blood. Does anyone know it, or is anyone able to find it?

1ChristianKl
What do you mean by "lives saved by donating blood" in the first place?

    number of people who would die without any blood donations
    ----------------------------------------------------------
                      liters of blood donated

That's not a very useful number if you want to make personal decisions based on it. If our Western system needed more blood, raising the incentives for donations wouldn't be that hard.
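To see why that average would tell you little about a personal decision, here is a deliberately made-up worked example; every number in it is hypothetical, chosen only to make the arithmetic visible:

    % All quantities below are hypothetical, for illustration only.
    \frac{\text{people who would die without any donations}}{\text{liters donated}}
        = \frac{100\,000}{10\,000\,000\ \text{L}}
        = 0.01\ \text{lives per liter}
    % The average spreads credit evenly over all liters. A personal decision
    % turns on the marginal liter, which, if supply already meets demand,
    % mostly displaces a donation that higher incentives would have bought.

The gap between the average and the marginal donation is exactly what makes the headline statistic unhelpful here.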
3polymathwannabe
WHO prefers all blood donations to be unpaid: "Regular, unpaid voluntary donors are the mainstay of a safe and sustainable blood supply because they are less likely to lie about their health status. Evidence indicates that they are also more likely to keep themselves healthy."
1ChristianKl
Interesting. So the core question seems to be: "How much value is produced by healthy blood donors deciding to donate without incentives, compared to blood that's bought?"
7polymathwannabe
Bought blood has been the subject of an interesting debate:

"Since increased blood shortages are to be expected anyway in the near future, all measures improving the supply of safe blood, including monetary compensation, should be objectively discussed without prejudice."

"Paid blood donation still has its defenders, who cite economic doctrines denying the existence of altruism per se, the inability of most countries with exclusively voluntary donations to achieve self-sufficiency and the supposedly successful use of selected groups of paid donors."

"Majority would consent to free blood donation only in case of emergency or as a family replacement..."

"Several countries are already self-sufficient in blood and blood products, based on a voluntary, unpaid donor system."

"The European Association of the Plasma Products [...] believes that the most important aspect of self-sufficiency is a sufficient supply of safe and efficacious product; the question of paid or unpaid donations is of lower importance."
0Lumifer
The expression "can save up to" should immediately trigger your bullshit detector. It's a reliable signal that the following number is meaningless.

I did a little research to find out whether there are free survey sites that offer "check all answers that apply" questions.

Super Simple Survey probably does, but goddamned if I'll deal with their website to make sure.

On the almost free side, Live Journal enables fairly flexible polls (including checkboxes) for paid accounts, and you can get a paid account for a month for $3. Live Journal is a social media site.

1Manfred
A Doodle poll with "free text" selected might work. http://support.doodle.com/customer/portal/articles/645362-how-to-create-a-poll-

It has been experimentally shown that certain primings and situations increase utilitarian reasoning; for instance, people are more willing to give the "utilitarian" answer to the trolley problem when dealing with strangers, rather than friends. Utilitarians like to claim that this is because people are able to put their biases aside and think more clearly in those situations. But my explanation has always been that it's because these setups are designed to maximise the psychological distance between the subject and the harm they're going to infl... (read more)

4lmm
I highly doubt the subjects were drunk enough to have trouble figuring out that 5 > 1. So one could equally offer an interpretation that e.g. drunk people answered honestly, while sober people wanted to signal that they were too caring to kill someone under any circumstances. It's a fascinating result, but I don't think the interpretation is a slam dunk.
0Scott Garrabrant
I doubt this. I conjecture that more people lie and say they would be utilitarian than lie and say they would not be utilitarian. I hope that I would do the utilitarian thing, but I am not sure that I actually would be able to get myself to do it. (Maybe I would be more likely to actually do it if I were drunk)
2lmm
On LW sure, being utilitarian is the thing you want to signal here. Ordinary people in a bar? I highly doubt it. Being unwilling to kill is far, far more socially acceptable than the utilitarian answer.
2NancyLebovitz
I've been wondering whether utilitarianism undervalues people's loyalty to their own relationships and social networks.
1Lumifer
Field studies are hard work :-D
4ChristianKl
They needed the native habitat for the alcohol consumption.

The following model is my new hypothesis for generating better OKCupid profiles for myself while remaining honest.

  • I brainstorm what I want to include in my profile in a positive way without lying. This may include goal-factoring on what honest signals I'm trying to send. Then, I see how what I brainstormed fits into the different prompts on OKCupid profiles.

  • I generate multiple clause-like chunks for each item/object/quality of myself I'm trying to express in my profile. I then A/B test the options for each item across a cross-section of individuals sim

... (read more)
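Since the plan above turns on A/B testing short profile chunks, here is a minimal sketch of what the comparison step could look like. Everything in it is my own assumption rather than part of the original plan: the variant labels, the reply counts, and the choice of a pooled two-proportion z-test as the decision rule.

    # A minimal sketch, assuming the simplest A/B setup: two phrasings of one
    # profile chunk, each shown to a separate sample, judged by reply rate.
    # All counts are hypothetical illustrations, not real data.
    import math

    def two_proportion_z_test(hits_a, n_a, hits_b, n_b):
        """Return (z, two-sided p) for H0: both variants get equal reply rates."""
        p_a, p_b = hits_a / n_a, hits_b / n_b
        pooled = (hits_a + hits_b) / (n_a + n_b)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_a - p_b) / se
        # Two-sided p-value from the standard normal CDF.
        p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
        return z, p

    # Hypothetical outcome: variant A drew 18 replies from 120 viewers,
    # variant B drew 9 replies from 115.
    z, p = two_proportion_z_test(18, 120, 9, 115)
    print("z = %.2f, p = %.3f" % (z, p))

With samples this small, only large differences reach significance; that sample-size constraint, more than the choice of test, is the practical limit on profile experiments.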
2MrMind
Just test it and report back the result :) That will teach you and us many things we can't see right now.

Two years ago, I wrote this cringe-worthy thing.

I can't tell if things have gotten worse, or if they've stayed the same. I lean toward worse.

4 years ago, I asked a psychiatrist about my soul-crushing Akrasia issues. He prescribed Focalin, at 5mg/day for the first week, then 10mg/day for the second. The first week saw improvements--I didn't feel like I had much choice over what I wound up focusing on, but I actually finished things--the second week did not work at all, and a pile of unpleasant things all hit at once on one of those nights. So we switched to... (read more)

1ChristianKl
Instead of a psychiatrist, maybe a psychologist would be the better option?
0Strangeattractor
Have you considered the idea of learning echolocation? Here is the beginning of a series of blog posts from blind programmer Austin Seraphim about how he learned to use echolocation to navigate the environment and get a spatial sense of things without touching them. He learned it from a teacher from World Access for the Blind. It came to mind because you mentioned a National Federation of the Blind training center, and I'm not sure what you would learn there, but I'm pretty sure they don't offer echolocation training.

Are there lists of effective charities for specific target domains? For social reasons, I sometimes want to donate to a charity focused on some particular cause; but given that constraint, I'd still like to make my donation as effective as possible.

[-][anonymous]20

This article discusses a paper that seems interesting from the perspective of effective altruism and how people's behavior changes based on where they think their money might be going:

http://www.vox.com/2014/10/30/7131345/overhead-free-donations-charity-fundraising-seed-matching-gneezy

If you want a link directly to the paper, that link is both in the article and reposted here:

http://www.sciencemag.org/content/346/6209/632

Short summary: When considering donations, people in the study donated more when they knew their donation was not going to overhead.

It had never occurred to me that the term "applause light" could be taken so literally.

2RomeoStevens
Politician, noun: a person who cheers in-group values professionally.
2Evan_Gaensbauer
My friend recently attended an event at which Ray Kurzweil and an urban planner named Richard Florida were speaking. He didn't like Richard Florida as a speaker, citing how Richard Florida 'sounded just like a politician', and was speaking 'only in applause lights'. I noted it was funny to use 'applause light' in that context, as an auditorium where the speaker looks over a crowd while bathed in light, saying things specifically to garner applause, is just about the most literal interpretation of 'applause light' I could think of.
5Douglas_Knight
"Applause lights" is a metaphor based on a concrete thing that really exists

After reading through the Quantum Physics sequence, I would like to know more about the assumptions and theories behind the idea that an amplitude distribution factorizes, or approximately factorizes. Where would be a good place to learn more about this? I would appreciate some recommendations for journal articles to read, or specific sections of specific books, or if there's another better way to learn this stuff, please let me know.

In the blog posts in the sequence, an analogy comes up a few times, saying that it doesn't make sense to distinguish betwe... (read more)
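For concreteness, here is the standard two-particle version of what "factorizes" means, written generically rather than in the sequence's own notation; the one-particle amplitudes f and g are placeholders of my choosing:

    % A joint amplitude factorizes when it splits into a single product of
    % one-particle factors:
    \psi(x_1, x_2) = f(x_1)\, g(x_2)
    % Entangled states are exactly those that admit no such splitting, e.g.
    \psi(x_1, x_2) = \tfrac{1}{\sqrt{2}} \left( f(x_1)\, g(x_2) + g(x_1)\, f(x_2) \right)
    % with f and g linearly independent. "Approximately factorizes" means the
    % joint amplitude is close, in norm, to some product f(x_1) g(x_2).

The discrete analogue is the Bell state (|00> + |11>)/sqrt(2), the canonical example of an amplitude distribution that does not factorize.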

3Manfred
Relevant wikipedia link. The keyword is something like "many-body wavefunction." But seriously, if you're curious, try to find a textbook online, or a series of video lectures for an introductory course (you might either watch the whole course, or skip to what you want to learn and then try and figure out what the prerequisites are, then do the same thing for the prerequisites).
0DanielLC
I think the factorization is a reference to https://en.wikipedia.org/wiki/Creation_and_annihilation_operators from quantum field theory. I haven't learned quantum field theory though, so I can't comment much. From what I can gather, multiplying something by the creation operator gets you the same state but with an extra particle. I can tell you that at the very minimum, assuming Copenhagen and the minimal amount of physics to allow entanglement to happen at all, whenever two of the same kind of particle are entangled, they have a 50% chance of swapping. If you use MWI, it's that I can find a universe with the same probability density in which those particles are swapped.

I stumbled across an article about Amelia, a program that can supposedly perform low-level human jobs like call center operator. A brief search hasn't turned up anything particularly illuminating. Has this been discussed on LW before?

On the one hand, everything I read about her sounds sufficiently vague that I suspect it's hype (and possibly native advertising). Still, I'm curious about the underlying tech - is it some kind of substantial improvement over past attempts, or is she just Siri++ in the way that Eugene Goostman was a slightly better chatbot?

2Douglas_Knight
Probably Siri-- in the way that Eugene Goostman was a slightly worse chatbot.
0polymathwannabe
The manufacturer's website is merely illustrative.