Comment author: [deleted] 05 May 2016 03:30:15PM 0 points [-]

Did you see the links in OP's post? As I clarified, links take readers away from a page. Who knows whether they will then become distracted by a link on that new page, etc., etc., and then you lose readers unnecessarily. Links should be purposeful, meant to verify data/claims.

Pictures, like links, should center on the main purpose of the post, but again, purposefully. The pic from Elf, for example, added nothing to the post (that's not even a quote from the movie) and said something we already know: OP wants us to donate. No new information. I said something like the hedgehog would be more acceptable, given that it adds a bit of personality and the hedgehog hails from the area being discussed--a weak connection, but something light people can enjoy.

Again, from my experience in journalism, very few articles need more than one picture, and if you're going to include a picture it had better add to the point, not just stand as something pretty that distracts from the piece.

Comment author: Jacobian 05 May 2016 06:45:34PM 2 points [-]

Intuitively, I'm mostly optimizing for having readers come back to the blog. With that said, I'm making no money off it (just today I paid $80 to remove ads from Putanumonit) and I'm trying to keep readers by being interesting and educational and not by being clickbaity or controversial.

Regarding pictures, there is this piece of advice by Scott (number 2) to break up walls of text with pictures, although one does notice that Scott doesn't follow that advice himself. I think I'll stick with it; I don't think the pictures are that distracting.

Some people like links and some people hate them; I think that people who are familiar with my blog just learn to ignore most of them. I can see how many links are clicked per view of my posts, and it's not a lot. With that said, probably at least 20% of my links are useless and distracting. I'll do my best to rein in the excessive linking :)

Link: Thoughts on the basic income pilot, with hedgehogs

3 Jacobian 04 May 2016 05:47PM

I have resisted the urge of promoting my blog for many months, but this is literally (per my analysis) for the best cause.

We have also raised a decent amount of money so far, so at least some people were convinced by the arguments and didn't stop at the cute hedgehog pictures.

Comment author: Huluk 26 March 2016 12:55:37AM *  26 points [-]

[Survey Taken Thread]

By ancient tradition, if you take the survey you may comment saying you have done so here, and people will upvote you and you will get karma.

Let's make these comments a reply to this post. That way we continue the tradition, but keep the discussion a bit cleaner.

Comment author: Jacobian 11 April 2016 06:54:17PM 11 points [-]

Lo, I have taken the survey.

Comment author: Viliam 29 March 2016 08:28:57PM *  34 points [-]

Yeah, it's nice when your opponents volunteer to relieve you of the burden of proving that they are irrational.

But seriously, I don't even know where to start. Perhaps here: Articles written on most popular websites are clickbait. It means that their primary purpose is to make you read the article after seeing the headline, and then share it either because you love it or because you hate it. And that's what you did. Mission accomplished.

Another article on the same website explains why animal rights movements are oppressive. (I am not going to link it, but here are the arguments for the curious readers: because it's wrong to care about animals while there are more important causes on this planet such as people being oppressed; because vegans and vegetarians don't acknowledge that vegan or vegetarian food can be expensive; because describing animals as male and female marginalizes trans people; and because protecting animals is colonialistic against native people who hunt animals as part of their tradition.) Obviously, the whole article is an exercise in making the reader scream and share the article to show other readers how crazy it is. This is exactly what the authors and editors get paid for; this is how you shovel the sweet AdSense money on them. So the only winning move is not to play this game.

.

I may be too extreme in this aspect, but when I talk with most people, I simply assume that almost everything they say is a metaphor for something (usually for their feelings), and almost nothing is to be taken literally. This is a normal way of communication among people who couldn't program a Friendly AI if their very lives depended on it.

When someone says "rationality is bad", the correct translation is probably something like "I hate my father because he criticized me a lot and didn't play with me; and my father believes he is smart, and he makes smartness his applause light; and this is why I hate everything that sounds like smartness". You cannot argue against that. (If you try anyway, the person will not remember any specific thing you said, they will only remember that you are just as horrible a person as their father.) This is how people talk. This is how people think. And they understand each other, so when another person who also hates their father hears it, they will get the message, and say something like "yeah, exactly like you said, rationality is stupid". And then they know they can trust each other on the emotional level.

Here is a short dictionary containing the idioms from the article:

  • everydayfeminism.com = "I hate my father"
  • we should abolish prisons, police = "I hate my father"
  • cisheteropatriarchy = "I hate my father; but I also blame my mother for staying with him"
  • those who are committed to social justice = "my friends, who also hate their fathers"
  • we have to stop placing limits on ourselves = "we should steal some money and get high"
  • Being Rational Has No Inherent Value = "I don't even respect my father"
  • my very existence is irrational = "my father disapproves of my lifestyle"
  • The only logical time for abolition and decolonization is now = "I wish I had the courage to tell my parents right now how much I am angry at them"

You are overanalyzing it, searching for a logical structure when there is none. If you treat the article as a free-form poem, you will get much closer to the essence. You don't share the author's emotions, that's why the text rubs you the wrong way.

And by the way, other political groups do similar things, just in a different flavor (and perhaps intensity).

Comment author: Jacobian 30 March 2016 09:18:18PM *  5 points [-]

I simply assume that almost everything they say is a metaphor for something (usually for their feelings).

It has taken me many years to realize that, but the more I look for it the more I notice it. I have a friend on Facebook who's a Syrian living in NYC, she keeps posting things like "Here's the proof Assad is actually a spy planted by the Israeli Mossad to cause genocide in Syria". I kept asking her how she could possibly believe it and got very confusing responses that didn't really address the question. And then it hit me: for her and for many Arabs "X is a Mossad spy" is simply an eloquent way of saying "I hate X", it has literally nothing to do with the Mossad at all. My friend was confused why I even bring facts about the Mossad into a discussion of whether Assad is a Mossad spy.

Viliam gave enough SJ examples, so I'll give one from the other side: there was a campaign by some famous PUA to boycott Mad Max: Fury Road because it's feminist propaganda. Hold on, isn't that the movie where the attractive women in skimpy clothes are called "breeders" whose job is to make babies? And then I realized:

  • For PUAs "X is feminist propaganda" = "I hate X"
  • For some Russians "X is a CIA plot" = "I hate X"
  • For some Evangelical Christians "X is from the Devil" = "I hate X"
  • For some communists "X is capitalist" = "I hate X"
  • For some capitalists "X is communist" = "I hate X"

Etcetera, etcetera.

Comment author: Lumifer 29 March 2016 04:55:14PM 3 points [-]

Getting five downvotes on this immediately after posting is bizarre

You are surprised? LW automatically downvotes polarizing/uncomfortable content and that goes double for anything that mentions SJWs.

Or, to be a bit more precise, you are allowed here to make people uncomfortable with the scenario of a giant paperclip chasing them. But you are not allowed to make people uncomfortable about their tribal allegiances.

Comment author: Jacobian 29 March 2016 05:40:56PM *  3 points [-]

LW automatically downvotes...

I would love to hear the evidence you have to back that very broad statement. It sounds like you're against doing this yourself, and so am I, and so is Phil. That's 0/3 so far here.

I upvoted Phil's post about word per person and rigor because it was an interesting and novel idea backed by actual research and analysis (whether I agree with it or not). I downvoted this post because it's a trite idea backed by no analysis other than taking word definitions out of context.

If you really feel that LW is now entirely populated by people who wage petty wars over tribal grievances (like 90% of the rest of the internet), then what are you still doing here?

Comment author: Jacobian 29 March 2016 04:13:55PM *  13 points [-]

Phil, I think you're falling into the trap you accuse Pham of: getting confused about words and how people use them. Like you've noticed, Pham doesn't use "rationality" to mean the same thing we do. From the article:

What if those imperialism-driven Europeans, all passionate and roused about Manifest Destiny, were encouraged to stop and reconsider whether their violent plans were rational? We might possibly have a world that isn’t filled to the brim with oppression.

In the article Pham vacillates between using "rational" to mean "reasonably likely to be achieved" and to mean "culturally acceptable". The point of their article is that being told that decolonization is "irrational" (i.e. unlikely to be achieved and/or unpopular) doesn't mean that people shouldn't pursue it as a goal. Let's call these definitions Pham.rationality. They, especially the second one, have very little to do with "representing an accurate picture of reality" or however you want to define LW.rationality.

But it isn't just a sign of how insane the social justice movement is—it has clues to how it got that way. The author came to hate "rationality" because s/he thought "rationality" meant "conventionality".

Let me get this straight: you define Insane = NOT(LW.rationality), see an article that says: SJ = NOT(Pham.rationality), and then happily conclude that SJ = Insane because "rationality".

You could have attacked the article for having an undesirable goal (i.e. abolishing the police). You could have attacked it for jumping between two definitions, and creating a deepity: one interpretation is banal (we should push for decolonialization even if it's unpopular), the other is plain false (we will achieve decolonialization even if it's utterly impossible). You could have attacked the article for incorrect facts, incoherent structure and extremely poor writing. There's enough ammunition there to make whatever denigrating point you want to make about SJ writing.

What you shouldn't get away with is seeing someone else define a word in a confusing/misleading way to make a point and then immediately doing the same thing.

My most charitable interpretation of your post is that you think that:

  • A. Pham is just a stupid person and was thus told by their friends that they are irrational (i.e. NOT PhamFriends.rational).
  • B. They have thus decided that being stupid is a virtue.

A is both unfounded speculation and unnecessary ad-hominem, B still fails as a logical argument because Pham doesn't use their friends' definition of rationality in the article.

Phil, I have read a lot of the great stuff that you've posted here on LW, this post does your reputation a disservice.

Comment author: Duncan_Sabien 16 February 2016 04:34:19AM *  25 points [-]

[CFAR's newest instructor, here; longtime educator and transhumanist-in-theory with practical confusions]

ScottL—I'm just coming out of the third workshop in six weeks, and flying to Boston to give some talks, so I'm exhausted and haven't had a chance to read through your compilation yet. I will, soon (+1 for the effort you've put forth), but in the meantime I wanted to pop in and give some thoughts on the comments thus far.

Benito, Rainbow, and Crux—+1 for all three perspectives.

Can CFAR content be learned from a compilation or writeup? Yes. After all, it's not magic—it was developed by careful thinkers looking at research and at their own cognition, iterated over 20+ formal attempts (and literally hundreds of informal ones) to share those same insights with others. It's complex, but it's also fundamentally discoverable.

However, there are three large problems (as I see it, speaking as the least experienced staff member). The first is the most obvious—it's hard. It's hard like learning karate from text descriptions is hard. If you go about this properly, without being sloppy or taking shortcuts or making dangerous assumptions, then you're in for a LONG, difficult haul. Speaking as someone who pieced together the discipline of parkour back in 2003, from scattered terrible videos (pre-YouTube) and a few internet comment boards—pulling together a cohesive and working practice from even the best writeups is a tremendously difficult task. It's better on almost every axis with instructors, mentors, friends, companions—people to help you avoid the biggest pitfalls, help you understand the subtle points, tease apart the interesting implications, shore up your motivation, assist you in seeing your own mistakes and weaknesses. None of that is impossible on your own, but it's somewhere between one and two orders of magnitude more efficient and more efficacious with guidance.

The second is corruption. As Benito points out, a large part of the problem of rationality instruction is finding things that actually work—if mere knowledge of the flaws were sufficient to protect us from the flaws, then everybody who cared enough could just slog through Heuristics and Biases and be something like 70% of the way there. We've already put several thousand thought-hours and 20+ iterations into tinkering with content, scaffolding, presentation, and practice. What we've got works pretty well, but progress has been incremental and cumulative. What we had before worked less well, and what we had before that worked less well still.

Picture throwing out a complete text version of our current best practices, exposing it to the forces of memetic selection and evolution. Fragments would get seized upon, and quoted out of context; bits of it would get mixed up with this and that; things would be presented out of order and read out of order; people would skip and skim and possibly completely ignore sections they THOUGHT they already knew because the title or the first paragraph seemed mundane or familiar. And there wouldn't be the strong selection pressure toward clarity and cohesion that we've been providing, top-down—instead, there would be selection pressures for what's memorable, pithy, or easily crystallized, none of which would be likely to drive the art forward and make the content BETTER. Each step away from our current best practices is much more likely to be a decrease in quality rather than an increase, and though you and others here on LW are likely to have the necessary curiosity and diligence to "do it right," that doesn't mean that the majority of people exposed to the memes in this way share your autodidactic rigor.

The third problem (related to the second) is idea inoculation. Having seen crappy, distorted versions of the CFAR curriculum (or having attempted to absorb it from text, and failed), a typical human would then be much, much less receptive to other, better explanations in the future. This is why, even within the context of the workshop, we often ask that participants not read the relevant sections of their workbooks until AFTER a given lecture or activity. I'm going to assume this is a familiar concept, and not spend too many words on it, but suffice it to say that I believe an uncanny valley version of our curriculum trending on the internet for one day could produce enough anti-rationality in the general population to counterbalance all of our efforts so far.

None of these problems are absolute in nature. The Sequences exist, and are known to be helpful. And clearly, Rainbow and Benito have gotten at least some value out of the writeups they've gleaned and assembled themselves. Again, there's nothing to stop others from having the same insights we've had, and there's nothing to stop a diligent autodidact from connecting scattered dots.

But they are statistical. They are real. They become quite scary, once you start talking big numbers of people and the free exchange of content-sans-context. And that's without even talking about other concerns like framing, signaling, inferential distance, etc. Lots of worms in this can.

So the question then becomes—what to do?

Thus far, CFAR hasn't had the cycles to spend time creating the (let's say) 80-20 version of their content. Remember that it's a fledgling startup with fewer than ten full-time staff members (when Pete and I were hired, it only had six). They were pouring every 60- and 70- and 80-hour week into trying to squeeze an extra percentage point of comprehension or efficacy out of every activity, every explanation. In other words, the objection wasn't fundamental (to the best of my understanding, which may be wrong) ... it was pragmatic. Creating packaged material fit for the general public wasn't anywhere near the top of the list, which was headed by "create material that's actually epistemically sound and demonstrably effective."

For my own part, I think this belongs in our near future. I think it's an area to be approached cautiously, in incremental steps with lots of data collection, but yes—I'd like to see some of our simpler, core techniques made broadly available. I'd like to see scalability in the things we think we can actually explain on paper. And if it goes well, I'd like to see more and more of it. I'm personally taking steps in this direction (tackling and improving our written content is one of my primary tasks, and I've started with simple things like drafting a glossary and tracking which definitions leave the reader confused (or worse, confident but wrong)).

But we have to a) find the time and manpower to actually run the experiment, and b) find content that genuinely works. Those are both non-trivially difficult, and they're both trading off against the continued expansion and improvement of our version of the art of rationality. I've only just now taken on enough responsibility myself to free up a few of the core staff's hours—and that's mostly gone into reducing their workload from insane to merely crazy. It hasn't actually created sufficient surplus to allow online tutorials to meet the threshold for worth-the-risks.

In short, despite Crux's entirely appropriate and reasonable skepticism, the answer has to be (for the immediate future)—either you find us trustworthy, or you don't (and if you don't, maybe you don't want our material anyway?). I, for one, don't think published material threatens workshop revenue, any more than online tutorials threaten martial arts dojos. There will always be obvious benefits to attending an intensive, collaborative workshop with instructors who know what they're doing, and there will always be people who recognize that the value is worth the cost, particularly given our track record. Our reasons for having refrained from publication thus far aren't monetary (or, to be more precise, money isn't in the top five on what's actually a fairly long and considered list).

Instead, it's that we actually care about getting it right. We don't want to poison the well, we don't want to break the very thing we're trying to protect, and as a member of a group with something that at least resembles expertise (if you don't want to credit us as actual experts), I think that requires a lot more work on our end, first.

That being said, if you have questions about the content above, or about what CFAR is doing this week and this month and this year, or if you're struggling with creating the art of rationality yourself and you've had novel and interesting insights—

Well. You know where to find us, and we don't know where to find you, or we'd have already reached out.

Hope this helps,

  • Duncan
Comment author: Jacobian 17 February 2016 04:59:32PM *  11 points [-]

Can CFAR content be learned from a compilation or writeup?

A year ago I considered attending the CFAR workshop in Boston, one of the things that stopped me was that I actually read LW a lot and applied a bunch of it in real life. Kenzi and Critch at CFAR tried gently to explain how a workshop was qualitatively different from reading and trying stuff yourself, but I didn't give them the opportunity to convince me.

This week I came back from the CFAR workshop in New York, and I actually felt my life changing on the evening of the third day. Yes, time will tell if that actually happened, but I have enough evidence even a week out that it's going that way. How could I think that I could get that benefit by myself with no help? It scares me how close I was to never having gone to CFAR. I'm going to try to write what would have convinced "Jacobian-2015" that he should attend a workshop.

  • Compound interest. You need motivation to work on your motivation. You need an accurate map of knowing how to attain accurate maps. It takes a jolt of rationality to improve your rationality. There isn't an encyclopedia of CFAR material, but the material is incredibly high quality. This causes it to compound and improve other things you learn, like the difference between $100 under your mattress (i.e. the Sequences) and $20 that grows at 20% a year.

  • Blind spots. You can't lift yourself up by your hair, you can't see the mistaken beliefs you refuse to question and you can't solve the problems in your life you refuse to admit. Some things simply can't be done without other people helping you out. Most of my progress at CFAR was made in the hours of focused small group "therapy" sessions. The first thing I did when I got back was to set up a CFAR workgroup (can I trademark "Agenty Flock"?) with friends from the workshop.

  • The moon. This is either really important or completely meaningless, I don't know because I'm not there yet. The point of CFAR isn't to learn a bunch of techniques but to achieve the mindset in which the techniques become natural, indistinguishable from your own thinking, and you are able to generate them yourself at will. The techniques are the fingers pointing at the moon, the mindset is the moon. I did my BSc in physics, and I retain less declarative knowledge of physics than someone who read the Feynman lectures. Still, I think I wouldn't have fallen for the radiator plate trick. Not because I can do integrals of thermal conduction, but because I spent hours in a lab trying to get some dumb thermodynamics experiment to work the way I believed it should, and it refused. I don't know if I really attained a physics mindset in undergrad, and I don't know if the applied rationality mindset is attainable from a CFAR workshop. I know that it would take a super-mind to attain it from reading stuff online.
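The compound-interest bullet above can be made concrete with a toy calculation using its own numbers (a sketch only; the variable names and the year-by-year framing are my own, not anything from the comment):

```python
# Toy version of the compound-interest analogy:
# $100 sitting static under the mattress vs. $20 compounding at 20%/year.
mattress = 100.0
growing = 20.0
years = 0
while growing < mattress:
    growing *= 1.2  # 20% annual growth
    years += 1
print(years)  # the compounding pile overtakes the static one in year 9
```

The point of the analogy survives the arithmetic: even a 5x head start is overtaken within a decade by something that compounds.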

ScottL, your write-up is great. The only thing I don't like about it is that you called it "CFAR canon", isn't it troubling that that's what would show up in search results from now on? I would at least change the word "CFAR" to "applied rationality". I'm really concerned that this write-up may cause some people to decide against attending a workshop they otherwise would've gone to. To everyone who reads this "canon" and considers going to a workshop, ask yourself this:

How many actual CFAR alumni do I know who feel that they could have gained most of the value by themselves?

Count my experience as a point of evidence against.

Comment author: leplen 14 February 2016 06:20:42PM 1 point [-]

"There are 729,500 single women my age in New York City. My picture and profile successfully filtered out 729,499 of them and left me with the one I was looking for."

I know this is sort of meant as a joke, but I feel like one of the more interesting questions that could be addressed in an analysis like this is what percentage of the women in the dating pool could you actually have had a successful relationship with. How strong is your filter and how strong does it need to be? There's a tension between trying to find/obtain the best of many possible good options, and trying to find the one of a handful of good options in a haystack of bad ones.

I'm somewhat amazed that you looked at 300 profiles, read 60 of them, and liked 20 of them enough to send them messages. Only 1 in 5 potential matches met your standards for appearance, but 1 in 3 met your standards based on what they wrote, and that's not even taking into account the difference in difficulty between reading a profile and composing a message.

You make a big deal about the number of people available online, but in your previous article on soccer players you implied that a shift in the average has a much larger effect on the tails than on the bulk of the distribution. If you're really looking for mates in the tails of the distribution, and 1 in 729,500 is about a 4.5-sigma event, then being involved in organizations whose members are much more like your ideal mate on average may be a better strategy than online dating.

Comment author: Jacobian 15 February 2016 06:49:44PM 1 point [-]

leplen, thanks for the feedback. Here are my thoughts.

  1. ChristianKl is correct that I looked at match percentage, but mostly I felt that I would learn more about someone from a quick chat than from their profile, so I wasn't limited to "perfectly written" profiles. Attractiveness is ultimately less important, but easier to judge.

  2. I didn't think of the "average vs. total" point, but I liked it. Let's do the math: 1/729,500 is 4.68 sigma. If I were picking from a group that was a whole SD better, I would only need to meet a 3.68-sigma girl; that's 1 in 8,900. I don't know if I could think of a group that large and that much better in my life right now; the only thing that comes to mind is the student body of a large and excellent university. Your point would apply to a 20-year-old undergrad at Columbia or NYU: they should look at other students before the rest of New York City.
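For the curious, the sigma figures in this exchange can be double-checked with the standard library alone. This is a rough sketch (the function names and the bisection bounds are my own choices, not anything from the comments), inverting the one-sided normal tail probability:

```python
from math import erfc, sqrt

def normal_sf(z):
    """Upper-tail probability of a standard normal, P(Z > z)."""
    return 0.5 * erfc(z / sqrt(2))

def sigma_for_rarity(one_in_n):
    """Bisect for the z such that P(Z > z) = 1/one_in_n."""
    target = 1.0 / one_in_n
    lo, hi = 0.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if normal_sf(mid) > target:
            lo = mid  # tail still too heavy; move right
        else:
            hi = mid
    return (lo + hi) / 2

print(round(sigma_for_rarity(729_500), 1))  # about 4.7
print(round(sigma_for_rarity(8_900), 1))    # about 3.7
```

So the figures in the thread (4.68 and 3.68 sigma) are consistent with a one-sided tail to within rounding.
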

Comment author: Jacobian 11 February 2016 11:41:14PM 1 point [-]

If y'all missed it, part 1.5 is up!

Comment author: LessWrong 05 February 2016 07:44:10AM 0 points [-]

There's a difference: on OKC you can filter people based on whatever; in non-OKC situations you don't have that information available to you. I only have the woman's looks to go on (and the women have a perv-o-meter).

Re-reading your article I think a better way to describe this is "approaches with comparative advantages" and "approaches without comparative advantages".

Comment author: Jacobian 07 February 2016 04:36:37PM 0 points [-]

I think what I referred to as "comparative advantage" is something different from what you mean. I was speaking to the advantages of the person hitting on someone: all else being equal, you should focus on fora where your skills come into play. If that forum is large enough (like OkCupid), I think it makes sense to focus on it exclusively. In any situation where you meet potential dates, specific skills matter and some people are better than others.

For example, I'm at a strong comparative disadvantage when hitting on people on the subway because I have a really goofy accent that people don't expect from my appearance, and it confuses them. Someone who looks hot and has a great voice will do well on the subway without needing any context or background info on the person they are talking to.
