[Link] Putanumonit - Discarding empathy to save the world

7 Jacobian 06 October 2016 07:03AM

[Link] Putanumonit - You can't always tell people's beliefs from explicit behavior + Trump and blacks

-3 Jacobian 28 September 2016 04:45PM

[Link] Putanumonit - Convincing people to read the Sequences and wondering about "postrationalists"

10 Jacobian 28 September 2016 04:43PM

Link: Thoughts on the basic income pilot, with hedgehogs

3 Jacobian 04 May 2016 05:47PM

I have resisted the urge to promote my blog for many months, but this is literally (per my analysis) for the best cause.

We have also raised a decent amount of money so far, so at least some people were convinced by the arguments and didn't stop at the cute hedgehog pictures.

A Rationalist Guide to OkCupid

24 Jacobian 03 February 2016 08:50PM

There's a lot of data and research on what makes people successful at online dating, but I don't know anyone who actually tried to wholeheartedly apply this to themselves. I decided to be that person: I implemented lessons from data, economics, game theory and of course rationality in my profile and strategy on OkCupid. Shockingly, it worked! I got a lot of great dates, learned a ton and found the love of my life. I didn't expect dating to be my "rationalist win", but it happened.

Here's the first part of the story; I hope you'll find some useful tips and maybe a dollop of inspiration among all the silly jokes.

P.S.

Does anyone know who curates the "Latest on rationality blogs" toolbar? What are the requirements to be included?

 

Don't You Care If It Works? - Part 2

16 Jacobian 30 July 2015 12:22AM
Part 2 – Winstrumental

 Part 1 is here.

The forgotten fifth virtue

Remember, you can't be wrong unless you take a position. Don't fall into that trap.

-- Scott Adams, Dogbert's Top Secret Management Handbook

CronoDAS posted this in a reply to my poem, and I dismissed him because my typical mind is typical. I would never make that mistake, so I didn’t think it was a big deal. But it is. In the comments to part 1 a lot of people are heartily disagreeing with everything I wrote. I admire and respect them. I already made a correction to a part of the post which was wrong. Unfortunately, a lot of people reading this couldn’t disagree if they wanted to, because they don’t have an account. I get that lurking is fun, but if you’re spending hours and hours on LessWrong and not posting anything, I think you’re doing yourself a disservice.

In part 1 I speculated a lot about what goes on in Eliezer’s mind, knowing full well that Eliezer could read this and say that I’m wrong, and I will have no comeback but pure embarrassment. What kind of foolhardy dunce would risk such a thing? Let me answer with another question: how else could I possibly change my mind? After reading them for a year, I have strong opinions on the goals and lessons of the sequences, and the only way to find out if I’m right or wrong is to open myself up to challenge. Worst case: people agree with me and I get sweet sweet karma. Best case: I become wiser. Am I at risk of sticking to an opinion too long just because I wrote it down? Yes, but I know I have that bias, and anything known is something I can adjust for. If I don’t argue, I don’t know what I don’t know.

If you want a chance to change your opinions, you have to put them where they can hurt you.  Or to use an Umeshism:  if you’ve never been proven an idiot on the internet you’re not learning enough from the internet.

Back to Harvard

Why don’t the psychologists at Harvard switch to reviewing nameless CVs? Well, why would they? They are tenured Harvard professors, they already won! The bias didn’t show up in assessing stellar CVs, only those on the margins. So they’re not missing out on any superstars; at worst they hire some gentleman who would be their 32nd strongest faculty member instead of a lady who would be 29th. Would you cause a fuss if you were there?

In “Thinking, Fast and Slow” Kahneman writes that he noticed himself suffering from the halo effect when grading student exams. If a student did well on her first essay, Kahneman gave her the benefit of the doubt on later questions. He switched to grading all the answers to question 1, then all the answers to question 2, and so on. It took more time, but the grades were more accurate and fair. What’s my point? I guess it’s possible to “win at rationality” without a strong incentive, just maybe it takes a Nobel-level rationalist to do so.

Winning isn’t everything?

Vince Lombardi said that “Winning isn’t everything, it’s the only thing.” Aren’t you jealous of him? It’s so simple! I think the most common question asked of our community, mostly by our community, is why we don’t “win” as much as we think we are supposed to. In a rare display of good sense, I’m not going to speculate about why any of you don’t win, I’ll talk about myself.

My job isn’t as interesting, meaningful and full of potential as I would hope for. Why don’t I apply rationality to win at building a better career? Because when I think about it I remember that my job is also decently paying, secure, and full of decent people. My job is easy, and winning is hard. When I read about Nate Soares trying to save the world I feel a little inspired and a little ashamed that I’m not. Nate is almost certainly a better mathematician than I am, but I don’t think there’s a gargantuan gap between us. The big gap between Nate and me is in the desire to win. In my heart of hearts, I just don’t want to save the world as much as he does.

Love wins

What could I possibly want more than saving the world?

There are two ladies; let’s call them Rachel and Leah, since my username is reminiscent of the Biblical Jacob. I met Rachel at the desert well (OkCupid) and we went on a few dates; at the same time Leah also replied to me on OkCupid and we went on a few dates too. Then there were some situations and complications, and given my desire not to be an asshole I decided that I had to choose one. The basic heuristic I would normally use pointed slightly to Rachel, but I kept vacillating back and forth for a few days; they were both much more attractive than any other girl I had ever met through the site. Suddenly it hit me like Chuck Norris: this is an important decision, with huge stakes, one that I would have to make based on incomplete information with my brain biology trying to trip me up every step of the way. Might this not call for some EWOR?

I got to work. I introspected on past relationships and read the relevant science literature to come up with a weighted list of qualities I am looking for to maximize my chances of a happy long-term relationship. I wrote down all the evidence that could affect my assessment of each quality for each lady, and employed every method I could think of to debias myself and give my best guess at the ratings. Then I peeked for the first time at the final score, and it was very surprising. My gut expected Rachel to be slightly ahead, but Leah won handily. I stared at the numbers for a while. Maybe I was too critical here? Overweighted this category there? No! The ghost of Eliezer wouldn’t let me change the bottom line from a formula to a value cell. And then, after 30 minutes of staring at the numbers, my intuition started catching up. For example, my impression from the first date was that Leah wasn’t very funny, and it stuck. When I actually wrote down the evidence, I remembered that she cracked me up once on our second date and a couple of times on our third date as she was slowly beginning to open up and trust me. I gave her a higher rating on humor-compatibility than I thought I would. I closed the spreadsheet and went to sleep. Two days later I broke up with Rachel.
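For the curious, the spreadsheet mechanics are nothing fancy. Here's a minimal sketch in code; the qualities, weights, and ratings below are made-up placeholders for illustration, not my actual list:

```python
# A toy version of the weighted-scoring spreadsheet described above.
# All qualities, weights, and ratings are hypothetical placeholders.

weights = {
    "kindness": 0.30,
    "shared_values": 0.25,
    "humor_compatibility": 0.20,
    "intellectual_spark": 0.15,
    "lifestyle_fit": 0.10,
}

# Ratings on a 1-10 scale, filled in only after writing down the evidence.
ratings = {
    "Rachel": {"kindness": 8, "shared_values": 6, "humor_compatibility": 7,
               "intellectual_spark": 8, "lifestyle_fit": 7},
    "Leah":   {"kindness": 9, "shared_values": 8, "humor_compatibility": 7,
               "intellectual_spark": 8, "lifestyle_fit": 8},
}

def weighted_score(person):
    """Bottom line stays a formula: computed from the ratings, never typed in."""
    return sum(weights[q] * ratings[person][q] for q in weights)

scores = {name: weighted_score(name) for name in ratings}
print(scores)
```

The one design choice that matters is the one the ghost of Eliezer enforces: the final score is a function of the evidence-based ratings, not a cell you can quietly overwrite to match your gut.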

Was I accurate in assessing Leah? Not exactly. She’s above and beyond anything I could’ve guessed. If I don’t “win” a single thing more from my rationality training than the few months I have gotten to spend so far with her, I’ve won enough.

Did I just praise disagreement?

I told this story about Leah to someone at a rationalist gathering. I thought he might congratulate me on my achievement in rationality or denounce me as a cold and heartless robot. His actual reaction caught me completely by surprise: he just flat out didn’t believe me. He said that I probably used a spreadsheet to justify after the fact a decision that my gut had already made. The idea of someone applying something like EWOR, which belongs on internet forums, to something like picking a woman to date was so foreign to him that he rejected it outright. I could almost hear him screaming separate magisteria!

Getting to the points

I’m no good at writing pithy summaries.  If you saw a good point anywhere in those two posts, grab it. I can’t help you. For what it’s worth, here’s Jacobian’s guide to actually using rationality to win:

1.     If you don’t believe you can, Luke, don’t bother. But if you’re not sure whether it works, wouldn’t it be interesting to find out?

2.     Taking ideas seriously requires work, maybe even *gasp* doing math. If you disagree with Eliezer or anyone else on a matter of math or science, sit down and figure it out. Don’t just read stuff, write stuff. Write a bit of code that simulates a probability problem. Derive something from Schrödinger’s equation on a piece of paper. Reading stuff is useful, but it’s not work; rationality is work.

3.     If there’s an opinion that you’re afraid you may be irrationally attached to and you have a real desire to find out the truth, post it on LessWrong. Don’t post things you’re 99.999% sure of; those probably are true. Post what you’re 80% sure about; that’s a 20% chance to really learn something. People will call you an idiot online, that’s what the internet is for. Losing karma is how you become smarter, and it’s quite a thrill.

4.     Rationality will not change your entire life at once. Pick one thing that you want to win at and apply rationality to it. Just one, but one where you’ll know if you won or lost, so “being wiser” doesn’t count. Getting laid counts. If you take an L, you’ll learn a lot. If you win, you’ll know that the force is yours to command.

Who knows, maybe in a few years you’ll think you’re strong enough to save the world or something.

Don't You Care If It Works? - Part 1

4 Jacobian 29 July 2015 02:32PM

 

Part 1 - Epistemic


Prologue - other people

Psychologists at Harvard showed that most people have implicit biases about several groups. Some other Harvard psychologists were the subjects of a study showing that psychologists undervalue CVs with female names. All Harvard psychologists have probably heard about the effect of black names on resumes, since even we have. Surely every psychology department in this country, starting with Harvard, will only review CVs with the names removed? Fat chance.


Caveat lector et scriptor

A couple weeks ago I wrote a poem that makes aspiring rationalists feel better about themselves. Today I'm going to undo that. Disclaimers: This is written with my charity meter set to 5%. Every other paragraph is generalizing from anecdotes and typical-mind-fallacying. A lot of the points I make were made before and better. You should really close this tab and read those other links instead, I won't judge you. I'm not going to write in an academic style with a bibliography at the end, I'm going to write in the sarcastic style my blog would have if I weren't too lazy to start one. I'm also not trying to prove any strong empirical claims, this is BYOE: bring your own evidence. Imagine every sentence starting with "I could be totally wrong" if it makes it more digestible. Inasmuch as any accusations in this post are applicable, they apply to me as well. My goal is to get you worried, because I'm worried. If you read this and you're not worried, you should be. If you are, good!


Disagree to disagree

Edit: in the next paragraph, "Bob" was originally an investment advisor. My thanks to 2irons and Eliezer who pointed out why this is literally the worst example of a job I could give to argue my point.

Is 149 a prime? Take as long as you need to convince yourself (by math or by Google) that it is. Is it unreasonable to have 99.9...% confidence with quite a few nines (and an occasional 7) in there? Now let's say that you have a tax accountant, Bob, a decent guy who seems to be doing a decent job filing your taxes. You start chatting with Bob and he reveals that he's pretty sure that 149 isn't a prime. He doesn't know two numbers whose product is 149; it just feels unprimely to him. You try to reason with him, but he just chides you for being so arrogant in your confidence: can't you just agree to disagree on this one? It's not like either of you is a number theorist. His job is to keep you from getting audited by the IRS, which he does, not to factorize numbers. Are you a little bit worried about trusting Bob with your taxes? What if he actually claimed to be a mathematician?
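If you'd rather convince yourself by code than by Google, a few lines of trial division settle it:

```python
# Trial division up to sqrt(n) is enough: any composite n has a factor <= sqrt(n).
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

print(is_prime(149))  # True: no divisor from 2 through 12 (12*12 = 144 <= 149) divides it
```

Five seconds of computation, and you can put all the nines in your confidence that you like.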

A few weeks ago I started reading beautiful probability and immediately thought that Eliezer is wrong about the stopping rule mattering to inference. I dropped everything and spent the next three hours convincing myself that the stopping rule doesn't matter and I agree with Jaynes and Eliezer. As luck would have it, soon after that the stopping rule question was the topic of discussion at our local LW meetup. A couple people agreed with me and a couple didn't and tried to prove it with math, but most of the room seemed to hold a third opinion: they disagreed but didn't care to find out. I found that position quite mind-boggling. Ostensibly, most people are in that room because we read the sequences and thought that this EWOR (Eliezer's Way Of Rationality) thing is pretty cool. EWOR is an epistemology based on the mathematical rules of probability, and the dude who came up with it apparently does mathematics for a living trying to save the world. It doesn't seem like a stretch to think that if you disagree with Eliezer on a question of probability math, a question that he considers so obvious it requires no explanation, that's a big frickin' deal!
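If you'd rather not take my word for it (or spend your own three hours), here's a toy numerical sketch, assuming a simple coin-flip setup I made up for illustration: 6 heads observed in 20 flips. The "flip exactly 20 times" design and the "keep flipping until the 6th head" design give likelihoods that differ only by a constant factor, so any Bayesian posterior over the coin's bias comes out identical:

```python
# The stopping rule changes the likelihood only by a constant factor,
# so Bayesian inference about theta is unaffected by it.
from math import comb

k, n = 6, 20  # 6 successes, 20 trials (toy numbers)

def binomial_lik(theta):
    # Stopping rule A: flip exactly n times, count the successes.
    return comb(n, k) * theta**k * (1 - theta)**(n - k)

def neg_binomial_lik(theta):
    # Stopping rule B: flip until the k-th success (here it arrived on trial n).
    return comb(n - 1, k - 1) * theta**k * (1 - theta)**(n - k)

# The likelihood *ratios* between any two hypotheses agree exactly, because
# the combinatorial constants cancel; hence both rules yield the same posterior.
for t1, t2 in [(0.2, 0.5), (0.3, 0.7)]:
    r_a = binomial_lik(t1) / binomial_lik(t2)
    r_b = neg_binomial_lik(t1) / neg_binomial_lik(t2)
    print(round(r_a, 6), round(r_b, 6))
```

The constants `comb(n, k)` and `comb(n - 1, k - 1)` differ, but they multiply every hypothesis equally, which is exactly why the stopping rule drops out of the inference.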


Authority screens off that other authority you heard from afterwards

[Chart: Opinion change]

This is a chart that I made because I got excited about learning ggplot2 in R. On the right side of the chart are a lot of bright red dots below the very top: people who believe in MIRI but also read the quantum physics sequence and don't think that MWI is very likely. Some of them understood the question of P(MWI) to be about whether MWI is the one and only exact truth, but I'm sure that several of them read it the way I did, roughly as: 1 - P(collapse is true given current evidence). A lot of these people are congratulating themselves on avoiding cultishness. In the comments they mention other bloggers (or maybe even physicists!) who think that collapse is totally Beatles and MWI is Bieber.

Hold on, why did Eliezer even take all this time to write a huge quantum physics sequence? Here's how I see it: it's not to settle a point about some scientific dispute. It's to show that a person with some spare time and a little above-average IQ can figure out something beyond people of massive authority, on the very subject of their authority, even if that subject is as scary as QM. You know what the median IQ of the dots on the chart above is? 139 points! 139!!! And you know who has spare time? Someone who read 100% of the sequences!

Why should you even care about this? Because Eliezer does. He's so fanatically confident about MWI not for its own sake, but because he's saying (at least to my ears): "This is my EWOR epistemology. It may not make you skinny and rich, but if it will do just one thing for you, it will prevent you from being very very confident in beliefs that could be proven wrong. So if I'm very very confident about a question of science or math, I'm sticking my neck out and betting my entire epistemology on being right, so you can check it for yourself." In fact, Eliezer will totally judge your epistemology based on your belief in MWI. Won't you do the same to him when he asks you to?

I'm a bright red dot on the chart above. Here are two statements I believe on authority, because it is beyond my cognitive powers and domain knowledge to assess them as arguments: "strong AI will break out of any box humans can design", "CEV is the best basis for programming morality in AI". I believe them because I tested Eliezer on stopping rules and MWI and 20 other things, and came away convinced each time. My confidence of 98% in MWI is an upper bound on my confidence in whatever Eliezer tells me about anything. If I became convinced that collapse is correct or that stopping rules affect Bayesian inference, I'd email Nate Soares at once and ask for a refund on my MIRI donation.

By the way, this would be a good time to stop reading and give MIRI some money. That’s OK, I'll wait.


The good book

A man comes to you with a book and says that it will grant you the only wisdom worth having, and as a side effect it may save your eternal soul. You read the book cover to cover and decide that the ideas you thought were nice are probably true, the ones you didn't aren't, and you really like the bit with horses. Everyone on LW makes fun of you for claiming to take seriously something you don’t. Y’all see where this is going, don't you? Yes, it's fun to read the sequences for the "insight porn". It's also fun to read the Old Testament for the porn porn. But, maybe it could be more? Wouldn't it be kinda cool if you could read a book and become an epistemic superman, proving experts wrong in their own domains and being vindicated? Or maybe some important questions are going to come up in your life and you'll need to know the actual true answers? Or at least some questions you can bet $20 on with your friends and win?

Don't you want to know if this thing even works?

 

To be continued

Part 2 is here. In it: whining is ceased, arguments are argued about, motivations are explained, love is found, and points are taken.


The Other Path - a poem

17 Jacobian 15 July 2015 01:40PM

Inspired by the call to rationalist poetry fans and informed by years of writing satire.



The Other Path

When you ask for truth and are offered illusion,

When senses deceive you and reasoning lies

I'll show you the path through the murky confusion,

Just follow and close your eyes.

 

On matters of fact there's no fact of the matter,

All moral and virtue are fashion and fad,

So dress in the creed that will fit you and flatter

No one can argue with that.

 

Some puzzles unyielding and mysteries ancient

No formula ever could hope to describe.

How proudly the scientist seeks explanations

How clearly in vain she strives.

 

Make cases like fortifications of metal,

No rival assertion shall ever go past.

Be carefree in choosing the side of the battle

But guard it until your last.

 

The sages declared that to know is to suffer,

Where wisdom is gained there is innocence lost

And learning is danger – best leave it to others,

Avoid it at any cost.

 

Some fools declare war on their very own nature

Their weapons are evidence, reason and math.

Don't offer compassion to those wretched creatures,

They've chosen the other path.