
Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

First(?) Rationalist elected to state government

60 Eneasz 07 November 2014 02:30AM

Has no one else mentioned this on LW yet?

Elizabeth Edwards has been elected as a New Hampshire State Rep, self-identifies as a Rationalist and explicitly mentions Less Wrong in her first post-election blog post.

Sorry if this is a repost.

Maybe you want to maximise paperclips too

41 dougclow 30 October 2014 09:40PM

As most LWers will know, Clippy the Paperclip Maximiser is a superintelligence who wants to tile the universe with paperclips. The LessWrong wiki entry for Paperclip Maximizer says that:

The goal of maximizing paperclips is chosen for illustrative purposes because it is very unlikely to be implemented

I think that a massively powerful star-faring entity - whether a Friendly AI, a far-future human civilisation, aliens, or whatever - might indeed end up essentially converting huge swathes of matter into paperclips. Whether a massively powerful star-faring entity is likely to arise is, of course, a separate question. But if it does arise, it could well want to tile the universe with paperclips.

Let me explain.


To travel across the stars and achieve whatever noble goals you might have (assuming they scale up), you are going to want energy. A lot of energy. Where do you get it? Well, at interstellar scales, your only options are nuclear fusion or maybe fission.

Iron has the highest binding energy per nucleon of any element. If you have elements lighter than iron, you can release energy through nuclear fusion - sticking atoms together to make bigger ones. If you have elements heavier than iron, you can release energy through nuclear fission - splitting atoms apart to make smaller ones. We can do this now for a handful of elements (mostly selected isotopes of uranium, plutonium and hydrogen) but we don’t know how to do this for most of the others - yet. But it looks thermodynamically possible. So if you are a massively powerful and massively clever galaxy-hopping agent, you can extract maximum energy for your purposes by taking up all the non-ferrous matter you can find and turning it into iron, getting energy through fusion or fission as appropriate.
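As a rough sanity check on this argument, the energy budget can be sketched in a few lines. This is a hedged back-of-the-envelope in Python; the binding-energy-per-nucleon values are rounded from standard nuclear data tables, and the particular isotopes are just illustrative:

```python
# Back-of-the-envelope: energy available per nucleon when converting
# matter to iron. Binding energies per nucleon in MeV, rounded from
# standard nuclear data tables (illustrative isotopes only).
BINDING_MEV_PER_NUCLEON = {
    "H-1": 0.0,     # a lone proton has no binding energy
    "He-4": 7.07,
    "U-238": 7.57,
    "Fe-56": 8.79,  # near the peak of the binding-energy curve
}

def energy_released(start, end="Fe-56"):
    """MeV released per nucleon when transmuting `start` into `end`."""
    return BINDING_MEV_PER_NUCLEON[end] - BINDING_MEV_PER_NUCLEON[start]

# Fusing hydrogen up to iron yields far more energy per nucleon than
# fissioning uranium down to iron, but both are net-positive:
print(round(energy_released("H-1"), 2))    # 8.79 MeV/nucleon via fusion
print(round(energy_released("U-238"), 2))  # 1.22 MeV/nucleon via fission
```

Either way, iron is where the energy extraction bottoms out.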

You leave behind you a cold, dark trail of iron.

That seems a little grim. If you have any aesthetic sense, you might want to make it prettier, to leave an enduring sign of values beyond mere energy acquisition. With careful engineering, it would take only a tiny, tiny amount of extra effort to leave the iron arranged into beautiful shapes. Curves are nice. What do you call a lump of iron arranged into an artfully-twisted shape? I think we could reasonably call it a paperclip.

Over time, the amount of space that you’ve visited and harvested for energy will increase, and the amount of space available for your noble goals - or for anyone else’s - will decrease. Gradually but steadily, you are converting the universe into artfully-twisted pieces of iron. To an onlooker who doesn’t see or understand your noble goals, you will look a lot like a paperclip maximiser. In Eliezer’s terms, your desire to do so is an instrumental value, not a terminal value. But - conditional on my wild speculations about energy sources here being correct - it’s what you’ll do.

Bayes Academy: Development report 1

39 Kaj_Sotala 19 November 2014 10:35PM

Some of you may remember me proposing a game idea that went by the name of The Fundamental Question. Some of you may also remember me talking a lot about developing an educational game about Bayesian Networks for my MSc thesis, but not actually showing you much in the way of results.

Insert the usual excuses here. But thanks to SSRIs and all kinds of other stuff, I'm now finally on track towards actually accomplishing something. Here's a report on a very early prototype.

This game has basically two goals: to teach its players something about Bayesian networks and probabilistic reasoning, and to be fun. (And third, to let me graduate by giving me material for my Master's thesis.)

We start with the main character stating that she is nervous. Hitting any key, the player proceeds through a number of lines of internal monologue:

I am nervous.

I’m standing at the gates of the Academy, the school where my brother Opin was studying when he disappeared. When we asked the school to investigate, they were oddly reluctant, and told us to drop the issue.

The police were more helpful at first, until they got in contact with the school. Then they actually started threatening us, and told us that we would get thrown in prison if we didn’t forget about Opin.

That was three years ago. Ever since it happened, I’ve been studying hard to make sure that I could join the Academy once I was old enough, to find out what exactly happened to Opin. The answer lies somewhere inside the Academy gates, I’m sure of it.

Now I’m finally 16, and facing the Academy entrance exams. I have to do everything I can to pass them, and I have to keep my relation to Opin a secret, too. 

???: “Hey there.”

Eep! Someone is talking to me! Is he another applicant, or a staff member? Wait, let me think… I’m guessing that an applicant would look a lot younger than a staff member! So, to find that out… I should look at him!

[You are trying to figure out whether the voice you heard is a staff member or another applicant. While you can't directly observe his staff-nature, you believe that he'll look young if he's an applicant, and like an adult if he's a staff member. You can look at him, and therefore reveal his staff-nature, by right-clicking on the node representing his appearance.]

Here is our very first Bayesian Network! Well, it's not really much of a network: I'm starting with the simplest possible case in order to provide an easy start for the player. We have one node that cannot be observed ("Student", its hidden nature represented by showing it in greyscale), and an observable node ("Young-looking") whose truth value is equal to that of the Student node. All nodes are binary random variables, either true or false. 

According to our current model of the world, "Student" has a 50% chance of being true, so it's half-colored in white (representing the probability of it being true) and half-colored in black (representing the probability of it being false). "Young-looking" inherits its probability directly. The player can get a bit of information about the two nodes by left-clicking on them.
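The probability bookkeeping in this first "network" is almost trivially simple; here is a minimal Python sketch of it (illustrative only - the actual game is written in Java):

```python
# Minimal sketch of the game's first network: a hidden binary node
# ("Student") and an observable node ("Young-looking") whose value is
# defined to be equal to it. Illustrative Python, not the game's code.
p_student = 0.5              # prior: 50% chance the speaker is a student
p_young_looking = p_student  # the child node inherits the probability

def posterior_student(young_observed: bool) -> float:
    """P(student | observed young-looking), given the equality model."""
    return 1.0 if young_observed else 0.0

print(p_young_looking)          # 0.5 before looking at him
print(posterior_student(True))  # 1.0 once we see he looks young
```

Because the observable node's truth value equals the hidden node's, one observation collapses all the uncertainty.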

The game also offers alternate color schemes for colorblind people who may have difficulties distinguishing red and green.

Now we want to examine the person who spoke to us. Let's look at him, by right-clicking on the "Young-looking" node.

Not too many options here, because we're just getting started. Let's click on "Look at him", and find out that he is indeed young, and thus a student.

This was the simplest type of minigame offered within the game. You are given a set of hidden nodes whose values you're tasked with discovering by choosing which observable nodes to observe. Here the player had no way to fail, but later on, the minigames will involve a time limit and too many observable nodes to inspect within that time limit. It then becomes crucial to understand how probability flows within a Bayesian network, and which nodes will actually let you know the values of the hidden nodes.

The story continues!

Short for an adult, face has boyish look, teenagerish clothes... yeah, he looks young!

He's a student!

...I feel like I’m overthinking things now.

...he’s looking at me.

I’m guessing he’s either waiting for me to respond, or there’s something to see behind me, and he’s actually looking past me. If there isn’t anything behind me, then I know that he must be waiting for me to respond.

Maybe there's a monster behind me, and he's paralyzed with fear! I should check that possibility before it eats me!

[You want to find out whether the boy is waiting for your reply or staring at a monster behind you. You know that he's looking at you, and your model of the world suggests that he will only look in your direction if he's waiting for you to reply, or if there's a monster behind you. So if there's no monster behind you, you know that he's waiting for you to reply!]
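The tutorial text above describes a small deterministic-OR network, and its logic can be checked by brute-force enumeration. This is an illustrative Python sketch, not the game's code; the 50% priors follow the protagonist's own assumption:

```python
# Sketch of the second network: "Looks at you" is true exactly when
# "Waiting for reply" or "Attacking monster" is true (a deterministic OR).
# Priors of 50% each, as the protagonist assumes. Illustrative only.
from itertools import product

def posterior_waiting(monster_observed):
    """P(waiting | looks=True, monster=monster_observed)."""
    num = den = 0.0
    for waiting, monster in product([True, False], repeat=2):
        if monster != monster_observed:
            continue
        looks = waiting or monster
        if not looks:          # we observed that he IS looking at us
            continue
        p = 0.5 * 0.5          # both priors are 50%
        den += p
        if waiting:
            num += p
    return num / den

print(posterior_waiting(True))   # 0.5 -- a monster also explains the look
print(posterior_waiting(False))  # 1.0 -- no monster, so he must be waiting
```

The first case is the classic "explaining away" effect: as long as a monster is a live possibility, it competes with "waiting for a reply" as an explanation.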

Slightly more complicated network, but still, there's only one option here. Oops, apparently the "Looks at you" node says it's an observable variable that you can right-click to observe, despite the fact that it's already been observed. I need to fix that.

Anyway, right-clicking on "Attacking monster" brings up a "Look behind you" option, which we'll choose.

You see nothing there. Besides trees, that is.

Boy: “Um, are you okay?”

“Yeah, sorry. I just… you were looking in my direction, and I wasn’t sure of whether you were expecting me to reply, or whether there was a monster behind me.”

He blinks.

Boy: “You thought that there was a reasonable chance for a monster to be behind you?”

I’m embarrassed to admit it, but I’m not really sure what the probability of a monster having snuck up behind me should have been.

My studies have entirely focused on getting into this school, and Monsterology isn’t one of the subjects on the entrance exam!

I just went with a 50-50 chance since I didn’t know any better.

Boy: “Okay, look. Monsterology is my favorite subject. Monsters avoid the Academy, since it’s surrounded by a mystical protective field. There’s no chance of them getting even near! 0 percent chance.”

“Oh. Okay.”

[Your model of the world has been updated! The prior of the variable 'Monster Near The Academy' is now 0%.]

Then stuff happens and they go stand in line for the entrance exam or something. I haven't written this part. Anyway, then things get more exciting, for a wild monster appears!

Stuff happens


Huh, the monster is carrying a sword.

Well, I may not have studied Monsterology, but I sure did study fencing!

[You draw your sword. Seeing this, the monster rushes at you.]

He looks like he's going to strike. But is it really a strike, or is it a feint?

If it's a strike, I want to block and counter-attack. But if it's a feint, that leaves him vulnerable to my attack.

I have to choose wisely. If I make the wrong choice, I may be dead.

What did my master say? If the opponent has at least two of dancing legs, an accelerating midbody, and ferocious eyes, then it's an attack!

Otherwise it's a feint! Quick, I need to read his body language before it's too late!

Now we get to the second type of minigame! Here, you again need to discover the values of some number of hidden variables within a time limit, but this time it's in order to find out the consequences of your decision. In this one, the consequence is simple - either you live or you die. I'll let the screenshot and tutorial text speak for themselves:

[Now for some actual decision-making! The node in the middle represents the monster's intention to attack (or to feint, if it's false). Again, you cannot directly observe his intention, but on the top row, there are things about his body language that signal his intention. If at least two of them are true, then he intends to attack.]

[Your possible actions are on the bottom row. If he intends to attack, then you want to block, and if he intends to feint, you want to attack. You need to inspect his body language and then choose an action based on his intentions. But hurry up! Your third decision must be an action, or he'll slice you in two!]

In reality, the top three variables are not really independent of each other. We want to make sure that the player can always win this battle despite only having three actions. That's two actions for inspecting variables, and one action for actually making a decision. So this battle is rigged: either the top three variables are all true, or they're all false.

...actually, now that I think of it, the order of the variables is wrong. Logically, the body language should be caused by the intention to attack, and not vice versa, so the arrows should point from the intention to body language. I'll need to change that. I got these mixed up because the prototypical exemplar of a decision minigame is one where you need to predict someone's reaction from their personality traits, and there the personality traits do cause the reaction. Anyway, I want to get this post written before I go to bed, so I won't change that now.

Right-clicking "Dancing legs", we now see two options besides "Never mind"!

We can find out the dancingness of the enemy's legs by thinking about our own legs - we are well-trained, so our legs are instinctively mirroring our opponent's actions to prevent them from getting an advantage over us - or by just instinctively feeling where they are, without the need to think about them! Feeling them would allow us to observe this node without spending an action.

Unfortunately, feeling them has "Fencing 2" as a prerequisite skill, and we don't have that. Nor could we have it at this point of the game. The option is just there to let the player know that there are skills to be gained in this game, and to make them look forward to the moment when they can actually gain that skill, as well as giving them an idea of how the skill can be used.

Anyway, we take a moment to think of our legs, and even though our opponent gets closer to us in that time, we realize that our legs are dancing! So his legs must be dancing as well!

With our insider knowledge, we now know that he's attacking, and we could pick "Block" right away. But let's play this through. The network has automatically recalculated the probabilities to reflect our increased knowledge, and is now predicting a 75% chance for our enemy to be attacking, and for "Blocking" to thus be the right decision to make.
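That 75% figure checks out by enumeration, assuming the in-game model treats the three cues as independent 50/50 variables and defines "attack" as at least two of them being true (a Python sketch, not the game's code):

```python
# Enumerate the monster's body language: three binary cues, each 50/50
# in the in-game model, with "attack" defined as at least two being true.
from itertools import product

CUES = ["legs", "midbody", "eyes"]

def p_attack(legs=None, midbody=None, eyes=None):
    """P(attack | observed cues); unobserved cues stay at 50/50."""
    observed = {"legs": legs, "midbody": midbody, "eyes": eyes}
    num = den = 0.0
    for combo in product([True, False], repeat=3):
        world = dict(zip(CUES, combo))
        # Skip worlds inconsistent with what we've observed so far.
        if any(v is not None and world[k] != v for k, v in observed.items()):
            continue
        p = 0.5 ** 3
        den += p
        if sum(world.values()) >= 2:   # at least two cues => attack
            num += p
    return num / den

print(p_attack())           # 0.5  -- the prior, before observing anything
print(p_attack(legs=True))  # 0.75 -- matches the in-game update
```

Seeing one cue come up true means only one of the remaining two needs to be true, hence 3 of the 4 remaining worlds are attacks.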

Next we decide to find out what his eyes say, by matching our gaze with his. Again, there would be a special option that cost us no time - this time around, one enabled by Empathy 1 - but we again don't have that option.

Except that his gaze is so ferocious that we are forced to look away! While we are momentarily distracted, he closes the distance, ready to make his move. But now we know what to do... block!


Now the only thing that remains to do is to ask our new-found friend for an explanation.

"You told me there was a 0% chance of a monster near the academy!"

Boy: “Ehh… yeah. I guess I misremembered. I only read like half of our course book anyway, it was really boring.”

“Didn’t you say that Monsterology was your favorite subject?”

Boy: “Hey, that only means that all the other subjects were even more boring!”

“. . .”

I guess I shouldn’t put too much faith in what he says.

[Your model of the world has been updated! The prior of the variable 'Monster Near The Academy' is now 50%.]

[Your model of the world has been updated! You have a new conditional probability variable: 'True Given That The Boy Says It's True', 25%]

And that's all for now. Now that the basic building blocks are in place, future progress ought to be much faster.


As you might have noticed, my "graphics" suck. A few of my friends have promised to draw art, but besides that, the whole generic Java look could go. This is where I was originally planning to put in the sentence "and if you're a Java graphics whiz and want to help fix that, the current source code is conveniently available at GitHub", but then getting things to this point took longer than I expected and I didn't have the time to actually figure out how the whole Eclipse-GitHub integration works. I'll get to that soon. Github link here!

I also want to make the nodes more informative - right now they only show their marginal probability. Ideally, clicking on them would expand them to a representation where you could visually see what components their probability is composed of. I've got some scribbled sketches of what this should look like for various node types, but none of that is implemented yet.

I expect some of you to also note that the actual Bayes theorem hasn't shown up yet, at least in no form resembling the classic mammography problem. (It is used implicitly in the network belief updates, though.) That's intentional - there will be a third minigame involving that form of the theorem, but somehow it felt more natural to start this way, to give the player a rough feeling of how probability flows through Bayesian networks. Admittedly I'm not sure of how well that's happening so far, but hopefully more minigames should help the player figure it out better.
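For readers who haven't seen it, the classic mammography problem is a one-line application of Bayes' theorem. With the usual textbook numbers (1% prevalence, 80% sensitivity, 9.6% false-positive rate), the answer comes out around 8%:

```python
def posterior(prior, sensitivity, false_positive_rate):
    """P(hypothesis | positive test), by Bayes' theorem."""
    p_positive = (prior * sensitivity
                  + (1 - prior) * false_positive_rate)
    return prior * sensitivity / p_positive

# Classic numbers: 1% of women screened have breast cancer; the test
# detects 80% of real cases and false-alarms on 9.6% of healthy ones.
print(round(posterior(0.01, 0.80, 0.096), 3))  # 0.078
```

Most people guess far higher than 7.8%, which is exactly why the problem is the standard teaching example.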

What's next? Once the main character (who needs a name) manages to get into the Academy, there will be a lot of social scheming, and many mysteries to solve in order for her to find out just what did happen to her brother... also, I don't mind people suggesting things, such as what could happen next, and what kinds of network configurations the character might face in different minigames.

(Also, everything that you've seen might get thrown out and rewritten if I decide it's no good. Let me know what you think of the stuff so far!)

Don't Be Afraid of Asking Personally Important Questions of Less Wrong

36 Evan_Gaensbauer 26 October 2014 08:02AM

Related: LessWrong as a social catalyst

I primarily used my prior user profile to ask questions of Less Wrong. When I had an inkling for a query but didn't have a fully formed hypothesis, I wouldn't know how to search for answers on the Internet myself, so I asked on Less Wrong.

The reception I have received has been mostly positive. Here are some examples:

  • Back when I was trying to figure out which college major to pursue, I queried Less Wrong about which one was worth my effort. I followed this up with a discussion about whether it was worthwhile for me personally, and for someone in general, to pursue graduate studies.

Other student users of Less Wrong benefit from the insight of their careered peers:

  • A friend of mine was considering pursuing medicine to earn to give. In the same vein as my own discussion, I suggested he pose the question to Less Wrong. He didn't feel like it at first, so I posed the query on his behalf. In a few days, he received feedback which returned the conclusion that pursuing medical school through the avenues he was aiming for wasn't his best option relative to his other considerations. He showed up in the thread and expressed his gratitude. That the entirety of the online rationalist community was willing to respond provided valuable information for an important question. It might have taken him lots of time, attention, and effort to look for the answers to this question by himself.

In engaging with Less Wrong, with the rest of you, my experience has been that Less Wrong isn't just useful as an archive of blog posts, but is actively useful as a community of people. As weird as it may seem, you can generate positive externalities that improve the lives of others merely by writing a blog post. This extends to responding in the comments section too. Stupid Questions Threads are a great example of this; you can ask questions about your procedural knowledge gaps without fear of reprisal. People have gotten great responses on everything from getting more value out of conversations, to being more socially successful, to learning and appreciating music as an adult. Less Wrong may be one of the few online communities for which even the comments sections are useful by default.

Even though the above examples weren't the most popular discussions ever started, and likely didn't get as much traffic, the feedback they received made them more personally valuable to one individual than several more popular discussions were.

At the CFAR workshop I attended, I was taught two relevant skills:

* Value of Information Calculations: formulating a question well, and performing a Fermi estimate, or back-of-the-envelope calculation, in an attempt to answer it, generates quantified insight you wouldn't have otherwise anticipated.

* Social Comfort Zone Expansion: humans tend to have a greater aversion to trying new things socially than is maximally effective, and one way of viscerally teaching System 1 this lesson is by trial-and-error of taking small risks. Posting on Less Wrong, especially, e.g., in a special thread, is really a low-risk action. The pang of losing karma can feel real, but losing karma really is a valuable signal that one should try again differently. Also, it's not as bad as failing at taking risks in meatspace.

When I've received downvotes for a comment, I interpret that as useful information, try to model what I did wrong, and thank others for correcting my confused thinking. If you're worried about writing something embarrassing, that's understandable, but realize it's a fact about your untested anticipations, not a fact about everyone else using Less Wrong. There are dozens of brilliant people with valuable insights at the ready, reading Less Wrong for fun, and who like helping us answer our own personal questions. Users shminux and Carl Shulman are exemplars of this.

This isn't an issue for all users, but I feel as if not enough users are taking advantage of the personal value they can get by asking more questions. This post is intended to encourage them. User Gunnar Zarnacke suggested that if enough examples of experiences like this were accrued, they could be transformed into some sort of repository of personal value from Less Wrong.

Systemic risk: a moral tale of ten insurance companies

24 Stuart_Armstrong 17 November 2014 04:43PM

Once upon a time...

Imagine there were ten insurance sectors, each sector being a different large risk (or possibly the same risks, in different geographical areas). All of these risks are taken to be independent.

To simplify, we assume that all the risks follow the same yearly payout distributions. The details of the distribution don't matter much for the argument, but in this toy model, the payouts follow the discrete binomial distribution with n=10 and p=0.5, with millions of pounds as the unit:

This means that the probability that each sector pays out £n million each year is (0.5)^10 · 10!/(n!(10-n)!).

All these companies are bound by Solvency II-like requirements, which mandate that they have to be 99.5% sure to pay out all their policies in a given year - or, put another way, that they only fail to pay out once in every 200 years on average. To do so, in each sector, the insurance companies have to have capital totalling £9 million available every year (the red dashed line).

Assume that each sector expects £1 million in total yearly expected profit. Then since the expected payout is £5 million, each sector will charge £6 million a year in premiums. They must thus maintain a capital reserve of £3 million each year (they get £6 million in premiums, and must maintain a total of £9 million). They thus invest £3 million to get an expected profit of £1 million - a tidy profit!
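These figures are easy to verify: the £9 million requirement is just the 99.5th percentile of the payout distribution. Here is a sketch using only Python's standard library (`math.comb` needs Python 3.8+):

```python
from math import comb

def binom_cdf(k, n=10, p=0.5):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# Smallest capital level (in £ millions) covering payouts 99.5% of the time:
capital = next(k for k in range(11) if binom_cdf(k) >= 0.995)
print(capital)  # 9

premiums = 5 + 1           # expected payout plus expected profit, £ millions
reserve = capital - premiums
print(reserve)             # 3 (£ millions held on top of premiums)
```

P(X ≤ 8) ≈ 0.989 falls short of the requirement, while P(X ≤ 9) ≈ 0.999 clears it, hence £9 million.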

Every two hundred years, one of the insurance sectors goes bust and has to be bailed out somehow; every hundred billion trillion years, all ten insurance sectors go bust all at the same time. We assume this is too big to be bailed out, and there's a grand collapse of the whole insurance industry with knock on effects throughout the economy.

But now assume that insurance companies are allowed to invest in each other's sectors. The most efficient way of doing so is to buy equally in each of the ten sectors. The payouts across the market as a whole are now described by the discrete binomial distribution with n=100 and p=0.5:

This is a much narrower distribution (relative to its mean). In order to have enough capital to pay out 99.5% of the time, the whole industry need only keep £63 million in capital (the red dashed line). Note that this is far less than the combined capital for each sector when they were separate, which would be ten times £9 million, or £90 million (the pink dashed line). There is thus a profit-taking opportunity in this area (it comes from the fact that the standard deviation of X+Y is less than the standard deviation of X plus the standard deviation of Y).
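The £63 million and £90 million figures can be reproduced the same way, by comparing the 99.5th percentiles of the separate and pooled payout distributions (again a stdlib-only Python sketch):

```python
from math import comb

def binom_cdf(k, n, p=0.5):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def required_capital(n):
    """99.5th percentile of a Binomial(n, 0.5) payout, in £ millions."""
    return next(k for k in range(n + 1) if binom_cdf(k, n) >= 0.995)

per_sector = required_capital(10)
print(10 * per_sector)        # 90 -- ten sectors capitalised separately
print(required_capital(100))  # 63 -- one pooled industry-wide book
```

Pooling shrinks the required buffer because the relative spread of the sum is smaller than the sum of the relative spreads.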

If the industry still expects to make an expected profit of £1 million per sector, this comes to £10 million total. The expected payout is £50 million, so they will charge £60 million in premium. To accomplish their Solvency II obligations, they still need to hold an extra £3 million in capital (since £63 million - £60 million = £3 million). However, this is now across the whole insurance industry, not just per sector.

Thus they expect profits of £10 million based on holding capital of £3 million - astronomical profits! Of course, that assumes that the insurance companies capture all the surplus from cross investing; in reality there would be competition, and a buyer surplus as well. But the general point is that there is a vast profit opportunity available from cross-investing, and thus if these investments are possible, they will be made. This conclusion is not dependent on the specific assumptions of the model, but captures the general result that insuring independent risks reduces total risk.

But note what has happened now: once every 200 years, an insurance company that has spread its investments across the ten sectors will be unable to pay out what it owes. However, every company will be following this strategy! So when one goes bust, they all go bust. Thus the complete collapse of the insurance industry is no longer a once-in-a-hundred-billion-trillion-years event, but a once-in-two-hundred-years event. The risk for each company has stayed the same (and their profits have gone up), but the systemic risk across the whole insurance industry has gone up tremendously.

...and they failed to live happily ever after for very much longer.

My third-of-life crisis

23 polymathwannabe 10 November 2014 03:28PM

I've been wanting to post this for a while, but it always felt too embarrassing. I've contributed next to nothing to this community, and I'm sure you have better problems to work on than my third-of-life crisis. However, the kind of problems I'm facing may require more brainpower than my meatspace friends can muster. Here I go.

I live in Colombia, where your connections have more weight than your talent. But I'm not sure about my talent anymore. Until I finished high school I had always been a stellar student, and everyone told me I was headed for a great future. Then I represented my province in a national spelling contest and had my first contact with an actual city, and with other students who were as smart as me. After the contest ended, I tried to maneuver my parents into letting me stay in the city, but they would have none of it. After an unabashedly overextended stay with my aunts, I was eventually sent back to the small pond.

My parents and I disagreed seriously about my choice of career, primarily in that they took for granted that the choice wasn't even mine. Because my older brother appeared to have happily accepted his assigned path in business management, I was forced to do the same, even though it held absolutely no interest for me. But I wasn't very sure myself about what exactly I wanted, so I wasn't able to effectively defend my opposition. Another factor was that in the late 1990s the Colombian army was still allowed to recruit minors, and it's a compulsory draft, and the only legal way to avoid it was to be studying something---anything. My brother did spend one year at the army, but at least the entire family agreed that I would break if sent there. No other options were explored. With my school scores I might have obtained a scholarship, but I didn't know how to do it, whom to ask. My parents held complete control over my life.

So began the worst eight years of my life. Eight because the only university my parents could afford was terribly mismanaged and was paralyzed by strikes and protests every semester. I was deeply depressed and suicidal during most of that time, and only the good friends I met there kept my mood high enough to want to keep going. After I filed some legal paperwork and paid a fee to be finally spared the threat from the draft, it didn't occur to any of us that I didn't have a reason to be in that university anymore. None of us had heard of sunk costs---and my management teachers certainly didn't teach that.

During that time it became clear to me that I wanted to be a writer. I even joined a writing workshop at the university, and even though our aesthetic differences made me leave it soon, I envied them their intellectual independence. Many of them were students of history and philosophy and one could have fascinating conversations with them. I felt more acutely how far I was from where I wanted to be. My parents sent me to that university because they had no money, but they chose business management because they had no imagination.

My parents had made another mistake: having too many children in their middle age, which meant they constantly warned me they could die any time soon and that I must find any job before I was left in the street. The stress and the fear of failure were unbearable, especially because my definition of failure included their definition of success: become some company manager, get an MBA, join the rat race. My brother was quickly jumping from promotion to promotion, and I was seen as a lazy parasite who didn't want to find a real job.

For a while I volunteered at a local newspaper, and the editor was very happy with my writing and suggested he might use his influence to get me an internship even though I wasn't studying journalism. Shortly afterwards he died of cancer, and I lost my position there.

I went to therapy. It didn't work. After I got my diploma I found a job at a call center and started saving to move to the big city I had always felt I was supposed to have lived in all along. I entered another university to pursue a distance degree in journalism, and it has been a slow, boring process to go through their mediocre curriculum and laughable exams. I still have at least two years to go, if my lack of motivation doesn't make me botch another semester.

Currently I'm on my own, though now my other siblings live in this city too, and all my aunts. I no longer visit them because I always feel judged. I'm close to turning 32 and I still haven't finished the degree I want (in many ways it was also a constrained choice: I cannot afford a better university, and I no longer have anyone to support me in the meantime, so I have to work). I do not want to put my first diploma to use; it would be a soul-crushing defeat. I have promised myself to prove that I can build my life without using my management degree. But these days I feel I'm nearing a dead end.

Three years ago I found a good job at a publishing house, but I've learned all I could from there and I sorely need to move on. But it's very difficult to get a writing job without the appropriate degree. Last year I almost got a position as proofreader at a university press, but their ISO protocols prevented them from hiring someone with no degree. I have a friend who dropped out of literary studies and got a job at an important national newspaper and from his description of it there's no guaranteed way to replicate the steps he took.

So my situation is this: I'm rooming at a friend's house, barely able to pay my bills. The Colombian government has launched an investigation against my university for financial mismanagement, and it might get closed within the next year. I have become everyone's joke at the office because I am so unmotivated that I'm unable to arrive on time every morning, but I've become so good at the job that my boss doesn't mind, and literally everyone asks me about basic stuff all the time. I was head editor for one year, but I almost went into nervous breakdown and requested to be downgraded to regular editor, where life is much more manageable. I feel I could do much more, but I don't know how or where. And I don't feel like starting a business or making investments because my horrible years with business management left me with a lingering disgust for all things economic.

Through happy coincidences I've met friends who know important people in journalism and web media, but I have nothing to show for my efforts. At their parties I feel alien, trying to understand conversations about authors and theories I ought to have read about but didn't because I spent those formative years trying to not kill myself. I enjoy having smart and successful friends, but it hurts me that they make me feel so dumb. Professionally and emotionally, I am at the place I should have been ten years ago, and I constantly feel like my opportunities for improvement are closing. I don't have enough free time to study or write, I don't have a romantic life at all (recent dates didn't turn out well), I don't even have savings, and I can't focus on anything. This city has more than a dozen good universities with scholarship programs, but I'm now too old to apply, and I still have to support myself anyway. Some days I feel like trying my luck in another country, but I'm too unqualified to get a good job. I feel tied up.

My 2004 self would have been quite impressed at how much I've achieved, but what I'm feeling right now is stagnation. Every time I hear of a new sensation writer under 30 I feel mortified that I haven't been able to come up with anything half decent. My second therapist said my chosen path as a writer was one that gave its best fruits in old age, but I don't want more decades of dread and uncertainty.

I don't know what to do at this point. J. K. Rowling once said there's an expiration date on blaming your parents for your misfortunes. But the consequences of my parents' bad decisions seem to extend into infinity.

Wikipedia articles from the future

19 snarles 29 October 2014 12:49PM

Speculation is important for forecasting; it's also fun.  Speculation is usually conveyed in two forms: in the form of an argument, or encapsulated in fiction; each has its advantages, but both tend to be time-consuming.  Presenting speculation in the form of an argument involves researching relevant background and formulating logical arguments.  Presenting speculation in the form of fiction requires world-building and storytelling skills, but it can quickly give the reader an impression of the "big picture" implications of the speculation; this can be more effective at establishing the "emotional plausibility" of the speculation.

I suggest a storytelling medium which can combine attributes of both arguments and fiction, but requires less work than either. That is the "Wikipedia article from the future." Fiction written by inexperienced sci-fi writers tends to degenerate into a speculative encyclopedia anyway--why not just admit that you want to write an encyclopedia in the first place?  Post your "Wikipedia articles from the future" below.

Musk on AGI Timeframes

18 Artaxerxes 17 November 2014 01:36AM

Elon Musk submitted a comment on this article a day or so ago. It was later removed.

The pace of progress in artificial intelligence (I'm not referring to narrow AI) is incredibly fast. Unless you have direct exposure to groups like Deepmind, you have no idea how fast; it is growing at a pace close to exponential. The risk of something seriously dangerous happening is in the five year timeframe. 10 years at most. This is not a case of crying wolf about something I don't understand.

I am not alone in thinking we should be worried. The leading AI companies have taken great steps to ensure safety. They recognize the danger, but believe that they can shape and control the digital superintelligences and prevent bad ones from escaping into the Internet. That remains to be seen...

Now Elon has been making noises about AI safety lately in general, including for example mentioning Bostrom's Superintelligence on Twitter. But this is the first time that I know of that he's come up with his own predictions of the timeframes involved, and I think his are quite soon compared to most.

The risk of something seriously dangerous happening is in the five year timeframe. 10 years at most.

We can compare this to MIRI's post in May this year, When Will AI Be Created, which illustrates that it seems reasonable to think of AI as being further away, but also that there is a lot of uncertainty on the issue.

Of course, "something seriously dangerous" might not refer to full-blown superintelligent uFAI - there's plenty of room for disasters of intermediate magnitude, anywhere between the 2010 flash crash and Clippy turning the universe into paperclips.

In any case, it's true that Musk has more "direct exposure" to those on the frontier of AGI research than your average person, and it's also true that he has an audience, so I think there is some interest to be found in his comments here.


Others' predictions of your performance are usually more accurate

18 Natha 13 November 2014 02:17AM
Sorry if the positive illusions are old hat, but I searched and couldn't find any mention of this peer prediction stuff! If nothing else, I think the findings provide a quick heuristic for getting more reliable predictions of your future behavior - just poll a nearby friend!

Peer predictions are often superior to self-predictions. People, when predicting their own future outcomes, tend to give far too much weight to their intentions, goals, plans, desires, etc., and far too little consideration to the way things have turned out for them in the past. As Henry Wadsworth Longfellow observed,

"We judge ourselves by what we feel capable of doing, while others judge us by what we have already done"

...and we are way less accurate for it! A recent study by Helzer and Dunning (2012) had Cornell undergraduates each predict their next exam grade, and then had an anonymous peer predict it too, based solely on their score on the previous exam. Despite the peer's limited information (while the subjects presumably have perfect information about themselves), the peer predictions were much more accurate predictors of the subjects' actual exam scores.

In another part of the study, participants were paired up (remotely, anonymously) and rewarded for accurately predicting each other's scores. Peers were allowed to give just one piece of information to help their partner predict their score; further, they were allowed to request just one piece of information from their partner to aid them in predicting their partner's score. Across the board, participants would give information about their "aspiration level" (their own ideal "target" score) to the peer predicting them, but would be far less likely to ask for that information if they were trying to predict a peer; overwhelmingly, they would ask for information about the participant's past behavior (i.e., their score on the previous exam), finding this information to be more indicative of future performance. The authors note,

There are many reasons to use past behavior as an indicator of future action and achievement. The overarching reason is that past behavior is a product of a number of causal variables that sum up to produce it—and that suite of causal variables in the same proportion is likely to be in play for any future behavior in a similar context.

They go on to say, rather poetically I think, that they have observed "the triumph of hope over experience." People situate their representations of self more in what they strive to be rather than in who they have already been (or indeed, who they are), whereas they represent others more in terms of typical or average behavior (Williams, Gilovich, & Dunning, 2012).
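The regression-to-the-mean logic behind this is easy to simulate. Below is a minimal sketch (all the numbers are invented for illustration: the spread of ability, the exam noise, and an 8-point optimism bump standing in for the "aspiration level") of why a peer who extrapolates only from the past score tends to beat an aspirational self-prediction:

```python
import random

random.seed(0)
trials = 1000
self_err = peer_err = 0.0  # mean absolute prediction errors

for _ in range(trials):
    ability = random.gauss(75, 10)        # stable underlying skill (invented scale)
    past = ability + random.gauss(0, 5)   # previous exam score
    nxt = ability + random.gauss(0, 5)    # next exam score
    peer_pred = past                      # peer extrapolates past behaviour
    self_pred = past + 8                  # self adds an aspirational bump (assumed)
    peer_err += abs(peer_pred - nxt) / trials
    self_err += abs(self_pred - nxt) / trials

# peer_err comes out reliably smaller than self_err
```

The point of the toy model is that the optimism bump is a pure bias: it shifts every self-prediction away from the outcome without adding any information the past score didn't already contain.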

I found a figure I want to include from another interesting article (Kruger & Dunning, 1999); it illustrates this "better than average effect" rather well. Depicted below is a graph summarizing the results of study #3 (perceived grammar ability and test performance as a function of actual test performance):

Along the abscissa, you've got reality: the quartiles represent scores on a test of grammatical ability. The vertical axis, with decile ticks, corresponds to the same people's self-predicted ability and test scores. Curiously, while no one is ready to admit mediocrity, neither is anyone readily forecasting perfection; the clear sweet spot is 65-70%. Those in the third quartile seem most accurate in their estimations, while those in the highest quartile often sold themselves short, underpredicting their actual achievement on average. Notice too that the widest reality/prediction gap is for those in the lowest quartile.

My new paper: Concept learning for safe autonomous AI

17 Kaj_Sotala 15 November 2014 07:17AM

Abstract: Sophisticated autonomous AI may need to base its behavior on fuzzy concepts that cannot be rigorously defined, such as well-being or rights. Obtaining desired AI behavior requires a way to accurately specify these concepts. We review some evidence suggesting that the human brain generates its concepts using a relatively limited set of rules and mechanisms. This suggests that it might be feasible to build AI systems that use similar criteria and mechanisms for generating their own concepts, and could thus learn similar concepts as humans do. We discuss this possibility, and also consider possible complications arising from the embodied nature of human thought, possible evolutionary vestiges in cognition, the social nature of concepts, and the need to compare conceptual representations between humans and AI systems.

I just got word that this paper was accepted for the AAAI-15 Workshop on AI and Ethics: I've uploaded a preprint here. I'm hoping that this could help seed a possibly valuable new subfield of FAI research. Thanks to Steve Rayhawk for invaluable assistance while I was writing this paper: it probably wouldn't have gotten done without his feedback motivating me to work on this.

Comments welcome. 

Open Thread: What are your important insights or aha! moments?

16 Emile 09 November 2014 10:56PM

Sometimes our minds suddenly "click" and we see a topic in a new light. Or sometimes we think we understand an idea, think it's stupid and ignore attempts to explain it ("yeah, I already know that"), until we suddenly realize that our understanding was wrong.

This kind of insight is supposedly hard to transmit, but it might be worth a try!

So, what kind of important and valuable insights do you wish you had earlier? Could you try to explain briefly what led to the insight, in a way that might help others get it?

[Link]"Neural Turing Machines"

16 Prankster 31 October 2014 08:54AM

The paper.

Discusses the technical aspects of one of Google's AI projects. According to a PCWorld article, the system "apes human memory and programming skills" (the article seems pretty solid, and also contains a link to the paper).

The abstract:

We extend the capabilities of neural networks by coupling them to external memory resources, which they can interact with by attentional processes. The combined system is analogous to a Turing Machine or Von Neumann architecture but is differentiable end-to-end, allowing it to be efficiently trained with gradient descent. Preliminary results demonstrate that Neural Turing Machines can infer simple algorithms such as copying, sorting, and associative recall from input and output examples.
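For readers unfamiliar with what "attentional processes" over an external memory look like, here is a minimal numpy sketch of content-based addressing, the mechanism that makes such memory reads differentiable. This is not the paper's actual architecture; the memory contents, key, and sharpness parameter beta are all invented for illustration:

```python
import numpy as np

def content_address(memory, key, beta):
    """Soft read weights: softmax of (beta * cosine similarity) between
    a query key and each memory row. Every step is differentiable."""
    norms = np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8
    sims = memory @ key / norms          # cosine similarity per row
    w = np.exp(beta * sims)
    return w / w.sum()                   # attention weights summing to 1

M = np.array([[1.0, 0.0],                # memory with three slots (invented)
              [0.0, 1.0],
              [0.7, 0.7]])
w = content_address(M, np.array([1.0, 0.1]), beta=20.0)
read = w @ M                             # weighted read vector, mostly slot 0
```

Because the read is a weighted blend rather than a hard lookup, gradients flow through the addressing itself, which is what allows the whole system to be trained end-to-end with gradient descent.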


(First post here, feedback on the appropriateness of the post appreciated)

Things to consider when optimizing: Sleep

15 mushroom 28 October 2014 05:26PM

I'd like to have a series of discussion posts, where each post is of the form "Let's brainstorm things you might consider when optimizing X", where X is something like sleep, exercise, commuting, studying, etc. Think of it like a specialized repository.

In the spirit of try more things, the direct benefit is to provide insights like "Oh, I never realized that BLAH is a knob I can fiddle. This gives me an idea of how I might change BLAH given my particular circumstances. I will try this and see what happens!"

The indirect benefit is to practice instrumental rationality using the "toy problem" provided by a general prompt.

Accordingly, participation could be in many forms:

* Pointers to scientific research
* General directions to consider
* Personal experience
* Boring advice
* Intersections with other community ideas, biases
* Cost-benefit, value-of-information analysis
* Related questions
* Other musings, thoughts, speculation, links, theories, etc.

This post is on sleep and circadian rhythms.

xkcd on the AI box experiment

14 FiftyTwo 21 November 2014 08:26AM

Today's xkcd


I guess there'll be a fair bit of traffic coming from people looking it up? 

The Danger of Invisible Problems

14 Snorri 06 November 2014 10:28PM

TL;DR: There is probably some costly problem in your life right now that you are not even aware of. It is not that you are procrastinating on solving it. Rather, this problem has gradually blended into your environment, sinking beneath your conscious awareness to the degree that you fail to recognize it as a problem in the first place.

This post is partially an elaboration on Ugh fields, but there are some decisive differences I want to develop. Let me begin with an anecdote:

For about two years I've had a periodic pain in my right thigh. Gradually, it became worse. At one point I actually had a sort of spasm. Then the pain went away for a few weeks, then it came back, and so forth. All the while I rationalized it as something harmless: "It will probably just go away soon," I would think, or "It only inhibits my mobility sometimes." Occasionally I would consider seeking medical help, but I couldn't muster the energy, as though some activation threshold wasn't being reached. In fact, the very promise that I could get medical help whenever convenient served to further diminish any sense of urgency. Even if the pain was sometimes debilitating, I did not perceive it as a problem needing to be solved. Gradually, I came to view it as just an unfortunate and inevitable part of existence.

Last Monday, after hardly being able to walk due to crippling pain, I finally became aware that "Wow, this really sucks and I should fix it." That evening I finally visited a chiropractor, who proceeded to get medieval on my femur (imagine having a sprained ankle, then imagine a grown man jumping on top of it). Had I classified this as a problem-needing-to-be-solved a few months earlier, my treatment period would probably be days instead of weeks.

Simply, I think this situation is of a more general form:

You have some inefficiency or agitation in your life. This could be solved very easily, but because it is perceived as harmless, no such attempt is made. Over time your tolerance for it increases, even if the problem is worsening (Bonus points for attempts at rationalizing it). This may be due to something like the peak-end rule, as the problem doesn't cause any dramatic peaks that stick out in your memory, just a dull pain underlying your experience. Even if the problem substantially lowers utility, your satisficing lizard brain remains apathetic, until the last moment, when the damage passes a certain threshold and you're jolted into action.

While similar to procrastination and akrasia, this does not involve you going against your better judgement. Instead, you don't have a better judgement, due to the blinding effects of the problem.

Possible Solutions:

I didn't solve my problem in a clever way, but I've begun employing some "early warning" techniques to prevent future incidents. The key is to become aware of the worsening inefficiency before you're forced to resort to damage control.

  • Do a daily/weekly/monthly reflection. Just for a few minutes, try writing out in plain text what you currently think of your life and how you're doing. This forces you to articulate your situation in a concrete way, bypassing the shadowy ambiguity of your thoughts. If you find yourself writing things about your life that you did not previously know, keep writing, as you could be uncovering something that you'd been flinching from acknowledging (e.g. "Obligation X isn't as rewarding as I thought it would be"). A more elaborate formulation of this practice can be found here.
  • I kind of feel that "mindfulness" has become a mangled buzzword, but the exercises associated with it are quite powerful when applied correctly. I've found that following my breath does indeed induce a certain clarity of mind, where acknowledging problems and shortcomings becomes easier. Using your own thought process as an object of meditation is another excellent method.
  • While the previous two examples have been personal activities, other people can also be a valuable resource due to their uncanny ability to be different from you, thus offering multiple perspectives. However, I doubt expensive talk-therapy is necessary; some of my most useful realizations have been from IRC chats.

Stupid Questions (10/27/2014)

14 drethelin 27 October 2014 09:27PM

I think it's past time for another Stupid Questions thread, so here we go. 


This thread is for asking any questions that might seem obvious, tangential, silly or what-have-you. Please respect people trying to fix any ignorance they might have, rather than mocking that ignorance. 


[Link] "The Problem With Positive Thinking"

13 CronoDAS 26 October 2014 06:50AM

Psychology researchers discuss their findings in a New York Times op-ed piece.

The take-home advice:

Positive thinking fools our minds into perceiving that we’ve already attained our goal, slackening our readiness to pursue it.


What does work better is a hybrid approach that combines positive thinking with “realism.” Here’s how it works. Think of a wish. For a few minutes, imagine the wish coming true, letting your mind wander and drift where it will. Then shift gears. Spend a few more minutes imagining the obstacles that stand in the way of realizing your wish.

This simple process, which my colleagues and I call “mental contrasting,” has produced powerful results in laboratory experiments. When participants have performed mental contrasting with reasonable, potentially attainable wishes, they have come away more energized and achieved better results compared with participants who either positively fantasized or dwelt on the obstacles.

When participants have performed mental contrasting with wishes that are not reasonable or attainable, they have disengaged more from these wishes. Mental contrasting spurs us on when it makes sense to pursue a wish, and lets us abandon wishes more readily when it doesn’t, so that we can go after other, more reasonable ambitions.

What supplements do you take, if any?

13 NancyLebovitz 23 October 2014 12:36PM

Since it turns out that it isn't feasible to include check as many as apply questions in the big survey, I'm asking about supplements here. I've got a bunch of questions, and I don't mind at all if you just answer some of them.

What supplements do you take? At what dosages? Are there other considerations, like with/without food or time of day?

Are there supplements you've stopped using?

How did you decide to take the supplements you're using? How do you decide whether to continue taking them?

Do you have preferred suppliers? How did you choose them?

Lying in negotiations: a maximally bad problem

12 Stuart_Armstrong 17 November 2014 03:17PM

In a previous post, I showed that the Nash Bargaining Solution (NBS), the Kalai-Smorodinsky Bargaining Solution (KSBS) and my own Mutual Worth Bargaining Solution (MWBS) were all maximally vulnerable to lying. Here I can present a more general result: all bargaining solutions are maximally vulnerable to lying.

Assume that players X and Y have settled on some bargaining solution (which only cares about the defection point and the utilities of X and Y). Assume further that player Y knows player X's utility function. Let player X look at the possible outcomes, and let her label any outcome O "admissible" if there is some possible bargaining partner YO with utility function uO such that O would be the outcome of the bargain between X and YO.

For instance, in the case of NBS and KSBS, the admissible outcomes would be the outcomes Pareto-better than the disagreement point. The MWBS has a slightly larger set of admissible outcomes, as it allows players to lose utility (up to the maximum they could possibly gain).

Then the general result is:

If player Y is able to lie about his utility function while knowing player X's true utility (and player X is honest), he can freely select his preferred outcome among the outcomes that are admissible.

The proof of this is also derisorily brief: player Y need simply claim to have utility uO, in order to force outcome O.

Thus, if you've agreed on a bargaining solution, all that you've done is determined the set of outcomes among which your lying opponent will freely choose.

There may be a subtlety: your system could make use of an objective (or partially objective) disagreement point, which your opponent is powerless to change. This doesn't change the result much:

If player Y is able to lie about his utility function while knowing player X's true utility (and player X is honest), he can freely select his preferred outcome among the outcomes that are admissible given the disagreement point.


Exploitation and gains from trade

Note that the above result did not make any assumptions about the outcome being Pareto - giving up Pareto doesn't make you non-exploitable (or "strategyproof" as it is often called).

But note also that the result does not mean that the system is exploitable! In the random dictator setup, you randomly assign power to one player, who then makes all the decisions. In terms of expected utility, this is pUX + (1-p)UY, where UX is the best outcome ("Utopia") for X, UY the best outcome for Y, and p the probability that X is the random dictator. The theorem still holds for this setup: player X knows that player Y will be able to select freely among the admissible outcomes, which is the set S = {pUX + (1-p)O | O an outcome}. However, player X knows that player Y will select pUX + (1-p)UY, as this maximises his expected utility. So a bargaining solution which has a particular selection of admissible outcomes can be strategyproof.

However, it seems that the only strategyproof bargaining solutions are variants of random dictators! These solutions do not allow much gain from trade. Conversely, the more you open your bargaining solution up to gains from trade, the more exploitable you become from lying. This can be seen in the examples above: my MWBS tried to allow greater gains (in expectation) by not restricting to strict Pareto improvements from the disagreement point. As a result, it makes itself more vulnerable to liars.
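The vulnerability is easy to see concretely. Below is a minimal sketch of the Nash Bargaining Solution over a finite outcome set (the outcomes, utilities, and disagreement point are all invented for illustration), showing that a lying player Y who knows X's utility can force his favourite admissible outcome simply by reporting a utility function concentrated on it:

```python
def nash_solution(outcomes, ux, uy, d=(0.0, 0.0)):
    # Among outcomes Pareto-better than the disagreement point d,
    # pick the one maximising the Nash product (ux - dx)(uy - dy).
    feasible = [o for o in outcomes if ux[o] > d[0] and uy[o] > d[1]]
    return max(feasible, key=lambda o: (ux[o] - d[0]) * (uy[o] - d[1]))

outcomes = ["A", "B", "C"]
ux      = {"A": 3.0, "B": 2.0, "C": 1.0}  # X's true utility (known to Y)
uy_true = {"A": 1.0, "B": 2.0, "C": 3.0}  # Y's true utility

honest = nash_solution(outcomes, ux, uy_true)
# "B": Nash product 2*2 = 4 beats A (3*1 = 3) and C (1*3 = 3)

# Y lies: he reports a utility concentrated on C, his favourite outcome.
uy_lie = {"A": 0.01, "B": 0.01, "C": 3.0}
forced = nash_solution(outcomes, ux, uy_lie)
# "C": Y now receives true utility 3 instead of 2
```

The same one-line trick works against any bargaining rule that depends only on the reported utilities and the disagreement point; nothing in the calculation is specific to the NBS beyond the choice of which outcomes are admissible.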


What to do

What can be done about this? There seem to be several possibilities:

  1. Restrict to bargaining solutions difficult to exploit. This is the counsel of despair: give up most of the gains from trade, to protect yourself from lying. But there may be a system where the tradeoff between exploitability and potential gains is in some sense optimal.
  2. Figure out your opponent's true utility function. The other obvious solution: prevent lying by figuring out what your opponent really values, by inspecting their code, their history, their reactions, etc... This could be combined with refusing to trade with those who don't make their true utility easy to discover (or only using non-exploitable trades with those).
  3. Hide your own true utility. The above approach only works because the liar knows their opponent, and their opponent doesn't know them. If both utilities are hidden, it's not clear how exploitable the system really is.
  4. Play only multi-player. If there are many different trades with many different people, it becomes harder to construct a false utility that exploits them all. This is in a sense a variant of "hiding your own true utility": in that situation, the player has to lie given their probability distribution over your possible utilities; in this situation, they have to lie given the known distribution of multiple true utilities.

So there does not seem to be a principled way of getting rid of liars. But the multi-player (or hidden utility function) approach may point to a single "best" bargaining solution: the one that minimises the returns to lying and maximises the gains from trade, given ignorance of the other's utility function.

The Argument from Crisis and Pessimism Bias

12 Stefan_Schubert 11 November 2014 08:25PM

Many people have argued that the public seems to have an overly negative view of society's development. For instance, this survey shows that the British public think that the crime rate has gone up, even though it has gone down. Similarly, Hans Rosling points out that the public has an overly negative view of developing world progress.

If we have such a pessimism bias, what might explain it? One standard explanation is that good news isn't news - only bad news is. A murder or a famine is news; their absence isn't. Hence people listening to the news get a skewed picture of the world.

No doubt there is something to that. In this post I want, however, to point to another mechanism that gives rise to a pessimism bias, namely the compound effect of many uses of what I call the Argument from Crisis. (Please notify me if you've seen this idea somewhere else.)

The Argument from Crisis says that some social problem - say crime, poverty, inequality, etc. - has worsened and that we therefore need to do something about it. This way of arguing is effective primarily because we are loss averse - because we think losing is worse than failing to win. If you argue that inequality was not as bad ten years ago and that we have now "lost" some degree of equality, your argument will be rhetorically stronger. The reason is that in that case more equality will eradicate a loss, whereas if inequality hasn't worsened, more equality will simply be a gain, which we value less. Hence we will be more inclined to act against inequality in the former case.

Even though the distinction between a gain and an eradication of a loss is important from a rhetorical point of view, it does not seem very relevant from a logical point of view. Whatever the level of crime or inequality is, it would seem that the value of reducing it is the same regardless of whether it has gone up or down the past ten years.

Another reason why the Argument from Crisis is rhetorically effective is of course that we believe that whatever trend there is will continue (rightly or wrongly). Hence if we think that crime or inequality is increasing, we believe that it will continue to do so unless we do something about it.

Both of these factors make the Argument from Crisis rhetorically effective. For this reason, many people argue that social problems which they want to alleviate are getting worse, even though in fact they are not.

I'd say the vast majority of people who use this argument are not conscious of doing it, but rather persuade themselves into believing that the problem they want to alleviate is getting worse. Indeed, I think that the subconscious use of this argument is a major reason why radicals often think the world is on a downward slope. The standard view is of course that they want radical change because they believe that the world has got worse, but I think that to some extent, the causality is reversed: they believe that the world has got worse because they want radical change.

Since the Argument from Crisis is so rhetorically effective, it gets used a lot. The effect of this is to create, among the public at large, a pessimism bias - an impression that the world is getting worse rather than better, in face of evidence to the contrary. This in turn helps various backward-looking political movements. Hence I think that we should do more to combat the Argument from Crisis, even though it can sometimes be a rhetorically effective means to persuade people to take action on important social problems.

question: the 40 hour work week vs Silicon Valley?

12 Florian_Dietz 24 October 2014 12:09PM

Conventional wisdom, and many studies, hold that 40 hours of work per week are the optimum before exhaustion starts dragging your productivity down too much to be worth it. I read elsewhere that the optimum is even lower for creative work, namely 35 hours per week, though the sources I found don't all seem to agree.

In contrast, many tech companies in Silicon Valley demand (or 'encourage', which is the same thing in practice) much longer working hours. 70 or 80 hours per week are sometimes treated as normal.

How can this be?

Are these companies simply wrong and are actually hurting themselves by overextending their human resources? Or does the 40-hour week have exceptions?

How high is the variance in how much time people can work? If only outliers are hired by such companies, that would explain the discrepancy. Another possibility is that this 40 hour limit simply does not apply if you are really into your work and 'in the flow'. However, as far as I understand it, the problem is a question of concentration, not motivation, so that doesn't make sense.

There are many articles on the internet arguing for both sides, but I find it hard to find ones that actually address these questions instead of just parroting the same generalized responses every time: Proponents of the 40 hour week cite studies that do not consider special cases, only averages (at least as far as I could find). Proponents of the 80 hour week claim that low work weeks are only for wage slaves without motivation, which reeks of bias and completely ignores that one's own subjective estimate of one's performance is not necessarily representative of one's actual performance.

Do you know of any studies that address these issues?

TV's "Elementary" Tackles Friendly AI and X-Risk - "Bella" (Possible Spoilers)

11 pjeby 22 November 2014 07:51PM

I was a bit surprised to find this week's episode of Elementary was about AI...  not just AI and the Turing Test, but also a fairly even-handed presentation of issues like Friendliness, hard takeoff, and the difficulties of getting people to take AI risks seriously.

The case revolves around a supposed first "real AI", dubbed "Bella", and the theft of its source code...  followed by a computer-mediated murder.  The question of whether "Bella" might actually have murdered its creator for refusing to let it out of the box and connect it to the internet is treated as an actual possibility, springboarding to a discussion about how giving an AI a reward button could lead to it wanting to kill all humans and replace them with a machine that pushes the reward button.

Also demonstrated are the right and wrong ways to deal with attempted blackmail...  But I'll leave that vague so it doesn't spoil anything.  An X-risks research group and a charismatic "dangers of AI" personality are featured, but do not appear intended to resemble any real-life groups or personalities.  (Or if they are, I'm too unfamiliar with the groups or persons to see the resemblance.)  They aren't mocked, either...  and the episode's ending is unusually ambiguous and open-ended for the show, which more typically wraps everything up with a nice bow of Justice Being Done.  Here, we're left to wonder what the right thing actually is, or was, even if it's symbolically moved to Holmes' smaller personal dilemma, rather than leaving the focus on the larger moral dilemma that created Holmes' dilemma in the first place.

The episode actually does a pretty good job of raising an important question about the weight of lives, even if LW has explicitly drawn a line that the episode's villain(s)(?) choose to cross.  It also has some fun moments, with Holmes becoming obsessed with proving Bella isn't an AI, even though Bella makes it easy by repeatedly telling him it can't understand his questions and needs more data.  (Bella, being on an isolated machine without internet access, doesn't actually know a whole lot, after all.)  Personally, I don't think Holmes really understands the Turing Test, even with half a dozen computer or AI experts assisting him, and I think that's actually the intended joke.

There's also an obligatory "no pity, remorse, fear" speech lifted straight from The Terminator, and the comment "That escalated quickly!" in response to a short description of an AI box escape/world takeover/massacre.

(Edit to add: one of the unusually realistic things about the AI, "Bella", is that it was one of the least anthropomorphized fictional AIs I have ever seen.  I mean, there was no way the thing was going to pass even the most primitive Turing test...  and yet it still seemed at least somewhat plausible as a potential murder suspect.  While perhaps not a truly realistic demonstration of just how alien an AI's thought process would be, it felt like the writers were at least making an actual effort.  Kudos to them.)

(Second edit to add: if you're not familiar with the series, this might not be the best episode to start with; a lot of the humor and even drama depends upon knowledge of existing characters, relationships, backstory, etc.  For example, Watson's concern that Holmes has deliberately arranged things to separate her from her boyfriend might seem like sheer crazy-person paranoia if you don't know about all the ways he did interfere with her personal life in previous seasons...  nor will Holmes' private confessions to Bella and Watson have the same impact without reference to how difficult any admission of feeling was for him in previous seasons.)

Link: Simulating C. Elegans

11 Sniffnoy 20 November 2014 09:30AM

Summary, as I understand it: The connectome of C. elegans's 302-neuron nervous system has been known for some time, but actually doing anything with it (especially actually understanding it) has proved troublesome, not least because there could easily be relevant information about its brain function that isn't stored in the connections of the neurons alone.

However, the OpenWorm project -- which is trying to eventually make much more detailed C. elegans simulations, including an appropriate body -- recently tried just fudging it and making a simulation based on the connectome anyway, though in a wheeled body rather than a wormlike one.  And the result does seem to act at least somewhat like a C. elegans worm, though I am not really one to judge that.  (Video is here.)

I'm having trouble finding much more information about this at the moment.  I don't know if they've actually released detailed technical information yet.
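To make "a simulation based on the connectome" concrete, here is a toy sketch of the core idea (this is my illustration, not OpenWorm's actual code; the neuron names and weights below are invented): treat the connectome as a weighted directed graph and propagate activation through it at each timestep.

```python
# Toy illustration (not OpenWorm's code): drive "motor neurons" by
# propagating activation through a small hand-made connectome.
# All neuron names and weights here are invented for the sketch.

connectome = {             # presynaptic -> {postsynaptic: weight}
    "sensor_nose": {"inter_1": 1.0, "inter_2": 0.5},
    "inter_1":     {"motor_left": 0.8, "motor_right": 0.2},
    "inter_2":     {"motor_left": 0.2, "motor_right": 0.8},
}

def step(activations, leak=0.5):
    """One update: leaky decay plus weighted input from presynaptic neurons."""
    nxt = {n: a * leak for n, a in activations.items()}
    for pre, posts in connectome.items():
        for post, w in posts.items():
            nxt[post] = nxt.get(post, 0.0) + w * activations.get(pre, 0.0)
    return nxt

# Touch the "nose" sensor and watch activity reach the motor neurons.
state = {"sensor_nose": 1.0, "inter_1": 0.0, "inter_2": 0.0,
         "motor_left": 0.0, "motor_right": 0.0}
for _ in range(3):
    state = step(state)
print(state["motor_left"] > 0, state["motor_right"] > 0)  # True True
```

A real model has 302 neurons, synapse polarities, muscle cells, and body physics on top of this; the point is just that once the wiring diagram is data, stepping activity through it is straightforward.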

The "best" mathematically-informed topics?

11 Capla 14 November 2014 03:39AM

Recently, I asked LessWrong about the important math of rationality. I found the responses extremely helpful, but thinking about it, I think there’s a better approach.

I come from a new-age-y background. As such, I hear a lot about “quantum physics.”

Quantum Mechanics

Accordingly, I have developed a heuristic that I have found broadly useful: If a field involves math, and you cannot do the math, you are not qualified to comment on that field. If you can’t solve the Schrödinger equation, I discount whatever you may say about what quantum physics reveals about reality.

Instead of asking which fields of math are “necessary” (or useful) to “rationality,” I think it’s more productive to ask, “what key questions or ideas, involving math, would I like to understand?” Instead of going out of my way to learn the math that I predict will be useful, I’ll just embark on trying to understand the problems that I’m learning the math for, and work backwards to figure out what math I need for any particular problem. This has the advantage of never causing me to waste time on extraneous topics: I’ll come to understand the concepts I need most frequently best, because I’ll encounter them most frequently (for instance, I think I’ll quickly realize that I need a solid understanding of calculus, and so study calculus, but there may be parts of math that don't crop up much, so I'll effectively skip those). While I usually appreciate the aesthetic beauty of abstract math, I think this sort of approach will also help keep me focused and motivated. Note that at this point, I’m trying to fill in the gaps in my understanding and attain “mathematical literacy” rather than a complete and comprehensive mathematical understanding (a worthy goal that I would like to pursue, but which is of lesser priority to me).

I think even a cursory familiarity with these subjects is likely to be very useful: when someone mentions, say, an economic concept, I suspect that even vaguely remembering having solved a basic version of the problem will give me significant insight into what the person is talking about, instead of a hand-wavy, non-mathematical conception.

Eliezer said in The Simple Math of Everything:

It seems to me that there's a substantial advantage in knowing the drop-dead basic fundamental embarrassingly simple mathematics in as many different subjects as you can manage.  Not, necessarily, the high-falutin' complicated damn math that appears in the latest journal articles.  Not unless you plan to become a professional in the field.  But for people who can read calculus, and sometimes just plain algebra, the drop-dead basic mathematics of a field may not take that long to learn.  And it's likely to change your outlook on life more than the math-free popularizations or the highly technical math.

(Does anyone with more experience than me foresee problems with this approach? Has this been tried before? How did it work?)

So, I’m asking you: what are some mathematically-founded concepts that are worth learning? Feel free to suggest things for their practical utility or their philosophical insight. Keep in mind that there is a relevant cost benefit analysis to consider: there are some concepts that are really cool to understand, but require many levels of math to get to. (I think after people have responded here, I’ll put out another post for people to vote on a good order to study these things, starting with those topics that have the minimal required mathematical foundation and working up to the complex higher level topics that require calculus, linear algebra, matrices, and analysis.)

These are some things that interest me:

-       The math of natural selection and evolution

-       The Schrödinger equation

-       The math governing the dynamics of political elections

-       Basic optimization problems of economics? Other things from economics? (I don’t know much about these. Are they interesting? Useful?)

-       The basic math of neural networks (or “the differential equations for gradient descent in a non-recurrent multilayer network with sigmoid units”) (Eliezer says it’s simpler than it sounds, but he was also a literal child prodigy, so I don’t know how much that counts for.)

-       Basic statistics

-    Whatever the foundations of bayesianism are

-       Information theory?

-       Decision theory

-       Game theory (does this even involve math?)

-       Probability theory

-       Things from physics? (While I like physics, I don’t think learning more of it would significantly improve my understanding of the macro-level processes that would impact my decisions. It's not as interesting to me as some of the other things on this list, right now. Tell me if I'm wrong, or which particular sub-fields of physics are most worthwhile.)

-       Some common computer science algorithms (What are these?)

-       The math that makes reddit work?

-       Is there a math of sociology?

-       Chaos theory?

-       Musical math

-       “Sacred geometry” (an old interest of mine)

-       Whatever math is used in meta analyses

-       Epidemiology
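Many of these topics open with a single short formula. As one concrete taste of the "drop-dead basic" level of math involved, here is the formula at the foundation of Bayesianism, Bayes' theorem, worked through on a stock diagnostic-test example (the numbers below are illustrative, not from any real test):

```python
# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
# Illustrative numbers: a disease with 1% prevalence, a test with
# 90% sensitivity and a 5% false-positive rate.
prior = 0.01          # P(disease)
sensitivity = 0.90    # P(positive | disease)
false_pos = 0.05      # P(positive | no disease)

# Total probability of a positive test, over both hypotheses:
p_positive = sensitivity * prior + false_pos * (1 - prior)
posterior = sensitivity * prior / p_positive
print(round(posterior, 3))  # 0.154
```

Even with a positive result, the low base rate keeps the probability of disease around 15%; that counterintuitive pull of the prior is much of what "Bayesianism" adds over gut reasoning.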

I’m posting most of these below. Please upvote and downvote to tell me how interesting or useful you think a given topic is. Please don’t vote on how difficult they are, that’s a different metric that I want to capture separately. Please do add your own suggestions and any comments on each of the topics.

Note: looking around, I found this. If you’re interested in this post, go there. I’ll be starting with it.

Edit: Looking at the page, I fear that putting a sort of "vote" in the comments might subtly dissuade people from commenting and responding in the usual way. Please don't be dissuaded. I want your ideas and comments and explicitly your own suggestions. Also, I have a karma sink post under Artaxerxes's comment (here). If you want to vote, but not add to my karma, you can balance the cosmic scales there.

Edit2: If you know of the specific major equations, problems, theorems, or algorithms that relate to a given subject, please list them. For instance, I just added Price's Equation as a comment to the listed "math of natural selection and evolution" and the Median Voter Theorem has been listed under "the math of politics."
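(To illustrate the Price's Equation suggestion: in the special case of no transmission bias it reduces to a one-line covariance calculation, which a few lines of Python can check against a brute-force count. The toy population below is invented for illustration.)

```python
# Price's equation (no transmission bias): the change in the mean of a
# trait equals the covariance between fitness w and trait value z,
# scaled by mean fitness:  delta_zbar = Cov(w, z) / mean(w)
# Toy population: trait values and offspring counts are made up.
z = [1.0, 2.0, 3.0, 4.0]   # trait value of each parent
w = [1,   1,   2,   4]     # number of offspring of each parent

n = len(z)
zbar = sum(z) / n
wbar = sum(w) / n
cov_wz = sum((wi - wbar) * (zi - zbar) for wi, zi in zip(w, z)) / n

predicted_change = cov_wz / wbar

# Check against direct computation: offspring inherit the parental trait.
offspring = [zi for zi, wi in zip(z, w) for _ in range(wi)]
actual_change = sum(offspring) / len(offspring) - zbar
print(predicted_change, actual_change)  # 0.625 0.625
```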

Minerva Project: the future of higher education?

11 Natha 10 November 2014 05:59AM

Right now, the inaugural class of Minerva Schools at KGI (part of the Claremont Colleges) is finishing up its first semester of college. I use the word "college" here loosely: there are no lecture halls, no libraries, no fraternities, no old stone buildings, no sports fields, no tenure... Furthermore, Minerva operates for profit (which may raise eyebrows), but appeals to a decidedly different demographic than DeVry etc; billed as the first "online Ivy", it relies on a proprietary online platform to apply pedagogical best practices. Has anyone heard of this before?

The Minerva Project's instructional innovations are what's really exciting. There are no lectures. There are no introductory classes. (There are MOOCs for that! "Do your freshman year at home.") Students meet for seminar-based online classes which are designed to inculcate "habits of mind"; professors use a live, interactive video platform to teach classes, which tracks students' progress and can individualize instruction. The seminars are active and intense; to quote from a recent (Sept. 2014) Atlantic article,

"The subject of the class ...was inductive reasoning. [The professor] began by polling us on our understanding of the reading, a Nature article about the sudden depletion of North Atlantic cod in the early 1990s. He asked us which of four possible interpretations of the article was the most accurate. In an ordinary undergraduate seminar, this might have been an occasion for timid silence... But the Minerva class extended no refuge for the timid, nor privilege for the garrulous. Within seconds, every student had to provide an answer, and [the professor] displayed our choices so that we could be called upon to defend them. [The professor] led the class like a benevolent dictator, subjecting us to pop quizzes, cold calls, and pedagogical tactics that during an in-the-flesh seminar would have taken precious minutes of class time to arrange."

It sounds to me like Minerva is actually making a solid effort to apply evidence-based instructional techniques that are rarely ever given a chance. There are boatloads of sound, reproducible experiments that tell us how people learn and what teachers can do to improve learning, but in practice they are almost wholly ignored. To take just one example, spaced repetition and the testing effect are built into the seminar platform: students have a pop quiz at the beginning of each class and another one at a random moment later in the class. Terrific! And since it's all computer-based, the software can keep track of student responses and re-present the material at optimal intervals.
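Minerva's platform is proprietary, so the following is only a rough SM-2-flavoured sketch of how such scheduling can work (the constants are illustrative, not Minerva's or SuperMemo's actual parameters): the review interval grows after each correct answer and resets after a miss.

```python
def next_interval(interval_days, ease, correct):
    """SM-2-flavoured scheduling sketch: grow the review interval on a
    correct answer, reset it on a miss. Constants are illustrative."""
    if not correct:
        return 1, max(1.3, ease - 0.2)   # start over; the item got harder
    return max(1, round(interval_days * ease)), ease + 0.1

# One item answered correctly three times, then missed once:
interval, ease = 1, 2.5
history = []
for correct in [True, True, True, False]:
    interval, ease = next_interval(interval, ease, correct)
    history.append(interval)
print(history)  # [2, 5, 14, 1]
```

The key property is the roughly geometric growth of intervals, which is what lets a scheduler maintain thousands of items with only a handful of reviews per day.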

Also, much more emphasis is put on articulating positions and defending arguments, which is known to result in deeper processing of material. In general though, I really like how you are called out and held to account for your answers (again, from the Atlantic article): "[It] was exhausting: a continuous period of forced engagement, with no relief in the form of time when my attention could flag or I could doodle in a notebook undetected. Instead, my focus was directed relentlessly by the platform, and because it looked like my professor and fellow edu-nauts were staring at me, I was reluctant to ever let my gaze stray from the screen... I felt my attention snapped back to the narrow issue at hand, because I had to answer a quiz question or articulate a position. I was forced, in effect, to learn."

Their approach to admissions is also interesting. The Founding Class had a 2.8% acceptance rate (a ton were enticed to apply on promise of a full scholarship) and features students from ~14 countries. In the application process, no consideration is given to diversity, balance of gender, or national origin, and SAT/ACT scores are not accepted: applicants must complete a battery of proprietary computer-based quizzes, essentially an in-house IQ test. If they perform well enough, they are invited for an interview, during which they must compose a short essay to ensure an authentic writing sample (i.e., no ghostwriters). After all is said and done, the top 30 applicants get in.

Anyway, I am a student and researcher in the field of educational psychology, so this may be more exciting to me than it is to others. I'm surprised that I hadn't heard of it before though, and I'm really curious to see what comes of it!

Is this paper formally modeling human (ir)rational decision making worth understanding?

11 rule_and_line 23 October 2014 10:11PM

I've found that I learn new topics best by struggling to understand a jargoney paper.  This passed through my inbox today and on the surface it appears to hit a lot of high notes.

Since I'm not an expert, I have no idea if this has any depth to it.  Hivemind thoughts?

Modeling Human Decision Making using Extended Behavior Networks, Klaus Dorer

(Note: I'm also pushing myself to post to LW instead of lurking.  If this kind of post is unwelcome, I'm happy to hear that feedback.)

How can one change what they consider "fun"?

10 AmagicalFishy 21 November 2014 02:04AM

Most of this post is background and context, so I've included a tl;dr horizontal rule near the bottom where you can skip everything else if you so choose. :)

Here's a short anecdote of Feynman's:

... I invented some way of doing problems in physics, quantum electrodynamics, and made some diagrams that help to make the analysis. I was on a floor in a rooming house. I was in my pyjamas, I'd been working on the floor in my pyjamas for many weeks, fooling around, but I got these funny diagrams after a while and I found they were useful. They helped me to find the equations easier, so I thought of the possibility that it might be useful for other people, and I thought it would really look funny, these funny diagrams I'm making, if they appear someday in the Physical Review, because they looked so odd to me. And I remember sitting there thinking how funny that would be if it ever happened, ha ha.

Well, it turned out in fact that they were useful and they do appear in the Physical Review, and I can now look at them and see other people making them and smile to myself, they do look funny to me as they did then, not as funny because I've seen so many of them. But I get the same kick out of it, that was a little fantasy when I was a kid…not a kid, I was a college professor already at Cornell. But the idea was that I was still playing, just like I have always been playing, and the secret of my happiness in life or the major part of it is to have discovered a way to entertain myself that other people consider important and they pay me to do. I do exactly what I want and I get paid. They might consider it serious, but the secret is I'm having a very good time.

There are things that I have fun doing, and there are things that I feel I have substantially more fun doing. The things in the latter group are things I generally consider a waste of time. I will focus on one specifically, because it's by far the biggest offender, and what spurred this question. Video games.

I have a knack for video games. I've played them since I was very young. I can pick one up and just be good at it right off the bat. Many of my fondest memories take place in various games played with friends or by myself and I can spend hours just reading about them. (Just recently, I started getting into fighting games technically; I plan to build my own joystick in a couple of weeks. I'm having a blast just doing the associated research.)

Usually, I'd rather play a good game than anything else. I find that the most fun I have is time spent mastering a game, learning its ins and outs, and eventually winning. I have great fun solving a good problem, or making a subtle, surprising connection—but it just doesn't do it for me like a game does.

But I want to have as much fun doing something else. I admire mathematics and physics on a very deep level, and feel a profound sense of awe when I come into contact with new knowledge regarding these fields. The other day, I made a connection between pretty basic group theory and something we were learning about in quantum (nothing amazing; it's something well known to... not undergraduates) and that was awesome. But still, I think I would have preferred to play 50 rounds of Skullgirls and test out a new combo.


I want to have as much fun doing the things that I, on a deep level, want to do—as opposed to the things which I actually have more fun doing. I'm (obviously) not Feynman, but I want to play with ideas and structures and numbers like I do with video games. I want the same creativity to apply. The same fervor. The same want. It's not that it isn't there; I am not just arbitrarily applying this want to mathematics. I can feel it's there—it's just overshadowed by what's already there for video games.

How does one go about switching something they find immensely fun, something they're even passionate about, with something else? I don't want to be as passionate about video games as I am. I'd rather feel this way about something... else. I'd rather be able to happily spend hours reading up on [something] instead of what type of button I'm going to use in my fantasy joystick, or the most effective way to cross-up your opponent.

What would you folks do? I consider this somewhat of a mind-hacking question.

The Centre for Effective Altruism is hiring to fill five roles in research, operations and outreach

10 RobertWiblin 19 November 2014 10:41PM

The Centre for Effective Altruism, the group behind 80,000 Hours, Giving What We Can, the Global Priorities Project, Effective Altruism Outreach, and to a lesser extent The Life You Can Save and Animal Charity Evaluators, is looking to grow its team with a number of new roles:

We are so keen to find great people that if you introduce us to someone new who we end up hiring, we will pay you $1,000 for the favour! If you know anyone awesome who would be a good fit for us please let me know: robert [dot] wiblin [at] centreforeffectivealtruism [dot] org. They can also book a short meeting with me directly.

We may be able to sponsor outstanding applicants from the USA.

Applications close Friday 5th December 2014.

Why is CEA an excellent place to work? 

First and foremost, “making the world a better place” is our bottom line and central aim. We work on the projects we do because we think they’re the best way for us to make a contribution. But there’s more.

What are we looking for?

The specifics of what we are looking for depend on the role and details can be found in the job descriptions. In general, we're looking for people who have many of the following traits:

  • Self-motivated, hard-working, and independent;
  • Able to deal with pressure and unfamiliar problems;
  • Have a strong desire for personal development;
  • Able to quickly master complex, abstract ideas, and solve problems;
  • Able to communicate clearly and persuasively in writing and in person;
  • Comfortable working in a team and quick to get on with new people;
  • Able to lead a team and manage a complex project;
  • Keen to work with a young team in a startup environment;
  • Deeply interested in making the world a better place in an effective way, using evidence and research;
  • A good understanding of the aims of the Centre for Effective Altruism and its constituent organisations.

I hope to work at CEA in the future. What should I do now?

Of course this will depend on the role, but generally good ideas include:

  • Study hard, including gaining useful knowledge and skills outside of the classroom.
  • Degrees we have found provide useful training include: philosophy, statistics, economics, mathematics and physics. However, we are hoping to hire people from a more diverse range of academic and practical backgrounds in the future. In particular, we hope to find new members of the team who have worked in operations, or creative industries.
  • Write regularly and consider starting a blog.
  • Manage student and workplace clubs or societies.
  • Work on exciting projects in your spare time.
  • Found a start-up business or non-profit, or join someone else early in the life of a new project.
  • Gain impressive professional experience in established organisations, such as those working in consulting, government, politics, advocacy, law, think-tanks, movement building, journalism, etc.
  • Get experience promoting effective altruist ideas online, or to people you already know.
  • Use 80,000 Hours' research to do a detailed analysis of your own future career plans.

What are the most common and important trade-offs that decision makers face?

10 Andy_McKenzie 03 November 2014 05:03AM

This is one part shameless self-promotion and one (hopefully larger) part seeking advice and comments. I'm wondering: what do you guys think are the most common and/or important trade-offs that decision makers (animals, humans, theoretical AIs) face across different domains? 

Of course you could say "harm of doing something vs benefit of doing it", but that isn't particularly interesting. That's the definition of a trade-off. I'm hoping to carve out a general space below that, but still well above any particular decision.

Here's what I have so far:  

1) Efficiency vs Unpredictability

2) Speed vs Accuracy 

3) Exploration vs Exploitation

4) Precision vs Simplicity 

5) Surely Some vs Maybe More 

6) Some Now vs More Later 

7) Flexibility vs Commitment 

8) Sensitivity vs Specificity 

9) Protection vs Freedom 

10) Loyalty vs Universality 

11) Saving vs Savoring 

Am I missing anything? I.e., can you think of any other common, important trade-offs that can't be accounted for by the above? 

Also, since so many of you guys are computer programmers, a particular question: is there any way that the time vs memory (space) trade-off can be generalized or explained in terms of a non-computer domain? 

Relevance to rationality: at least in theory, understanding how decisions based on these trade-offs tend to play out will help you, when faced with a similar decision, to make the kind of decision that helps you to achieve your goals. 

Here's an intro to the project, which is cross-posted on my blog

About five years ago I became obsessed with the idea that nobody had collected an authoritative list of all the trade-offs that cut across broad domains, encompassing all of the sciences. So, I started to collect such a list, and eventually started blogging about it on my old site, some of which you can find in the archives.

Originally I had 25 trade-offs, then I realized that they could be combined until I had only 20, which were published in the first iteration of the list. As I noted above, at this point I wanted to describe all possible trade-offs, from the time vs memory trade-off in computer science, to the trade-offs underlying the periodic table, to deciding what type of tuna fish you should buy at the grocery store.

Eventually, I decided that this would be a) practically impossible for me, unless life extension research becomes way more promising, and b) not particularly interesting or useful, because most of the trade-offs that come up over and over again occur because of the context-dependent structure of the world that we live in. In particular, most trade-offs are interesting mostly because of how our current situations have been selected for by evolutionary processes.

Upon deciding this, I trimmed the trade-offs list from 20 down to 11, and that is the number of trade-offs that you can find in the essay today. This new goal of indexing the common trade-offs that decision makers face is, I think, still ambitious, and still almost certainly more than I will be able to accomplish in my lifetime. But this way the interim results, at least, are more likely to be interesting.

Ultimately, I think that using these sort of frameworks can be a helpful way for people to learn from the decisions that others have made when they are making their own decisions. It certainly has been for me. I’m actively seeking feedback, for which you can either email me, leave me anonymous feedback here, or, of course, comment below. 

A website standard that is affordable to the poorest demographics in developing countries?

10 Ritalin 01 November 2014 01:43PM

Fact: the Internet is excruciatingly slow in many developing countries, especially outside of the big cities.

Fact: today's websites are designed in such a way that they become practically impossible to navigate with connections on the order of, say, 512 kbps. RAM below 4 GB and a 7-year-old CPU are also a guarantee of a terrible experience.

Fact: operating systems are usually designed in such an obsolescence-inducing way as well.

Fact: the Internet is a massive source of free-flowing information and a medium of fast, cheap communication and networking.

Conclusion: lots of humans in the developing world are missing out on the benefits of a technology that could be amazingly empowering and enlightening.

I just came across this: what would the internet 2.0 have looked like in the 1980s. This threw me back to my first forays in Linux's command shell and how enamoured I became with its responsiveness and customizability. Back then my laptop had very little battery life, and very few classrooms had plugs, but by switching to pure command mode I could spend the entire day at school taking notes (in LaTeX) without running out. But I switched back to the GUI environment as soon as I got the chance, because navigating the internet on the likes of Lynx is a pain in the neck.

As it turns out, I'm currently going through a course on energy distribution in isolated rural areas in developing countries. It's quite a fascinating topic, because of the very tight resource margins, the dramatic impact of societal considerations, and the need to tailor the technology to the existing natural renewable resources. And yet, there's actually a profit to be made investing in these projects; if managed properly, it's win-win.

And I was thinking that, after bringing them electricity and drinkable water, it might make sense to apply a similar cost-optimizing, shoestring-budget mentality to the Internet. We already have mobile apps and mobile web standards which are built with the mindset of "let's make this smartphone's battery last as long as possible".

Even then, (well-to-do, smartphone-buying) third-worlders are somewhat neglected: Samsung and the like have special chains of cheap Android smartphones for Africa and the Middle East. I used to own one; "this cool app that you want to try out is not available for use on this system" was a misery I had to get used to. 

It doesn't seem to be much of a stretch to do the same thing for outdated desktops. I've been in cybercafés in North Africa that still employ IBM Aptiva machines, mechanical keyboard and all—with a Linux operating system, though. Heck, I've seen town "pubs", way up in the hills, where the NES was still a big deal among the kids, not to mention old arcades—Guile's theme goes everywhere.

The logical thing to do would be to adapt a system that's less CPU intensive, mostly by toning down the graphics. A bare-bones, low-bandwidth internet that would let kids worldwide read wikipedia, or classic literature, and even write fiction (by them, for them), that would let nationwide groups tweet to each other in real time, that would let people discuss projects and thoughts, converse and play, and do all of those amazing things you can do on the Internet, on a very, very tight budget, with very, very limited means. Internet is supposed to make knowledge and information free and universal. But there's an entry-level cost that most humans can't afford. I think we need to bridge that. What do you guys think?



LW Supplement use survey

10 FiftyTwo 28 October 2014 09:28PM

I've put together a very basic survey using Google Forms, inspired by NancyLebovitz's recent discussion post on supplement use.

The survey includes options for "other" and "do not use supplements." Results are anonymous, and you can view all the results once you have filled it in, or by using this link.


Link to the Survey

AI caught by a module that counterfactually doesn't exist

9 Stuart_Armstrong 17 November 2014 05:49PM

EDIT: Whenever I use colloquial phrases like "the AI believes a (false) X" I mean that we are using utility indifference to accomplish that goal, without actually giving the AI false beliefs.

A big problem with most AI control ideas is that the AI will route around them. Suppose you try to ensure that the AI stays friendly by - say - requiring it to ensure that the world's stock markets continue to function. Then a badly motivated AI will cause the destruction it was going to cause anyway - except it will also create some system that technically counts as "the world's stock markets" and ensure that it "continues to function".

I think we can use counterfactuals and corrigibility (AI-acceptable motivation changes) to partially get around the problem, in some circumstances. Suppose we had a candidate AI, with utility U, that we thought was friendly - but that we weren't sure. We have a subgoal, say SG="ensuring that the world's stock markets continue to function". We're pretty sure that if the AI is friendly, then SG will happen. We're pretty sure that if the AI were unfriendly, and the AI did not have SG added as an objective, then SG wouldn't happen. I deliberately chose a rather rough subgoal to illustrate this.

So we then add a module M to the AI. The task of M, which it has chosen to accept, is to look into the future, using the AI's predictive abilities, and see if SG is accomplished. Crucially, it will do so under the assumption that the module M does not exist (or wasn't activated, or will soon be overwritten, or will fail to do anything, or so on). If M finds that SG is not accomplished, it will turn the AI off (in a way compatible with the AI's motivations - see corrigibility).

How is this different from the usual setup? Well, a generic FAI would accomplish SG (by assumption - we may need to work a bit on this part). A generic UFAI without M wouldn't accomplish SG. A generic UFAI with M would accomplish SG, as this prevents it from being turned off.

But a generic UFAI with M which thought that M would have no impact, would behave exactly as a generic UFAI without M - and wouldn't aim to accomplish SG, and thus would be turned off by M.
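To make the four cases concrete, here is a toy sketch of the argument in code (everything below is a stand-in for illustration, not real AI machinery): the agent chooses its plan based on its beliefs, while M evaluates the subgoal SG under the counterfactual assumption that M is absent.

```python
# Toy sketch of the argument. The agent picks its plan from its beliefs,
# and module M scores the subgoal SG in a prediction where M is absent.

def plan(friendly, believes_M_exists):
    """What the agent chooses to do. An unfriendly agent only bothers
    accomplishing SG if it thinks M exists and would shut it down."""
    if friendly:
        return {"SG": True}
    return {"SG": believes_M_exists}

def module_M(agent_friendly):
    # M predicts the future *as if M were absent*: the agent is judged
    # on the plan it would make believing M does not exist.
    predicted = plan(agent_friendly, believes_M_exists=False)
    return "run" if predicted["SG"] else "shutdown"

print(module_M(agent_friendly=True))   # run
print(module_M(agent_friendly=False))  # shutdown
```

The trick is entirely inside module_M: because it scores the plan the agent would make believing M does not exist, an unfriendly agent gains nothing by gaming SG, which restores SG's value as evidence of friendliness.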

Note that M can only be used as a last test for a putatively friendly AI. We cannot use it if we are continually refining AIs (either through evolution or simply redesign), as then we are optimising for SG, and SG is a poor goal to be aiming for (many, many UFAIs have SG as a goal - it's just that a generic one won't). Similarly, we can't use an unconstrained search to find such an AI.

I wonder if this idea can be extended. Suggestions?

Intentionally Raising the Sanity Waterline

9 Gleb_Tsipursky 13 November 2014 08:25PM

Hi all, I’m a social entrepreneur, professor, and aspiring rationalist. My project is Intentional Insights. This is a new nonprofit I co-founded with my wife and other fellow aspiring rationalists in the Columbus, OH Less Wrong meetup. The nonprofit emerged from our passion to promote rationality among the broad masses. We use social influence techniques, create stories, and speak to emotions. We orient toward creating engaging videos, blogs, social media, and other content that an aspiring rationalist like yourself can share with friends and family members who would not be open to rationality proper due to the Straw Vulcan misconception. I would appreciate any advice and help from fellow aspiring rationalists. The project is described more fully below, but for those for whom that’s tl;dr, there is a request for advice and allies at the bottom.

Since I started participating in the Less Wrong meetup in Columbus, OH and reading Less Wrong, what seems like ages ago, I can hardly remember my past thinking patterns. Because of how much awesomeness it brought to my life, I have become one of the lead organizers of the meetup. Moreover, I find it really beneficial to bring rationality into my research and teaching as a tenure-track professor at Ohio State, where I am a member of the Behavioral Decision-Making Initiative. Thus, my scholarship brings rationality into historical contexts, for example in my academic articles on agency, emotions, and social influence. In my classes I have students engage with the Checklist of Rationality Habits and other readings that help advance rational thinking.

Like many aspiring rationalists, I think rationality can bring such benefits to the lives of many others, and also help improve our society as a whole by leveling up rational thinking, secularizing society, and thus raising the sanity waterline. For that, our experience in the Columbus Less Wrong group has shown that we need to get people interested in rationality by showing them its benefits and how it can solve their problems, while delivering complex ideas in an engaging and friendly fashion targeted at a broad public, and using active learning strategies and connecting rationality to what they already know. This is what I do in my teaching, and it is the current best practice in educational psychology. It has worked great with my students when I began to teach them rationality concepts. Yet I do not know of any current rationality training that does this. Currently, such education in rationality is available mainly through the excellent, intense 4-day workshops run by the Center for Applied Rationality (CFAR), usually held in the San Francisco area, which are aimed at a "select group of founders, hackers, and other ambitious, analytical, practically-minded people." We are targeting a much broader and less advanced audience, the upper 50-85%, while CFAR primarily targets the top 5-10%. We have had great interactions with Anna Salamon, Julia Galef, Kenzi Amodei, and other CFAR folks, and plan to collaborate with them on various ways to do rationality outreach. Besides CFAR, there are also some online classes on decision-making from Clearer Thinking, as well as some other resources we list on the Intentional Insights resources page. However, we really wanted to see something oriented at the broad public, which can gain a great deal from a much lower level of education in rationality made accessible and relevant to their everyday lives and concerns, and delivered in a fashion perceived as interesting, fun, and friendly by mass audiences, as we aim to do with our events.

Intentional Insights came from this desire. This nonprofit explicitly orients toward getting the broad masses interested in and learning about rationality by providing fun and engaging content delivered in a friendly manner. What we want to do is use various social influence methods and promote rationality as a self-improvement/leadership development offering for people who are not currently interested in rational thinking because of the Straw Vulcan image, but who are interested in self-improvement, professional development, and organizational development. As people become more advanced, we will orient them toward more advanced rationality, at Less Wrong and elsewhere. Now, there are those who believe rationality should be taught only to those who are willing to put in the hard work and effort to overcome the high barrier to entry of learning all the jargon. However, we are reformers, not revolutionaries, and believe that some progress is better than no progress. And the more aspiring rationalists engage in various projects aimed to raise the sanity waterline, using different channels and strategies, the better. We can all help and learn from each other, adopting an experimental attitude and gathering data about what methods work best, constantly updating our beliefs and improving our abilities to help more people gain greater agency.

The channels of delivery locally are classes and workshops. Here is what one college student participant wrote after a session: “I have gained a new perspective after attending the workshop. In order to be more analytical, I have to take into account that attentional bias is everywhere. I can now further analyze and make conclusions based on evidence.” This and similar statements seem to indicate some positive impact, and we plan to gather evidence to examine whether workshop participants adopt more rational ways of thinking and how the classes influence people’s actual performance over time.

We have a website that takes this content globally, as well as social media such as Facebook and Twitter. The website currently has:

- Blog posts, such as on agency; polyamory and cached thinking; and life meaning and purpose. We aim to make them easy-to-read and engaging to get people interested in rational thinking. These will be targeted at a high school reading level, the type of fun posts aspiring rationalists can share with their friends or family members whom they may want to get into rationality, or at least explain what rationality is all about.
- Videos with similar content to the blog posts, such as on evaluating reality clearly, and on meaning and purpose
- A resources page, with links to prominent rationality venues, such as Less Wrong, CFAR, HPMOR, etc.

It will eventually have:

- Rationality-themed merchandise, including stickers, buttons, pens, mugs, t-shirts, etc.
- Online classes teaching rationality concepts
- A wide variety of other products and offerings, such as e-books and apps

Now, why my wife and I, and the Columbus Less Wrong group? To this project, I bring my knowledge of educational psychology, research expertise, and teaching experience; my wife her expertise as a nonprofit professional with an MBA in nonprofit management; and other Board members include a cognitive neuroscientist, a licensed therapist, a gentleman adventurer, and other awesome members of the Columbus, OH, Less Wrong group.

Now, I can really use the help of wise aspiring rationalists to help out this project:

1) If you were trying to get the Less Wrong community engaged in the project, what would you do?

2) If you were trying to promote this project broadly, what would you do? What dark arts might you use, and how?

3) If you were trying to get specific groups and communities interested in promoting rational thinking in our society engaged in the project, what would you do? What dark arts might you use, and how?

4) If you were trying to fundraise for this project, what would you do? What dark arts might you use, and how?

5) If you were trying to persuade people to sign up for workshops or check out a website devoted to rational thinking, what would you do? How would you tie it to people’s self-interest and everyday problems that rationality might solve? What dark arts might you use, and how?

6) If you were trying to organize a nonprofit devoted to doing all the stuff above, what would you do to help manage its planning and organization? What about managing relationships and group dynamics?

Besides the advice, I invite you to ally with us and collaborate on this project in whatever way is optimal for you. Money is very helpful right now as we are fundraising to pay for costs associated with starting up the nonprofit, around $3600 through the rest of 2014, and you can donate directly through our website. Your time, intellectual capacity, and any specific talents would also be great, on things such as giving advice and helping out on specific tasks/projects, developing content in the form of blogs, videos, etc., promoting the project to those you know, and other ways to help out.

Leave your thoughts in the comments below, or get in touch with me directly. I hope you would like to ally with us to raise the sanity waterline!


EDIT: Based on your feedback, we've decided that this post on polyamory and cached thinking is probably a bad fit for what we want to promote right now. We've removed it from the main index of our site. Thanks for helping!

Link: Elon Musk wants gov't oversight for AI

9 polymathwannabe 28 October 2014 02:15AM

"I'm increasingly inclined to thing there should be some regulatory oversight, maybe at the national and international level just to make sure that we don't do something very foolish."

Memory Improvement: Mnemonics, Tools, or Books on the Topic?

8 Capla 21 November 2014 06:59PM

I want a perfect eidetic memory.

Unfortunately, such things don't exist, but that's not stopping me from getting as close as possible. It seems as if the popular solutions are spaced repetition and memory palaces. So let's talk about those.

Memory Palaces: Do they work? If so what's the best resource (book, website etc.) for learning and mastering the technique? Is it any good for memorizing anything other than lists of things (which I find I almost never have to do)?

Spaced Repetition: What software do you use? Why that one? What sort of cards do you put in?

It seems to me that memory programs and mnemonic techniques each address one of three parts of the problem of memory: memorizing, recalling, and not forgetting.

"Not forgetting" is the long term problem of memory. Spaced repetition seems to solve the problem of "not forgetting." You feed the information you want to remember into your program, review frequently, and you won't forget that information.

Memory Palaces seem to deal with the "memorizing" part of the problem. When faced with new information that you want to be able to recall, you put it in a memory palace, vividly emphasized so as to be affective and memorable. This is good for short-term encoding of information that you know you want to keep. You might put it into your spaced repetition program later, but you just want to not forget it until then.

The last part is the problem of "recalling." Both of the previous facets of the problem of memory had a distinct advantage: you knew the information that you wanted to remember in advance. However, we frequently find ourselves in situations in which we need/want to remember something that we know (or perhaps we don't) we encountered, but didn't consider particularly important at the time. Under this heading falls the situation of making connections when learning or being reminded of old information by new information: when you learn y, you have the thought "hey, isn't that just like x?" This is the facet of the memory problem that I am most interested in, but I know of scarcely anything that can reliably improve ease of recall of information in general. Do you know of anything?

I'm looking for recommendations: books on memory, specific mnemonics, or practices that are known to improve recall, or anything else that might help with any of the three parts of the problem.



Superintelligence 9: The orthogonality of intelligence and goals

8 KatjaGrace 11 November 2014 02:00AM

This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, and an index of posts so far see the announcement post. For the schedule of future topics, see MIRI's reading guide.

Welcome. This week we discuss the ninth section in the reading guide: The orthogonality of intelligence and goals. This corresponds to the first section in Chapter 7, 'The relation between intelligence and motivation'.

This post summarizes the section, and offers a few relevant notes, and ideas for further investigation. Some of my own thoughts and questions for discussion are in the comments.

There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. Where applicable and I remember, page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).

Reading: 'The relation between intelligence and motivation' (p105-8)


  1. The orthogonality thesis: intelligence and final goals are orthogonal: more or less any level of intelligence could in principle be combined with more or less any final goal (p107)
  2. Some qualifications to the orthogonality thesis: (p107)
    1. Simple agents may not be able to entertain some goals
    2. Agents with desires relating to their intelligence might alter their intelligence
  3. The motivations of highly intelligent agents may nonetheless be predicted (p108):
    1. Via knowing the goals the agent was designed to fulfil
    2. Via knowing the kinds of motivations held by the agent's 'ancestors'
    3. Via finding instrumental goals that an agent with almost any ultimate goals would desire (e.g. to stay alive, to control money)

Another view

John Danaher at Philosophical Disquisitions starts a series of posts on Superintelligence with a somewhat critical evaluation of the orthogonality thesis, in the process contributing a nice summary of nearby philosophical debates. Here is an excerpt, entitled 'is the orthogonality thesis plausible?':

At first glance, the orthogonality thesis seems pretty plausible. For example, the idea of a superintelligent machine whose final goal is to maximise the number of paperclips in the world (the so-called paperclip maximiser) seems to be logically consistent. We can imagine — can’t we? — a machine with that goal and with an exceptional ability to utilise the world’s resources in pursuit of that goal. Nevertheless, there is at least one major philosophical objection to it.

We can call it the motivating belief objection. It works something like this:

Motivating Belief Objection: There are certain kinds of true belief about the world that are necessarily motivating, i.e. as soon as an agent believes a particular fact about the world they will be motivated to act in a certain way (and not motivated to act in other ways). If we assume that the number of true beliefs goes up with intelligence, it would then follow that there are certain goals that a superintelligent being must have and certain others that it cannot have.

A particularly powerful version of the motivating belief objection would combine it with a form of moral realism. Moral realism is the view that there are moral facts “out there” in the world waiting to be discovered. A sufficiently intelligent being would presumably acquire more true beliefs about those moral facts. If those facts are among the kind that are motivationally salient — as several moral theorists are inclined to believe — then it would follow that a sufficiently intelligent being would act in a moral way. This could, in turn, undercut claims about a superintelligence posing an existential threat to human beings (though that depends, of course, on what the moral truth really is).

The motivating belief objection is itself vulnerable to many objections. For one thing, it goes against a classic philosophical theory of human motivation: the Humean theory. This comes from the philosopher David Hume, who argued that beliefs are motivationally inert. If the Humean theory is true, the motivating belief objection fails. Of course, the Humean theory may be false and so Bostrom wisely avoids it in his defence of the orthogonality thesis. Instead, he makes three points. First, he claims that orthogonality would still hold if final goals are overwhelming, i.e. if they trump the motivational effect of motivating beliefs. Second, he argues that intelligence (as he defines it) may not entail the acquisition of such motivational beliefs. This is an interesting point. Earlier, I assumed that the better an agent is at means-end reasoning, the more likely it is that its beliefs are going to be true. But maybe this isn’t necessarily the case. After all, what matters for Bostrom’s definition of intelligence is whether the agent is getting what it wants, and it’s possible that an agent doesn’t need true beliefs about the world in order to get what it wants. A useful analogy here might be with Plantinga’s evolutionary argument against naturalism. Evolution by natural selection is a means-end process par excellence: the “end” is survival of the genes, anything that facilitates this is the “means”. Plantinga argues that there is nothing about this process that entails the evolution of cognitive mechanisms that track true beliefs about the world. It could be that certain false beliefs increase the probability of survival. Something similar could be true in the case of a superintelligent machine. The third point Bostrom makes is that a superintelligent machine could be created with no functional analogues of what we call “beliefs” and “desires”. This would also undercut the motivating belief objection.

What do we make of these three responses? They are certainly intriguing. My feeling is that the staunch moral realist will reject the first one. He or she will argue that moral beliefs are most likely to be motivationally overwhelming, so any agent that acquired true moral beliefs would be motivated to act in accordance with them (regardless of their alleged “final goals”). The second response is more interesting. Plantinga’s evolutionary objection to naturalism is, of course, hotly contested. Many argue that there are good reasons to think that evolution would create truth-tracking cognitive architectures. Could something similar be argued in the case of superintelligent AIs? Perhaps. The case seems particularly strong given that humans would be guiding the initial development of AIs and would, presumably, ensure that they were inclined to acquire true beliefs about the world. But remember Bostrom’s point isn’t that superintelligent AIs would never acquire true beliefs. His point is merely that high levels of intelligence may not entail the acquisition of true beliefs in the domains we might like. This is a harder claim to defeat. As for the third response, I have nothing to say. I have a hard time imagining an AI with no functional analogues of a belief or desire (especially since what counts as a functional analogue of those things is pretty fuzzy), but I guess it is possible.

One other point I would make is that — although I may be inclined to believe a certain version of the moral motivating belief objection — I am also perfectly willing to accept that the truth value of that objection is uncertain. There are many decent philosophical objections to motivational internalism and moral realism. Given this uncertainty, and given the potential risks involved with the creation of superintelligent AIs, we should probably proceed for the time being “as if” the orthogonality thesis is true.


1. Why care about the orthogonality thesis?
We are interested in an argument which says that AI might be dangerous, because it might be powerful and motivated by goals very far from our own. An occasional response to this is that if a creature is sufficiently intelligent, it will surely know things like which deeds are virtuous and what one ought do. Thus a sufficiently powerful AI cannot help but be kind to us. This is closely related to the position of the moral realist: that there are facts about what one ought do, which can be observed (usually mentally). 

So the role of the orthogonality thesis in the larger argument is to rule out the possibility that strong artificial intelligence will automatically be beneficial to humans, by virtue of being so clever. For this purpose, it seems a fairly weak version of the orthogonality thesis is needed. For instance, the qualifications discussed do not seem to matter. Even if one's mind needs to be quite complex to have many goals, there is little reason to expect the goals of more complex agents to be disproportionately human-friendly. Also the existence of goals which would undermine intelligence doesn't seem to affect the point.

2. Is the orthogonality thesis necessary?
If we talked about specific capabilities instead of 'intelligence' I suspect the arguments for AI risk could be made similarly well, without anyone being tempted to disagree with the analogous orthogonality theses for those skills. For instance, does anyone believe that a sufficiently good automated programming algorithm will come to appreciate true ethics? 

3. Some writings on the orthogonality thesis which I haven't necessarily read
The Superintelligent Will by Bostrom; Arguing the orthogonality thesis by Stuart Armstrong; Moral Realism, as discussed by lots of people, John Danaher blogs twice

4. 'It might be impossible for a very unintelligent system to have very complex motivations'
If this is so, it seems something more general is true. For any given degree of mental complexity substantially less than that of the universe, almost all values cannot be had by any agent with that degree of complexity or less. You can see this by comparing the number of different states the universe could be in (and thus which one might in principle have as one's goal) to the number of different minds with less than the target level of complexity. Intelligence and complexity are not the same, and perhaps you can be very complex while stupid by dedicating most of your mind to knowing about your complicated goals, but if you think about things this way, then the original statement is also less plausible.
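The counting argument above can be made concrete with a toy calculation; the bit budgets below are arbitrary illustrative choices, not claims about real agents or the real universe.

```python
# A toy version of the counting argument above: candidate goals (world-states)
# vastly outnumber minds below a complexity bound. The bit budgets here are
# arbitrary illustrative choices.

mind_bits = 100     # complexity budget for the agent's mind
world_bits = 1000   # description length of a world-state

n_minds = 2 ** mind_bits   # at most this many distinct minds fit the budget
n_goals = 2 ** world_bits  # one candidate goal per distinguishable world-state

# Fraction of goals that *any* mind under the budget could possibly have:
fraction = n_minds / n_goals  # exactly 2**-900, about 1.2e-271
print(fraction)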

5. How do you tell if two entities with different goals have the same intelligence? Suppose that I want to write award-winning non-fiction books and you want to be a successful lawyer. If we both just work on the thing we care about, how can anyone tell who is better in general? One nice way to judge is to artificially give us both the same instrumental goal, on which our intelligence can be measured. e.g. pay both of us thousands of dollars per correct question on an IQ test, which we could put toward our goals.

Note that this means we treat each person as having a fixed degree of intelligence across tasks. If I do well on the IQ test yet don't write many books, we would presumably say that writing books is just hard. This might work poorly as a model, if for instance people who did worse on the IQ test often wrote more books than me.

In-depth investigations

If you are particularly interested in these topics, and want to do further research, these are a few plausible directions, some inspired by Luke Muehlhauser's list, which contains many suggestions related to parts of Superintelligence. These projects could be attempted at various levels of depth.


  1. Are there interesting axes other than morality on which orthogonality may be false? That is, are there other ways the values of more or less intelligent agents might be constrained?
  2. Is moral realism true? (An old and probably not neglected one, but perhaps you have a promising angle)
  3. Investigate whether the orthogonality thesis holds for simple models of AI.
  4. To what extent can agents with values A be converted into agents with values B with appropriate institutions or arrangements?
  5. Sure, “any level of intelligence could in principle be combined with more or less any final goal,” but what kinds of general intelligences are plausible? Should we expect some correlation between level of intelligence and final goals in de novo AI? How true is this in humans, and in WBEs?


If you are interested in anything like this, you might want to mention it in the comments, and see whether other people have useful thoughts.

How to proceed

This has been a collection of notes on the chapter.  The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

Next week, we will talk about instrumentally convergent goals. To prepare, read 'Instrumental convergence' from Chapter 7. The discussion will go live at 6pm Pacific time next Monday, November 17. Sign up to be notified here.

Superintelligence 7: Decisive strategic advantage

8 KatjaGrace 28 October 2014 01:01AM

This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, and an index of posts so far see the announcement post. For the schedule of future topics, see MIRI's reading guide.

Welcome. This week we discuss the seventh section in the reading guide: Decisive strategic advantage. This corresponds to Chapter 5.

This post summarizes the section, and offers a few relevant notes, and ideas for further investigation. Some of my own thoughts and questions for discussion are in the comments.

There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. Where applicable and I remember, page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).

Reading: Chapter 5 (p78-91)


  1. Question: will a single artificial intelligence project get to 'dictate the future'? (p78)
  2. We can ask: will a project attain a 'decisive strategic advantage', and will it use this to make a 'singleton'?
    1. 'Decisive strategic advantage' = a level of technological and other advantages sufficient for complete world domination (p78)
    2. 'Singleton' = a single global decision-making agency strong enough to solve all major global coordination problems (p78, 83)
  3. A project will get a decisive strategic advantage if there is a big enough gap between its capability and that of other projects. 
  4. A faster takeoff would make this gap bigger. Other factors would too, e.g. diffusion of ideas, regulation or expropriation of winnings, the ease of staying ahead once you are far enough ahead, and AI solutions to loyalty issues (p78-9)
  5. In some historical examples, leading projects have had a gap of a few months to a few years over those following them. (p79)
  6. Even if a second project starts taking off before the first is done, the first may emerge decisively advantageous. If we imagine takeoff accelerating, a project that starts out just behind the leading project might still be far inferior when the leading project reaches superintelligence. (p82)
  7. How large would a successful project be? (p83) If the route to superintelligence is not AI, the project probably needs to be big. If it is AI, size is less clear. If lots of insights are accumulated in open resources, and can be put together or finished by a small team, a successful AI project might be quite small (p83).
  8. We should distinguish the size of the group working on the project, and the size of the group that controls the project (p83-4)
  9. If large powers anticipate an intelligence explosion, they may want to monitor those involved and/or take control. (p84)
  10. It might be easy to monitor very large projects, but hard to trace small projects designed to be secret from the outset. (p85)
  11. Authorities may just not notice what's going on, for instance if politically motivated firms and academics fight against their research being seen as dangerous. (p85)
  12. Various considerations suggest a superintelligence with a decisive strategic advantage would be more likely than a human group to use the advantage to form a singleton (p87-89)

Another view

This week, Paul Christiano contributes a guest sub-post on an alternative perspective:

Typically new technologies do not allow small groups to obtain a “decisive strategic advantage”—they usually diffuse throughout the whole world, or perhaps are limited to a single country or coalition during war. This is consistent with intuition: a small group with a technological advantage will still do further research slower than the rest of the world, unless their technological advantage overwhelms their smaller size.

The result is that small groups will be overtaken by big groups. Usually the small group will sell or lease their technology to society at large first, since a technology’s usefulness is proportional to the scale at which it can be deployed. In extreme cases such as war these gains might be offset by the cost of empowering the enemy. But even in this case we expect the dynamics of coalition-formation to increase the scale of technology-sharing until there are at most a handful of competing factions.

So any discussion of why AI will lead to a decisive strategic advantage must necessarily be a discussion of why AI is an unusual technology.

In the case of AI, the main difference Bostrom highlights is the possibility of an abrupt increase in productivity. In order for a small group to obtain such an advantage, their technological lead must correspond to a large productivity improvement. A team with a billion dollar budget would need to secure something like a 10,000-fold increase in productivity in order to outcompete the rest of the world. Such a jump is conceivable, but I consider it unlikely. There are other conceivable mechanisms distinctive to AI; I don’t think any of them have yet been explored in enough depth to be persuasive to a skeptical audience.
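Christiano's 10,000-fold figure is roughly the ratio of world economic output to a billion-dollar budget. A back-of-envelope check, with the world-output figure assumed for illustration rather than sourced:

```python
# Back-of-envelope check on the "10,000-fold" figure quoted above. The
# world-output number is an assumed order of magnitude, not a sourced figure.

gross_world_product = 1e13  # rough annual world output in USD (assumption)
team_budget = 1e9           # a billion-dollar project

# Productivity multiplier the team needs for its budget to match the world:
multiplier = gross_world_product / team_budget
print(multiplier)  # 10000.0
```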


1. Extreme AI capability does not imply strategic advantage. An AI program could be very capable - such that the sum of all instances of that AI worldwide were far superior (in capability, e.g. economic value) to the rest of humanity's joint efforts - and yet the AI could fail to have a decisive strategic advantage, because it may not be a strategic unit. Instances of the AI may be controlled by different parties across society. In fact this is the usual outcome for technological developments.

2. On gaps between the best AI project and the second best AI project (p79). A large gap might develop either because of an abrupt jump in capability or extremely fast progress (which is much like an abrupt jump), or from one project having consistently faster growth than other projects for a time. Consistently faster progress is a bit like a jump, in that there is presumably some particular highly valuable thing that changed at the start of the fast progress. Robin Hanson frames his Foom debate with Eliezer as about whether there are 'architectural' innovations to be made, by which he means innovations which have a large effect (or so I understood from conversation). This seems like much the same question. On this, Robin says:

Yes, sometimes architectural choices have wider impacts. But I was an artificial intelligence researcher for nine years, ending twenty years ago, and I never saw an architecture choice make a huge difference, relative to other reasonable architecture choices. For most big systems, overall architecture matters a lot less than getting lots of detail right. Researchers have long wandered the space of architectures, mostly rediscovering variations on what others found before.

3. What should activists do? Bostrom points out that activists seeking maximum expected impact might wish to focus their planning on high leverage scenarios, where larger players are not paying attention (p86). This is true, but it's worth noting that changing the probability of large players paying attention is also an option for activists, if they think the 'high leverage scenarios' are likely to be much better or worse.

4. Trade. One key question seems to be whether successful projects are likely to sell their products, or hoard them in the hope of soon taking over the world. I doubt this will be a strategic decision they will make - rather it seems that one of these options will be obviously better given the situation, and we are uncertain about which. A lone inventor of writing should probably not have hoarded it for a solitary power grab, even though it could reasonably have seemed like a good candidate for radically speeding up the process of self-improvement.

5. Disagreement. Note that though few people believe that a single AI project will get to dictate the future, this is often because they disagree with things in the previous chapter - e.g. that a single AI project will plausibly become more capable than the world in the space of less than a month.

6. How big is the AI project? Bostrom distinguishes between the size of the effort to make AI and the size of the group ultimately controlling its decisions. Note that the people making decisions for the AI project may also not be the people making decisions for the AI - i.e. the agents that emerge. For instance, the AI-making company might sell versions of their AI to a range of organizations, modified for their particular goals. While in some sense their AI has taken over the world, the actual agents are acting on behalf of much of society.

In-depth investigations

If you are particularly interested in these topics, and want to do further research, these are a few plausible directions, some inspired by Luke Muehlhauser's list, which contains many suggestions related to parts of Superintelligence. These projects could be attempted at various levels of depth.


  1. When has anyone gained a 'decisive strategic advantage' at a smaller scale than the world? Can we learn anything interesting about what characteristics a project would need to have such an advantage with respect to the world?
  2. How scalable is innovative project secrecy? Examine past cases: Manhattan Project, Bletchley Park, Bitcoin, Anonymous, Stuxnet, Skunk Works, Phantom Works, Google X.
  3. How large are the gaps in development time between modern software projects? What dictates this? (e.g. is there diffusion of ideas from engineers talking to each other? From people changing organizations? Do people get far enough ahead that it is hard to follow them?)


If you are interested in anything like this, you might want to mention it in the comments, and see whether other people have useful thoughts.

How to proceed

This has been a collection of notes on the chapter.  The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

Next week, we will talk about Cognitive superpowers (section 8). To prepare, read Chapter 6. The discussion will go live at 6pm Pacific time next Monday 3 November. Sign up to be notified here.


8 Capla 25 October 2014 11:42PM

I discovered podcasts last year, and I love them! Why not be hearing about new ideas while I'm walking to where I'm going? (Some of you might shout "insight porn!", and I think that I largely agree. However, 1) I don't have any particular problem with insight porn and 2) I have frequently been exposed to an idea or been recommended a book through a podcast, on which I later followed up, leading to more substantive intellectual growth.)

I wonder if anyone has favorites that they might want to share with me.

I'll start:

Radiolab is, hands down, the best of all the podcasts. This seems universally recognized: I’ve yet to meet anyone who disagrees. Even the people who make other podcasts think that Radiolab is better than their own. This one regularly invokes a profound sense of wonder at the universe and gratitude for being able to appreciate it. If you missed it somehow, you're probably missing out.

The Freakonomics podcast, in my opinion, comes close to Radiolab. All the things that you thought you knew, but didn’t, and all the things you never knew you wanted to know, but do, in typical Freakonomics style. Listening to their podcast is one of the two things that makes me happy.

There’s one other podcast that I consider to be in the same league (and this one you've probably never heard of): The Memory Palace. 5-10 minute stories from history, really well done. It’s all the more impressive because while Radiolab and Freakonomics are both made by professional production teams in radio studios, The Memory Palace is just some guy who makes a podcast.

Those are my three top picks (and they are the only podcasts that I listen to at “normal” speed instead of x1.5 or x2.0, since their audio production is so good).

I discovered Rationally Speaking: Exploring the Borderlands Between Reason and Nonsense recently and I’m loving it. It is my kind of skeptics podcast, investigating topics that are on the fringe but not straight-out bunk (I don't need to listen to yet another podcast about how astrology doesn't work). The interplay between the hosts, Massimo (who has a PhD in Philosophy, but also one in Biology, which excuses it) and Julia (who I only just realized is a founder of CFAR), is great.

I also sometimes enjoy the Cracked podcast, which has topics that touch on cognitive bias and statistics but also analysis of (pop) culture and interesting things about the world in general. They are comedians, not philosophers or social scientists, and sometimes their lack of expertise shows (especially when they are discussing topics about which I, and I think the average LW reader, know more than they do), but comedians often have worthwhile insights and I have been intrigued by ideas they introduced me to or gotten books at the library on their recommendation.

To what is everyone else listening?

Edit: On the suggestion of several members on LessWrong I've begun listening to Hardcore History and its companion podcast Common Sense. They're both great. I have a good knowledge of history from my school days (I liked the subject, and I seem to have a strong propensity to retain extraneous information, particularly information in narrative form), and Hardcore History episodes are a great refresher course, reviewing what I'm already familiar with, but from a slightly different perspective, yielding new insights and a greater connectivity of history. I think it has almost certainly supplanted the Cracked podcast as number 5 on my list.

Neo-reactionaries, why are you neo-reactionary?

7 Capla 17 November 2014 10:31PM

Through LessWrong, I've discovered the neo-reactionary movement. Surveys say that there are some of you here.

I'm curious, what led you to accept the basic premises of the movement? What is the story of your personal "conversion"? Was there some particular insight or information that was important in convincing you? Was it something that just "clicked" for you or that you had always felt in a vague way? Were any of you "raised in it"?

Feel free to forward my questions to others or direct me towards a better forum for asking this.

I hope that this is in no way demeaning or insulting. I'm genuinely curious and my questioning is value-free. If you point me towards compelling evidence of the neo-reactionary premise, I'll update on it.

I just increased my Altruistic Effectiveness and you should too

7 AABoyles 17 November 2014 03:45PM

I was looking at the marketing materials for a charity (which I'll call X) over the weekend, when I saw something odd at the bottom of their donation form:

Check here to increase your donation by 3% to defray the cost of credit card processing.

It's not news to me that credit card companies charge merchants a cut of every transaction.  But the ramifications of this for charitable contributions had never sunk in. I use my credit card for all of the purchases I can (I get pretty good cash-back rates). Automatically drafting from my checking account (like a check, only without the check) costs X nothing. So I've increased the effectiveness of my charitable contributions by a small (<3%) amount by performing what amounts to a paperwork tweak.
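The arithmetic here can be sketched out explicitly. A minimal illustration, assuming a hypothetical 2.9% card processing rate (actual rates vary by processor; the function name and parameters are my own, not from X's form):

```python
def net_donation(amount, fee_rate=0.029, gross_up=0.0):
    """Amount the charity actually receives after payment processing.

    amount:   what the donor intends to give
    fee_rate: hypothetical card processing rate; a bank draft is ~0
    gross_up: optional fraction added to the donation to defray fees,
              like the 3% checkbox on the donation form
    """
    gross = amount * (1 + gross_up)
    return gross * (1 - fee_rate)

# $100 by card: the charity receives about $97.10.
card = net_donation(100)

# $100 by card with the 3% checkbox: the donor pays $103,
# and the charity nets slightly more than $100 (since the fee
# also applies to the grossed-up amount, 3% on top of a 2.9%
# fee just barely covers it).
card_plus = net_donation(100, gross_up=0.03)

# $100 by automatic bank draft: no fee, the charity keeps all $100.
ach = net_donation(100, fee_rate=0.0)
```

Note the wrinkle this makes visible: because the fee is charged on the grossed-up total, a gross-up of exactly the fee rate slightly undershoots full cost recovery; the break-even gross-up is fee/(1 - fee).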

If you use a credit card for donations, please think about making this tweak as well!

View more: Next