New Haven / Southern Connecticut Meetup, Wednesday Apr. 27th 6 PM

5 alyssavance 25 April 2011 04:00AM

Risk, Success, and Failure

0 alyssavance 19 April 2011 08:34PM

Followup To: Levels of Action, Don't Fear Failure

When you do something new, in addition to accomplishing your goal, you get more information about the techniques you can use to accomplish stuff - what works and what doesn't. For instance, if you just got your driver's license and are driving to work, you get to work on time. But you also gain information about how to drive, which will help the future you be a better driver. In terms of levels of action, doing something new on one level is also partly about doing something on the level above that. If you work for Google and are reading a programming textbook, that's a Level 2 action. But if this is your first programming textbook, it's also a Level 3 action - because when you read it, you're getting information about how programming textbooks work compared to other learning methods.

However, there's also a downside: the possibility of failure. If you've never done something before, and don't really know what you're doing, you're likely to fail. I claim that, in most circumstances, the potential consequences of failure should largely just be ignored; you don't lose anything, other than the resources you initially invested. But why is that?

continue reading »

Advice to Rationalist Communities

0 alyssavance 19 April 2011 12:39AM

(This was adapted by me, with permission, from an email Eliezer sent to the New York Less Wrong group after his first visit. All copyrights belong to Eliezer.)

Having some kind of global rationalist community come into existence seems like a quite extremely good idea. The New York Less Wrong Hive is the forerunner of that, the first group of LW-style rationalists to form a real community, and to confront the challenges involved in staying on track while growing as a community.

"Stay on track toward what?" you ask, and my best shot at describing the vision is as follows:

"Through rationality we shall become awesome, and invent and test systematic methods for making people awesome, and plot to optimize everything in sight, and the more fun we have the more people will want to join us."

(That last part is something I only realized was Really Important after visiting New York.)

Michael Vassar says he's worried that people might be losing track of the "rationality" and "world optimization" parts of this - that people might be wondering what sort of benefit "rationality" delivers as opposed to, say, paleo dieting.

I admit that the original Less Wrong sequences did not heavily emphasize the benefits for everyday life (as opposed to solving ridiculously hard scientific problems). This is something I plan to fix with my forthcoming book - along with the problem where the key info is scattered over six hundred blog posts that only truly dedicated people and/or serious procrastinators can find the time to read.

But I really don't think the whole rationality/fun association the New York group got going - my congratulations on pulling that off, by the way, it's damned impressive - is something that can (let alone should) be untangled. Most groups of people capable of becoming enthusiastic about strange new nonconformist ways of living their lives would have started trying to read each other's auras by now. Rationality is the master lifehack which distinguishes which other lifehacks to use.

The way an LW-rationality meetup usually gets started is that there is a joy of being around reasonable people - a joy that comes, in a very direct way, from those people caring about what's true and what's effective, and being able to reflect on more than their first impulse to see whether it makes sense. You wouldn't want to lose that either.

But the thing about effective rationality is that you can also use it to distinguish truth from falsehood, and realize that the best methods aren't always the ones everyone else is using; and you can start assembling a pool of lifehacks that doesn't include homeopathy. You become stronger, and that makes you start thinking that you can also help other people become stronger. Through the systematic accumulation of good ideas and the rejection of bad ideas, you can get so awesome that even other people notice, and this means that you can start attracting a new sort of person, one who starts out wanting to become awesome instead of being attracted specifically to the rationality thing. This is fine in theory, since indeed the Art must have a purpose higher than itself or it collapses into infinite recursion. But some of these new recruits may be a bit skeptical, at first, that all this "rationality" stuff is really contributing all that much to the awesome.

Real life is not a morality tale, and I don't know if I'd prophesy that the instant you get too much awesome and not enough rationality, the group will be punished for that sin by going off and trying to read auras. But I think I would prophesy that if you got too large and insufficiently reasonable, and if you lost sight of your higher purposes and your dreams of world optimization, the first major speedbump you hit would splinter the group. (There will be some speedbump, though I don't know what it will be.)

Rationality isn't just about knowing about things like Bayes's Theorem. It's also about:

* Saying oops and changing your mind occasionally.

* Knowing that clever arguing isn't the same as looking for truth.

* Actually paying attention to what succeeds and what fails, instead of just being driven by your internal theories.

* Reserving your self-congratulations for the occasions when you actually change a policy or belief, because while not every change is an improvement, every improvement is a change.

* Self-awareness - a core rational skill, but at the same time, a caterpillar that spent all day obsessing about being a caterpillar would never become a butterfly.

* Having enough grasp of evolutionary psychology to realize that this is no longer an eighty-person hunter-gatherer band and that getting into huge shouting matches about Republicans versus Democrats does not actually change very much.

* Asking whether the beliefs you most cherish shouting about actually control your anticipations - whether they mean anything at all, never mind whether their predictions are correct.

* Understanding that correspondence bias means that most of your enemies are not inherently evil mutants but rather people who live in a different perceived world than you do. (Albeit of course that some people are selfish bastards and a very few of them are psychopaths.)

* Being able to accept and consider advice from other people who think you're doing something stupid, without lashing out at them; and the more you show them this is true, and the more they can trust you not to be offended if you're frank with them, the better the advice you can get. (Yes, this has a failure mode where insulting other people becomes a status display. But you can also have too much politeness, and it is a traditional strength of rationalists that they sometimes tell each other the truth. Now and then I've told college students that they are emitting terrible body odors, and the reply I usually get is that they had no idea and I am the first person ever to suggest this to them.)

* Comprehending the nontechnical arguments for Aumann's Agreement Theorem well enough to realize that when two people have common knowledge of a persistent disagreement, something is wrong somewhere - not that you can necessarily do better by automatically agreeing with everyone who persistently disagrees with you; but still, knowing that ideal rational agents wouldn't just go around yelling at each other all the time.

* Knowing about scope insensitivity doesn't just mean that you donate charitable dollars to existential risks instead of the Society For Curing Rare Diseases In Cute Puppies. It means you know that eating half a chocolate brownie appears as essentially the same pleasurable memory in retrospect as eating a whole brownie, so long as the other half isn't in front of you and you don't have the unpleasant memory of exerting willpower not to eat it. Seriously, I didn't emphasize all the practical applications of every cognitive bias in the Less Wrong sequences but there are a lot of things like that.

* The ability to dissent from conformity; realizing the difficulty and importance of being the first to dissent.

* Knowing that to avoid pluralistic ignorance everyone should write down their opinion on a sheet of paper before hearing what everyone else thinks.

But then one of the chief surprising lessons I learned, after writing the original Less Wrong sequences, was that if you succeed in teaching people a bunch of amazing stuff about epistemic rationality, this reveals...

(drum roll)

...that, having repaired some of people's flaws, you can now see more clearly all the other qualities required to be awesome. The most important and notable of these other qualities, needless to say, is Getting Crap Done.

(Those of you reading Methods of Rationality will note that it emphasizes a lot of things that aren't in the original Less Wrong, such as the virtues of hard work and practice. This is because I have Learned From Experience.)

Similarly, courage isn't something I emphasized enough in the original Less Wrong (as opposed to MoR), but the thought has since occurred to me that most people can't do things which require even small amounts of courage. (Leaving NYC, I had two Metrocards with small amounts of remaining value to give away. I felt reluctant to call out anything, or approach anyone and offer them a free Metrocard, and I thought to myself, well, of course I'm reluctant; this task requires a small amount of courage. And then I asked three times before I found someone who wanted them. Not, mind you, that this was an important task in the grand scheme of things - just a little bit of rejection therapy, a little bit of practice in doing things which require small amounts of courage.)

Or there's Munchkinism, the quality that lets people try out lifehacks that sound a bit weird. A Munchkin is the sort of person who, faced with a role-playing game, reads through the rulebooks over and over until he finds a way to combine three innocuous-seeming magical items into a cycle of infinite wish spells. Or who, in real life, composes a surprisingly effective diet out of drinking a quarter-cup of extra-light olive oil at least one hour before and after tasting anything else. Or combines liquid nitrogen and antifreeze and life-insurance policies into a ridiculously cheap method of defeating the invincible specter of unavoidable Death. Or figures out how to build the real-life version of the cycle of infinite wish spells. Magic the Gathering is a Munchkin game, and MoR is a Munchkin story.

It would be really awesome if the Less Wrong groups figured out how to teach their members hard work and courage and Munchkinism and so on.

It would be even more awesome if you could muster up the energy to track the results in any sort of systematic way so that you can do small-N Science (based on Bayesian likelihoods, thank you, not the usual statistical significance bullhockey) and find out how effective different teaching methods are, or track the effectiveness of other lifehacks as well - the Quantified Self road. This, of course, would require Getting Crap Done; but I do think that in the long run, whether we end up with really effective rationalists is going to depend a lot on whether we can come up with evidence-based metrics for how well a teaching method works, or whether we're stuck in the failure mode of psychoanalysis, where we just go around trying things that sound like good ideas.
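To make the "Bayesian likelihoods, not significance tests" idea concrete, here is a minimal sketch of what small-N Science could look like. All the numbers are hypothetical (suppose teaching method A helped 7 of 9 people and method B helped 3 of 8), and the model choice - uniform Beta(1,1) priors on each success rate - is my own illustrative assumption, not anything prescribed in the text:

```python
# Small-N comparison of two teaching methods via a Bayes factor,
# instead of a significance test. Uniform Beta(1,1) priors assumed.

from math import lgamma, exp

def log_beta(a, b):
    # log of the Beta function B(a, b) = Gamma(a) * Gamma(b) / Gamma(a + b)
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def bayes_factor(k1, n1, k2, n2):
    """Bayes factor for 'the two methods have different success rates' (H1)
    versus 'both groups share one success rate' (H0).

    The binomial coefficients are identical under both hypotheses,
    so they cancel out of the ratio of marginal likelihoods.
    """
    # H1: integrate each group's likelihood over its own rate.
    log_m1 = log_beta(1 + k1, 1 + n1 - k1) + log_beta(1 + k2, 1 + n2 - k2)
    # H0: integrate the joint likelihood over a single shared rate.
    log_m0 = log_beta(1 + k1 + k2, 1 + (n1 - k1) + (n2 - k2))
    return exp(log_m1 - log_m0)

# Hypothetical data: method A, 7 of 9 improved; method B, 3 of 8.
print(f"BF (different rates vs. shared rate): {bayes_factor(7, 9, 3, 8):.2f}")
```

For this made-up data the factor comes out around 1.9 - only weak evidence that the methods differ, which is exactly the kind of honest answer small samples should give you, as opposed to a binary significant/not-significant verdict.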

And of course it would be really truly amazingly awesome if some of you became energetic gung-ho intelligent people who can see the world full of low-hanging fruit in front of them, who would go on to form multiple startups which would make millions and billions of dollars. That would also be cool.

But not everyone has to start a startup, not everyone has to be there to Get Stuff Done, it is okay to have Fun. The more of you there are, the more likely it is that any given five of you will want to form a new band, or like the same sort of dancing, or fall in love, or decide to try learning meditation and reporting back to the group on how it went. Growth in general is good. Every added person who's above the absolute threshold of competence is one more person who can try out new lifehacks, recruit new people, or just be there putting the whole thing on a larger scale and making the group more Fun. On the other hand, there is a world out there to optimize, and the scaling of the group is limited by the number of people who can be organizers (more on this below). There's a narrow path to walk between "recruit everyone above the absolute threshold who seems like fun" and "recruit people with visibly unusually high potential to do interesting things". I would suggest making an extra effort to recruit people who seem to have high potential, but nothing like a rule. But if someone not only seems to like explicit rationality and want to learn more, but also seems like a smart executive type who gets things done, perhaps their invitation to a meetup should be prioritized?

So that was the main thing I had to say, but now onward to some other points.

A word on the Society for the Promotion of Heroic Equality for Witches. It may be something of a taboo subject, but I suspect that at this stage in our growth as a community, gender balance is important. The why of the problem - whether the relative scarcity of heroines compared to heroes is due to insufficient environmental encouragement or higher male variance, etcetera - doesn't matter in the short term. Rationalists are still a tiny community and that community cannot possibly have exhausted the world supply of potential Annas, Divias, and Lauras, never mind women above the absolute threshold of competence. For so long as the gender ratio hasn't hit 50/50, everyone should be making an extra effort to recruit potential heroines.

You're human and there are extensive studies saying that people always do this, so I know that any given member of your group will cut extra slack for members of all appropriate sexes who happen to be hot. And I am not going to tell anyone, male or female, not to do this, just like I am not going to inveigh against gravity. But if you are all too busy chasing tail to also cut extra slack and try to keep people around, when their key qualification looks like "smart and gets things done" instead of "hot" - and yes this applies especially when the gender balance is at stake - then so help me I will mention in a Methods of Rationality update that you have made Hermione sad.

A sensitive issue is what happens when someone can't reach the absolute threshold of competence. I think the main relevant Less Wrong post on this subject is "Well-Kept Gardens Die By Pacifism." There are people who cannot be saved - or at least people who cannot be saved by any means currently known to you. And there is a whole world out there to be optimized; sometimes even if a person can be saved, it takes a ridiculous amount of effort that you could better use to save four other people instead. We've had similar problems before - I would hear about someone who wasn't Getting Stuff Done, but who seemed to be making amazing strides on self-improvement, and then a month later I would hear the same thing again, and isn't it remarkable how we keep hearing about so much progress but never about amazing things the person gets done -

(I will parenthetically emphasize that every single useful mental technique I have ever developed over the course of my entire life has been developed in the course of trying to accomplish some particular real task and none of it is the result of me sitting around and thinking, "Hm, however shall I Improve Myself today?" I should advise a mindset in which making tremendous progress on fixing yourself doesn't merit much congratulation and only particular deeds actually accomplished are praised; and also that you always have some thing you're trying to do in the course of any particular project of self-improvement - a target real-world accomplishment to which your self-improvements are a means, not definable in terms of any personality quality unless it is weight loss or words output on a writing project or something else visible and measurable.)

- and the other thing is that trying to save people who cannot be saved can drag down a whole community, because it becomes less Fun, and that means new people don't want to join.

I would suggest having a known and fixed period of time, like four months, that you are allowed to spend on trying to fix anyone who seems fixable, and if after that their outputs do not exceed their inputs and they are dragging down the Fun level relative to the average group member, fire them. You could maybe have a Special Committee with three people who would decide this - one of the things I pushed for at the Singularity Institute was to have the Board deciding whether to retain people, with nobody else authorized to make promises. There should be no one person who can be appealed to, who can be moved by pity and impulsively say "Yes, you can stay." Short of having Voldemort do it, the best you can do to reduce pity and mercy is to have the decision made by committee.

And if anyone is making the group less Fun or scaring off new members, and yes this includes being a creep who offends potential heroine recruits, give them an instant ultimatum or just fire them on the spot.

You have to be able to do this. This is not the ancestral environment where there's only eighty people in your tribe and exiling any one of them is a huge decision that can never be undone. It's a large world out there and there are literally hundreds of millions of people whom you do not want in your community, at least relative to your current ability to improve them. I'm sorry but it has to be done.

Finally, if you grow a lot it may no longer be possible for everyone to meet all the time as a group. I'm not quite sure what to advise about this - splitting up into meetings on particular interests, maybe, but it seems more like the sort of thing where you ought to discuss the problem as thoroughly as possible before proposing any policy solutions. My main advice is that if any separatish group forms, I am skeptical about its ability to stay on track if there isn't at least one high-level epistemic rationalist executive type to organize it, someone who not only knows Bayes's Theorem but who can also Get Things Done. Retired successful startup entrepreneurs would be great for this if you could get them, but smart driven young people might be more mentally flexible and a lot more recruitable, if far less experienced. In any case, I suspect that your ability to grow is going to be ultimately limited by the percentage of members who have the ability to be organizers, and the time to spend organizing, and who've also leveled up into good enough rationalists to keep things on track. Implication: make an extra effort to recruit people who can become organizers.

And whenever someone does start doing something interesting with their life, or successfully recruits someone who seems unusually promising, or spends time organizing things, don't forget to give them a well-deserved cookie.

Finally, remember that the trouble with the exact phrasing of "become awesome" - though it does nicely for a gloss - is that Awesome isn't a static quality of a person. Awesome is as awesome does.

Levels of Action

106 alyssavance 14 April 2011 12:18AM

One of the most useful concepts I have learned recently is the distinction between actions which directly improve the world, and actions which indirectly improve the world.

Suppose that you go onto Mechanical Turk, open an account, and spend a hundred hours transcribing audio. At current market rates, you'd get paid around $100 for your labor. By taking this action, you have made yourself $100 wealthier. This is an example of what I'd call a Level 1 or object-level action: something that directly moves the world from a less desirable state into a more desirable state.

On the other hand, suppose you take a typing class, which teaches you to type twice as fast. On the object level, this doesn't move the world into a better state - nothing about the world has changed, other than you. However, the typing class can still be very useful, because every Level 1 project you tackle later which involves typing will go better - you'll be able to do it more efficiently, and you'll get a higher return on your time. This is what I'd call a Level 2 or meta-level action, because it doesn't make the world better directly - it makes the world better indirectly, by improving the effectiveness of Level 1 actions. There are also Level 3 (meta-meta-level) actions, Level 4 (meta-meta-meta-level) actions, and so on.
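The trade-off between the two levels can be put into a toy calculation. All the numbers below are made up for illustration: assume the class costs some hours you could otherwise have spent transcribing, and that output per hour is valued at a constant rate before and after:

```python
# Toy model of the Level 1 vs. Level 2 trade-off, with made-up numbers.
# A Level 2 action (a typing class) costs time up front but multiplies
# the output rate of every later Level 1 action (transcription work).

def hours_to_break_even(class_hours, speedup):
    """Hours of post-class work at which the class pays for itself.

    Each hour worked after the class produces (speedup - 1) extra units
    of output relative to working untrained, while taking the class cost
    class_hours of forgone output. Break-even: t * (speedup - 1) == class_hours.
    """
    assert speedup > 1, "a Level 2 action only pays if it improves Level 1"
    return class_hours / (speedup - 1)

# A 20-hour class that doubles your typing speed pays for itself
# after 20 more hours of typing work; everything beyond that is profit.
print(hours_to_break_even(20, 2.0))  # 20.0
```

The point the toy model makes is the general one from the post: a meta-level action is worth nothing by itself, and its value grows linearly with how much object-level work you do afterward.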

continue reading »

Levels of Action

0 alyssavance 14 April 2011 12:09AM

One of the most useful concepts I have learned recently is the distinction between actions which directly improve the world, and actions which indirectly improve the world.

Suppose that you go onto Mechanical Turk, open an account, and spend a hundred hours transcribing audio. At current market rates, you'd get paid around $100 for your labor. By taking this action, you have made yourself $100 wealthier. This is an example of what I'd call a Level 1 or object-level action: something that directly moves the world from a less desirable state into a more desirable state.

On the other hand, suppose you take a typing class, which teaches you to type twice as fast. On the object level, this doesn't move the world into a better state - nothing about the world has changed, other than you. However, the typing class can still be very useful, because every Level 1 project you tackle later which involves typing will go better - you'll be able to do it more efficiently, and you'll get a higher return on your time. This is what I'd call a Level 2 or meta-level action, because it doesn't make the world better directly - it makes the world better indirectly, by improving the effectiveness of Level 1 actions. There are also Level 3 (meta-meta-level) actions, Level 4 (meta-meta-meta-level) actions, and so on.

continue reading »

Vassar talk in New Haven, Sunday 2/27

2 alyssavance 27 February 2011 02:49AM

Hey all. I've invited Michael Vassar, president of the Singularity Institute, to come to Yale to give a talk on AI and the Methods of Rationality. We'll be holding the talk on Sunday the 27th at 4 PM, at WLH 119 (100 Wall St., New Haven CT), with an open discussion afterwards. Everyone should come - there will be free pizza!

(Reposted to main section on request of JGWeissman). 

Vassar talk in New Haven, Sunday 2/27

3 alyssavance 26 February 2011 08:57PM

Hey all. I've invited Michael Vassar, president of the Singularity Institute, to come to Yale to give a talk on AI and the Methods of Rationality. We'll be holding the talk on Sunday the 27th at 4 PM, at WLH 119 (100 Wall St., New Haven CT), with an open discussion afterwards. Everyone should come - there will be free pizza!

Science: Do It Yourself

53 alyssavance 13 February 2011 04:47AM

In the nerd community, we have lots of warm, fuzzy associations around 'science'. And, of course, science is indeed awesome. But, seeing how awesome science is, shouldn't we try to have more of it in our lives? When was the last time we did an experiment to test a theory?

Here, I will try to introduce a technique which I have found to be very useful. It is based on the classical scientific method, but I call it "DIY Science", to distinguish it from university science. The point of DIY Science is that science is not that hard to do, and can be used to answer practical questions as well as abstract ones. Particle physics looks hard to do, since you need expensive, massive accelerators and magnets and stuff. Fortunately, some of the fields in which it is easiest to do science are also some of the most practical and interesting. Anyone smart and rational can start doing science right now, from their home computer.

continue reading »

Rally to Restore Rationality

3 alyssavance 18 October 2010 06:41PM

Hey everyone. If anyone else is heading to Jon Stewart's Rally to Restore Sanity on the National Mall on Oct. 30th, please comment or contact me at pphysics141@gmail.com so we can arrange an LW meetup.

Call for Volunteers

3 alyssavance 02 October 2010 05:43PM

I've recently been appointed Program Coordinator of Humanity+ (http://www.humanityplus.org), which is another big organization in the SIAI/FHI/Kurzweilian/futurist space. If you're interested in volunteering to help us out, you can contact me at pphysics141@gmail.com. If you're interested in volunteering to help out the Singularity Institute, you can also contact SIAI Volunteer Coordinator Louie Helm at seventeenorbust@gmail.com. Thanks for your help!
