Link: Toward Non-Stupid, Non-Blank-Slatey Polyandry
http://theviewfromhell.blogspot.com/2012/09/toward-non-stupid-non-blank-slatey.html
The author gives a shout-out to Less Wrong as a community with a perpetually skewed gender ratio, which creates precisely the conditions under which polyandry appears to thrive.
Discuss. :)
Who Wants To Start An Important Startup?
SUMMARY: Let's collect people who want to work on for-profit companies that have significant positive impacts on many people's lives.
Google provides a huge service to the world - efficient search of a vast amount of data. I would really like to see more for-profit businesses like Google, especially in underserved areas like those explored by the non-profits GiveWell, the Singularity Institute, and CFAR. GiveWell is a non-profit that is both working toward making humanity better and thinking about leverage. Instead of hacking away at one branch of the problem of effective charity by working on a single avenue for helping people, they've taken it meta. They're providing a huge service by helping people choose the non-profits that give the most bang for the buck, and they're giving those non-profits feedback on how they can improve. I would love to see more problems taken meta like that, with people investing in high-leverage things.
Beyond these non-profits, I think there is a huge amount of low-hanging fruit in creating businesses that do a lot of good for humanity and make money. For-profit businesses that pay their employees and investors well have the advantage that they can entice very successful and comfortable people away from other jobs that are less beneficial to humanity. Unlike non-profits, where people are often scraping by and doing good out of the kindness of their hearts, people running for-profits can live easy lives with luxurious self-care while improving the world at the same time.
It's all well and good to appeal to altruistic motives, but a lot more people can be mobilized if they don't have to sacrifice their own comfort. I have learned a great deal about this from Jesse and Sharla at Rejuvenate. They train coaches and holistic practitioners in sales and marketing, enabling thousands of people to start businesses doing the sorts of things that advance their mission. They do this while also being multi-millionaires themselves and maintaining a very comfortable lifestyle, taking time for self-care and relaxation to recharge from long workdays.
Less Wrong is read by thousands of people, many of whom are brilliant and talented. In addition, Less Wrong readers include people who are interested in the future of the world and think about the big picture. They think about things like AI and the vast positive and negative consequences it could have. In general, they consider possibilities that are outside of their immediate sensory experience.
I've run into a lot of people in this community with really cool, unique, and interesting ideas for high-impact ways to improve the world. I've also run into a lot of talent in this community, and I have concluded that we have the resources to implement many of these ideas.
Thus, I am opening up this post as a discussion for these possibilities. I believe that we can share and refine them on this blog, and that there are talented people who will execute them if we come up with something good. For instance, I have run into countless programmers who would love to be working on something more inspiring than what they're doing now. I've also personally talked to several smart organizational-leader types, such as Jolly and Evelyn, who are interested in helping with and/or leading inspiring projects. And that's only the people I've met personally; I know there are a lot more folks like that, and people with talents and resources that haven't even occurred to me, who are going to be reading this.
Topics to consider when examining an idea:
- Tradeoffs between optimizing for good effects on the world v. making a profit.
- Ways to improve both profitability and good effects on the world.
- Timespan - projects for 3 months, 1 year, 5 years, 10+ years
- Using resources efficiently (e.g. creating betting markets where a lot of people give opinions that they have enough confidence in to back with money, instead of having one individual trying to figure out probabilities)
- Opportunities for uber-programmers who can do anything quickly (they are reading and you just might interest and inspire them)
- Opportunities for newbies trying to get a foot in the door who will work for cheap
- What people/resources do we have at our disposal now, and what can we do with that?
- What people/resources are still needed?
- If you think of something else, make a comment about it in the thread for that, and it might get added to this list.
An example idea from Reichart Von Wolfsheild:
A project to document the best advice we can muster into a single tome. It would inherently be something dynamic, growing to cover the topics for which humans normally seek refuge and comfort in religion. A "bible" of sorts for the critical mind.
Before things like wikis, this was a difficult problem to take on. But that has changed, and the best information we have available can in fact be filtered and simplified. The trick now is to organize it in a way that helps humans, which is not how most information is organized.
Collaboration
- Please keep the mission in mind (let's have more for-profit companies working on goals that benefit people too!) when giving feedback. When you write a comment, consider whether it is contributing to that goal, or if it's counterproductive to motivation or idea-generation, and edit accordingly.
- Give feedback, the more specific the better. Negative feedback is valuable because it tells us where to concentrate further work. But it can also be a motivation-killer; it feels like punishment, and not just for the specific item criticized. So be charitable about the motives and intelligence of others, and stay mindful of how much and how aggressively you dole out critiques. (Do give critiques, they're essential - just be gentle!) Also, distribute positive feedback for the opposite effect. More detail on giving the best possible feedback in this comment.
- Please point other people with resources such as business experience, intelligence, implementation skills, and funding capacity at this post. The more people with these resources who look at this and collaborate in the comments, the more likely it is for these ideas to get implemented. In addition to posting this to Less Wrong, I will be sending the link to a lot of friends with shrewd business skills, resources and talent, who might be interested in helping make projects happen, or possibly in finding people to work on their own projects since many of them are already working on projects to make the world better.
- Please provide feedback. If anything good happens in your life as a result of this post or discussion, please comment about it and/or give me feedback. It inspires people, and I have bets going that I'd like to win. Consider making bets of your own! It is also important to let me know if you are going to use the ideas, so that we don't end up with needless duplication and competition.
Finally: If this works right, there will be lots of information flying around. Check out the organization thread and the wiki.
Glenn Beck discusses the Singularity, cites SI researchers
From the final chapter of his new book Cowards, titled "Adapt or Die: The Coming Intelligence Explosion."
The year is 1678 and you’ve just arrived in England via a time machine. You take out your new iPhone in front of a group of scientists who have gathered to marvel at your arrival.
“Siri,” you say, addressing the phone’s voice-activated artificial intelligence system, “play me some Beethoven.”
Dunh-Dunh-Dunh-Duuunnnhhh! The famous opening notes of Beethoven’s Fifth Symphony, stored in your music library, play loudly.
“Siri, call my mother.”
Your mother’s face appears on the screen, a Hawaiian beach behind her. “Hi, Mom!” you say. “How many fingers am I holding up?”
“Three,” she correctly answers. “Why haven’t you called more—”
“Thanks, Mom! Gotta run!” you interrupt, hanging up.
“Now,” you say. “Watch this.”
Your new friends look at the iPhone expectantly.
“Siri, I need to hide a body.”
Without hesitation, Siri asks: “What kind of place are you looking for? Mines, reservoirs, metal foundries, dumps, or swamps?” (I’m not kidding. If you have an iPhone 4S, try it.)
You respond “Swamps,” and Siri pulls up a satellite map showing you nearby swamps.
The scientists are shocked into silence. What is this thing that plays music, instantly teleports video of someone across the globe, helps you get away with murder, and is small enough to fit into a pocket?
At best, your seventeenth-century friends would worship you as a messenger of God. At worst, you’d be burned at the stake for witchcraft. After all, as science fiction author Arthur C. Clarke once said, “Any sufficiently advanced technology is indistinguishable from magic.”
Now, imagine telling this group that capitalism and representative democracy will take the world by storm, lifting hundreds of millions of people out of poverty. Imagine telling them their descendants will eradicate smallpox and regularly live seventy-five or more years. Imagine telling them that men will walk on the moon, that planes, flying hundreds of miles an hour, will transport people around the world, or that cities will be filled with buildings reaching thousands of feet into the air.
They’d probably escort you to the madhouse.
Unless, that is, one of the people in that group had been a man named Ray Kurzweil.
Kurzweil is an inventor and futurist who has done a better job than most at predicting the future. Dozens of the predictions from his 1990 book The Age of Intelligent Machines came true during the 1990s and 2000s. His follow-up book, The Age of Spiritual Machines, published in 1999, fared even better. Of the 147 predictions that Kurzweil made for 2009, 78 percent turned out to be entirely correct, and another 8 percent were roughly correct. For example, even though every portable computer had a keyboard in 1999, Kurzweil predicted that most portable computers would lack a keyboard by 2009. It turns out he was right: by 2009, most portable computers were MP3 players, smartphones, tablets, portable game machines, and other devices that lacked keyboards.
Kurzweil is most famous for his “law of accelerating returns,” the idea that technological progress is generally “exponential” (like a hockey stick, curving up sharply) rather than “linear” (like a straight line, rising slowly). In nongeek-speak that means that our knowledge is like the compound interest you get on your bank account: it increases exponentially as time goes on because it keeps building on itself. We won’t experience one hundred years of progress in the twenty-first century, but rather twenty thousand years of progress (measured at today’s rate).
Many experts have criticized Kurzweil’s forecasting methods, but a careful and extensive review of technological trends by researchers at the Santa Fe Institute came to the same basic conclusion: technological progress generally tends to be exponential (or even faster than exponential), not linear.
So, what does this mean? In his 2005 book The Singularity Is Near, Kurzweil shares his predictions for the next few decades:
- In our current decade, Kurzweil expects real-time translation tools and automatic house-cleaning robots to become common.
- In the 2020s he expects to see the invention of tiny robots that can be injected into our bodies to intelligently find and repair damage and cure infections.
- By the 2030s he expects “mind uploading” to be possible, meaning that your memories and personality and consciousness could be copied to a machine. You could then make backup copies of yourself, and achieve a kind of technological immortality.
[sidebar]
Age of the Machines?
“We became the dominant species on this planet by being the most intelligent species around. This century we are going to cede that crown to machines. After we do that, it will be them steering history rather than us.”
—Jaan Tallinn, co-creator of Skype and Kazaa
[/sidebar]
If any of that sounds absurd, remember again how absurd the eradication of smallpox or the iPhone 4S would have seemed to those seventeenth-century scientists. That’s because the human brain is conditioned to believe that the past is a great predictor of the future. While that might work fine in some areas, technology is not one of them. Just because it took decades to put two hundred transistors onto a computer chip doesn’t mean that it will take decades to get to four hundred. In fact, Moore’s Law, which states (roughly) that computing power doubles every two years, shows how technological progress must be thought of in terms of “hockey stick” progress, not “straight line” progress. Moore’s Law has held for more than half a century already (we can currently fit 2.6 billion transistors onto a single chip) and there’s little reason to expect that it won’t continue to.
But the aspect of his book that has the most far-ranging ramifications for us is Kurzweil’s prediction that we will achieve a “technological singularity” in 2045. He defines this term rather vaguely as “a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed.”
Part of what Kurzweil is talking about is based on an older, more precise notion of “technological singularity” called an intelligence explosion. An intelligence explosion is what happens when we create artificial intelligence (AI) that is better than we are at the task of designing artificial intelligences. If the AI we create can improve its own intelligence without waiting for humans to make the next innovation, this will make it even more capable of improving its intelligence, which will . . . well, you get the point. The AI can, with enough improvements, make itself smarter than all of us mere humans put together.
The really exciting part (or the scary part, if your vision of the future is more like the movie The Terminator) is that, once the intelligence explosion happens, we’ll get an AI that is as superior to us at science, politics, invention, and social skills as your computer’s calculator is to you at arithmetic. The problems that have occupied mankind for decades— curing diseases, finding better energy sources, etc.— could, in many cases, be solved in a matter of weeks or months.
Again, this might sound far-fetched, but Ray Kurzweil isn’t the only one who thinks an intelligence explosion could occur sometime this century. Justin Rattner, the chief technology officer at Intel, predicts some kind of Singularity by 2048. Michael Nielsen, co-author of the leading textbook on quantum computation, thinks there’s a decent chance of an intelligence explosion by 2100. Richard Sutton, one of the biggest names in AI, predicts an intelligence explosion near the middle of the century. Leading philosopher David Chalmers is 50 percent confident an intelligence explosion will occur by 2100. Participants at a 2009 conference on AI tended to be 50 percent confident that an intelligence explosion would occur by 2045.
If we can properly prepare for the intelligence explosion and ensure that it goes well for humanity, it could be the best thing that has ever happened on this fragile planet. Consider the difference between humans and chimpanzees, which share 95 percent of their genetic code. A relatively small difference in intelligence gave humans the ability to invent farming, writing, science, democracy, capitalism, birth control, vaccines, space travel, and iPhones— all while chimpanzees kept flinging poo at each other.
[sidebar]
Intelligent Design?
The thought that machines could one day have superhuman abilities should make us nervous. Once the machines are smarter and more capable than we are, we won’t be able to negotiate with them any more than chimpanzees can negotiate with us. What if the machines don’t want the same things we do?
The truth, unfortunately, is that every kind of AI we know how to build today definitely would not want the same things we do. To build an AI that does, we would need a more flexible “decision theory” for AI design and new techniques for making sense of human preferences. I know that sounds kind of nerdy, but AIs are made of math and so math is really important for choosing which results you get from building an AI.
These are the kinds of research problems being tackled by the Singularity Institute in America and the Future of Humanity Institute in Great Britain. Unfortunately, our silly species still spends more money each year on lipstick research than we do on figuring out how to make sure that the most important event of this century (maybe of all human history)— the intelligence explosion— actually goes well for us.
[/sidebar]
Likewise, self-improving machines could perform scientific experiments and build new technologies much faster and more intelligently than humans can. Curing cancer, finding clean energy, and extending life expectancies would be child’s play for them. Imagine living out your own personal fantasy in a different virtual world every day. Imagine exploring the galaxy at near light speed, with a few backup copies of your mind safe at home on earth in case you run into an exploding supernova. Imagine a world where resources are harvested so efficiently that everyone’s basic needs are taken care of, and political and economic incentives are so intelligently fine-tuned that “world peace” becomes, for the first time ever, more than a Super Bowl halftime show slogan.
With self-improving AI we may be able to eradicate suffering and death just as we once eradicated smallpox. It is not the limits of nature that prevent us from doing this, but only the limits of our current understanding. It may sound like a paradox, but it’s our brains that prevent us from fully understanding our brains.
Turf Wars
At this point you might be asking yourself: “Why is this topic in this book? What does any of this have to do with the economy or national security or politics?”
In fact, it has everything to do with all of those issues, plus a whole lot more. The intelligence explosion will bring about change on a scale and scope not seen in the history of the world. If we don’t prepare for it, things could get very bad, very fast. But if we do prepare for it, the intelligence explosion could be the best thing that has happened since . . . literally ever.
But before we get to the kind of life-altering progress that would come after the Singularity, we will first have to deal with a lot of smaller changes, many of which will throw entire industries and ways of life into turmoil. Take the music business, for example. It was not long ago that stores like Tower Records and Sam Goody were doing billions of dollars a year in compact disc sales; now people buy music from home via the Internet. Publishing is currently facing a similar upheaval. Newspapers and magazines have struggled to keep subscribers, booksellers like Borders have been forced into bankruptcy, and customers are forcing publishers to switch to ebooks faster than the publishers might like.
All of this is to say that some people are already witnessing the early stages of upheaval firsthand. But for everyone else, there is still a feeling that something is different this time; that all of those years of education and experience might be turned upside down in an instant. They might not be able to identify it exactly but they realize that the world they’ve known for forty, fifty, or sixty years is no longer the same.
There’s a good reason for that. We feel it and sense it because it’s true. It’s happening. There’s absolutely no question that the world in 2030 will be a very different place than the one we live in today. But there is a question, a large one, about whether that place will be better or worse.
It’s human nature to resist change. We worry about our families, our careers, and our bank accounts. The executives in industries that are already experiencing cataclysmic shifts would much prefer to go back to the way things were ten years ago, when people still bought music, magazines, and books in stores. The future was predictable. Humans like that; it’s part of our nature.
But predictability is no longer an option. The intelligence explosion, when it comes in earnest, is going to change everything— we can either be prepared for it and take advantage of it, or we can resist it and get run over.
Unfortunately, there are a good number of people who are going to resist it. Not only those in affected industries, but those who hold power at all levels. They see how technology is cutting out the middlemen, how people are becoming empowered, how bloggers can break national news and YouTube videos can create superstars.
And they don’t like it.
A Battle for the Future
Power bases in business and politics that have been forged over decades, if not centuries, are being threatened with extinction, and they know it. So the owners of that power are trying to hold on. They think they can do that by dragging us backward. They think that, by growing the public’s dependency on government, by taking away the entrepreneurial spirit and rewards and by limiting personal freedoms, they can slow down progress.
But they’re wrong. The intelligence explosion is coming so long as science itself continues. Trying to put the genie back in the bottle by dragging us toward serfdom won’t stop it and will, in fact, only leave the world with an economy and society that are completely unprepared for the amazing things that it could bring.
Robin Hanson, author of “The Economics of the Singularity” and an associate professor of economics at George Mason University, wrote that after the Singularity, “The world economy, which now doubles in 15 years or so, would soon double in somewhere from a week to a month.”
That is unfathomable. But even if the rate were much slower, say a doubling of the world economy in two years, the shock-waves from that kind of growth would still change everything we’ve come to know and rely on. A machine could offer the ideal farming methods to double or triple crop production, but it can’t force a farmer or an industry to implement them. A machine could find the cure for cancer, but it would be meaningless if the pharmaceutical industry or Food and Drug Administration refused to allow it. The machines won’t be the problem; humans will be.
And that’s why I wanted to write about this topic. We are at the forefront of something great, something that will make the Industrial Revolution look in comparison like a child discovering his hands. But we have to be prepared. We must be open to the changes that will come, because they will come. Only when we accept that will we be in a position to thrive. We can’t allow politicians to blame progress for our problems. We can’t allow entrenched bureaucrats and power-hungry executives to influence a future that they may have no place in.
Many people are afraid of these changes— of course they are: it’s part of being human to fear the unknown— but we can’t be so entrenched in the way the world works now that we are unable to handle change out of fear for what those changes might bring.
Change is going to be as much a part of our future as it has been of our past. Yes, it will happen faster and the changes themselves will be far more dramatic, but if we prepare for it, the change will mostly be positive. But that preparation is the key: we need to become more well-rounded as individuals so that we’re able to constantly adapt to new ways of doing things. In the future, the way you do your job may change four to five or fifty times over the course of your life. Those who cannot, or will not, adapt will be left behind.
At the same time, the Singularity will give many more people the opportunity to be successful. Because things will change so rapidly there is a much greater likelihood that people will find something they excel at. But it could also mean that people’s successes are much shorter-lived. The days of someone becoming a legend in any one business (think Clive Davis in music, Steven Spielberg in movies, or the Hearst family in publishing) are likely over. But those who embrace and adapt to the coming changes, and surround themselves with others who have done the same, will flourish.
When major companies, set in their ways, try to convince us that change is bad and that we must stick to the status quo, no matter how much human inquisitiveness and ingenuity try to propel us forward, we must look past them. We must know in our hearts that these changes will come, and that if we welcome them into our world, we’ll become more successful, more free, and more full of light than we could have ever possibly imagined.
Ray Kurzweil once wrote, “The Singularity is near.” The only question will be whether we are ready for it.
The citations for the chapter include:
- Luke Muehlhauser and Anna Salamon, "Intelligence Explosion: Evidence and Import"
- Daniel Dewey, "Learning What to Value"
- Eliezer Yudkowsky, "Artificial Intelligence as a Positive and a Negative Factor in Global Risk"
- Luke Muehlhauser and Louie Helm, "The Singularity and Machine Ethics"
- Luke Muehlhauser, "So You Want to Save the World"
- Michael Anissimov, "The Benefits of a Successful Singularity"
How to Run a Successful Less Wrong Meetup
Always wanted to run a Less Wrong meetup, but been unsure of how? The How to Run a Successful Less Wrong Meetup booklet is here to help you!
The 33-page document draws from consultations with more than a dozen Less Wrong meetup group organizers. Stanislaw Boboryk created the document design. Luke provided direction, feedback, and initial research, and I did almost all the writing.
The booklet starts by providing some motivational suggestions on why you'd want to create a meetup in the first place, and then moves on to the subject of organizing your first one. Basics such as choosing a venue, making an announcement, and finding something to talk about once at the meetup, are all covered. This section also discusses pioneering meetups in foreign cities and restarting inactive meetup groups.
For those who have already established a meetup group, the booklet offers suggestions on things such as attracting new members, maintaining a pleasant atmosphere, and dealing with conflicts within the group. The "How to Build Your Team of Heroes" section explains the roles that are useful for a meetup group to fill, ranging from visionaries to organizers.
If you're unsure of what exactly to do at meetups, the guide describes many options, from different types of discussions to nearly 20 different games and exercises. All the talk and philosophizing in the world won't do much good if you don't actually do things, so the booklet also discusses long-term projects that you can undertake. Some people attend meetups to just have fun and to be social, and others to improve themselves and the world. The booklet has been written to be useful for both kinds of people.
In order to inspire you and let you see what others have done, the booklet also has brief case studies and examples from real meetup groups around the world. You can find these sprinkled throughout the guide.
This is just the first version of the guide. We will continue working on it. If you find mistakes, or think that something is unclear, or would like to see some part expanded, or if you've got good advice you think should be included... please let me know! You can contact me at kaj.sotala@intelligence.org.
A large number of people have helped in various ways, and I hope that I've remembered to mention most of them in the acknowledgements. If you've contributed to the document but don't see your name mentioned, please send me a message and I'll have that fixed!
The booklet has been illustrated with pictures from various meetup groups. Meetup organizers sent me the pictures for this use, and I explicitly asked them to make sure that everyone in the photos was fine with it. Regardless, if there's a picture that you find objectionable, please contact me and I'll have it replaced with something else.
Memory, Spaced Repetition and Life
I have made the case that with the advent of the internet went the need to memorize anything. Why worry about memorizing when I'll never be tested for a grade and can access knowledge nearly instantaneously? As well, I reasoned, I have probably already memorized everything I need to. I focused my time instead on learning thinking techniques, such as Bayesian calculations, expected value calculations and various things for improving emotional control.
But after reading this a couple of months back I decided to experiment with Anki, a digital flashcard program which exploits a cognitive phenomenon called the Spacing Effect by implementing a memorization technique called Spaced Repetition. The Spacing Effect is the widely observed tendency for people to recall information better when it is studied a few times over a long period than when it is studied many times over a short period. Balota et al (2007):
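To make the mechanics concrete, here is a minimal sketch of SM-2-style scheduling, the SuperMemo algorithm family that Anki's scheduler is based on. The interval rules and the 1.3 ease floor follow the published SM-2 description; the function and variable names are illustrative, not Anki's actual API.

```python
def sm2_update(interval, ease, quality):
    """Return the next review interval (days) and the updated ease factor.

    interval: current interval in days (0 for a new card)
    ease:     ease factor (SM-2 starts cards at 2.5)
    quality:  self-graded recall quality, 0 (blackout) to 5 (perfect)
    """
    if quality < 3:
        # Failed recall: relearn the card from a one-day interval.
        return 1, ease
    # Adjust the ease factor per the SM-2 formula, floored at 1.3.
    ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    if interval == 0:
        next_interval = 1      # first successful review
    elif interval == 1:
        next_interval = 6      # second successful review
    else:
        next_interval = round(interval * ease)
    return next_interval, ease

# Four perfect recalls in a row: the gaps stretch out roughly geometrically,
# which is exactly the Spacing Effect being exploited.
interval, ease = 0, 2.5
schedule = []
for quality in [5, 5, 5, 5]:
    interval, ease = sm2_update(interval, ease, quality)
    schedule.append(interval)
print(schedule)  # [1, 6, 17, 49]
```

The key design point is that each successful recall multiplies the next interval by the ease factor, so total review time grows only logarithmically with how long you retain the material.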
Mini-camp on Rationality, Awesomeness, and Existential Risk (May 28 through June 4, 2011)
Meet fellow LW-ers, hone your rationality, and get on a path toward reducing existential risk and becoming more awesome.
Who: You and a class full of other aspiring rationalists and world-changers, from around the world.
What: A week-long mini-camp, filled with hands-on activities for applying rationality to your life, your goals, and existential risk reduction. (See details in the FAQ.)
When and where: Saturday May 28 through Saturday June 4, 2011 in Berkeley, California.
Why: Because you’re a social primate, and the best way to jump into a new way of thinking, make friends, and accomplish your goals is often to spend time with other primates who are doing just that.
Build Small Skills in the Right Order
I took some Scientology classes in Hollywood so I could get into their Toastmasters club, which is the best Toastmasters club in L.A. County.[1] My first Scientology class, 'Success Through Communication', taught skills that were mostly non-specific to Scientology. At first, the class exercises seemed to teach skills too basic to be worth practicing. Later, I came to respect the class as surprisingly useful. (But please, don't take Scientology classes. They are highly Dark Arts, and extremely manipulative.)
For the first exercise, I had to sit upright, still, and silent with my eyes closed for about an hour. I was to remain alert and aware but utterly calm. When my head drooped or my hand twitched, I was forced to start over. It took me five hours of silent sitting to complete the exercise successfully. At first I thought the exercise was stupid, but later I found I was now more in control of my awareness and attention, and less disturbed by things in the environment.
For the second exercise, I had to stare directly into someone's eyes without looking away - even for a split second - for 20 minutes in a row. If you've never tried this, you should. It's very difficult. Unfortunately, they first paired me with a 12-year-old girl. I was sure I would freak her out if I stared into her eyes for 20 minutes (it's an intense experience), so I made faces when the instructors weren't looking and waited for them to pair me with an adult. After half a dozen failures, I finally managed to maintain eye contact for 20 minutes in a row, without a single glance away or a long blink.
Again, this seemed absurd at the time, but later I discovered that I no longer had any trouble maintaining eye contact with people. This skill is a small one, but it is highly valuable in almost every social endeavor.
Later exercises seemed childish. An instructor would ask me simple questions from a book like, "What's that over there?" and I would have to answer correctly: "That's a table." I had to do this for hundreds of questions. But I couldn't just say "That's a table" any old way. I had to say it without a stutter, I had to enunciate, and I had to speak loudly. Answering questions like this 100 times in a row will reveal how often most of us speak softly, fail to enunciate, and use filler words like "um." Every time I did one of those things, I had to start over.
In another exercise, the instructor would do everything she could to make me laugh, and I had to sit still and not crack a hint of a smile for 10 minutes in a row. It took many rounds to master, but repeating a simple exercise like this will eventually bring almost anyone to mastery. By the end of the exercise I had noticeably improved a small part of my self-control mechanism.
This class - a religious class I took as an atheist in order to achieve an unrelated goal - turned out to be one of the most important classes I have ever taken in my life. It taught me an important meta-skill I have used to great effect ever since.
This is the meta-skill of building small skills in the right order. It is now one of the key tools in my toolkit for instrumental rationality.
Rational Reading: Thoughts On Prioritizing Books
A large element of instrumental rationality consists of filtering, prioritizing, and focusing. It's true for tasks, for emails, for blogs, and for the multitude of other inputs that many of us are drowning in these days[1]. Doing everything, reading everything, commenting on everything is simply not an option - it would take infinite time. We could simply limit time and do what happens to catch our attention in that limited time, but that's clearly not optimal. Spending some time prioritizing rather than executing will always improve results if items can be prioritized and vary widely in benefit. So maximizing the results we get from our finite time requires, for a variety of domains:
- Filtering: a quick first-pass to get input down to a manageable size for the higher-cost effort of prioritizing.
- Prioritizing: briefly evaluating the impact each item will have towards your goals.
- Focusing: on the highest-priority items.
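The three steps above can be sketched as a simple pipeline. This is a minimal illustration only: the book titles, goal tags, and impact scores below are hypothetical placeholders, not data from the post.

```python
# Sketch of the filter -> prioritize -> focus pipeline for a reading list.
# All titles, goals, and impact scores are illustrative assumptions.

def filter_items(items, is_relevant):
    """First pass: cheaply discard items that serve no current goal."""
    return [item for item in items if is_relevant(item)]

def prioritize(items, impact):
    """Briefly estimate each item's impact and sort, highest first."""
    return sorted(items, key=impact, reverse=True)

def focus(items, n):
    """Commit attention to only the top-n items."""
    return items[:n]

books = [
    {"title": "Positive Psychology Parenting", "goal": "parenting", "impact": 8},
    {"title": "Fundraising for Nonprofits",    "goal": "work",      "impact": 7},
    {"title": "History of Typography",         "goal": "none",      "impact": 3},
    {"title": "Getting Things Done",           "goal": "core",      "impact": 6},
]

current_goals = {"parenting", "work", "core"}
relevant = filter_items(books, lambda b: b["goal"] in current_goals)
ranked = prioritize(relevant, lambda b: b["impact"])
reading_queue = focus(ranked, 2)

for book in reading_queue:
    print(book["title"])
```

The point of the structure is that filtering is cheap and runs over everything, while the costlier impact estimation only runs over what survives the filter.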
I have some thoughts, and am looking for more advice, on how to do this for non-fiction reading. I've stopped buying books that catch my attention, because I have a backlog of about 3-4 shelves of books that have sat unread for years. Instead, I put them on my Amazon wish lists, which as a result have swelled to a total of 254 books - obviously unmanageable, and growing much faster than I read.
One obvious question to ask when optimizing is: what is the goal of reading? Let me suggest a few possibilities:
- Improve performance at a current job/role. For example, as Executive Director of a nonprofit, I could read books on fundraising or management.
- Relatedly, work towards a current goal. Here is where it helps to have identified your goals, perhaps in an Annual Review. As a parent, for example, there are an infinitude of parenting books that I could read, but I chose for this year to work specifically on positive psychology parenting, as it seemed like a potentially high-impact skill to learn. This massively filters the set of possible parenting books. Essentially, goal-setting ("learn positive psychology parenting habits") was a conscious prioritization step based on considering what new parenting skills would best advance my goals (in this case, to benefit my kids while making parenting more pleasant along the way).
- Improve core skills or attributes relevant to many areas of life - productivity, happiness, social skills, diet, etc.
- Expand your worldview (improve your map). Myopically focusing only on immediate needs would eliminate some of the greatest benefit I feel I've gotten from non-fiction in my life, which is a richer and more accurate understanding of the world.
- Be able to converse intelligently on currently popular books. (Much as one might watch the news in order to facilitate social bonding by being able to discuss current events). Note that I don't actually recommend this as a goal - I think you can find other things to bond over, plus you will sometimes read currently popular books because they serve other goals - but it may be important for some people.
Rationality Boot Camp
It’s been over a year since the Singularity Institute launched our ongoing Visiting Fellows Program and we’ve learned a lot in the process of running it. This summer we’re going to try something different. We’re going to run Rationality Boot Camp.
We are going to try to take ten weeks and fill them with activities meant to teach mental skills - if there's reading to be done, we'll tell you to get it done in advance. We aren't just aiming to teach skills like betting at the right odds or taking others' information into account; we're also going to practice techniques like mindfulness meditation and Rejection Therapy (making requests that you know will be rejected) in order to teach focus, non-attachment, social courage, and all the other things needed to produce formidable rationalists. Participants will learn how to draw (so that they can learn to pay attention to previously unnoticed details, and see that they can do things that previously seemed like mysterious superpowers). We will play games, and switch games every few days, to get used to novelty and practice learning.
We're going to run A/B tests on you, and track the results to find out which training activities work best, and begin the tradition of evidence-based rationality training.
In short, we're going to start constructing the kind of program that universities would run if they actually wanted to teach you how to think.
Towards a Bay Area Less Wrong Community
Follow up to: Less Wrong NYC
Tl;dr: Two new regular weekly meetups in the Bay Area: In the Berkeley Starbucks on Wednesdays at 7pm (host Lucas Sloan), and in Tortuga (in Mountain View) on Thursdays at 7pm (hosts Shannon Friedman and Divia Melwani). New Google Group for the whole Bay Area, all welcome to join.
Hi everyone in the (San Francisco) Bay Area. I'm Lucas Sloan and I've been organizing LW meetups in Berkeley for about 8 months now. I think we've accomplished great things in that time: last week's meetup drew about 40 people, a number beyond my wildest dreams when I held my first meetup and 7 people showed up. As good as things are, I've been spending a lot of time thinking about how we can do even better in the future. The main catalyst in my thinking has been the accounts I've heard over the last two months from people who've visited the New York Less Wrong group, and the amazingly positive reactions people have had to its accomplishments. Now that Cosmos has written a post describing what he sees as their successes, this is an excellent time to start a discussion about the future of the Bay Area Less Wrong group, and how to make it awesome.
The main thing the New York group has that I want for the Bay Area group is a sense of being a close-knit community of like-minded friends. At a Berkeley meetup we get into all sorts of very interesting conversations with our fellow rationalists, but I don't feel a personal connection with most of the people who come, even those I've seen at many meetups - I am friendly with everyone who comes, but I am not friends with everyone who comes. I see two things that contribute to this problem (though I'm sure there are more): the size of meetups, and their frequency. The large size makes it impossible to establish rapport with everyone, because there is no way to have a good conversation with 40 other people in 4 hours. Even more insidiously, the large size makes it hard to establish rapport with even a subset of attendees - the group of 40 splits into 10 groups of 4, and everyone keeps churning between conversations as their interest waxes and wanes. The first meetup I held, with only 7 people, was socially fulfilling in a way that recent ones simply haven't been: everyone was participating in the same conversation, and everyone was getting to know everyone else. As for frequency, it's hard to become friends with people you only interact with once a month - you can easily forget a person in a month, and the format encourages talking about high-minded "rational" topics, not the personal small talk that forms the basis of friendship.