Meetup : Cambridge Less Wrong: Tutoring Wheels
Discussion article for the meetup : Cambridge Less Wrong: Tutoring Wheels
This Sunday's Cambridge Less Wrong meetup will feature a tutoring wheel. We'll start with a brief discussion on the art of tutoring well, then divide into groups by topic. (Topics will be selected during the meetup based on what people are interested in.) In the tutoring wheel itself, we alternate between one-on-one tutoring conversations and larger group discussions about how those conversations went, with the goal of getting better at learning and at helping each other learn.
The meetup is at Citadel, a house at 98 Elm St, Apt 1, Somerville, MA. The meetup starts at 3:30, and the structured portion starts at 4:00.
Meetup : MIT/Boston Secular Solstice
Discussion article for the meetup : MIT/Boston Secular Solstice
It has become tradition, in the community of those who seek to become more rational, to gather for one night of each year, and sing. We do this close to the winter solstice, which is the longest, darkest night of the year; and, gathered as a community, we stare into and confront the darkness. This consists of participatory singing and a few short speeches, following an emotional arc from light to darkness and back to light again. It will last about two hours, starting at 8pm at the MIT Chapel (50 Massachusetts Ave, Cambridge, MA 02139), and will be followed by a reception/afterparty nearby in room 1-132. We may also be organizing an optional pre-ritual potluck nearby, details TBD.
The Facebook event page is at https://www.facebook.com/events/505931562916689/ .
Meetup : Cambridge, MA Sunday meetup: The Contrarian Positions Game
Discussion article for the meetup : Cambridge, MA Sunday meetup: The Contrarian Positions Game
This meetup is about pulling ropes sideways. We'll be practicing the mental motion of noticing when a question has been incorrectly polarized into two options, and breaking out of that framing to find more alternatives. After a brief discussion of the concept and the mindset, we'll practice by playing the Contrarian Positions Game.

In the Contrarian Positions Game, the group gets a topic and everyone has two minutes to come up with answers to it. In each round, you score a point for each of your answers that at least one other person also gave, minus one point for each answer that two-thirds of the group gave but which you didn't, to a minimum of zero. (Your overall score is the sum of your scores across the rounds in which you got a positive number of points.)

Cambridge/Boston-area Less Wrong meetups start at 3:30pm on the 1st and 3rd Sunday of the month at the Citadel in Porter Sq, at 98 Elm St, Apt 1, Somerville. Our default schedule is as follows:
—Phase 1: Arrival, greetings, unstructured conversation. This starts at 3:30; before then, Citadel residents will be busy. Looking forward to seeing you then!
—Phase 2: The headline event. This starts promptly at 4pm, and lasts 30-60 minutes.
—Phase 3: Further discussion. We'll explore the ideas raised in phase 2, often in smaller groups.
—Phase 4: Dinner.
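As a rough sketch, the scoring rule above can be written out in code. This is just one reading of the rules (the function name and the tie-breaking at exactly two-thirds are assumptions for illustration):

```python
from collections import Counter

def round_scores(answers_by_player):
    """Score one round of the Contrarian Positions Game.

    answers_by_player: dict mapping player name -> set of answers.
    A player gains a point for each answer at least one other player
    also gave, and loses a point for each answer given by at least
    two-thirds of the group that they themselves missed.
    Round scores are floored at zero.
    """
    n = len(answers_by_player)
    counts = Counter(a for ans in answers_by_player.values() for a in ans)
    # Answers given by at least two-thirds of the group
    consensus = {a for a, c in counts.items() if c * 3 >= 2 * n}
    scores = {}
    for player, answers in answers_by_player.items():
        gained = sum(1 for a in answers if counts[a] >= 2)
        lost = len(consensus - answers)
        scores[player] = max(0, gained - lost)
    return scores
```

For example, with three players where only "x" is shared, a player who missed "x" loses a point for the consensus answer but is floored at zero; the overall score is then the sum of the positive rounds.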
Rationality Cardinality
Rationality Cardinality is a card game which takes memes and concepts from the rationality/Less Wrong sphere, and mixes them with jokes to make a game. After nearly two years of card-creation, playtesting and development, today I'm taking the "beta" label off the web-based version of Rationality Cardinality. Go to the website, and if at least two other people are visiting at the same time, you can play against them.
I've put a lot of thought and a lot of work into the cards, and they're not just about humor; I also went systematically through blog posts and glossaries collecting terms and concepts that I think people should know about and be reminded of, and wrote concise explanations for them. It provides a fun, easy way for everyone to quickly pick up the jargon that's floating around, and it provides spaced repetition for concepts that might not otherwise have sunk in.
Rationality Cardinality will also soon have a print version. The catch is that in order to mass-produce it, I need to be sure there's enough demand. So, here's the deal: once enough people have played the online version, I'll launch a Kickstarter to sell print copies. You can speed this up by inviting people who might not otherwise see it to play.

Rationality Cardinality is somewhat inspired by Cards Against Humanity. Software for the web-based implementation is based on Cards for Humanity, with modifications.
Research Priorities for Artificial Intelligence: An Open Letter
The Future of Life Institute has published their document Research priorities for robust and beneficial artificial intelligence and written an open letter for people to sign indicating their support.
Success in the quest for artificial intelligence has the potential to bring unprecedented benefits to humanity, and it is therefore worthwhile to research how to maximize these benefits while avoiding potential pitfalls. This document gives numerous examples (which should by no means be construed as an exhaustive list) of such worthwhile research aimed at ensuring that AI remains robust and beneficial.
Petrov Day is September 26
On September 26th, 1983, the world was nearly destroyed by nuclear war. That day is now commemorated every year on September 26 as Petrov Day, named for the man who averted the catastrophe. Last year, Citadel, the Boston-area rationalist house, performed a ritual on Petrov Day. We will be doing it again - and have published a revised version, for anyone else who wants to hold a Petrov Day celebration themselves.
The purpose of the ritual is to make catastrophic and existential risk emotionally salient, by putting it into historical context and providing positive and negative examples of how it has been handled. This is not for the faint of heart and not for the uninitiated; it is aimed at those who already know what catastrophic and existential risks are, have some background knowledge of them, and believe (at least on an abstract level) that preventing those risks from coming to pass is important.
The ritual is designed for groups of 5-10 people, and consists of a series of readings and symbolic actions which people take turns performing. It is easy to organize; you'll need a few simple props (candles and a candle-holder) and a printout of the program for each person, but other than that no preparation is necessary.
Organizer guide and program (for one-sided printing) (PDF)
Program for two-sided print and fold (PDF)
There will be a Petrov Day ritual hosted at Citadel (Boston area) and at Highgarden (New York area). If you live somewhere else, consider running one yourself!
Three Parables of Microeconomics
(Epistemic status: Satire.)
First Parable: Equilibrium Pricing
Highway Offramp 72 leads to the isolated town of Townton. Visitors are greeted by two fuel stations, Carbonaceous Fossils (CF) and Hydrogenated Chains (HC), on opposite sides of the main road. There are no other gas stations for many miles. Together, these two stations sell 1000 gallons per day. Since their products are indistinguishable, and they have prominently posted prices, every driver will choose the cheaper one; or if the prices are the same, they will split half and half. Both pay $1.50/gal for their stock and charge $2/gal to drivers, so half the drivers stop at each.
The owner of CF reasons as follows: If I keep my current price of $2, I will make 500*(2-1.5)=$250 of profit. But if I lower my price to $1.99, I will get twice as much business and make 1000*(1.99-1.5)=$490 of profit. The next morning, he updates his price.
Across the street, the owner of HC (who is having a bad day, due to the complete lack of customers), reasons the same way. The next morning HC has updated its price to $1.98; the morning after that CF lowers its price to $1.97; and so on.
Because CF and HC's owners are law-abiding model citizens, they never talk to each other about prices. That would be collusion, which is illegal. Later that month, with CF's price down to $1.52 and HC's price at $1.51, the local community center holds Game Theory night, where both owners attend a local economist's presentation on the Iterated Prisoner's Dilemma.
The next morning, both stations charge $1.52. The morning after that, $1.53. The morning after that, $1.54, and so on. Later that year, CF reasons as follows: If I keep my current price of $20...
(Moral: Gas station attendants should study game theory.)
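The owner's arithmetic can be checked with a small sketch of the parable's assumptions: every driver buys from the cheaper station, a tie splits the 1000-gallon market evenly, and prices are kept in cents so the arithmetic stays exact (the function name is just for illustration):

```python
def daily_profit(my_price, rival_price, cost, market_gallons=1000):
    """Daily profit in cents, under the parable's assumptions.

    my_price, rival_price, cost: cents per gallon.
    The cheaper station takes the whole market; a tie splits it evenly.
    """
    if my_price > rival_price:
        share = 0.0   # no customers
    elif my_price == rival_price:
        share = 0.5   # split the market
    else:
        share = 1.0   # the whole market
    return market_gallons * share * (my_price - cost)
```

Matching the owner's reasoning: `daily_profit(200, 200, 150)` gives 25000 cents ($250), while undercutting with `daily_profit(199, 200, 150)` gives 49000 cents ($490). Since the same incentive holds at every price above cost, the undercutting only stops at $1.50 - until Game Theory night.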
Second Parable: Comparative Advantage
Two farmers, Alex and Bertha, grow potatoes and carrots. In one year, Alex can either grow 4 barrels of potatoes or 10 barrels of carrots, or some linear combination of the two, such as 2 barrels of potatoes and 5 barrels of carrots. Bertha is better at farming, and can produce 15 barrels of potatoes or 20 barrels of carrots, or some combination of the two. Doctors agree that everyone should eat exactly equal numbers of potatoes and carrots - an excess of one over the other would be unacceptable. So in the first year, having just settled a new frontier and not having met their neighbors, Alex plants 2.9 barrels' worth of each, and Bertha plants 8.6 barrels of each.
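The planting figures come from the constraint that a self-sufficient farmer on a linear production frontier must end up with equal amounts of both crops. A quick sketch of that calculation (the function name is an assumption for illustration):

```python
def balanced_output(potato_max, carrot_max):
    """Barrels of each crop a lone farmer gets when growing equal
    amounts of both on a linear frontier.

    Devoting fraction f of the year to potatoes yields f * potato_max
    potatoes and (1 - f) * carrot_max carrots; setting the two equal
    and solving q / potato_max + q / carrot_max = 1 gives q.
    """
    return 1 / (1 / potato_max + 1 / carrot_max)
```

`balanced_output(4, 10)` is about 2.86 barrels (the parable's 2.9), and `balanced_output(15, 20)` is about 8.57 (the 8.6).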
Meetup : LW/Methods of Rationality meetup
Discussion article for the meetup : LW/Methods of Rationality meetup
On Oct 18th at 7pm there will be a Less Wrong / Methods of Rationality meetup/party on the MIT campus in Building 6, room 120. There will be snacks and refreshments, and Yudkowsky will be in attendance.
http://intelligence.org/2013/10/01/upcoming-talks-at-harvard-and-mit/
Cambridge Meetup: Talk by Eliezer Yudkowsky: Recursion in rational agents
Discussion article for the meetup : Talk by Eliezer Yudkowsky: Recursion in rational agents: Foundations for self-modifying AI
On October 17th from 4:00-5:30pm, Scott Aaronson will host a talk by MIRI research fellow Eliezer Yudkowsky. Yudkowsky’s talk will take place in MIT’s Ray and Maria Stata Center, in room 32-123 (aka Kirsch Auditorium, with 318 seats). There will be light refreshments 15 minutes before the talk. Yudkowsky’s title and abstract are:
Recursion in rational agents: Foundations for self-modifying AI
Reflective reasoning is a familiar but formally elusive aspect of human cognition. This issue comes to the forefront when we consider building AIs which model other sophisticated reasoners, or who might design other AIs which are as sophisticated as themselves. Mathematical logic, the best-developed contender for a formal language capable of reflecting on itself, is beset by impossibility results. Similarly, standard decision theories begin to produce counterintuitive or incoherent results when applied to agents with detailed self-knowledge. In this talk I will present some early results from workshops held by the Machine Intelligence Research Institute to confront these challenges.
The first is a formalization and significant refinement of Hofstadter’s “superrationality,” the (informal) idea that ideal rational agents can achieve mutual cooperation on games like the prisoner’s dilemma by exploiting the logical connection between their actions and their opponent’s actions. We show how to implement an agent which reliably outperforms classical game theory given mutual knowledge of source code, and which achieves mutual cooperation in the one-shot prisoner’s dilemma using a general procedure. Using a fast algorithm for finding fixed points, we are able to write implementations of agents that perform the logical interactions necessary for our formalization, and we describe empirical results.
Second, it has been claimed that Gödel’s second incompleteness theorem presents a serious obstruction to any AI understanding why its own reasoning works or even trusting that it does work. We exhibit a simple model for this situation and show that straightforward solutions to this problem are indeed unsatisfactory, resulting in agents who are willing to trust weaker peers but not their own reasoning. We show how to circumvent this difficulty without compromising logical expressiveness.
Time permitting, we also describe a more general agenda for averting self-referential difficulties by replacing logical deduction with a suitable form of probabilistic inference. The goal of this program is to convert logical unprovability or undefinability into very small probabilistic errors which can be safely ignored (and may even be philosophically justified).
Meetup : Cambridge, MA Meetup
Discussion article for the meetup : Cambridge, MA Meetup
We have a new location! This week's meetup will be held at Citadel, the new rationalist house at 98 Elm St, Apt 1, Somerville (near Porter Square). Cambridge/Boston-area Less Wrong meetups are every Sunday at 2pm.
—Phase 1: Arrival, greetings, unstructured conversation.
—Phase 2: Presentations. This starts promptly at 2:30, and lasts 30-60 minutes.
—Phase 3: Further discussion. We'll explore the ideas raised in phase 2, often in smaller groups.
—Phase 4: Dinner. It's about a ten minute walk to the usual restaurant.