
Meetup : London: Presentation and Performance Games

0 sixes_and_sevens 20 April 2015 04:23PM

Discussion article for the meetup : London: Presentation and Performance Games

WHEN: 26 April 2015 02:00:00PM (+0100)

WHERE: Lincoln's Inn Fields, Holborn, London

Hey you!

Yes, you!

Do you like games? Do you like being awesome? Then you'll LOVE Presentation and Performance Games!

Starting at 2pm, the plan is to spend a couple of hours on a bunch of activities loosely related to being the focus of a group of people. We'll then declare victory and descend into anarchy. Anarchy may involve sitting around and talking, or may involve more games if we want.

You're also welcome to come along and not participate in these games, or just hang out and talk about abstruse moral dilemmas, or bitcoin fanfic, or how much measure you anticipate having when you simulate winning the lottery or whatever.

We'll be making the most of this unreasonably good weather by having the meetup at Lincoln's Inn Fields, just around the corner from the regular venue. We'll be in the north-west quadrant of the green. If you have trouble finding us, call 07887 718458. If the weather is bad, we'll fall back to the Shakespeare's Head.


A quick sketch on how the Curry-Howard Isomorphism kinda appears to connect Algorithmic Information Theory with ordinal logics

2 eli_sennesh 19 April 2015 07:35PM

The following is sorta-kinda carried on from a recent comments thread, where I was basically saying I wasn't gonna yack about what I'm thinking until I'd spent the time to fully formalize it.  Well, Luke got interested in it, I spewed the entire sketch and intuition to him, and he asked me to put it up where others can participate.  So the following is it.

Basically, Algorithmic Information Theory, as started by Solomonoff and Kolmogorov and then continued by Chaitin, contains a theorem called Chaitin's Incompleteness Theorem, which says (in short, colloquial terms) "you can't prove a 20kg theorem with 10kg of axioms".  Except it says this in fairly precise mathematical terms, all of which are grounded in the undecidability of the Halting Problem.  To possess "more kilograms" of axioms is mathematically equivalent to being able to computationally decide the halting behavior of "more kilograms" of Turing Machines, or to being able to compress strings to smaller sizes.
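To make the slogan slightly more precise (this is the standard textbook form of the result, not Chaitin's exact wording): for any consistent, recursively axiomatizable theory T strong enough to talk about Kolmogorov complexity K, there is a constant L_T, on the order of the size of T's axioms, such that

    \forall s :\quad T \nvdash \ulcorner K(s) > L_T \urcorner, \qquad \text{even though } K(s) > L_T \text{ holds for all but finitely many strings } s.

Heavier axioms buy a larger constant L_T, which is the precise sense in which they certify the incompressibility, and decide the halting behavior, of more programs.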

Now consider the Curry-Howard Isomorphism, which says that logical systems as computation machines and logical systems as mathematical logics are, in certain precise ways, the same thing.  Then consider ordinal logic as started in Turing's PhD thesis, which begins with ordinary first-order arithmetic and extends it with axioms saying "First-order arithmetic is consistent", "First-order arithmetic extended with the previous axiom is consistent", and so on, all the way up to the first transfinite ordinal ω, and then further into the transfinite ordinals.
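Spelled out in the standard modern presentation (taking first-order arithmetic PA as the base theory; Turing's own formalism differed in detail), the progression of theories is

    T_0 = \mathrm{PA}, \qquad T_{\alpha+1} = T_\alpha + \mathrm{Con}(T_\alpha), \qquad T_\lambda = \bigcup_{\alpha < \lambda} T_\alpha \;\;\text{for limit ordinals } \lambda,

and Turing did in fact push the iteration past ω, through the constructive transfinite ordinals given by ordinal notations.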

In a search problem with partial information, gaining more information closes you in on a smaller and smaller portion of your search space.  Thus, Turing's ordinal logics don't violate Goedel's Second Incompleteness Theorem: they specify more axioms, and therefore a smaller "search space" of models, namely those models of first-order arithmetic which satisfy all the iterated consistency statements up to the given ordinal level.  Goedel's Completeness Theorem says that a sentence of a first-order theory is provable iff it is true in every model of that theory.  The clearest, least mystical presentation of Goedel's First Incompleteness Theorem is then: nonstandard models of first-order arithmetic exist, in which Goedel Sentences are false.  The corresponding statement of Goedel's Second Incompleteness Theorem follows: nonstandard models of first-order arithmetic exist which satisfy the sentence "first-order arithmetic is inconsistent" (a model cannot itself be inconsistent, but it can make the formalized consistency statement false).  To capture only the models in which first-order arithmetic's consistency holds, you need to specify the additional axiom "First-order arithmetic is consistent", and so on up the ordinal hierarchy.
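For reference, the chain of standard facts being leaned on here, in symbols:

    \text{(Completeness)} \quad T \vdash \varphi \iff \varphi \text{ holds in every model of } T
    \text{(First Incompleteness)} \quad \mathrm{PA} \nvdash G \implies \text{some } \mathcal{M} \models \mathrm{PA} + \neg G
    \text{(Second Incompleteness)} \quad \mathrm{PA} \nvdash \mathrm{Con}(\mathrm{PA}) \implies \text{some } \mathcal{M} \models \mathrm{PA} + \neg\mathrm{Con}(\mathrm{PA})

Each added axiom \mathrm{Con}(T_\alpha) discards exactly the models satisfying \neg\mathrm{Con}(T_\alpha): the shrinking search space at work.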

Back to learning and AIT!  Your artificial agent, let us say, starts with a program 10kg large.  Through learning, it acquires, let us say, 10kg of empirical knowledge, giving it 20kg of "mass" in total.  Depending on how precisely we can characterize the bound involved in Chaitin's Incompleteness Theorem (he just said, "there exists a constant L which is a function of the 10kg", more or less), we would then have an agent whose empirical knowledge enables it to reason about, say, a 12kg agent: its 10kg program plus 2kg of its empirical knowledge promoted into axioms.  It can't reason about the 12kg agent plus the remaining 8kg of empirical knowledge, because that would be 20kg, and it's only a 20kg agent even counting its strongest empirical data; but it can formally prove universally-quantified theorems about how the 12kg agent will behave as an agent (ie: its goal functions, the soundness of its reasoning under empirical data, etc.).  So it can "trust" the 12kg agent, hand its 10kg of empirical data over, shut itself down, "come back online" as the new 12kg agent, and learn from the remaining 8kg of data (the data not already absorbed into its axioms), thus becoming a smarter, self-improved agent.  The hope is that the 12kg agent, possessing a stronger mathematical theory, can generalize more quickly and more precisely from its sensory data, thus accumulating empirical knowledge faster than its predecessor and speeding through the process of compressing all available information provided by its environment, approaching the reasoning power of something like a Solomonoff Inducer (ie: one with a Turing Oracle to give accurate Kolmogorov complexity numbers).
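Here is a toy sketch of that bookkeeping in Python (purely illustrative: the masses, the linear form of chaitin_bound, and its 0.6 factor are all invented for this post to reproduce the 20kg-to-12kg example, not a real construction):

    # Toy model of the "kilogram accounting" above.  The function
    # chaitin_bound is a made-up stand-in for Chaitin's constant L;
    # its 0.6 factor is chosen only to reproduce the 20kg -> 12kg example.

    def chaitin_bound(total_mass_kg: float) -> float:
        """How heavy a successor this agent can prove theorems about."""
        return 0.6 * total_mass_kg

    class Agent:
        def __init__(self, program_kg: float, data_kg: float):
            self.program_kg = program_kg  # formal axioms / source code
            self.data_kg = data_kg        # empirical knowledge

        @property
        def total_kg(self) -> float:
            return self.program_kg + self.data_kg

        def can_verify(self, successor_program_kg: float) -> bool:
            # Provable-about mass is bounded by a Chaitin-style constant
            # determined by everything this agent has (axioms plus data).
            return successor_program_kg <= chaitin_bound(self.total_kg)

        def self_improve(self, successor_program_kg: float) -> "Agent":
            if not self.can_verify(successor_program_kg):
                raise ValueError("cannot prove the successor trustworthy")
            # Data promoted into the successor's axioms no longer counts
            # as raw data; the rest is handed over to learn from.
            promoted_kg = successor_program_kg - self.program_kg
            return Agent(successor_program_kg, self.data_kg - promoted_kg)

    old = Agent(program_kg=10, data_kg=10)           # a 20kg agent
    new = old.self_improve(successor_program_kg=12)  # 12kg program...
    assert new.data_kg == 8                          # ...plus 8kg of data left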

This is the sketch and the intuition.  As a theory, it does one piece of very convenient work: it explains why we can't solve the Halting Problem in general (we do not possess correct formal systems of infinite size with which to reason about halting), but also explains precisely why we appear to be able to solve it in so many of the cases we "care about" (namely: we are reasoning about programs small enough that our theories are strong enough to decide their halting behavior -- and we discover new formal axioms to describe our environment).
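A trivial illustration of the "more information decides more halting behavior" direction, as a Python toy rather than the real recursion-theoretic construction: given one extra fact about a program, say a proven upper bound on its running time, halting becomes decidable by brute simulation.

    # Toy: halting becomes decidable given extra information (a step
    # bound) that the bare program text does not supply.

    from typing import Callable, Optional

    def halts_within(step: Callable[[int], Optional[int]], state: int,
                     bound: int) -> bool:
        """Run `step` from `state` for at most `bound` steps; `step`
        returning None signals a halt.  If `bound` is a *proven* bound
        on the machine's running time, this decides halting outright."""
        current: Optional[int] = state
        for _ in range(bound):
            current = step(current)
            if current is None:
                return True   # halted within the bound
        return False          # did not halt within the bound

    # A countdown machine halts; a counting-up machine does not.
    print(halts_within(lambda n: None if n == 0 else n - 1, 5, bound=100))  # True
    print(halts_within(lambda n: n + 1, 0, bound=100))                      # False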

So yeah.  I really have to go now.  Mathematical input and criticism are very welcome; the inevitable clarifying questions from people confused about what's going on will be answered eventually.

Meetup : Canberra: Intro to Solomonoff induction

0 DanielFilan 19 April 2015 10:58AM

Discussion article for the meetup : Canberra: Intro to Solomonoff induction

WHEN: 24 April 2015 06:00:00PM (+1000)

WHERE: 108 North Road, Acton

Suppose we are walking through the world and see a bunch of objects. Some of these objects are ravens, and all of the ravens turn out to be black. So we start entertaining the hypothesis that 'all ravens are black'. But why should we believe this hypothesis? It talks about an infinite number of ravens, almost all of which we haven't seen!

What we need is a method of induction, generalizing a finite number of examples into a universal rule. It has been claimed that Solomonoff induction is the best method out there. Is that true? Does that mean all scientists should use Solomonoff induction? How does it work? And what can it do for me?

Jan will address these and related questions, giving a brief tour from probability theory to the universally intelligent agent AIXI. No prior knowledge about math is required. As always, vegan snacks will be provided.
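For the curious, the central object of the talk fits in one line. This is the standard definition of Solomonoff's universal prior (stated here for reference; it is not part of the announcement): for a universal prefix machine U,

    M(x) = \sum_{p \,:\, U(p) = x\ast} 2^{-|p|},

where the sum ranges over all programs p whose output begins with x, so shorter explanations of the data dominate; prediction then proceeds by conditionalizing M on the observations so far.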

General meetup info:

  • If you use Facebook, please join our group.
  • Structured meetups are (usually) held on the second Saturday and fourth Friday of each month from 6 pm until late in the CSIT building, room N101.


Meetup : San Francisco Meetup: Board Games

0 rocurley 19 April 2015 02:41AM

Discussion article for the meetup : San Francisco Meetup: Board Games

WHEN: 20 April 2015 06:15:00PM (-0700)

WHERE: 1390 Market St., San Francisco, CA

We'll be meeting to hang out and play board games!

This is at our apartment because we have a table. Call me at 301-458-0764 or tailgate to get in. As always, showing up late is fine.


Weekly LW Meetups

1 FrankAdamek 18 April 2015 06:46AM

Irregularly scheduled Less Wrong meetups are taking place in:

The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:

Locations with regularly scheduled meetups: Austin, Berkeley, Berlin, Boston, Brussels, Buffalo, Cambridge UK, Canberra, Columbus, London, Madison WI, Melbourne, Moscow, Mountain View, New York, Philadelphia, Research Triangle NC, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers.


One week left for CSER researcher applications

7 RyanCarey 17 April 2015 12:40AM

This is the last week to apply for one of four postdoctoral research positions at the Centre for the Study of Existential Risk. We are seeking researchers in disciplines including: economics, science and technology studies, science policy, arms control policy, expert elicitation and aggregation, conservation studies and philosophy.

The application requires a research proposal of no more than 1500 words from an individual with a relevant doctorate.

"We are looking for outstanding and highly-committed researchers, interested in working as part of growing research community, with research projects relevant to any aspect of the project. We invite applicants to explain their project to us, and to demonstrate their commitment to the study of extreme technological risks.

We have several shovel-ready projects for which we are looking for suitable postdoctoral researchers. These include:

1. Ethics and evaluation of extreme technological risk (ETR) (with Sir Partha Dasgupta);

2. Horizon-scanning and foresight for extreme technological risks (with Professor William Sutherland);

3. Responsible innovation and extreme technological risk (with Dr Robert Doubleday and the Centre for Science and Policy).

However, recruitment will not necessarily be limited to these subprojects, and our main selection criterion is suitability of candidates and their proposed research projects to CSER's broad aims."

More details are available here. Applications close on April 24th.

- Sean OH and Ryan

Meetup : Sydney Rationality Dojo - Planning and Debugging

0 luminosity 16 April 2015 09:57PM

Discussion article for the meetup : Sydney Rationality Dojo - Planning and Debugging

WHEN: 03 May 2015 04:00:00PM (+1000)

WHERE: Humanist House, 10 Shepherd St Chippendale

Our next dojo will focus on how to successfully make plans to achieve our goals, and how to debug potential problems with those plans until we end up with one we are happy with.


Meetup : West LA: Improv & Rationality

0 abramdemski 16 April 2015 08:47AM

Discussion article for the meetup : West LA: Improv & Rationality

WHEN: 22 April 2015 07:00:00PM (-0700)

WHERE: 11066 Santa Monica Blvd, Los Angeles, CA

How to Find Us: Go into the Del Taco at the address above. We will be in the back room if possible.

Parking is free in the lot out front or on the street nearby.

Discussion: Do improvisational acting and rationality go together? I think this link should be explored. Improvisation involves a special kind of relationship with System 1, which most people need to train in order to pull off well. As such, learning improv skills may improve fast reactions, particularly in social settings. Improv games are also good group bonding activities. We will play some improv games geared toward rationality skills, and discuss possible relationships between improv and rationality.

No prior exposure to Less Wrong is required; this will be generally accessible.


Meetup : Washington, D.C.: Sword of Good

0 RobinZ 15 April 2015 05:18PM

Discussion article for the meetup : Washington, D.C.: Sword of Good

WHEN: 19 April 2015 03:00:00PM (-0400)

WHERE: Reynolds Center

We will be meeting in the Kogod Courtyard of the Donald W. Reynolds Center for American Art and Portraiture (8th and F Sts or 8th and G Sts NW, go straight past the information desk from either entrance) to talk about the LessWrongian short story "The Sword of Good". We will congregate between 3:00 and 3:30 p.m., and begin the discussion at 3:30.

"The Sword of Good" is a short story by Eliezer Yudkowsky in the form of an excerpt from a swords-and-sorcery fantasy novel. I won't give any major spoilers here, but it's a good read, and deconstructs some of the tropes of the subgenre. Discussion will continue as long as people are interested; as always, side conversations are permitted and encouraged.

The WMATA major track work schedule is clear for April 19; the weekend service adjustments chiefly affect the Red Line, which should run every 15 minutes.

Upcoming meetups:

  • Apr. 26: (Outdoors) Fun & Games (bring games, play games, converse, socialize, or any combination thereof)
  • May 3: Sharing Book Excerpts (bring books to share and discuss favorite passages)


Meetup : Austin, TX - Schelling Day

0 Vaniver 13 April 2015 02:19PM

Discussion article for the meetup : Austin, TX - Schelling Day

WHEN: 18 April 2015 06:00:00PM (-0500)

WHERE: 4212 Hookbilled Kite Drive

We're celebrating Schelling Day, the day for getting to know people in your community. It'll be a potluck followed by a bunch of sharing (starting at 7-7:30); last year's was awesome, and I hope you can make it to this one!

Standard advice:

  • Feel free to bring food if you want to, but also to not bring food if you don't want to. We typically have more than enough.
  • If you need a ride, get in contact with me and I'll see if I can arrange one.
  • If you've never been to a LW meetup in Austin before, feel free to come. This is probably the best event for meeting people.
  • If you're not sure about whether or not you should come, lean towards coming.

