Meetup : Sydney Rationality Dojo - Urge Propagation

1 luminosity 28 September 2014 01:56AM

WHEN: 05 October 2014 03:00:00PM (+1000)

WHERE: Humanist House, 10 Shepherd St Chippendale

We'll be examining how to connect your desire for a goal or outcome to the specific emotional urges that drive the actions needed to bring that outcome about.

After the session is over, there will also be an optional group dinner.

Meetup : October Rationality Dojo - Non-Violent Communication

1 MelbourneLW 28 September 2014 12:31AM

WHEN: 05 October 2014 03:30:00PM (+0800)

WHERE: Ross House Association, 247-251 Flinders Lane, Melbourne

[ATTN: Please remember the new location for the dojos: the Jenny Florence Room, Level 3, Ross House at 247 Flinders Lane, Melbourne. 3:30pm start / arrival - formal dojo activities will commence at 4:00pm.]

The Less Wrong Sunday Rationality Dojos are crafted to be serious self-improvement sessions for those committed to the Art of Rationality and personal growth. Each month a community member will run a session involving a presentation of content, discussion, and exercises. Continuing the succession of immensely successful dojos, Chris will run a session on Non-Violent Communication.

As always, we will review the personal goals we committed to at the previous Dojo (I will have done X by the next Dojo). Our goals are now being recorded via Google Forms here - https://docs.google.com/forms/d/1MCHH4MpbW0SI_2JyMSDlKnnGP4A0qxojQEZoMZIdopk/viewform, and Melbourne Less Wrong organisers have access to the form results if you wish to review the goals you set last month.

This month, we are also seeking 2-3 lightning talks from members. Speakers will be limited to 5 minutes, with room for questions. We will be asking for talks from attendees present, but if you already have a talk topic in mind, please contact Louise at lvalmoria@gmail.com. The Dojo is likely to run for 2-3 hours, after which some people will get dinner together.

If you have any trouble finding the venue or getting in, call Louise on 0419 192 367.

If you would like to present at a future Dojo or suggest a topic, please fill it in on the Rationality Dojo Roster: http://is.gd/dojoroster

To organise similar events, please send an email to melbournelw@gmail.com

The Future of Humanity Institute could make use of your money

42 danieldewey 26 September 2014 10:53PM

Many people have an incorrect view of the Future of Humanity Institute's funding situation, so this is a brief note to correct that; think of it as a spiritual successor to this post. As John Maxwell puts it, FHI is "one of the three organizations co-sponsoring LW [and] a group within the University of Oxford's philosophy department that tackles important, large-scale problems for humanity like how to go about reducing existential risk." (If you're not familiar with our work, this article is a nice, readable introduction, and our director, Nick Bostrom, wrote Superintelligence.) Though we are a research institute in an ancient and venerable institution, this does not guarantee funding or long-term stability.

Academic research is generally funded through grants, but because the FHI is researching important but unusual problems, and because this research is multi-disciplinary, we've found it difficult to attract funding from the usual grant bodies. This has meant that we’ve had to prioritise a certain number of projects that are not perfect for existential risk reduction, but that allow us to attract funding from interested institutions.

With more assets, we could both liberate our long-term researchers to do more "pure Xrisk" research, and hire or commission new experts when needed to look into particular issues (such as synthetic biology, the future of politics, and the likelihood of recovery after a civilizational collapse).

We are not in any immediate funding crunch, nor are we arguing that the FHI would be a better donation target than MIRI, CSER, or the FLI. But any donations would be both gratefully received and put to effective use. If you'd like to, you can donate to FHI here. Thank you!

New LW Meetups: Prague, Hasselt

2 FrankAdamek 26 September 2014 10:53PM

New meetups (or meetups with a hiatus of more than a year) are happening in:

  • Prague
  • Hasselt

Irregularly scheduled Less Wrong meetups are taking place in:

The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:

Locations with regularly scheduled meetups: Austin, Berkeley, Berlin, Boston, Brussels, Buffalo, Cambridge UK, Canberra, Columbus, London, Madison WI, Melbourne, Moscow, Mountain View, New York, Philadelphia, Research Triangle NC, Seattle, Sydney, Toronto, Vienna, Washington DC, Waterloo, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers.

Meetup : West LA—The Worst Argument in the World

1 OpenThreadGuy 26 September 2014 04:24AM

WHEN: 01 October 2014 07:00:00PM (-0700)

WHERE: 11066 Santa Monica Blvd, Los Angeles, CA

How to Find Us: Go into this Del Taco. We will be in the back room if possible.

Parking is free in the lot out front or on the street nearby.

Discussion: Last week, I included a link to the Worst Argument in the World essay, not because it was relevant to last week's topic, but because I thought it was something people should read. This week, I realized what I should have done instead. The three recommended readings are all rewrites of the same essay; since the first one is the best, you only need to read the others if the first doesn't click for you.

Recommended Reading:

No prior exposure to Less Wrong is required; this will be generally accessible.

Meetup : Washington, D.C.: Book Swap

1 RobinZ 26 September 2014 03:51AM

WHEN: 28 September 2014 03:00:00PM (-0400)

WHERE: National Portrait Gallery

We will be meeting in the Kogod Courtyard of the National Portrait Gallery (8th and F Sts or 8th and G Sts NW, go straight past the information desk from either entrance) to lend, borrow, and discuss whatever random books people bring. As usual, folk will congregate in the courtyard between 3:00 and 3:30; the meeting will begin with people taking turns to suggest topics of discussion based on the books they brought, returning to the usual unstructured conversation when people are done discussing books.

A couple notes:

  • There is no obligation to lend, borrow, or return books at book swap meetups. You are not required to bring any given book, or any books at all; you are not required to lend the books that you do bring; if you wish to lend out a given book, you are not required to lend it out to the first person to ask (or to the next person to ask, or to anyone at all); you are not required to borrow a book that is offered to you; and if you have already borrowed a book, you are not required to return it at this meetup. This is a Schelling point for book swaps, no more.
  • Books belong to their owners - keep any book you borrow in good condition, and return it in a timely fashion when you finish reading it or when the owner asks for it back.
  • In the past, the owners of some books have given permission for borrowers to lend them directly to anyone at the meetup who is interested. If you do not have permission from the owner of the book, do not do this.

Upcoming meetups:

  • Oct. 5: Fun & Games
  • Oct. 12: TBA
  • Oct. 19: Mini Talks
  • Oct. 26: TBA

Meetup : DC EA meetup / Petrov day dinner

1 Benquo 25 September 2014 10:50PM

WHEN: 26 September 2014 07:00:00PM (-0400)

WHERE: 3001 Veazey Ter NW # 1005, Washington, DC 20008

On this day in 1983, in an unparalleled feat of Effective Altruism, Stanislav Petrov declined to destroy the world: http://lesswrong.com/lw/jq/926_is_petrov_day/

We'll celebrate his achievement by getting together for food, drinks, and not destroying the world.

Food and drinks will be provided, though contributions are welcome.

Ben Hoffman will give a brief talk about the test run of the project to comment on proposed regulations. Most of the night will be free discussion of anything we want.

Schedule:

  • 7:00 - 7:30 PM: Arrive
  • 7:30 - 8:00 PM: Talk on DC EA projects
  • 8:00 - 9:00 PM: Dinner, drinks, discussion
  • 9:00 - 9:30 PM: Petrov day ritual
  • 9:30 - Late: Free discussion

Logics for Mind-Building Should Have Computational Meaning

15 eli_sennesh 25 September 2014 09:17PM

The Workshop

Late in July I organized and held MIRIx Tel-Aviv with the goal of investigating the currently-open (to my knowledge) Friendly AI problem called "logical probability": the issue of assigning probabilities to formulas in a first-order proof system, in order to use the reflective consistency of the probability predicate to get past the Loebian Obstacle to building a self-modifying reasoning agent that will trust itself and its successors.  Vadim Kosoy, Benjamin Fox, Joshua Fox, and I met at the Tel-Aviv Makers' Insurgence for six hours, and we each presented our ideas.  I spent most of it sneezing due to my allergies to TAMI's resident cats.

My idea was to go with the proof-theoretic semantics of logic and attack computational construction of logical probability via the Curry-Howard Isomorphism between programs and proofs: this yields a rather direct translation between computational constructions of logical probability and the learning/construction of an optimal function from sensory inputs to actions required by Updateless Decision Theory.

The best I can give as a mathematical result is as follows:

P(\Gamma \vdash a:A \mid \Gamma \vdash b:B) = P(\Gamma,x:B \vdash [\forall y:B, x/y]a:A)

P(\Gamma \vdash (a, b): A \wedge B) = P(\Gamma \vdash a:A \mid \Gamma \vdash b:B) * P(\Gamma \vdash b:B)

\frac{x:A \notin \Gamma}{P(\Gamma \vdash x:A) = \mathcal{M}_{\lambda\mu} (A)}

\frac{x:A \in \Gamma}{P(\Gamma \vdash x:A) = 1.0}

The capital \Gamma is a set of hypotheses/axioms/assumptions, and the English letters are metasyntactic variables (like "foo" and "bar" in programming lessons).  The lower-case letters denote proofs/programs, and the upper-case letters denote propositions/types.  The turnstile \vdash just means "deduces": the judgement \Gamma \vdash a:A can be read here as "an agent whose set of beliefs is denoted \Gamma will believe that the evidence a proves the proposition A."  The [\forall y:B, x/y]a performs a "reversed" substitution, with the result reading: "for all y proving/of-type B, substitute x for y in a".  This means that we algorithmically build a new proof/construction/program from a in which any and all constructions proving the proposition B are replaced with the logically-equivalent hypothesis x, which we have added to our hypothesis-set \Gamma.

Thus the first equation reads, "the probability of a proving A conditioned on b proving B equals the probability of a proving A when we assume the truth of B as a hypothesis."  The second equation then uses this definition of conditional probability to give the normal Product Rule of probabilities for the logical product (the \wedge operator), defined proof-theoretically.  I strongly believe I could give a similar equation for the normal Sum Rule of probabilities for the logical sum (the \vee operator) if I could only access the relevant paywalled paper, in which the λμ-calculus acting as an algorithmic interpretation of the natural-deduction system for classical propositional logic (rather than intuitionistic) is given.
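
Substituting the first equation into the second gives the product rule entirely in terms of hypothetical judgements:

P(\Gamma \vdash (a, b): A \wedge B) = P(\Gamma,x:B \vdash [\forall y:B, x/y]a:A) * P(\Gamma \vdash b:B)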

The third item given there is an inference rule, which reads, "if x is a free variable/hypothesis imputed to have type/prove proposition A, not bound in the hypothesis-set \Gamma, then the probability with which we believe x proves A is given by the Solomonoff Measure of type A in the λμ-calculus".  We can define that measure simply as the summed Solomonoff Measure of every program/proof possessing the relevant type, and I don't think going into the details of its construction here would be particularly productive.  Free variables in λ-calculus are isomorphic to unproven hypotheses in natural deduction, and so a probabilistic proof system could learn how much to believe in some free-standing hypothesis via Bayesian evidence rather than algorithmic proof.
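
As a toy illustration of that summed measure (my own sketch, not workshop material), suppose each proof/program is identified with a prefix-free binary code and carries the type it inhabits; the measure of a type is then just the summed weight 2^-length over its inhabitants:

```python
from fractions import Fraction

def term_measure(code: str) -> Fraction:
    """Weight of a single program/proof: 2^-(length of its binary code),
    as in the usual Solomonoff prior over prefix-free codes."""
    return Fraction(1, 2 ** len(code))

def solomonoff_measure(typ: str, enumerate_terms, max_code_len: int = 20) -> Fraction:
    """Approximate M(typ) by summing the weights of all enumerated
    programs/proofs inhabiting `typ`, up to a code-length cutoff.
    (The true measure is only semicomputable; the cutoff makes this a
    lower bound that improves as max_code_len grows.)"""
    total = Fraction(0)
    for code, term_type in enumerate_terms(max_code_len):
        if term_type == typ:
            total += term_measure(code)
    return total

# Hypothetical toy enumeration: code "0" proves A; codes "10" and "11" prove B.
toy_terms = lambda max_len: [("0", "A"), ("10", "B"), ("11", "B")]
print(solomonoff_measure("A", toy_terms))  # 1/2
print(solomonoff_measure("B", toy_terms))  # 1/2 (= 1/4 + 1/4)
```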

The final item given here is trivial: anything assumed has probability 1.0, that of a logical tautology.

The upside to invoking the strange, alien λμ-calculus instead of the more normal, friendly λ-calculus is that we thus reason inside classical logic rather than intuitionistic, which means we can use the classical axioms of probability rather than intuitionistic Bayesianism.  We need classical logic here: if we switch to intuitionistic logics (Heyting algebras rather than Boolean algebras) we do get to make computational decidability a first-class citizen of our logic, but the cost is that we can then believe only computationally provable propositions. As Benjamin Fox pointed out to me at the workshop, Loeb's Theorem then becomes a triviality, with real self-trust rendered no easier.

The Apologia

My motivation and core idea for all this was very simple: I am a devout computational trinitarian, believing that logic must be set on foundations which describe reasoning, truth, and evidence in a non-mystical, non-Platonic way.  The study of first-order logic and especially of incompleteness results in metamathematics, from Goedel on up to Chaitin, aggravates me in its relentless Platonism, and especially in the way Platonic mysticism about logical incompleteness so often leads to the belief that minds are mystical.  (It aggravates other people, too!)

The slight problem I ran into is that there's a shit-ton I don't know about logic.  I am now working to remedy this grievous hole in my previous education.  Also, this problem is really deep.

I thus apologize for ending the rigorous portion of this write-up here.  Everyone expecting proper rigor, you may now pack up and go home, if you were ever paying attention at all.  Ritual seppuku will duly be committed, followed by hors d'oeuvre.  My corpse will be duly recycled to make paper-clips, in the proper fashion of a failed LessWrongian.

The Parts I'm Not Very Sure About

With any luck, that previous paragraph got rid of all the serious people.

I do, however, still think that the (beautiful) equivalence between computation and logic can yield some insights here.  After all, the whole reason for the strange incompleteness results in first-order logic (shown by Boolos in his textbook, I'm told) is that first-order logic, as a reasoning system, contains sufficient computational machinery to encode a Universal Turing Machine.  The bidirectionality of this reduction (Hilbert and Gentzen both have given computational descriptions of first-order proof systems) is just another demonstration of the equivalence.

In fact, it seems to me (right now) to yield a rather intuitively satisfying explanation of why the Gaifman-Carnot Condition (that every instance we see of P(x_i) provides Bayesian evidence in favor of \forall x.P(x)) for logical probabilities is not computably approximable.  What would we need to interpret the Gaifman Condition from an algorithmic, type-theoretic viewpoint?  From this interpretation, we would need a proof of our universal generalization.  This would have to be a dependent product of form \Pi(x:A).P(x), a function taking any construction x:A to a construction of type P(x), which itself has type Prop.  To learn such a dependent function from the examples would be to search for an optimal (simple, probable) construction (program) constituting the relevant proof object: effectively, an individual act of Solomonoff Induction.  Solomonoff Induction, however, is already only semicomputable, which would then make a Gaifman-Hutter distribution (is there another term for these?) doubly semicomputable, since even generating it involves a semiprocedure.
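
To see the semicomputability concretely, here is a toy dovetailing sketch (mine, with invented example programs): the weight of an output can be approximated from below by giving every program an ever-larger step budget, but no finite stage ever certifies that the estimate has converged.

```python
def semicomputable_weight(programs, target_output, step_budget):
    """One stage of a dovetailed lower approximation: sum 2^-|code| over
    all programs that halt with `target_output` within `step_budget`
    steps.  As the budget grows, the estimate is monotone nondecreasing
    and converges to the true weight from below, but no stage can tell
    us that every remaining program diverges; hence 'semicomputable'."""
    total = 0.0
    for code, run in programs.items():
        if run(step_budget) == target_output:
            total += 2.0 ** -len(code)
    return total

# Hypothetical programs: "0" halts at once with "A"; "10" needs 1000 steps.
progs = {
    "0":  lambda steps: "A" if steps >= 1 else None,
    "10": lambda steps: "A" if steps >= 1000 else None,
}
print(semicomputable_weight(progs, "A", 10))    # 0.5  (only "0" found so far)
print(semicomputable_weight(progs, "A", 2000))  # 0.75 (both found)
```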

The benefit of using the constructive approach to probabilistic logic here is that we know perfectly well that however incomputable Solomonoff Induction and Gaifman-Hutter distributions might be, both existing humans and existing proof systems succeed in building proof-constructions for quantified sentences all the time, even in higher-order logics such as Coquand's Calculus of Constructions (the core of a popular constructive proof assistant) or Luo's Logic-Enriched Type Theory (the core of a popular dependently-typed programming language and proof engine based on classical logic).  Such logics and their proof-checking algorithms constitute, going all the way back to Automath, the first examples of computational "agents" which acquire specific "beliefs" in a mathematically rigorous way, subject to human-proved theorems of soundness, consistency, and programming-language-theoretic completeness (rather than meaning that every true proposition has a proof, this means that every program which does not become operationally stuck has a type and is thus the proof of some proposition).  If we want our AIs to believe in accordance with soundness and consistency properties we can prove before running them, while being composed of computational artifacts, I personally consider this the foundation from which to build.

Where we can acquire probabilistic evidence in a sound and computable way, as noted above in the section on free variables/hypotheses, we can do so for propositions which we cannot algorithmically prove.  This would bring us closer to our actual goals of using logical probability in Updateless Decision Theory or of getting around the Loebian Obstacle.

Some of the Background Material I'm Reading

Another reason why we should use a Curry-Howard approach to logical probability is one of the simplest possible reasons: the burgeoning field of probabilistic programming is already being built on it.  The Computational Cognitive Science lab at MIT is publishing papers showing that their languages are universal for computable and semicomputable probability distributions, and getting strong results in the study of human general intelligence.  Specifically: they are hypothesizing that we can dissolve "learning" into "inducing probabilistic programs via hierarchical Bayesian inference", "thinking" into "simulation" into "conditional sampling from probabilistic programs", and "uncertain inference" into "approximate inference over the distributions represented by probabilistic programs, conditioned on some fixed quantity of sampling that has been done."
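
To make "thinking as conditional sampling" concrete, here is a minimal sketch (not the lab's actual languages such as Church or Venture, just the underlying idea, with a model and numbers invented for illustration): a generative model is an ordinary sampling procedure, and conditioning is rejection of the samples inconsistent with what was observed.

```python
import random

def flip(p=0.5):
    return random.random() < p

def model():
    """A tiny generative model: why might the lawn be wet?"""
    rain      = flip(0.3)
    sprinkler = flip(0.2)
    wet_lawn  = rain or sprinkler or flip(0.05)  # small chance of other causes
    return {"rain": rain, "sprinkler": sprinkler, "wet_lawn": wet_lawn}

def conditional_sample(model, condition, n=100_000):
    """'Thinking as simulation': run the model many times and keep only
    the runs consistent with the observation (rejection sampling)."""
    return [w for w in (model() for _ in range(n)) if condition(w)]

# "Given that the lawn is wet, how probable is rain?"
worlds = conditional_sample(model, lambda w: w["wet_lawn"])
print(sum(w["rain"] for w in worlds) / len(worlds))  # about 0.64
```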

In fact, one might even look at these ideas and think that, perhaps, an agent which could find some way to sample quickly and more accurately, or to learn probabilistic programs more efficiently (in terms of training data), than was built into its original "belief engine" could then rewrite its belief engine to use these new algorithms to perform strictly better inference and learning.  Unless I'm as completely wrong as I usually am about these things (that is, very extremely completely wrong based on an utterly unfounded misunderstanding of the whole topic), it's a potential engine for recursive self-improvement.

They have also been studying how to implement statistical inference techniques for their generative modeling languages which do not obey Bayesian soundness.  While most of machine learning/perception works according to error-rate minimization rather than Bayesian soundness (exactly because Bayesian methods are often too computationally expensive for real-world use), I would prefer someone at least study the implications of employing unsound inference techniques for more general AI and cognitive-science applications in terms of how often such a system would "misbehave".

Many of MIT's models are currently dynamically typed and appear to leave type soundness (the logical rigor with which agents come to believe things by deduction) to future research.  And yet: they got to this problem first, so to speak.  We really ought to be collaborating with them, with the full-time grant-funded academic researchers, rather than trying to armchair-reason our way to a full theory of logical probability as a large group of amateurs or part-timers and only a small core cohort of full-time MIRI and FHI staff investigating AI safety issues.

(I admit to having a nerd crush, and I am actually planning to go visit the Cocosci Lab this coming week, and want/intend to apply to their PhD program.)

They have also uncovered something else I find highly interesting: human learning of both concepts and causal frameworks seems to take place via hierarchical Bayesian inference, gaining a "blessing of abstraction" to countermand the "curse of dimensionality".  The natural interpretation of these abstractions in terms of constructions and types would be that, as in dependently-typed programming languages, constructions have types, and types are constructions, but for hierarchical-learning purposes, it would be useful to suppose that types have specific, structured types more informative than Prop or Type_n (for some universe level n).  Inference can then proceed from giving constructions or type-judgements as evidence at the bottom level, up the hierarchy of types and meta-types to give probabilistic belief-assignments to very general knowledge.  Even very different objects could have similar meta-types at some level of the hierarchy, allowing hierarchical inference to help transfer Bayesian evidence between seemingly different domains, giving insight into how efficient general intelligence can work.
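
Here is a toy grid-approximation sketch of that "blessing of abstraction", loosely in the spirit of the lab's bags-of-marbles models (the model structure, the numbers, and the use of numpy/scipy are my own assumptions, not their code): each bag has its own colour bias theta drawn from Beta(a, a), observing several bags teaches us about the shared hyperparameter a, and that abstract knowledge then sharpens predictions about a brand-new bag.

```python
import numpy as np
from scipy.stats import beta, binom

a_grid     = np.array([0.1, 0.5, 1.0, 2.0, 10.0])  # candidate hyperparameters
theta_grid = np.linspace(0.01, 0.99, 99)           # grid over each bag's bias

def bag_likelihood(k, n, a):
    """P(k red in n draws | hyperparameter a), integrating out the
    bag-specific bias theta ~ Beta(a, a) on a grid."""
    prior = beta.pdf(theta_grid, a, a)
    prior /= prior.sum()
    return np.sum(binom.pmf(k, n, theta_grid) * prior)

# Evidence: three bags, each looking extreme (all red or all blue).
bags = [(5, 5), (0, 5), (5, 5)]
posterior_a = np.ones_like(a_grid)
for k, n in bags:
    posterior_a *= np.array([bag_likelihood(k, n, a) for a in a_grid])
posterior_a /= posterior_a.sum()

# The posterior now favours small a ("bags tend to be extreme"), so a
# single red draw from a NEW bag already suggests that bag is mostly red:
# inference at the abstract level transfers across bags.
print(dict(zip(a_grid.tolist(), posterior_a.round(3).tolist())))
```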

Just-for-fun Postscript

If we really buy into the model of thinking as conditional simulation, we can use that to dissolve the modalities "possible" and "impossible".  We arrive at (by my count) three different ways of considering the issue computationally:

  1. Conceivable/imaginable: the generative models which constitute my current beliefs do or do not yield a path to make some logical proposition true or to make some causal event happen (planning can be done as inference, after all), with or without some specified level of probability.
  2. Sensibility/absurdity: the generative models which constitute my current beliefs place a desirably high or undesirably low probability on the known path(s) by which a proposition might be true or by which an event might happen.  The level which constitutes "desirable" could be set as the value for a hypothesis test, or some other value determined decision-theoretically.  This could relate to Pascal's Mugging: how probable must something be before I consider it real rather than an artifact of my own hypothesis space?
  3. Consistency or Contradiction: the generative models which constitute my current beliefs, plus the hypothesis that some proposition is true or some event can come about, do or do not yield a logical contradiction with some probability (that is, we should believe the contradiction exists only to the degree we believe in our existing models in the first place!).

I mostly find this fun because it lets us talk rigorously about when we should "shut up and do the 1,2!impossible" and when something is very definitely 3!impossible.
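
Reusing the rejection-sampling idea from the sketch earlier in this post, a toy classifier for the first two modalities might look like the following (the third, consistency checking, is omitted here, since it requires folding the hypothesis into the model itself):

```python
def classify_modality(model, proposition, n=100_000, threshold=1e-3):
    """Toy version of modalities 1 and 2 above, by simulation:
    'inconceivable' : no sampled world makes the proposition true;
    'absurd'        : true in some worlds, but below the threshold;
    'conceivable'   : true with non-negligible probability."""
    hits = sum(proposition(model()) for _ in range(n))
    p = hits / n
    if hits == 0:
        return "inconceivable (by these models)", p
    return ("conceivable" if p >= threshold else "absurd"), p

# With the wet-lawn `model` from the earlier sketch, rain without a wet
# lawn is never sampled, so it would come out "inconceivable":
#   classify_modality(model, lambda w: w["rain"] and not w["wet_lawn"])
```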

Meetup : London social meetup

1 philh 25 September 2014 04:35PM

WHEN: 28 September 2014 02:00:00PM (+0100)

WHERE: Shakespeare's Head, Holborn, WC2B 6BG

The next LW London meetup will be on September 28th. Join us from 2pm to talk about the sorts of things that your other friends will look funny at you for talking about.

I likely won't be there myself this week, but if you have trouble finding us, you can contact Leon on 07860466862.

About London LessWrong:

We run this meetup approximately every other week; these days we tend to get in the region of 5-15 people in attendance. By default, meetups are just unstructured social discussion about whatever strikes our fancy: books we're reading, recent posts on LW/related blogs, logic puzzles, toilet usage statistics....

Sometimes we play The Resistance or other games. We usually finish around 7pm, give or take an hour, but people arrive and leave whenever suits them.

Related discussion happens on both our google group and our facebook group.

Newcomblike problems are the norm

29 So8res 24 September 2014 06:41PM

This is crossposted from my blog. In this post, I discuss how Newcomblike situations are common among humans in the real world. The intended audience of my blog is wider than the readerbase of LW, so the tone might seem a bit off. Nevertheless, the points made here are likely new to many.

1

Last time we looked at Newcomblike problems, which cause trouble for Causal Decision Theory (CDT), the standard decision theory used in economics, statistics, narrow AI, and many other academic fields.

These Newcomblike problems may seem like strange edge case scenarios. In the Token Trade, a deterministic agent faces a perfect copy of themself, guaranteed to take the same action as they do. In Newcomb's original problem there is a perfect predictor Ω which knows exactly what the agent will do.

Both of these examples involve some form of "mind-reading" and assume that the agent can be perfectly copied or perfectly predicted. In a chaotic universe, these scenarios may seem unrealistic and even downright crazy. What does it matter that CDT fails when there are perfect mind-readers? There aren't perfect mind-readers. Why do we care?

The reason that we care is this: Newcomblike problems are the norm. Most problems that humans face in real life are "Newcomblike".

These problems aren't limited to the domain of perfect mind-readers; rather, problems with perfect mind-readers are the domain where these problems are easiest to see. However, they arise naturally whenever an agent is in a situation where others have knowledge about its decision process via some mechanism that is not under its direct control.
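
As a toy illustration of why imperfect knowledge suffices (my own arithmetic, not from the original post), consider Newcomb's problem with a merely fallible predictor: one-boxing already wins in expectation once the predictor beats chance by a sliver.

```python
def newcomb_expected_value(one_box: bool, accuracy: float) -> float:
    """Expected payoff with a fallible predictor: box A (transparent)
    holds $1,000; box B holds $1,000,000 iff the predictor expected
    one-boxing, and the predictor is right with probability `accuracy`."""
    if one_box:
        return accuracy * 1_000_000            # $1M only if correctly predicted
    return 1_000 + (1 - accuracy) * 1_000_000  # $1K, plus $1M if predictor erred

for acc in (0.5, 0.6, 0.9, 0.99):
    print(acc, newcomb_expected_value(True, acc), newcomb_expected_value(False, acc))
# One-boxing dominates once accuracy exceeds about 0.5005: the "mind-reader"
# only needs to be slightly better than a coin flip.
```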
