Hacking Less Wrong made easy: Vagrant edition

28 Morendil 30 January 2012 06:51PM

The Less Wrong Public Goods Team has already brought you an easy-to-use virtual machine for hacking Less Wrong.

But virtual boxes can cut both ways: on the one hand, you don't have to worry about setting things up yourself; on the other hand, not knowing how things were put together, and having to deal with a "black box" that doesn't let you use your own source code editor or pick an OS, can be off-putting. To me at least, these trivial inconveniences were enough to stand in the way of updating my copy of the source and making some useful tweaks.

Enter Vagrant - and a little work I've done today for LW hackers and would-be hackers. Vagrant is a recent tool that allows you to treat virtual machine configurations as source code.

Instead of being something that someone possessed of arcane knowledge has put together, a virtual machine under Vagrant results from executing a series of source code instructions - and this source code is available for you to read, review, understand or change. (Software development should be a process of knowledge capture, not some hermetic discipline where you rely on the intransmissible wisdom of remote elders.)

Preliminary (but tested) results are up on my GitHub repo - it's a fork of the official LW code base, not the real thing. (Once this has been tested by someone else, and if it works well, I intend to submit a pull request so that these improvements end up in the main codebase.) The following assumes you have a Unix or Mac system, or, if you're using Windows, that you're command-line competent.

Hacking on LW is now done as follows (compared to using the VM):

  • The following prerequisites are unchanged: git, VirtualBox
  • Install the following prerequisites: Ruby, rubygems, Vagrant
  • Download the Less Wrong source code as follows: git clone git@github.com:Morendil/lesswrong.git
  • Enter the "lesswrong" directory, then build the VM with: vagrant up (may take a while)
  • Log into the virtual box with: vagrant ssh
  • Go to the "/vagrant/r2" directory, and copy example.ini to development.ini
  • Change all instances of "password" in development.ini to "reddit"
  • You can now start the LW server with: paster serve --reload development.ini port=8080
  • Browse the URL http://localhost:8080/
The cool part is that the "/vagrant" directory on the VM is mapped to where you checked out the LW source code on your own machine: it's a shared directory, which means you can use your own code editor, run grep searches and so on. You've broken out of the black box!
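
That mapping, and everything else about how the VM is built, lives in the Vagrantfile at the root of the checkout. I won't reproduce the repo's actual file here; the following is only an illustrative sketch of the kind of configuration involved, and the box name, forwarded port and provisioning script name are placeholders, not what the fork necessarily uses:

    # Hypothetical Vagrantfile sketch; the repo's actual file will differ.
    Vagrant.configure("2") do |config|
      config.vm.box = "precise32"                                  # placeholder base box
      config.vm.network "forwarded_port", guest: 8080, host: 8080  # makes http://localhost:8080/ reachable from the host
      config.vm.synced_folder ".", "/vagrant"                      # your checkout shows up inside the VM as /vagrant
      config.vm.provision "shell", path: "bootstrap.sh"            # placeholder script that installs the LW dependencies
    end

Because the whole configuration is a short, readable file like this, you can see exactly how the machine was put together and change it yourself.
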
If you try it, please report your experience in the thread below.

 

A signaling theory of class x politics interaction

53 Yvain 17 October 2011 06:49PM

The media, most recently The Economist and Scientific American, have been publicizing a surprising statistical finding: in the current economic climate, when more Americans than ever are poor, support for policies that redistribute wealth to the poor is at its lowest level ever. This new-found antipathy towards aid to the poor is concentrated among people who are near, but not yet on, the lowest rung of the social ladder. The Economist adds some related statistics: those who earn slightly more than the minimum wage are most opposed to raising the minimum wage, and support for welfare in an area decreases as the percentage of welfare recipients in the area rises.

Both articles explain the paradoxical findings by appealing to something called "last place aversion", an observed tendency for people to overvalue not being in last place. For example, in laboratory experiments where everyone gets randomly determined amounts of money, most people are willing to help those with less money than themselves gain cash - except the person with the second to lowest amount of money, who tends to try to thwart the person in last place even if it means enriching those who already have the most.

"Last place aversion" is interesting, and certainly deserves at least a footnote in the catalogue of cognitive biases and heuristics, but I find it an unsatisfying explanation for the observations about US attitudes toward wealth redistribution. For one thing, the entire point of last place aversion is that it only affects those in last place, but in a massive country like the United States, everyone can find someone worse off than themselves (with one exception). For another, redistributive policies usually stop short of making those who need government handouts wealthier than those who do not; subsidizing more homeless shelters doesn't risk giving the homeless a nicer house than your own. Finally, many of the policies people oppose, like taxing the rich, don't directly translate to helping those in last place.

I propose a different mechanism, one based on ... wait for it ... signaling.

In a previous post, I discussed multi-level signaling and counter-signaling, where each level tries to differentiate itself from the level beneath it. For example, the nouveau riche differentiate themselves from the middle class by buying ostentatious bling, and the nobility (who are at no risk of being mistaken for the middle class) differentiate themselves from the nouveau riche by not buying ostentatious bling.

The very poor have one strong incentive to support redistribution of wealth: they need the money. They also have a second, subtler incentive: most redistributive policies come packaged with a philosophy that the poor are not personally responsible for their poverty, but are at least partially the victims of the rest of society. Therefore, these policies inflate both their pocketbooks and their egos.

The lower middle class gain what status they have by not being the very poor; effective status signaling for a lower middle class person is that which proves that she is certainly not poor. One effective method is to hold opinions contrary to those of the poor: that redistribution of wealth is evil and that the poor deserve their poverty. This ideology celebrates the superiority of the lower middle class over the poor by emphasizing the biggest difference between the lower middle class and the very poor: self-reliance. By asserting this ideology, a lower middle class person can prove her lower middle class status.

The upper middle class gain what status they have by not being the lower middle class; effective status signaling for an upper middle class person is that which proves that she is certainly not lower middle class. One effective way is to hold opinions contrary to those of the lower middle class: that really the poor and lower middle class are the same sort of people, but some of them got lucky and some of them got unlucky. The only people who can comfortably say "Deep down there's really no difference between myself and a poor person" are people confident that no one will actually mistake them for a poor person after they say this.

As a thought experiment, imagine your reactions to the following figures:

1. A bearded grizzled man in ripped jeans, smelling slightly of alcohol, ranting about how the government needs to give more free benefits to the poor.

2. A bearded grizzled man in ripped jeans, smelling slightly of alcohol, ranting about how the poor are lazy and he worked hard to get where he is today.

3. A well-dressed, stylish man in a business suit, ranting about how the government needs to give more free benefits to the poor.

4. A well-dressed, stylish man in a business suit, ranting about how the poor are lazy and he worked hard to get where he is today.

My gut reactions are (1, lazy guy who wants free money) (2, honorable working class salt-of-the-earth) (3, compassionate guy with good intentions) (4, insensitive guy who doesn't realize his privilege). If these are relatively common reactions, these would suffice to explain the signaling patterns in these demographics.

If this were true, it would explain the unusual trends cited in the first paragraph. An area where welfare became more common would see support for welfare drop, as it became more and more necessary for people to signal that they themselves were not welfare recipients. Support for minimum wage would be lowest among people who earn just slightly more than minimum wage, and who need to signal that they are not minimum wage earners. And since upper middle class people tend to favor redistribution as a status signal and lower middle class people tend to oppose it, a recession that drives more people into the lower middle class would cause a drop in support for redistributive policies.

You're Entitled to Arguments, But Not (That Particular) Proof

57 Eliezer_Yudkowsky 15 February 2010 07:58AM

Followup to: Logical Rudeness

"Modern man is so committed to empirical knowledge, that he sets the standard for evidence higher than either side in his disputes can attain, thus suffering his disputes to be settled by philosophical arguments as to which party must be crushed under the burden of proof."
        -- Alan Crowe

There's a story - in accordance with Poe's Law, I have no idea whether it's a joke or it actually happened - about a creationist who was trying to claim a "gap" in the fossil record, two species without an intermediate fossil having been discovered.  When an intermediate species was discovered, the creationist responded, "Aha!  Now there are two gaps."

Since I'm not a professional evolutionary biologist, I couldn't begin to rattle off all the ways that we know evolution is true; true facts tend to leave traces of themselves behind, and evolution is the hugest fact in all of biology.  My specialty is the cognitive sciences, so I can tell you of my own knowledge that the human brain looks just like we'd expect it to look if it had evolved, and not at all like you'd think it would look if it'd been intelligently designed.  And I'm not really going to say much more on that subject.  As I once said to someone who questioned whether humans were really related to apes:  "That question might have made sense when Darwin first came up with the hypothesis, but this is the twenty-first century.  We can read the genes.  Human beings and chimpanzees have 95% shared genetic material.  It's over."

Well, it's over, unless you're crazy like a human (ironically, more evidence that the human brain was fashioned by a sloppy and alien god).  If you're crazy like a human, you will engage in motivated cognition; and instead of focusing on the unthinkably huge heaps of evidence in favor of evolution, the innumerable signs by which the fact of evolution has left its heavy footprints on all of reality, the uncounted observations that discriminate between the world we'd expect to see if intelligent design ruled and the world we'd expect to see if evolution were true...

...instead you search your mind, and you pick out one form of proof that you think evolutionary biologists can't provide; and you demand, you insist upon that one form of proof; and when it is not provided, you take that as a refutation.

You say, "Have you ever seen an ape species evolving into a human species?"  You insist on videotapes - on that particular proof.

And that particular proof is one we couldn't possibly be expected to have on hand; it's a form of evidence we couldn't possibly be expected to be able to provide, even given that evolution is true.

Yet you reason, illogically, that if a videotape would provide definite proof, then, likewise, the absence of a videotape must constitute definite disproof.  Or perhaps just render all other arguments void and turn the issue into a mere matter of personal opinion, with no one's opinion being better than anyone else's.

continue reading »

Logical Rudeness

65 Eliezer_Yudkowsky 29 January 2010 06:48AM

The concept of "logical rudeness" (which I'm pretty sure I first found here, HT) is one that I should write more about, one of these days.  One develops a sense of the flow of discourse, the give and take of argument.  It's possible to do things that completely derail that flow of discourse without shouting or swearing.  These may not be considered offenses against politeness, as our so-called "civilization" defines that term.  But they are offenses against the cooperative exchange of arguments, or even the rules of engagement with the loyal opposition.  They are logically rude.

Suppose, for example, that you're defending X by appealing to Y, and when I seem to be making headway on arguing against Y, you suddenly switch (without having made any concessions) to arguing that it doesn't matter if ~Y because Z still supports X; and when I seem to be making headway on arguing against Z, you suddenly switch to saying that it doesn't matter if ~Z because Y still supports X.  This is an example from an actual conversation, with X = "It's okay for me to claim that I'm going to build AGI in five years yet not put any effort into Friendly AI", Y = "All AIs are automatically ethical", and Z = "Friendly AI is clearly too hard since SIAI hasn't solved it yet".

Even if you never scream or shout, this kind of behavior is rather frustrating for the one who has to talk to you.  If we are ever to perform the nigh-impossible task of actually updating on the evidence, we ought to acknowledge when we take a hit; the loyal opposition has earned that much from us, surely, even if we haven't yet conceded.  If the one is reluctant to take a single hit, let them further defend the point.  Swapping in a new argument?  That's frustrating.  Swapping back and forth?  That's downright logically rude, even if you never raise your voice or interrupt.

The key metaphor is flow.  Consider the notion of "semantic stopsigns", words that halt thought.  A stop sign is something that happens within the flow of traffic.  Swapping back and forth between arguments might seem merely frustrating, or rude, if you take the arguments at face value - if you stay on the object level.  If you jump back a level of abstraction and try to sense the flow of traffic, and imagine what sort of traffic signal this corresponds to... well, you wouldn't want to run into a traffic signal like that.

continue reading »

Immortality Roadmap

9 turchin 28 July 2015 09:27PM

Added: Direct link to the pdf: http://immortality-roadmap.com/IMMORTEN.pdf

 

A lot of people value indefinite life extension, but most have their own preferred method of achieving it. The goal of this map is to present all known ways of radical life extension in an orderly and useful way.

A rational person could choose to implement all of these plans or to concentrate only on one of them, depending on his available resources, age and situation. Such actions may be personal or social; both are necessary.

The roadmap consists of several plans; each of them acts as insurance in the case of failure of the previous plan. (The roadmap has a similar structure to the "Plan of action to prevent human extinction risks".) The first two plans contain two rows, one of which represents personal actions or medical procedures, and the other represents any collective activity required.

Plan A. The most obvious way to reach immortality is to survive until the creation of Friendly AI; in that case if you are young enough and optimistic enough, you can simply do nothing – or just fund MIRI. However, if you are older, you have to jump from one method of life extension to the next as they become available. So plan A is a relay race of life extension methods, until the problem of death is solved.

This plan includes actions to defeat aging, to grow and replace diseased organs with new bioengineered ones, to get a nanotech body and in the end to be scanned into a computer. It is an optimized sequence of events, and depends on two things – your personal actions (such as regular medical checkups), and collective actions such as civil activism and scientific research funding.

Plan B. However, if Plan A fails, i.e. if you die before the creation of superintelligence, there is Plan B, which is cryonics. Some simple steps can be taken now, such as calling your nearest cryocompany about a contract.

Plan C. Unfortunately, cryonics could also fail, and in that case Plan C is invoked. Of course it is much worse – less reliable and less proven. Plan C is so-called digital immortality, where a person could be returned to life based on recorded information about them. It is not a particularly good plan, because we are not sure how to solve the identity problem that will arise, and we don’t know whether the amount of information collected would be enough. But it is still better than nothing.

Plan D. Lastly, if Plan C fails, we have Plan D. It is not a plan in fact, it is just hope or a bet that immortality already exists somehow: perhaps there is quantum immortality, or perhaps future AI will bring us back to life.

The first three plans demand particular actions now: we need to prepare for all of them simultaneously. All of the plans will lead to the same result: our minds will be uploaded into a computer with help of highly developed AI.

The plans could also help each other. Digital immortality data may help to fill any gaps in the memory of a cryopreserved person. Cryonics also raises the chance that quantum immortality will result in something useful: you have a better chance of being cryopreserved and successfully revived than of living naturally until you are 120 years old.

After you have become immortal with the help of Friendly AI you might exist until the end of the Universe or even beyond – see my map “How to prevent the end of the Universe”.

A map of currently available methods of life extension is a sub-map of this one and will be published later.

The map was made in collaboration with Maria Konovalenko and Michael Batin, and an earlier version was presented in August 2014 at Aubrey de Grey’s Rejuvenation Biotechnology conference.

Pdf of the map is here

Previous posts:

AGI Safety Solutions Map

A map: AI failures modes and levels

A Roadmap: How to Survive the End of the Universe

A map: Typology of human extinction risks

Roadmap: Plan of Action to Prevent Human Extinction Risks


Greg Egan and the Incomprehensible

16 XiXiDu 19 May 2011 10:38AM

In this post I examine a disagreement between Eliezer Yudkowsky and science fiction author Greg Egan.

In his post Complex Novelty, Eliezer Yudkowsky wrote in 2008:

Note that Greg Egan seems to explicitly believe the reverse - that humans can understand anything understandable - which explains a lot.

An interview with Greg Egan in 2009 confirmed this to be true:

I think there’s a limit to this process of Copernican dethronement: I believe that humans have already crossed a threshold that, in a certain sense, puts us on an equal footing with any other being who has mastered abstract reasoning. There’s a notion in computing science of “Turing completeness”, which says that once a computer can perform a set of quite basic operations, it can be programmed to do absolutely any calculation that any other computer can do. Other computers might be faster, or have more memory, or have multiple processors running at the same time, but my 1988 Amiga 500 really could be programmed to do anything my 2008 iMac can do — apart from responding to external events in real time — if only I had the patience to sit and swap floppy disks all day long. I suspect that something broadly similar applies to minds and the class of things they can understand: other beings might think faster than us, or have easy access to a greater store of facts, but underlying both mental processes will be the same basic set of general-purpose tools. So if we ever did encounter those billion-year-old aliens, I’m sure they’d have plenty to tell us that we didn’t yet know — but given enough patience, and a very large notebook, I believe we’d still be able to come to grips with whatever they had to say.

The theoretical computer scientist Scott Aaronson wrote in a post titled 'The Singularity Is Far':

The one notion I have real trouble with is that the AI-beings of the future would be no more comprehensible to us than we are to dogs (or mice, or fish, or snails).  After all, we might similarly expect that there should be models of computation as far beyond Turing machines as Turing machines are beyond finite automata.  But in the latter case, we know the intuition is mistaken.  There is a ceiling to computational expressive power.  Get up to a certain threshold, and every machine can simulate every other one, albeit some slower and others faster.

An argument that is often mentioned is the relatively small difference between chimpanzees and humans. But that huge effect, the increase in intelligence, seems more like an outlier than the rule. Take, for example, the evolution of echolocation: it appears to have been a gradual process with no obvious quantum leaps. The same can be said about eyes and other features of biological agents that are products of natural evolution.

Is it reasonable to assume that such quantum leaps are the rule, based on a single case study? Are there other animals that are vastly more intelligent than their immediate predecessors?

What reason do we have to believe that a level above that of a standard human, one as incomprehensible to us as higher mathematics is to chimps, exists at all? And even if such a level is possible, what reason do we have to believe that an artificial general intelligence could repeatedly lift itself to a level that is incomprehensible from the level below it?

To be clear, I do not doubt the possibility of superhuman AI or EM's. I do not doubt the importance of "friendliness"-research and that it will have to be solved before we invent (discover?) superhuman AI. But I lack the expertise to conclude that there are levels of comprehension that are not even fathomable in principle.

In Complexity and Intelligence, Eliezer wrote:

If you want to print out the entire universe from the beginning of time to the end, you only need to specify the laws of physics.

If we were able to specify the laws of physics, and one of the effects of computing them turned out to be a superhuman intelligence that is incomprehensible to us, what would 'incomprehensible' mean in this context?

I can imagine quite a few ways in which a normal human being might fail to comprehend the workings of another being. One example can be found in the previously mentioned article by Scott Aaronson:

Now, it’s clear that a human who thought at ten thousand times our clock rate would be a pretty impressive fellow.  But if that’s what we’re talking about, then we don’t mean a point beyond which history completely transcends us, but “merely” a point beyond which we could only understand history by playing it in extreme slow motion.

Mr. Aaronson also provides another fascinating example in an unrelated post ('The T vs. HT (Truth vs. Higher Truth) problem'):

P versus NP is the example par excellence of a mathematical mystery that human beings lacked the language even to express until very recently in our history.

Those two examples provide evidence for the possibility that even beings who are fundamentally on the same level might yet fail to comprehend each other.

An agent might simply be more knowledgeable or lack certain key insights. Conceptual revolutions are intellectually and technologically enabling to the extent that they seemingly spawn quantum leaps in the ability to comprehend certain problems.

Faster access to more information, differences in upbringing, education, culture and environment, and dumb luck might also separate agents with similar potential from each other to such an extent that they appear to reside on different levels. But even the smartest humans are dwarfs standing on the shoulders of giants. Sometimes the time is simply ripe, thanks to previous discoveries of unknown unknowns.

As mentioned by Scott Aaronson, the ability to think faster, but also the possibility to think deeper by storing more data in one's memory, might cause the appearance of superhuman intelligence and incomprehensible insight.

Yet all of the above merely hints at the possibility that human intelligence can be amplified and that we can become more knowledgeable. But with enough time, standard humans could accomplish the same.

What would it mean for an intelligence to be genuinely incomprehensible? Where do Eliezer Yudkowsky and Greg Egan disagree?

Complex Novelty

26 Eliezer_Yudkowsky 20 December 2008 12:31AM

From Greg Egan's Permutation City:

    The workshop abutted a warehouse full of table legs—one hundred and sixty-two thousand, three hundred and twenty-nine, so far.  Peer could imagine nothing more satisfying than reaching the two hundred thousand mark—although he knew it was likely that he'd change his mind and abandon the workshop before that happened; new vocations were imposed by his exoself at random intervals, but statistically, the next one was overdue.  Immediately before taking up woodwork, he'd passionately devoured all the higher mathematics texts in the central library, run all the tutorial software, and then personally contributed several important new results to group theory—untroubled by the fact that none of the Elysian mathematicians would ever be aware of his work.  Before that, he'd written over three hundred comic operas, with librettos in Italian, French and English—and staged most of them, with puppet performers and audience.  Before that, he'd patiently studied the structure and biochemistry of the human brain for sixty-seven years; towards the end he had fully grasped, to his own satisfaction, the nature of the process of consciousness.  Every one of these pursuits had been utterly engrossing, and satisfying, at the time.  He'd even been interested in the Elysians, once.
    No longer.  He preferred to think about table legs.

Among science fiction authors, (early) Greg Egan is my favorite; of early-Greg-Egan's books, Permutation City is my favorite; and this particular passage in Permutation City, more than any of the others, I find utterly horrifying.

If this were all the hope the future held, I don't know if I could bring myself to try.  Small wonder that people don't sign up for cryonics, if even SF writers think this is the best we can do.

You could think of this whole series on Fun Theory as my reply to Greg Egan—a list of the ways that his human-level uploaded civilizations Fail At Fun.  (And yes, this series will also explain what's wrong with the Culture and how to fix it.)

continue reading »

Map:Territory::Uncertainty::Randomness – but that doesn’t matter, value of information does.

6 Davidmanheim 22 January 2016 07:12PM

In risk modeling, there is a well-known distinction between aleatory and epistemic uncertainty, which is sometimes referred to, or thought of, as irreducible versus reducible uncertainty. Epistemic uncertainty exists in our map; as Eliezer put it, “The Bayesian says, ‘Uncertainty exists in the map, not in the territory.’” Aleatory uncertainty, however, exists in the territory. (Well, at least according to our quantum-mechanical map, via Bell's Theorem – like, say, the time at which a radioactive atom decays.) This is what people call quantum uncertainty, indeterminism, true randomness, or recently (and somewhat confusingly to myself) ontological randomness – referring to the fact that our ontology allows randomness, not that the ontology itself is in any way random. It may be better, in Less Wrong terms, to think of uncertainty versus randomness – while being aware that the wider world refers to both as uncertainty. But does the distinction matter?

To clarify a key point: many facts that are treated as random, such as dice rolls, are actually mostly uncertain – in that with enough physics modeling and inputs, we could predict them. On the other hand, in chaotic systems, there is the possibility that the “true” quantum randomness can propagate upwards into macro-level uncertainty. For example, a sphere of highly refined and shaped uranium that is *exactly* at the critical mass will set off a nuclear chain reaction, or not, based on the quantum physics of whether the neutrons from one of the first few decays set off a chain reaction – after enough atoms decay, the sphere drops below the critical mass and becomes increasingly unlikely to set off a chain reaction. Of course, the question of whether the sphere is above or below the critical mass (given its geometry, etc.) can be a difficult-to-measure uncertainty, but it’s not aleatory – though some part of the question of whether it kills the guy trying to measure whether it’s just above or just below the critical mass will be random – so maybe it’s not worth finding out. And that brings me to the key point.

In a large class of risk problems, there are factors treated as aleatory – but they may be epistemic, just at a level where finding the “true” factors and outcomes is prohibitively expensive. Potentially, the timing of an earthquake that would happen at some point in the future could be determined exactly via a simulation of the relevant data. Why is it considered aleatory by most risk analysts? Well, doing it might require a destructive, currently technologically impossible deconstruction of the entire earth – making the earthquake irrelevant. We would start with measurement of the position, density, and stress of each relatively macroscopic structure, and then perform a very large physics simulation of the earth as it had existed beforehand. (We have lots of silicon from deconstructing the earth, so I’ll just assume we can now build a big enough computer to simulate this.) Of course, this is not worthwhile – but doing so would potentially show that the actual aleatory uncertainty involved is negligible. Or it could show that we need to model the macroscopically chaotic system to such a high fidelity that microscopic, fundamentally indeterminate factors actually matter – and it was truly aleatory uncertainty. (So we have epistemic uncertainty about whether it’s aleatory; if our map were of high enough fidelity, and were computable, we would know.)

It turns out that most of the time, for the types of problems being discussed, this distinction is irrelevant. If we know that the value of the information needed to determine whether something is aleatory or epistemic is negative, we can treat the uncertainty as randomness. (And usually we can figure this out via a quick order-of-magnitude calculation: the value of perfect information about which side the die lands on in this game is estimated at $100, building and testing/validating any model for predicting it would take me at least 10 hours, and my time is worth at least $25/hour, so the net value is negative.) But sometimes slightly improved models, and slightly better data, are feasible – and then it is worth checking whether there is some epistemic uncertainty that we can pay to reduce. In fact, for earthquakes, we’re doing exactly that – we have monitoring systems that can give several minutes of warning, and geological models that can predict, to some degree of accuracy, the relative likelihood of different-sized quakes.
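
As a concrete version of that back-of-the-envelope check, here is a minimal sketch in Python; the dollar figures are the illustrative numbers from the paragraph above, not measured data:

    # Quick "is this uncertainty worth paying to reduce?" check.
    def net_value_of_information(value_of_perfect_info, modeling_hours, hourly_rate):
        """Value of resolving the uncertainty, minus the cost of building a model to do so."""
        return value_of_perfect_info - modeling_hours * hourly_rate

    net = net_value_of_information(value_of_perfect_info=100,  # knowing the die result is worth ~$100
                                   modeling_hours=10,          # time to build/test/validate a predictive model
                                   hourly_rate=25)             # value of the analyst's time, in $/hour
    print(net)  # -150: negative, so treat the outcome as randomness rather than pay to model it

A positive result, by contrast, is the signal that some of what looked like randomness is epistemic uncertainty worth paying to reduce.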

So, in conclusion: most uncertainty is lack of resolution in our map, which we can call epistemic uncertainty. This is true even if lots of people call it “truly random” or irreducibly uncertain – or, if they are being fancy, aleatory uncertainty. Some of what we assume is uncertainty is really randomness. But lots of the epistemic uncertainty can be safely treated as aleatory randomness, and value of information is what actually makes the difference. And knowing the terminology used elsewhere can be helpful.

Very Basic Model Theory

22 So8res 31 October 2013 07:06AM

In this post I'll discuss some basic results of model theory. It may be helpful to read through my previous post if you haven't yet. Model theory is an implicit context for the Highly Advanced Epistemology sequence and for a few of the recent MIRI papers, so casual readers may find this brief introduction useful. And who knows, maybe it will pique your interest:

A tale of two logics

Propositional logic is the "easy logic", built from basic symbols and the connectives "and" and "not". Remember that all other connectives can be built from these two: With Enough NAND Gates You Can Rule The World and all that. Propositional logic is sometimes called the "sentential logic", because it's not like any other logics are "of or relating to sentences" (/sarcasm).
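
The post states this without an example; as a minimal sketch (plain Python, with "and" and "not" modeled as functions, not code from the post), here is how the other connectives fall out of those two:

    # A tiny illustration, not from the post: every other connective can be
    # defined in terms of "and" and "not".
    def NOT(p): return not p
    def AND(p, q): return p and q

    def OR(p, q):       # p or q  ==  not(not p and not q), by De Morgan
        return NOT(AND(NOT(p), NOT(q)))

    def IMPLIES(p, q):  # p -> q  ==  not(p and not q)
        return NOT(AND(p, NOT(q)))

    # Truth-table check against Python's built-in operators.
    for p in (False, True):
        for q in (False, True):
            assert OR(p, q) == (p or q)
            assert IMPLIES(p, q) == ((not p) or q)

The NAND remark is the same trick one step further down: NAND alone suffices, since "not p" is "p NAND p" and "p and q" is "not (p NAND q)".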

First-order logic is the "nice logic". It has quantifiers ("there exists", "for all") and an internal notion of equality. Its sentences contain constants, functions, and relations. This lets you say lots of cool stuff that you can't say in propositional logic. First-order logic turns out to be quite friendly (as we'll see below). However, it's not strong enough to pin down certain crazy/contrived ideas that humans cook up (such as "the numbers").

There are many other logics available (second-order logic, AKA "the heavy guns"; ω-logic, AKA "please just please can I talk about numbers"; and many more). In this post we'll focus on propositional and first-order logics.

continue reading »

Anxiety and Rationality

32 helldalgo 19 January 2016 06:30PM

Recently, someone on the Facebook page asked if anyone had used rationality to target anxieties.  I have, so I thought I’d share my LessWrong-inspired strategies.  This is my first post, so feedback and formatting help are welcome.  

First things first: the techniques developed by this community are not a panacea for mental illness.  They are way more effective than chance and other tactics at reducing normal bias, and I think many mental illnesses are simply cognitive biases that are extreme enough to get noticed.  In other words, getting a probability question about cancer systematically wrong does not disrupt my life enough to make the error obvious.  When I believe (irrationally) that I will get fired because I asked for help at work, my life is disrupted.  I become non-functional, and the error is clear.

Second: the best way to attack anxiety is to do the things that make your anxieties go away.  That might seem too obvious to state, but I’ve definitely been caught in an “analysis loop,” where I stay up all night reading self-help guides only to find myself non-functional in the morning because I didn’t sleep.  If you find that attacking an anxiety with Bayesian updating is like chopping down the Washington monument with a spoon, but getting a full night’s sleep makes the monument disappear completely, consider the sleep.  Likewise for techniques that have little to no scientific evidence, but are a good placebo.  A placebo effect is still an effect.

Finally, like all advice, this comes with Implicit Step Zero:  “Have enough executive function to give this a try.”  If you find yourself in an analysis loop, you may not yet have enough executive function to try any of the advice you read.  The advice for functioning better is not always identical to the advice for functioning at all.  If there’s interest in an “improving your executive function” post, I’ll write one eventually.  It will be late, because my executive function is not impeccable.

Simple updating is my personal favorite for attacking specific anxieties.  A general sense of impending doom is a very tricky target and does not respond well to reality.  If you can narrow it down to a particular belief, however, you can amass evidence against it. 

Returning to my example about work: I alieved that I would get fired if I asked for help or missed a day due to illness.  The distinction between belief and alief is an incredibly useful tool that I integrated immediately when I heard of it.  Learning to make beliefs pay rent is much easier than making harmful aliefs go away.  The tactics are similar: do experiments, make predictions, throw evidence at the situation until you get closer to reality.  Update accordingly.

The first thing I do is identify the situation and why it’s dysfunctional.  The alief that I’ll get fired for asking for help is not actually articulated when it manifests as an anxiety.  Ask me in the middle of a panic attack, and I still won’t articulate that I am afraid of getting fired.  So I take the anxiety all the way through to its implication.  The algorithm is something like this:

  1. Notice sense of doom
  2. Notice my avoidance behaviors (not opening my email, walking away from my desk)
  3. Ask “What am I afraid of?”
  4. Answer (it's probably silly)
  5. Ask “What do I think will happen?”
  6. Make a prediction about what will happen (usually the prediction is implausible, which is why we want it to go away in the first place)

In the “asking for help” scenario, the answer to “what do I think will happen” is implausible.  It’s extremely unlikely that I’ll get fired for it!  This helps take the gravitas out of the anxiety, but it does not make it go away.*  After (6), it’s usually easy to do an experiment.  If I ask my coworkers for help, will I get fired?  The only way to know is to try. 

…That’s actually not true, of course.  A sense of my environment, my coworkers, and my general competence at work should be enough.  But if it was, we wouldn’t be here, would we?

So I perform the experiment.  And I wait.  When I receive a reply of any sort, even if it’s negative, I make a tick mark on a sheet of paper.  I label it “didn’t get fired.”  Because again, even if it’s negative, I didn’t get fired. 

This takes a lot of tick marks.  Cutting down the Washington monument with a spoon, remember?

The tick marks don’t have to be physical.  I prefer it, because it makes the “updating” process visual.  I’ve tried making a mental note and it’s not nearly as effective.  Play around with it, though.  If you’re anything like me, you have a lot of anxieties to experiment with. 

Usually, the anxiety starts to dissipate after obtaining several tick marks.  Ideally, one iteration of experiments should solve the problem.  But we aren’t ideal; we’re mentally ill.  Depending on the severity of the anxiety, you may need someone to remind you that doom will not occur.  I occasionally panic when I have to return to work after taking a sick day.  I ask my husband to remind me that I won’t get fired.  I ask him to remind me that he’ll still love me if I do get fired.  If this sounds childish, it’s because it is.  Again: we’re mentally ill.  Even if you aren’t, however, assigning value judgements to essentially harmless coping mechanisms does not make sense.  Childish-but-helpful is much better than mature-and-harmful, if you have to choose.

I still have tiny ugh fields around my anxiety triggers.  They don’t really go away.  It’s more like learning not to hit someone you’re angry at.  You notice the impulse, accept it, and move on.  Hopefully, your harmful alief starves to death.

If you perform your experiment and doom does occur, it might not be you.  If you can’t ask your boss for help, it might be your boss.  If you disagree with your spouse and they scream at you for an hour, it might be your spouse.  This isn’t an excuse to blame your problems on the world, but abusive situations can be sneaky.  Ask some trusted friends for a sanity check, if you’re performing experiments and getting doom as a result.  This is designed for situations where your alief is obviously silly.  Where you know it’s silly, and need to throw evidence at your brain to internalize it.  It’s fine to be afraid of genuinely scary things; if you really are in an abusive work environment, maybe you shouldn’t ask for help (and start looking for another job instead). 

 

 

*using this technique for several months occasionally stops the anxiety immediately after step 6.  
