Meetup : Bath, UK: Agreement, practical meetups, and report from last meetup

3 KnaveOfAllTrades 30 October 2014 09:24AM

Discussion article for the meetup : Bath, UK: Agreement, practical meetups, and report from last meetup

WHEN: 02 November 2014 02:00:00PM (+0000)

WHERE: 5-10 James St W, Avon, Bath BA1 2BX

Bath, UK will be having its second meetup this Sunday 2nd November at 14:00 in the King of Wessex, which is a Wetherspoons pub in the city. I shall wait at least ninety minutes (i.e. until 15:30) for the first arrivals.

I'll put a sheet featuring a paperclip and saying 'Less Wrong' on the table so you know you've found us. Make sure you venture into the pub, since there's no guarantee our table will be near the door.

In case you need to contact me (e.g. if the venue is unexpectedly busy and we have to move elsewhere and you can't find us), my mobile number is the product 3 x 3 x 23 x 97 x 375127, preceded by a zero (so eleven digits total).

We have a Facebook group.

At the start we'll chat for a bit, then move on to an agreement exercise: Unlike last meetup, where we made predictions for our own PredictionBook accounts somewhat independently, without necessarily sharing all our information, this time we shall try to reach consensus on our probabilities and then see how that consensus is calibrated, by means of a single PredictionBook account for the meetup group.
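As a concrete (and entirely hypothetical) illustration of what 'seeing how the consensus is calibrated' amounts to, here is a minimal Python sketch: bucket the logged probabilities, then compare each bucket's stated confidence with the fraction of its predictions that came true. The sample data are invented.

```python
from collections import defaultdict

def calibration_table(predictions):
    """Group (probability, outcome) pairs into confidence buckets and
    report each bucket's size and observed hit rate."""
    buckets = defaultdict(list)
    for prob, correct in predictions:
        # Fold probabilities below 50% onto the statement's negation,
        # so every entry is a confidence in the 50-100% range.
        if prob < 0.5:
            prob, correct = 1 - prob, not correct
        buckets[round(prob, 1)].append(correct)
    return {
        conf: (len(outcomes), sum(outcomes) / len(outcomes))
        for conf, outcomes in sorted(buckets.items())
    }

# Hypothetical meetup data: (stated probability, whether it came true).
sample = [(0.9, True), (0.9, True), (0.9, False), (0.6, True), (0.3, False)]
print(calibration_table(sample))
```

A well-calibrated group's 90% bucket should come true about 90% of the time once there are enough predictions; with only a handful per bucket, as above, the hit rates are too noisy to read much into.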

After that, we shall discuss ideas for future meetups and activities. In particular, we shall discuss how we can move forward with practical meetups and instrumental rationality, and how to balance this with discussion and 'abstract' or epistemic stuff.

====Previous meetup (2014-10-19 Sunday)====

It went well. There was me, someone who tagged along with me, someone from Bristol, and someone from Bradford-on-Avon. I think everyone had arrived by 14:30, and we probably stayed until 17:30 or 18:00. (We stayed long enough that we all got something to eat in the pub.)

We got to know each other a bit, then did 15 predictions. The previous night, I had prepared a list of prompts for things to make predictions about, ranging from things where I thought we might have very high (or low) confidences, to things where I expected most of the attendees would be basically indifferent (e.g. whether an even or odd number of elements have been observed, whether the density of water is above or below 1 kg per litre, etc.).

We skipped some of the ambiguous prompts, and for a couple we had to sort of figure out what we'd use to judge the prediction midway through. I'd state the prompt, then where necessary we'd pin it down into something we could judge objectively enough. There might be some brief discussion, but we weren't trying to share all our information. I would type in (but not submit or write my probability for) the final wording of the prediction on PredictionBook. At a suitable point, when everyone understood what we were predicting and how it would be judged, I would give 90 seconds for everyone to stop communicating and log their final probabilities. I'd then type in my probability and create the prediction on PredictionBook, then we'd go round stating the probability we'd written down.

Some of the prompts were intentionally underspecified. For example, the first prediction was about Wladimir Klitschko's mass. In that case we each independently (to avoid priming) wrote down a figure (after explaining who he is, of course). Then we took the usual mean of the figures to obtain a 'wisdom of crowds' estimate and used that as the mass for the prediction.

(If you're worried that averaging the guesses would lead everyone to put 50% probability on the proposition, then you can shift by some amount to encourage more extreme confidences. But remember that it's still useful to test calibration at the 50% level!)

That was one of the cases where we had to decide partway through how we were going to judge the prediction, since we realised his mass would fluctuate a lot depending on e.g. whether he'd cut weight for a weigh-in. We agreed that if Google gave a unique figure and it seemed plausible, then we'd go with that. I'm not certain, but I don't think we actually shifted the average in this case, and the mean of our initial guesses turned out to be exactly correct (110kg).

In some cases, where the initial estimates varied wildly, I suggested we use a 'logarithmic average', i.e. take the exponential of the mean of the logarithms of the estimates: exp(arithmetic_mean(log(estimates))).
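To make the two pooling rules concrete, here is a minimal Python sketch (the guesses are invented for illustration) of the plain average versus the logarithmic average, which is just the geometric mean:

```python
import math

def arithmetic_pool(estimates):
    """Plain 'wisdom of crowds' average of independent guesses."""
    return sum(estimates) / len(estimates)

def log_pool(estimates):
    """'Logarithmic average': exp of the mean of the logs, i.e. the
    geometric mean. Much less sensitive to a single wild over-estimate."""
    return math.exp(sum(math.log(x) for x in estimates) / len(estimates))

# Hypothetical guesses at some quantity, with one wild outlier.
guesses = [100, 110, 120, 1000]
print(arithmetic_pool(guesses))  # dominated by the outlier
print(log_pool(guesses))         # stays close to the cluster of guesses
```

The geometric mean only works for positive quantities, which is fine for masses and the like.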

I had my laptop and used the pub's Wi-Fi to create the predictions on PredictionBook with my estimate. The others made a note of their probabilities. After each prediction, we checked the prediction and I marked it Right or Wrong on PredictionBook accordingly. When they got home, each of the others who attended then marked the prediction Unknown, then submitted their probability from earlier, then re-marked the prediction as Right or Wrong.

Meetup : Bath: Introduction and PredictionBook

2 KnaveOfAllTrades 10 October 2014 08:36PM

Discussion article for the meetup : Bath: Introduction and PredictionBook

WHEN: 19 October 2014 02:00:00PM (+0100)

WHERE: 5-10 James St W, Avon, Bath BA1 2BX

I'll be hosting a meetup for Bath, UK on Sunday 19th October at 14:00.
The meetup will be held at the King of Wessex, which is a Wetherspoons pub in the city. I'll wait at least ninety minutes after the 14:00 start for the first arrivals. I'll put a Less Wrong paperclip print-out on the table so you can identify me. In case you need to contact me (e.g. if the venue is unexpectedly busy and we have to move elsewhere and you can't find us), my mobile number is the product 3 x 3 x 23 x 97 x 375127, preceded by a zero (so eleven digits total).

Since this is the first meetup, we'll start off with introductions and chit-chat. I will also formulate and bring along (but not check the veracity of) some propositions for us to place probabilities on, as calibration training. I recommend you create and play around with a PredictionBook account in advance of the meetup, to get to grips with it and so that we can discuss any questions about it on the day. (Why not register right now? It only takes one or two minutes.) Bonus points if you bring a device to log your predictions on PredictionBook as we go along, and as a back-up in case my laptop dies. (The venue has free Wi-Fi that you can register for in a few minutes.)

Overly convenient clusters, or: Beware sour grapes

22 KnaveOfAllTrades 02 September 2014 04:04AM

Related to: Policy Debates Should Not Appear One-Sided

There is a well-known fable which runs thus:

“Driven by hunger, a fox tried to reach some grapes hanging high on the vine but was unable to, although he leaped with all his strength. As he went away, the fox remarked 'Oh, you aren't even ripe yet! I don't need any sour grapes.' People who speak disparagingly of things that they cannot attain would do well to apply this story to themselves.”

This gives rise to the common expression ‘sour grapes’, referring to a situation in which one incorrectly claims to not care about something to save face or feel better after being unable to get it.

This seems to be related to a general phenomenon, in which motivated cognition leads one to flinch away from the prospect of an action that is inconvenient or painful in the short term by concluding that a less-painful option strictly dominates the more-painful one.

In the fox’s case, the allegedly-dominating option is believing (or professing) that he did not want the grapes. This spares him the pain of feeling impotent in face of his initial failure, or the embarrassment of others thinking him to have failed. If he can’t get the grapes anyway, then he might as well erase the fact that he ever wanted them, right? The problem is that considering this line of reasoning will make it more tempting to conclude that the option really was dominating—that he really couldn’t have gotten the grapes. But maybe he could’ve gotten the grapes with a bit more work—by getting a ladder, or making a hook, or Doing More Squats in order to Improve His Vert.

The fable of the fox and the grapes doesn’t feel like a perfect fit, though, because the fox doesn’t engage in any conscious deliberation before giving up on sour grapes; the whole thing takes place subconsciously. Here are some other examples that more closely illustrate the idea of conscious rationalization by use of overly convenient partitions:

The Seating Fallacy:

“Be who you are and say what you feel, because those who mind don't matter and those who matter don't mind.”

This advice is neither good in full generality nor bad in full generality. Clearly there are some situations where a person worries too much about other people judging them, or is anxious about inconveniencing others without taking their own preferences into account. But there are also clearly situations (like dealing with an unpleasant, incompetent boss) where fully exposing oneself or saying whatever comes into one's head is not strategic, and can be outright disastrous. Without taking into account the specifics of the recipient's situation, the advice is of limited use.

It is convenient to absolve oneself of blame by writing off anybody who challenges one's first impulse as someone who ‘doesn’t matter’; it means that if something goes wrong, one can avoid the painful task of analysing and modifying one’s behaviour.

In particular, we have the following corollary:

The Fundamental Fallacy of Dating:

“Be yourself and don’t hide who you are. Be up-front about what you want. If it puts your date off, then they wouldn’t have been good for you anyway, and you’ve dodged a bullet!”

In the short-term it is convenient to not have to filter or reflect on what one says (face-to-face) or writes (online dating). In the longer term, having no filter is not a smart way to approach dating. As the biases and heuristics program has shown, people are often mistaken about what they would prefer under reflection, and are often inefficient and irrational in pursuing what they want. There are complicated courtship conventions governing timelines for revealing information about oneself and negotiating preferences, that have evolved to work around these irrationalities, to the benefit of both parties. In particular, people are dynamically inconsistent, and willing to compromise a lot more later on in a courtship than they thought they would earlier on; it is often a favour to both of you to respect established boundaries regarding revealing information and getting ahead of the current stage of the relationship.

For those who have not much practised the skill of avoiding triggering Too Much Information reactions, it can feel painful and disingenuous to even try changing their behaviour, and they rationalise it via the Fundamental Fallacy. At any given moment, changing this behaviour is painful and causes a flinch reaction, even though the value of information of trying a different approach might be very high, and might cause less pain (e.g. through reduced loneliness) in the long term.

We also have:

PR rationalization and incrimination:

“There’s already enough ammunition out there if anybody wants to assassinate my character, launch a smear campaign, or perform a hatchet job. Nothing I say at this point could make it worse, so there’s no reason to censor myself.”

This is an overly convenient excuse. It does not take into account, for example, that new statements provide a new opportunity for one to come to the attention of quote miners in the first place, or that different statements might be more or less easy to seed a smear campaign; ammunition can vary in type and accessibility, so that adding more can increase the convenience of a hatchet job. It might turn out, after weighing the costs and benefits, that speaking honestly is the right decision. But one can’t know that on the strength of a convenient deontological argument that doesn’t consider those costs. Similarly:

“I’ve already pirated so much stuff I’d be screwed if I got caught. Maybe it was unwise and impulsive at first, but by now I’m past the point of no return.”

This again fails to take into account the increased risk of one’s deeds coming to attention: if most prosecutions are caused by (even if not purely about) offences shortly before the prosecution, and you expect to pirate long into the future, then your position now is the same as when you first pirated; if it was unwise then, it’s unwise now.

~~~~

The common fallacy in all these cases is that one looks at only the extreme possibilities, and throws out the inconvenient, ambiguous cases. This results in a disconnected space of possibilities that is engineered to allow one to prove a convenient conclusion. For example, the Seating Fallacy throws out the possibility that there are people who mind but also matter; the Fundamental Fallacy of Dating prematurely rules out people who are dynamically inconsistent or are imperfect introspectors, or who have uncertainty over preferences; PR rationalization fails to consider marginal effects and quantify risks in favour of a lossy binary approach.

What are other examples of situations where people (or Less Wrongers specifically) might fall prey to this failure mode?

Anthropics doesn't explain why the Cold War stayed Cold

6 KnaveOfAllTrades 20 August 2014 07:23PM

(Epistemic status: There are some lines of argument that I haven’t even started here, which potentially defeat the thesis advocated here. I don’t go into them because this is already too long or I can’t explain them adequately without derailing the main thesis. Similarly some continuations of chains of argument and counterargument begun here are terminated in the interest of focussing on the lower-order counterarguments. Overall this piece probably overstates my confidence in its thesis. It is quite possible this post will be torn to pieces in the comments—possibly by my own aforementioned elided considerations. That’s good too.)

I

George VI, King of the United Kingdom, had five siblings. That is, the father of current Queen Elizabeth II had as many siblings as on a typical human hand. (This paragraph is true, and is not a trick; in particular, the second sentence of this paragraph really is trying to disambiguate and help convey the fact in question and relate it to prior knowledge, rather than introduce an opening for some sleight of hand so I can laugh at you later, or whatever fear such a suspiciously simple proposition might engender.)

Let it be known.

II

Exactly one of the following stories is true:

Story One

Recently I hopped on Facebook and saw the following post:

“I notice that I am confused about why a nuclear war never occurred. Like, I think (knowing only the very little I know now) that if you had asked me, at the start of the Cold War or something, the probability that it would eventually lead to a nuclear war, I would've said it was moderately likely. So what's up with that?”


The post had 14 likes. In the comments, the most-Liked explanation was:

“anthropically you are considerably more likely to live in a world where there never was a fullscale nuclear war”

That comment had 17 Likes. The second-most-liked comment that offered an explanation had 4 Likes.

Story Two

MIRI 2014 Summer Matching Challenge and one-off opportunity to donate *for free*

10 KnaveOfAllTrades 03 August 2014 05:58PM

Edit: This post is obsoleted by this post; please see that one instead.

MIRI are currently holding a donation-matching challenge, until Friday 15th August. You can donate and track its progress by going to the Donations page.

Also, to quote the MIRI Facebook page:

Stellar, a long awaited new cryptocurrency and distributed payment network, made by the founder of Mt. Gox and Ripple, just launched.

You can support MIRI for free by signing up. Every new Stellar user gets 6000 STR, and can send an additional 1000 STR to another user for FREE!

Our Stellar username is “miri”.

This is an awesome opportunity to get in on the ground floor of an exciting and promising new digital currency project while also supporting MIRI.

Registering for Stellar requires just a username and password, no e-mail or verification required. To get the free Stellar, you have to have Facebook Platform turned on (see the first setting on this page) and allow the Stellar App to temporarily integrate with your account. Send 1000 STR to 'miri' or their address (gHhshpzDcfRsie2qxjjHqrsTRe3JSCaUeN), and you will get back the 1000 STR. Once you receive back the 1000 STR, you can then remove the Stellar app like any other in your Facebook settings.

Edit: The promotion seems to have been reduced. See these two comments.

Edit2: The promotion seems more-or-less gone now; consider the part of this post about Stellar irrelevant.

I did this and it took me maybe three minutes (including time to take notes). The Stellar website is self-explanatory. If you encounter difficulties at any point in the process, or if I've forgotten some part of the process, feel free to comment on this post to that effect.

To track follow-up on threads like this: If you donate because of this thread, please indicate that in this comment (you can do so anonymously there if you prefer), and feel free to state that you have donated as a reply to that comment.

(If you're thinking free cryptocurrency is too good to be true, and wondering what the founders get out of it: My understanding is the reason this works is that early investment in a cryptocurrency bolsters it, so that giving away some for free at the start is actually a smart thing to do at first for the founders. No gimmicks or tricks about it asking for card details or anything dodgy like that.)

Confused as to usefulness of 'consciousness' as a concept

35 KnaveOfAllTrades 13 July 2014 11:01AM

Years ago, before I had come across many of the power tools in statistics, information theory, algorithmics, decision theory, or the Sequences, I was very confused by the concept of intelligence. Like many, I was inclined to reify it as some mysterious, effectively-supernatural force that tilted success at problem-solving in various domains towards the 'intelligent', and which occupied a scale imperfectly captured by measures such as IQ.

Realising that 'intelligence' (as a ranking of agents or as a scale) was a lossy compression of an infinity of statements about the relative success of different agents in various situations was part of dissolving the confusion; the reason that those called 'intelligent' or 'skillful' succeeded more often was that there were underlying processes that had a greater average tendency to output success, and that greater average success caused the application of the labels.

Any agent can be made to lose by an adversarial environment. But for a fixed set of environments, there might be some types of decision processes that do relatively well over that set of environments than other processes, and one can quantify this relative success in any number of ways.

It's almost embarrassing to write that since put that way, it's obvious. But it still seems to me that intelligence is reified (for example, look at most discussions about IQ), and the same basic mistake is made in other contexts, e.g. the commonly-held teleological approach to physical and mental diseases or 'conditions', in which the label is treated as if—by some force of supernatural linguistic determinism—it *causes* the condition, rather than the symptoms of the condition, in their presentation, causing the application of the labels. Or how a label like 'human biological sex' is treated as if it is a true binary distinction that carves reality at the joints and exerts magical causal power over the characteristics of humans, when it is really a fuzzy dividing 'line' in the space of possible or actual humans, the validity of which can only be granted by how well it summarises the characteristics.

For the sake of brevity, even when we realise these approximations, we often use them without commenting upon or disclaiming our usage, and in many cases this is sensible. Indeed, in many cases it's not clear what the exact, decompressed form of a concept would be, or it seems obvious that there can in fact be no single, unique rigorous form of the concept, but that the usage of the imprecise term is still reasonably consistent and correlates usefully with some relevant phenomenon (e.g. tendency to successfully solve problems). Hearing that one person has a higher IQ than another might allow one to make more reliable predictions about who will have the higher lifetime income, for example.

However, widespread use of such shorthands has drawbacks. If a term like 'intelligence' is used without concern or without understanding of its core (i.e. tendencies of agents to succeed in varying situations, or 'efficient cross-domain optimization'), then it might be used teleologically; the term is reified (the mental causal graph goes from "optimising algorithm->success->'intelligent'" to "'intelligent'->success").

In this teleological mode, it feels like 'intelligence' is the 'prime mover' in the system, rather than a description applied retroactively to a set of correlations. But knowledge of those correlations makes the term redundant; once we are aware of the correlations, the term 'intelligence' is just a pointer to them, and does not add anything to them. Despite this, it seems to me that some smart people get caught up in obsessing about reified intelligence (or measures like IQ) as if it were a magical key to all else.

Over the past while, I have been leaning more and more towards the conclusion that the term 'consciousness' is used in similarly dubious ways, and today it occurred to me that there is a very strong analogy between the potential failure modes of discussion of 'consciousness' and between the potential failure modes of discussion of 'intelligence'. In fact, I suspect that the perils of 'consciousness' might be far greater than those of 'intelligence'.

~

A few weeks ago, Scott Aaronson posted to his blog a criticism of integrated information theory (IIT). IIT attempts to provide a quantitative measure of the consciousness of a system. (Specifically, a nonnegative real number phi). Scott points out what he sees as failures of the measure phi to meet the desiderata of a definition or measure of consciousness, thereby arguing that IIT fails to capture the notion of consciousness.

What I read and understood of Scott's criticism seemed sound and decisive, but I can't shake a feeling that such arguments about measuring consciousness are missing the broader point that all such measures of consciousness are doomed to failure from the start, in the same way that arguments about specific measures of intelligence are missing a broader point about lossy compression.

Let's say I ask you to make predictions about the outcome of a game of half-court basketball between Alpha and Beta. Your prior knowledge is that Alpha always beats Beta at (individual versions of) every sport except half-court basketball, and that Beta always beats Alpha at half-court basketball. From this fact you assign Alpha a Sports Quotient (SQ) of 100 and Beta an SQ of 10. Since Alpha's SQ is greater than Beta's, you confidently predict that Alpha will beat Beta at half-court.

Of course, that would be wrong, wrong, wrong; the SQs encode (or compress) the comparative strengths and weaknesses of Alpha and Beta across various sports, and in particular the fact that Alpha always loses to Beta at half-court. (In fact, if other combinations of results would lead to the same SQs, then *not even that much* information is encoded.) So to look only at the SQs as numbers and use them as your prediction criterion is a knowably inferior strategy to looking at the details of the case in question, i.e. the actual past results of half-court games between the two.
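The way scalar scores discard the head-to-head detail can be shown in a toy Python sketch (the records and the scoring rule here are invented purely for illustration): two different worlds compress to the same pair of scores, so the scores alone cannot answer the half-court question.

```python
# Two hypothetical worlds of head-to-head results across three sports.
# Each entry: sport -> winner between Alpha and Beta.
world_1 = {"tennis": "Alpha", "sprinting": "Alpha", "half-court": "Beta"}
world_2 = {"tennis": "Alpha", "half-court": "Alpha", "sprinting": "Beta"}

def sports_quotient(record, player):
    """Toy scalar 'SQ': just the number of sports the player wins."""
    return sum(1 for winner in record.values() if winner == player)

# Both worlds compress to the same (SQ_Alpha, SQ_Beta) pair...
assert sports_quotient(world_1, "Alpha") == sports_quotient(world_2, "Alpha")
assert sports_quotient(world_1, "Beta") == sports_quotient(world_2, "Beta")

# ...but they disagree about the actual half-court matchup, so the
# scalar scores alone cannot settle the question we care about.
assert world_1["half-court"] != world_2["half-court"]
```

The compression is many-to-one, which is exactly why tabooing the shorthand and consulting the underlying record can only help.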

Since measures like this fictional SQ or actual IQ or fuzzy (or even quantitative) notions of consciousness are at best shorthands for specific abilities or behaviours, tabooing the shorthand should never leave you with less information, since a true shorthand, by its very nature, does not add any information.

When I look at something like IIT, which (if Scott's criticism is accurate) assigns a superhuman consciousness score to a system that evaluates a polynomial at some points, my reaction is pretty much, "Well, this kind of flaw is pretty much inevitable in such an overambitious definition."

Six months ago, I wrote:

"...it feels like there's a useful (but possibly quantitative and not qualitative) difference between myself (obviously 'conscious' for any coherent extrapolated meaning of the term) and my computer (obviously not conscious (to any significant extent?))..."

Mark Friedenbach replied recently (so, a few months later):

"Why do you think your computer is not conscious? It probably has more of a conscious experience than, say, a flatworm or sea urchin. (As byrnema notes, conscious does not necessarily imply self-aware here.)"

I feel like if Mark had made that reply soon after my comment, I might have had a hard time formulating why, but that I would have been inclined towards disputing that my computer is conscious. As it is, at this point I am struggling to see that there is any meaningful disagreement here. Would we disagree over what my computer can do? What information it can process? What tasks it is good for, and for which not so much?

What about an animal instead of my computer? Would we feel the same philosophical confusion over any given capability of an average chicken? An average human?

Even if we did disagree (or at least did not agree) over, say, an average human's ability to detect and avoid ultraviolet light without artificial aids and modern knowledge, this lack of agreement would not feel like a messy, confusing philosophical one. It would feel like one tractable to direct experimentation: blindfold some experimental subjects, some control subjects, and the experimenters, expose the experimental group to ultraviolet light and the control group to other light, and compare the two groups' reactions. Just as, if we were arguing about whether Alpha or Beta is the better athlete, there would be no mystery left over once we'd agreed about their relative abilities at every athletic activity. At most there would be terminological bickering over which scoring rule over athletic activities we should use to measure 'athletic ability', but no disagreement for any fixed measure.

I have been turning it over for a while now, and I am struggling to think of contexts in which consciousness really holds up to attempts to reify it. If asked why it doesn't make sense to politely ask a virus to stop multiplying because it's going to kill its host, a conceivable response might be something like, "Erm, you know it's not conscious, right?" This response might well do the job. But if pressed to cash out this response, what we're really concerned with is the absence of the usual physical-biological processes by which talking at a system might affect its behaviour, so that there is no reason to expect the polite request to increase the chance of the favourable outcome. Sufficient knowledge of physics and biology could make this even more rigorous, and no reference need be made to consciousness.

The only context in which the notion of consciousness seems inextricable from the statement is in ethical statements like, "We shouldn't eat chickens because they're conscious." In such statements, it feels like a particular sense of 'conscious' is being used, one which is *defined* (or at least characterised) as 'the thing that gives moral worth to creatures, such that we shouldn't eat them'. But then it's not clear why we should call this moral criterion 'consciousness'; insomuch as consciousness is about information processing or understanding an environment, it's not obvious what connection this has to moral worth. And insomuch as consciousness is the Magic Token of Moral Worth, it's not clear what it has to do with information processing.

If we relabelled zxcv=conscious and rewrote, "We shouldn't eat chickens because they're zxcv," then this makes it clearer that the explanation is not entirely satisfactory; what does zxcv have to do with moral worth? Well, what does consciousness have to do with moral worth? Conservation of argumentative work and the usual prohibitions on equivocation apply: You can't introduce a new sense of the word 'conscious' then plug it into a statement like "We shouldn't eat chickens because they're conscious" and dust your hands off as if your argumentative work is done. That work is done only if one's actual values and the definition of consciousness to do with information processing already exactly coincide, and this coincidence is known. But it seems to me like a claim of any such coincidence must stem from confusion rather than actual understanding of one's values; valuing a system commensurate with its ability to process information is a fake utility function.

When intelligence is reified, it becomes a teleological fake explanation; consistently successful people are consistently successful because they are known to be Intelligent, rather than their consistent success causing them to be called intelligent. Similarly consciousness becomes teleological in moral contexts: We shouldn't eat chickens because they are called Conscious, rather than 'these properties of chickens mean we shouldn't eat them, and chickens also qualify as conscious'.

So it is that I have recently been very skeptical of the term 'consciousness' (though grant that it can sometimes be a useful shorthand), and hence my question to you: Have I overlooked any counts in favour of the term 'consciousness'?

Stock phrases

3 KnaveOfAllTrades 06 April 2014 07:32PM

‘Stock phrases’, in the sense I am using it here, refers to established phrases (in the more common, more specific sense), noises, gestures, etc.; they form a canon of well-known signifiers for messages one might want to convey, like the verbalisation ‘I am happy’, or the gesture of nodding in agreement. They can be very useful, because they save communicators the time, effort, and distraction of forming descriptions from existing phrases. Sometimes a stock phrase has been honed so finely that to try to recreate its precise meaning from scratch would not be possible in any practical period of time. As with language in general, novel or less common combinations of stock phrases are more liable to be misinterpreted. (For example, winks, nods, and other individual gestures are generally less ambiguous than chains of gestures.)

To put it another way: Compression is useful because some amount of upfront time and effort (learning meanings of stock phrases) can save a lot of time and effort later (having to construct new stock phrases repeatedly from scratch).

Two considerations that arise from this are over-reliance on the existing canon of stock phrases, and the skill of originating successful new stock phrases. 

With the former, stock phrases are used even in situations where it would be better to construct a phrase not already in the canon. It is very tempting to round off a complex sentiment into the nearest available stock phrase, simply because it is available. For example, saying ‘I’m an atheist’ can be a lot more convenient than saying, ‘I put an effectively-zero, but non-zero, probability on the existence of God’. And in some contexts, the former might genuinely be a useful approximation. But in other contexts, it can lead to spending an hour arguing with someone before you both realise that they have been disputing the claim that you rule out God entirely, rather than your actual position of assigning a very low probability to God's existence. (Of course, this realisation might not end the argument, since some disagreement will probably remain. But it might shorten the argument by a frustrating hour.)

Over-reliance on stock phrases can not only fail to communicate to others, but can actually alter the shape of one’s own aliefs or beliefs. For example, adopting a label as a convenience, when one does not actually endorse all its implications, can cause one to begin advocating for those other implications anyway: “I’m an X now; I guess I have to believe Y and advocate for Z.” Sometimes this is to avoid censure by other people who identify with that label and whose approval one desires, and this might be a stable decision under reflection. But sometimes it’s as simple and undesirable as the social anxiety of, “If I stop using this label because it doesn’t describe me well, then people might point and laugh at me for seeming to change my mind.”

Originating successful stock phrases is important because of how dependent we are on them, as we should be. Neither extreme is best: neither doing everything from scratch on the spot, nor using only the most common stock phrases in the canon; the optimum lies between. So we must depend on stock phrases to some extent, and we depend on them often enough that we should get good at creating new ones to suit our circumstances, and at ensuring they spread to the people with whom we shall need to use them.

Some things that help:

(1) Training the skill of noticing similarities between attempts to communicate, so that opportunities to generalise a new stock phrase are not missed. A common cue is a feeling of dissatisfaction or frustration at having been misunderstood, of the form: ‘There is a general meaning or class of experience here that I have in mind, but the other party does not realise this, and until I can point them to it, we are talking past each other.’

(2) Getting good at coining catchy, memorable phrases or names. This need not be a solo effort; seeking others’ assistance, or going to people who are particularly good at this, are also options. Should we have a Phrase Lab here where we can post requests for help coining and propagating useful phrases? Vote here!

(3) Surrounding oneself with, or having access to, people who are good at absorbing, using, and propagating useful phrases. Or at least avoiding people who are actively bad at these things: some people are scornful of new phrases (possibly a status thing; originating widely-used phrases gains status, so endorsing or using someone else’s phrase can feel like granting them status relative to oneself), and some are snobbish prescriptivists (again partly a status thing) who will shoot down novel suggestions on principle. I suspect an underestimated factor in the Bay Area success story is its unusually high openness to new phrases and jargon, which allows deeper exploration of ideas and systems than the general population’s stock phrases allow.

(4) Related to the above, but worth stating on its own: Surrounding oneself with, or having access to, people who are good at telling you when your phrases are good, and also when they’re crap. It is good to be encouraged when you do well and notice useful categories or clusters, and good to be warned when you are crystallizing a disuseful pattern. Similarly, people who are willing to say, ‘I think this phrase has made everything look like a nail; we should reconsider our usage of it,’ once a phrase has taken off are to be valued.

(Related to my comment on (3): Although there are other factors in the gap, people perhaps underestimate how much of the gap between, say, Eliezer and Yvain and the average LessWronger-who-is-not-a-LessWrong-celebrity comes from their ability to crystallize, describe, and promote useful phrases. For those of us who are not so good at doing all three in one go, LessWrong could probably be more welcoming of requests for assistance or feedback on not-yet-complete phrases or crystallizations. This might not seem like a big advantage, but bear in mind that intelligence correlates very strongly with manipulating patterns, which is what phrases help with, and that while the leverage of using a phrase once is not very high, two or three decades of iterative use of phrases and the resulting positive feedback loop might explain more of the gap than one might initially think.)

Rationalist households: What can London learn from its predecessors?

7 KnaveOfAllTrades 23 August 2013 07:56AM

At our most recent meetup, the London LessWrongers began discussion of setting up one or more houses in the capital. This thread is intended for discussion and advice on planning ‘rationalist households’ and on making them thrive. You can also register your interest in being part of a London, UK rationalist house here.

Those who currently live in or have previously lived in rationalist households, or who have relevant experience, are particularly encouraged to share their experiences, and any data on house setups is most welcome. It would be great if we could get case studies of several rationalist households, to compare approaches and aid other organizers.

We’re considering having a room for visitors and people who are only in the city for part of the year, with an Airbnb-type arrangement for that room at other times. Therefore, we are seeking advice from Airbnb hosts on setting this up, as well as on its advantages and disadvantages.

We would also like to hear about the common pitfalls of group living in order to avoid making basic errors.

Welcome to Less Wrong! (6th thread, July 2013)

21 KnaveOfAllTrades 26 July 2013 02:35AM

If you've recently joined the Less Wrong community, please leave a comment here and introduce yourself. We'd love to know who you are, what you're doing, what you value, how you came to identify as an aspiring rationalist or how you found us. You can skip right to that if you like; the rest of this post consists of a few things you might find helpful. More can be found at the FAQ.
