LessWrong 2.0
Alternate titles: What Comes Next?, LessWrong is Dead, Long Live LessWrong!
You've seen the articles and comments about the decline of LessWrong. Why pay attention to this one? Because this time, I've talked to Nate at MIRI and Matt at Trike Apps about development for LW, and they're willing to make changes and fund them. (I've even found a developer willing to work on the LW codebase.) I've also talked to many of the prominent posters who've left, discussed the decline of LW with them, and pointed out that the coordination problem could be deliberately solved if everyone decided to come back at once. Everyone who responded expressed displeasure that LW had faded and interest in a coordinated return, and often had some material that they thought they could prepare and have ready.
But before we leap into action, let's review the problem.
New censorship: against hypothetical violence against identifiable people
New proposed censorship policy:
Any post or comment which advocates or 'asks about' violence against sufficiently identifiable real people or groups (as opposed to aliens or hypothetical people on trolley tracks) may be deleted, along with replies that also contain the info necessary to visualize violence against real people.
Reason: Talking about such violence makes that violence more probable, and makes LW look bad; and numerous message boards across the Earth censor discussion of various subtypes of proposed criminal activity without anything bad happening to them.
More generally: Posts or comments advocating or 'asking about' violation of laws that are actually enforced against middle-class people (e.g., kidnapping, not anti-marijuana laws) may at the admins' option be censored on the grounds that it makes LW look bad and that anyone talking about a proposed crime on the Internet fails forever as a criminal (i.e., even if a proposed conspiratorial crime were in fact good, there would still be net negative expected utility from talking about it on the Internet; if it's a bad idea, promoting it conceptually by discussing it is also a bad idea; therefore and in full generality this is a low-value form of discussion).
This is not a poll, but I am asking in advance if anyone has non-obvious consequences they want to point out or policy considerations they would like to raise. In other words, the form of this discussion is not 'Do you like this?' - you probably have a different cost function from people who are held responsible for how LW looks as a whole - but rather, 'Are there any predictable consequences we didn't think of that you would like to point out, and possibly bet on with us if there's a good way to settle the bet?'
Yes, a post of this type was just recently made. I will not link to it, since this censorship policy implies that it will shortly be deleted, and reproducing the info necessary to say who was hypothetically targeted and why would be against the policy.
Playing the student: attitudes to learning as social roles
This is a post about something I noticed myself doing this year, although I expect I’ve been doing it all along. It’s unlikely to be something that everyone does, so don’t be surprised if you don’t find this applies to you. It's also an exercise in introspection, i.e. likely to be inaccurate.
Intro
If I add up all the years that I’ve been in school, it amounts to about 75% of my life so far–and at any one time, school has probably been the single activity that I spend the most hours on. I would still guess that 50% or less of my general academic knowledge was actually acquired in a school setting, but school has tests, and grades at the end of the year, and so has provided most of the positive/negative reinforcement related to learning. The ‘attitudes to learning’ that I’m talking about apply in a school setting, not when I’m learning stuff for fun.
Role #1: Overachiever
Up until seventh grade, I didn’t really socialize at school–but once I started talking to people, it felt like I needed a persona, so that I could just act ‘in character’ instead of having to think of things to say from scratch. Being a stereotypical overachiever provided me with easy material for small talk–I could talk about schoolwork to other people who were also overachievers.
Years later, after acquiring actual social skills in the less stereotyped environments of part-time work and university, I play the overachiever more as a way of reducing my anxiety in class. (School was easy for me up until my second year of nursing school, when we started having to do scary things like clinical placements and practical exams, instead of nice safe things like written exams.) If I can talk myself into always being curious and finding everything 'exciting and interesting and cool, I want to do that!!!', I can't find everything scary–or, at the very least, to other people it looks like I'm not scared.
Role #2: Too Cool for School
This isn’t one I’ve played too much, aside from my tendency to put studying for exams as maybe my fourth priority–after work, exercise, and sleep–and still having an A average. (I will still skip class to work a shift at the ER any day, but that doesn’t count–working there is almost more educational than class, in my mind.) As one of my LW Ottawa friends pointed out, there’s a sort of counter-signalling involved in being a ‘lazy’ student–if you can still pull off good grades without doing any work, you must be smart, so people notice this and respect it.
My brother is the prime example of this. He spent grades 9 through 11 alternately sleeping and playing on his iPhone in class, and maintained an average well over 80%. In grade 12 he started paying attention in class and occasionally doing homework, and graduated with, I believe, an average over 95%. He had a reputation throughout the whole school–as someone who was very smart, but also cool.
Role #3: Just Don’t Fail Me!
Weirdly enough, it wasn’t at school that I originally learned this role. As a teenager, I did competitive swimming. The combination of not having outstanding talent for athletics, plus the anxiety that came from my own performance depending on how fast the other swimmers were, made this about 100 times more terrifying than school. At some point I developed a weird sort of underconfidence, the opposite of using ‘Overachiever’ to deal with anxiety. My mind has now created, and made automatic, the following subroutine: “when an adult takes you aside to talk to you about anything related to ‘living up to your potential’, start crying.” I’m not sure what the original logic behind this was: get the adult to stop and pay attention to me? Get them to take me more seriously? Get them to take me less seriously? Or just the fact that I couldn’t stomach the fact of being ordinarily below average at something–I had to be in some way differently below average. Who knows if there was much logic behind it at all?
Having this learned role comes back to bite me now, sometimes–the subroutine gets triggered in any situation that feels too much like my swim coach’s one-on-one pre-competition pep talks. Taekwondo triggers it once in a while. Weirdly enough, being evaluated in clinicals triggers it too–this didn’t originally make much sense, since it’s not competitive in the sense of ‘she wins, I lose.’ I think the associative chain there is through lifeguarding courses–the hands-on evaluation aspect used to be fairly terrifying for my younger self, and my monkey brain puts clinicals and lab evaluations into that category, as opposed to the nice safe category of written exams, where I can safely be Too Cool for School and still get good grades.
The inconvenience of thinking about school this way really jumped out at me this fall. I started my semester of clinicals with a prof who was a) spectacularly non-intimidating compared to some others I’ve had, and b) who liked me from the very start, basically because I raised my hand a lot and answered questions intelligently during our more classroom-y initial orientation. I was all set up for a semester of playing ‘Overachiever’, until, quite near the beginning of the semester, I was suddenly expected to do something that I found scary, and I was tired and scared of looking confident but being wrong, and I fell back on ‘Just Don’t Fail Me!’ My prof was, understandably, shocked and confused as to why I was suddenly reacting to her as ‘the scary adult who has the power to pass or fail me and will definitely fail me unless I’m absolutely perfect, so I had better grovel.’ I think she actually felt guilty about whatever she had done to intimidate me–which was nothing.
Since then I’ve been doing fine, progressing at the same rate as all the other students (maybe it says something about me that this isn’t very satisfying, and even kind of feels like failure in itself...I would like to be progressing faster). That is, until I’m alone with my prof and she tries to give me a pep talk about how I’m obviously very smart and doing fine, so I just need to improve my confidence. Then I start crying. At this point, I’m pretty sure she thinks I should be on anti-depressants–which is problematic in itself, but could be more problematic if she was the kind of prof who might fail me in my clinical for a lack of confidence. There’s no objective reason why I can’t hop back into Overachiever mode, since I managed both my clinicals last spring entirely in that mode. But part of my brain protests: ‘she’s seen you being insecure! She wouldn’t believe you as an overachiever, it would be too out of character!’ It starts to make sense once I stop seeing this behaviour as 'my learning style' and recognize it as a social role that I, at some point, probably subconsciously, decided I ought to play.
Conclusion
The main problem seems to be that my original mental models for social interaction–with adults, mostly–are overly simplistic and don't cut reality at the joints. That's not a huge problem in itself–I have better models now, and most people I meet say I have good communication skills, although I sometimes still come across as 'odd'. The problem is that every once in a while, a situation happens, pattern recognition jumps into play, and whoa, I'm playing 'Just Don't Fail Me'. (It's happened with the other two roles too, but they're less problematic.) Then I can't get out of that role easily, because my social monkey brain is telling me it would be out of character and the other person would think it was weird. This is despite the fact that I no longer consciously care if I come across as weird, as long as people think I'm competent and trustworthy and nice, etc.
Just noticing this has helped a little–I catch my monkey brain and remind it ‘hey, this situation looks similar to Situation X that you created a stereotyped response for, but it’s not Situation X, so how about we just behave like a human being as usual’. Reminding myself that the world doesn’t break down into ‘adults’ and ‘children’–or, if it did once, I’m now on the other side of the divide–also helps. Failing that, I can consciously try to make sure I get into the 'right’ role–Overachiever or Too Cool For School, depending on the situation–and make that my default.
Has anyone else noticed themselves doing something similar? I’m wondering if there are other roles that I play, maybe more subtly, at work or with friends.
The Fabric of Real Things
Followup to: The Useful Concept of Truth
We previously asked:
What rule would restrict our beliefs to just statements that can be meaningful, without excluding a priori anything that could in principle be true?
It doesn't work to require that the belief's truth or falsity make a sensory difference. It's true, but not testable, to say that a spaceship going over the cosmological horizon of an expanding universe does not suddenly blink out of existence. It's meaningful and false, rather than meaningless, to say that on March 22nd, 2003, the particles in the center of the Sun spontaneously arranged themselves into a short-lived chocolate cake. This statement's truth or falsity has no consequences we'll ever be able to test experientially. Nonetheless, it legitimately describes a way reality could be, but isn't; the atoms in our universe could've been arranged like that on March 22nd, 2003, but they weren't.
You can't say that there has to be some way to arrange the atoms in the universe so as to make the claim true or alternatively false. Then the theory of quantum mechanics would be a priori meaningless, because there's no way to arrange atoms to make it true. And if you try to substitute quantum fields instead, well, what if they discover something else tomorrow? And is it meaningless - rather than meaningful and false - to imagine that physicists are lying about quantum mechanics in a grand organized conspiracy?
Since claims are rendered true or false by how-the-universe-is, the question "What claims can be meaningful?" implies the question "What sort of reality can exist for our statements to correspond to?"
If you rephrase it this way, the question probably sounds completely fruitless and pointless, the sort of thing that a philosopher would ponder for years before producing a long, incomprehensible book that would be studied by future generations of unhappy students while being of no conceivable interest to anyone with a real job.
But while deep philosophical dilemmas such as these are never settled by philosophers, they are sometimes settled by people working on a related practical problem which happens to intersect the dilemma. There are a lot of people who think I'm being too harsh on philosophers when I express skepticism about mainstream philosophy; but in this case, at least, history clearly bears out the point. Philosophers have been discussing the nature of reality for literal millennia... and yet the people who first delineated and formalized a critical hint about the nature of reality, the people who first discovered what sort of things seem to be real, were trying to solve a completely different-sounding question.
They were trying to figure out whether you can tell the direction of cause and effect from survey data.
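As a tiny appetizer - a sketch of my own, not part of the original post - here is one statistical footprint that lets you orient some causal arrows from observational data alone. If X and Y both cause Z (a "collider"), then X and Y are independent on their own, but become dependent once you condition on Z; chains and common causes don't show this signature.

```python
import random

# Toy demonstration (illustrative sketch): in a collider X -> Z <- Y,
# X and Y are marginally independent, but conditioning on Z induces
# a dependence between them. Here X and Y are fair coins and Z = X XOR Y.
random.seed(0)
data = []
for _ in range(100_000):
    x = random.random() < 0.5
    y = random.random() < 0.5
    data.append((x, y, x ^ y))  # Z is caused by both X and Y

def p_x(rows):
    """Fraction of rows in which X is true."""
    return sum(x for x, _, _ in rows) / len(rows)

# Unconditionally, learning Y tells you nothing about X:
print(p_x([r for r in data if r[1]]))      # ~0.5
print(p_x([r for r in data if not r[1]]))  # ~0.5

# Conditional on Z = 1, X and Y become perfectly anti-correlated:
z1 = [r for r in data if r[2]]
print(p_x([r for r in z1 if r[1]]))        # ~0.0
print(p_x([r for r in z1 if not r[1]]))    # ~1.0
```

This collider asymmetry is, very roughly, one of the clues that causal discovery methods formalize.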
Please now read Causal Diagrams and Causal Models, which was modularized out so that it could act as a standalone introduction. This post involves some simple math, but causality is so basic to key future posts that it's pretty important to get at least some grasp on the math involved. Once you are finished reading, continue with the rest of this post.
The Useful Idea of Truth
(This is the first post of a new Sequence, Highly Advanced Epistemology 101 for Beginners, setting up the Sequence Open Problems in Friendly AI. For experienced readers, this first post may seem somewhat elementary; but it serves as a basis for what follows. And though it may be conventional in standard philosophy, the world at large does not know it, and it is useful to know a compact explanation. Kudos to Alex Altair for helping in the production and editing of this post and Sequence!)
I remember this paper I wrote on existentialism. My teacher gave it back with an F. She’d underlined true and truth wherever it appeared in the essay, probably about twenty times, with a question mark beside each. She wanted to know what I meant by truth.
-- Danielle Egan
I understand what it means for a hypothesis to be elegant, or falsifiable, or compatible with the evidence. It sounds to me like calling a belief ‘true’ or ‘real’ or ‘actual’ is merely the difference between saying you believe something, and saying you really really believe something.
-- Dale Carrico
What then is truth? A movable host of metaphors, metonymies, and anthropomorphisms: in short, a sum of human relations which have been poetically and rhetorically intensified, transferred, and embellished, and which, after long usage, seem to a people to be fixed, canonical, and binding.
-- Friedrich Nietzsche
The Sally-Anne False-Belief task is an experiment used to tell whether a child understands the difference between belief and reality. It goes as follows:
- The child sees Sally hide a marble inside a covered basket, as Anne looks on.
- Sally leaves the room, and Anne takes the marble out of the basket and hides it inside a lidded box.
- Anne leaves the room, and Sally returns.
- The experimenter asks the child where Sally will look for her marble.
Children under the age of four say that Sally will look for her marble inside the box. Children over the age of four say that Sally will look for her marble inside the basket.
Rationality: Appreciating Cognitive Algorithms
Followup to: The Useful Idea of Truth
It is an error mode, and indeed an annoyance mode, to go about preaching the importance of the "Truth", especially if the Truth is supposed to be something incredibly lofty instead of some boring, mundane truth about gravity or rainbows or what your coworker said about your manager.
Thus it is a worthwhile exercise to practice deflating the word 'true' out of any sentence in which it appears. (Note that this is a special case of rationalist taboo.) For example, instead of saying, "I believe that the sky is blue, and that's true!" you can just say, "The sky is blue", which conveys essentially the same information about what color you think the sky is. Or if it feels different to say "I believe the Democrats will win the election!" than to say, "The Democrats will win the election", this is an important warning of belief-alief divergence.
Try it with these:
- I believe Jess just wants to win arguments.
- It’s true that you weren’t paying attention.
- I believe I will get better.
- In reality, teachers care a lot about students.
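As a toy illustration of the exercise - my own sketch in Python, nothing from the original post - the deflation move is mechanical enough on stock phrasings that a few regular expressions can perform it (real sentences would of course need more than regexes):

```python
import re

# Illustrative patterns for "deflating" truth-talk out of simple
# belief-sentences; the specific phrasings handled are assumptions.
DEFLATION_PATTERNS = [
    (re.compile(r"^I believe (?:that )?(.+?)(?:, and that's true)?[.!]?$"), r"\1."),
    (re.compile(r"^It's true that (.+?)[.!]?$"), r"\1."),
    (re.compile(r"^In reality, (.+?)[.!]?$"), r"\1."),
]

def deflate(sentence: str) -> str:
    """Strip belief/truth wrappers when a known pattern matches."""
    for pattern, repl in DEFLATION_PATTERNS:
        if pattern.match(sentence):
            return pattern.sub(repl, sentence)
    return sentence  # no wrapper recognized; leave unchanged

for s in ["I believe Jess just wants to win arguments.",
          "It's true that you weren't paying attention.",
          "I believe I will get better.",
          "In reality, teachers care a lot about students."]:
    print(deflate(s))
```

If the deflated sentence carries the same information, the truth-talk was doing no work in it.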
If 'truth' is defined by an infinite family of sentences like 'The sentence "the sky is blue" is true if and only if the sky is blue', then why would we ever need to talk about 'truth' at all?
Well, you can't deflate 'truth' out of the sentence "True beliefs are more likely to make successful experimental predictions" because it states a property of map-territory correspondences in general. You could say 'accurate maps' instead of 'true beliefs', but you would still be invoking the same concept.
It's only because most sentences containing the word 'true' are not talking about map-territory correspondences in general, that most such sentences can be deflated.
Now consider - when are you forced to use the word 'rational'?
Skill: The Map is Not the Territory
Followup to: The Useful Idea of Truth (minor post)
So far as I know, the first piece of rationalist fiction - one of only two explicitly rationalist fictions I know of that didn't descend from HPMOR, the other being "David's Sling" by Marc Stiegler - is the Null-A series by A. E. van Vogt. In van Vogt's story, the protagonist, Gilbert Gosseyn, has mostly non-duplicable abilities that you can't pick up and use even if they're supposedly mental - e.g. the ability to use all of his muscular strength in emergencies, thanks to his alleged training. The main explicit-rationalist skill someone could actually pick up from Gosseyn's adventure is embodied in his slogan:
"The map is not the territory."

Sometimes it still amazes me to contemplate that this proverb was invented at some point, and some fellow named Korzybski invented it, and this happened as late as the 20th century. I read van Vogt's story and absorbed that lesson when I was rather young, so to me this phrase sounds like a sheer background axiom of existence.
But as the Bayesian Conspiracy enters into its second stage of development, we must all accustom ourselves to translating mere insights into applied techniques. So:
Meditation: Under what circumstances is it helpful to consciously think of the distinction between the map and the territory - to visualize your thought bubble containing a belief, and a reality outside it, rather than just using your map to think about reality directly? How exactly does it help, on what sort of problem?
Introduction to Connectionist Modelling of Cognitive Processes: a chapter by chapter review
This chapter by chapter review was inspired by Vaniver's recent chapter by chapter review of Causality. As with that review, the intention is not so much to summarize as to help readers determine whether or not they should read the book. Reading the review is in no way a substitute for reading the book.
I first read Introduction to Connectionist Modelling of Cognitive Processes (ICMCP) as part of an undergraduate course on cognitive modelling. We were assigned one half of the book to read: I ended up reading every page. Recently I felt like I should read it again, so I bought a used copy off Amazon. That was money well spent: the book was just as good as I remembered.
By their nature, artificial neural networks (referred to as connectionist networks in the book) are a very mathy topic, and it would be easy to write a textbook that was nothing but formulas and very hard to understand. And while ICMCP also spends a lot of time talking about the math behind the various kinds of neural nets, it does its best to explain things as intuitively as possible, sticking to elementary mathematics and elaborating on the reasons why the equations are what they are. At this, it succeeds – it can be easily understood by someone knowing only high school math. I haven't personally studied ANNs at a more advanced level, but I would imagine that anybody who intended to do so would greatly benefit from the strong conceptual and historical understanding ICMCP provides.
The book also comes with a floppy disk containing a tlearn simulator which can be used to run various exercises given in the book. I haven't tried using this program, so I won't comment on it, nor on the exercises.
The book has 15 chapters, and it is divided into two sections: principles and applications.
Principles
1: "The basics of connectionist information processing" provides a general overview of how ANNs work. The chapter begins by providing a verbal summary of five assumptions of connectionist modelling: that 1) neurons integrate information, 2) neurons pass information about the level of their input, 3) brain structure is layered, 4) the influence of one neuron on another depends on the strength of the connection between them, and 5) learning is achieved by changing the strengths of connections between neurons. After this verbal introduction, the basic symbols and equations relating to ANNs are introduced simultaneously with an explanation of how the "neurons" in an ANN model work.
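To make assumptions 1, 2, 4 and 5 concrete, here is a minimal sketch of my own in Python - not code from the book, and unrelated to its tlearn exercises - of a single unit that integrates weighted input, passes on a graded activation level, and learns by nudging its connection strengths:

```python
import math

def activation(inputs, weights):
    """Assumptions 1, 2, 4: integrate weighted input, output a graded level."""
    net = sum(x * w for x, w in zip(inputs, weights))
    return 1.0 / (1.0 + math.exp(-net))  # logistic squashing function

def learn_step(inputs, weights, target, rate=0.5):
    """Assumption 5: learning means changing connection strengths.
    A gradient-style delta rule for a logistic unit and squared error."""
    out = activation(inputs, weights)
    delta = (target - out) * out * (1.0 - out)
    return [w + rate * delta * x for w, x in zip(weights, inputs)]

# Train the unit to respond strongly to its first input line.
weights = [0.0, 0.0]
for _ in range(2000):
    weights = learn_step([1.0, 0.0], weights, target=1.0)
print(activation([1.0, 0.0], weights))  # close to 1.0 after training
```

More elaborate networks are, at heart, many such units composed and trained together.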
Introducing Simplexity
"When Charona was trying to explain it to me, she asked me what the most important thing there was. [...]"
"Very good. Anyone who can give a nonrelative answer to that question is simplex."
- Empire Star, Samuel R. Delany
Here's a small riddle: What do the following three images have in common?
[Three images, not reproduced here: a friendly robot ("Wall-e"), the planetary "electron orbits" model of the atom, and a monster carrying off a woman.]
The last picture, which ought to be recognizable by readers of the sequences, serves as a clue; so does the quote at the top of the page. But these may be insufficient, so I'll just put into plain words what ideas these images represent, which by itself reveals part of the answer:

"Human values are so natural that one could very well achieve friendliness in artificial intelligence pretty much by accident, or at least by letting the machines educate themselves, reaching a human (or superior-to-human) respect for life by themselves."

"The electrons of an atom can be visualized as little tiny billiard balls that go around the nucleus in orbits much like planets go around the sun."
"Characteristics like attractiveness and beauty are inherent to the object possessing them, so that even alien minds would have the good sense of recognizing the beauty of a woman according to criteria possessed by 20th century Hollywood advertisers."
All three images, therefore, represent different types of fatally flawed thinking that have been directly addressed in past sequences. But this isn't quite precise, so let me reveal the remainder of the answer as well: These three fallacies can all be said to consist of a very similar pattern of narrow thinking, false fundamental assumptions, and privileged hypotheses.
And this pattern seems so pervasive (in a large multitude of other fallacies as well) that it probably deserves a name of its own.
In Samuel R. Delany's novella Empire Star, three terms (simplex, complex, and multiplex) are used throughout the story to label different minds and different ways of thought. Although the terms are never explicitly defined, the reader understands their gist to be roughly as follows:
- simplex: Able to look at things only from a single, limited perspective.
- complex: Able to perceive and comprehend multiple ways of examining things and situations.
- multiplex: Able to integrate these multiple perspectives into a new and fuller understanding of the whole.
I will now appropriate the first of these terms to name the above mentioned pattern of biases. It might not be exactly how the author intended it (or then again it might be), but it's close enough for our purposes:
Simplexity: The erroneous mapping of a territory that occurs due to the treatment of a complex element or a highly specific position or area in configuration space as simpler, more fundamental, or more widely applicable than it actually is.
But because it's itself rather simplex to think that a single definition would best clarify the meaning for all readers, I'd like to offer a second definition as well.
Simplexity: The assumption of too high a probability of correlation between the characteristics of familiar and unfamiliar elements of the same set.
And here's a third one:
Simplexity: Treating intuitive notions of simplicity as if referring to the same thing measured by Kolmogorov complexity or used in Solomonoff induction.
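To make the contrast in this third definition explicit (a standard definition, not something from the original post): the Kolmogorov complexity of a string $x$ is the length of the shortest program that outputs it on a fixed universal machine $U$,

$$K_U(x) = \min\{\, |p| : U(p) = x \,\},$$

so "simple" in the formal sense means "highly compressible". Intuitive simplicity instead tracks familiarity and ease of visualization, which is why a notion like 'human values' can feel simple while having enormous algorithmic complexity.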
These all effectively amount to the same bias, the same flawed way of thinking. Getting back to the images:

In the "Wall-e" picture (which could also have been a "Johnny 5" picture), we see a simplex view of morality and human values; where such complex systems are treated as simple enough to be stumbled upon even by artificial intelligences that were never deliberately designed to have them...

In the "electron orbits" picture, we see a simplex view of the subatomic world, based on the characteristics of macroscopic objects (like position and velocity) being treated as applicable to the whole of physical reality even at quantum scales.

And lastly, in the "monster and lady" picture, we see a simplex view of attractiveness, based on the personal aesthetic criteria of the artists being treated as applicable to all advanced lifeforms, even ones that have different evolutionary histories.
For those who dislike portmanteaus, perhaps a term such as "fake-simplicity" (or even "naivety") sounds better than "simplexity". But I think the latter is preferable in a number of ways -- for one thing, it helps remind us that what starts out seemingly as simplicity (on the human level) may end up as extreme complexity if described mathematically.
Among the differences between simplicity and simplexity is that simplicity can be either in the map or in the territory. Indeed, since as reductionists we believe the territory to be simple at the most fundamental level, a simple map would (all other things being equal) be a better one - simplicity is a virtue.
But simplexity is always in the map: It's the mind patterning the unfamiliar based on the familiar. Highly useful in an evolutionary sense: humans evolved to be better capable of predicting the actions of other humans than of multiplying three-digit numbers... but ultimately wrong nonetheless whenever it occurs. And the further away from the ancestral environment one gets, the wronger it is likely to be.
And it's the common basis in cognitive failures that range from The Worst Argument In the World all the way to the just-world fallacy or even to privileging single world hypotheses.
But, lest we seem simplex about simplexity, applying a familiar pattern indiscriminately, this must now be followed by an examination of its different variations...
Next Post: Levels of mindspace simplexity