In light of SDR's comment yesterday, instead of writing a new post today I compiled my list of ideas I wanted to write about, partly to lay them out there and see if any stood out as better than the rest, and partly so that maybe they would be a little more out in the wild than if I held them until I got around to them. I realise there is not a thesis in this post, but I figured it would be better to write one of these than to write each in its own post with the potential to be good or bad.
Original post: http://bearlamp.com.au/many-draft-concepts/
I create ideas at about the rate of 3 a day, without trying to, but write at a rate of about 1.5 a day, which leaves me always behind. Even if I write about the best ideas I can think of, some good ones might never be covered. This is an effort to draft out a good stack of them so that maybe I won't have to write them all out, by better defining which ones are the good ones and which ones are a bit more useless.
With that in mind, in no particular order - a list of unwritten posts:
From my old table of contents
Goals of your lesswrong group – A guided/worked-through exercise in deciding why the group exists and what it should do. Help people work out what they want out of it (do people know?): setting goals, doing something particularly interesting or routine, having fun, changing your mind, being activists in the world around you. Whatever the reasons you care about, work them out and move towards them. Nothing particularly groundbreaking in the process here: sit down with the group with pens and paper, maybe run a resolve cycle, maybe talk about ideas and settle on a few, then decide how to carry them out. Relevant links: Sydney meetup, group resources. (estimate 2hrs to write)
Goals interrogation + Goal levels – Goal interrogation is about asking "is this thing I want to do actually a goal of mine?" and "is my current plan the best way to achieve that?". Goal levels are something out of Sydney Lesswrong that help you have mutual long term goals and supporting short term goals. There are 3 main levels: Dream, Year, Daily (or approximately). You want dream goals like going to the moon, yearly goals like getting another year further in your degree, and daily goals like studying today that contribute to the upper level goals. Any time you are feeling lost you can look at the guide you set out for yourself and use it to direct you. (3hrs)
How to human – A zero-to-human guide. A guide for basic functionality of a humanoid system. Something of a conglomeration of Maslow, mental health, "so you feel like shit" and systems thinking. Am I conscious? Am I breathing? Am I bleeding or injured (major or minor)? Am I falling or otherwise in danger and about to cause the earlier questions to return false? Do I know where I am? Am I safe? Do I need to relieve myself (or other bodily functions, i.e. itchy)? Have I had enough water? Sleep? Food? Is my mind altered (alcohol or other drugs)? Am I stuck with sensory input I can't control (noise, smells, things touching me)? Am I too hot or too cold? Is my environment too hot or too cold? Or unstable? Am I with people or alone? Is this okay? Am I clean (showered, teeth, other personal cleaning rituals)? Have I had some sunlight and fresh air in the past few days? Have I had too much sunlight or wind in the past few days? Do I feel stressed? Okay? Happy? Worried? Suspicious? Scared? Was I doing something? What am I doing? Do I want to be doing something else? Am I being watched (is that okay?)? Have I interacted with humans in the past 24 hours? Have I had alone time in the past 24 hours? Do I have any existing conditions I can run a check on - i.e. depression? Are my valuables secure? Are the people I care about safe? (4hrs)
List of common strategies for getting shit done – things like scheduling/allocating time, pomodoros, committing to things externally, complice, beeminder, other trackers. (4hrs)
List of superpowers and kryptonites – asking the questions "what are my superpowers?" and "what are my kryptonites?". Knowledge is power; working with your powers and working out how to avoid your kryptonites is a method to improve yourself. What are you really good at, and what do you absolutely suck at and would be better delegating to other people? The more you know about yourself, the more you can do the right thing by your powers or weaknesses and save yourself trouble.
List of effective behaviours – small life-improving habits that add together to make awesomeness from nothing, and how to pick them up. Short list: toothbrush in the shower, scales in front of the fridge, healthy food in the most accessible position in the fridge, make the unhealthy stuff a little more inaccessible, keep some clocks fast - i.e. the clock in your car (so you get there early), prepare for expected barriers ahead of time (i.e. packing the gym bag and leaving it at the door), and more.
Stress prevention checklist – feeling off? You want to have already outsourced the hard work for “things I should check on about myself” to your past self. Make it easier for future you. Especially in the times that you might be vulnerable. Generate a list of things that you want to check are working correctly. i.e. did I drink today? Did I do my regular exercise? Did I take my medication? Have I run late today? Do I have my work under control?
Make it easier for future you. Especially in the times that you might be vulnerable. – As its own post on curtailing bad habits that you can expect to happen when you are compromised. Inspired by candy-bar moments and turning them into carrot moments or other more productive things. This applies beyond diet, and might involve turning TV-hour into book-hour (for tasks you want to do instead of tasks you automatically do).
A P=NP approach to learning – Sometimes you have to learn things the long way; but sometimes there is a shortcut. Where you could say, "I wish someone had just taken me on the easy path early on". It's not a perfect idea; but start looking for the shortcuts where you might be saying "I wish someone had told me sooner". Of course the answer is, "but I probably wouldn't have listened anyway", which is something that can be worked on as well. (2hrs)
Rationalist's guide to dating – Attraction. Relationships. Doing things with a known preference. Don't like unintelligent people? Don't try to date them. Think first; then act - and iteratively experiment; an exercise in thinking hard about things before trying trial-and-error on the world. Think about places where you might meet the kinds of people you want to meet, then use strategies that go there instead of strategies that flop in the general direction of progress. (half written)
Training inherent powers (weights, temperatures, smells, estimation powers) – practice makes perfect right? Imagine if you knew the temperature always, the weight of things by lifting them, the composition of foods by tasting them, the distance between things without measuring. How can we train these, how can we improve. Probably not inherently useful to life, but fun to train your system 1! (2hrs)
Strike to the heart of the question. The strongest one; not the one you want to defeat – Steelman not Strawman. Don’t ask “how do I win at the question”; ask, “am I giving the best answer to the best question I can give”. More poetic than anything else - this post would enumerate the feelings of victory and what not to feel victorious about, as well as trying to feel what it's like to be on the other side of the discussion to yourself, frustratingly trying to get a point across while a point is being flung at yourself. (2hrs)
How to approach a new problem – similar to the “How to solve X” post. But considerations for working backwards from a wicked problem, as well as trying “The least bad solution I know of”, Murphy-jitsu, and known solutions to similar problems. Step 0. I notice I am approaching a problem.
Spices – Adventures in sensory experience land. I ran an event of spice-smelling/guessing for a group of 30 people. I wrote several documents in the process about spices and how to run the event. I want to publish these. As an exercise - it's a fun game of guess-the-spice.
Wing it VS Plan – All of the what, why, who, and what you should do of the two. Some people seem to be the kind of person who is always just winging it. In contrast, some people make ridiculously complicated plans that work. Most of us are probably somewhere in the middle. I suggest that the more of a planner you can be the better because you can always fall back on winging it, and you probably will. But if you don't have a plan and are already winging it - you can't fall back on the other option. This concept came to me while playing ingress, which encourages you to plan your actions before you make them.
On-stage bias – The changes we make when we go onto a stage include extra makeup to adjust for the bright lights, and speaking louder to adjust for the audience which is far away. When we consider the rest of our lives, maybe we want to appear specifically X (i.e. confident, friendly), so we should change ourselves to suit the natural skews in how we present based on the "stage" we are appearing on. Appear as the person you want to appear as, not the person you naturally appear as.
Creating a workspace – considerations when thinking about a “place” of work, including desk, screen, surrounding distractions, and basically any factors that come into it. Similar to how the very long list of sleep maintenance suggestions covers environmental factors in your sleep environment but for a workspace.
Posts added to the list since then
Doing a cost-benefit analysis - This is something we rely on when enumerating the options and choices ahead of us, but something I have never explicitly looked into. Some costs that can get overlooked include: Time, Money, Energy, Emotions, Space, Clutter, Distraction/Attention, Memory, Side effects, and probably more. I'd like to see a How to X guide for CBA. (wikipedia)
Extinction learning at home - A cross between intermittent reward (the worst kind of addiction) and what we know about extinguishing it, then applying that to "convincing" yourself to extinguish bad habits by experiential learning. Uses the CFAR internal Double Crux technique: precommit yourself to a challenge, for example - "If I scroll through 20 facebook posts in a row and they are all not worth my time, I will be convinced that I should spend less time on facebook because it's not worth my time". Adjust 20 to whatever position your double crux believes to be true, then run a test and iterate. You have to genuinely agree with the premise before running the test. This can work for a number of committed habits which you want to extinguish. (new idea as at the writing of this post)
How to write a dating ad - A suggestion to include information that is easy to ask questions about (this is hard). For example; don't write, "I like camping", write "I like hiking overnight with my dog", giving away details in a way that makes them worth inquiring about. The same reason applies to why writing "I'm a great guy" is really not going to get people to believe you, as opposed to demonstrating the claim. (show, don't tell)
How to give yourself aversions - an investigation into aversive actions and potentially how to avoid collecting them when you have a better understanding of how they happen. (I have not done the research and will need to do that before publishing the post)
How to give someone else an aversion - similar to above, we know we can work differently to other people, and at the intersection of that is a misunderstanding that can leave people uncomfortable.
Lists - Creating lists is a great thing, currently in draft - some considerations about what lists are, what they do, what they are used for, what they can be used for, where they come in handy, and the suggestion that you should use lists more. (also some digital list-keeping solutions)
Choice to remember the details - this stems from choosing to remember names, a point in the conversation where people sometimes tune out. As a mindfulness concept you can choose to remember the details. (short article, not exactly sure why I wanted to write about this)
What is a problem - On the path of problem solving, understanding what a problem is will help you to understand how to attack it. Nothing more complicated than this picture to explain it. The barrier is a problem. This doesn't seem important on its own, but as a foundation for thinking about problems it's good to have sitting around somewhere.
How to/not attend a meetup - for anyone who has never been to a meetup, and anyone who wants the good tips on etiquette for being the new guy in a room of friends. First meetup: shut up and listen, try not to be too much of an impact on the existing meetup group or you might misunderstand the culture.
Noticing the world, repercussions and taking advantage of them - There are regularly world events that I notice. Things like the Olympics, Pokemon Go coming out, the (recent) SpaceX rocket failure. I try to notice when big events happen and think about how to take advantage of the event or the repercussions caused by that event. Motivated to think not only about all the Olympians (and the fuss leading up to the Olympics), but all the people at home who signed up to a gym because of the publicity of the competitive sport. If only I could get in on the profit of gym signups...
Least-good but only solution I know of - So you know of a solution, but it's rubbish. Or it probably is. Also you have no better solutions. Treat this solution as the best solution you have (because it is) and start implementing it; as you do that, keep looking for other solutions. But at least you have a solution to work with!
Self-management thoughts - When you ask yourself, "am I making progress?", "do I want to be in this conversation?" and other self management thoughts. And an investigation into them - it's a CFAR technique but their writing on the topic is brief. (needs research)
Instrumental supply-hoarding behaviour - A discussion about the benefits of hoarding supplies for future use. Covering also - what supplies are not a good idea to store, and what supplies are. Maybe this will be useful for people who store things for later days, and hopefully help to consolidate and add some purposefulness to their process.
List of sub-groups that I have tried - Before running my local lesswrong group I partook in a great deal of other groups. This was meant as a list with comments on each group.
If you have nothing to do – make better tools for use when real work comes along - This was probably going to be a poetic style motivation post about exactly what the title suggests. Be Prepared.
What other people are good at (as support) - When reaching out for support, some people will be good at things that other people are not. For example - emotional support, time to spend on each other, ideas for solving your problems. Different people might be better or worse than others. Thinking about this can make your strategies for solving your problems a bit easier to manage. Knowing what works and what does not, or what you can reliably expect when you reach out to some people for support, is going to supercharge your fulfilment of those needs.
Focusing - An already-written guide to Eugene Gendlin's focusing technique that needs polishing before publishing. The short form: treat your system 1 as a very powerful machine that understands your problems and their solutions more than you do; use your system 2 to ask it questions and see what it returns.
Rewrite: how to become a 1000 year old vampire - I got as far as breaking down this post and got stuck at draft form before rewriting. Might take another stab at it soon.
Should you tell people your goals? - This thread in a post. In summary: It depends on the environment, the wrong environment is actually demotivational, the right environment is extra motivational.
Meta: this took around 4 hours to write up, which is ridiculously longer than usual. I noticed a substantial number of breaks being taken - not sure if that relates to the difficulty of creating so many summaries or just me today. Still, this experiment might help my future writing focus/direction, so I figured I would try it out. If you see an idea of particularly high value I will be happy to try to cover it in more detail.
Original post: http://bearlamp.com.au/examples/
When we talk about a concept or a point, it's important to understand the ladder of abstraction, covered before on lesswrong and in other places as advice for communicators on how to bridge a gap of knowledge.
Knowing, understanding and feeling the ladder of abstraction prevents things like this:
- Speakers who bury audiences in an avalanche of data without providing the significance.
- Speakers who discuss theories and ideals, completely detached from real-world practicalities.
When you talk to old and wise people, they will sometimes give you stories of their lives. "Back in my day...". Seeing that in perspective is a good way to realise that might be people's way of shifting around the ladder of abstraction. As an agenty-agent of agenty goodness - your job is to make sense of this occurrence. The ladder of abstraction is very powerful when used effectively and very frustrating when you find yourself on the wrong side of it.
The flipside to this example is when people talk at a highly theoretical level. I suspect this happens to philosophers, as well as hippies. They are very good at being able to tell you about the connections between things that are "energy" or "desire", but lack the grounding to explain how that applies to real life. I don't blame them. One day I will be able to think completely abstractly. Today is not that day. Since today is not that day, it is my duty and yours to ask and specify. To give the explanation of what the ladder of abstraction is, and then tell them you have no idea what they are talking about. Or, as for the example above - ask them to go up a level in the ladder of abstraction. "If I were to learn something from your experiences - what would it be?"
Lesswrong doing it wrong
I care about adding the conceptual ladder of abstraction to the repertoire for a reason. LW'ers are very good at paying attention to details - a really powerful and important ability. After all - the fifth virtue is argument, the tenth is precision. If you can't be precise about what you are communicating, you fail to value what we value.
Which is why it's great to see critical objections to the examples OPs provide.
I object when defeating an example does not defeat the rule. Our delightful OP may see their territory, stride forth and exclaim to have a map for this territory and a few similar mountains or valleys. Correcting the mountains-and-valleys map doesn't change the rest of the territory, and does not change the rest of the map.
This does matter. Recently a copy of this dissertation came around the slack - https://cryptome.org/2013/09/nolan-nctc.pdf. It is a report detailing the ridiculous culture inside the CIA and other US government security institutions. One of the biggest problems within that culture can be shown through this example (page 34 of the report):
The following exchange is a good example, told to me by a CIA analyst who was explaining the rules of baseball to visitors who didn’t know the game:
Analyst A: So there are four bases--
Analyst B: -- Well, no, it’s really three bases plus home plate.
Analyst A: ... Okay, three bases plus home plate. The batter hits the ball and advances through the bases one by one—
Analyst C: -- Well, no, it doesn’t have to be one base at a time.
And these ones on page 35:
The following excerpts from stories people have told me or that I witnessed further illustrate this concept:
John: I see you’ve drawn a star on that draft.
Bridget: Yeah, that’s just my doodle of choice. I just do it unconsciously sometimes.
John: Don’t you mean subconsciously?
Scott: Good morning!
Employee in the parking lot: Well, I don’t know if it’s good, but here we are.
Helene: I am so thirsty today! I seriously have a dehydration problem.
Lucy: Actually, you have a hydration problem.
Victoria: My hopes have been squashed like a pancake.
James: Don’t you mean flattened like a pancake?
For those of us who don't have time to read 215 pages: the point is that analyst culture does this. A lot. From the outside it might seem ridiculous. We can, intellectually, confidently say that analysts A, B and C in the first example were all right, and that if they paid attention to the object level of the situation they would skip the interruptions and get to the point of explaining how baseball works. But that's not what it feels like when you are on the inside.
The report outlines that these things make analyst culture a difficult one to be a part of or be engaged in because of examples like these.
We do the same thing. We nitpick at examples, and fight over irrelevant things. If I were to change everyone's mind, I would rather see something like this:
(*yes this is not a very good example of an example, this is an example of a turn of speech that was challenged, but the same effect of nitpicking on irrelevant details is present).
Nitpicking is not necessary.
Sometimes we forget that we are all in the same boat together, racing down the river at the rate that we can uncover truth. Sometimes we feel like we are in different boats racing each other. In this sense it would be a good idea to compete and accuse each other of our failures on the journey to get ahead. However we do not want to do that.
It's in our nature to compete, the human need to be right! But we don't need to compete against each other, we need to support each other to compete against Moloch, Akrasia, Entropy, Fallacies and biases (among others).
I am guilty myself. In my personal life as well as on LW. If I am laying blame, I blame myself for failing to point this out sooner, more than I blame anyone else for nitpicking examples.
The plan of action.
Next time you go to comment - next time I go to comment - think very carefully about whether you can improve, whether I can improve, the post being commented on, before levelling objections at it. We want to make the world a better place. People wiser, older, sharper and wittier than me have already said it: "if you are looking for where to start... you need only look in the mirror".
Meta: this took 3 hours to write.
2016 LessWrong Diaspora Survey Analysis
- Results and Dataset
- LessWrong Usage and Experience
- LessWrong Criticism and Successorship
- Diaspora Community Analysis (You are here)
- Mental Health Section
- Basilisk Section/Analysis
- Blogs and Media analysis
- Calibration Question And Probability Question Analysis
- Charity And Effective Altruism Analysis
Before it was the LessWrong survey, the 2016 survey was a small project I was working on as market research for a website I'm creating called FortForecast. As I was discussing the idea with others, particularly Eliot, he suggested that since he's doing LW 2.0 and I'm doing a site that targets the LessWrong demographic, why don't I go ahead and do the LessWrong Survey? Because of that, this year's survey had a lot of questions oriented around what you would want to see in a successor to LessWrong and what you think is wrong with the site.
LessWrong Usage and Experience
How Did You Find LessWrong?
Been here since it was started in the Overcoming Bias days: 171 8.3%
Referred by a link: 275 13.4%
HPMOR: 542 26.4%
Overcoming Bias: 80 3.9%
Referred by a friend: 265 12.9%
Referred by a search engine: 131 6.4%
Referred by other fiction: 14 0.7%
Slate Star Codex: 241 11.7%
Reddit: 55 2.7%
Common Sense Atheism: 19 0.9%
Hacker News: 47 2.3%
Gwern: 22 1.1%
Other: 191 9.3%
How do you use Less Wrong?
I lurk, but never registered an account: 1120 54.4%
I've registered an account, but never posted: 270 13.1%
I've posted a comment, but never a top-level post: 417 20.3%
I've posted in Discussion, but not Main: 179 8.7%
I've posted in Main: 72 3.5%
How often do you comment on LessWrong?
I have commented more than once a week for the past year.: 24 1.2%
I have commented more than once a month for the past year but less than once a week.: 63 3.1%
I have commented but less than once a month for the past year.: 225 11.1%
I have not commented this year.: 1718 84.6%
[You could probably snarkily title this one "LW usage in one statistic". It's a pretty damning portrait of the site's vitality: a whopping 84.6% of people have not commented a single time this year.]
How Long Since You Last Posted On LessWrong?
I wrote one today.: 12 0.637%
Within the last three days.: 13 0.69%
Within the last week.: 22 1.168%
Within the last month.: 58 3.079%
Within the last three months.: 75 3.981%
Within the last six months.: 68 3.609%
Within the last year.: 84 4.459%
Within the last five years.: 295 15.658%
Longer than five years.: 15 0.796%
I've never posted on LW.: 1242 65.924%
[A supermajority of people have never posted on LW; 5.574% have posted within the last month.]
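The bracketed monthly figure is just the four "within the last month" rows added together. A minimal sketch of that arithmetic, using the counts and rounded percentages exactly as listed above:

```python
# Counts and rounded percentages for the rows that fall
# "within the last month" (today / three days / week / month).
rows = [
    (12, 0.637),  # I wrote one today.
    (13, 0.690),  # Within the last three days.
    (22, 1.168),  # Within the last week.
    (58, 3.079),  # Within the last month.
]

within_month_count = sum(n for n, _ in rows)
within_month_pct = round(sum(p for _, p in rows), 3)
print(within_month_count, within_month_pct)  # 105 5.574
```

Note that summing the pre-rounded percentages (as the post does) can differ from recomputing 105 out of the question's total in the last decimal place, purely from rounding accumulation.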
About how much of the Sequences have you read?
Never knew they existed until this moment: 215 10.3%
Knew they existed, but never looked at them: 101 4.8%
Some, but less than 25% : 442 21.2%
About 25%: 260 12.5%
About 50%: 283 13.6%
About 75%: 298 14.3%
All or almost all: 487 23.3%
[10.3% of people taking the survey have never heard of the sequences. 36.3% have not read a quarter of them.]
Do you attend Less Wrong meetups?
Yes, regularly: 157 7.5%
Yes, once or a few times: 406 19.5%
No: 1518 72.9%
[However the in-person community seems to be non-dead.]
Is physical interaction with the Less Wrong community otherwise a part of your everyday life, for example do you live with other Less Wrongers, or you are close friends and frequently go out with them?
Yes, all the time: 158 7.6%
Yes, sometimes: 258 12.5%
No: 1652 79.9%
About the same number say they hang out with LWers 'all the time' as say they go to meetups. I wonder if people just double counted themselves here. Or they may go to meetups and have other interactions with LWers outside of that. Or it could be a coincidence and these are different demographics. Let's find out.
P(Community part of daily life | Meetups) = 40%
Significant overlap, but definitely not exclusive overlap. I'll go ahead and chalk this one up to coincidence.
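The conditional probability quoted above has the form P(A|B) = count(A and B) / count(B). The joint count behind the 40% figure isn't published in the post, so the sketch below only shows the shape of the calculation (with the joint count left as a hypothetical parameter), plus the base rate P(A) computed from the marginal counts listed above for comparison:

```python
# Marginal counts from the "part of everyday life" question above.
community_yes = 158 + 258            # "all the time" + "sometimes"
community_total = 158 + 258 + 1652   # everyone who answered

# Base rate: P(community is part of daily life)
p_community = community_yes / community_total

# P(community | meetups) = joint count / meetup-attendee count.
# The joint count isn't in the post; 40% is the reported result.
def conditional(joint, condition_count):
    return joint / condition_count

print(round(p_community, 3))  # 0.201
```

Comparing P(A|B) against the base rate P(A) is the standard way to judge whether two survey answers are independent of each other.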
Have you ever been in a romantic relationship with someone you met through the Less Wrong community?
Yes: 129 6.2%
I didn't meet them through the community but they're part of the community now: 102 4.9%
No: 1851 88.9%
LessWrong Usage Differences Between 2016 and 2014 Surveys
How do you use Less Wrong?
I lurk, but never registered an account: +19.300% 1125 54.400%
I've registered an account, but never posted: -1.600% 271 13.100%
I've posted a comment, but never a top-level post: -7.600% 419 20.300%
I've posted in Discussion, but not Main: -5.100% 179 8.700%
I've posted in Main: -3.300% 73 3.500%
About how much of the sequences have you read?
Never knew they existed until this moment: +3.300% 217 10.400%
Knew they existed, but never looked at them: +2.100% 103 4.900%
Some, but less than 25%: +3.100% 442 21.100%
About 25%: +0.400% 260 12.400%
About 50%: -0.400% 284 13.500%
About 75%: -1.800% 299 14.300%
All or almost all: -5.000% 491 23.400%
Do you attend Less Wrong meetups?
Yes, regularly: -2.500% 160 7.700%
Yes, once or a few times: -2.100% 407 19.500%
No: +7.100% 1524 72.900%
Is physical interaction with the Less Wrong community otherwise a part of your everyday life, for example do you live with other Less Wrongers, or you are close friends and frequently go out with them?
Yes, all the time: +0.200% 161 7.700%
Yes, sometimes: -0.300% 258 12.400%
No: +2.400% 1659 79.800%
Have you ever been in a romantic relationship with someone you met through the Less Wrong community?
Yes: +0.800% 132 6.300%
I didn't meet them through the community but they're part of the community now: -0.400% 102 4.900%
No: +1.600% 1858 88.800%
In a bit of a silly oversight I forgot to ask survey participants what was good about the community, so the following is going to be a pretty one-sided picture. Below are the complete write-ins respondents submitted:
Issues With LessWrong At Its Peak
Philosophical Issues With LessWrong At Its Peak [Part One]
Philosophical Issues With LessWrong At Its Peak [Part Two]
Community Issues With LessWrong At Its Peak [Part One]
Community Issues With LessWrong At Its Peak [Part Two]
Issues With LessWrong Now
Peak Philosophy Issue Tallies
| Issue | Code | Tally |
| --- | --- | --- |
| Bad Tech Platform | BTP | 1 |
| Doesn't Accept Criticism | DAC | 3 |
| Don't Know Where to Start | DKWS | 5 |
| Damaged Me Mentally | DMM | 1 |
| Insufficient Social Support | ISS | 1 |
| Lack of Rigor | LR | 14 |
| Not Enough Jargon | NEJ | 1 |
| Not Enough Roko's Basilisk | NERB | 1 |
| Not Enough Theory | NET | 1 |
| Not Progressive Enough | NPE | 7 |
| None of the Above | | |
| Quantum Mechanics Sequence | QMS | 2 |
| Small Competent Authorship | SCA | 6 |
| Suggestion For Improvement | SFI | 1 |
| Too Much Roko's Basilisk | TMRB | 1 |
| Too Much Theory | TMT | 14 |
Well, those are certainly some results. Top answers are:
Narrow Scholarship: 20
Too Much Theory: 14
Lack of Rigor: 14
Reinvention (reinvents the wheel too much): 10
Personality Cult: 10
So condensing a bit: Pay more attention to mainstream scholarship and ideas, try to do better about intellectual rigor, be more practical and focus on results, be more humble. (Labeled Dataset)
Peak Community Issue Tallies
| Issue | Code | Tally |
| --- | --- | --- |
| Assumes Reader Is Male | ARIM | 1 |
| Bad At PR | BAP | 5 |
| Doesn't Accept Criticism | DAC | 3 |
| Lack of Rigor | LR | 1 |
| Not Big Enough | NBE | 3 |
| Not Enough of A Cult | NEAC | 1 |
| Not Enough Content | NEC | 7 |
| Not Enough Community Infrastructure | NECI | 10 |
| Not Enough Meetups | NEM | 5 |
| Not Nerdy Enough | NNE | 3 |
| None Of the Above | NOA | 1 |
| Not Progressive Enough | NPE | 3 |
| Not Stringent Enough | NSE | 3 |
| Small Competent Authorship | SCA | 5 |
| Suggestion For Improvement | SFI | 1 |
| Too Intolerant of Cranks | TIC | 1 |
| Too Intolerant of Politics | TIP | 2 |
| Too Long Winded | TLW | 2 |
| Too Many Idiots | TMI | 3 |
| Too Much Math | TMM | 1 |
| Too Much Theory | TMT | 12 |
| Too Tolerant of Cranks | TTC | 1 |
| Too Tolerant of Politics | TTP | 3 |
| Too Tolerant of POSers | TTPOS | 2 |
| Too Tolerant of PROGressivism | TTPROG | 2 |
Top answers are:
Too Much Theory: 12
Not Enough Community Infrastructure: 10
Too Contrarian: 10
Insufficiently Indexed: 9
Again condensing a bit: Work on being less intimidating/aggressive/etc to newcomers, spend less time on navel gazing and more time on actually doing things and collecting data, work on getting the structures in place that will onboard people into the community, stop being so nitpicky and argumentative, spend more time on getting content indexed in a form where people can actually find it, be more accepting of outside viewpoints and remember that you're probably more likely to be wrong than you think. (Labeled Dataset)
One last note before we finish up: these tallies are a very rough executive summary. The tagging process basically involves trying to fit points into clusters, and is prone to inaccuracy through laziness, adding another category being undesirable, square-peg-into-round-hole fitting, and my personal political biases. So take these with a grain of salt; if you really want to know what people wrote in, my advice would be to read through the write-in sets I have above in HTML format. If you want to evaluate for yourself how well I tagged things, you can see the labeled datasets above.
I won't bother tallying the "issues now" sections; all you really need to know is that they are basically the same as the first sections, except with lots more "It's dead." comments and, from eyeballing it, a higher proportion of people arguing that LessWrong has been taken over by the left/social justice, plus complaints about effective altruism. (I infer that the complaints about being taken over by the left are mostly referring to effective altruism.)
Traits Respondents Would Like To See In A Successor Community
Attention Paid To Outside Sources
More: 1042 70.933%
Same: 414 28.182%
Less: 13 0.885%
Self Improvement Focus
More: 754 50.706%
Same: 598 40.215%
Less: 135 9.079%
More: 184 12.611%
Same: 821 56.271%
Less: 454 31.117%
More: 330 22.837%
Same: 770 53.287%
Less: 345 23.875%
More: 455 31.885%
Same: 803 56.272%
Less: 169 11.843%
In summary, people want a site that will engage with outside ideas, acknowledge where it borrows from, focus on practical self-improvement, spend less time on AI and AI risk, and tighten its academic rigor. They could go either way on politics, but the epistemic direction is clear.
More: 254 19.644%
Same: 830 64.192%
Less: 209 16.164%
Focused On 'Real World' Action
More: 739 53.824%
Same: 563 41.005%
Less: 71 5.171%
More: 749 55.605%
Same: 575 42.687%
Less: 23 1.707%
Data Driven/Testing Of Ideas
More: 1107 78.344%
Same: 291 20.594%
Less: 15 1.062%
More: 583 43.507%
Same: 682 50.896%
Less: 75 5.597%
This largely backs up what I said about the previous results. People want a more practical, more active, more social, and more empirical LessWrong, with outside expertise and ideas brought into the fold. They could go either way on it being more intense, but the epistemic trend is still clear.
So where did the party go? We got twice as many respondents this year as last once we opened the survey up to the diaspora, which means the LW community is alive and kicking; it's just not on LessWrong.
Yes: 353 11.498%
No: 1597 52.02%
Yes: 215 7.003%
No: 1735 56.515%
LessWrong Facebook Group
Yes: 171 5.57%
No: 1779 57.948%
Yes: 55 1.792%
No: 1895 61.726%
Yes: 832 27.101%
No: 1118 36.417%
[SlateStarCodex by far has the highest proportion of active LessWrong users, over twice that of LessWrong itself, and more than LessWrong and Tumblr combined.]
Yes: 350 11.401%
No: 1600 52.117%
[I'm actually surprised that Tumblr doesn't just beat LessWrong itself outright; it's only a tenth of a percentage point behind, and if current trends continue I suspect that by 2017 Tumblr will have a large lead over the main LW site.]
Yes: 150 4.886%
No: 1800 58.632%
[Eliezer Yudkowsky currently resides here.]
Yes: 59 1.922%
No: 1891 61.596%
Effective Altruism Hub
Yes: 98 3.192%
No: 1852 60.326%
Yes: 4 0.13%
No: 1946 63.388%
[I included this as a 'troll' option to catch people who just check every box. Relatively few people seem to have done that, but having the option here lets me know one way or the other.]
Good Judgement(TM) Open
Yes: 29 0.945%
No: 1921 62.573%
Yes: 59 1.922%
No: 1891 61.596%
Yes: 8 0.261%
No: 1942 63.257%
Yes: 252 8.208%
No: 1698 55.309%
#lesswrong on freenode
Yes: 76 2.476%
No: 1874 61.042%
#slatestarcodex on freenode
Yes: 36 1.173%
No: 1914 62.345%
#hplusroadmap on freenode
Yes: 4 0.13%
No: 1946 63.388%
#chapelperilous on freenode
Yes: 10 0.326%
No: 1940 63.192%
[Since people keep asking me, this is a postrational channel.]
Yes: 274 8.925%
No: 1676 54.593%
Yes: 230 7.492%
No: 1720 56.026%
[Given that the story is long over, this is pretty impressive. I'd have expected it to be dead by now.]
Yes: 244 7.948%
No: 1706 55.57%
One or more private 'rationalist' groups
Yes: 192 6.254%
No: 1758 57.264%
[I almost wish I hadn't included this option; it'd have been fascinating to learn more about these groups through write-ins.]
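A note on reading the Yes/No figures above: each percentage is taken against the full survey sample rather than against Yes+No for that item, which is why the pairs don't sum to 100%. A minimal sketch, assuming a total of 3,070 respondents (inferred from the figures; the base isn't stated in this section):

```python
TOTAL_RESPONDENTS = 3070  # assumed: 353 / 0.11498 ~= 3070; not stated in this section

def pct(count, total=TOTAL_RESPONDENTS):
    """Percentage of the full survey sample, matching how the tables report figures."""
    return round(100 * count / total, 3)

print(pct(353))   # -> 11.498, as reported for the first Yes count above
print(pct(1597))  # -> 52.02, as reported for the matching No count
```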
Of all the parties who seem like plausible candidates at the moment, Scott Alexander seems most capable of un-diaspora-ing the community. In practice he's very busy, so he would need a dedicated team of relatively autonomous people to help him. Scott could court guest posts and start to scale up under the SSC brand, and I think he would fairly easily end up with the lion's share of the free-floating LWers that way.
Before I call a hearse for LessWrong, there is a glimmer of hope left:
Would you consider rejoining LessWrong?
I never left: 668 40.6%
Yes: 557 33.8%
Yes, but only under certain conditions: 205 12.5%
No: 216 13.1%
A significant fraction of people say they'd be interested in an improved version of the site. And of course there were write-ins for conditions to rejoin. What did people say they'd need to rejoin the site?
Feel free to read these yourselves (they're not long), but I'll go ahead and summarize: It's all about the content. Content, content, content. No amount of usability improvements, A/B testing, or clever trickery will let you get around content. People are overwhelmingly clear about this: they need a reason to come to the site, and right now they don't feel like they have one. That means priority number one for anybody trying to revitalize LessWrong is figuring out how to deal with this.
Future Improvement Wishlist Based On Survey Results
- Pay more attention to mainstream scholarship and ideas.
- Improve intellectual rigor.
- Acknowledge sources borrowed from.
- Be more practical and focus on results.
- Be more humble.
- Be less intimidating/aggressive to newcomers.
- Build structures that onboard people into the community.
- Stop being so nitpicky and argumentative.
- Spend more time getting content indexed in a form where people can actually find it.
- Be more accepting of outside viewpoints.
While that list seems reasonable, it's quite hard to put into practice. Rigor, as the name implies, requires high effort from participants. Frankly, it's not fun. And getting people to do un-fun things without paying them is difficult. If LessWrong is serious about its goal of 'advancing the art of human rationality' then it needs to figure out a way to do real investigation into the subject, not just have people 'discuss', as though the potential for Rationality is within all of us, just waiting to be brought out by the right conversation.
I personally haven't been a LW regular in a long time. Assuming the points about pedantry, sniping, "well actually"-ism and the like are true, those behaviors need to stop for the site to move forward. Personally, I'm a huge fan of Scott Alexander's comment policy: all comments must be at least two of true, kind, or necessary.
True and kind - Probably won't drown out the discussion signal, will help significantly decrease the hostility of the atmosphere.
True and necessary - Sometimes what you have to say isn't nice, but it needs to be said. This is the common core of free-speech arguments for saying mean things, and they're not wrong. However, something being true isn't necessarily enough to make it something you should say. In fact, saying mean things about people that are entirely unrelated to their arguments is the ad hominem fallacy.
Kind and necessary - The infamous 'hugbox' is essentially a place where people go to hear things which are kind but not necessarily true. I don't think anybody wants a hugbox, but occasionally it can be important to say things that might not be strictly true but are needed for the sake of tact, reconciliation, or preventing greater harm.
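Read as a rule, the policy is just "at least two of three". A toy predicate, purely illustrative (the site doesn't run anything like this):

```python
def passes_comment_policy(true: bool, kind: bool, necessary: bool) -> bool:
    """Scott Alexander's comment policy: at least two of true, kind, necessary."""
    return true + kind + necessary >= 2

print(passes_comment_policy(True, True, False))   # true and kind -> True
print(passes_comment_policy(True, False, False))  # merely true -> False
```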
If people took that seriously and really gave it some thought before they used their keyboards, I think the on-site LessWrong community would be a significant part of the way toward not driving people off as soon as they arrive.
More importantly, in places like the LessWrong Slack I see a sort of happy-go-lucky attitude about site improvement: "Oh, that sounds nice, we should do that," without the accompanying mountain of work to actually make 'that' happen. I'm not sure people really understand the dynamics of what it means to 'revive' a website in severe decay. When you decide to 'revive' a dying site, what you're really doing past a certain point is refounding the site. So the question you should be asking yourself isn't "Can I fix the site up a bit so it isn't quite so stale?" It's "Could I have founded this site?", and if the answer is no you should seriously question whether to make the time investment.
Whether or not LessWrong lives to see another day basically depends on the level of ground game its last users and administrators can muster up. And if it's not enough, it won't.
Virtus junxit mors non separabit!
For some time now, "Promoted" has been reserved for articles written by MIRI staff, mostly about MIRI activities. Which, I suppose, would be reasonable, if this were MIRI's blog. But it isn't. MIRI has its own blog. It seems to me inconvenient both to readers of LessWrong, and to readers of MIRI's blog, to split MIRI's material up between the two.
People visiting LessWrong land on "Promoted", see a bunch of MIRI posts, mostly written by people who don't themselves read LessWrong much anymore, and get a mistaken impression of what people talk about on LessWrong. LessWrong also looks like a dying site, since months often pass between new posts.
I suggest the default landing page be "New", not "Promoted".
I have compiled many suggestions about the future of lesswrong into a document here:
It's long and best formatted there.
In case you hate leaving this website here's the summary:
There are 3 main areas that are going to change.
Technical/Direct Site Changes
new home page
new forum style with subdivisions
new sub for “friends of lesswrong” (rationality in the diaspora)
New tagging system
New karma system
Social and cultural changes
Positive culture; a good place to be.
Pillars of good behaviours (the ones we want to encourage)
Demonstrate by example
3 levels of social strategies (new, advanced and longtimers)
Content (emphasis on producing more rationality material)
For up-and-coming people to write more
for the community to improve their contributions to create a stronger collection of rationality.
For known existing writers
To encourage them to keep contributing
- To encourage them to work together with each other to contribute
Why change LW?
Lesswrong has gone through great times of growth and seen a lot of people share a lot of positive and brilliant ideas. It served as a launchpad for MIRI, and in that purpose it was a success; at this point it's no longer needed as a launchpad. In the process of becoming a launchpad it also became a nice garden to hang out in on the internet: a place for reasonably intelligent people to discuss reasonable ideas and challenge each other to update their beliefs in light of new evidence. Since it retired from its "launchpad" purpose, various people have felt the garden has wilted and decayed and weeds have grown over it. In light of this, and having enough personal motivation, I have decided I really like the garden and I can bring it back. I just need a little help, a little magic, and some small changes. I hope to make the garden what we all want it to be: a great place for amazing ideas and life-changing discussions to happen.
How will we know we have done well (the feel of things)
Success is going to have to be estimated by changes to the feel of the site. Unfortunately that is hard to do. As we know, outrage generates more volume than positive growth, which is going to work against us when we try to quantify things with measurable metrics. Assuming the technical changes are made, there is still progress needed on the task of socially improving things. There are many "seasoned active users", as well as "seasoned lurkers", who have strong opinions on the state of lesswrong and the discussion. Some would say that we risk dying of niceness; others would say that the weeds that need pulling are the rudeness.
Honestly, we risk over-policing and under-policing at the same time. There will be some not-niceness that goes unchecked and discourages the growth of future posters (potentially our future bloggers), and at the same time some niceness that motivates trolling behaviour or fails to weed out bad content, which would leave us as fluffy as the next forum. There is no easy solution to tempering both sides of this challenge. I welcome all suggestions (it looks like a karma system is our best bet).
In the meantime I believe the direction of movement should be toward general niceness and steelmanning. I hope to enlist some members as coaches in healthy forum-growth behaviour: good steelmanning, positive encouragement, critical feedback alongside encouragement, a welcoming committee, and an environment of content improvement and growth.
While I want everyone to keep up the heavy debate, I also want to see the best versions of ourselves coming out onto the publishing pages (and sometimes that can be the second-draft version).
So how will we know? By reducing the ugh fields around participating in LW, by seeing more content that enough people care about, and by making lesswrong awesome.
The full document is just over 11 pages long. Please go read it, this is a chance to comment on potential changes before they happen.
Meta: This post took a very long time to pull together. I read over 1000 comments and considered the ideas contained in them. I don't have an accurate account of how long this took to write, but I would estimate over 65 hours of work has gone into putting it together. It's been literally weeks in the making; I really can't stress how long I have been trying to put this together.
If you want to help, please speak up so we can help you help us. If you want to complain; keep it to yourself.
Thanks to the slack for keeping up with my progress and Vanvier, Mack, Leif, matt and others for reviewing this document.
As usual - My table of contents
A few notes about the community
If English is not your first language, don't let that make you afraid to post or comment. You can get English help on Discussion- or Main-level posts by sending a PM to one of the following users (use the "send message" link on the upper right of their user page). Either put the text of the post in the PM, or just say that you'd like English help and you'll get a response with an email address.
* Barry Cotter
A list of some posts that are pretty awesome
I recommend the major sequences to everybody, but I realize how daunting they look at first. So for purposes of immediate gratification, the following posts are particularly interesting/illuminating/provocative and don't require any previous reading:
- Your Intuitions are Not Magic
- The Apologist and the Revolutionary
- How to Convince Me that 2 + 2 = 3
- Lawful Uncertainty
- The Planning Fallacy
- Scope Insensitivity
- The Allais Paradox (with two followups)
- We Change Our Minds Less Often Than We Think
- The Least Convenient Possible World
- The Third Alternative
- The Domain of Your Utility Function
- Newcomb's Problem and Regret of Rationality
- The True Prisoner's Dilemma
- The Tragedy of Group Selectionism
- Policy Debates Should Not Appear One-Sided
- That Alien Message
- The Worst Argument in the World
More suggestions are welcome! Or just check out the top-rated posts from the history of Less Wrong. Most posts at +50 or more are well worth your time.
Welcome to Less Wrong, and we look forward to hearing from you throughout the site!
Note from Clarity: MBlume and other contributors wrote the original version of this welcome post, and orthonormal edited it a fair bit. If there's anything I should add or update please send me a private message or make the change by making the next thread—I may not notice a comment on the post. Finally, once this gets past 500 comments, anyone is welcome to copy and edit this intro to start the next welcome thread.
Cross Posted at the EA Forum
At Event Horizon (a Rationalist/Effective Altruist house in Berkeley) my roommates yesterday were worried about Slate Star Codex. Their worries also apply to the Effective Altruism Forum, so I'll extend them.
Lesswrong was for many years the gravitational center for young rationalists worldwide, and it permits posting by new users, so good new ideas had a strong incentive to emerge.
With the rise of Slate Star Codex, the incentive for new users to post content on Lesswrong went down. Posting at Slate Star Codex is not open, so potentially great bloggers are not incentivized to develop their own ideas there, but only to comment on the ones already posted.
The Effective Altruism forum doesn't have that particular problem. It is however more constrained in terms of what can be posted there. It is after all supposed to be about Effective Altruism.
We thus have three different strong attractors for the large community of people who enjoy reading blog posts online and are nearby in idea space.
(EDIT: By possible solutions I merely mean "these are some bad solutions I came up with in 5 minutes, and the reason I'm posting them here is that if I post bad solutions, other people will be incentivized to post better ones.")
If Slate Star Codex became an open blog like Lesswrong, more people would consider transitioning from passive lurkers to actual posters.
If the Effective Altruism Forum got as many readers as Lesswrong, there could be two gravity centers at the same time.
If the moderation and self selection of Main was changed into something that attracts those who have been on LW for a long time, and discussion was changed to something like Newcomers discussion, LW could go back to being the main space, with a two tier system (maybe one modulated by karma as well).
In the past there was Overcoming Bias, and Lesswrong in part became a stronger attractor because it was more open. Eventually lesswrongers migrated from Main to Discussion, and from there to Slate Star Codex, 80k blog, Effective Altruism forum, back to Overcoming Bias, and Wait But Why.
It is possible that Lesswrong had simply exhausted its capacity.
It is possible that a new higher tier league was needed to keep post quality high.
I suggest two things should be preserved:
Interesting content being created by those with more experience and knowledge who have interacted in this memespace for longer (part of why Slate Star Codex is powerful), and
The opportunity (and total absence of trivial inconveniences) for new people to try creating their own new posts.
If these two properties are kept, there is a lot of value to be gained by everyone.
The Status Quo:
I feel like we are living in a very suboptimal blogosphere. On LW, Discussion is more read than Main, which means what is being promoted to Main is not attractive to the people who actually read Lesswrong. The top tier of actually-read posting is dominated by one individual (a great one, but still), disincentivizing high-quality posts by other high-quality people. The EA Forum has high-quality posts that go unread because it isn't the center of attention.
You are unlikely to see me posting here again, after today. There is a saying here that politics is the mind-killer. My heretical realization lately is that philosophy, as generally practiced, can also be mind-killing.
As many of you know, I am, or was, running a twice-monthly Rationality: From AI to Zombies reading group. One of the things I wanted to include in each reading-group post was a collection of contrasting views. To research such views I've found myself listening during my commute to talks given by other thinkers in the field, e.g. Nick Bostrom, Anders Sandberg, and Ray Kurzweil, and by people I feel are doing "ideologically aligned" work, like Aubrey de Grey, Christine Peterson, and Robert Freitas. Some of these were talks I had seen before, or views I had generally been exposed to in the past. But looking through the lens of learning and applying rationality, I came to a surprising (to me) conclusion: it was the philosophical thinkers that demonstrated the largest and most costly mistakes. On the other hand, de Grey and others who are primarily working on the scientific and/or engineering challenges of singularity and transhumanist technologies were far less likely to make epistemic mistakes of significant consequence.
Philosophy as the anti-science...
What sort of mistakes? Most often, reasoning by analogy. To cite a specific example, one of the core underlying assumptions of the singularity interpretation of super-intelligence is that just as a chimpanzee would be unable to predict what a human intelligence would do or how we would make decisions (aside: how would we know? Were any chimps consulted?), we would be equally inept in the face of a super-intelligence. This argument is, however, nonsense. The human capacity for abstract reasoning over mathematical models is in principle a fully general intelligent behaviour, as the scientific revolution has shown: there is no aspect of the natural world which has remained beyond the reach of human understanding, once a sufficient amount of evidence is available. The wave-particle duality of quantum physics, or the 11-dimensional space of string theory, may defy human intuition, i.e. our built-in intelligence. But we have proven ourselves perfectly capable of understanding the logical implications of models which employ them. We may not be able to build intuition for how a super-intelligence thinks. Maybe; that's not proven either. But even if that is so, we will be able to reason about its intelligent behaviour in advance, just as string theorists are able to reason about 11-dimensional space-time without using their evolutionarily derived intuitions at all.
This post is not about the singularity nature of super-intelligence—that was merely my choice of an illustrative example of a category of mistakes that are too often made by those with a philosophical background rather than the empirical sciences: the reasoning by analogy instead of the building and analyzing of predictive models. The fundamental mistake here is that reasoning by analogy is not in itself a sufficient explanation for a natural phenomenon, because it says nothing about the context sensitivity or insensitivity of the original example and under what conditions it may or may not hold true in a different situation.
A successful physicist or biologist or computer engineer would have approached the problem differently. A core part of being successful in these areas is knowing when it is that you have insufficient information to draw conclusions. If you don't know what you don't know, then you can't know when you might be wrong. To be an effective rationalist, it is often not important to answer “what is the calculated probability of that outcome?” The better first question is “what is the uncertainty in my calculated probability of that outcome?” If the uncertainty is too high, then the data supports no conclusions. And the way you reduce uncertainty is that you build models for the domain in question and empirically test them.
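The point about uncertainty can be made concrete with a toy proportion estimate (an illustrative sketch, not from the post): the same point estimate can carry wildly different uncertainty depending on how much data backs it.

```python
import math

def estimate_with_uncertainty(successes: int, trials: int):
    """Point estimate and approximate (normal) standard error for a proportion."""
    p = successes / trials
    se = math.sqrt(p * (1 - p) / trials)
    return p, se

# Same point estimate of 0.75, an order of magnitude apart in uncertainty:
print(estimate_with_uncertainty(3, 4))      # ~ (0.75, 0.217)
print(estimate_with_uncertainty(300, 400))  # ~ (0.75, 0.022)
```

With four trials the error bar swamps the estimate; no strong conclusion is supported until more data is collected, which is exactly the "reduce uncertainty by building and testing models" move the paragraph describes.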
The lens that sees its own flaws...
Coming back to LessWrong and the sequences. In the preface to Rationality, Eliezer Yudkowsky says his biggest regret is that he did not make the material in the sequences more practical. The problem is in fact deeper than that. The art of rationality is the art of truth seeking, and empiricism is part and parcel of truth seeking. There's lip service paid to empiricism throughout, but in all the "applied" sequences relating to quantum physics and artificial intelligence it appears to be forgotten. Instead we get definitive conclusions drawn from thought experiments alone. It is perhaps not surprising that these sequences seem the most controversial.
I have for a long time been concerned that those sequences in particular promote some ungrounded conclusions. I had thought that while annoying this was perhaps a one-off mistake that was fixable. Recently I have realized that the underlying cause runs much deeper: what is taught by the sequences is a form of flawed truth-seeking (thought experiments favored over real world experiments) which inevitably results in errors, and the errors I take issue with in the sequences are merely examples of this phenomenon.
And these errors have consequences. Every single day, 100,000 people die of preventable causes, and every day we continue to risk extinction of the human race at unacceptably high odds. There is work that could be done now to alleviate both of these issues. But within the LessWrong community there is actually outright hostility to work that has a reasonable chance of alleviating suffering (e.g. artificial general intelligence applied to molecular manufacturing and life-science research) due to concerns arrived at by flawed reasoning.
I now regard the sequences as a memetic hazard, one which may at the end of the day be doing more harm than good. One should work to develop one's own rationality, but I now fear that the approach taken by the LessWrong community, as a continuation of the sequences, may do more harm than good as well. The anti-humanitarian behaviors I observe in this community are not the result of initial conditions but of the process itself.
How do we fix this? I don't know. On a personal level, I am no longer sure engagement with such a community is a net benefit. I expect this to be my last post to LessWrong. It may happen that I check back in from time to time, but for the most part I intend to try not to. I wish you all the best.
A note about effective altruism…
One shining light of goodness in this community is the focus on effective altruism—doing the most good to the most people as measured by some objective means. This is a noble goal, and the correct goal for a rationalist who wants to contribute to charity. Unfortunately it too has been poisoned by incorrect modes of thought.
Existential risk reduction, the argument goes, trumps all forms of charitable work because reducing the chance of extinction by even a small amount has far more expected utility than accomplishing all other charitable works combined. The problem lies in the likelihood of extinction, and in the actions selected to reduce existential risk. There is so much uncertainty regarding what we know, and so much uncertainty regarding what we don't know, that it is impossible to determine with any accuracy the expected risk of, say, unfriendly artificial intelligence creating perpetually suboptimal outcomes, or what effect charitable work in the area (e.g. MIRI) has on reducing that risk, if any.
This is best explored by an example of existential risk done right. Asteroid and cometary impacts are perhaps the category of external (non-human-caused) existential risk which we know the most about, and have done the most to mitigate. When it was recognized that impactors were a risk to be taken seriously, we recognized what we did not know about the phenomenon: what were the orbits and masses of Earth-crossing asteroids? We built telescopes to find out. What is the material composition of these objects? We built space probes and collected meteorite samples to find out. How damaging would an impact be, for various material properties, speeds, and incidence angles? We built high-speed projectile test ranges to find out. What could be done to change the course of an asteroid found to be on a collision course? We have executed at least one impact probe and will monitor the effect it had on the comet's orbit, and we have probes on the drawing board that will use gravitational mechanisms to move their targets. In short, we identified what it is that we don't know and sought to resolve those uncertainties.
How then might one approach an existential risk like unfriendly artificial intelligence? By identifying what it is we don't know about the phenomenon, and seeking to experimentally resolve that uncertainty. What relevant facts do we not know about (unfriendly) artificial intelligence? Well, much of our uncertainty about the actions of an unfriendly AI could be resolved if we knew more about how such agents construct their thought models, and relatedly what language is used to construct their goal systems. We could also stand to benefit from more practical information (experimental data) about the ways in which AI boxing works and the ways in which it does not, and how much that depends on the structure of the AI itself. Thankfully there is an institution doing that kind of work: the Future of Life Institute (not MIRI).
Where should I send my charitable donations?
Aubrey de Grey's SENS Research Foundation.
100% of my charitable donations are going to SENS. Why they do not get more play in the effective altruism community is beyond me.
If you feel you want to spread your money around, here are some non-profits which I have vetted for doing reliable, evidence-based work on singularity technologies and existential risk:
- Robert Freitas and Ralph Merkle's Institute for Molecular Manufacturing does research on molecular nanotechnology. They are the only group working on the long-term Drexlerian vision of molecular machines, and they publish their research online.
- Future of Life Institute is the only existential-risk AI organization which is actually doing meaningful evidence-based research into artificial intelligence.
- B612 Foundation is a non-profit seeking to launch a spacecraft with the capability to detect, to the extent possible, ALL Earth-crossing asteroids.
I wish I could recommend a skepticism, empiricism, and rationality promoting institute. Unfortunately I am not aware of an organization which does not suffer from the flaws I identified above.
Addendum regarding unfinished business
I will no longer be running the Rationality: From AI to Zombies reading group, as I am no longer in good conscience able or willing to host it or participate in this site, even from my typically contrarian point of view. Nevertheless, I am enough of a libertarian that I feel it is not my role to put up roadblocks for others who wish to delve into the material as it is presented. So if someone wants to take over the role of organizing these reading groups, I would be happy to hand over the reins to that person. If you think that person should be you, please leave a reply in another thread, not here.
EDIT: Obviously I'll stick around long enough to answer questions below :)
Pardon me, please, if this is not the way to go about asking such questions (it's all I know). Is this more for LessWrong itself, or for LessWrong Discussion?
Is there some kind of comprehensive organization by subject of LessWrong posts?
I know there are the sequences, but also a lot of other useful posts.
If I want to learn about learning, about lifespan extension, about charity work, about happiness, etc., is there a place I can go to view all relevant posts in each respective area?
Cover title: “Power and paranoia in Silicon Valley”; article title: “Come with us if you want to live: Among the apocalyptic libertarians of Silicon Valley” (mirrors: 1, 2, 3), by Sam Frank; Harper’s Magazine, January 2015, pg26-36 (~8500 words). The beginning/ending are focused on Ethereum and Vitalik Buterin, so I'll excerpt the LW/MIRI/CFAR-focused middle:
…Blake Masters-the name was too perfect-had, obviously, dedicated himself to the command of self and universe. He did CrossFit and ate Bulletproof, a tech-world variant of the paleo diet. On his Tumblr’s About page, since rewritten, the anti-belief belief systems multiplied, hyperlinked to Wikipedia pages or to the confoundingly scholastic website Less Wrong: “Libertarian (and not convinced there’s irreconcilable fissure between deontological and consequentialist camps). Aspiring rationalist/Bayesian. Secularist/agnostic/ ignostic . . . Hayekian. As important as what we know is what we don’t. Admittedly eccentric.” Then: “Really, really excited to be in Silicon Valley right now, working on fascinating stuff with an amazing team.” I was startled that all these negative ideologies could be condensed so easily into a positive worldview. …I saw the utopianism latent in capitalism-that, as Bernard Mandeville had it three centuries ago, it is a system that manufactures public benefit from private vice. I started CrossFit and began tinkering with my diet. I browsed venal tech-trade publications, and tried and failed to read Less Wrong, which was written as if for aliens.
…I left the auditorium of Alice Tully Hall. Bleary beside the silver coffee urn in the nearly empty lobby, I was buttonholed by a man whose name tag read MICHAEL VASSAR, METAMED research. He wore a black-and-white paisley shirt and a jacket that was slightly too big for him. “What did you think of that talk?” he asked, without introducing himself. “Disorganized, wasn’t it?” A theory of everything followed. Heroes like Elon and Peter (did I have to ask? Musk and Thiel). The relative abilities of physicists and biologists, their standard deviations calculated out loud. How exactly Vassar would save the world. His left eyelid twitched, his full face winced with effort as he told me about his “personal war against the universe.” My brain hurt. I backed away and headed home. But Vassar had spoken like no one I had ever met, and after Kurzweil’s keynote the next morning, I sought him out. He continued as if uninterrupted. Among the acolytes of eternal life, Vassar was an eschatologist. “There are all of these different countdowns going on,” he said. “There’s the countdown to the broad postmodern memeplex undermining our civilization and causing everything to break down, there’s the countdown to the broad modernist memeplex destroying our environment or killing everyone in a nuclear war, and there’s the countdown to the modernist civilization learning to critique itself fully and creating an artificial intelligence that it can’t control. There are so many different - on different time-scales - ways in which the self-modifying intelligent processes that we are embedded in undermine themselves. I’m trying to figure out ways of disentangling all of that. . . .I’m not sure that what I’m trying to do is as hard as founding the Roman Empire or the Catholic Church or something. But it’s harder than people’s normal big-picture ambitions, like making a billion dollars.” Vassar was thirty-four, one year older than I was. 
He had gone to college at seventeen, and had worked as an actuary, as a teacher, in nanotech, and in the Peace Corps. He’d founded a music-licensing start-up called Sir Groovy. Early in 2012, he had stepped down as president of the Singularity Institute for Artificial Intelligence, now called the Machine Intelligence Research Institute (MIRI), which was created by an autodidact named Eliezer Yudkowsky, who also started Less Wrong. Vassar had left to found MetaMed, a personalized-medicine company, with Jaan Tallinn of Skype and Kazaa, $500,000 from Peter Thiel, and a staff that included young rationalists who had cut their teeth arguing on Yudkowsky’s website. The idea behind MetaMed was to apply rationality to medicine - “rationality” here defined as the ability to properly research, weight, and synthesize the flawed medical information that exists in the world. Prices ranged from $25,000 for a literature review to a few hundred thousand for a personalized study. “We can save lots and lots and lots of lives,” Vassar said (if mostly moneyed ones at first). “But it’s the signal - it’s the ‘Hey! Reason works!’ - that matters. . . . It’s not really about medicine.” Our whole society was sick - root, branch, and memeplex - and rationality was the only cure. …I asked Vassar about his friend Yudkowsky. “He has worse aesthetics than I do,” he replied, “and is actually incomprehensibly smart.” We agreed to stay in touch.
One month later, I boarded a plane to San Francisco. I had spent the interim taking a second look at Less Wrong, trying to parse its lore and jargon: “scope insensitivity,” “ugh field,” “affective death spiral,” “typical mind fallacy,” “counterfactual mugging,” “Roko’s basilisk.” When I arrived at the MIRI offices in Berkeley, young men were sprawled on beanbags, surrounded by whiteboards half black with equations. I had come costumed in a Fermat’s Last Theorem T-shirt, a summary of the proof on the front and a bibliography on the back, printed for the number-theory camp I had attended at fifteen. Yudkowsky arrived late. He led me to an empty office where we sat down in mismatched chairs. He wore glasses, had a short, dark beard, and his heavy body seemed slightly alien to him. I asked what he was working on. “Should I assume that your shirt is an accurate reflection of your abilities,” he asked, “and start blabbing math at you?” Eight minutes of probability and game theory followed. Cogitating before me, he kept grimacing as if not quite in control of his face. “In the very long run, obviously, you want to solve all the problems associated with having a stable, self-improving, beneficial-slash-benevolent AI, and then you want to build one.” What happens if an artificial intelligence begins improving itself, changing its own source code, until it rapidly becomes - foom! is Yudkowsky’s preferred expression - orders of magnitude more intelligent than we are? A canonical thought experiment devised by Oxford philosopher Nick Bostrom in 2003 suggests that even a mundane, industrial sort of AI might kill us. Bostrom posited a “superintelligence whose top goal is the manufacturing of paper-clips.” For this AI, known fondly on Less Wrong as Clippy, self-improvement might entail rearranging the atoms in our bodies, and then in the universe - and so we, and everything else, end up as office supplies. 
Nothing so misanthropic as Skynet is required, only indifference to humanity. What is urgently needed, then, claims Yudkowsky, is an AI that shares our values and goals. This, in turn, requires a cadre of highly rational mathematicians, philosophers, and programmers to solve the problem of “friendly” AI - and, incidentally, the problem of a universal human ethics - before an indifferent, unfriendly AI escapes into the wild.
Among those who study artificial intelligence, there’s no consensus on either point: that an intelligence explosion is possible (rather than, for instance, a proliferation of weaker, more limited forms of AI) or that a heroic team of rationalists is the best defense in the event. That MIRI has as much support as it does (in 2012, the institute’s annual revenue broke $1 million for the first time) is a testament to Yudkowsky’s rhetorical ability as much as to any technical skill. Over the course of a decade, his writing, along with that of Bostrom and a handful of others, has impressed the dangers of unfriendly AI on a growing number of people in the tech world and beyond. In August, after reading Superintelligence, Bostrom’s new book, Elon Musk tweeted, “Hope we’re not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable.” In 2000, when Yudkowsky was twenty, he founded the Singularity Institute with the support of a few people he’d met at the Foresight Institute, a Palo Alto nanotech think tank. He had already written papers on “The Plan to Singularity” and “Coding a Transhuman AI,” and posted an autobiography on his website, since removed, called “Eliezer, the Person.” It recounted a breakdown of will when he was eleven and a half: “I can’t do anything. That’s the phrase I used then.” He dropped out before high school and taught himself a mess of evolutionary psychology and cognitive science. He began to “neuro-hack” himself, systematizing his introspection to evade his cognitive quirks. Yudkowsky believed he could hasten the singularity by twenty years, creating a superhuman intelligence and saving humankind in the process. He met Thiel at a Foresight Institute dinner in 2005 and invited him to speak at the first annual Singularity Summit. The institute’s paid staff grew. 
In 2006, Yudkowsky began writing a hydra-headed series of blog posts: science-fictionish parables, thought experiments, and explainers encompassing cognitive biases, self-improvement, and many-worlds quantum mechanics that funneled lay readers into his theory of friendly AI. Rationality workshops and Meetups began soon after. In 2009, the blog posts became what he called Sequences on a new website: Less Wrong. The next year, Yudkowsky began publishing Harry Potter and the Methods of Rationality at fanfiction.net. The Harry Potter category is the site’s most popular, with almost 700,000 stories; of these, HPMoR is the most reviewed and the second-most favorited. The last comment that the programmer and activist Aaron Swartz left on Reddit before his suicide in 2013 was on /r/hpmor. In Yudkowsky’s telling, Harry is not only a magician but also a scientist, and he needs just one school year to accomplish what takes canon-Harry seven. HPMoR is serialized in arcs, like a TV show, and runs to a few thousand pages when printed; the book is still unfinished. Yudkowsky and I were talking about literature, and Swartz, when a college student wandered in. Would Eliezer sign his copy of HPMoR? “But you have to, like, write something,” he said. “You have to write, ‘I am who I am.’ So, ‘I am who I am’ and then sign it.” “Alrighty,” Yudkowsky said, signed, continued. “Have you actually read Methods of Rationality at all?” he asked me. “I take it not.” (I’d been found out.) “I don’t know what sort of a deadline you’re on, but you might consider taking a look at that.” (I had taken a look, and hated the little I’d managed.) “It has a legendary nerd-sniping effect on some people, so be warned. That is, it causes you to read it for sixty hours straight.”
The nerd-sniping effect is real enough. Of the 1,636 people who responded to a 2013 survey of Less Wrong’s readers, one quarter had found the site thanks to HPMoR, and many more had read the book. Their average age was 27.4, their average IQ 138.2. Men made up 88.8% of respondents; 78.7% were straight, 1.5% transgender, 54.7% American, 89.3% atheist or agnostic. The catastrophes they thought most likely to wipe out at least 90% of humanity before the year 2100 were, in descending order, pandemic (bioengineered), environmental collapse, unfriendly AI, nuclear war, pandemic (natural), economic/political collapse, asteroid, nanotech/gray goo. Forty-two people, 2.6%, called themselves futarchists, after an idea from Robin Hanson, an economist and Yudkowsky’s former coblogger, for reengineering democracy into a set of prediction markets in which speculators can bet on the best policies. Forty people called themselves reactionaries, a grab bag of former libertarians, ethno-nationalists, Social Darwinists, scientific racists, patriarchists, pickup artists, and atavistic “traditionalists,” who Internet-argue about antidemocratic futures, plumping variously for fascism or monarchism or corporatism or rule by an all-powerful, gold-seeking alien named Fnargl who will free the markets and stabilize everything else. At the bottom of each year’s list are suggestive statistical irrelevancies: “every optimizing system’s a dictator and i’m not sure which one i want in charge,” “Autocracy (important: myself as autocrat),” “Bayesian (aspiring) Rationalist. Technocratic. Human-centric Extropian Coherent Extrapolated Volition.” “Bayesian” refers to Bayes’s Theorem, a mathematical formula that describes uncertainty in probabilistic terms, telling you how much to update your beliefs when given new information.
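The theorem itself fits on one line. In its standard form (a rendering of my own, not the survey's):

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}$$

That is, your updated (posterior) belief in a hypothesis H after seeing evidence E equals your prior belief P(H), rescaled by how much better H predicts that evidence than the average hypothesis does.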
This is a formalization and calibration of the way we operate naturally, but “Bayesian” has a special status in the rationalist community because it’s the least imperfect way to think. “Extropy,” the antonym of “entropy,” is a decades-old doctrine of continuous human improvement, and “coherent extrapolated volition” is one of Yudkowsky’s pet concepts for friendly artificial intelligence. Rather than our having to solve moral philosophy in order to arrive at a complete human goal structure, C.E.V. would computationally simulate eons of moral progress, like some kind of Whiggish Pangloss machine. As Yudkowsky wrote in 2004, “In poetic terms, our coherent extrapolated volition is our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together.” Yet can even a single human’s volition cohere or compute in this way, let alone humanity’s? We stood up to leave the room. Yudkowsky stopped me and said I might want to turn my recorder on again; he had a final thought. “We’re part of the continuation of the Enlightenment, the Old Enlightenment. This is the New Enlightenment,” he said. “Old project’s finished. We actually have science now, now we have the next part of the Enlightenment project.”
In 2013, the Singularity Institute changed its name to the Machine Intelligence Research Institute. Whereas MIRI aims to ensure human-friendly artificial intelligence, an associated program, the Center for Applied Rationality, helps humans optimize their own minds, in accordance with Bayes’s Theorem. The day after I met Yudkowsky, I returned to Berkeley for one of CFAR’s long-weekend workshops. The color scheme at the Rose Garden Inn was red and green, and everything was brocaded. The attendees were mostly in their twenties: mathematicians, software engineers, quants, a scientist studying soot, employees of Google and Facebook, an eighteen-year-old Thiel Fellow who’d been paid $100,000 to leave Boston College and start a company, professional atheists, a Mormon turned atheist, an atheist turned Catholic, an Objectivist who was photographed at the premiere of Atlas Shrugged II: The Strike. There were about three men for every woman. At the Friday-night meet and greet, I talked with Benja, a German who was studying math and behavioral biology at the University of Bristol, whom I had spotted at MIRI the day before. He was in his early thirties and quite tall, with bad posture and a ponytail past his shoulders. He wore socks with sandals, and worried a paper cup as we talked. Benja had felt death was terrible since he was a small child, and wanted his aging parents to sign up for cryonics, if he could figure out how to pay for it on a grad-student stipend. He was unsure about the risks from unfriendly AI - “There is a part of my brain,” he said, “that sort of goes, like, ‘This is crazy talk; that’s not going to happen’” - but the probabilities had persuaded him. He said there was only about a 30% chance that we could make it another century without an intelligence explosion. He was at CFAR to stop procrastinating. Julia Galef, CFAR’s president and cofounder, began a session on Saturday morning with the first of many brain-as-computer metaphors. 
We are “running rationality on human hardware,” she said, not supercomputers, so the goal was to become incrementally more self-reflective and Bayesian: not perfectly rational agents, but “agent-y.” The workshop’s classes lasted six or so hours a day; activities and conversations went well into the night. We got a condensed treatment of contemporary neuroscience that focused on hacking our brains’ various systems and modules, and attended sessions on habit training, urge propagation, and delegating to future selves. We heard a lot about Daniel Kahneman, the Nobel Prize-winning psychologist whose work on cognitive heuristics and biases demonstrated many of the ways we are irrational. Geoff Anders, the founder of Leverage Research, a “meta-level nonprofit” funded by Thiel, taught a class on goal factoring, a process of introspection that, after many tens of hours, maps out every one of your goals down to root-level motivations - the unchangeable “intrinsic goods,” around which you can rebuild your life. Goal factoring is an application of Connection Theory, Anders’s model of human psychology, which he developed as a Rutgers philosophy student writing a dissertation on Descartes, and Connection Theory is just the start of a universal renovation. Leverage Research has a master plan that, in the most recent public version, consists of nearly 300 steps. It begins from first principles and scales up from there: “Initiate a philosophical investigation of philosophical method”; “Discover a sufficiently good philosophical method”; have 2,000-plus “actively and stably benevolent people successfully seek enough power to be able to stably guide the world”; “People achieve their ultimate goals as far as possible without harming others”; “We have an optimal world”; “Done.” On Saturday night, Anders left the Rose Garden Inn early to supervise a polyphasic-sleep experiment that some Leverage staff members were conducting on themselves.
It was a schedule called the Everyman 3, which compresses sleep into three twenty-minute REM naps each day and three hours at night for slow-wave. Anders was already polyphasic himself. Operating by the lights of his own best practices, goal-factored, coherent, and connected, he was able to work 105 hours a week on world optimization. For the rest of us, for me, these were distant aspirations. We were nerdy and unperfected. There was intense discussion at every free moment, and a genuine interest in new ideas, if especially in testable, verifiable ones. There was joy in meeting peers after years of isolation. CFAR was also insular, overhygienic, and witheringly focused on productivity. Almost everyone found politics to be tribal and viscerally upsetting. Discussions quickly turned back to philosophy and math. By Monday afternoon, things were wrapping up. Andrew Critch, a CFAR cofounder, gave a final speech in the lounge: “Remember how you got started on this path. Think about what was the time for you when you first asked yourself, ‘How do I work?’ and ‘How do I want to work?’ and ‘What can I do about that?’ . . . Think about how many people throughout history could have had that moment and not been able to do anything about it because they didn’t know the stuff we do now. I find this very upsetting to think about. It could have been really hard. A lot harder.” He was crying. “I kind of want to be grateful that we’re now, and we can share this knowledge and stand on the shoulders of giants like Daniel Kahneman . . . I just want to be grateful for that. . . . And because of those giants, the kinds of conversations we can have here now, with, like, psychology and, like, algorithms in the same paragraph, to me it feels like a new frontier. . . . Be explorers; take advantage of this vast new landscape that’s been opened up to us in this time and this place; and bear the torch of applied rationality like brave explorers. 
And then, like, keep in touch by email.” The workshop attendees put giant Post-its on the walls expressing the lessons they hoped to take with them. A blue one read RATIONALITY IS SYSTEMATIZED WINNING. Above it, in pink: THERE ARE OTHER PEOPLE WHO THINK LIKE ME. I AM NOT ALONE.
That night, there was a party. Alumni were invited. Networking was encouraged. Post-its proliferated; one, by the beer cooler, read SLIGHTLY ADDICTIVE. SLIGHTLY MIND-ALTERING. Another, a few feet to the right, over a double stack of bound copies of Harry Potter and the Methods of Rationality: VERY ADDICTIVE. VERY MIND-ALTERING. I talked to one of my roommates, a Google scientist who worked on neural nets. The CFAR workshop was just a whim to him, a tourist weekend. “They’re the nicest people you’d ever meet,” he said, but then he qualified the compliment. “Look around. If they were effective, rational people, would they be here? Something a little weird, no?” I walked outside for air. Michael Vassar, in a clinging red sweater, was talking to an actuary from Florida. They discussed timeless decision theory (approximately: intelligent agents should make decisions on the basis of the futures, or possible worlds, that they predict their decisions will create) and the simulation argument (essentially: we’re living in one), which Vassar traced to Schopenhauer. He recited lines from Kipling’s “If-” in no particular order and advised the actuary on how to change his life: Become a pro poker player with the $100k he had in the bank, then hit the Magic: The Gathering pro circuit; make more money; develop more rationality skills; launch the first Costco in Northern Europe. I asked Vassar what was happening at MetaMed. He told me that he was raising money, and was in discussions with a big HMO. He wanted to show up Peter Thiel for not investing more than $500,000. “I’m basically hoping that I can run the largest convertible-debt offering in the history of finance, and I think it’s kind of reasonable,” he said. “I like Peter. I just would like him to notice that he made a mistake . . . I imagine a hundred million or a billion will cause him to notice . . . I’d like to have a pi-billion-dollar valuation.” I wondered whether Vassar was drunk. 
He was about to drive one of his coworkers, a young woman named Alyssa, home, and he asked whether I would join them. I sat silently in the back of his musty BMW as they talked about potential investors and hires. Vassar almost ran a red light. After Alyssa got out, I rode shotgun, and we headed back to the hotel.
It was getting late. I asked him about the rationalist community. Were they really going to save the world? From what? “Imagine there is a set of skills,” he said. “There is a myth that they are possessed by the whole population, and there is a cynical myth that they’re possessed by 10% of the population. They’ve actually been wiped out in all but about one person in three thousand.” It is important, Vassar said, that his people, “the fragments of the world,” lead the way during “the fairly predictable, fairly total cultural transition that will predictably take place between 2020 and 2035 or so.” We pulled up outside the Rose Garden Inn. He continued: “You have these weird phenomena like Occupy where people are protesting with no goals, no theory of how the world is, around which they can structure a protest. Basically this incredibly, weirdly, thoroughly disempowered group of people will have to inherit the power of the world anyway, because sooner or later everyone older is going to be too old and too technologically obsolete and too bankrupt. The old institutions may largely break down or they may be handed over, but either way they can’t just freeze. These people are going to be in charge, and it would be helpful if they, as they come into their own, crystallize an identity that contains certain cultural strengths like argument and reason.” I didn’t argue with him, except to press, gently, on his particular form of elitism. His rationalism seemed so limited to me, so incomplete. “It is unfortunate,” he said, “that we are in a situation where our cultural heritage is possessed only by people who are extremely unappealing to most of the population.” That hadn’t been what I’d meant. I had meant rationalism as itself a failure of the imagination. “The current ecosystem is so totally fucked up,” Vassar said. 
“But if you have conversations here” - he gestured at the hotel - “people change their mind and learn and update and change their behaviors in response to the things they say and learn. That never happens anywhere else.” In a hallway of the Rose Garden Inn, a former high-frequency trader started arguing with Vassar and Anna Salamon, CFAR’s executive director, about whether people optimize for hedons or utilons or neither, about mountain climbers and other high-end masochists, about whether world happiness is currently net positive or negative, increasing or decreasing. Vassar was eating and drinking everything within reach. My recording ends with someone saying, “I just heard ‘hedons’ and then was going to ask whether anyone wants to get high,” and Vassar replying, “Ah, that’s a good point.” Other voices: “When in California . . .” “We are in California, yes.”
…Back on the East Coast, summer turned into fall, and I took another shot at reading Yudkowsky’s Harry Potter fanfic. It’s not what I would call a novel, exactly; rather, it’s an unending, self-satisfied parable about rationality and transhumanism, with jokes.
…I flew back to San Francisco, and my friend Courtney and I drove to a cul-de-sac in Atherton, at the end of which sat the promised mansion. It had been repurposed as cohousing for children who were trying to build the future: start-up founders, singularitarians, a teenage venture capitalist. The woman who coined the term “open source” was there, along with a Less Wronger and Thiel Capital employee who had renamed himself Eden. The Day of the Idealist was a day for self-actualization and networking, like the CFAR workshop without the rigor. We were to set “mega goals” and pick a “core good” to build on in the coming year. Everyone was a capitalist; everyone was postpolitical. I squabbled with a young man in a Tesla jacket about anti-Google activism. No one has a right to housing, he said; programmers are the people who matter; the protesters’ antagonistic tactics had totally discredited them.
…Thiel and Vassar and Yudkowsky, for all their far-out rhetoric, take it on faith that corporate capitalism, unchecked just a little longer, will bring about this era of widespread abundance. Progress, Thiel thinks, is threatened mostly by the political power of what he calls the “unthinking demos.”
Pointer thanks to /u/Vulture.
Follow-up to: What have you recently tried, and failed at?
Related-to: Challenging the Difficult Sequence
ialdabaoth's post about blockdownvoting and its threads have prompted me to keep an eye on controversial topics and community norms on LessWrong. I noticed some things.
I was motivated: my own posts are also sometimes controversial. I know beforehand which might be (this one, possibly). Why do I post them nonetheless? Do I want to wreak havoc? Or do I want to foster productive discussion of unresolved but polarized questions? Or do I want to call into question some point the community may have a blind spot on, or has taken for granted too early?
[Summary: Trying to use new ideas is more productive than trying to evaluate them.]
I haven't posted to LessWrong in a long time. I have a fan-fiction blog where I post theories about writing and literature. Topics don't overlap at all between the two websites (so far), but I prioritize posting there much higher than posting here, because responses seem more productive there.
The key difference, I think, is that people who read posts on LessWrong ask whether they're "true" or "false", while the writers who read my posts on writing want to write. If I say something that doesn't ring true to one of them, he's likely to say, "I don't think that's quite right; try changing X to Y," or, "When I'm in that situation, I find Z more helpful", or, "That doesn't cover all the cases, but if we expand your idea in this way..."
Whereas on LessWrong a more typical response would be, "Aha, I've found a case for which your step 7 fails! GOTCHA!"
It's always clear from the context of a writing blog why a piece of information might be useful. It often isn't clear how a LessWrong post might be useful. You could blame the author for not providing you with that context. Or, you could be pro-active and provide that context yourself, by thinking as you read a post about how it fits into the bigger framework of questions about rationality, utility, philosophy, ethics, and the future, and thinking about what questions and goals you have that it might be relevant to.
I'm not sure this is something that can be consciously done, but in this post I want to prime you to consider whether something you do is really, totally, completely wacky and absurd.
We have trained ourselves a lot to notice when we are wrong. We have trained ourselves even more to notice when we are confused, and to tell word confusion from substance confusion.
But here is the tale of what happened to me today, and I don't think it qualifies as any of those:
I had a serious motivational problem yesterday, and got absolutely nothing done. So today I thought I should do things differently, to decrease the probability of two bad days in a row. One of the most effective things for me is going into the LW Study Hall (the password is in the group's description when you click this link). It's a very nice place to work, and I recommend everyone check it out and do a pomodoro or two every now and then.
And I did. I gave myself ten minutes to observe others working, and I noticed something remarkable: the property of the LW chat that causes me to be motivated is the presence of long-haired people. Yes. The presence of people with long hair. For weeks I had been trying to work out why it was effective sometimes and not others. The most obvious initial hypothesis was that when there was a woman present, I would feel more driven. I assumed that was the case. But I kept getting false negatives and false positives. Today I finally came to terms with the fact: I am motivated by the presence of people whose hair goes to their shoulders, women or men.
Now why did I not notice this before? It seems to me that it was such a far-fetched hypothesis that I simply had no prior for it. In vain hopes of being rational, I would read about how we fear the twinge of starting, how to beat procrastination, and how to get things done, and valid as those were and are, they would never have given me a complete picture of the unbelievable things my brain thinks behind my back.
Maybe there is something similar taking place in your mind. Even if there isn't, just update with me on the fact that this is true at least for someone, and how there may be millions of other tiny absurd facts controlling people's actions way beyond the scope of imagination of any economist or psychologist.
I now have one more piece of understanding about what it is like to be me, about how to tame and steer my future behavior, and especially one more thing to tell people in awkward silences to break the ice and face the absurdity of reality.
For obvious reasons, if you have long hair, I'd like to make an even stronger case for you to try to work and do pomos at the LW study hall. It's not only yourself that you'll be helping!
It would improve the usefulness of article navigation if people tended to use the same tag for the same thing.
Currently, if you want to decide whether to tag your article "fai" or "friendly_ai", your best bet is to manually search for each variant and count how many articles use it. But even then, there might be other similar variants you didn't think to check.
What would be nice is a tag cloud, listing how many articles there are (possibly weighted by ranking) that use each variant. The list of tags on the wiki isn't dynamically generated, and is very incomplete.
It wouldn't need to be anything fancy. Just an alphabetical list, with a number by each entry, would be an improvement over the current situation.
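As a sketch of how little this would take - assuming the site's tags could be exported as a flat list of strings (the sample data here is made up) - a few lines of Python suffice:

```python
from collections import Counter

# Hypothetical sample of tags collected from articles
tags = ["fai", "friendly_ai", "meetup", "fai", "rationality", "meetup", "fai"]

# Count each variant, then print an alphabetical list with a number by each entry
counts = Counter(tags)
for tag in sorted(counts):
    print(f"{tag} ({counts[tag]})")
# fai (3)
# friendly_ai (1)
# meetup (2)
# rationality (1)
```

Weighting by article ranking, as suggested above, would just mean summing scores per tag instead of counting occurrences.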
If you are downvoting this article, and would like to provide constructive feedback, here's a place to provide it: LINK
Discussion article for the meetup: LW Meetup in Lyon
It is time for a first LW Meetup in Lyon!
Some of the topics we could talk about are: effective altruism, munchkinism and other sorts of life hacking, belief updates or any other topic that you would like to discuss.
We will meet at Café de la Cloche, a "café-philo".
Please feel free to bring along non-LW readers and non-Anglophones who might be interested in such conversations.
I look forward to meeting you!
Yesterday, someone moved one of my posts from Main to Discussion without telling me. Again.
I encourage the site administrators to show some basic courtesy to the posters who provide the content for the site. I believe this would be a better way of doing things:
1. Have a policy on what has to happen to move a post from Main to Discussion. Who can do it? How many admins are there who can do this? State this policy in a FAQ.
2. When you move a post from Main to Discussion, make a comment on the post saying you have done so and why you have done so.
If you've been following the announced partnership between LessWrong and Castify, you'll know that we would like to start offering the promoted posts as a podcast.
So far, everything offered by Castify is authored by Eliezer Yudkowsky who gave permission to have his content used. Because promoted posts can be written by those who haven't explicitly given us permission, we're reluctant to offer them without first working through the licensing issues with the community.
What we propose is that all content on the site be subject to a Creative Commons license that would allow content posted to LessWrong to be used commercially, as long as the work is given proper attribution.
LessWrong management and Castify want feedback from the community before moving forward. Thoughts?
Edit: EricHerboso was kind enough to start a poll in the comments here.
I was excited to find this site, so I wanted to know how many people had joined LessWrong. Was it what it seemed - that a lot of people had actually gathered around the theme of rational thought - or was that just wishful thinking about a site that a guy with a neat idea and his buddies put together? I couldn't find anything stating the number of members on LessWrong anywhere on the site or the internet, so I decided it would be a fun test of my search-engine knowledge to nail jello to a tree and make my own estimate.
Some argue that Google totals are completely meaningless. The real problem, however, is that getting a usable number is complicated: if you don't know how search engines work, your likelihood of success is low. I took the potential pitfalls into account when MacGyvering this figure out of Google. So far, no one has posted a significant flaw with my specific method. (I will change that statement if they do, once I've read their comment.) Also, I was right (Find in page: total).
Here is the query I constructed:
site:lesswrong.com/user -"submitted by" -"comments by"
(Translation provided at the end.)
This gets a similar result in Bing and Yahoo.
If this is correct, LessWrong has over 9,000 members. That's my claim: "LessWrong probably has over 9,000 members" not "LessWrong has exactly 9,000 members". My LessWrong population figure is likely to be low. (I explain this below.)
Why did I do this? I was really overjoyed to find this site and wanted to see whether it was somebody's personal site with just a few buddies, or if they actually managed to draw a significant gathering of people who are interested in rational thought. I was very happy to see that it looks much bigger than a personal site. Since it was so hard to find out how many users LessWrong has, I decided to share.
I think a lot of people make the hasty generalization that "all search engine totals are meaningless". If you're an average user just plugging in search terms with little understanding of how search engines work: yes, you should regard them as meaningless. However, if you know the limitations of a technique, what parts of the system you're working within are consistent and what parts are not, I say it is possible to get some meaning within those limitations. Do I know all the limitations? Well, I assume I am unaware of things I don't know, so I won't say that. But I do know that so far nobody has proven this number or method wrong. If you want to prove me wrong, go for it. That would be fascinating. Remember that the claim is "LessWrong probably has over 9,000 members". The entire purpose of this was to get an "at least this many" figure for how many members LessWrong has. The inaccuracies I've already taken into consideration in order to compensate for the limits of this technique are listed below:
Why this is an "at least this many" figure, pitfalls I've avoided or addressed, and inaccuracies.
- Some users may not be included in Google's index yet. For instance, if they have never posted, there may be no link to their page (which is what I searched for - user pages), and the spider would not find them. This may be restricted to members that have actually commented, posted, or have been linked to in some way somewhere on the internet.
- Search engine caches are not in real time. There can be a lag of up to months, depending on how much the search engine "likes" the page.
- It has been reported by former employees of a major search engine that they use crazy old computer equipment to store their caches. I've been told that it is common for sections of the cache to be down for that reason.
- Search engines have restrictions in place to conserve resources. For instance, they won't let you peruse all of the results using the "next" button, and they don't total all of the results that they have when you first press "search" (you may see that number increase later if you continue to press "next" to see more pages of results.)
- It has been argued that Google doesn't interpret search terms the way you'd think. I knew that before I started. The query was designed with that in mind. I explain that here: http://lesswrong.com/r/discussion/lw/e4j/number_of_members_on_lesswrong/780g
- Some of the results in Bing and Yahoo were irrelevant, though I think I weeded them out pretty thoroughly for Google, if my random samples of results pages are a good indication of the whole.
- When you go to your user page, if you have more than 10 comments, a next link shows at the bottom and clicking it makes more pages appear. My understanding is that Google doesn't index these types of links - and they don't seem to be getting included. http://lesswrong.com/lw/e4j/number_of_members_on_lesswrong/7839
Go ahead and check it out - stick the query in Google and see how many LessWrong members it shows. You'll certainly get a more up-to-date total than I have posted here. ;)
Translation for those of you that don't know Google's codes:
"Search only lesswrong.com, only the user directory."
(The user directory is where each user's home page is, so I'm essentially telling it "find all the home page directories".)
-"submitted by" -"comments by"
Exclude any page in that directory with the exact text "submitted by" or "comments by".
(The submissions and comments pages use URLs in that directory, so they would show up in the results if I did not subtract them. Also, I used exact text specific to those pages, so that the text in the links on user home pages does not get user home pages omitted from the search.)
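As a small sketch, the query above can be assembled and URL-encoded with the Python standard library. The search URL format below is an assumption on my part; engines change their URL formats freely and may block automated queries, so treat this as illustration only:

```python
from urllib.parse import quote_plus

# The query from the post: restrict results to user pages and exclude
# the "submitted by" / "comments by" listing pages.
query = 'site:lesswrong.com/user -"submitted by" -"comments by"'

# Build a search URL to paste into a browser. (Base URL is illustrative.)
url = "https://www.google.com/search?q=" + quote_plus(query)
print(url)
```

Pasting the printed URL into a browser should reproduce the result total described above.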
I realize this number isn't scientific proof of anything (we can't see Google's code, so that would be foolish), which is why I'm not attempting to use it to convince anyone of anything important.
So I have been checking laws around the world regarding apostasy. And I have found extremely troubling data on the approach Muslims take to dealing with apostates. In most cases, publicly stating that you do not, in fact, love Big Brother (specifically, that you do not believe in God, the Prophet, or Islam), after having professed the Profession of Faith while adult and sane (otherwise, you were never a Muslim in the first place), will get you killed.
Yes, killed. It's one of only three things traditional Islamic tribunals hand out death penalties for; the others are murder and adultery.
However, interestingly enough, you are often given three days of detainment to "think it over" and "accept the faith".
Some other countries, though, are more forgiving: you are allowed to be a public apostate. But you are still not allowed to proselytize: that remains a crime (in Morocco it's 15 years of prison, and a flogging). Proselytism is also a crime if you are not a Muslim. I leave to your imagination how precarious the situation of religious minorities is in this context.
How little sense all of this makes, from a theological perspective. Forcing someone to "accept the faith" at knife point? Forbidding you from arguing against the Lord's (reputedly) absolutely self-evident and miraculously beautiful Word?
No. These are the patterns of sedition and treason laws. The crime of the Apostate is not one against the Lord (He can take care of Himself, and He certainly can take care of the Apostate) but against the State (existence of a human lord contingent on political regime).
And the lesswronger asks himself: "How is that my concern? Please, get to the point." The point is that the promotion of rationalism faces a terrible obstacle there. We're not talking "God Hates You" placards, or getting fired from your job. We're talking firing squad and electric chair.
"Sure," you say, "but rationalism is not about atheism." And you'd be right. It isn't. It's just a very likely conclusion for the rationalist mind to reach, and, also, our cult leader (:P) is a raging, bitter, passionate atheist. That is enough. If word spreads and authorities find out, just peddling HPMOR might get people jailed. And that's not accounting for the hypothetical (cough) case of a young adult reading the Sequences and getting all hotheaded about it and doing something stupid. Like trying to promote our brand of rationality in such hostile terrain.
So, let's take this hypothetical (harrumph) youth. They see irrationality around them, obvious and immense, they see the waste and the pain it causes. They'd like to do something about it. How would you advise them to go about it? Would you advise them to, in fact, do nothing at all?
More importantly, concerning Less Wrong itself, should we try to distance ourselves from atheism and anti-religiousness as such? Is this baggage too inconvenient, or is it too much a part of what we stand for?
I attended a talk yesterday given under the auspices of the Ottawa Skeptics on the subject of "metacognition" or thinking about thinking -- basically, it was about core rationality concepts. It was designed to appeal to a broad group of lay people interested in science and consisted of a number of examples drawn from pop-sci books such as Thinking, Fast and Slow and Predictably Irrational. (Also mentioned: straw vulcans as described by CFAR's own Julia Galef.) If people who aren't familiar with LW ask you what LW is about, I'd strongly recommend pointing them to this video.
Here's the link.
Guys I'd like your opinion on something.
Do you think LessWrong is too intellectually insular? What I mean by this is that we very seldom seem to adopt useful vocabulary, arguments, or information from outside of LessWrong. For example, all I can think of is some of Robin Hanson's and Paul Graham's stuff. But I don't think Robin Hanson really counts, since LessWrong grew out of Overcoming Bias.
The community seems not to update on ideas and concepts that didn't originate here. The only major examples fellow LWers brought up in conversation were works that Eliezer cited as great or influential. :/
Another thing (I could be wrong about this, naturally): it seems clear that LessWrong has not grown. I'm not talking numerically. I can't put my finger on major progress made in the past two years. I have heard several other users express similar sentiments. To quote one user:
I notice that, in topics that Eliezer did not explicitly cover in the sequences (and some that he did), LW has made zero progress in general.
I've recently come to think this is probably true to a first approximation. I was checking out a blogroll and saw LessWrong listed as Eliezer's blog about rationality. I realized that essentially it is. And worse, this makes it a very poor blog, since the author doesn't post new updates any more. Originally the man had high hopes for the site. He wanted to build something that could keep going on its own, growing without him. It turned out to be a community mostly dedicated to studying the scrolls he left behind. We don't even seem to do a good job of getting others to read the scrolls.
Overall there seems to be little enthusiasm for systematically reading the old material. I'm going to share my take on what I think is a symptom of this. I was debating which title to pick for my first ever original-content Main article (it was originally titled "On Conspiracy Theories") and made what at first felt like a joke but then took on a horrible ring of truth:
Over time the meaning of an article will tend to converge with the literal meaning of its title.
We like linking articles, and while people may read a link the first time, they don't tend to read it the second or third time they run across it. The phrase is eventually picked up and used out of its appropriate context. Something that was supposed to be shorthand for a nuanced argument starts to mean exactly what "it says". Well, not exactly; people still recall it as a vague applause light. Which is actually worse.
I cited "Politics is the Mindkiller" as an example of precisely this. In the original article Eliezer basically argues that gratuitous politics, political thinking that isn't outweighed by its value to the art of rationality, is to be avoided. This soon came to mean that it is forbidden to discuss politics in Main and Discussion articles, though it does live on in the comment sections.
Now, the question of whether LessWrong remains intellectually productive is separate from the question of whether it is insular. But I feel both need to be discussed. If our community wasn't growing but also wasn't insular, it could at least remain relevant.
This site has a wonderful ethos for discussion and thought. Why do we seem to be wasting it?
gives a page which lists all the recent posts in both the Main and Discussion sections. I've posted it in the comments section before, but I decided to put it in a discussion post because it's a really handy way of accessing the site. I found it by guessing the URL.
Following http://lesswrong.com/lw/bwo/logical_fallacy_poster/ some people complained about
- the sarcastic illustration
- the lack of references
- the weird categorization that should rather fit a Bayesian framework
- the simplistic or even wrong definitions
- and more
Yet this poster has ONE key difference from the ideal poster: it exists.
If it sparks criticism that leads to a new, LessWrong-compatible poster, then it is well worth the critiques.
The obvious next step, then, is to make a poster that takes such well-founded suggestions into account and synthesizes the LessWrong lessons visually.
In your opinion, then, what would be a good structure (e.g. a hierarchy of fallacies) and design theme?
I would like to ask for help on how to use expected utility maximization, in practice, to maximally achieve my goals.
As a real world example I would like to use the post 'Epistle to the New York Less Wrongians' by Eliezer Yudkowsky and his visit to New York.
How did Eliezer Yudkowsky compute that it would maximize his expected utility to visit New York?
It seems that the first thing he would have to do is figure out what he really wants, his preferences1, right? The next step would be to formalize his preferences by describing them as a utility function, assigning a certain number of utils2 to each member of the set, e.g. his own survival. This description would have to be precise enough to figure out what it would mean to maximize his utility function.
Now before he can continue he will first have to compute the expected utility of computing the expected utility of computing the expected utility of computing the expected utility3 ... and also compare it with alternative heuristics4.
He then has to figure out each and every possible action he might take, and study all of their logical implications, to learn about all possible world states he might achieve by those decisions, calculate the utility of each world state and the average utility of each action leading up to those various possible world states5.
To do so he has to figure out the probability of each world state. This further requires him to come up with a prior probability for each case and study all available data. For example, how likely it is to die in a plane crash, how long it would take to be cryonically suspended from where he is in case of a fatality, the crime rate and if aliens might abduct him (he might discount the last example, but then he would first have to figure out the right level of small probabilities that are considered too unlikely to be relevant for judgment and decision making).
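For concreteness, the calculation described above (enumerate actions, weight each outcome's utility by its probability, take the action with the highest expectation) can be sketched in a few lines. All the actions, probabilities, and utilities below are invented for illustration; nothing here reflects Yudkowsky's actual reasoning:

```python
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

# Hypothetical actions with hypothetical outcome distributions.
actions = {
    "fly to New York": [(0.999, 10.0), (0.001, -1000.0)],  # trip goes well / disaster
    "stay home":       [(1.0, 1.0)],
}

# Pick the action whose outcome distribution has the highest expectation.
best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)
```

The hard part the post is asking about, of course, is where real probabilities and utilities would come from; the arithmetic itself is trivial once they exist.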
I have probably missed some technical details and gotten others wrong. But this shouldn't detract too much from my general request. Could you please explain how Less Wrong style rationality is to be applied practically? I would also be happy if you could point out some worked examples or suggest relevant literature. Thank you.
I also want to note that I am not the only one who doesn't know how to actually apply what is being discussed on Less Wrong in practice. From the comments:
You can’t believe in the implied invisible and remain even remotely sane. [...] (it) doesn’t just break down in some esoteric scenarios, but is utterly unworkable in the most basic situation. You can’t calculate shit, to put it bluntly.
None of these ideas are even remotely usable. The best you can do is to rely on fundamentally different methods and pretend they are really “approximations”. It’s complete handwaving.
Using high-level, explicit, reflective cognition is mostly useless, beyond the skill level of a decent programmer, physicist, or heck, someone who reads Cracked.
I can't help but agree.
P.S. If you really want to know how I feel about Less Wrong then read the post 'Ontological Therapy' by user:muflax.
1. What are "preferences" and how do you figure out what long-term goals are stable enough under real world influence to allow you to make time-consistent decisions?
2. How is utility grounded and how can it be consistently assigned to reflect your true preferences without having to rely on your intuition, i.e. pull a number out of thin air? Also, will the definition of utility keep changing as we make more observations? And how do you account for that possibility?
3. Where and how do you draw the line?
4. How do you account for model uncertainty?
5. Any finite list of actions maximizes infinitely many different quantities. So, how does utility become well-defined?
Can anyone tell me why it is that if I use my rationality exclusively to improve my conception of rationality, I fall into an infinite recursion? EY says this in The Twelve Virtues and in Something to Protect, but I don't know what his argument is. He goes as far as to say that you must subordinate rationality to a higher value.
I understand that by committing yourself to your rationality you lose out on the chance to notice if your conception of rationality is wrong. But what if I use the reliability of win that a given conception of rationality offers me as the only guide to how correct that conception is. I can test reliability of win by taking a bunch of different problems with known answers that I don't know, solving them using my current conception of rationality and solving them using the alternative conception of rationality I want to test, then checking the answers I arrived at with each conception against the right answers. I could also take a bunch of unsolved problems and attack them from both conceptions of rationality, and see which one I get the most solutions with. If I solve a set of problems with one, that isn't a subset of the set of problems I solved with the other, then I'll see if I can somehow take the union of the two conceptions. And, though I'm still not sure enough about this method to use it, I suppose I could also figure out the relative reliability of two conceptions by making general arguments about the structures of those conceptions; if one conception is "do that which the great teacher says" and the other is "do that which has maximal expected utility", I would probably not have to solve problems using both conceptions to see which one most reliably leads to win.
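The testing procedure sketched above (score competing conceptions of rationality on problems with known answers, and keep the more reliable one) looks roughly like this. The "conceptions" and problems here are toy stand-ins I invented; the post doesn't specify any:

```python
def reliability(solver, problems):
    """Fraction of problems a solver gets right.
    problems: list of (problem, known_answer) pairs."""
    return sum(solver(p) == answer for p, answer in problems) / len(problems)

# Toy stand-ins for two conceptions of rationality.
conception_a = lambda p: eval(p)  # actually works the problem out
conception_b = lambda p: 0        # "do that which the great teacher says"

problems = [("1+1", 2), ("2*3", 6), ("10-4", 6)]
better = max([conception_a, conception_b],
             key=lambda s: reliability(s, problems))
```

The interesting cases the post raises, where each conception solves a different subset of problems, would show up here as two solvers with incomparable score profiles rather than a clean winner.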
And what if my goal is to become as epistemically rational as possible? Then I would just be looking for the conception of rationality that leads to truth most reliably, testing truth by predictive power.
And if being rational for its own sake just doesn't seem valuable enough to motivate me to do all the hard work it requires, let's assume that I really, really care about picking the best conception of rationality I know of, much more than I care about my own life.
It seems to me that if this is how I do rationality for its own sake — always looking for the conception of goal-oriented rationality which leads to win most reliably, and the conception of epistemic rationality which leads to truth most reliably — then I'll always switch to any conception I find that is less mistaken than mine, and stick with mine when presented with a conception that is more mistaken, provided I am careful enough about my testing. And if that means I practice rationality for its own sake, so what? I practice music for its own sake too. I don't think that's the only or best reason to pursue rationality, certainly some other good and common reasons are if you wanna figure something out or win. And when I do eventually find something I wanna win or figure out that no one else has (no shortage of those), if I can't, I'll know that my current conception isn't good enough. I'll be able to correct my conception by winning or figuring it out, and then thinking about what was missing from my view of rationality that wouldn't let me do that before. But that wouldn't mean that I care more about winning or figuring some special fact than I do about being as rational as possible; it would just mean that I consider my ability to solve problems a judge of my rationality.
I don't understand what I lose out on if I pursue the Art for its own sake in the way described above. If you do know of something I would lose out on, or if you know Yudkowsky's original argument showing the infinite recursion that arises when you motivate yourself to be rational by your love of rationality, then please comment and help me out. Thanks ahead of time.
After just spending some time browsing free nonficton kindle ebooks on Amazon, it occurred to me that it might be a good idea for SIAI/LW to publish for free download through Amazon some introductory LW essays and other useful introductory works like Twelve Virtues of Rationality and The Simple Truth.
People who search for 'rationality' on Google will see Eliezer's Twelve Virtues of Rationality and LW. It would nice if searching for rationality on Amazon also led people to similar resources that could be read on the Kindle with just one click. It would considerably expand the audience of potential readers (and LW contributors and SIAI donors).
I wrote a short userscript1 that allows for jumping to the next (or previous) new comment in a page (those marked with green). I have tested it on Firefox nightly with the Greasemonkey addon and Chromium. Unfortunately, I think that user scripts only work in Chromium/Google Chrome and Firefox (with Greasemonkey).
Download here (Clicking the link should offer an install prompt, and that is all the work that needs to be done.)
It inserts a small box in the lower right-hand corner that indicates the number of new messages and has a "next" and a "previous" link like so:
Clicking either link should scroll the browser to the top of the appropriate comment (wrapping around at the top and bottom).
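The wrap-around behaviour is just modular indexing over the list of new comments; a minimal sketch (in Python rather than the script's JavaScript, purely for illustration):

```python
def step(index, count, direction):
    """Index of the next comment to jump to.
    direction: +1 for "next", -1 for "previous"; wraps at both ends."""
    return (index + direction) % count
```

Stepping forward from the last comment returns to the first, and stepping back from the first jumps to the last, matching the behaviour described above.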
The "!" link shows a window for error logging. If a bug occurs, clicking the "Generate log" button inside this window will create a box with some information about the running of the script2, copying and pasting that information here will make debugging easier.
I have only tested on the two browsers listed above, and only on Linux, so feedback about any bugs/improvements would be useful.
(Technical note: It is released under the MIT License, and this link is to exactly the same file as above, but renamed so that the source can be viewed more easily. The file extension needs to be changed to "user.js" for it to run as a user script properly.)
v0.1 - First version
v0.2 - Logging & indication of number of new messages
v0.3 - Correctly update when hidden comments are loaded (and license change). NOTE: Upgrading to v0.3 on Chrome is likely to cause a "Downgrading extension error" (I'd made a mistake with the version numbers previously), the fix is to uninstall and then reinstall the new version. (uninstall via Tools > Extensions)
2 Specifically: the url, counts of different sets of comments, some info about the new comments, and also a list of the clicks on "prev" and "next".
Suppose that you're a bee. Perhaps, even, an extremely rational bee. And yet, as you go through your life, you can't shake the feeling that you're missing something - the other bees live so effortlessly, alighting on flowers bursting with pollen as if by chance. Try as you might, you can't seem to figure out the patterns that they're unconsciously drawn to. Are you overanalyzing? Are you overwhelmed by sensory data? But the others seem to defy thermodynamics in their ability to extract useful information, all the while wasting so much effort on suboptimal patterns of thought.
Perhaps they have access to different data? Perhaps, where you see a uniform field of yellow, they see bullseyes.
Less Wrong seems to have a problem with socializing. Not just an unusual share of its people, but the community's character (as if it were a person). We should suspect ourselves (as a collective) of overlooking the ultraviolet: those facts about the world that are so easily accessed by some others. We should be suspicious of simplistic or monolithic explanations of social reality that don't allow for sweeping social success on the same scale as their claims. We should be suspicious of dismissals of social concerns.
Am I off the mark? Am I worried over nothing? Am I overreaching? I am tossing this idea out into the sandstorm of doubt so that it can be worn down and honed to the razor edge at its core, if such a thing exists. I ask you to be my wind and sand.
Disclaimers: I don't intend this as an insult. It's a reminder - as a collective intelligence, we have a blind spot. We shouldn't conclude that there's nothing behind it. I myself am pretty dang "manualistic" (or whatever the other side of neurotypical is called). I am not an apiarist.
Edit: I've removed the focus on Autism. I was wrong, and I apologize. The post may be further edited in the near future.
(Is Bayesianism even a word? Should it be? The suffix "ism" sets off warning lights for me.)
Visitors to LessWrong may come away with the impression that they need to be Bayesians to be rational, or to fit in here. But most people are a long way from the point where learning Bayesian thought patterns is the most time-effective thing they can do to improve their rationality. Most of the insights available on LessWrong don't require people to understand Bayes' Theorem (or timeless decision theory).
I'm not calling for any specific change. Just to keep this in mind when writing things in the Wiki, or constructing a rationality workbook.
When the sequences were copied from Overcoming Bias to Less Wrong, it looks like something went very wrong with the character encoding. I found the following sequences of HTML entities in words in the sequences:
Ă˘Â€Â” arbitrator?i window?and
ĂŞ b?te m?me
ĂŠ fianc?e proteg?s d?formation d?colletage am?ricaine d?sir
ĂƒÂŻ na?ve na?vely
Ã¶ Schr?dinger L?b
ĂƒÂś Schr?dinger H?lldobler
Ăź D?sseldorf G?nther
â€“ ? Church? miracles?in Church?Turing
â€™ doesn?t he?s what?s let?s twin?s aren?t I?ll they?d ?s you?ve else?s EY?s Whate?er punish?d There?s Caledonian?s isn?t harm?s attack?d I?m that?s Google?s arguer?s Pascal?s don?t shouldn?t can?t form?d controll?d Schiller?s object?s They?re whatever?s everybody?s That?s Tetlock?s S?il it?s one?s didn?t Don?t Aslan?s we?ve We?ve Superman?s clamour?d America?s Everybody?s people?s you?d It?s state?s Harvey?s Let?s there?s Einstein?s won?t
ĂĄ Alm?si Zolt?n
ĂŤ pre?mpting re?valuate
Ă¨ l?se m?ne accurs?d
â†’ high?low low?high
Ä k?rik Siddh?rtha
รถ Sj?berg G?delian L?b Schr?dinger G?gel G?del co?rdinate W?hler K?nigsberg P?lzl
Â  I?understood ? I?was
â€” PEOPLE?and smarter?supporting to?at problem?and probability?then valid?to opportunity?of time?in true?I view?wishing Kyi?and ones?such crudely?model stupid?which that?larger aside?from Ironically?but intelligence?such flower?but medicine?as
â€ side?effect galactic?scale
Â´ can?t Biko?s aren?t you?de didn?t don?t it?s
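For the curious, that damage pattern (e.g. "â€”" where an em dash belongs) is classic mojibake: UTF-8 bytes decoded with a single-byte encoding. Which codecs were actually involved in the migration is my guess, but Windows-1252 reproduces one layer of the corruption, and reversing the mistake repairs it:

```python
original = "\u2014"  # em dash

# Decode the UTF-8 bytes with the wrong codec: this yields the
# corrupted "â€”" sequence seen in the lists above.
mojibake = original.encode("utf-8").decode("cp1252")

# The repair: re-encode with the wrong codec, then decode as UTF-8.
fixed = mojibake.encode("cp1252").decode("utf-8")
```

Some of the entries above (e.g. "Ă˘Â€Â”") look like the same mistake applied twice, possibly through a second codec, so a full repair may need more than one round of this inversion.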