Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.
I'd like to start by way of analogy. I think it'll make the link to rationality easier to understand if I give context first.
I sometimes teach the martial art of aikido. The way I was originally taught, you had to learn how to "feel the flow of ki" (basically life energy) through you and from your opponent, and you had to make sure that your movements - both physical and mental - were such that your "ki" would blend with and guide the "ki" of your opponent. Even after I stopped believing in ki, though, there were some core elements of the art that I just couldn't do, let alone teach, without thinking and talking in terms of ki flow.
A great example of this is the "unbendable arm". This is a pretty critical thing to get right for most aikido techniques. And it feels really weird. Most people when they first get it think that the person trying to fold their arm isn't actually pushing because it doesn't feel like effort to keep their arm straight. Many students (including me once upon a time) end up taking this basic practice as compelling proof that ki is real. Even after I realized that ki wasn't real, I still had to teach unbendable arm this way because nothing else seemed to work.
…and then I found anatomical resources like Becoming a Supple Leopard.
It turns out that the unbendable arm works when:
- your thoracic spine is in a non-kyphotic position
- your head isn't hanging forward (which would mimic the thoracic tension of kyphosis)
- your shoulder is rolled back and down enough for the part of your clavicle immediately above the sternoclavicular joint to stick out a bit
- your shoulder has slight tension in it from holding your elbow in a pointing-down position
That's it. If you do this correctly, you can relax most of your other arm muscles and still be able to resist pretty enormous force on your arm.
Why, you might ask? Well, from what I have gathered, this lets you engage your latissimus dorsi (pretty large back muscles) in stabilizing your elbow. There's also a bit of strategy where you don't actually have to fully oppose the arm-bender's strength; you just have to stabilize the elbow enough to be able to direct the push-down-on-elbow force into the push-up-on-wrist force.
But the point is, by understanding something about proper posture, you can cut literally months of training down to about ten minutes.
To oversimplify it a little bit, there are basically three things to get right about proper posture for martial arts (at least as I know them):
- You need to get your spine in the right position and brace it properly. (For the most part and for most people, this means tucking your pelvis, straightening your thoracic spine a bit, and tensing your abs a little.)
- You need to use your hip and shoulder ball-and-socket joints properly. (For the most part this seems to mean using them instead of your spine to move, and putting torque in them by e.g. screwing your elbow downward when reaching forward.)
- You need to keep your tissue supple & mobile. (E.g., tight hamstrings can pull your hips out of alignment and prevent you from using your hip joints instead of your mid-lumbar spine (i.e. waist) to bend over. Also, thoracic inflexibility usually locks people in thoracic kyphosis, making it extremely difficult to transfer force effectively between their lower body and their arms.)
My experience is that as people learn how to feel these three principles in their bodies, they're able to correct their physical postures whenever they need to, rather than having to wait for my seemingly magical touch to make an aikido technique suddenly really easy.
It's worth noting that this is mostly known, even in aikido dojos ("training halls"). They just phrase it differently and don't understand the mechanics of it. They'll say things like "Don't bend over; the other guy can pull you down if you do" and "Let the move be natural" and "Relax more; let ki flow through you freely."
But it turns out that getting the mechanical principles of posture down makes basically all the magic of aikido something even a beginner can learn how to see and correct.
A quick anecdote along these lines, which despite being illustrative, you should take as me being a bit of an idiot:
I once visited a dojo near the CFAR office. That night they were doing a practice that basically consisted of holding your partner's elbow and pulling them to the ground. It works via a slight sideways shift that causes a curve in the partner's lumbar spine, cutting power between their lower and upper body. Then you pull straight down, and there's basically nothing they can do about it.
However, the lesson was in terms of feeling ki flow, and the instruction was to pull straight down. I was feeling trollish and a little annoyed about the wrongness and authoritarian delivery of the instruction, so I went to the instructor and asked: "Sensei, I see you pulling slightly sideways, and I had perhaps misheard the instructions to be that we should pull straight down. Should I be pulling slightly sideways too?"
At which point the sensei insisted that the verbal instructions were correct, concentrated on preventing the sideways shift in his movements, and obliterated his ability to demonstrate the technique for the rest of the night.
Brienne Yudkowsky has a lovely piece in which she refers to "mental postures". I highly recommend reading it. She does a better job of pointing at the thing than I think I would do here.
…but if you really don't want to read it just right now, here's the key element I'll be using: There seems to be a mental analog to physical posture.
We've had quite a bit of analogizing rationality as a martial art here. So, as a martial arts practitioner and instructor with a taste of the importance of deeply understanding body mechanics, I really want to ask: What, exactly, are the principles of good mental posture for the Art of Rationality?
In the way I'm thinking of it, this isn't likely to be things like "consider the opposite" or "hold off on proposing solutions". I refer to things of this breed as "mental movements" and think they're closer to the analogs of individual martial techniques than they are principles of mental orientation.
That said, we can look at mental movements to get a hint about what a good mental posture might do. In the body, good physical posture gives you both more power and more room for error: if you let your hands drift behind your head in a shihonage, having a flexible thoracic spine and torqued shoulders and braced abs can make it much harder for your opponent to throw you to the ground even though you've blundered. So, by way of analogy, what might an error in attempting to (say) consider the opposite look like, and what would a good "mental posture" be that would make the error matter less?
(I encourage you to think on your own about an answer for at least 60 seconds before corrupting your mind with my thoughts below. I really want a correct answer here, and I doubt I have one yet.)
When I think of how I've messed up in attempts to consider the opposite, I can remember several instances when my tone was dutiful. I felt like I was supposed to consider the opinion that I disagreed with or didn't want to turn out to be true. And yet, it felt boring, or like submitting, or something like that to really take that perspective seriously. I felt like I was considering the opposite roughly the same way a young child replies to their parent saying "Now say that you're sorry" with an almost sarcastic "I'm sorry."
What kind of "mental posture" would have let me make this mistake and yet still complete the movement? Or better yet, what mental posture would have prevented the mistake entirely? At this point I intuit that I have an answer but it's a little tricky for me to articulate. I think there's a way I can hold my mind that makes the childish orientation to truth-seeking matter less. I don't do it automatically, much like most people don't automatically sit up straight, but I sort of know how to see my grasping at a conclusion as overreaching and then… pause and get my mental feet under my mental hips before I try again.
I imagine that wasn't helpful - but I think we have examples of good and bad mental posture in action. In attachment theory, I think that the secure attachment style is a description of someone who is using good mental posture even when in mentally/emotionally threatening situations, whereas the anxious and avoidant styles are descriptions of common ways people "tense up" when they lose good mental posture. I also think there's something interesting in how sometimes when I'm offended I get really upset or angry, and sometimes the same offense just feels like such a small thing - and sometimes I can make the latter happen intentionally.
The story I described above of the aikido sensei I trolled also highlights something that I think is important. In this case, although he didn't get very flustered, he couldn't change what he was doing. He seemed mentally inflexible, like the cognitive equivalent of someone who can't usefully block an overhead attack because of a stiff upper back restricting his shoulder movement. I feel like I've been in that state lots of times, so I feel like I can roughly imagine how my basic mental/emotional orientation to my situation and way of thinking would have to be in order to have been effective in his position right then - and why that can be tricky.
I don't feel like I've adequately answered the question of what good mental posture is yet. But I feel like I have some intuitions - sort of like being able to talk about proper posture in terms of "good ki flow". But I also notice that there seem to be direct analogs of the three core parts of good physical posture that I mentioned above:
- Have a well-braced "spine". Based on my current fledgling understanding, this seems to look something like taking a larger perspective, like imagining looking back at this moment 30 years hence and noticing what does and does not matter. (I think that's akin to tucking your hips, which is a movement in service of posture but isn't strictly part of the posture.) I imagine this is enormously easier when one has a well-internalized sense of something to protect.
- Move your mind in strong & stable ways, rather than losing "spine". I think this can look like "Don't act while triggered", but it's more a warning not to try to do heavy cognitive work while letting your mental "spine" "bend". Instead, move your mind in ways that you would upon reflection want your mind to move, and that you expect to be able to bear "weight".
- Make your mind flexible. Achieve & maintain full mental range of movement. Don't get "stiff", and view mental inflexibility as a risk to your mental health.
All three of these are a little hand-wavy. That third one in particular I haven't really talked about much - in part because I don't really know how to work on that well. I have some guesses, and I might write up some thoughts about that later. (A good solution in the body is called "mobilization", basically consisting of pushing on tender/stiff spots while you move the surrounding joints through their maximal range of motion.) Also, I don't know if there are more principles for the mind than these three, or if these three are drawing too strongly on the analogy and are actually a little distracting. I'm still at the stage where, for mental posture, I keep wanting to say the equivalent of "relax more and let ki flow."
A lot of people say I have excellent physical posture. I think I have a reasonably clear idea of how I made my posture a habit. I'd like to share that because I've been doing the equivalent in my mind for mental posture and am under the impression that it's getting promising results.
I think my physical practice comes down to three points:
- Recognize that having good posture gives you superpowers. It's really hard to throw me down, and I can pretty effortlessly pull people to the ground. A lot of that is martial skill, but a huge chunk of it is just that good posture gives me excellent leverage. This transfers to being able to lift really heavy things and move across the room very efficiently and quickly when needed. This also gives me a pretty big leg up on learning physical skills. Recognizing that these were things I'd gain from learning good posture gave me a lot of drive to stick to my practice.
- Focus on how the correct posture feels, and exactly how it's different from glitchy posture. I found it super-important to notice that my body feels different in specific ways when my shoulders are in the right position versus when they're too far forward or back. Verbal instructions like "Pull shoulders back" don't work nearly as well as the feeling in the body.
- Choose one correction at a time, and always operate from that posture, pausing and correcting yourself when you're about to slip up. Getting good shoulder posture required that I keep my shoulders back all the time. When I would reach for water, I'd notice when my shoulder was in the too-far-forward position, and then pull back and fix my shoulder position before trying again. This sometimes required trying at very basic tasks several times, often quite slowly, until I could get it right each time.
Although I didn't add this until quite late, I would now add a fourth point when giving advice on getting good physical posture: make sure to mobilize the parts of your body that are either (a) preventing you from moving into a good position or (b) requiring you to be very stiff or tense to hold that position. The trouble is, I know how to do that for the body, but I'm not as sure about how to do that for the mind.
But the three bullet points above are instructions that I can follow with respect to mental posture, I think.
So, to the extent that that seems possible for you, I invite you to try to do the same - and let me know how it goes.
Eight months ago, I announced that the Less Wrong Study Hall, a virtual coworking space where people do pomodoros together, was moving to Complice, a software system I made to help people achieve their goals. About 20% of rationalists who've tried it have started using it full-time, which by my math gives signing up positive expected value. Anyway...
What follows is a brief history of the LWSH's development thus far. If you just wanna try it, click here: complice.co/room/lesswrong
By embedding the original tinychat window within a larger page, I let users see what the pomodoro timer was up to as soon as they joined, and breaks could no longer run overtime because the timer just kept ticking. Users could also show a persistent status of what they were working on.
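Complice's actual implementation isn't shown here, but the "timer just keeps ticking" idea can be sketched: if the pomodoro phase is derived from wall-clock time rather than from a per-user start button, every visitor who joins sees the same timer state, and a break can't run overtime because the next work block simply begins on schedule. (The 25/5-minute cycle and the function name below are my own illustration, not Complice's API.)

```javascript
// A shared pomodoro clock derived from the epoch: 25 minutes work, 5 minutes break.
const WORK_MS = 25 * 60 * 1000;
const BREAK_MS = 5 * 60 * 1000;
const CYCLE_MS = WORK_MS + BREAK_MS;

// Given a timestamp in milliseconds, report the current phase and time remaining.
// Everyone computing this from the same clock sees the same answer.
function pomodoroPhase(nowMs) {
  const offset = nowMs % CYCLE_MS; // position within the current 30-minute cycle
  if (offset < WORK_MS) {
    return { phase: "work", remainingMs: WORK_MS - offset };
  }
  return { phase: "break", remainingMs: CYCLE_MS - offset };
}
```

Because the phase is a pure function of the clock, a client only needs `setInterval` to redraw the countdown; no server-side timer state is required.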
"Never discuss religion or politics."
I was raised in a large family of fundamentalist Christians. Growing up in my house, where discussing politics and religion was the main course of life, the above proverb was said often -- as an expression of regret, shock, or self-flagellation. Later, the experience impressed on me a deep lesson about the bubbles that even intelligent and rational people fall into. And I ... I am often tempted, so tempted, to give in.
Religion and political identity were the languages of love in my house. Affirming the finer points of a friend's identical values was a natural ritual, like sharing coffee or a meal together, and so soothing we attributed the afterglow to God himself. We can use some religious nonsense to illustrate, but please keep in mind, there's a much more interesting point here than "certain religious views are wrong".
A point of controversy was an especially excellent topic of mutual comfort. How could anyone else be *so* stupid as to believe we came from monkeys, and monkeys came from *nothing* that exploded a gazillion years ago, especially given all the young-earth creation evidence they stubbornly ignored! They obviously just wanted to sin and needed an excuse. Agreeing about something like this, you both felt smarter than the hostile world, and you had someone to help defend you against that hostility. We invented byzantine scaffolding for our shared delusions to keep the conversation interesting and to agree with each other in ever more creative ways. We esteemed each other, and ourselves, much more.
This safety bubble from the real world would allow denial of anything too painful. Losing a loved one to cancer? God will heal them. God mysteriously decided not to this time? They're in Heaven. Did your incredible stupidity lose you your job, your wife, your reputation? God would forgive you and rescue you from the consequences. You could probably find a Bible verse to justify anything you're doing. Ironically, this artificial shell of safety, which kept us from ever facing the pain and finality that reality often holds, made us all the more fragile inside. The bubble became necessary to psychologically survive.
In this flow of happy mirror-neuron dances, minor disagreements felt like a slap in the face. The shock afterward burned harder than a hand-print across the cheek.
Twenty-five years and what seems like 86 billion light-years of questioning, testing, and learning later, I can see that even beyond religion, people fall into bubbles all too easily. The political conservatives only post articles from conservative blogs. The liberals only post from liberal news sources. Neither has ever gone hunting on the opposing side for ways to test their own beliefs, even once. Ever debate someone over a bill that they haven't even read? All their info comes from the pravda wing of their preferred political party / street gang; none of it is firsthand knowledge. They're in a bubble.
Three of the most popular religions that worship the same God will each tell you the others are counterfeits, despite their shared moral codes, values, rituals, and traditions. Apple fanboys wholesale swallow the lies about their OS and machines being immune to viruses without ever having read a single article on an IT-security blog. It's not just confirmation bias at work: people live in an artificial bubble of information sources that affirm their identity, soothe their egos, and never test any idea they have. Scientific controversies create bubbles no less. But it doesn't even take a controversy, just a preferred source of information -- news, blogs, books, authors. Even when such sources attempt to present an idea or argument from those who disagree, they do not present it with sufficient force.
Even Google will gladly do this for you by customizing your search results by location, demographic, past searches, etc, to filter out things you may not want to see, providing a convenient invisible bubble for you even if you don't want it!
If you're rational, there's daily work to break the bubbles by actually looking for ways to test the beliefs you care about. The more you care about them, the more they should be tested.
The problem is, the bigger our information-sharing capabilities get, the harder it is to find quality information. Facebook propaganda posts get repeated over and over. Re-tweets. Blog reposts. Academic "science" papers that have never been replicated but are in the news headlines everywhere. The more you dig into the agitprop looking for a few gems, or at least sources of interesting information, the more you realize even the questions have been framed wrongly, especially around controversial things. If I haven't searched for high-quality evidence about a thing, I resign myself to "no opinion" until I care enough to do the work.
And now you don't fit in anyone's bubble. Not in politics, not in religion, not even in technical arenas where people bubble up also. Take politics ... it's not that I'm a liberal and I miss the company of my conservative friends, or the other way around. Like the "underground man" I feel I actually understand the values and arguments from both sides, leading to wanting to tear the whole system apart and invent new ways or angles of addressing the problems.
But try to have a conversation, for example, about the trade-offs of the huge military superiority the US has created: costs and murder versus eventually conceding dominance to who knows whom; as they say, you either wear the merciless boot or live with it on your neck. Approach the topic this way, and you may be seen as a weak peacenik who dishonors our hero troops, or as a monster who gladly trades blood for oil; you're not even understood as having no firm conclusion.
Okay, so don't throw your pearls before swine, you say. But you know, you're going to have to do it quite a few times just to find out where the pig-pen ends and where information close to the raw sources and unbiased data begins. If you want to hear interesting new ideas from other minds, you're going to have to accept that those minds are biased and often speak from inside their own bubbles. If you want to test your own beliefs and actively seek to disprove what you think, you will have to wade through oceans of bullshit and agitprop to find the one pearl that shifts your awareness. There is no getting around the work.
Then there are these kinds of situations: my father has also left the fundamentalist fold, but he has gone deep into New Age mysticism instead of the more skeptical route I've taken. I really want to preserve our closeness and friendship. I know I can't change his mind, but he really likes to talk about this stuff, so to stay close I should try hard to understand his perspective and ideas. But when I ask him to define terms like "higher consciousness", or to explain experiences of "higher awareness", or try to understand his predictions about humanity's coming "evolutionary" steps, he falls back on "it can't be described", or "it's beyond our present intelligence to grasp", or even that it's "beyond rational thought" to understand. So I can artificially nod along, not understanding a damn word of it, or I can try to get some kind of hook into his ideas and totally burst his bubble without even trying. Bursting someone's bubble is not cool. If you burst their bubble, they will cry, if only inwardly. Burst their bubble, and they will try to burst yours, not to help you but out of pain.
The problem is, in trying to burst your own bubble, you end up breaking everyone else's bubbles left and right.
There is the temptation to seek out your own bubble just for temporary comfort ... just how many skeptical videos about SpiritScience or creationism or religion am I going to watch? The scales of evidence are already tipped so far that investing more time in details that nudge them 0.0001% closer to 100% isn't about anything other than emotional soothing. And emotional soothing is dangerous: it reinforces the bubbles that I will then have to work all the harder to burst and to test, while trying to train myself to have no emotional investment in any provisional belief.
But it is so, so tempting. When you see yet another propaganda post for the republicrips or the bloodocrat gang, or yet another vast-scientific-conspiracy post, and watch your friends and family shut down mid-conversation, it's tempting to go read another Sagan book that teaches you nothing new but makes you feel good about your current provisional beliefs. It's tempting to think about blocking the friends who run a pravda outlet on Facebook, or even to shut down your Facebook account. It's tempting to give up on family in their own bubble and artificially nod along to concepts that have no meaning.
To some extent, I am even giving in by writing this ... I would like many other rationalists to feel the same way, affirm my perspective, and struggle with this too, and that reinforces my bubble, doesn't it? There are probably psychological limits and needs that make some degree of this unavoidable. We're compelled to eat, but if we give ourselves over to that instinct without regard or care, it will eventually kill us.
Don't bubble, don't give in to the temptation, and keep working to burst the bubbles that accrete around you. It's exhausting, it's painful, and it's the only thing keeping your eyes open to reality.
And friend, as you need it here and there, come here and I'll agree with you about something we both already have mountains of evidence for and almost none against. ;)
Welcome to Five Worlds Collide, the (un)conference for effective altruism, quantified self, rationality/scientific thinking, transhumanism and artificial intelligence.
Based on feedback about the EA Global events, where people said they wanted more opportunity to present their own thoughts, and because I (co-)organize multiple meetups in Vienna which to my mind have a huge overlap, I am planning and organizing this event in December 2015.
What: Present your own thoughts and projects, get inspired by new input, discuss, disagree, change your mind and grow – and connect with new amazing people and form ideas and projects together. In practice this means there will be five keynote talks and a lot of opportunity to give short lightning talks yourself.
When: it starts on the evening of Friday, the 4th of December, and ends on the evening of Sunday, the 6th of December 2015 (so it's 2.5 days).
Where: sektor5 is an amazing and huge coworking space in Vienna. They even won the “Best Coworking Space” in the Austrian national round of the Central European Startup Awards 2015! Vienna is a city worth visiting – it is especially beautiful during Christmas season and interesting because of its history („Vienna Circle“, Gödel, Schrödinger – it’s even mentioned in the „Logicomix“).
How much: the ticket for the whole event will be 50 Euro. This includes lunch on Saturday and Sunday; it does not include accommodation, breakfast, or dinner (but I can offer advice and recommendations for those). This is the absolute minimum needed to create the event, so there is also an option on Eventbrite to donate additional money to make it as great as possible. (Any surplus will be used for “Effective Altruism Austria” and/or donated effectively to GiveWell top charities.)
Always updated version on Facebook here.
Get your ticket here.
I am very thankful for all the great events I attended in the last months -- for example, the European LessWrong Community Weekend 2015, and EA Global in San Francisco and Oxford. They added value to my life and gave me the opportunity to learn new things, exchange thoughts, and get to know amazing humans as well as meet friends again. I hope I can give the same back to others.
Also, I am happy about feedback and helping hands – right now it's mostly a one-(wo)man show.
Looking forward to seeing you,
Among my friends interested in rationality, effective altruism, and existential risk reduction, I often hear: "If you want to have a real positive impact on the world, grad school is a waste of time. It's better to use deliberate practice to learn whatever you need instead of working within the confines of an institution."
While I'd agree that grad school will not make you do good for the world, if you're a self-driven person who can spend time in a PhD program deliberately acquiring skills and connections for making a positive difference, I think you can make grad school a highly productive path, perhaps more so than many alternatives. In this post, I want to share some advice that I've been repeating a lot lately for how to do this:
- Find a flexible program. PhD programs in mathematics, statistics, philosophy, and theoretical computer science tend to give you a great deal of free time and flexibility, provided you can pass the various qualifying exams without too much studying. By contrast, sciences like biology and chemistry can require time-consuming laboratory work that you can't always speed through by being clever.
- Choose high-impact topics to learn about. AI safety and existential risk reduction are my favorite examples, but there are others, and I won't spend more time here arguing their case. If you can't make your thesis directly about such a topic, choosing a related more popular topic can give you valuable personal connections, and you can still learn whatever you want during the spare time a flexible program will afford you.
- Teach classes. Grad programs that let you teach undergraduate tutorial classes provide a rare opportunity to practice engaging a non-captive audience. If you just want to work on general presentation skills, maybe you practice on your friends... but your friends already like you. If you want to learn to win over a crowd that isn't particularly interested in you, try teaching calculus! I've found this skill particularly useful when presenting AI safety research that isn't yet mainstream, which requires carefully stepping through arguments that are unfamiliar to the audience.
- Use your freedom to accomplish things. I used my spare time during my PhD program to cofound CFAR, the Center for Applied Rationality. Alumni of our workshops have gone on to do such awesome things as creating the Future of Life Institute and sourcing a $10MM donation from Elon Musk to fund AI safety research. I never would have had the flexibility to volunteer for weeks at a time if I'd been working at a typical 9-to-5 or a startup.
- Organize a graduate seminar. Organizing conferences is critical to getting the word out on important new research, and in fact, running a conference on AI safety in Puerto Rico is how FLI was able to bring so many researchers together on its Open Letter on AI Safety. It's also where Elon Musk made his donation. During grad school, you can get lots of practice organizing research events by running seminars for your fellow grad students. In fact, several of the organizers of the FLI conference were grad students.
- Get exposure to experts. A top-10 US school will have professors around who are world experts on myriad topics, and you can attend departmental colloquia to expose yourself to the cutting edge of research in fields you're curious about. I regularly attended cognitive science and neuroscience colloquia during my PhD in mathematics, which gave me many perspectives that I found useful while working at CFAR.
- Learn how productive researchers get their work done. Grad school surrounds you with researchers, and by getting exposed to how a variety of researchers do their thing, you can pick and choose from their methods and find what works best for you. For example, I learned from my advisor Bernd Sturmfels that, for me, quickly passing a draft back and forth with a coauthor can get a paper written much more quickly than agonizing about each revision before I share it.
- Remember you don't have to stay in academia. If you limit yourself to only doing research that will get you good post-doc offers, you might find you aren't able to focus on what seems highest impact (because often what makes a topic high impact is that it's important and neglected, and if a topic is neglected, it might not be trendy enough to land you a good post-doc). But since grad school is run by professors, becoming a professor is usually the most salient path forward for most grad students, and you might end up pressuring yourself to follow the standards of that path. When I graduated, I got my top choice of post-doc, but then I decided not to take it and instead to try earning to give as an algorithmic stock trader, and now I'm a research fellow at MIRI. In retrospect, I might have done more valuable work during my PhD itself if I'd decided in advance not to do a typical post-doc.
That's all I have for now. The main sentiment behind most of this, I think, is that you have to be deliberate to get the most out of a PhD program, rather than passively expecting it to make you into anything in particular. Grad school still isn't for everyone; far from it. But if you were seriously considering it at some point, and "do something more useful" felt like a compelling reason not to go, be sure to first consider the most useful version of grad school that you could reliably make for yourself... and then decide whether or not to do it.
Please email me (email@example.com) if you have more ideas for getting the most out of grad school!
Rationality Cardinality is a card game which takes memes and concepts from the rationality/Less Wrong sphere and mixes them with jokes to make a game. After nearly two years of card-creation, playtesting and development, today I'm taking the "beta" label off the web-based version of Rationality Cardinality. Go to the website, and if at least two other people are visiting at the same time, you can play against them.
I've put a lot of thought and a lot of work into the cards, and they're not just about humor; I also went systematically through blog posts and glossaries collecting terms and concepts that I think people should know about and be reminded of, and wrote concise explanations for them. It provides an easy way for everyone to quickly learn the jargon that's floating around, in a fun way; and it provides spaced repetition for concepts that might not otherwise have sunk in.
Rationality Cardinality will also soon have a print version. The catch is that in order to mass-produce it, I need to be sure there's enough demand. So, here's the deal: once enough people have played the online version, I'll launch a Kickstarter to sell print copies. You can speed this up by inviting people who might not otherwise see it to play.
“Receive my instruction, and not silver; and knowledge rather than choice gold. / For wisdom is better than rubies; and all the things that may be desired are not to be compared to it.”
Sorted by topic:
Darknet market related:
- Darknet Market archives, 2011-2015: 1.5tb of mirrors of scores of Tor-Bitcoin black-markets & forums 2013-2015, and other material; this is the single largest public archive of all DNM materials, and creating it was a major focus of mine since December 2013. The release also marks the end of my career as DNM expert - I’ve lost interest in the topic due to the apparent stability of the DNMs & being trapped in a local equilibrium
- DNM arrests compilation: a census of all known arrests Jan 2011-June 2015
- “Silk Goxed: How DPR used MtGox for hedging & lost big”
- there was an ICE subpoena on my Reddit account
Statistics & decision theory:
- When Does The Mail Come? A subjective Bayesian decision-theoretic analysis of local mail delivery times
- resorter: a tool for statistically re-ranking a set of ratings
- analysis of Effective Altruists’ donations as reported in the LW survey
- anthology on how “everything is correlated”
- electric vs stove kettle boiling-time analysis: collected some simple data on my kettles & demonstrated some statistics tools on the dataset like a Bayesian measurement-error model
- dysgenics power analysis: how much genetic data would it take to falsify those claims?
- noisy polls: modeling potentially falsified poll data
- Value of Information for suicide (example cost-benefit analysis of weakly predicting suicide)
- Air conditioner upgrade cost-benefit analysis
- probability/gerontology problem: can one visit 566 centenarians before any die? No.
- do causal networks explain why correlation≠causation is so often true?
- a little example of estimating scores from censored data
- 2015 modafinil community survey (not quite finished)
- Bitter Melon experimental & cost-benefit analysis
- Redshift self-experiment: screen-reddening software shifts bedtime forward by 20 minutes
- magnesium citrate experiment finished: initial benefits but apparent cumulative overdose led to net negative effect and mixed effects on sleep
- playing with inferring Bayesian networks for my Zeo & body weight data (powerful generalization of SEMs, but requires a lot of data before networks stabilize)
- Nootropics: initial results on LLLT correlated with large increases; but the followup randomized experiment showed zero effect
- LLLT re-analysis: no change in sleep as hypothesized by another LLLT user
- analysis of sceaduwe’s spirulina/allergies self-experiment (no reduction in allergies)
- Noopept experiment (no benefits)
- Treadmill spaced repetition experiment: expanded analysis to cover treadmill’s impact on successive reviews with SEM (no additional damage to recall beyond that implied by the original damage)
- lithium orotate experiment finished: no effects positive or negative
- “Effective Use of arbtt”: My window tracker/time-logger of choice is arbtt, which records X window info for later classification and analysis; but one of the challenges is that you don’t know how to set up arbtt, improve your environment, or write classification rules. So I wrote a tutorial.
- Time-lock crypto: wrote a Bash implementation of serial hashing time-lock crypto, link to all known implementations of hash time-lock crypto; discuss recent major theoretical breakthroughs involving Bitcoin
- Bicycle face
- “Rail travel at high speed is not possible because passengers, unable to breathe, would die of asphyxia.”
- did Fifty Shades of Grey have only 4k readers as the original Twilight fanfiction?
- switched to Patreon for donations
- continued sending out my newsletter; up to 24 issues now
- gwern.net CSS made mobile-friendly; should now be readable in an iPhone 6 browser
- optimized website loading (removed Custom Search Engine, A/B testing, non-validating XML, outbound link-tracking; simplified Disqus; minified JS, and fully async/deferred JS loading)
- proposal towards recurrent neural network for reinforcement learning of CSS
- metadata test: indicates moving it from the sidebar to the top of page works as well
- indentation test: no real result, defaulted to 2em
- floating footnotes test: verified no apparent harm (as hoped)
- paragraph indentation test (responding to anonymous complaint; they were wrong)
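One item above mentions a Bash implementation of serial hashing time-lock crypto. As a minimal sketch of that general construction (the linked implementation is gwern's and is in Bash; the Python below and all names in it are mine): the key is the end of a long hash chain, and because each hash depends on the previous one, recovering the key forces roughly n sequential hash evaluations, which is the time lock.

```python
import hashlib

def hash_chain(seed: bytes, n: int) -> bytes:
    """Apply SHA-256 n times in series. There is no known shortcut,
    so computing the chain costs ~n sequential hash evaluations."""
    h = seed
    for _ in range(n):
        h = hashlib.sha256(h).digest()
    return h

# The sender publishes the seed and chain length n (tuned to the
# desired unlock delay) and uses the chain's endpoint as the key.
seed = b"publicly released seed"
n = 50_000  # larger n => longer forced delay before decryption
key = hash_chain(seed, n)
```

One caveat of the plain hash-chain scheme is that the sender must do the same serial work once up front; avoiding that requires a trapdoor construction, which is part of what the linked page discusses.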
The winter solstice marks the darkest day of the year, a time to reflect on the past, present, and future. For several years and in many cities, Rationalists, Humanists, and Transhumanists have celebrated the solstice as a community, forming bonds to aid our work in the world.
Last year, more than one hundred people in the Bay Area came together to celebrate the Solstice. This year, we will carry on the tradition. Join us for an evening of song and story in the candlelight as we follow the triumphs and hardships of humanity.
The event itself is a community performance. There will be approximately two hours of songs and speeches, and a chance to eat and talk before and after. Death will be discussed. The themes are typically Humanist and Transhumanist, with a general audience that tends to be those who have found this site interesting, or care a lot about making our future better. There will be mild social pressure to sing along to songs.
When: December 12 at 7:00 PM - 9:00 PM
Where: Humanist Hall, 390 27th St, Oakland, CA 94612
Get tickets here. Bitcoin donation address: 1ARz9HYD45Midz9uRCA99YxDVnsuYAVPDk
Feel free to message me if you'd like to talk about the direction the Solstice is taking, things you like, or things you didn't like. Also, please let me know if you'd like to volunteer.
I am a co-founder of the Future of Life Institute based in Boston, and we are looking to fill two job openings that some LessWrongers might be interested in. We are a mostly volunteer-run organization working to reduce catastrophic and existential risks, and increase the chances of a positive future for humanity. Please consider applying and pass this posting along to anyone you think would be a good fit!
Technology has given life the opportunity to flourish like never before - or to self-destruct. The Future of Life Institute is a rapidly growing non-profit organization striving for the former outcome. We are fortunate to be supported by an inspiring group of people, including Elon Musk, Jaan Tallinn and Stephen Hawking, and you may have heard of our recent efforts to keep artificial intelligence beneficial.
PROJECT COORDINATOR
You are idealistic, hard-working and well-organized, and want to help our core team carry out a broad range of projects, from organizing events to coordinating media outreach. Living in the greater Boston area is a major advantage, but not an absolute requirement.
If you are excited about this opportunity, then please send an email to firstname.lastname@example.org with your cv and a brief statement of why you want to work with us. The title of your email must be 'Project coordinator'.
NEWS WEBSITE EDITOR
There is currently huge public interest in the question of how upcoming technology (especially artificial intelligence) may transform our world, and what should be done to seize opportunities and reduce risks.
You are idealistic and ambitious, and want to lead our effort to transform our fledgling news site into the number one destination for anyone seeking up-to-date and in-depth information on this topic, and anybody eager to join what is emerging as one of the most important conversations of our time.
You love writing and have the know-how and drive needed to grow and promote a website. You are self-motivated and enjoy working independently rather than being closely mentored. You are passionate about this topic, and look forward to engaging with our second-to-none global network of experts and using it to generate ideas and add value to the site. You look forward to developing and executing your vision for the site using the resources at your disposal: access to experts, and funds for commissioning articles, improving the user interface, and more. You prefer making things happen to waiting for others to take the initiative.
If you are excited about this opportunity, then please send an email to email@example.com with your cv and answers to these questions:
- Briefly, what is your vision for our site? How would you improve it?
- What other site(s) (please provide URLs) have attributes that you'd like to emulate?
- How would you generate the required content?
- How would you increase traffic to the site, and what do you view as realistic traffic goals for January 2016 and January 2017?
- What budget do you need to succeed, not including your own salary?
- What past experience do you have with writing and/or website management? Please include a selection of URLs that showcase your work.
The title of your application email must be 'Editor'. You can live anywhere in the world. A science background is a major advantage, but not a strict requirement.
As part of my broader project of promoting rationality to a wide audience, we developed clothing with rationality-themed slogans. This apparel is suited for aspiring rationalists to wear to show their affiliation with rationality, to remind themselves and other aspiring rationalists to improve, and to spread positive memes broadly.
My gratitude to all those who gave suggestions about and voted on these slogans, both on LW itself and the LW Facebook group. This is the first set of seven slogans that had the most popular support from Less Wrongers, and more will be coming soon.
The apparel is pretty affordable, starting at under $15. All profits will go to funding nonprofit work dedicated to spreading rationality to a broad audience.
Links to Clothing with Slogans:
This slogan conveys a key aspiration of every aspiring rationalist - to grow less wrong every day and have a clearer map of the territory. This is not only a positive meme, but also a clear sign of affiliation with rationality and the Less Wrong community in particular.
This slogan conveys the broad goal of rationality, namely for its participants to grow mentally stronger. This shirt helps prime the wearer and those around the wearer to focus on growing more rational, both epistemically and instrumentally. It is more broadly accessible than something like "Less Wrong Every Day."
This slogan conveys the intentional nature of how aspiring rationalists live their life, with a clear set of terminal goals and strategies to reach those goals.
This slogan and its variants received a lot of support from aspiring rationalists tired of discussions and debates with people who talk in broad abstract terms and fail to provide examples. It automatically reminds those you are talking with, both aspiring rationalists and non-rationalists, to be concrete and specific in their engagement with you, minimizing wasted airtime and inefficient discussions.
This slogan reminds the wearer and those around them of the vital skill of noticing confusion, key to becoming aware of gaps between one's map and the reality of the territory. Moreover, in field testing this design, the slogan proved especially fruitful for prompting conversations about rationality with those curious about it.
This slogan conveys and reinforces one of the most fundamental aspects of rationality - the eagerness and yearning to change one's mind based on evidence. The slogan is an especially impactful way of conveying rationality broadly, as updating beliefs based on evidence is something many intelligent people wish society embraced. Thus, it helps attract intellectually-oriented people into discussions about rationality.
This slogan has the same benefits as the above slogan, except being more outwardly oriented and expressing the message in a more meme-style format.
Other ideas for slogans that had support, in no particular order (Note that we limited the number of words to 4 longer words or 7 shorter words to fit on a T-shirt, and some of these combine Effective Altruism and Rationality):
- How Much Do You Believe That?
- Reach Your Goals Using Science
- Truth Is Not Partisan
- Glad To Give Citations
- What is True is Already So
- Reality Doesn’t Take Sides
- In Math We Trust
- In Reason We Trust
- Seeking Constructive Feedback
- Make New Mistakes Only
- Constantly Optimizing
- Absence Of Evidence Is Evidence Of Absence
- Rationality: Accurate Beliefs + Winning Decisions
- I Chose This Rationally
- Combining Heart And Head
- Effective Altruism
- Doing the Most Good Per Dollar
- Optimizing QALYs
- Making My Life Meaningful
- Purpose Comes from Within
I would appreciate feedback on the current designs. As you get and wear them, I'd appreciate learning about your experience wearing them, to learn what kind of reaction you get. So far, we've had quite positive reports from our field tests of the merchandise, with good conversations prompted by wearing these slogans.
Also, please share which of the additional slogans are your favorites, so we can get them done sooner. If you have additional ideas for slogans, list them in comments below, and remember the guidelines of 4 longer words to 7 short words, and making them accessible to a broad audience to spread rationality memes.
Besides clothing, what other kind of merchandise would you like to buy?
I look forward to your feedback! If you want to contact me privately about the merchandise or the broader project of spreading rationality to a broad audience, my email is firstname.lastname@example.org