LessWrong 2.0
Alternate titles: What Comes Next?, LessWrong is Dead, Long Live LessWrong!
You've seen the articles and comments about the decline of LessWrong. Why pay attention to this one? Because this time, I've talked to Nate at MIRI and Matt at Trike Apps about development for LW, and they're willing to make changes and fund them. (I've even found a developer willing to work on the LW codebase.) I've also talked to many of the prominent posters who've left about the decline of LW, and pointed out that the coordination problem could be deliberately solved if everyone decided to come back at once. Everyone that responded expressed displeasure that LW had faded and interest in a coordinated return, and often had some material that they thought they could prepare and have ready.
But before we leap into action, let's review the problem.
The Importance of Sidekicks
[Reposted from my personal blog.]
Mindspace is wide and deep. “People are different” is a truism, but even knowing this, it’s still easy to underestimate.
I spent much of my initial engagement with the rationality community feeling weird and different. I appreciated the principle and project of rationality as things that were deeply important to me; I was pretty pro-self improvement, and kept tsuyoku naritai as my motto for several years. But the rationality community, the people who shared this interest of mine, often seemed baffled by my values and desires. I wasn’t ambitious, and had a hard time wanting to be. I had a hard time wanting to be anything other than a nurse.
It wasn’t until this August that I convinced myself that this wasn’t a failure in my rationality, but rather a difference in my basic drives. It’s around then, in the aftermath of the 2014 CFAR alumni reunion, that I wrote the following post.
I don’t believe in life-changing insights (that happen to me), but I think I’ve had one–it’s been two weeks and I’m still thinking about it, thus it seems fairly safe to say I did.
At a CFAR Monday test session, Anna was talking about the idea of having an “aura of destiny”–it’s hard to fully convey what she meant and I’m not sure I get it fully, but something like seeing yourself as you’ll be in 25 years once you’ve saved the world and accomplished a ton of awesome things. She added that your aura of destiny had to be in line with your sense of personal aesthetic, to feel “you.”
I mentioned to Kenzi that I felt stuck on this because I was pretty sure that the combination of ambition and being the locus of control that “aura of destiny” conveyed to me was against my sense of personal aesthetic.
Kenzi said, approximately [I don't remember her exact words]: “What if your aura of destiny didn’t have to be those things? What if you could be like…Samwise, from Lord of the Rings? You’re competent, but most importantly, you’re *loyal* to Frodo. You’re the reason that the hero succeeds.”
I guess this isn’t true for most people–Kenzi said she didn’t want to keep thinking of other characters who were like this because she would get so insulted if someone kept comparing her to people’s sidekicks–but it feels like now I know what I am.
So. I’m Samwise. If you earn my loyalty, by convincing me that what you’re working on is valuable and that you’re the person who should be doing it, I’ll stick by you whatever it takes, and I’ll *make sure* you succeed. I don’t have a Frodo right now. But I’m looking for one.
It then turned out that quite a lot of other people recognized this, so I shifted from “this is a weird thing about me” to “this is one basic personality type, out of many.” Notably, Brienne wrote the following comment:
“Sidekick” doesn’t *quite* fit my aesthetic, but it’s extremely close, and I feel it in certain moods. Most of the time, I think of myself more as what TV Tropes would call a “dragon”. Like the Witch-king of Angmar, if we’re sticking with LOTR. Or Bellatrix Black. Or Darth Vader. (It’s not my fault people aren’t willing to give the good guys dragons in literature.)
For me, finding someone who shared my values, who was smart and rational enough for me to trust him, and who was in a much better position to actually accomplish what I most cared about than I imagined myself ever being, was the best thing that could have happened to me.
She also gave me what’s maybe one of the best and most moving compliments I’ve ever received.
In Australia, something about the way you interacted with people suggested to me that you help people in a completely free way, joyfully, because it fulfills you to serve those you care about, and not because you want something from them… I was able to relax around you, and ask for your support when I needed it while I worked on my classes. It was really lovely… The other surprising thing was that you seemed to act that way with everyone. You weren’t “on” all the time, but when you were, everybody around you got the benefit. I’d never recognized in anyone I’d met a more diffuse service impulse, like the whole human race might be your master. So I suddenly felt like I understood nurses and other people in similar service roles for the first time.
Sarah Constantin, who according to a mutual friend is one of the most loyal people who exists, chimed in with some nuance to the Frodo/Samwise dynamic: “Sam isn’t blindly loyal to Frodo. He makes sure the mission succeeds even when Frodo is fucking it up. He stands up to Frodo. And that’s important too.”
Kate Donovan, who also seems to share this basic psychological makeup, added “I have a strong preference for making the lives of the lead heroes better, and very little interest in ever being one.”
Meanwhile, there were doubts from others who didn’t feel this way. The “we need heroes, the world needs heroes” narrative is especially strong in the rationalist community. And typical mind fallacy abounds. It seems easy to assume that if someone wants to be a support character, it’s because they’re insecure–that really, if they believed in themselves, they would aim for protagonist.
I don’t think this is true. As Kenzi pointed out: “The other thing I felt like was important about Samwise is that his self-efficacy around his particular mission wasn’t a detriment to his aura of destiny – he did have insecurities around his ability to do this thing – to stand by Frodo – but even if he’d somehow not had them, he still would have been Samwise – like that kind of self-efficacy would have made his essence *more* distilled, not less.”
Brienne added: “Becoming the hero would be a personal tragedy, even though it would be a triumph for the world if it happened because I surpassed him, or discovered he was fundamentally wrong.”
Why write this post?
Usually, “this is a true and interesting thing about humans” is enough of a reason for me to write something. But I’ve got a lot of other reasons, this time.
I suspect that the rationality community, with its “hero” focus, drives away many people who are like me in this sense. I’ve thought about walking away from it, for basically that reason. I could stay in Ottawa and be a nurse for forty years; it would fulfil all my most basic emotional needs, and no one would try to change me. Because oh boy, have people tried to do that. It’s really hard to be someone who just wants to please others, and to be told, basically, that you’re not good enough–and that you owe it to the world to turn yourself ambitious, strategic, Slytherin.
Firstly, this is mean regardless. Secondly, it’s not true.
Samwise was important. So was Frodo, of course. But Frodo needed Samwise. Heroes need sidekicks. They can function without them, but function a lot better with them. Maybe it’s true that there aren’t enough heroes trying to save the world. But there sure as hell aren’t enough sidekicks trying to help them. And there especially aren’t enough talented, competent, awesome sidekicks.
If you’re reading this post, and it resonates with you… Especially if you’re someone who has felt unappreciated and alienated for being different… I have something to tell you. You count. You. Fucking. Count. You’re needed, even if the heroes don’t realize it yet. (Seriously, heroes, you should be more strategic about looking for awesome sidekicks. AFAIK only Nick Bostrom is doing it.) This community could use more of you. Pretty much every community could use more of you.
I’d like, someday, to live in a culture that doesn’t shame this way of being. As Brienne points out, “Society likes *selfless* people, who help everybody equally, sure. It’s socially acceptable to be a nurse, for example. Complete loyalty and devotion to “the hero”, though, makes people think of brainwashing, and I’m not sure what else exactly but bad things.” (And not all subsets of society even accept nursing as a Valid Life Choice.) I’d like to live in a world where an aspiring Samwise can find role models; where he sees awesome, successful people and can say, “yes, I want to grow up to be that.”
Maybe I can’t have that world right away. But at least I know what I’m reaching for. I have a name for it. And I have a Frodo–Ruby and I are going to be working together from here on out. I have a reason not to walk away.
We Haven't Uploaded Worms
In theory you can upload someone's mind onto a computer, allowing them to live forever as a digital form of consciousness, just like in the Johnny Depp film Transcendence.
But it's not just science fiction. Sure, scientists aren't anywhere near close to achieving such a feat with humans (and even if they could, the ethics would be pretty fraught), but now an international team of researchers have managed to do just that with the roundworm Caenorhabditis elegans.
—Science Alert
Uploading an animal, even one as simple as C. elegans, would be very impressive. Unfortunately, we're not there yet. What the people working on Open Worm have done instead is build a working robot based on C. elegans and show that it can do some of the things the worm can do.
The C. elegans nematode has only 302 neurons, and every individual has the same fixed wiring pattern. We've known this pattern, or connectome, since 1986. [1] In a simple model, each neuron has a threshold and fires if the weighted sum of its inputs exceeds that threshold. This means knowing the connections isn't enough: we also need to know the weights and thresholds. Unfortunately, we haven't figured out a way to read these values off of real worms. Suzuki et al. (2005) [2] ran a genetic algorithm to learn values for these parameters that would give a somewhat realistic worm, and demonstrated various wormlike behaviors in software. The recent stories about the Open Worm project describe them doing something similar in hardware. [3]
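To make the weights-and-thresholds point concrete, here's a minimal sketch (a toy model, not the Open Worm code; all the numbers are invented) showing that two networks with identical connectivity but different parameter values behave differently:

```python
# Toy sketch of a binary threshold neuron. The weights and threshold
# are made-up illustrative values -- the point is that the connectome
# alone (who connects to whom) doesn't determine behavior; the numbers do.

def fires(inputs, weights, threshold):
    """A neuron fires if the weighted sum of its inputs exceeds its threshold."""
    return sum(i * w for i, w in zip(inputs, weights)) > threshold

inputs = [1, 0, 1]  # which upstream neurons are currently firing

# Same connectivity, different parameters, different behavior:
print(fires(inputs, [0.9, 0.2, 0.4], threshold=1.0))  # True: 1.3 > 1.0
print(fires(inputs, [0.3, 0.2, 0.4], threshold=1.0))  # False: 0.7 < 1.0
```

Reading the connectome off a worm tells you which connections exist (which weights are nonzero), but not their values, and that's exactly the missing information.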
To see why this isn't enough, consider that nematodes are capable of learning. Sasakura and Mori (2013) [5] provide a reasonable overview. For example, nematodes can learn that a certain temperature indicates food, and then seek out that temperature. They don't do this by growing new neurons or connections; they must be updating their connection weights. All the existing worm simulations treat weights as fixed, which means they can't learn. They also don't read weights off of any individual worm, which means we can't talk about any specific worm as having been uploaded.
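To show concretely what "updating connection weights" means, here's a toy Hebbian-style sketch (the update rule and learning rate are invented for illustration, not drawn from the worm literature): the wiring stays fixed while a connection's strength changes with experience.

```python
# Toy sketch: learning without growing new neurons or connections.
# The wiring is fixed; only the strength of an existing connection changes.
# The Hebbian-style rule and learning rate are made up for illustration.

def hebbian_update(weight, pre_active, post_active, rate=0.1):
    """Strengthen a connection when the neurons on both ends fire together."""
    if pre_active and post_active:
        return weight + rate
    return weight

w = 0.2
for _ in range(5):                 # repeated paired firing...
    w = hebbian_update(w, True, True)
print(round(w, 1))                 # ...strengthens the connection: 0.7
```

A simulation that holds every weight constant has no mechanism like this, which is why the current models can't reproduce the temperature-food learning described above.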
If this doesn't count as uploading a worm, however, what would? Consider an experiment where someone trains one group of worms to respond to a stimulus one way and another group to respond the other way. Both groups are then scanned and simulated on the computer. If the simulated worms responded to simulated stimuli the same way their physical versions had, that would be good progress. Additionally, you would want to demonstrate that similar learning was possible in the simulated environment.
(In a 2011 post on what progress with nematodes might tell us about uploading humans, I looked at some of this research. Not much has changed with nematode simulation since then. Moore's law is doing much worse in 2014 than it was in 2011, however, which makes the prospects for whole brain emulation substantially worse.)
I also posted this on my blog.
[1] The Structure of the Nervous System of the Nematode Caenorhabditis elegans, White et al. (1986).
[2] A Model of Motor Control of the Nematode C. elegans with Neuronal Circuits, Suzuki et al. (2005).
[3] It looks like instead of learning weights Busbice just set them all to +1 (excitatory) and -1 (inhibitory). It's not clear to me how they knew which connections were which; my best guess is that they're using the "what happens to work" details from [2]. Their full writeup is [4].
[4] The Robotic Worm, Busbice (2014).
[5] Behavioral Plasticity, Learning, and Memory in C. elegans, Sasakura and Mori (2013).
A "Failure to Evaluate Return-on-Time" Fallacy
I don't have a good name for this fallacy, but I hope to work it out with everyone here through thinking and discussion.
It goes like this: a large majority of otherwise smart people spend time doing semi-productive things, when there are massively productive opportunities untapped.
A somewhat silly example: Let's say someone aspires to be a comedian, the best comedian ever, and to make a living doing comedy. He wants nothing else, it is his purpose. And he decides that in order to become a better comedian, he will watch re-runs of the old television cartoon 'Garfield and Friends' that was on TV from 1988 to 1995.
How the Grinch Ought to Have Stolen Christmas
On Dec. 24, 1957, a Mr. T. Grinch attempted to disrupt Christmas by stealing associated gifts and decorations. His plan failed, the occupants of Dr. Seuss' narrative remained festive, and Mr. Grinch himself succumbed to cardiac hypertrophy. To help others avoid repeating his mistakes, I've written a brief guide to properly disrupting holidays. Holiday-positive readers should read this with the orthogonality thesis in mind.

Fighting Christmas is tricky, because the obvious strategy - making a big demoralizing catastrophe - doesn't work. No matter what happens, the media will put the word Christmas in front of it and convert your scheme into even more free advertising for the holiday. It'll be a Christmas tragedy, a Christmas earthquake, a Christmas wave of foreclosures. That's no good; attacking Christmas takes more finesse.
The first thing to remember is that, whether you're stealing a holiday or a magical artifact of immense power, it's almost always a good idea to leave a decoy in its place. When people notice that something important is missing, they'll go looking to find or replace it. This rule can be generalized from physical objects to abstractions like sense of community. T. Grinch tried to prevent community gatherings by vandalizing the spaces where they would've taken place. A better strategy would've been to promise to organize a Christmas party, then skip the actual organizing and leave people to sit at home by themselves. Unfortunately, that approach doesn't scale, but someone came up with a very clever solution: encourage people to watch Christmas-themed films instead of talking to each other, achieving almost as much erosion of community without the backlash.
I'd like to particularly applaud Raymond Arnold, for inventing a vaguely-Christmas-like holiday in December, with no gifts, and death (rather than cheer) as its central theme [1]. I really wish it didn't involve so much singing and community, though. I recommend raising the musical standards; people who can't sing at studio-recording quality should not be allowed to sing at all.
Gift-giving traditions are particularly important to stamp out, but stealing gifts is ineffective because they're usually cheap and replaceable. A better approach would've been to promote giving undesirable gifts, such as religious sculptures and fruitcake. Even better would be to convince the Mayor of Whoville to enact bad economic policies, and grind the Whos into a poverty that would make gift-giving difficult to sustain. Had Mr. Grinch pursued this strategy effectively, he could've stolen Christmas and Birthdays and gotten himself a Nobel Prize in Economics [2].
Finally, it's important to avoid rhyming. This is one of those things that should be completely obvious in hindsight, with a little bit of genre savvy; villains like us win much more often in prose and in life than we do in verse.
And with that, I'll leave you with a few closing thoughts. If you gave presents, your friends are disappointed with them. Any friends who didn't give you presents don't care about you, and any friends who did give you presents gave you cheap and lame ones for the same reason. If you have a Christmas tree, it's ugly, and if it's snowing, the universe is trying to freeze you to death.
Merry Christmas!
[1] I was initially concerned that the Solstice would pattern-match and mutate into a less materialistic version of Christmas, but running a Kickstarter campaign seems to have addressed that problem.
[2] This is approximately the reason why Alfred Nobel specifically opposed the existence of that prize.