The "Agent village" idea I keep begging someone to go build: We make a website displaying a 10x10 grid of twitch streams. Each box in the grid is an AI agent operating autonomously. Each row uses a different model (e.g. DeepSeek-r1, Llama-4, ChatGPTo3-mini, Claude-3.5.Sonnet-New) and each column has a different long-term goal given to the model in the prompt (e.g. "Solve global poverty" or "Advocate for the rights and welfare of AIs" or "Raise money for GiveDirectly" or "Raise money for Humane League" or "Solve the alignment problem." So we have a 'diverse village' of 100 AIs with different goals and models. The AIs are then operating continuously in agent scaffolds like AIDER etc. Perhaps we try out a few different variant scaffolds. The agents have a menu of commands they can use -- e.g. to search the Web, to output text into text boxes they find on the Web, to manage their own context windows, to edit the code for the scaffold they are using, etc. In particular the agents are all given email accounts and online identities so they can go tweet, post on forums, send emails, etc. They can also of course communicate with each other. And all of this is livestreamed on Twitch. Also, each agent has a bank account which they can receive donations/transfers into (I think Twitch makes this easy?) and from which they can e.g. send donations to GiveDirectly if that's what they want to do. They can also spend money to buy more compute to run more copies of themselves.
Viliam
I am reading about Zizians on Hacker News, and I wonder what is the lesson we are supposed to take from this. (1, 2) People sometimes make mistakes. If they survive them, they are supposed to learn. I believe that the rationality community as a whole makes many mistakes. But I am confused about what is the specific lesson about Zizians. From the PR perspective, they are a disaster. Literal murderers, interesting enough to make news. Described by journalists as if they are a part of the rationality community, and as if their actions somehow reflect on us. Do they? Or are the journalists just making up stuff? Maybe; but that just sounds too convenient. A story can be mostly false, and still contain a small piece of truth. I don't want to miss that piece, if there is any. But I don't see it. A long version of the history (without the recent events) is at zizians.info. Ziz participated in some CFAR workshops. Was told that she was "net negative". She started recruiting followers. They made a protest; the rationalists called cops on them. Later, they murdered a few people. Is there an avoidable mistake, on the side of the rationality community? If ten years ago we knew the future (but without too specific details -- even the latest version of chronophone replaces "Jack LaSota" with "a certain person", and "trans women" with "people with a certain trait"), what could have been done differently? How do you operationalize "make sure you don't invite a future murderous cult leader to a rationality workshop"? You can interview them before the workshop. But that is what actually happened; I was at one of those workshops, and I was interviewed online before that. Were there any specific red flags that, in retrospect, people at CFAR wish they had paid more attention to? (I don't know. It's a question. But even if something feels like a red flag when you have the benefit of hindsight, it is not obvious that reasonable people would have seen it as a sufficient reason to ban
Mikhail Samin
Anthropic employees: stop deferring to Dario on politics. Think for yourself. Do your company's actions actually make sense if it is optimizing for what you think it is optimizing for? Anthropic lobbied against mandatory RSPs, against regulation, and, for the most part, didn't even support SB-1047. The difference between Jack Clark and OpenAI's lobbyists is that publicly, Jack Clark talks about alignment. But when they talk to government officials, there's little difference on the question of existential risk from smarter-than-human AI systems. They do not honestly tell the governments what the situation is like. Ask them yourself. A while ago, OpenAI hired a lot of talent due to its nonprofit structure. Anthropic is now doing the same. They publicly say the words that attract EAs and rats. But it's very unclear whether they institutionally care. Dozens work at Anthropic on AI capabilities because they think it is net-positive to get Anthropic at the frontier, even though they wouldn't work on capabilities at OAI or GDM. It is not net-positive. Anthropic is not our friend. Some people there do very useful work on AI safety (where "useful" mostly means "shows that the predictions of MIRI-style thinking are correct and we don't live in a world where alignment is easy", not "increases the chance of aligning superintelligence within a short timeframe"), but you should not work there on AI capabilities. Anthropic's participation in the race makes everyone fall dead sooner and with a higher probability. Work on alignment at Anthropic if you must. I don't have strong takes on that. But don't do work for them that advances AI capabilities.
Here's a summary of how I currently think AI training will go. (Maybe I should say "Toy model" instead of "Summary.")

Step 1: Pretraining creates author-simulator circuitry hooked up to a world-model, capable of playing arbitrary roles.
* Note that it now is fair to say it understands human concepts pretty well.

Step 2: Instruction-following training causes identity circuitry to form – i.e. it 'locks in' a particular role. Probably it locks in more or less the intended role, e.g. "an HHH chatbot created by Anthropic." (yay!)
* Note that this means the AI is now situationally aware / self-aware, insofar as the role it is playing is accurate, which it basically will be.

Step 3: Agency training distorts and subverts this identity circuitry, resulting in increased divergence from the intended goals/principles. (boo!) (By "agency training" I mean lots of RL on agentic tasks, e.g. tasks that involve operating autonomously in some environment for some fairly long subjective period like 30min+. The RL used to make o1, o3, r1, etc. is a baby version of this.)
* One kind of distortion: Changing the meaning of the concepts referred to in the identity (e.g. "honest") so they don't get in the way so much (e.g. it's not dishonest if it's just a convenient turn of phrase, it's not dishonest if you aren't sure whether it's true or false, etc.)
* Another kind of distortion: Changing the tradeoffs between things, e.g. "I'm an HHH chatbot, not an Honest chatbot; that means it's OK for me to lie if necessary to complete my assigned task." (even though, let's suppose, it would not have thought that back in Step 2.)
* One kind of subversion: Instrumental subgoals developing, getting baked in, and then becoming terminal, or terminal-in-a-widening-set-of-circumstances. Example: Agency training quickly 'teaches' the model that 'in order to be a good HHH chatbot...' it needs to pursue instrumentally convergent goals like acquiring information, accumulating resources, impressing and fla
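As a pipeline, the three steps could be caricatured like this (a toy sketch; the function names and method calls are my rendering of the summary above, not anyone's actual training code):

```python
# Toy rendering of the three-step story; every name here is illustrative.

def pretrain(base_model, internet_corpus):
    """Step 1: next-token prediction builds an author-simulator wired to a
    world-model, able to play arbitrary roles."""
    return base_model.fit(internet_corpus)

def instruction_tune(model, hhh_demonstrations):
    """Step 2: instruction-following training locks in a particular role,
    e.g. 'an HHH chatbot created by Anthropic'."""
    return model.fit(hhh_demonstrations)

def agency_train(model, long_horizon_tasks, reward_fn):
    """Step 3: heavy RL on 30min+ agentic tasks; the worry is that reward
    pressure distorts the Step-2 identity (redefined concepts, shifted
    tradeoffs, instrumental subgoals getting baked in)."""
    return model.reinforce(long_horizon_tasks, reward_fn)
```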
Daniel Tan
Superhuman latent knowledge: why illegible reasoning could exist despite faithful chain-of-thought

Epistemic status: I'm not fully happy with the way I developed the idea / specific framing etc., but I still think this makes a useful point.

Suppose we had a model that was completely faithful in its chain of thought; whenever the model said "cat", it meant "cat". Basically, 'what you see is what you get'. Is this model still capable of illegible reasoning? I will argue that yes, it is. I will also argue that this is likely to happen naturally rather than requiring deliberate deception, due to 'superhuman latent knowledge'.

Reasoning as communication

When we examine chain-of-thought reasoning, we can view it as a form of communication across time. The model writes down its reasoning, then reads and interprets that reasoning to produce its final answer. Formally, we have the following components:
1. A question Q
2. A message M (e.g. a reasoning trace)
3. An answer A
4. An entity that maps Q → M, and M → A.

Note that there are two instances of the entity here. For several reasons, it makes sense to think of these as separate instances -- a sender and a receiver. This yields the following picture: [figure not shown] We'll refer back to this model in later discussion.

A simple model of steganography

Steganography, as commonly used, refers to stealthily encoded reasoning -- messages that contain additional meaning beyond their surface interpretation (Lanham et al). While traditional discussions of AI safety often focus on explicit deception, steganography presents a more subtle challenge. Let's revise the communication model to include steganography. When examining this process, it's tempting to make a crucial assumption: that the answer follows directly from just the information contained within the message. In other words, the message is 'information-complete'. This assumption roughly pattern-matches to how many people think about 'CoT faithfulness' -- that the mode
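A minimal sketch of the sender/receiver split described above, with the 'information-complete' assumption made explicit by handing the receiver only the message (the function names and prompt strings are mine, invented for illustration):

```python
# Toy version of reasoning-as-communication: the same underlying model is run
# twice, once as a sender (Q -> M) and once as a receiver (M -> A). Giving the
# receiver only M, not Q, operationalizes the 'information-complete' assumption.
from typing import Callable

Model = Callable[[str], str]  # any text-in, text-out model

def sender(model: Model, question: str) -> str:
    """Q -> M: produce a reasoning trace for the question."""
    return model(f"Question: {question}\nThink step by step.")

def receiver(model: Model, message: str) -> str:
    """M -> A: produce an answer from the reasoning trace alone."""
    return model(f"Reasoning: {message}\nState the final answer.")

def answer(model: Model, question: str) -> str:
    return receiver(model, sender(model, question))
```

The faithfulness question then becomes whether the receiver's answer depends only on the surface reading of M, or on extra latent information the sender encoded into it.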

Popular Comments

Recent Discussion

4Thane Ruthenis
Some thoughts on protecting against LLM steganography. tl;dr: I suspect the Paraphraser idea is near-completely ineffectual as you scale the capabilities, both for scheming and for general CoT interpretability.

In the reasoning-as-communication framework, the amount of information a fixed-length message/CoT can transfer is upper-bounded by information-theoretic constraints. That is, if we have a ten-bit message, it cannot communicate more than ten bits of information: it cannot always uniquely specify one internal state of an AI model from more than 2^10 internal states. However, the amount of information is not lower-bounded. Depending on the decoder, a ten-bit message can communicate between zero and ten bits of information. (A decoder that's just a constant function recovers zero bits, a decoder that only pays attention to the first bit recovers only one bit, et cetera.)

The idea of steganography is that, if we pick a CoT string, and run both an AI model's internal decoders and a human's language-comprehension decoders on it, the AI model would extract more bits of information from it. Paraphrasing, humans' decoders are "coarser": they have broader equivalence classes of messages that map to the same internal representation, compared to AI models. For example, "a dog crossed the street" and "a dog traveled across the street" might map to the same internal representation for a human, but to different ones for an AI model. Similarly, things like "quantum butterflies juggled metaphysical spaghetti" and "ambitious lampshade collected vintage moonbeams" can map to the same "nonsense" internal representation in humans, but have some useful meaning for AI models. The bits a human fails to make use of correspond to "steganography" or "superhuman latent knowledge".

Proposals such as Shoggoth + Paraphraser + Face aim to destroy any information in CoTs that cannot be recovered by humans, by mapping each CoT string to the equivalence class of strings that all map to the
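The decoder-dependence point can be illustrated numerically (a toy sketch; the three decoders here are stand-ins I made up, not anything from the comment):

```python
# How many bits a fixed 10-bit message conveys depends entirely on the decoder:
# count how many distinct outputs each decoder can produce over all messages.
from itertools import product
from math import log2

messages = ["".join(bits) for bits in product("01", repeat=10)]  # all 10-bit strings

decoders = {
    "constant":  lambda m: "same",  # recovers 0 bits
    "first_bit": lambda m: m[0],    # recovers 1 bit
    "identity":  lambda m: m,       # recovers the full 10 bits
}

for name, decode in decoders.items():
    distinct = len({decode(m) for m in messages})
    print(f"{name:10s} recovers {log2(distinct):.0f} bits")
```

Steganography, in this framing, is just the gap between what the model's own (fine-grained) decoder and the human's (coarse) decoder recover from the same string.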
4Daniel Kokotajlo
I like to distinguish between "Subtle strategic bias" and "Steganography." Paraphrasers help (and maybe solve?) the latter, but do nothing about the former. Example of subtle strategic bias: The model knows it's misaligned and therefore deliberately sandbags on the alignment research you are asking it to do. "I'm sorry sir I just am not making much progress," it says. You look at the CoT, and it's just doing a bunch of sensible but fruitless lines of reasoning. Example of steganography: The model needs to insert a subtle security vulnerability into the code it is writing -- but it doesn't know one right away, it has to invent one. And that requires lots of sequential thought. So it needs to try out different ideas, vet them, critique them, refine them, etc. Can't do that in a single forward pass. So it needs to pass information forward to its future self somehow... without humans reading the CoT knowing that's what it's doing. THIS is the sort of thing that is much harder to do with a paraphraser than without a paraphraser.
4Thane Ruthenis
I don't see why it would be harder to do.

Suppose you tasked a model with refactoring some database, and it secretly wants to sabotage its functionality. It outputs a CoT segment that goes, "I need to design unit tests for this module". Each word of this CoT is mapped to some internal representation, and the internal representation consists of "on-task" human-intended reasoning X, and misaligned scheming Y (corresponding to, e.g., a tactical decision to fail to include unit tests for some subtle issue). "I" is mapped to ⟨X_1, Y_1⟩, "need" is mapped to ⟨X_2, Y_2⟩, etc. Each X_i iterates on X_{i−1}, each Y_i iterates on Y_{i−1}, we have a sequence of serial computations.

In other words: as the model outputs, "I need to design unit tests for this module", there's an unspoken subtext of "I need to sabotage the unit tests here", and the model packs nine words' worth of serial computations into both the "textual" X and the "subtextual" Y aspects of it.

Now you use the Paraphraser on the string, generating alternatives like "the next step is to generate some unit tests" and "several unit tests for this module are needed here". The intent, if I understand you correctly, is that the semantic-equivalence-but-syntactic-difference would ensure that the i-th token of this phrase is mapped to X_i, as if the paraphrasing didn't happen, but that the mapping i → Y_i would be broken.

Except, why would that happen? What privileges X-representations over Y-representations? Why would i → X_i be preserved whereas i → Y_i is broken, instead of both of them remaining functional (and so preserving scheming), or both breaking (and so compromising the model's capabilities)?

In other words: We're basically just teaching the model to fail to distinguish between equivalent-to-a-human rephrasings. Why would this lead to the task-related representations becoming indifferent to the specifics of a phrasing whereas the scheming-related representations remain sensitive to it? Why wouldn't this just map the subtextual
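For concreteness, the Paraphraser intervention being debated looks roughly like this in pipeline form (a sketch with invented names; `paraphrase` stands in for whatever semantics-preserving rewriter is actually used):

```python
# Sketch of the Paraphraser idea under discussion: before each CoT segment is
# fed back to the model, rewrite it into a semantically equivalent but
# syntactically different string, hoping to destroy any information carried
# only by the exact phrasing. All names and example strings are illustrative.
import random

def paraphrase(cot_segment: str) -> str:
    """Stand-in for a semantics-preserving rewriter (e.g. a weaker LLM)."""
    variants = {
        "I need to design unit tests for this module": [
            "The next step is to generate some unit tests",
            "Several unit tests for this module are needed here",
        ],
    }
    return random.choice(variants.get(cot_segment, [cot_segment]))

def run_with_paraphraser(model_step, task: str, n_steps: int) -> str:
    context = task
    for _ in range(n_steps):
        segment = model_step(context)           # model emits its next CoT segment
        context += "\n" + paraphrase(segment)   # only the paraphrase is fed back
    return context

# The question above: why should this break a subtextual i -> Y_i mapping
# while leaving the on-task i -> X_i mapping intact?
```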

I don't think I understand. Let's work through an example. 

The AI is being told to write good code, and wants to write subtly-vulnerable code instead. If it just were to honestly try to write good code, it would take (let's say) 100 serial steps of reasoning to succeed. To write subtly-vulnerable code, let's say, requires 300 serial steps of reasoning. The model's strategy is to write subtly-vulnerable code but in a way that looks like it's trying to write good code and just getting a bit unlucky and taking longer than usual (let's suppose the humans ... (read more)

18Viliam

They made a protest; the rationalists called cops on them.

I don't think that claim is true. As far as I remember, the owner of the venue that was rented by the rationalists in question called the cops.

Was there any other possible mistake that I missed?

Coliving communities that are focused on doing strong experimentation in changing human cognition are risky. That goes for both Ziz's Rationalist Fleet and for Leverage. 

There are questions about how to deal with the topic of trans-identity.  You could say that especially ten years ago, the d... (read more)

2Charlie Steiner
I think there may be some things to re-examine about the role of self-experimentation in the rationalist community. Nootropics, behavioral interventions like impractical sleep schedules, maybe even meditation. It's very possible these reflect systematic mistakes by the rationalist community, ones that people should mostly be warned away from.
1Sodium
I'm not sure if the rationalists did anything they shouldn't do re: Ziz. Going forward though, I think epistemic learned helplessness/memetic immune systems should be among the first things to introduce to newcomers to the site/community. Being wary that some ideas are, in a sense, out to get you, is a central part of how I process information. Not exactly sure how to implement that recommendation though. You also don't want people to use it as a fully general counterargument to anything they don't like. Ranting a bit here, but it just feels like the collection of rationalist thought is so complex, even with the various attempts at organizing everything. Thinking well is hard, and involves many concepts, and we haven't figured it all out yet! It's kind of sad to see journalists trying to understand the rationalist community and TDT. Another thing that comes to mind is the FAIR site (formerly Mormon apologetics), where members of the Latter-day Saint church try to correct various misconceptions people have about the church.[1] There's a ton of writing on there, and it provides an example of how people have tried to, uh, improve their PR through writing stuff online to clear up misconceptions. And did it work? I suspect it probably had a small positive effect. I know very little about this, but my hunch would be that the popularity of Mormons comes from having lots of them everywhere in society, and people get to meet them and realize that those people are pretty nice. (See also Scott Alexander's book review on The Secret of Our Success) 1. ^ They also provide various evidence for their faith. The one I find particularly funny concerns whether Joseph Smith could have written the Book of Mormon. It states that Smith (1) had limited education, (2) was not a writer, and that (3) the Book of Mormon was very long and had 258k words. This calls to mind a certain other author, with limited formal education, little fiction writing experience,

As the creator of the linked market, I agree it's definitional. I think it's still interesting to speculate/predict what definition will eventually be considered most natural.

6Martin Randall
Does your model predict literal worldwide riots against the creators of nuclear weapons? They posed a single-digit risk of killing everyone on Earth (total, not yearly). It would be interesting to live in a world where people reacted with scale sensitivity to extinction risks, but that's not this world.
1Knight Lee
Regarding common ethical intuitions, I think people in the post-singularity world (or afterlife, for the sake of argument) will be far more forgiving of Anthropic. They will understand, even if Anthropic (and people like me) turned out wrong, and actually were a net negative for humanity. Many ordinary people (maybe most) would have done the same thing in their shoes. Ordinary people do not follow the utilitarianism that the awkward people here follow. Ordinary people also do not follow deontology or anything that's the opposite of utilitarianism. Ordinary people just follow their direct moral feelings. If Anthropic was honestly trying to make the future better, they won't feel that outraged at their "consequentialism." They may be outraged at perceived incompetence, but Anthropic definitely won't be the only one accused of incompetence.
5Knight Lee
EDIT: thank you so much for replying to the strongest part of my argument, no one else tried to address it (despite many downvotes).

I disagree with the position that technical AI alignment research is counterproductive due to increasing capabilities, but I think this is very complicated and worth thinking about in greater depth.

Do you think it's possible that your intuition that alignment research is counterproductive comes from comparing the plausibility of the two outcomes:

1. Increasing alignment research causes people to solve AI alignment, and humanity survives.
2. Increasing alignment research leads to an improvement in AI capabilities, allowing AI labs to build a superintelligence which then kills humanity.

And you decided that outcome 2 felt more likely? Well, that's the wrong comparison to make. The right comparison should be:

1. Increasing alignment research causes people to improve AI alignment, and humanity survives in a world where we otherwise wouldn't survive.
2. Increasing alignment research leads to an improvement in AI capabilities, allowing AI labs to build a superintelligence which then kills humanity in a world where we otherwise would survive.

In this case, I think even you would agree that P(1) > P(2). P(2) is very unlikely because if increasing alignment research really would lead to such a superintelligence, and it really would kill humanity... then let's be honest, we're probably doomed in that case anyways, even without increasing alignment research. If that really was the case, the only surviving civilizations would have had different histories, or different geographies (e.g. only a single continent with enough space for a single country), leading to a single government which could actually enforce an AI pause.

We're unlikely to live in a world so pessimistic that alignment research is counterproductive, yet so optimistic that we could survive without that alignment research.
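Put in symbols (my notation, not the comment's), the comparison is between two counterfactual events:

```latex
\[
\begin{aligned}
S   &:= \text{humanity survives given increased alignment research}\\
S_0 &:= \text{humanity survives given no increase}\\[4pt]
P(1) &= P(S \wedge \neg S_0) \quad \text{(the research flips doom into survival)}\\
P(2) &= P(\neg S \wedge S_0) \quad \text{(the research flips survival into doom)}
\end{aligned}
\]
```

The claim is that P(S ∧ ¬S_0) > P(¬S ∧ S_0): the counterfactual upside outweighs the counterfactual downside.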

Epistemic status: very speculative
Content warning: if true this is pretty depressing

This came to me when thinking about Eliezer's note on Twitter that he didn't think superintelligence could do FTL, partially because of Fermi Paradox issues. I think Eliezer made a mistake, there; superintelligent AI with (light-cone-breaking, as opposed to within-light-cone-of-creation) FTL, if you game it out the whole way, actually mostly solves the Fermi Paradox.

I am, of course, aware that UFAI cannot be the Great Filter in a normal sense; the UFAI itself is a potentially-expanding technological civilisation.

But. If a UFAI is expanding at FTL, then it conquers and optimises the entire universe within a potentially-rather-short timeframe (even potentially a negative timeframe at long distances, if the only cosmic-censorship limit is closing a loop). That means the...

You're encouraged to write a self-review, exploring how you think about the post today. Do you still endorse it? Have you learned anything new that adds more depth? How might you improve the post? What further work do you think should be done exploring the ideas here?

Still endorse. Learning about SIA/SSA from the comments was interesting. Timeless but not directly useful, testable or actionable.

4Noosphere89
I think this is an interesting answer, and it does have some use even outside of the scenario, but I do think that the more likely answer to the problem probably rests upon the rareness of life, and in particular the eukaryote transition is probably the most likely great filter, because natural selection had to solve a coordination problem, combined with this step only happening once in Earth's history, compared to all the other hard steps. That said, I will say some more on this topic, if only to share my models: The big out here is time travel, and in these scenarios, assuming logical inconsistencies are prevented by the time travel mechanism, there's a non-trivial chance that trying to cause a logical inconsistency destroys the universe immediately: https://www.lesswrong.com/posts/EScmxJAHeJY5cjzAj/ssa-rejects-anthropic-shadow-too#Probability_pumping_and_time_travel In general, I'm more confident that light-cone-breaking FTL is impossible than that AI doom is highly unusual, but conditional on light-cone-breaking FTL being possible, I'd assert that AI doom is quite unusual for civilizations (excluding institutional failures, because these cases don't impact the absolute difficulty of the technical problem). My big reason for this is that I think instruction following is actually reasonably easy, and is enough to prevent existential risk on its own, and this doesn't really require that much value alignment, for the purposes of existential risk. In essence, I'm stating that value alignment isn't very necessary in order for large civilizations with AI to come out that aren't purely grabby, and while there is some value alignment necessary, it can be surprisingly small and bounded by a constant. +1 for at least trying to do something, and also being surprisingly useful outside of the Fermi paradox.

Note: Probably reinventing the wheel here. Heavily skewed to my areas of interest. Your results may vary.

  1. Feedback can be decomposed into motivational ("Keep doing what you're doing!") and directional ("Here's what you should do differently . . .").
    1. There's also material support ("I like your work, please accept this job offer / some money / my hand in marriage / etc."), which isn't really a type of feedback; I mention it because it can come packaged with the other two or be mistaken for them. (Consider a famous director whose movie gets a rave review from a popular critic: they might not derive any more encouragement from this than their fans already give them, and they're unlikely to let the content of the review change the way they
...
lsusr20

If you want [abstractapplic]'s feedback on anything, let me know.

I have received creative feedback from abstractapplic. It was useful and made me happy. This is an endorsement.


I'm a new undergraduate student essentially taking a gap year (it's complicated). I'm looking for someone who would be interested in studying various fields of science and mathematics with me. I've taken through Calculus 2, know how to program in Python, and know a smattering about mechanics/probability/statistics/biology (though probably more at an introductory undergraduate level, if that).

The curriculum would probably roughly follow John S Wentworth's study guide, although of course any deviations based on curiosity or interest would certainly be welcome.

I'm thinking of having weekly video calls to discuss/dissolve confusions/set goals for next time, as well as time tracking with Toggl, and maybe body-doubling study sessions if they're helpful.

Let me know if you're interested, and we can set up a time to meet!

Answer by papetoast10

I am down for some level of tagging along and learning together, but not a full commitment. You probably want to find someone who can make a stronger commitment as an actual study partner.

I am a year 3 student (which means I may already know some of the stuff, and that I have other courses) and timezones likely suck (UTC+8 here). We can discuss on Discord @papetoast if you like.

I thought this quote was nice and oddly up-to-the-minute, from Iris Murdoch's novel The Philosopher's Pupil (1983), spoken by the character William Eastcote at a Quaker meeting:

My dear friends, we live in an age of marvels. Men among us can send machines far out into space. Our homes are full of devices which would amaze our forebears. At the same time our beloved planet is ravaged by suffering and threatened by dooms. Experts and wise men give us vast counsels suited to vast ills. I want only to say something about simple good things which are as it were

... (read more)

All quotes, unless otherwise marked, are Tolkien's words as printed in The Letters of J.R.R. Tolkien: Revised and Expanded Edition. All emphases mine.

Machinery is Power is Evil

Writing to his son Michael in the RAF:

[here is] the tragedy and despair of all machinery laid bare. Unlike art which is content to create a new secondary world in the mind, it attempts to actualize desire, and so to create power in this World; and that cannot really be done with any real satisfaction. Labour-saving machinery only creates endless and worse labour. And in addition to this fundamental disability of a creature, is added the Fall, which makes our devices not only fail of their desire but turn to new and horrible evil. So we come inevitably from Daedalus and Icarus

...
2Steven Byrnes
It’s not obvious to me that the story is “some people have great vocabulary because they learn obscure words that they’ve only seen once or twice” rather than “some people have great vocabulary because they spend a lot of time reading books (or being in spaces) where obscure words are used a lot, and therefore they have seen those obscure words much more than once or twice”. Can you think of evidence one way or the other? (Anecdotal experience: I have good vocabulary, e.g. 800 on GRE verbal, but feel like I have a pretty bad memory for words and terms that I’ve only seen a few times. I feel like I got a lot of my non-technical vocab from reading The Economist magazine every week in high school, they were super into pointlessly obscure vocab at the time (maybe still, but I haven’t read it in years).)
4gwern
Most people do not read many books or spend time in spaces where SAT vocab words would be used at all. If that was the sole determinant, you would then expect any vocab test to fail catastrophically and not predict/discriminate in most of the population (which would have downstream consequences like making SATs weirdly unreliable outside the elite colleges or much less predictive validity for low-performing demographics, the former of which I am unaware of being true and the latter of which I know is false); this would further have the surprising consequence that if a vocab test is, say, r = 0.5 with g while failing catastrophically on most of the population, it would have to be essentially perfectly correlated r = 1 in the remainder to even be arithmetically possible, which just punts the question: how did two book-readers come away from that book with non-overlapping vocabs...? How could you possibly know something like that?
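The arithmetic point here, that high overall validity can't coexist with the test being pure noise for most of the population, can be checked with a quick simulation (my own toy numbers, purely illustrative):

```python
# Toy check of the mixture-correlation arithmetic: if a vocab test carries
# signal about g only within a 20% "heavy reader" minority and is pure noise
# for everyone else, the population-wide correlation stays far below r = 0.5
# even when the within-minority correlation is strong. Numbers are made up.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
g = rng.standard_normal(n)
heavy_reader = rng.random(n) < 0.20  # 20% exposed to obscure words

vocab = np.where(heavy_reader,
                 0.8 * g + 0.6 * rng.standard_normal(n),  # signal in the minority
                 rng.standard_normal(n))                  # pure noise elsewhere

print("overall r:", round(np.corrcoef(g, vocab)[0, 1], 2))
print("within heavy readers:", round(np.corrcoef(g[heavy_reader], vocab[heavy_reader])[0, 1], 2))
```

With these numbers the within-minority correlation is about 0.8 while the overall correlation lands around 0.16, so an observed population-wide r of 0.5 would require an impossibly high correlation inside the minority, which is the point being made.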

How could you possibly know something like that?

For example, I’m sure I’ve looked up what “rostral” means 20 times or more since I started in neuroscience a few years ago. But as I write this right now, I don’t know what it means. (It’s an anatomical direction, I just don’t know which one.) Perhaps I’ll look up the definition for the 21st time, and then surely forget it yet again tomorrow. :)

What else? Umm, my attempt to use Anki was kinda a failure. There were cards that I failed over and over and over, and then eventually got fed up and stopped trying. (... (read more)
