Lately I've found it useful to think of my memory in the same way. I've got working memory (7±2 items?), consisting of things that I'm thinking about in this very moment. I've got short-term memory and long-term memory. And if I can't find something after trying to think of it for a while, I'll look it up (frequently on Google). Cache miss for the lose.
Short-term memory is working memory; "short-term memory" as a store separate from working memory is a distinction no longer used by cognitive psychologists.
Really, you have highly activated long-term memory (working memory), less activated memory (things you've recently thought about), and even less activated memory. Level of activation, together with graph distance from activated nodes, determines the probability and speed of recall.
This is basic cognitive psychology; I don't know of any good textbooks on the subject because the classes I took in this area never used textbooks, but with some reading (authors I recommend are Baddeley & Hitch, Atkinson & Shiffrin, and later Engle) you should find this to be true.
Notice that this is true at both the micro and macro levels of processing. You can use an API for a day and become familiar with it, yet still lose track of details by the end of the day. You can use an API for a month and still be reasonably fluent in it a month later.
nvALT looks like an incredibly valuable tool; I use a simple wiki for this but feel like it should sit further out in my cache hierarchy, storing more organized and structured content rather than quick notes. Thanks for pointing it out.
However, human memory is functionally infinite: the process is bound by encoding time rather than by any notion of "space". As such, you should definitely invest in creating a set of Anki decks. Anything you want to quickly remember forever should be in an Anki deck. nvALT and related systems should only store relationships and things you can't easily fit into Anki cards but want to be able to compute over.
You can make things even easier to remember by making them more proximal to things you've overlearned and will never forget; for example, learn functional programming and express everything in terms of functional programming. If you want to learn a new API or framework, phrase it in terms of functional programming. This is just one example; with some thought you can extend the practice.
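As a concrete (and entirely hypothetical) illustration of the trick: when a new API introduces its own "select/derive/aggregate" vocabulary, filing each operation under an overlearned functional primitive gives it a ready-made slot in memory. A minimal Python sketch:

```python
# Overlearned functional primitives: filter, map, reduce.
from functools import reduce

words = ["cache", "miss", "for", "the", "lose"]

# Mentally file a hypothetical new API's operations under primitives you know:
long_words = filter(lambda w: len(w) > 3, words)    # "select"    -> filter
lengths = map(len, long_words)                      # "derive"    -> map
total = reduce(lambda a, b: a + b, lengths, 0)      # "aggregate" -> reduce
print(total)  # 5 + 4 + 4 = 13
```

The specific mapping is made up, but the point stands: each unfamiliar operation gets anchored to a node that's already highly activated.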
Thanks for the neuroscience info!
As such, you should definitely invest in creating a set of Anki decks. Anything you want to quickly remember forever should be in an Anki deck.
I've found that memorizing info in Anki takes significantly longer than writing it in my digital notebook. In his Anki guide, gwern writes:
if, over your lifetime, you will spend more than 5 minutes looking something up or will lose more than 5 minutes as a result of not knowing something, then it’s worthwhile to memorize it with spaced repetition. 5 minutes is the line that divides trivia from useful data.
It takes me on the order of seconds to copy-paste info into my digital notebook; we're talking a 10-100x difference in time investment here. And digital notebook lookups are pretty fast; it takes maybe 10-15 keystrokes to look up most info that I've recorded. So I think from a caching perspective it may make sense to put a broad range of info in a digital notebook, and for things you want to be able to look up extremely quickly, perhaps use Anki (although you'll likely find that if you're using info that often, you'll come to memorize it anyway). Another memory management trick I've come across: before looking info up, try to recall it first. Trying to recall things is substantially better for memorizing them than just re-reading them. Using this method you'll automatically find yourself memorizing the info you use often.
I think Anki is plausibly a good fit for info that's useful to know in situations where you aren't already being primed to look it up. For example, if I want to remember to never feel sorry for myself, there aren't really situations in my life where I might want to look up a page in my notebook with the answer to the question "do I want to feel sorry for myself?" But a spaced repetition cloze deletion card like "[...] feel sorry for yourself" has the potential to program that attitude into me.
See also: http://lesswrong.com/lw/juq/a_vote_against_spaced_repetition/
As a first step, I wouldn't put that much stock in Gwern's guides. I've found that Gwern has his own way of doing things, but it rarely seems to generalize, at least in my experience. Self-experimentation is good, but no matter what, you can't get much out of an N=1 sample unless you are that particular person.
I find that going to any sort of persistent store is incredibly harmful for my flow state while programming, so I try to get as much as possible into Anki. I think you'll find that if you sum the time spent attempting recall and the 3-5 seconds per lookup, you'll also get far more than five minutes for any reasonably well-used concept.
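A quick back-of-envelope sketch of that claim (the numbers here are my own illustrative assumptions, not measurements):

```python
# Does a frequently used concept cross gwern's 5-minute line? Illustrative numbers.
seconds_per_lookup = 4      # midpoint of the "3-5 seconds per lookup" above
lookups_per_week = 2        # assumed rate for a "reasonably well-used" concept
weeks = 2 * 52              # over just two years

total_minutes = seconds_per_lookup * lookups_per_week * weeks / 60
print(total_minutes)        # ~13.9 minutes -- well past the 5-minute threshold
```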
I also find that the concepts in my Anki decks tend to be the ones that come up when I'm problem solving in general or trying to be creative. In a psychology (not neuroscience -- none of this is neuroscience, much like programming is unrelated to byte patterns except as an implementation detail) sense, Anki is just generally raising the activation level of those concepts, and so when you try to think of things, you will think in terms of those concepts. That's why the self-programming cards thing works. But also, it means that when you think about anything, you think in terms related to your Anki concepts.
The OP of the second post you linked seems like they didn't use a lot of Anki functionality. Anki's most popular plugin (maybe second most since I think kanji is still implemented as a plugin) is image occlusion, which seems like it would perfectly mesh with flash cards. However, I still use spatial memory with Anki just by associating Anki values with directions. It's not hard to do.
Overall, I think it's something you should invest in. No matter what you say about its value, it is a reliable way to move things from RAM (let's say) into L2 cache. This is something you should have familiarity with.
You can also check my comment history for a small OCaml utility, Space, that automates some aspects of making Anki cards.
Dear reader, if you liked this article, do you use some kind of note-making system?
If no, then stop procrastinating and do it now! Right at this moment, download and install some note-making software and start using it. Because you already agree that using it is better than not using it, so most likely it's only akrasia that stops you. If you don't overcome that akrasia now, is there any reason to think it would be easier later?
Stop reading, start installing. Use the system for a month, and then report the results on LW.
I don't use one. Which should I use? My requirements: it must be usable on Ubuntu, and it's substantially less useful if it can't be shared between that and Windows.
WikidPad. You need a part of the disk where both Windows and Linux can write.
When you create a new wiki, there is an option "only use ascii in file names"; I am not sure exactly what it does, but it would probably be a good idea to use it, to prevent possible problems with different encodings on different systems.
If you choose the "Sqlite compact" option, all wiki pages will be in one file; the default option is one file per page. It depends on what you want. I prefer fewer files. But with one file per page, you could use external tools to search the pages; as a Linux user you will probably want this.
Dear reader, if you liked this article, do you use some kind of note-making system?
Yes, a text editor. For every project I do on the computer, there's a file, usually called something like 00-notes.txt, in which notes accumulate, in the order I write them, with date stamps. Typically, it records things done, things thought about, and things to do.
I once installed Evernote, but I couldn't see what it was useful for, despite the fact that it even has conferences devoted to new and wonderful things to do with it.
Any suggestions for something that improves upon text files? For my purposes it would have to run on OSX with native UI, and however it stores the data, the data must be searchable from outside the application, i.e. plain text files or something not far removed from that, like HTML or Markdown.
ETA: I've just followed the link to nvALT, which satisfies the OSX and text files requirements, and might even be useful.
ETA2: Although this strikes an ominous note:
The Notational Velocity codebase that nvALT is built on has aged to a point where it’s nearly impossible to continue development for modern versions of OS X.
For every project I do on the computer, there's a file
How about general knowledge, unrelated to projects? Contacts, tasks, random ideas, programming knowledge, other knowledge...
Any suggestions for something that improves upon text files?
Text files with convenient hyperlinks to other text files. And maybe pictures; they could be hyperlinked using the same mechanism. -- That's probably all.
The simplest solution would be something like a plain text editor, except with some special syntax for links to other files (the syntax should be easily legible, but something that doesn't normally appear in text). Those links would be highlighted, and clicking on them would open the other file (or just give focus to the existing window, if it is already open).
WikidPad is more or less Markdown files with hyperlinks, plus some optional metadata.
I'd say that the most important thing may be how you use the system, not how it is implemented. (The implementation is important only to the degree it makes the use more or less convenient.) For example, your contacts database will be more useful if you put many contacts in it. A good system lets you put hundreds of random people there while still letting you easily see the important ones, and also find people with specific skills when necessary. Add meaningful descriptions, so that a year later you don't just see an unknown name with a phone number and have no idea who that is. -- A system of plain text files where you really put in all the info, and can run search queries using a command-line tool, is better than Evernote with lousy organization where you don't even bother to write down most of the data, so of course there is nothing to find, which in turn makes you even less likely to write anything there.
How about general knowledge, unrelated to projects? Contacts, tasks, random ideas, programming knowledge, other knowledge...
That is indeed a problem. Perhaps nvALT will be a solution. One possible showstopper is that as far as I can tell, it can't display more than one note at a time. As an indication of how I work, right now I have 14 browser windows open. This is typical. Possessed of the ability to count higher than one, I find Single Document View pretty much impossible to work with, and programs that snatch away the document I was looking at just because I wanted to look at another one as well are thoroughly obnoxious.
ETA: nvALT doesn't itself allow multiple windows, but it supports invoking any external editor to edit notes.
WikidPad supports multiple tabs.
(Right-click on the node and choose "Activate new tab", or middle-click the node.)
Yes, a text editor.
Some supporting materials :-)
I once installed Evernote, but I couldn't see what it was useful for,
I use it mostly as a universally available information dump. It's hard to throw images, PDFs, webpages, etc. into a text editor...
I have recently found WikidPad and it seems like a very useful tool. It is an offline wiki; the pages are stored either as individual text files (which allows searching with external tools) or in a sqlite database (fewer files), and it has some interesting features I am still exploring.
I believe that a good use of such "external memory" can make people much more efficient. And this seems like a universal tool; you could have notes, contacts, to-do lists, internet bookmarks, and other things in one system (which allows easy hyperlinks between them).
The first time I realized the importance of making notes was a year after finishing university. I found at home a paper with a list of exam questions... and it was a shock to realize that for half of them I didn't even understand what the question was asking. But those were all things I had spent years learning, and now the knowledge was... gone. Similarly with programming: when I don't use a programming language for a few years and then try it again, I have to relearn many things. Now I keep written notes and example pieces of code.
Another cache level: drawing a visual map, and then using your visual field as a cache of all the elements of the problem.
For ideas: write things down and regularly re-visit them, or they will grow stale and confusing. It will be hard to re-enter that topic usefully if you don't re-visit your old ideas.
For projects (e.g. programming) short-term interruptions are what people usually think about, but putting a project down for a few months will make it extremely hard to pick it up again, relative to putting it down for only a few days, even if working conditions are optimal. Continuity at all time-scales is important.
Sometimes you need to intervene, too. Some people have fictional memories of childhood sexual abuse. Here is how not to be one of those people. After reading this (though I came upon it by searching about false memories, because I was suspicious), I no longer believe I was sexually abused in childcare...
Note: this post leans heavily on metaphors and examples from computer programming, but I've tried to write it so it's accessible to a determined person with no programming background.
To summarize some info from computer processor design at very high density: there are a variety of ways to manufacture the memory that's used in modern computer processors. There's a trend where the faster a kind of memory is to read from and write to, the more expensive it is. So modern computers have a hierarchical memory structure: a very small amount of memory that's very fast to do computation with ("the registers"), a larger amount of memory that's a bit slower to do computation with, an even larger amount of memory that's even slower to do computation with, and so on. The two layers immediately below the registers (the L1 cache and the L2 cache) are typically abstracted away from even the assembly language programmer. They store data that's been accessed recently from the level below them ("main memory"). The processor will do a lookup in the caches when accessing data; if the data is not already in the cache, that's called a "cache miss", and the data will get loaded into the cache before it's accessed.
(Please correct me in the comments if I got any of that wrong; it's based on years-old memories of an undergrad computer science course.)
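To make the hierarchy concrete, here is a toy model of a single cache level sitting in front of a slow main memory. It's a sketch with made-up sizes and latencies (and a deliberately simple eviction rule; real caches use smarter policies like LRU), not a description of actual hardware:

```python
"""Toy model of one cache level in front of main memory.
Sizes and latencies are made-up illustrative numbers, not hardware specs."""

CACHE_SIZE = 4        # the cache holds only a few recently used items
CACHE_COST = 1        # pretend cost (in time units) of a cache hit
MEMORY_COST = 100     # pretend cost of going all the way to main memory

cache = {}            # address -> value, for recently accessed data
memory = {addr: addr * 2 for addr in range(100)}  # stand-in main memory
total_cost = 0

def read(addr):
    """Read addr through the cache, evicting the oldest entry when full."""
    global total_cost
    if addr in cache:                 # cache hit: fast path
        total_cost += CACHE_COST
        return cache[addr]
    total_cost += MEMORY_COST         # cache miss: slow path
    if len(cache) >= CACHE_SIZE:      # evict the oldest entry (FIFO, for
        cache.pop(next(iter(cache)))  # simplicity; real caches do better)
    cache[addr] = memory[addr]
    return cache[addr]

for addr in [1, 2, 1, 1, 3, 9, 1]:    # repeated reads of 1 mostly hit the cache
    read(addr)
print(total_cost)  # 403, far less than 7 * MEMORY_COST, thanks to cache hits
```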
Lately I've found it useful to think of my memory in the same way. I've got working memory (7±2 items?), consisting of things that I'm thinking about in this very moment. I've got short-term memory and long-term memory. And if I can't find something after trying to think of it for a while, I'll look it up (frequently on Google). Cache miss for the lose.
What are some implications of thinking about memory this way?
Register limitations and chunking
When programming, I've noticed that sometimes I'll encounter a problem that's too big to fit in my working memory (WM) all at once. In the spirit of getting stronger, I'm typically tempted to attack the problem head on, but I find that my brain just tends to flit around the details of the problem instead of actually making progress on it. So lately I've been toying with the idea of breaking off a piece of the problem that can be easily modularized and fits fully in my working memory, then solving it on its own. (Feynman: "What's the smallest nontrivial example?") You could turn this idea around and define a good software architecture as one that consists of modular components that can individually be made to fit completely into one's working memory when reading code.
As you write or read code modules, you'll come to understand them better and you'll be able to compress or "chunk" them so they take up less space in your working memory. This is why top-down programming doesn't always work that well. You're trying to fit the entire design in your working memory, but because you don't have a good understanding of the components yet (since you haven't written them), you aren't dealing with chunks but pseudochunks. This is true for concepts in general: it takes all of a beginner's WM to comprehend a for loop, but in a master's WM a for loop can be but one piece in a larger puzzle.
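A tiny, hypothetical illustration of chunking in code: a beginner reads the explicit loop below piece by piece (accumulator, iteration, update), while an experienced reader perceives either version as a single chunk, "sum the list":

```python
prices = [3.50, 1.25, 7.00]  # example data

# A beginner tracks each moving part of the loop separately in working memory.
total = 0
for x in prices:
    total = total + x

# An experienced reader chunks the same computation into one named idea,
# freeing working memory for the surrounding problem.
total = sum(prices)
print(total)  # 11.75
```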
Swapping
One thing to observe: you don't get alerted when memory at the top of your mental hierarchy gets overwritten. We've all had the experience of having some idea in the shower and having forgotten it by the time we get out. Similarly, if you're working on a delicate mental task (programming, math, etc.) and you get interrupted, you'll lose mental state related to the problem you're working on.
If you're having difficulty focusing, this can easily make doing a delicate mental task, like a complicated math problem, much less fun and productive. Instead of actually making progress on the task, your mind drifts away from it, and when you redirect your attention, you find that information related to the problem has swapped out of your working memory or short-term memory and must be re-loaded. If you're getting distracted frequently enough or you're otherwise lacking mental stamina, you may find that you spend the majority of your time context switching instead of making progress on your problem.
Adding an additional external cache level
Anecdotally, adding an additional brain cache level between long-term memory and Google seems like a pretty big win for personal productivity. My digital notebook (since writing that post, I've started using nvALT) has turned out to be one of my biggest wins where productivity is concerned; it's ballooned to over 700K words, and a decent portion of it consists of copy-pasted snippets that represent the best information from Google searches I've done. A co-worker wrote a tool that allows him to quickly look up how to use software libraries and reports that he's continued to find it very useful years after making it.
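For a sense of how little machinery such a cache level needs, here's a minimal sketch of a notes-lookup tool (my illustration, not my co-worker's actual tool; the notes directory and one-note-per-.txt-file layout are assumptions):

```python
#!/usr/bin/env python3
"""Tiny lookup tool over a folder of plain-text notes (assumed layout)."""
import sys
from pathlib import Path

NOTES = Path.home() / "notes"   # assumed location of the digital notebook

def lookup(query):
    """Print every note line containing the query, prefixed by its file name."""
    for path in sorted(NOTES.glob("*.txt")):
        for line in path.read_text(encoding="utf-8", errors="replace").splitlines():
            if query.lower() in line.lower():
                print(f"{path.stem}: {line.strip()}")

if __name__ == "__main__":
    lookup(" ".join(sys.argv[1:]))
```

Bound to a hotkey or shell alias, a lookup like this costs seconds, which is what makes an extra cache level between long-term memory and Google pay for itself.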
Text is the most obvious example of an exobrain memory device, but here's a more interesting example: if you're cleaning a messy room, you probably don't develop a detailed plan in your head of where all of your stuff will be placed when you finish cleaning. Instead, you incrementally organize things into related piles, then decide what to do with the piles, using the organization of the items in your room as a kind of external memory aid that allows you to do a mental task that you wouldn't be able to do entirely in your head.
Would it be accurate to say that you're "not intelligent enough" to organize your room in your head without the use of any external memory aids? It doesn't really fit with the colloquial use of "intelligence", does it? But in the same way computers are frequently RAM-limited, I suspect that humans are also frequently RAM-limited, even on mental tasks we frequently associate with "intelligence". For example, if you're reading a physics textbook and you notice that you're getting confused, you could write down a question that would resolve your confusion, then rewrite the question to be as precise as possible, then list hypotheses that would answer your question along with reasons to believe/disbelieve each hypothesis. By writing things down, you'd be able to devote all of your working memory to the details of a particular aspect of your confusion without losing track of the rest of it.