LessWrong seems to be a big fan of spaced-repetition flashcard programs like Anki, Supermemo, or Mnemosyne. I used to be. After using them religiously for 3 years in medical school, I now categorically advise against using them for large volumes of memorization.
[A caveat before people get upset: I think they are appropriate in certain situations, and I have not tried to use them to learn a language, which seems to be their most popular use. More at the bottom.]
A bit more history: I and 30 other students tried using Mnemosyne (and some used Anki) for multiple tests. At my school, we have a test approximately every 3 weeks, and each test covers about 75 pages of high-density outline-format notes. Many stopped after 5 or so such tests, citing that they simply did not get enough returns from their time. I stuck with it longer and used them more than anyone else, using them for 3 years.
Incidentally, I failed my first year and had to repeat.
By the end of that third year (and studying for my Step 1 boards, a several-month process), I lost faith in spaced-repetition cards as an effective tool for my memorization demands. I later met with a learning-skills specialist, who felt the same way, and had better reasons than my intuition/trial-and-error:
- Flashcards are less useful for learning the “big picture”
- Specifically, if you are memorizing a large amount of information, there is often a hierarchy, organization, etc. that can make learning the whole thing easier, and you lose the constant visual reminder of the larger context when using flashcards.
- Flashcards do not take advantage of spatial, mapping, or visual memory, all of which the human mind is much better optimized for. It is not nearly as well built for memorizing pairings between seemingly arbitrary concepts with few to no intuitive links. My preferred methods are, in essence, hacks that use your visual and spatial memory rather than rote.
Here are examples of the typical kind of things I memorize every day and have found flashcards to be surprisingly worthless for:
- The definition of Sjögren's syndrome
- The contraindications of Metronidazole
- The significance of a rise in serum αFP
Here is what I now use in place of flashcards:
- Venn diagrams/etc., to compare and contrast similar lists. (This is more specific to medical school, where you learn many subtly different diseases.)
- Mnemonic pictures. I have used this myself for years to great effect, and later learned it was taught by my study-skills expert, though I'm surprised I haven't found them formally named and taught anywhere else. The basic concept is to make a large picture, where each detail on the picture corresponds to a detail you want to memorize.
- Memory palaces. I recently learned how to properly use these, and I'm a true believer. When I only had the general idea to “pair things you want to memorize with places in your room” I found it worthless, but after I was taught a lot of do's and don'ts, they're now my favorite way to memorize any list of 5+ items. If there's enough demand on LW I can write up a summary.
Spaced repetition is still good for knowledge you need to retrieve immediately, when a 2-second delay would make it useless. I would still consider spaced repetition to memorize some of the more rarely used notes on the treble and bass clef, if I ever decide to learn to sight-read music properly. I make no comment on its usefulness for learning a foreign language, as I haven't tried it, but if I were to pick one up I personally would start with a Rosetta Stone-esque program.
Your mileage may vary, but after seeing so many people try and reject them, I figured it was enough data to share. Mnemonic pictures and memory palaces are slightly time consuming when you're learning them. However, if someone has the motivation and discipline to make a stack of flashcards and study them every day indefinitely, then I believe learning and using those skills is a far better use of time.
Followup to: Lifestyle interventions to increase longevity.
What does it mean for exercise to be optimal?
- Optimal for looks
- Optimal for time
- Optimal for effort
- Optimal for performance
- Optimal for longevity
There may be even more criteria.
We're all likely going for a mix of outcomes, and optimal exercise is going to change depending on your weighting of different factors. So I'm going to discuss something close to a minimum viable routine based on meta-analyses of exercise studies.
Not knowing which sort of exercise yields the best results gives our brains an excuse to stop thinking about it. The intent of this post is to go over the dose responses to various types of exercise. We’re going to break through vague notions like “exercise is good” and “I should probably exercise more” with a concrete plan where you understand the relevant parameters that will cause dramatic improvements.
This was originally a comment on VipulNaik's recent inquiries about the academic lifestyle versus the job lifestyle. Instead of calling them lifestyles he called them career options, but I'm taking a different emphasis here on purpose.
Due to information-hazard risks, I recommend that Effective Altruists who are still wavering back and forth not read this. EA spoiler alert.
I'd just like to describe a cultural difference that I have consistently noted between Americans and Brazilians which seems relevant here.
To have a job and work in the US is taken as a *de facto* biological need. It is as abnormal for an American, in my experience, to consider not working, as it is to consider not breathing, or not eating. It just doesn't cross people's minds.
If anyone has insight above and beyond "Protestant ethics and the spirit of capitalism" let me know about it, I've been waiting for the "why?" for years.
So yeah, let me remind people that you can spend years and years not working: not getting a job isn't going to kill you or make you less healthy; ultravagabonding is possible and feasible, and many do it for over six months a year. I have a friend who lives as the boyfriend of his sponsor's wife in a triad and somehow has never worked a day in his life (the husband of the triad pays for it all; both men are straight). I've hosted an Argentinian who left graduate economics for two years to randomly travel the world, ended up in Rome, and passed through here on his way back, via couchsurfing. Puneet Sahani has now been travelling the world for well over two years with no money and an Indian passport. I've also hosted a lovely Estonian gentleman who works on computers 4 months a year in London to earn pounds, and spends eight months a year getting to know countries while learning their cultures; Brazil was his third country.
Oh, and never forget the Uruguayan couple I just met at a dance festival, who have been travelling as hippies around and around South America for 5 years now and showed no sign of owning more than 500 dollars worth of stuff.
Also, in case you'd like to live in a paradise valley taking Santo Daime (a religious ritual involving DMT) about twice a week, you can do it on a salary of approximately 500 dollars per month in Vale do Gamarra, where I just spent Carnival; that is what the guy who drove us back does. Given Brazilian or Turkish returns on investment, that would cost you 50,000 bucks if you refused to work within the land itself for the 500.
Oh, I forgot to mention that though it certainly makes you unable to do expensive stuff, thus removing the paradox of choice and part of your existential angst (woohoo, fewer choices!), there is nearly no status detraction from not having a job. In fact, during these years in which I was either being an EA and directing an NGO, or studying on my own, or doing a Masters (which, let's agree, is not very time-consuming), my status has increased steadily, and many opportunities would have been lost if I had had a job that wouldn't let me move freely. Things like being invited as Visiting Scholar to Singularity Institute, like giving a TED talk, like directing IERFH, and like spending a month working at FHI with Bostrom, Sandberg, and the classic LessWrong poster Stuart Armstrong.
So when thinking about what to do with your future, my dear fellow Americans, please, at least consider not getting a job. At least admit what everyone knows from the bottom of their hearts: that jobs are abundant for high-IQ people (especially you, my programmer lurker readers.... I know you are there... and you native English speakers, I can see you there, unnecessarily worrying about your earning potential).
A job is truly an instrumental goal, and your terminal goals certainly do have chains of causation leading to them that do not contain a job for 330 days a year. Unless you are a workaholic who experiences flow in virtue of pursuing instrumental goals. Then please, work all day long, donate as much as you can, and may your life be awesome!
When I was a freshman in high school, I was a mediocre math student: I earned a D in second semester geometry and had to repeat the course. By the time I was a senior in high school, I was one of the strongest few math students in my class of ~600 students at an academic magnet high school. I went on to earn a PhD in math. Most people wouldn't have guessed that I could have improved so much, and the shift that occurred was very surreal to me. It’s all the more striking in that the bulk of the shift occurred in a single year. I thought I’d share what strategies facilitated the change.
In late December 2013, Jonah, my collaborator at Cognito Mentoring, announced the service on LessWrong. Information about the service was also circulated in other venues with high concentrations of gifted and intellectually curious people. Since then, we've received ~70 emails asking for mentoring from learners across all ages, plus a few parents. At least 40 of our advisees heard of us through LessWrong, and the number is probably around 50. Of the 23 who responded to our advisee satisfaction survey, 16 filled in information on where they'd heard of us, and 14 of those 16 had heard of us from LessWrong. The vast majority of student advisees with whom we had substantive interactions, and the ones we felt we were able to help the most, came from LessWrong (we got some parents through the Davidson Forum post, but that's a very different sort of advising).
In this post, I discuss some common themes that emerged from our interaction with these advisees. Obviously, this isn't a comprehensive picture of the LessWrong community the way that Yvain's 2013 survey results were.
- A significant fraction of the people who contacted us via LessWrong aren't active LessWrong participants, and many don't even have user accounts on LessWrong. The prototypical advisees we got through LessWrong don't have many distinctive LessWrongian beliefs. Many of them use LessWrong primarily as a source of interesting stuff to read, rather than a community to be part of.
- About 25% of the advisees we got through LessWrong were female, and a slightly higher proportion of the advisees with whom we had substantive interaction (and subjectively feel we helped a lot) were female. You can see this by looking at the sex distribution of the public reviews of us from students.
- Our advisees included people in high school (typically, grades 11 and 12) and college. Our advisees in high school tended to be interested in mathematics, computer science, physics, engineering, and entrepreneurship. We did have a few who were interested in economics, philosophy, and the social sciences as well, but this was rarer. Our advisees in college and graduate school were also interested in the above subjects but skewed a bit more in the direction of being interested in philosophy, psychology, and economics.
- Somewhat surprisingly and endearingly, many of our advisees were interested in effective altruism and social impact. Some had already heard of the cluster of effective altruist ideas. Others were interested in generating social impact through entrepreneurship or choosing an impactful career, even though they weren't familiar with effective altruism until we pointed them to it. Of those who had heard of effective altruism as a cluster of ideas, some had either already consulted with or were planning to consult with 80,000 Hours, and were connecting with us largely to get a second opinion or to get opinions on matters other than career choice.
- Some of our advisees had had some sort of past involvement with MIRI/CFAR/FHI. Some were seriously considering working in existential risk reduction or on artificial intelligence. The two subsets overlapped considerably.
- Our advisees were somewhat better educated about rationality issues than we'd expect others of similar academic accomplishment to be, and more than the advisees we got from sources other than LessWrong. That's obviously not a surprise at all.
- We hadn't been expecting it, but many advisees asked us questions related to procrastination, social skills, and other life skills. We were initially somewhat ill-equipped to handle these, but we've built a base of recommendations, with some help from LessWrong and other sources.
- One thing that surprised me personally is that many of these people had never spent time exploring Quora. I'd have expected Quora to be much more widely known and used by the sort of people who were sufficiently aware of the Internet to know LessWrong. But it's possible there's not that much overlap.
My overall takeaway is that LessWrong seems to still be one of the foremost places that smart and curious young people interested in epistemic rationality visit. I'm not sure of the exact reason, though HPMOR probably gets a significant fraction of the credit. As long as things stay this way, LessWrong remains a great way to influence a subset of the young population today that's likely to be disproportionately represented among the decision-makers a few years down the line.
It's not clear to me why they don't participate more actively on LessWrong. Maybe no special reasons are needed: the ratio of lurkers to posters is huge for most Internet fora. Maybe the people who contacted us were relatively young and still didn't have an Internet presence, or were being careful about building one. On the other hand, maybe there is something about the comments culture that dissuades people from participating (this need not be a bad feature per se: one reason people may refrain from participating is that comments are held to a high bar and this keeps people from offering off-the-cuff comments). That said, if people could somehow participate more, LessWrong could transform itself into an interactive forum for smart and curious people that's head and shoulders above all the others.
PS: We've now made our information wiki publicly accessible. It's still in beta and a lot of content is incomplete and there are links to as-yet-uncreated pages all over the place. But we think it might still be interesting to the LessWrong audience.
I like posts that are concise and to the point. Posts like that maximize my information/effort ratio. I would really like to see experienced rationalists simply post a list of things they believe on any given subject with a short explanation for why they believe each of those things. Then I could go ahead and adjust my beliefs based on those lists as necessary.
Sadly I don’t see any posts like this. Presumably this is because of the social convention where you’re expected to back up any public belief with arguments, so that other people can attempt to poke holes in them. I find this strange because the arguments people present rarely have anything to do with why they believe those things, which makes the whole exercise a giant distraction from the main point that the author is trying to bring across. In order to prevent this kind of derailment, posters tend to cover their arguments with endless qualifications so that their sentences read like this: “I personally believe that, in cases X Y Z and under circumstances B and C, ceteris paribus and barring obvious exceptions, it seems safe to say that murder is wrong, though of course I could be mistaken.” The problems with such excessive argumentation and qualification are threefold:
- The post becomes less readable: The information/effort ratio is lowered.
- It becomes much more difficult to tell what the author genuinely believes: Are they really unsure or just trying to appear humble? Is that their true objection, or just an argument?
- Despite everything, someone is STILL going to miss the point and reply that sometimes killing people is ok in certain situations, and then the next 100 comments will be about that.
By contrast, terseness makes posts more readable and makes it less likely that the main point is misunderstood. So if we as a community could relax the demand for argumentation and qualification somewhat, and all focused on debating the main points of posts instead of getting sidetracked, then perhaps experienced rationalists here could write nice and concise posts that give clear and direct answers to complicated questions. Instead, some of the sequences are so long and involve so many arguments, counter-arguments and disclaimers that I feel the point is lost entirely.
Many of you here have likely heard of Bitcoin, and maybe know something about it.
Earlier today, a story broke that a reporter had apparently tracked down the real Satoshi Nakamoto, the famously elusive creator of the Bitcoin protocol.
This seems like an excellent opportunity to practice our Bayesian updating!
So, how likely do you think it is that this man is the founder of Bitcoin? What do you believe and why?
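For concreteness, here is the odds form of Bayes' theorem that this exercise calls for. The numbers below are purely illustrative placeholders, not my own estimates:

$$\frac{P(H \mid E)}{P(\lnot H \mid E)} = \frac{P(E \mid H)}{P(E \mid \lnot H)} \cdot \frac{P(H)}{P(\lnot H)}$$

If, say, your prior odds that this particular man is Satoshi were 1:100, and you judge the reporter's evidence 30 times likelier if he is Satoshi than if he is not, your posterior odds become 30:100, i.e. roughly 23%.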
For a long time I have tried to study things on my own, at my own pace. But it was always an uphill struggle against strong akrasia issues, and eventually I came to the conclusion that the only thing that really seems to work is to have externally-imposed deadlines. The only way I could think of to do this was to sign up for classes, so I enrolled in a number of MOOCs. So far this has worked wonders - I went from basically spending most of my time playing around and wasting time, to several recent days where I studied for several hours straight.
The only thing I don't like about this setup is that there's a very limited number of really good MOOCs out there on the subjects I want to study. Also, most MOOCs are geared for a wider audience and are therefore dumbed-down to a certain degree.
So I had the following idea: A lot of us on LW seem to be studying a lot of the same material, whether it's the sequences, MIRI course list, CFAR booklist, or any of the various recommended reading lists. What if those who were studying the same thing would get together and set a schedule for themselves to finish the reading material, complete with deadlines? This might not be a normal "externally imposed" deadline, but at least it's a deadline with some social pressure to back it up. I can't be the only one on LW who could benefit from a deadline.
The details would need to be worked out, but here's a preliminary version of the way I envision it:
- There should be a monthly thread for requests for new classes. The request should specify the text to be used, or it could ask for suggestions for a good text. The request should also specify the approximate pace (very slow - slow - normal - fast - very fast), or an approximate weekly time commitment.
- The next thing that would be needed for each proposed class would be for someone who's already gone through that text to propose a rough calendar for the course. For example, they could say that given the requested pace / time commitment, you should expect to spend about 3 months on that particular text. Also, some chapters are harder than others, so the calendar should specify, for example, that you should expect to spend just one week on Chapters 1-3, but Chapter 7 will need to be spread over three weeks. It would also be very useful to specify what prerequisites are needed for that text. (Similar to this thread. Keep in mind that different people have different styles when it comes to prerequisites. Some prefer to do as few prerequisites as possible and then skip straight to the harder stuff, and work backwards / fill in gaps as necessary. Others prefer to carefully cover all lower-level material before even touching the harder stuff. These people will want to know about all possible prerequisites so that they won't have to work backwards at all.) A minimal sketch of what such a calendar might look like appears after this list.
- I would recommend creating a repository of available course calendars (i.e., course X should be split up this way, course Y should be split up that way, etc.). This can be done by creating a special thread for this purpose and then linking to that thread every time a new "proposed course" thread starts.
- A calendar provides some deadlines, but there needs to be some motivation for keeping to the deadlines. I can think of a few possibilities that might work:
- Social pressure: If there is anybody else in your class other than yourself, there's a certain amount of social pressure to keep up with the group and keep to the agreed-upon deadlines. Classmates can increase this pressure by actively encouraging each other to keep up.
- Social encouragement: As you make each deadline you should report that you did so, and others can then respond with encouragement.
- Karma: If someone makes a deadline they should post an announcement to that effect, and LWers (even those not part of your class, and even those who aren't taking any classes) could be encouraged to upvote the announcement. I haven't been on LW long enough to tell if this is a socially acceptable use of karma points, but this might be motivating for some people.
- Perhaps someone could design a "LW U" badge or something of the sort to post on your personal / social site when you complete a course. (Notice that with the karma or badge reward forms, it becomes possible to have only a single member in a course and they'll still be able to get some form of reward structure. It might not be as effective as having other people in the course, but at least it works.)
- There should be a dedicated thread for each course once it begins. The thread would be used for everything relating to the course: announcing progress, discussing subject-related material, meta-discussions about the course, etc.
- LWers who have already completed the subject / textbook could follow the course discussions and provide guidance and help as needed. Anyone who thinks they can contribute in this "teacher" capacity should let course participants know about it beforehand, as this will provide additional social pressure / support, and provide valuable encouragement (there's someone I can ask my stupid questions to!).
- I'd recommend that once one or more people decide to take a course, they should set a date to start the course that's at least two weeks (maybe a month) in the future. This would give time for others to join. Each month's thread for proposed courses could then include a list of "courses starting soon".
- Perhaps people who have already studied a given text could put together a few quizzes / tests / finals for that text. The quizzes would be sent to individual students at a certain point in the course via private message. Each student would take the quiz on their own (honor system, of course), and the quizzes would then be graded either by the creator of the quiz (the "teacher"), a volunteer TA (using an answer key provided by the teacher), the other students, or even by each student themselves. (I would not recommend this last unless there are no other options, since even very honest people can be sorely tempted to fudge things occasionally in their own favor.) There could even be a final grade for the course. I suspect that this system would create powerful psychological motivation for certain people to work hard at the coursework and complete their work on time.
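As promised above, here is a minimal sketch of what an entry in the repository of course calendars could look like. Everything specific here (the textbook, pacing, week counts, and prerequisites) is invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class CourseCalendar:
    text: str            # the textbook the course follows
    pace: str            # "very slow" through "very fast"
    prerequisites: list  # what you should ideally know going in
    schedule: list = field(default_factory=list)  # (chapters, weeks) pairs, in order

    def total_weeks(self):
        """Expected length of the course at the requested pace."""
        return sum(weeks for _, weeks in self.schedule)

# A hypothetical repository entry; all specifics are made up.
example = CourseCalendar(
    text="An Introductory Linear Algebra Text",
    pace="normal",
    prerequisites=["high-school algebra", "basic comfort with proofs"],
    schedule=[("Chapters 1-3", 1), ("Chapters 4-6", 3), ("Chapter 7", 3)],
)
print(example.total_weeks())  # -> 7
```

A shared, machine-readable format like this would also make it easy to generate the "courses starting soon" list automatically.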
What do you think about such an idea?
A serious possibility is that the first AGI(s) will be developed in a Manhattan Project style setting before any sort of friendliness/safety constraints can be integrated reliably. They will also be substantially short of the intelligence required to exponentially self-improve. Within a certain range of development and intelligence, containment protocols can make them safe to interact with. This means they can be studied experimentally, and the architecture(s) used to create them better understood, furthering the goal of safely using AI in less constrained settings.
Setting the Scene
Technological and/or political pressures could force the development of AI without the theoretical safety guarantees that we'd certainly like, but there is a silver lining.
A lot of the discussion around LessWrong and MIRI that I've seen (and I haven't seen all of it, please send links!) seems to focus very strongly on the situation of an AI that can self-modify or construct further AIs, resulting in an exponential explosion of intelligence (FOOM/Singularity). The focus on FAI is on finding an architecture that can be explicitly constrained (and a constraint set that won't fail to do what we desire).
My argument is essentially that there could be a critical multi-year period preceding any possible exponentially self-improving intelligence during which a series of AGIs of varying intelligence, flexibility and architecture will be built. This period will be fast and frantic, but it will be incredibly fruitful and vital both in figuring out how to make an AI sufficiently strong to exponentially self-improve and in how to make it safe and friendly (or develop protocols to bridge the even riskier period between when we can develop FOOM-capable AIs and when we can ensure their safety).
The requirement for a hard singularity, an exponentially self-improving AI, is that the AI can substantially improve itself in a way that enhances its ability to further improve itself, which requires the ability to modify its own code; access to resources like time, data, and hardware to facilitate these modifications; and the intelligence to execute a fruitful self-modification strategy.
The first two conditions can (and should) be directly restricted. I'll elaborate more on that later, but basically any AI should be very carefully sandboxed (unable to affect its software environment), and should have access to resources strictly controlled. Perhaps no data goes in without human approval or while the AI is running. Perhaps nothing comes out either. Even a hyperpersuasive hyperintelligence will be slowed down (at least) if it can only interact with prespecified tests (how do you test AGI? No idea but it shouldn't be harder than friendliness). This isn't a perfect situation. Eliezer Yudkowsky presents several arguments for why an intelligence explosion could happen even when resources are constrained, (see Section 3 of Intelligence Explosion Microeconomics) not to mention ways that those constraints could be defied even if engineered perfectly (by the way, I would happily run the AI box experiment with anybody, I think it is absurd that anyone would fail it! [I've read Tuxedage's accounts, and I think I actually do understand how a gatekeeper could fail, but I also believe I understand how one could be trained to succeed even against a much stronger foe than any person who has played the part of the AI]).
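As a toy illustration of what "resources strictly controlled" means at the most mundane level (this is ordinary OS sandboxing, not a serious AGI containment protocol), a process can be launched with hard CPU and memory ceilings it cannot raise:

```python
import resource
import subprocess

def run_with_hard_limits(cmd, cpu_seconds=60, mem_bytes=1 << 30):
    """Run an untrusted command with hard CPU-time and address-space caps (Linux)."""
    def set_limits():
        # Setting soft == hard means the child process cannot raise the limit again.
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))
    return subprocess.run(cmd, preexec_fn=set_limits, capture_output=True)
```

Real containment would of course involve far more than rlimits (air-gapping, no persistent storage, audited I/O channels), but the principle of externally imposed, non-negotiable resource ceilings is the same.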
But the third emerges from the way technology typically develops. I believe it is incredibly unlikely that an AGI will develop in somebody's basement, or even in a small national lab or top corporate lab. When there is no clear notion of what a technology will look like, it is usually not developed. Positive, productive accidents are somewhat rare in science, but they are remarkably rare in engineering (please, give counterexamples!). The creation of an AGI will likely not happen by accident; there will be a well-funded, concrete research and development plan that leads up to it. An AI Manhattan Project described above. But even when there is a good plan successfully executed, prototypes are slow, fragile, and poor-quality compared to what is possible even with approaches using the same underlying technology. It seems very likely to me that the first AGI will be a Chicago Pile, not a Trinity; recognizably a breakthrough but with proper consideration not immediately dangerous or unmanageable. [Note, you don't have to believe this to read the rest of this. If you disagree, consider the virtues of redundancy and the question of what safety an AI development effort should implement if they can't be persuaded to delay long enough for theoretically sound methods to become available].
A Manhattan Project style effort makes a relatively weak, controllable AI even more likely, not only because such a project can implement substantial safety protocols that are explicitly researched in parallel with primary development, but also because the total resources, in hardware and brainpower, devoted to the AI will be much greater than in a smaller project, setting a correspondingly higher bar that the AGI thus created must reach to successfully self-modify exponentially while also breaking the security procedures.
Strategies to handle AIs in the proto-Singularity, and why they're important
First, take a look at the External Constraints section of this MIRI report and/or this article on AI boxing. I will be talking mainly about these approaches. There are certainly others, but these are the easiest to extrapolate from current computer security.
These AIs will provide us with the experimental knowledge to better handle the construction of even stronger AIs. If careful, we will be able to use these proto-Singularity AIs to learn about the nature of intelligence and cognition, to perform economically valuable tasks, and to test theories of friendliness (not perfectly, but well enough to start).
"If careful" is the key phrase. I mentioned sandboxing above. And computer security is key to any attempt to contain an AI. Monitoring the source code, and setting a threshold for too much changing too fast at which point a failsafe freezes all computation; keeping extremely strict control over copies of the source. Some architectures will be more inherently dangerous and less predictable than others. A simulation of a physical brain, for instance, will be fairly opaque (depending on how far neuroscience has gone) but could have almost no potential to self-improve to an uncontrollable degree if its access to hardware is limited (it won't be able to make itself much more efficient on fixed resources). Other architectures will have other properties. Some will be utility optimizing agents. Some will have behaviors but no clear utility. Some will be opaque, some transparent.
All will have a theory of how they operate, which can be refined by actual experimentation. This is what we can gain! We can set up controlled scenarios like honeypots to catch malevolence. We can evaluate our ability to monitor and read the thoughts of the AGI. We can develop stronger theories of how damaging self-modification actually is to imposed constraints. We can test our abilities to add constraints to even the base state. But do I really have to justify the value of experimentation?
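A cartoon version of the "too much change, too fast" failsafe mentioned above might look like the following sketch; `read_source` and `halt_all_computation` are placeholders for whatever the real instrumentation and kill switch would be:

```python
import hashlib
import time

def fingerprint(source_bytes):
    """A cheap way to detect that the monitored code has changed at all."""
    return hashlib.sha256(source_bytes).hexdigest()

def monitor(read_source, halt_all_computation,
            max_changes_per_hour=3, poll_seconds=10):
    """Freeze all computation if the source changes faster than the threshold."""
    last = fingerprint(read_source())
    change_times = []
    while True:
        time.sleep(poll_seconds)
        current = fingerprint(read_source())
        if current != last:
            last = current
            now = time.time()
            # Keep only changes observed within the last hour.
            change_times = [t for t in change_times if now - t < 3600]
            change_times.append(now)
            if len(change_times) > max_changes_per_hour:
                halt_all_computation()  # failsafe: freeze first, investigate later
                return
```

The interesting research questions are, of course, what the right notion of "change" is for a given architecture and where to set the threshold, not the plumbing itself.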
I am familiar with criticisms based on absolutely, incomprehensibly perceptive and persuasive hyperintelligences being able to overcome any security, but I've tried to outline above why I don't think we'd be dealing with that case.
Right now AGI is really a political non-issue: blue-sky even compared to space exploration and fusion, both of which actually receive substantial government funding. I think that this will change in the period immediately leading up to my hypothesized AI Manhattan Project. The AI Manhattan Project can only happen with a lot of political will behind it, which will probably mean a spiral of scientific advancements, hype, and threat of competition from external unfriendly sources. Think space race.
So suppose that the first few AIs are built under well controlled conditions. Friendliness is still not perfected, but we think/hope we've learned some valuable basics. But now people want to use the AIs for something. So what should be done at this point?
I won't try to speculate what happens next (well you can probably persuade me to, but it might not be as valuable), beyond extensions of the protocols I've already laid out, hybridized with notions like Oracle AI. It certainly gets a lot harder, but hopefully experimentation on the first, highly-controlled generation of AI to get a better understanding of their architectural fundamentals, combined with more direct research on friendliness in general would provide the groundwork for this.
Many of the high school and college students who contacted us at Cognito Mentoring looking for advice were considering going into academia. The main draw for them was the desire to learn specific subjects and explore ideas in greater depth. As a result, we've been investigating academia as a career option and also considering what alternatives there may be to academia that fulfill the same needs but provide better pay and/or generate more social value. The love of ideas and epistemic exploration is shared by many of the people at Less Wrong, including those who are not in academia. So I'm hoping that people will share their own perspectives in the comments. That'll help us as well as the many LessWrong lurkers interested in academia.
I'm eager to hear about what considerations you used when weighing academia against other career options, and how you came to your decision. Incidentally, there are a number of great answers to the Quora question Why did you leave academia?, but there's probably many thoughts people have here that aren't reflected in the Quora answers. I've also written up a detailed review of academia as a career option on the info wiki for Cognito Mentoring here (long read), and I'd also love feedback on the validity of the points I make there.
Many of our advisees as well as the LessWrong readership at large are interested in choosing careers based on the social value generated by these careers. (This is evidenced in the strong connection between the LessWrong and effective altruism communities). What are your thoughts on that front? Jonah and I have collaboratively written a page on the social value of academia. Our key point is that research academia is higher value than alternative careers only in cases where either the person has a chance of making big breakthroughs in the area, or if the area of research itself is high-value. Examples of the latter may include machine learning (we're just starting on investigating this) and (arguably) biomedical research (we've collected some links on this, but haven't investigated this in depth).
For those who are or were attracted to academia, what other career options did you consider? If you decided not to join, or chose to quit, academia, what alternative career are you now pursuing? We've identified a few possibilities at our alternatives to academia page, but we're largely shooting in the dark here. Based on anecdotal evidence from people working in venture capital, it seems like venture capital is a great place for polymath-types who are interested in researching a wide range of subjects shallowly, so it's ideal for people who like shallow intellectual exploration rather than sticking to a single subject for an inordinate amount of time. But there are very few jobs in venture capital. On paper, jobs at consulting firms should be similar to venture capital in requiring a lot of shallow research. But we don't have an inside view of consulting jobs -- are they a good venue for intellectually curious people? Are there other job categories we missed?
All thoughts are greatly appreciated!
Discussion article for the meetup : Israel Less Wrong Meetup - Social and Board Games
We're going to have a meetup on Thursday, March 6th at Google Israel's offices, Electra Tower, 98 Yigal Alon st., Tel Aviv.
This time we're going to have a social meetup! Unlike previous meetups where we had a set agenda and a talk, this time we'll be socializing and playing games. Specifically, we look forward to playing any cool board or card game anyone will bring.
We'll start the meetup at 19:00, and we'll go on for as long as we like. Feel free to come a little bit later, as there is no agenda. (We've decided to start slightly earlier this time to give us more time and accommodate people with different schedules.)
We'll meet on the 29th floor of the building (Note: Not the 26th, where Google Campus is). If you arrive and can't find your way around, call Anatoly, who is graciously hosting us, at 054-245-1060.
Things that might happen:
- You'll trade cool ideas with cool people from the Israel LW community.
- You'll discover kindred spirits who agree with you about one/two boxing.
- You'll kick someone's ass (and teach them how you did it) at some awesome boardgame.
- You'll discover how to build a friendly AGI running on cold fusion (well, probably not).
Things that will happen for sure:
- You'll get to hang out with awesome people and have fun!
There is also talk of food and beers, and if you'd like to bring some too - that would be great. (But you don't have to).
If you have any questions, feel free to email me at firstname.lastname@example.org, call me at 054-533-0678, or call Anatoly at 054-245-1060.
See you there!
In the previous post I defined an intelligence metric solving the duality (aka naturalized induction) and ontology problems in AIXI. That model used a formalization of UDT built on Benja's model of logical uncertainty. In the current post I am going to:
- Explain some problems with my previous model (that section can be skipped if you don't care about the previous model and only want to understand the new one).
- Formulate a new model solving these problems. Incidentally, the new model is much closer to the usual way UDT is represented. It is also based on a different model of logical uncertainty.
- Show how to define intelligence without specifying the utility function a priori.
- Since the new model requires utility functions formulated with abstract ontology, i.e. well-defined on the entire Tegmark level IV multiverse, and these are generally difficult to construct (i.e. the ontology problem resurfaces in a different form), I outline a method for constructing such utility functions.
Problems with UIM 1.0
The previous model postulated that naturalized induction uses a version of Solomonoff induction updated in the direction of an innate model N with a temporal confidence parameter t. This entails several problems:
- The dependence on the parameter t whose relevant value is not easy to determine.
- Conceptual divergence from the UDT philosophy that we should not update at all.
- Difficulties with counterfactual mugging and acausal trade scenarios in which G doesn't exist in the "other universe".
- Once G discovers even a small violation of N at a very early time, it loses all ground for trusting its own mind. Effectively, G would find itself in the position of a Boltzmann brain. This is especially dangerous when N over-specifies the hardware running G's mind. For example assume N specifies G to be a human brain modeled on the level of quantum field theory (particle physics). If G discovers that in truth it is a computer simulation on the merely molecular level, it loses its epistemic footing completely.
I now propose the following intelligence metric (the formula goes first and then I explain the notation):
$$I_U(q) := \mathbb{E}_T\big[\,\mathbb{E}_D\big[\,\mathbb{E}_L\big[\,U(Y(D)) \mid Q(X(T)) = q\,\big]\big] \;\big|\; N\,\big]$$
- N is the "ideal" model of the mind of the agent G. For example, it can be a universal Turing machine M with special "sensory" registers e whose values can change arbitrarily after each step of M. N is specified as a system of constraints on an infinite sequence of natural numbers X, which should be thought of as the "Platonic ideal" realization of G, i.e. an imagery realization which cannot be tempered with by external forces such as anvils. As we shall see, this "ideal" serves as a template for "physical" realizations of G which are prone to violations of N.
- Q is a function that decodes G's code from X, e.g. the program loaded in M at time 0. q is a particular value of this code whose (utility-specific) intelligence $I_U(q)$ we are evaluating.
- T is a random (as in random variable) computable hypothesis about the "physics" of X, i.e. a program computing X implemented on some fixed universal computing model (e.g. a universal Turing machine) C. T is distributed according to the Solomonoff measure; however, the expectation value in the definition of $I_U(q)$ is conditional on N, i.e. we restrict to programs which are compatible with N. From the UDT standpoint, T is the decision algorithm itself, and the uncertainty in T is "introspective" uncertainty, i.e. the uncertainty of the putative precursor agent PG (the agent creating G, e.g. an AI programmer) regarding her own decision algorithm. Note that we don't actually need to postulate a PG which is "agenty" (i.e. use for N a model of AI hardware together with a model of the AI programmer programming this hardware); we can be content to remain in a more abstract framework.
- D is a random computable hypothesis about the physics of Y, where Y is an infinite sequence of natural numbers representing the physical (as opposed to "ideal") universe. D is distributed according to the Solomonoff measure and the respective expectation value is unconditional (i.e. we use the raw Solomonoff prior for Y which makes the model truly updateless). In UDT terms, D is indexical uncertainty.
- U is a computable function from infinite sequences of natural numbers to [0, 1] representing G's utility function.
- L represents logical uncertainty. It can be defined by the model explained by cousin_it here, together with my previous construction for computing logical expectation values of random variables in [0, 1]. That is, we define $\mathbb{E}_L(d_k)$ to be the probability that a random string of bits p encodes a proof of the sentence "Q(X(T)) = q implies that the k-th digit of U(Y(D)) is 1" in some prefix-free encoding of proofs, conditional on p encoding a proof of either that sentence or the sentence "Q(X(T)) = q implies that the k-th digit of U(Y(D)) is 0". We then define
$$\mathbb{E}_L\big[\,U(Y(D)) \mid Q(X(T)) = q\,\big] := \sum_k 2^{-k}\, \mathbb{E}_L(d_k).$$
Here, the sentences and the proofs belong to some fixed formal logic F, e.g. Peano arithmetic or ZFC.
- G's mental architecture N is defined in the "ideal" universe X where it is inviolable. However, G's utility function U inhabits the physical universe Y. This means that a highly intelligent q is designed so that imperfect realizations of G inside Y generate as many utilons as possible. A typical T is a low Kolmogorov complexity universe which contains a perfect realization of G. Q(X(T)) is L-correlated to the programming of imperfect realizations of G inside Y because T serves as an effective (approximate) model of the formation of these realizations. For abstract N, this means q is highly intelligent when a Solomonoff-random "M-programming process" producing q entails a high expected value of U.
- Solving the Loebian obstacle requires a more sophisticated model of logical uncertainty. I think I can formulate such a model. I will explain it in another post after more contemplation.
- It is desirable that the encoding of proofs p satisfies a universality property so that the length of the encoding can only change by an additive constant, analogously to the weak dependence of Kolmogorov complexity on C. It is in fact not difficult to formulate this property and show the existence of appropriate encodings. I will discuss this point in more detail in another post.
It seems conceptually desirable to have a notion of intelligence independent of the specifics of the utility function. Such an intelligence metric is possible to construct in a way analogous to what I've done in UIM 1.0; however, it is no longer a special case of the utility-specific metric.
Assume N to consist of a machine M connected to a special storage device E. Assume further that at X-time 0, E contains a valid C-program u realizing a utility function U, but that this is the only constraint on the initial content of E imposed by N. Define
$$I(q) := \mathbb{E}_T\big[\,\mathbb{E}_D\big[\,\mathbb{E}_L\big[\,u(Y(D); X(T)) \mid Q(X(T)) = q\,\big]\big] \;\big|\; N\,\big]$$
Here, u(Y(D); X(T)) means that we decode u from X(T) and evaluate it on Y(D). Thus utility depends both on the physical universe Y and on the ideal universe X. This means G is not precisely a UDT agent but rather a "proto-agent": only when a realization of G reads u from E does it know which other realizations of G in the multiverse (the Solomonoff ensemble from which Y is selected) should be considered the "same" agent UDT-wise.
Incidentally, this can be used as a formalism for reasoning about agents that don't know their own utility functions. I believe this has important applications in metaethics, which I will discuss in another post.
Utility Functions in the Multiverse
UIM 2.0 is a formalism that cures the diseases of UIM 1.0 at the price of losing N as the ontology for utility functions. We need the utility function to be defined on the entire multiverse, i.e. on any sequence of natural numbers. I will outline a way to extend "ontology-specific" utility functions to the multiverse through a simple example.
Suppose G is an agent that cares about universes realizing the Game of Life, its utility function U corresponding to e.g. some sort of glider maximization with exponential temporal discount. Fix a specific method DC for decoding any Y into the history of a 2D cellular automaton with two cell states ("dead" and "alive"). Our multiversal utility function U* assigns any Y for which DC(Y) is a legal Game of Life the value U(DC(Y)). All other Ys are treated by dividing the cells into cells O obeying the rules of Life and cells V violating the rules of Life (a code sketch of this division appears at the end of the post). We can then evaluate U on O only (assuming it has some sort of locality) and assign V utility by some other rule, e.g.:
- zero utility
- constant utility per V cell with temporal discount
- constant utility per unit of surface area of the boundary between O and V with temporal discount
- The construction of U* depends on the choice of DC. However, U* depends on DC only weakly, since given a hypothesis D which produces a Game of Life wrt some other low-complexity encoding, there is a corresponding hypothesis D' producing a Game of Life wrt DC. D' is obtained from D by appending a corresponding "transcoder", and thus it is only less Solomonoff-likely than D by an O(1) factor.
- Since the accumulation between O and V is additive rather than e.g. multiplicative, a U*-agent doesn't behave as if it a priori expects the universe to follow the rules of Life, but it may have strong preferences about the universe actually doing so.
- This construction is reminiscent of Egan's dust theory in the sense that all possible encodings contribute. However, here they are weighted by the Solomonoff measure.
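To make the O/V division concrete, here is a minimal Python sketch. `history` stands for the decoded DC(Y) as a list of 2D 0/1 grids; the toroidal wrap-around at the boundary is my simplifying assumption, not part of the construction above:

```python
def life_rule(grid, r, c):
    """What the Game of Life rules dictate for cell (r, c) on the next step."""
    rows, cols = len(grid), len(grid[0])
    neighbors = sum(
        grid[(r + dr) % rows][(c + dc) % cols]  # toroidal wrap-around
        for dr in (-1, 0, 1) for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    if grid[r][c] == 1:
        return 1 if neighbors in (2, 3) else 0  # survival with 2 or 3 neighbors
    return 1 if neighbors == 3 else 0           # birth with exactly 3 neighbors

def violating_cells(history):
    """The set V of (time, row, col) cells that violate the rules of Life.
    The complement of V is O, on which U is evaluated."""
    V = set()
    for t in range(len(history) - 1):
        grid, nxt = history[t], history[t + 1]
        for r in range(len(grid)):
            for c in range(len(grid[0])):
                if nxt[r][c] != life_rule(grid, r, c):
                    V.add((t + 1, r, c))
    return V
```

U* then evaluates U on O only and assigns the cells in V utility according to one of the rules listed above.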
Discussion article for the meetup : Canberra: Meta-meetup + meditation
Our first regular meetup will have two parts: firstly, we will be discussing what we want to get out of meetups, what sort of things we would like to do in them, and related matters; secondly, we will be taught how to meditate and have a practice session. Vegan snacks will be provided.
General meetup info:
Structured meetups are held on the second Saturday and fourth Friday of each month from 6 pm until 10 pm at the XSite (home of the XSA), located upstairs in the ANU Arts Centre.
There will be LWers at the Computer Science Students Association's weekly board games night, held on Wednesdays from 7 pm in the CSIT building, room N101.
Summary: Across the board, people are less prone to cognitive bias in a non-native language.
Conclusion: If all important discourse was conducted in Latin, or any other language native to no one, people would make better decisions.
Corollary: All the attempts to make a constructed "scientific language" actually could have worked relatively well, for reasons entirely unconnected to the painstaking scientific structure of the languages.
The Centre for the Study of Existential Risk (CSER) has recently held its first public lecture which can be found here:
The talk's blurb:
"In the coming century, the greatest threats to human survival may come from our own technological developments. However, if we can safely navigate the pitfalls, the benefits that technology promises are enormous. A philosopher, an astronomer, and an entrepreneur have come together to form the Centre for the Study of Existential Risk. The goal: to bring a fraction of humanity’s talents to bear on the task of ensuring our long-term survival. In this lecture, Huw Price, Martin Rees and Jaan Tallinn will outline humanity’s greatest challenge: surviving the 21st century."
From CSER's about page:
"An existential risk is one that threatens the existence of our entire species. The Cambridge Centre for the Study of Existential Risk (CSER) — a joint initiative between a philosopher, a scientist, and a software entrepreneur — was founded on the conviction that these risks require a great deal more scientific investigation than they presently receive. CSER is a multidisciplinary research centre dedicated to the study and mitigation of risks that could lead to human extinction.
Our goal is to steer a small fraction of Cambridge’s great intellectual resources, and of the reputation built on its past and present scientific pre-eminence, to the task of ensuring that our own species has a long-term future."
The philosopher, scientist and entrepreneur in question being Huw Price, Martin Rees and Jaan Tallinn respectively.
In case you are looking for the talk that Jaan Tallinn referred to, I think it is this.
There are many ways to tackle this question, but I mean it from a homo economicus, unbiased perspective. If we were great optimizers of some things (experiences, states of the world, utility in the emotional sense), should we be sad upon hearing we lost an opportunity?
The intuitive answer, to me, is yes. But for many things, for most things, I have come to believe otherwise. This is because we conflate two distinct meanings of opportunity:
Opportunity1 = Something good in the future that is uncertain at the moment and could happen to you, frequently depending on environmental factors outside of your control and some factors within your control in the time between now and the opportunity taking place. Ex:
- Getting a promotion
- Finding a romantic partner
- Having a really good friendship
- Having a large H index (for scientific publications)
Opportunity2 = Something good in the future that is uncertain at the moment and could happen to you, but all the actions you could have personally taken that could influence this are in the past, and now only time and chance will determine if it will be the case. Ex:
- Being approved at Google after the entire interview process has happened
- Being accepted at Harvard
- Avoiding wine on your clothing after the glass has been dropped
- Being accepted to work with CEA after filling in the entire application.
I think it is very reasonable to be sad when you lose opportunities1 but completely pointless to be sad over the loss of the second kind, opportunities2. It feels obvious to me, but in case it isn't I'll try to make it explicit:
When you lose opportunities1, you change the course of your future actions; each of your actions, your time, and your effort have become less valuable, since you have to do more to get the same odds, or even worse odds.
When you lose opportunities2, you are only being notified of an indexical property: you learn which of the possible universes you could be in you actually happen to be in. You have gained knowledge, and you can tailor your future actions regarding other things accordingly. Nothing has become pricier for your efforts; in fact, now you have a better map, and can navigate with ease.
So let us be neutral or happy with the loss of opportunities2, and gain strength from the loss of opportunities1. It seems right to allocate emotional and psychological resources to things you can act on, when you are not in flow. Otherwise, you may end up in the hardest death spiral to overcome: learned helplessness.
For political reasons related to my prospective adviser's academic history, none of the applicants who wanted to study with him made it into Berkeley. But hey, I didn't care... That just means I'm in the fun universe in which I actually have to do all the crazy stuff like moving into the unknown; that is a universe of adventure, right?
Loss aversion be damned!
Discussion prompt: Nick Szabo's essay on judging tradition, "Objective Versus Intersubjective Truth".
1 PM (remember daylight saving time!)
Nam Phuong at 11th and Broad St. This is a Vietnamese restaurant which is good, cheap, quiet, and on mass transit.
This summary was posted to LW main on February 28th. The following week's summary is here.
Irregularly scheduled Less Wrong meetups are taking place in:
- Boston - Optimizing Empathy Levels: 02 March 2014 02:00PM
- Hamburg - Structure: 04 March 2014 05:00PM
- Munich Meetup: 08 March 2014 02:00PM
- Saint Petersburg sunday meetup: 01 March 2014 04:00PM
- Sydney Meetup - March: 26 March 2014 06:30PM
- Berkeley: Implementation Intentions: 05 March 2014 07:00PM
- [Berlin] Community Weekend in Berlin: 11 April 2014 04:00PM
- Brussels - Calibration and other games: 08 March 2014 01:00PM
- London Games Meetup 09/03, + Socials 02/03 and 16/02 : 09 March 2014 02:00PM
- NYC Rationality Megameetup and Unconference: April 5-6: 05 April 2014 11:00AM
- Salt Lake City UT — Open Possibilities and Improv Skills: 09 March 2014 02:00PM
Locations with regularly scheduled meetups: Austin, Berkeley, Berlin, Brussels, Cambridge, MA, Cambridge UK, Columbus, London, Madison WI, Melbourne, Mountain View, New York, Philadelphia, Research Triangle NC, Salt Lake City, Seattle, Toronto, Vienna, Washington DC, Waterloo, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers.
Discussion article for the meetup : West LA—Expert At Vs. Expert On
How to Find Us: Go into this Del Taco. I will bring a Rubik's Cube. The presence of a Rubik's Cube will be strong Bayesian evidence of the presence of a Less Wrong meetup.
Parking is completely free. There is a sign that claims there is a 45-minute time limit, but it is a lie.
Discussion: Expert at vs. expert on is a fairly important distinction. It's also a really simple one, which makes it conceptual low-hanging fruit. It's not totally without nuance; for example the terminology implies either total mastery or encyclopedic knowledge, but it applies just as well at any level of competence.
- Expert At Versus Expert On. I know of no other writing that is explicitly on this topic. Robin Hanson emphasizes the signaling aspect (of course he does), but I do not.
- It is well-known that you learn to play baseball by playing baseball, not by reading essays about baseball. However, it is not usually made explicit that the former makes you an expert at baseball, and the latter makes you an expert on baseball.
- Another nuance: Being an expert at something helps you become an expert on it; the reverse may also be true. For example, you are probably a better linguist if you speak many languages.
NB: No prior knowledge of or exposure to Less Wrong is necessary; this will be generally accessible. Also, we may or may not play a card game.
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Discussion article for the meetup : March Meetup: Body Hacking!
(Rescheduled to March 15)
An overview of body hacking, what's possible, what's known, what needs more exploration, and what tools are available to you.
Presenters needed! Do you have expertise on any of this? Lemme know and you can do anything from a full presentation with slides and handouts to leading a discussion on a particular topic.
Also, please check out our Facebook group here: https://www.facebook.com/groups/Atlanta.Lesswrong/
Discussion article for the meetup : Auckland Preliminary Meetup
I got back from the second Melbourne CFAR workshop recently, and it was good. Having a local rationalist community is well worthwhile, and while there are some good thinkers in my immediate circle of friends, meeting more people and learning from each other would be awesome.
I'm not sure if others will be using it, but let's meet near the gazebo in Albert Park at 2pm Saturday. I'll be carrying my CFAR bag and water bottle if you want to come over and say hi. I would be wearing a "Just shy, not antisocial" shirt if I had one. If you're interested in coming, just comment, or come anyway.
Discussion article for the meetup : Urbana-Champaign: Discussion
Or, “how not to make a fundamental attribution error on yourself;” or, “how to do that thing that you keep being frustrated at yourself for not doing;” or, “finding and solving trivial but leveraged inconveniences.”
Discussion article for the meetup : Brussels - all fun and games
(Because of a scheduling conflict with the Berlin community meetup, April's meetup will not be on the second Saturday of the month. It's one week later.)
No serious theme for this month. We'll probably chat about Berlin, play boardgames that aren't rationalist training, and just talk about anything we want.
We will meet at 1 pm at "La Fleur en papier doré", close to the Brussels Central station. The meeting will be in English to facilitate both French and Dutch speaking members.
If you are coming for the first time, please consider filling out this one minute form to share your contact information.
The Brussels meetup group communicates through a Google Group.
Meetup announcements are also mirrored on meetup.com
Discussion article for the meetup : Frankfurt
Location will be published in time. Contact me under 0176 34 095 760. I prefer texting to calling. If you have any special requirements and need help to attend (whether a disability, social anxiety, whatever), please tell us in advance!
Discussion article for the meetup : Salt Lake City, UT: Schelling Day
Every person reading this (ESPECIALLY YOU!) is challenged to leave ONE piece of feedback regarding what you think about this event. Before, during, or after. One sentence. Doable?
Why should I care?
- There will be dinner! Eat food!
- Hang out with and befriend generally awesome people!
- Have friends or family you want to introduce to rationality? This makes a great appeal for emotional thinkers!
- Generate hedons and warm fuzzies!
- Generate fond memories to be nostalgic about!
- Receive empathy for what's going on in your life!
- Gain arbitrary Kudo points for having come!
I like hanging out with just generally awesome people. It's why I joined this group in the first place!
Spending time with people is fun. But getting to know people—really, truly getting to know people—is hard. Rewarding, but hard.
Sharing your fondest hopes and deepest fears is a powerful way to make connections, but exposing your soul like that is terrifying. Worse, it’s awkward. There are few socially appropriate times to bring up stuff like that. Even when everything works out beautifully, getting it started feels stressful and not-fun.
As soon as people are in a context where everyone agrees that sharing is normal (e.g. an Alcoholics Anonymous meeting, or a conversation with a therapist, or Truth or Dare), the stigma and self-consciousness don’t hold people back nearly as much.
This is our version of Truth or Dare: optimized for more plausible deniability, more warm fuzzy feelings, less debasement†, and more genuine connection with other people.
- Five different flavors of Truth: Struggles, Joys, Confessions, Hopes, and Miscellaneous.
- To provide plausible deniability, everyone rolls a die before speaking. A 1 means you cannot speak on your turn; a 6 means you MUST. The result is not shared.
- What happens in the Schelling Game stays in the Schelling Game.
This game is traditionally meant to be played every April 14th, the birthday of Thomas Schelling, for the obvious reason‡, followed by dinner and socializing. I moved it around a bit to get it on the weekend instead of a Monday.
Any of that interest you? All you have to do is:
- RSVP now. I won't hold it against you if you back out later.
- Show up at 9771 S 170 E, Sandy UT, at 2:45pm on April 20th. If you come in after we start the game at 3:00, please wait for a lull between speakers to announce yourself.
- Find us near the Trax station on 9800, just north of Dewey Bluth Park. There will be a small boat in the driveway.
- Bring your observations on what you've been up to since last meetup, if you've got any to share.
- Children are welcome, provided they are mature enough to either handle the adult themes in the game, or entertain themselves mostly unsupervised while we play.
- Anyone who brings Potluck contributions will get Two Rationality Points. If you'd like to show some non-food support for this and other events, I have a PayItSquare set up: http://www.payitsquare.com/collect-page/edit/24123
Thank you for your time, and thank you for being part of the most fun, engaging, and all-around rewarding social group I know.
(†) Don't worry about missing the discomfort and humiliation of dares. That will be the focus of a different game, later in the year >:-D Muahahahahahaha!
(‡) A Schelling point is a solution that people will tend to use in the absence of communication, because it seems natural, special or relevant to them. This is an arbitrary consensus point for changing social rules. It fits.
PS: Be warned: fuzzy feelings are fun, but they do run a risk of skewing your view of people. It is up to you to find the appropriate benefit/cost balance.
PPS: This is plagiarized heavily from the original Schelling Day post. Don't sue me.
Discussion article for the meetup : Moscow, Meet up
We will gather at the same second entrance, but we will go to a room inside the building at 16:00. So please do not be late. We will have:
- Report about “Zen to Done”.
- Report about cognitive biases.
- Stumbling on happiness for rationalists presentation.
- Report about "Decisive: How to Make Better Choices in Life and Work" book.
- Cognitive behavioural therapy workshop.
We gather at the Yandex office; you need the second revolving door, with the sign “Яндекс”. Here is a photo of the entrance you need: you pass the first entrance and go through the archway. Here is an additional guide on how to get there: link.
You can fill this one minute form (in Russian), to share your contact information.
We start at 16:00 and sometimes finish at night. Please note that we first gather near the second entrance and only then go inside together.