The Best Popular Books on Every Subject
I enjoy reading popular-level books on a wide variety of subjects, and I love getting new book recommendations. In the spirit of lukeprog's The Best Textbooks on Every Subject, can we put together a list of the best popular books on every subject?
Here's what I mean by popular-level books:
- Written very well and clearly, preferably even entertaining.
- Does not require the reader to write anything (e.g., practice problems) or do anything beyond just reading and thinking, except perhaps on very rare occasions.
- Cannot be "heavy" reading that requires the reader to proceed slowly and carefully and/or do lots of heavy thinking.
- Can be understood by anyone with a decent high school education (not including calculus). However, sometimes this requirement can be circumvented, if the following additional criteria are met:
- There must be other books on this list that cover all the prerequisite information.
- When you suggest the book, list any prerequisites.
- There shouldn't be more than 2 or 3 prerequisites.
- Post the title of your favorite book on a given subject.
- You must have read at least two other books on that same subject.
- You must briefly name the other books you've read on the subject and explain why you think your chosen book is superior to them.
Proposal: Community Curated Anki Decks For MIRI Recommended Courses
Spaced repetition is optimal for recalling factual information. It won't necessarily teach you anything that you haven't already learned. It helps you retain knowledge, and won't necessarily help you develop skills. But, within the domain of factual information that you can already comprehend, spaced repetition systems are pretty optimal. So, if you want to train your brain on a bunch of Spanish-to-English sentence translations, or stock market tickers, or definitions, or sample questions, you should use something like Anki.
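Anki's scheduler descends from SuperMemo's SM-2 algorithm. Here's a minimal sketch of the core idea (simplified; real Anki adds learning steps, interval fuzz, and per-deck options), just to show how review intervals stretch out after each successful recall:

```python
def sm2_review(interval, reps, ease, quality):
    """One SM-2 review step (simplified sketch of Anki-style scheduling).

    interval: days until the next review
    reps: consecutive successful reviews so far
    ease: easiness factor (starts at 2.5, floored at 1.3)
    quality: self-graded recall, 0 (blackout) to 5 (perfect)
    """
    if quality < 3:
        # Failed recall: restart the repetition sequence, keep the ease factor.
        return 1, 0, ease
    # Harder recalls shrink the ease factor; effortless ones grow it.
    ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    reps += 1
    if reps == 1:
        interval = 1
    elif reps == 2:
        interval = 6
    else:
        interval = round(interval * ease)
    return interval, reps, ease

# Three successful reviews: intervals grow 1 -> 6 -> 15 days.
state = (0, 0, 2.5)
for q in (4, 4, 4):
    state = sm2_review(*state, q)
```

The exponential growth of the intervals is the whole trick: each review lands just before you'd otherwise forget, so mature cards cost almost nothing to maintain.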
Once you start using spaced repetition, you learn that one of the biggest limits is the card-making process. Making your own cards is time-consuming, although experience will make you much faster. Experience will also teach you what makes a better card. The Twenty Rules of Formulating Knowledge pretty much spells it out for you, but I still had to make my own cards, find the sticking points, and edit them until I got a good sense for proper context and suitably short, distinct answers.
You can find other people's shared decks and skip the card-making process, but not without some new problems. First, you are going to learn more making the cards yourself than studying someone else's. If it's subtle material, you can make cards that fill in the gaps in your particular understanding. But if someone else studies something and makes cards to fill in their own gaps, what you're studying may not cover material that you don't know you don't know. It would be nice if everyone's shared decks were a completely thorough treatment of the material, but, alas, it is not so. And the only way you can tell is by comparing the deck against the material during your own studying.
Shared decks also just aren't all that good sometimes. Someone, I can't recall who, wrote a script to scrape the entire LW wiki and then cloze-delete the title from the article. I appreciate the idea of SRSing the LW wiki, and scripting the whole thing was undoubtedly really efficient. However, the result was usually question and answer text hundreds of words long, with tables of content in the middle, and probably too many cards of insufficient value.
Despite their problems, I think that shared decks have way more potential than their current use suggests. A well-crafted deck that gives its subject matter a thorough treatment could be more valuable than a textbook, and about as difficult to compose. But, looking at some of the best Anki decks I've come across, it will likely take more than one person to get such a deck off the ground.
Anki's .apkg files are sorta unwieldy to edit collaboratively, because there's not really a way to merge edits from multiple contributors. Luckily, we can export and import decks as plain text, and use version control (e.g., Git, with the repository hosted on GitHub) to do the merging for us. With a GitHub-hosted collaborative deck, a team of people studying a textbook, like Thinking and Deciding, could all make flashcards as they go, add them all to the same deck, remove redundant cards, standardize the layout, tag cards appropriately, and share them with whomever else comes along. Then, anyone else who wants to study the textbook has a high-quality Anki deck to use in conjunction, and if they know how a question can be asked better, or if they find an error, or if the seventh chapter didn't really get much coverage, they can contribute to the deck, too.
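To make the merge step concrete, here's a rough sketch. Anki can export a deck as plain text with one tab-separated card per line; the file contents, names, and "first contributor wins" conflict policy below are illustrative assumptions, not a finished tool:

```python
# Sketch: merge two contributors' Anki text exports (tab-separated
# "front<TAB>back" lines), drop cards with duplicate fronts, and emit
# the result sorted so version-control diffs stay small and stable.

def parse_deck(text):
    """Parse a tab-separated export into a {front: back} dict."""
    cards = {}
    for line in text.splitlines():
        if not line.strip() or line.startswith("#"):
            continue  # skip blank lines and header comments
        front, _, back = line.partition("\t")
        cards.setdefault(front, back)  # keep the first version of a card
    return cards

def merge_decks(*deck_texts):
    merged = {}
    for text in deck_texts:
        for front, back in parse_deck(text).items():
            merged.setdefault(front, back)
    # Sorted output keeps the file diff-friendly for git.
    return "\n".join(f"{f}\t{b}" for f, b in sorted(merged.items()))

alice = "capital of France\tParis\ncapital of Russia\tMoscow"
bob = "capital of Russia\tMoscow\ncapital of Spain\tMadrid"
print(merge_decks(alice, bob))  # three unique cards, sorted by front
```

With the deck stored this way, "remove redundant cards" becomes an automatic dedup pass, and every other change is an ordinary reviewable diff.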
This huge list of material put together by Louie Helm should be Anki-fied. Hopefully we can unite the efforts of many autodidacts and start to curate decks for each of the areas covered. Maybe a group of friends is about to work through a course on Quantum Computing or Set Theory. The rest of LW would benefit from their work making flashcards, but especially so if they leave the project open to collaboration.
So, the things needed to move forward:
- Someone learned in IP should tell me what kind of licensing or copyright applies here. Should people post these under a Creative Commons license or the GPL? Obviously we don't want to start plagiarizing or copyright-violating in the process of making this work. We don't want to abscond with other people's decks and start building on them, I think.
- If you're about to tackle an area of study on the MIRI courses list, make a GitHub repo for it.
- If this interests you in the slightest way, please contact me. iconreforged@gmail.com
- I'm working through the dull details of hosting an Anki deck in text form on GitHub myself with the copious number of Russian flashcards I made in three semesters of Russian classes. Hopefully that can provide some kind of template. If I'm feeling ambitious, I might start a Heuristics and Biases deck based on Thinking and Deciding.
Open Thread: how do you look for information?
There have been a couple discussion posts on this, but let's make it general and collect our tips in one place. It's also a good way to encourage each other to get better at this - looking for info more often and more efficiently.
So, if you want to find something out, where do you look, and how? Who do you ask?
Course recommendations for Friendliness researchers
When I first learned about Friendly AI, I assumed it was mostly a programming problem. As it turns out, it's actually mostly a math problem. That's because most of the theory behind self-reference, decision theory, and general AI techniques hasn't been formalized and solved yet. Thus, when people ask me what they should study in order to work on Friendliness theory, I say "Go study math and theoretical computer science."
But that's not specific enough. Should aspiring Friendliness researchers study continuous or discrete math? Imperative or functional programming? Topology? Linear algebra? Ring theory?
I do, in fact, have specific recommendations for which subjects Friendliness researchers should study. And so I worked with a few of my best interns at MIRI to provide the recommendations below:
- University courses. We carefully hand-picked courses on these subjects from four leading universities — but we aren't omniscient! If you're at one of these schools and can give us feedback on the exact courses we've recommended, please do so.
- Online courses. We also linked to online courses, for the majority of you who aren't able to attend one of the four universities whose course catalogs we dug into. Feedback on these online courses is also welcome; we've only taken a few of them.
- Textbooks. We have read nearly all the textbooks recommended below, along with many of their competitors. If you're a strongly motivated autodidact, you could learn these subjects by diving into the books on your own and doing the exercises.
Have you already taken most of the subjects below? If so, and you're interested in Friendliness research, then you should definitely contact me or our project manager Malo Bourgon (malo@intelligence.org). You might not feel all that special when you're in a top-notch math program surrounded by people who are as smart or smarter than you are, but here's the deal: we rarely get contacted by aspiring Friendliness researchers who are familiar with most of the material below. If you are, then you are special and we want to talk to you.
Not everyone cares about Friendly AI, and not everyone who cares about Friendly AI should be a researcher. But if you do care and you might want to help with Friendliness research one day, we recommend you consume the subjects below. Please contact me or Malo if you need further guidance. Or when you're ready to come work for us.
Cognitive Science

If you're endeavoring to build a mind, why not start by studying your own? It turns out we know quite a bit: human minds are massively parallel, highly redundant, and although parts of the cortex and neocortex seem remarkably uniform, there are definitely dozens of special purpose modules in there too. Know the basic details of how the only existing general purpose intelligence currently functions.
Heuristics and Biases

While cognitive science will tell you all the wonderful things we know about the immense, parallel nature of the brain, there's also the other side of the coin. Evolution designed our brains to be optimized for rapid thought operations that work in 100 steps or less. Your brain is going to make stuff up to cover up the fact that it's mostly cutting corners. These errors don't feel like errors from the inside, so you'll have to learn how to patch the ones you can and then move on.
Functional Programming

There are two major branches of programming: functional and imperative. Unfortunately, most programmers only learn imperative programming languages (like C++ or Python). I say unfortunately, because these languages achieve all their power through what programmers call "side effects". The major downside for us is that this means they can't be efficiently machine checked for safety or correctness. The first self-modifying AIs will hopefully be written in functional programming languages, so learn something useful like Haskell or Scheme.
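You don't need Haskell to see what "side effects" means. A toy contrast (sketched in Python rather than a functional language): the imperative version mutates shared state, so its result depends on history; the functional version is a pure fold whose output depends only on its inputs, which is exactly the property that makes mechanical checking tractable.

```python
from functools import reduce

# Imperative style: the result depends on hidden mutable state.
total = 0
def add_imperative(x):
    global total
    total += x        # side effect: mutates state outside the function
    return total

# Functional style: a pure fold; the same inputs always give the same output.
def add_functional(xs):
    return reduce(lambda acc, x: acc + x, xs, 0)

add_imperative(5)
print(add_imperative(5))   # 10 -- the answer depends on what ran before
print(add_functional([5])) # 5  -- depends only on the argument
```

A verifier can reason about `add_functional` in isolation; to reason about `add_imperative` it must track every prior call in the whole program.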
Discrete Math

Much like programming, there are two major branches of mathematics as well: discrete and continuous. It turns out a lot of physics and all of modern computation is actually discrete. And although continuous approximations have occasionally yielded useful results, sometimes you just need to calculate it the discrete way. Unfortunately, most engineers squander the majority of their academic careers studying higher and higher forms of calculus and other continuous mathematics. If you care about AI, study discrete math so you can understand computation and not just electricity.
Linear Algebra

Linear algebra is the foundation of quantum physics and a huge amount of probability theory. It even shows up in analyses of things like neural networks. You can't possibly get by in machine learning (later) without speaking linear algebra. So learn it early in your scholastic career.
Set Theory

Like learning how to read in mathematics. But instead of building up letters into words, you'll be building up axioms into theorems. This will introduce you to the program of using axioms to capture intuition, finding problems with the axioms, and fixing them.
Mathematical Logic

The mathematical equivalent of building words into sentences. Essential for the mathematics of self-modification. And even though Sherlock Holmes and other popular depictions make it look like magic, it's just lawful formulas all the way down.
Efficient Algorithms and Intractable Problems

Like building sentences into paragraphs. Algorithms are the recipes of thought. One of the more amazing things about algorithm design is that it's often possible to tell how long a process will take to solve a problem before you actually run the process to check it. Learning how to design efficient algorithms like this will be a foundational skill for anyone programming an entire AI, since AIs will be built entirely out of collections of algorithms.
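The "predict before you run" point can be made concrete. Binary search on n sorted items needs at most about log2(n) comparisons, a bound you can state in advance and then verify by counting. A sketch (the comparison counter is instrumentation added purely for illustration):

```python
import math

def binary_search(items, target):
    """Return (index_or_None, number_of_comparisons_made)."""
    lo, hi, comparisons = 0, len(items) - 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        comparisons += 1
        if items[mid] == target:
            return mid, comparisons
        elif items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return None, comparisons

n = 1_000_000
data = list(range(n))
_, steps = binary_search(data, 999_999)
# Theory predicts at most about log2(n) + 1, roughly 21 comparisons here,
# where a linear scan could have needed up to n = 1,000,000.
assert steps <= math.ceil(math.log2(n)) + 1
```

The analysis held before the code ever ran: halving the search range each step is what pins the cost down to logarithmic, independent of the input's contents.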
Numerical Analysis

There are ways to systematically design algorithms that only get things slightly wrong when the input data has tiny errors. And then there are the programs written by amateur programmers who don't take this class. Most programmers will skip this course because it's not required. But for us, getting the right answer is very much required.
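A classic technique from such a course is compensated summation. Here's a sketch of Neumaier's variant of Kahan summation, which carries the rounding error along in a correction term instead of silently dropping it:

```python
def neumaier_sum(xs):
    """Sum floats while tracking the low-order bits naive addition drops."""
    s = 0.0
    c = 0.0  # running compensation for lost low-order bits
    for x in xs:
        t = s + x
        if abs(s) >= abs(x):
            c += (s - t) + x  # low-order bits of x were lost in s + x
        else:
            c += (x - t) + s  # low-order bits of s were lost in s + x
        s = t
    return s + c

data = [1.0, 1e100, 1.0, -1e100]
print(sum(data))           # 0.0 -- both 1.0s vanish into rounding error
print(neumaier_sum(data))  # 2.0 -- the compensation term recovers them
```

The naive sum is catastrophically wrong, not slightly wrong: the two small terms disappear entirely next to 1e100. That's the kind of failure this course teaches you to anticipate and design around.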
Computability and Complexity

This is where you get to study computing at its most theoretical. Learn about the Church-Turing thesis, the universal nature and applicability of computation, and how, just like AIs, everything else is algorithms... all the way down.
Quantum Computing

It turns out that our universe doesn't run on Turing machines, but on quantum physics. And something called BQP is the class of algorithms that are actually efficiently computable in our universe. Studying the efficiency of algorithms relative to classical computers is useful if you're programming something that only needs to work today. But if you need to know what is efficiently computable in our universe (at the limit) from a theoretical perspective, quantum computing is the only way to understand that.
Parallel Computing

There's a good chance that the first true AIs will have at least some algorithms that are inefficient. So they'll need as much processing power as we can throw at them. And there's every reason to believe that they'll be run on parallel architectures. There are a ton of issues that come up when you switch from assuming sequential instruction ordering to parallel processing: threading, deadlocks, message passing, etc. The good part about this course is that most of the problems are pinned down and solved: you're just learning the practice of something that you'll need to use as a tool, but won't need to extend much (if at all).
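One of those pinned-down solutions is the lock-ordering discipline for deadlock avoidance: if every thread acquires shared locks in the same global order, a cycle of threads waiting on each other can never form. A minimal sketch:

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
results = []

def worker(name):
    # Discipline: every thread takes lock_a before lock_b. If one thread
    # took them in the opposite order, two threads could each hold one
    # lock while waiting forever for the other -- a deadlock.
    with lock_a:
        with lock_b:
            results.append(name)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))  # [0, 1, 2, 3] -- all four threads completed
```

The fix is a convention, not a clever algorithm, which is typical of this field: the hard part was pinning the failure mode down, and the cure is then straightforward to apply.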
Automated Program Verification

Remember how I told you to learn functional programming way back at the beginning? Now that you've written your code in a functional style, you'll be able to do automated and interactive theorem proving on it to help verify that your code matches your specs. Errors don't make programs better, and all large programs that aren't formally verified are reliably *full* of errors. Experts who have thought about the problem for more than 5 minutes agree that incorrectly designed AI could cause disasters, so world-class caution is advisable.
Combinatorics and Discrete Probability

Life is uncertain and AIs will handle that uncertainty using probabilities. Also, probability is the foundation of the modern concept of rationality and the modern field of machine learning. Probability theory has the same foundational status in AI that logic has in mathematics. Everything else is built on top of probability.
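A standard first taste of the subject is computing an exact probability by counting. The birthday problem, for instance, where the chance that 23 people all have distinct birthdays already drops below one half (sketch, ignoring leap years and assuming uniform birthdays):

```python
def p_shared_birthday(n, days=365):
    """Exact probability that at least two of n people share a birthday."""
    p_all_distinct = 1.0
    for i in range(n):
        # The (i+1)-th person must avoid the i birthdays already taken.
        p_all_distinct *= (days - i) / days
    return 1.0 - p_all_distinct

print(round(p_shared_birthday(23), 3))  # 0.507 -- better than a coin flip
```

The counting argument is the whole proof: multiply the probabilities of each new person missing every birthday taken so far, then take the complement.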
Bayesian Modeling and Inference

Now that you've learned how to calculate probabilities, how do you combine and compare all the probabilistic data you have? Like many choices before, there is a dominant paradigm (frequentism) and a minority paradigm (Bayesianism). If you learn the wrong method here, you're deviating from a knowably correct framework for integrating degrees of belief about new information and embracing a cadre of special purpose, ad hoc statistical solutions that often break silently and without warning. Also, quite embarrassingly, frequentism's ability to get things right is bounded by how well it later turns out to have agreed with Bayesian methods anyway. Why not just do the correct thing from the beginning and not have your lunch eaten by Bayesians every time you and they disagree?
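The Bayesian machinery itself is one line of arithmetic. A sketch with the standard diagnostic-test setup (the rates here are illustrative numbers, not from any particular study):

```python
def posterior(prior, likelihood_if_true, likelihood_if_false):
    """Bayes' rule for a binary hypothesis given one piece of evidence."""
    joint_true = prior * likelihood_if_true
    joint_false = (1 - prior) * likelihood_if_false
    return joint_true / (joint_true + joint_false)

# A disease with a 1% base rate; a test that catches 90% of real cases
# but also fires on 5% of healthy people.
p = posterior(prior=0.01, likelihood_if_true=0.90, likelihood_if_false=0.05)
print(round(p, 3))  # 0.154 -- a positive test leaves you far from certainty
```

Note how the low base rate dominates: even a fairly accurate test mostly produces false positives when the condition is rare, which is exactly the kind of result ad hoc reasoning gets wrong.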
Probability Theory

No more applied probability: here be theory! Deep theories of probability are something you're going to have to extend to help build up the field of AI one day. So you actually have to know, inside and out, why all the things you're doing work.
Machine Learning

Now that you've chosen the right branch of math, the right kind of statistics, and the right programming paradigm, you're prepared to study machine learning (aka statistical learning theory). There are lots of algorithms that leverage probabilistic inference. Here you'll start learning techniques like clustering, mixture models, and other things that cash out as precise, technical definitions of concepts that normally have rather confused or confusing English definitions.
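Clustering is a good example of a fuzzy English concept ("these points go together") cashed out as a precise procedure. A minimal k-means sketch in two dimensions (initial centroids are fixed by hand for determinism; real libraries choose them more carefully):

```python
def kmeans(points, centroids, iterations=10):
    """Minimal k-means: alternately assign points and re-average centroids."""
    for _ in range(iterations):
        # Assignment step: each point joins its nearest centroid's cluster.
        clusters = [[] for _ in centroids]
        for p in points:
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            clusters[d.index(min(d))].append(p)
        # Update step: each centroid moves to the mean of its cluster.
        centroids = [
            tuple(sum(v) / len(cl) for v in zip(*cl)) if cl else c
            for cl, c in zip(clusters, centroids)
        ]
    return centroids, clusters

points = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centroids, clusters = kmeans(points, centroids=[(0, 0), (10, 10)])
print(clusters[0])  # the three points near the origin
print(clusters[1])  # the three points near (10, 10)
```

"Go together" has become "minimize squared distance to the nearest cluster mean", a definition precise enough to analyze, optimize, and prove things about.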
Artificial Intelligence

We made it! We're finally doing some AI work! Doing logical inference, heuristic development, and other techniques will leverage all the stuff you just learned in machine learning. While modern, mainstream AI has many useful techniques to offer you, the authors will tell you outright that "the princess is in another castle". Or rather, there isn't a princess of general AI algorithms anywhere -- not yet. We're gonna have to go back to mathematics and build our own methods ourselves.
Incompleteness and Undecidability

Probably the most celebrated results in mathematics are the negative results of Kurt Gödel: no finite set of axioms can allow all arithmetic statements to be decided as either true or false... and no set of self-referential axioms can even "believe" in its own consistency. Well, that's a darn shame, because recursively self-improving AI is going to need to side-step these theorems. Eventually, someone will unlock the key to overcoming this difficulty with self-reference, and if you want to help us do it, this course is part of the training ground.
Metamathematics

Working within a framework of mathematics is great. Working above mathematics -- on mathematics -- with mathematics, is what this course is about. This would seem to be the most obvious first step to overcoming incompleteness somehow. Problem is, it's definitely not the whole answer. But it would be surprising if there were no clues here at all.
Model Theory

One day, when someone does side-step self-reference problems enough to program a recursively self-improving AI, the guy sitting next to her who glances at the solution will go, "Gosh, that's a nice bit of model theory you got there!"
Category Theory

Category theory is the precise way that you check if structures in one branch of math represent the same structures somewhere else. It's a remarkable field of meta-mathematics that nearly no one knows... and it could hold the keys to importing useful tools to help solve dilemmas in self-reference, truth, and consistency.
Outside recommendations

Harry Potter and the Methods of Rationality

Highly recommended book of light, enjoyable reading that predictably inspires people to realize FAI is an important problem AND that they should probably do something about that.
Global Catastrophic Risks

A good primer on x-risks and why they might matter. SPOILER ALERT: They matter.
The Sequences

Rationality: the indispensable art of non-self-destruction! There are manifold ways you can fail at life... especially since your brain is made out of broken, undocumented spaghetti code. You should learn more about this ASAP. That goes double if you want to build AIs.
Good and Real

A surprisingly thoughtful book on decision theory and other paradoxes in physics and math that can be dissolved. Reading this book is 100% better than continuing to go through your life with a hazy understanding of how important things like free will, choice, and meaning actually work.
MIRI Research Papers

MIRI has already published 30+ research papers that can help orient future Friendliness researchers. The work is pretty fascinating and readily accessible for people interested in the subject. For example: How do different proposals for value aggregation and extrapolation work out? What are the likely outcomes of different intelligence explosion scenarios? Which ethical theories are fit for use by an FAI? What improvements can be made to modern decision theories to stop them from diverging from winning strategies? When will AI arrive? Do AIs deserve moral consideration? Even though most of your work will be more technical than this, you can still gain a lot of shared background knowledge and more clearly see where the broad problem space is located.
Universal Artificial Intelligence

A useful book on "optimal" AI that gives a reasonable formalism for studying how the most powerful classes of AIs would behave under conservative safety design scenarios (i.e., lots and lots of reasoning ability).
Do also look into: Formal Epistemology, Game Theory, Decision Theory, and Deep Learning.
[LINK] Mastering Linear Algebra in 10 Days: Astounding Experiments in Ultra-Learning
Scott Young completed the four-year MIT computer science degree curriculum in less than one year. This is a post about how he did it.
During the yearlong pursuit, I perfected a method for peeling those layers of deep understanding faster. I’ve since used it on topics in math, biology, physics, economics and engineering. With just a few modifications, it also works well for practical skills such as programming, design or languages.
Here’s the basic structure of the method:
- Coverage
- Practice
- Insight
Which questions about online classes would you ask Peter Norvig?
A week ago Google launched an open source project called Course Builder. It packages the software and technology used to build their July class, Power Searching with Google. The discussion forum for it is here. The first live hangout, where Peter Norvig will be answering questions about MOOC design and the technical aspects of using Course Builder, is scheduled for the 26th of September.
Helping the World to Teach
In July, Research at Google ran a large open online course, Power Searching with Google, taught by search expert Dan Russell. The course was successful, with 155,000 registered students. Through this experiment, we learned that Google technologies can help bring education to a global audience. So we packaged up the technology we used to build Power Searching and are providing it as an open source project called Course Builder. We want to make this technology available so that others can experiment with online learning.
The Course Builder open source project is an experimental early step for us in the world of online education. It is a snapshot of an approach we found useful and an indication of our future direction. We hope to continue development along these lines, but we wanted to make this limited code base available now, to see what early adopters will do with it, and to explore the future of learning technology. We will be hosting a community building event in the upcoming months to help more people get started using this software. edX shares in the open source vision for online learning platforms, and Google and the edX team are in discussions about open standards and technology sharing for course platforms.
We are excited that Stanford University, Indiana University, UC San Diego, Saylor.org, LearningByGivingFoundation.org, Swiss Federal Institute of Technology in Lausanne (EPFL), and a group of universities in Spain led by Universia, CRUE, and Banco Santander-Universidades are considering how this experimental technology might work for some of their online courses. Sebastian Thrun at Udacity welcomes this new option for instructors who would like to create an online class, while Daphne Koller at Coursera notes that the educational landscape is changing and it is exciting to see new avenues for teaching and learning emerge. We believe Google’s preliminary efforts here may be useful to those looking to scale online education through the cloud.
Along with releasing the experimental open source code, we’ve provided documentation and forums for anyone to learn how to develop and deploy an online course like Power Searching. In addition, over the next two weeks we will provide educators the opportunity to connect with the Google team working on the code via Google Hangouts. For access to the code, documentation, user forum, and information about the Hangouts, visit the Course Builder Open Source Project Page. To see what is possible with the Course Builder technology register for Google’s next version of Power Searching. We invite you to explore this brave new world of online learning with us.
A small group of us has been working on related matters, but we are far from done reviewing the relevant literature. Not having any good questions yet, I thought: what harm could there be in asking the broader community to come up with a few? If Norvig has already answered your question in some of his other material that I've reviewed, I'll respond with a link.
Learn Power Searching with Google
Google Search makes it amazingly easy to find information. Come learn about the powerful advanced tools we provide to help you find just the right information when the stakes are high.
Daniel Russell is doing a free Google class on how to search the web. Besides six 50-minute classes, it will include interactive activities to practice new skills. Upon passing the post-course assessment you get a Certificate of Completion.
Advanced search skills are not only a useful everyday skill but vital to doing scholarship. Searching the web is a superpower that would make thinkers of previous centuries green with envy. Learn to use it well. I recommend checking out Inside Search, Russell's blog, or perhaps reading the article "How to solve impossible problems" to get a feel for what you can expect to gain from it.
I think for most people the value of information is high enough to be worth the investment. Also I suspect it will be plain fun. I am doing the class and strongly recommend it to fellow LessWrong users. Anyone else who has registered, please say so publicly in the comments as well. :)
Registration is open from June 26, 2012 to July 16, 2012.
How do you find good scholarly criticism of a book?
When I read a book with new and interesting ideas, I usually want to know if there are major flaws that any knowledgeable scholar in the field would point out immediately. (Two recent examples are Pinker's "The Better Angels of Our Nature" and Harris's "The Nurture Assumption".)
I usually:
- Look at reviews on Amazon (especially the negative ones)
- Google with keywords like "criticism", "review", "problem" (and whatever major issues I seem to have run into), etc.
- Search Google Scholar for the same thing
- Ask in some communities (LessWrong, reddit AskHistorians) if anybody read it
One problem is that I end up spending a lot of time reading stuff of no interest - either reviewers explaining the book to people who haven't read it (and sometimes even misrepresenting its arguments, or framing them in terms of their pet controversy), or bloggers/posters who haven't read the book, so they go off a summary and come up with arguments that are already well-addressed in it.
So, what tips and strategies do you have for finding solid scholarly criticism?
Research Opportunity/Scholarship for ALL students (High school through Post-doc)
I recently was reminded of some work I did last year, and thought that it is the type of opportunity some LW-ers would be interested in, since there are a lot of STEM students on here.
The following are details about a research/scholarship opportunity. You must be a student, but you can be of any level: high school, college, grad school, or post-doc. You do not have to go to any specific school, but you probably have to relocate to Dayton, OH for the 10 weeks of the summer program. You do not have to relocate for the rest of the year.
They are very flexible on their admissions! GPA isn't high enough? Not a US citizen? Can't commit to the entire 10 weeks? They will still accept you if you turn in a good essay!
Basic Info
Website: http://wbi-icc.com/
5 minute video: http://vimeo.com/31103711
Tec^Edge is a research opportunity/experiential learning scholarship that takes students and puts them in groups with mentors from academia (such as professors with research projects), government (such as the Air Force Research Labs (AFRL)) and business (such as General Dynamics).
Besides Tec^Edge, it also goes by: Academic Leadership Pipeline Scholarship (ALPS), Summer at the Edge (SATE), and Year at the Edge (YATE).
Summer At the Edge (SATE) is a 10-week program, and generally requires relocating for the summer to Dayton, OH, where the facility is. The work is full-time, though you make your own hours, and you get paid $4000 for the whole thing, generally in the form of a scholarship to your school (which means no taxes!). The facility is amazingly nice. Lots of high-tech things to work/play with.
It has been described as "Lock a bunch of smart people in a room with lots of gadgets, occasionally shove pizza under the door, and see what comes out."
Year at the Edge (YATE) is the "off-season". They ask for about 10 hours a week (so the scholarship is only $1000/10 weeks). But you do not have to be in Dayton. You can work from your school, but the occasional virtual meeting is required. For some reason, they were set on using Second Life instead of Skype for this, even though everyone disliked it. I am guessing it was for some sort of experiment.
There are lots of projects, and they try to put you on one that matches your interests. Many of the projects are sponsored by the Air Force Research Labs, so there are a lot involving sensors (aka surveillance) and aeronautics. Most projects need a lot of programming, most commonly in Java. But there are also opportunities for non-programmers (they are very big on interdisciplinary work). Oftentimes you can request your own project, if there is something specific you want to work on. Anything that AFRL or DARPA might be interested in is generally accepted.
A sample project: One group was trying to develop a micro air vehicle that could be shot out of a rocket-launcher thing, so it had to roll up into a 2" diameter. It had to carry sensors (aka video feed), and it had to have a controlled descent. The whole thing had to cost less than $100 per unit, and their end of year test was to drop it from a height of perhaps 200' (I forget), use the video feed to find their mentor's truck, and use the controlled descent to land it in his truck bed. I don't know if they were successful in this...
Besides getting to work on awesome projects, you also have a chance to meet and work under some pretty awesome people and network with contacts from various research labs. They are pretty willing to buy any expensive gadgetry required for your project. You can get your work published. You have lots of freedom as to what you want to do, how/when you want to do it. There are some really great presenters that they bring in too. All in all, it's a pretty awesome experience.
Downsides: The organization is what they call "Chaordic". There's not much of a hierarchy. You have to be ok pretty much making your own way. A lot of projects have some application to surveillance, so you have to be ok with that. Most LW-ers would have to relocate for the summer program. I think they help out with this, but all the same, Dayton is not the most interesting city to be in.
My experiences:
For Summer At the Edge (SATE) I was teamed with a post-doc in some sort of computer science, and a 17 year-old programmer from a local high school. We were working on a project from the Human Performance Wing of the AFRL. The whole of our instruction was "Do something with Information Visualization, and Computer Mediated Communications." In other words, ways to make pretty pictures out of things like email, chat rooms, blogs, etc, so that people can understand them faster or see patterns easier.
Some of the groups were very organized under their mentor who had a specific project that they were working on, and so would tell students what they wanted done. Our group was very self-led. Our mentor would ask us if we needed anything, but pretty much would leave us to our own devices.
We did some research for about a week, then my partners started working on programming text analyzers. Not being a programmer, I had to find other things to do. Not knowing what to do, I spent a lot of time doing the paperwork and giving presentations, organizing the SATE trip to an amusement park, and generally helping out.
One thing about SATE is that you have to be very self-motivated. If you don't have something to do, it is up to you to find something. Because it's so self-led, there's a decent amount of updates you have to give (such as a weekly email about what you accomplished that week, and how many hours you worked), and also a paper that you have to submit at the end, summarizing your findings.
Every now and then I'd have an idea. There were a lot of dead ends, but eventually I managed to coalesce my ideas into something of a whole. I probably spent about a month working on my actual project, and then about a week writing my paper, and part of my teammates' paper. Both of our papers ended up getting published in the Collaborative Technologies and Systems International Symposium. If I had still been a student when the conference occurred, Tec^Edge would have paid my way for me to present the poster at the conference, but as it stood, our adviser presented for me.
After SATE was over, I applied for Year at the Edge (YATE). This program only requires ~10 hours a week, since it is during the school year, and allows you to work from home/school. For this project there was even less instruction, as I had to propose my own project. If you can manage to turn a class assignment into something YATE is interested in, they are quite accepting of it.
I had a pretty open-ended project for a Computer Design class, so I asked my partner in that class if she would be willing to do a project on something called Computer Supported Collaborative Work (aka "Using computers to work together with other people"), and I proposed the same project to YATE. They all agreed, so for all intents and purposes I ended up getting a research scholarship to do my homework!
This time there was a bit more structure, as we were following the class guidelines as to what we needed to accomplish. Mainly we were doing quantitative and qualitative measurements on collaborating using Google docs, versus other collaborative methods (being in the same room and sharing a computer). Being a much smaller project, this didn't get published, but we did get an A on the assignment, and used the paper-writing as an excuse to learn LaTeX.
Unfortunately, that was my last quarter before my divorce, so I didn't continue with the program. But I couldn't recommend it more to anyone who is interested. You get a lot of opportunities to work on whatever interests you. The mentors are amazing contacts from many different research companies that you can use as references. It's not uncommon to get offered a job or internship with the company that is mentoring your project. Also, even if you are a high schooler or an undergrad, you have the opportunity to get published.
Want to Apply?
This website has application instructions and a short video: http://wbi-icc.com/who-we-work-with/students-teachers
Things they like: Passion, Willingness to venture into the unknown, Willingness to "fail" (allowing discovery of things that don't work), Interdisciplinary work and knowledge, Desire to make a difference
My admissions essay earned me a spot as a "Student Leader" and I'm willing to post it, if it will help people see the sort of thing that they are looking for. But I won't bother, if no one asks!
If you have any questions about it, let me know!
Online Course in Evidence-Based Medicine
The Foundation for Blood Research has created an online course in Evidence-Based Medicine, aimed at "advanced high school science students, college students, nursing students, and 1st or 2nd year medical students." It focuses on evaluating research papers and applying statistics to medical diagnosis. I have taken this course, and it was useful practice in Bayesian reasoning.
The course involves working through a couple of case studies of ER patients. Students will observe the patient, review research on relevant diagnostic tests, and calculate posterior probabilities given the available information. For instance: one case involves a woman who may have bacterial meningitis, but her spinal fluid test results are mixed. Students then read parts of a paper describing the success of different components of the spinal fluid test as predictors of meningitis.
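The posterior calculation at the heart of these case studies is just Bayes' theorem applied to a diagnostic test. Here is a minimal sketch; the prevalence, sensitivity, and specificity used below are made-up numbers for illustration, not figures from the course:

```python
def posterior_given_positive(prior, sensitivity, specificity):
    """P(disease | positive test) via Bayes' theorem.

    prior:        P(disease) before testing (e.g., base rate among ER patients)
    sensitivity:  P(positive test | disease)
    specificity:  P(negative test | no disease)
    """
    true_pos = sensitivity * prior
    false_pos = (1 - specificity) * (1 - prior)
    return true_pos / (true_pos + false_pos)

# Hypothetical numbers: 10% prior, 90% sensitivity, 80% specificity.
p = posterior_given_positive(0.10, 0.90, 0.80)
print(round(p, 3))  # a positive result raises 10% to about 33%
```

Working a few of these by hand makes it vivid how much the prior (disease prevalence) dominates when a test's false-positive rate is nontrivial.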
The course is self-paced and highly modular, alternating between videos, multiple choice or calculation questions, and short written submissions. There is no in-course interaction between students taking the same course, but it is divided into "class sections" for the convenience of teachers who want to observe their students. It works well with Firefox and Safari, and slightly less well (but still easily usable) with Internet Explorer.
Anyone who is interested or wants more information, look at their website or ask me in the comments. Once a decent number of people have shown some interest, I will contact one of the site administrators and he'll set up an official class section for us.
EDIT: I have contacted the site administrator, we should have a class section available soon. Section name and info on how to log in will be posted shortly.
EDIT2: The course section is up: go to http://evidenceworksremote.com/courses/ and then find the Less Wrong Community course. When you click on the course listing you will be asked to register. Once you receive the acknowledging email, return to the course and enter the "enrollment" key: LW101. I will be able to see your responses to the questions and possibly able to provide feedback. Once you have completed the course, Dr. Allan, who is one of the developers, would appreciate feedback by email.
Which fields of learning have clarified your thinking? How and why?
Did computer programming make you a clearer, more precise thinker? How about mathematics? If so, what kind? Set theory? Probability theory?
Microeconomics? Poker? English? Civil Engineering? Underwater Basket Weaving? (For adding... depth.)
Anything I missed?
Context: I have a palette of courses to dab onto my university schedule, and I don't know which ones to choose. This much is for certain: I want to come out of university as a problem solving beast. If there are fields of inquiry whose methods easily transfer to other fields, it is those fields that I want to learn in, at least initially.
Rip apart, Less Wrong!
Science of human dominance?
I'm trying to do some research related to human dominance including social signaling and how dominance is both successfully and unsuccessfully challenged. Ideally I'd like to find what the common factors are rather than having it be too particular to one community or another. Unfortunately everything I can find on the topic is either about dominance behavior of other primates or is ad-hoc self-help advice by self-proclaimed gurus of social power.
Can anyone point me in the direction of the science of human dominance behavior?
Questions about doing literature searches
I'd like to get better at doing literature searches. Luke has written a bit about this, but I still have lots of questions.
I've mostly been using Google Scholar; it seems OK, but not stellar. What other tools are useful and for what? Added: is there a good way to find papers that cite many of a set of papers (the idea being that if I find a number of relevant papers, I want to see if there's a review which covers them)?
How do you look specifically for review papers and/or books? For example, I'm interested in cognitive skill acquisition and I've found a review paper from 1996 but I'd like to find recent review papers on the same or closely related topics and haven't had any luck. Are there specific keywords or phrase permutations that often help? Is searching just the papers that cite an older paper useful? Is searching the same journal useful?
How do you search for criticism? Let's say I've found a paper that uses a particular method or theory but I want to know whether there are significant criticisms. Are there phrases that are often associated with criticism papers?
Are there any activities that are especially good for practicing literature searches?
Software tools for efficient scholarship
I've figured out a few things about efficient scholarship, but my process isn't optimal. In particular, I'm probably not using the best software tools. I and probably many Less Wrongers could benefit from using the best scholarship software around.
Please post your scholarship software recommendations below!
In particular, the thing I'm looking for is a good citation manager (for Mac): one that will scan all the PDFs on my NAS (network attached storage), query web databases to fill out their metadata in a searchable database, and allow me to export bibliographic records in a variety of formats. Here are the failure modes of the ones I've tried so far:
- Zotero: When I try to import any of the 80,000 pdfs on my NAS so that it can check internet databases for their metadata, it makes a local copy of each PDF even though I have "automatically attach PDFs..." unchecked in the preferences window. Also, the program crashes when I try to import more than about 10 PDFs at a time, even though my new Macbook Air is lightning fast doing everything else.
- Mendeley: As far as I can tell, Mendeley cannot use a NAS drive as a 'watched folder'. Maybe there's a hack for this?
- Sente: Cannot batch-import without prompting for interaction for almost every PDF imported.
- BibDesk: Does not import PDFs.
- Bookends: Cannot batch-import dozens of PDFs quickly.
- RefDB: No Mac binary, not sure if it can import PDFs.
Garr.
Scholarship Booster: My favorite journals
Those interested in my same primary area of interest (cogsci/philosophy) may like to know what my favorite journals are. (Notice they are heavily weighted for review articles.) Here are some of my favorites:
- Trends in Cognitive Sciences
- Behavioral and Brain Sciences
- Annual Review of Psychology
- Topics in Cognitive Science
- Current Directions in Psychological Science
- Wiley Interdisciplinary Reviews: Cognitive Science
- Mind and Language
- Artificial Intelligence Review
- Philosophy Compass
- Review of Philosophy and Psychology
- Nature Reviews Neuroscience
- Judgment and Decision Making
- Journal of Behavioral Decision Making
- Thinking and Reasoning
Defrag conference scholarships
http://www.defragcon.com/2011/general/defrag-announcements/
Eric Nolin of the Defrag conference is looking to organize a scholarship fund for high school girls who want to study computer science in university.
Till that's in place, they're funding scholarships for people to attend the conference.
General textbook comparison thread
We've already had a lengthy (and still active) thread attempting to address the question "What are the best textbooks, and why are they better than their rivals?". That's excellent, but no one is going to post there unless they're prepared to claim: Textbook X is the best on its subject. But surely many of us have read many texts for which we couldn't say that but could say "I've read X and Y, and here's how they differ". A good supply of such comparisons would be extremely useful.
I propose this thread for that purpose. Rules:
- Each top-level reply should concern two or more texts on a single subject, and provide enough information about how they compare to one another that an interested would-be reader should be able to tell which is likely to be better for his or her purposes.
- Replies to these offering or soliciting further comparisons in the same domain are encouraged.
- At least one book in each comparison should either
- be a very good one, or at least
- look like a very good one even though it isn't.
If this gets enough responses that simply looking through them becomes tiresome, I'll update the article with (something like) a list of textbooks, arranged by subject and then by author, with links for the comments in which they're compared to other books and a brief summary of what's said about them. (I might include links to comments in Luke's thread too, since anything that deserves its place there would also be acceptable here.)
See also: magfrump's request for recommendations of basic science books; "Recommended Rationalist Reading" (narrower subject focus, and without the element of comparison).
Topic Search Poll Results and Short Reports
At the end of June, I asked Less Wrong to vote for "What topic[s] would be best for an investigation and brief post?" in order to direct a search for topics to examine here. My thanks to everyone who participated (especially since the comments hint that the poll format was not well-liked). The most-wanted topics follow, and the complete list can be found on Google Docs -- maps and graphs related to the poll are also available on All Our Ideas. A score for a topic in the results below is an "estimated [percent] chance that it will win against a randomly chosen idea."
- Systems theory -- 71.6
- Leadership -- 70.7
- Linguistics (general) -- 70.7
- Finance -- 67.0
- Bayesian approach to business -- 60.7
- Lisp (Programming language) -- 59.7
- Anthropology (general) -- 59.4
- Sociology (general) -- 59.2
- Political Science (general) -- 58.5
- Historiography (the methods of history) -- 58.3
- Logistics -- 56.8
- Sociology of Political Organizations -- 56.0
- Military Theory -- 52.1
- Diplomacy -- 51.1
Systems theory, in first place, is a topic that I found while rummaging through online sources, including Wikipedia, for items to add to the poll; it's described there as the "study of systems in general, with the goal of elucidating principles that can be applied to all types of systems in all fields of research. [....] In this context the word systems is used to refer specifically to self-regulating systems, i.e. that are self-correcting through feedback." Leadership seems to fall into both the social and "being effective" categories of interest, but has been only lightly touched on in previous discussion here despite a lot of ink spilled on the topic elsewhere -- the top Google results for "leadership" on this site are currently Calcsam's post on community roles and a book review for the Arbinger Institute's Leadership and Self-Deception. "To Lead, You Must Stand Up" also comes to mind.
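For the curious, a score like "chance of beating a randomly chosen idea" can be roughly approximated straight from pairwise vote tallies. This is only an illustrative sketch with invented vote counts, not the actual All Our Ideas model (which fits a proper Bayesian ranking):

```python
def random_rival_scores(ideas, wins):
    """Estimate each idea's chance (as a percentage) of beating a
    randomly chosen rival, from head-to-head vote counts.

    wins[(a, b)] = number of votes in which a beat b.
    Each pairwise win fraction gets +1/+2 smoothing, so unseen
    pairings default to 50%.
    """
    scores = {}
    for a in ideas:
        fractions = []
        for b in ideas:
            if a == b:
                continue
            w = wins.get((a, b), 0)
            l = wins.get((b, a), 0)
            fractions.append((w + 1) / (w + l + 2))
        scores[a] = 100 * sum(fractions) / len(fractions)
    return scores

# Invented tallies for three of the poll topics:
votes = {
    ("systems theory", "leadership"): 3, ("leadership", "systems theory"): 1,
    ("systems theory", "lisp"): 4, ("lisp", "systems theory"): 0,
    ("leadership", "lisp"): 2, ("lisp", "leadership"): 2,
}
s = random_rival_scores(["systems theory", "leadership", "lisp"], votes)
```

With these invented counts, "systems theory" scores 75.0, comfortably ahead of the other two, mirroring how the real poll's leaders separated from the pack.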
How to Use It
The spreadsheet includes columns for "Currently Investigated By" and "Writeup URLs" -- feel free to add your name or writeup links. If you already know a thing or two about one of the above topics, share your knowledge in a comment below or in a discussion post as appropriate, similar to the earlier "What can you teach us?" If you want to survey what currently exists on a topic, grab a few books, investigate, and then let us know what you found. When a full post rather than just a comment is appropriate, I recommend the tag "topic_search". As mentioned previously, even an investigation that ends in a comment to this post saying a topic isn't useful for LW is itself still useful for the search.
Stanford Intro to AI course to be taught for free online
Taught by professors Sebastian Thrun and Peter Norvig: http://www.ai-class.com/
I figured some here might be interested. :)
Proposal: Systematic Search for Useful Ideas
LessWrong is a font of good ideas, but the topics and interests usually expressed and explored here tend to cluster over a few areas. As such, high-value ideas may still be waiting for the community in other fields, and these can be explored systematically rather than encountered at random. Additionally, there seems to be interest here in examining a wider variety of topics. In order to do this, I suggest creating a community list of areas to look into (besides the usual AI, Cog Sci, Comp Sci, Econ, Math, Philosophy, Psych, Statistics, etc.) and then reading a bit on the basics of these fields. In addition to potentially uncovering useful ideas per se, this also might offer the opportunity to populate the textbooks resource list and engage in not-random acts of scholarship.
Everyone Split Up, There’s a Lot of Ideosphere to Cover
A rough sketch of how I think the project will work follows. I’ll be proceeding with this and tackling at least one or two subjects as long as there are at least a few other people interested in working on it too.
Step 1, Community Evaluation: Using All Our Ideas or similar, generate a list of fields to investigate.
Step 2, Sign-Up: People have the best sense of what they already know and their abilities, so at this point anyone that wants to can pick a subject that’s best for them to look into.
Step 3, Study: I imagine this will mostly involve self-directed reading of a handful of texts, watching some online videos, and maybe calling up one or two people -- in other words, nothing too dramatic. If a vein of something interesting is found, it’s probably better that it’s “marked” for further follow-up rather than further examined alone.
Step 4, Post: Some of these investigations will not reveal anything -- that’s actually a good thing (explained below); for these, a short “Looked into it, nothing here” sort of comment should suffice. Subjects with bigger findings should get bigger, more detailed comments/posts.
Evaluation of Proposal
As a first step, I’ll use a variation of the Heilmeier questions, which is an (admittedly idiosyncratic) mix of the original version and gregv’s enhanced version.
- What are you trying to do? Articulate your objectives using absolutely no jargon.
Produce comments or posts providing very brief overviews of fields of knowledge, not previously discussed here, with notes pertaining to Less Wrong topics and interests.
- Who cares? How many people will benefit?
This post is partially an attempt to determine that, but there seems to be at least some interest in more variety on the site (see above). Additionally, the posts should be a good general resource for anyone that stumbles across them, and might even make good content for search purposes.
- Why hasn't someone already solved this problem? What makes you think what stopped them won't stop you?
The idea is roughly book club meets Wikipedia, but with an emphasis on creating a small evaluative body of knowledge rather than a massive descriptive encyclopedia, and with a LessWrong twist. The sharper focus should make the results more useful to go through than just hitting “random page” in yon encyclopedia.
- How much have projects like this cost (time equivalent)?
Some have the ability to take on “whole fields of knowledge in mere weeks” but that’s not typical -- investigating a subject in this case is roughly comparable in complexity to taking an introductory class or two, which people without any previous training normally accomplish over a period of about three to four months at a pace which is not especially strenuous, and with fairly light monetary costs beyond tuition/fees (which aren't applicable here).
- What are the midterm and final "exams" to check for success?
For each individual investigation, a good “midterm” check would be for the person looking into a field to have a list of resources or texts they’re working on. The final “exam” is a posting indicating if anything useful or interesting was found, and if so, what.
- If y [this community search] fails to solve x [uncover useful knowledge in fields previously under-examined on LessWrong], what would that teach you that you (hopefully) didn't know at the beginning?
Quite possibly, this could be a good thing -- it would indicate that the mix of topics on LessWrong is approximately right, and things can continue on. In this case, we’d end up seeing a bunch of short “nothing interesting here” comments, and can rest more or less assured that further investigation in even more minute detail is unnecessary. This is conditional on not-terrible scholarship and a reasonably good priority list from step 1.
How to offer peer review comments
I was recently asked by an editor to offer (single-blind) peer review comments for some upcoming academic work on the Singularity. Having never done this before, I sought out some literature on how to do it.
In case others find themselves in this position, here are some helpful papers I found:
- Benos et al., How to review a paper
- Guilford, Teaching peer review and the process of scientific writing
- Provenzale & Stanley, A systematic guide to reviewing a manuscript
Reading the Sequences before Starting to Post: Costs and Benefits
This post arose from this discussion in the "Philosophy: a Diseased Discipline" post.
Current Practice
There have been several conversations lately about the costs and benefits of scholarship, the effort of reading the sequences, and attempts to repackage the sequence material in an easier form [1]. There also used to be a practice on LW of telling newbies who weren't producing good content to come back when they'd read the sequences. However, David Gerard, who has been paying more attention than me, has noticed that this practice has stopped. One plausible explanation is that the stoppage is due to a rising awareness of the effort that reading the sequences takes.
In an impromptu unscientific poll, 10 respondents said that they had read the sequences while still lurking on LW, 3 that they read them after creating accounts, and 8 that they had read them while they were still on OB. Nobody said that they still hadn't read the sequences [2]. So, assuming that this roughly represents the status quo, most LW posts/comments come from people who have read the sequences. The questions are: One, is this situation changing (are fewer people reading the sequences than in the past)? And two, should it change, and in what direction?
To answer this, one needs to look at the costs and benefits.
Costs
Length: The sequences comprise over a million words, not counting the comments. They cover material as diverse as semantics, quantum theory, cognitive science, metaethics, and how to write a good eutopia.
Interdependency: Each post in a sequence requires understanding of the previous posts in that sequence, and sometimes posts from earlier sequences. As well as being a source of intimidating and annoying tab explosions, this exacerbates the problem of length. It's hard to read the sequences except by going through large chunks systematically, so they can't be broken up and read in a person's spare time.
Possible Memetic Hazard: Some of the ideas in the Sequences are controversial [3]. These points are often clearly marked in the posts and debated in the comments, so they won't sneak up on anyone; on the other hand, Memetic Hazard was used to describe controversial topics here, so at least someone thinks it's a problem. Some potential readers may not want to be exposed to treatments of controversial issues that argue for one side before they read balanced overviews. Also, discomfort has been expressed over the possibility of LW being a cult. I don't want this post to turn into a forum for the is-it-a-cult conversation, so it's up here as something that may cause disutility to some people who read the Sequences.
Benefits
Usefulness: various people [4] have discussed the various benefits of rationality knowledge in helping them "Win at Life". These benefits vary widely from person to person, so there are many ways to take advantage of the sequences in one's own life.
Informativeness: On questions that don't have immediate practical relevance, it's still good for the community if everyone is familiar with the basic material. Discussions of uploading, for example, wouldn't go very far if people had to stop to explain why they believe that consciousness is physical. Having all participants start out with a minimum number of undissolved confusions improves the SNR of Less Wrong even when it doesn't directly help the individual members win.
A Common Vocabulary: on a forum where everyone has read the sequences, it's easy to refer to them in conversation. Telling someone that their position is equivalent to two-boxing on Newcomb's problem will quickly convey what you mean and allow the person an easy way to craft an answer. Pointing out that a debate is over the meaning of a word will do more to prevent it from expanding into a giant mess than if the participants hadn't read Making Beliefs Pay Rent. And using examples like Bleggs and Rubes or similar can connect a commenter's example to ver audience's current knowledge of the concept.
Please comment to suggest more costs and benefits, provide more info on the sequence-reading habits of commenters, share your experience, or explain why everything I just said is wrong.
[1] Some examples: The Neglected Virtue of Scholarship, Costs and Benefits of Scholarship, Rationality Workbook Proposal.
[2] This option in the poll was created after the others and wound up being elsewhere on the page, so it is probably underrepresented. I'm just taking the results as a first approximation, and will edit this post if the comments suggest the status quo is not what I thought it was.
[3] Some examples: The Many-Worlds and Timeless formulations of quantum mechanics are still being debated by Physicists. Perhaps less importantly, as an average reader can understand the debate and form ver own opinion, issues like the Zombie World are still being debated by philosophers.
[4] See this post for an example: Reflections on Rationality a Year Out