Rational toy buying
I have an 8-year-old sister who is very interested in science. The school she attends (in rural Indiana) takes the mantra of "teaching to the test" to a whole new level; my sister has already come home from school many days crying and stressed over fear that she won't pass her state's proficiency tests (which are 5+ months away). The school is underfunded, and science is the subject most of the (overtly religious) teachers are undertrained in, so science is essentially not addressed. My sister has about an order of magnitude more homework on cursive writing than on anything related to science.
I want to purchase a gift for her for the holidays this year (the occasion doesn't matter, but it's a time when I'll actually be home so I can give her the gift and play with her / explain how to use it / hopefully help her start thinking about some things). I'm willing to consider age-appropriate ideas across several price ranges, but I want to think of something that will deliver a lot of utility: i.e. it should be compelling enough that an 8-year-old will actually like using it and there should be at least some evidence that she will benefit from it.
I've considered things like the EDUbuntu computers that come with lots of educational software, but actually buying one seems to be not straightforward. Is the best thing to buy a cheap netbook and then install EDUbuntu myself? Is a netbook too much for an 8-year-old? I'm mostly focused on things that will help her be proficient with computers and potentially help her develop a more sophisticated interest in them as she grows up. The Lego Mindstorm robotics stuff also crossed my mind.
Does anyone have experience with this / know of resources for making a good investment? Or am I way over-thinking this and just some regular Legos or art supplies are going to do essentially just as much good?
Added 02/25/2012
Much of the advice was very helpful. I ultimately found out that my young sister was interested in "mixing chemicals together because it looked cool." An item that I found which looked like a good way to bridge the gap between her more girly interests and her interest in chemistry was Perfume Science from Thames and Kosmos. I also purchased a variety kit from Snap Circuits. She loved both items and we spent a good bit of time playing with them during my trip home to see my family. We made several different kinds of perfume and also did some activities that helped explain how different extracts have been acquired throughout human history for their smells. Ultimately, my sister made a science fair project (something which surprised my parents a lot) based around one of the activities in the perfume science booklet, and she won second place.
I was very happy with both my decision to ask this question on LessWrong (despite the title of the post, which seems to deeply annoy many LWers) and my purchase decisions. I used the website Fat Brain Toys to purchase the items and everything arrived without a problem. That site seemed to also have a reasonably good selection of more educationally oriented toys across several ages, with useful customer reviews.
[LINK] More Bathtubs
An interesting article today on the correlation heuristic/fallacy w.r.t. finance. It describes how anchoring and availability bias contribute to difficulty understanding the difference between stocks and flows.
More shameless ploys for job advice
I've posted a few things seeking career advice with mixed success. In this case I have a more concrete question and if you feel like commenting, I'd appreciate it. I think it helps me to hear what a community of others thinks from a rational perspective because there are often many components to a decision that I had not anticipated.
I am currently a grad student working in computer vision. I dislike the way that my current adviser focuses only on projects that have short-term commercial gains. I want to study more fundamental, theoretical research which may take more time to develop but will also be more aesthetically pleasing to me. For me, the only reason to agree to be paid so little as a graduate student is to gain the opportunity to work freely on high risk projects that happen to be of personal interest. Practical considerations are not interesting to me as motivation for a Ph.D. On the other hand, it has felt nearly impossible to actually find faculty willing to have students work on theory. Rather than grinding away with no dental insurance for 3 more years, followed by low paying post-docs, etc., perhaps seeking a job will be better.
I have some interesting job prospects that are all with larger companies. The jobs are basically business analytics, including scientific computing, data mining, and machine learning. I'm sure the problems to work on are not that great; not going to be Earth shattering, but at the same time they sound a lot more interesting to me than hedge fund data analysis or military research labs (I have working experience at a government lab and I did not enjoy it). The hours would be better; the pay is fair and it would be a good living. I could pursue some things as serious hobbies outside work.
At the same time though, there feels like a nagging opportunity cost. I am not naive enough to believe there will be a nice faculty job waiting for me even if I finish my Ph.D. However, I really enjoy theoretical and mathematical physics, machine learning, computational complexity, and scientific computing, and various philosophical considerations generated by these. Being able to teach about them, research them, and work on them professionally seems incredibly appealing. Am I making a big mistake if I leave? How can one pursue philosophical interests and desires to work in theory outside of a typical job? Or should I even worry about such a thing?
Watts, son
Some interesting numbers to contextualize IBM’s Watson:
- 90 Power 750 Express servers, each with 4 CPUs, each of those having 8 cores
- Total of 15TB RAM (yep, all of Watson’s data was stored in RAM for rapid search. The human brain’s memory capacity is estimated at between 3 and 6 TB, and not all of that functions like RAM, and it’s implemented in meat.)
- Each of the Power 750 Express servers seems to consume a maximum of 1,949 watts, making a total of about 175 kW for the whole computer
- There also appears to be a sophisticated system connected to the Jeopardy buzzer but I can’t find power specs for that part.
- IBM estimates that Watson can compute at about 80 teraflops (10^12). This paper mentions in passing that the human brain operates in the petaflop range (10^15), but at the same time, a brain is not a digital system and so the flop comparison is less meaningful.
To put this in perspective, a conservative upper bound for a human being standing still is about 150 W — less than 0.1% of Watson's draw — and the person just holds the buzzer and operates it with a muscular control system.
Each of the servers generates a maximum of 6,649 BTU/hour, so Watson overall would generate about 600,000 BTU/hour and require massive amounts of air conditioning. I don't know a good estimate for the heat-removal cost, but it would increase Watson's energy cost significantly.
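The aggregate figures above follow directly from the per-server numbers; a quick arithmetic sanity check (only the per-server wattage, BTU rating, and the 150 W human figure from this post are used):

```python
# Sanity-check the aggregate power and heat figures quoted above.
servers = 90
watts_per_server = 1_949        # quoted max draw per Power 750 Express
btu_per_server = 6_649          # quoted max heat output per server, BTU/hour
human_watts = 150               # rough upper bound for a person standing still

total_watts = servers * watts_per_server
total_btu = servers * btu_per_server

print(total_watts)              # 175410 -> ~175 kW
print(total_btu)                # 598410 -> ~600,000 BTU/hour
print(human_watts / total_watts)  # ~0.00086, i.e. under 0.1% of Watson's draw
```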
I don’t mean to criticize Watson unduly; it certainly is an impressive engineering achievement and has generated a lot of good publicity and public interest in computing. The engineering feat is impressive if for no other reason than that it is the first accomplishment of this scale, and pioneering is always hard… future Watsons will be cheaper, faster, and more effective because of IBM’s great work on this.
But at the same time, the amazing power and storage costs for Watson really kind of water it down for me. I’m not surprised that if you throw power and hardware and memory at a problem, you can use rather straightforward machine learning methods to solve it. I feel similarly about Deep Blue and chess.
A Turing test that would be more impressive to me would be building something like Watson or Deep Blue that is not allowed to consume more power than an average human, and has comparable memory and speed. The reason this would be impressive is that in order to build it, you’d have to have some way of representing data and reasoning in the system that is efficient to a similar degree that human minds are. One thing you could not do is simply concatenate an unreasonable number of large feature vectors together and overfit a machine learning model. Since this is an important open problem with lots of implications, we should use funding and publicity to drive research organizations like IBM towards that goal. Maybe building Watson is a first step and now the task is to miniaturize Watson, and in doing so, we’ll be forced to learn about efficient brain architectures along the way.
Note: I gathered the numbers above by looking here and then scouring around for various listings of specific hardware specs. I'm willing to believe some of my numbers might be off, but probably not significantly.
Informal job survey
I've recently been thinking about future job prospects and ways that I might alter my preferences to increase the likelihood that I'll be happy with my future career. I have read some of the LessWrong resources about this issue, but they don't seem to address my particular concerns. I think there is a high relative importance for selecting a career with a high capacity for making me happy. It will consume at least 8 prime daylight hours of my work days and in many cases also some of the weekend. In all likelihood I will also be forced to sit in front of a computer for extended periods of time. The tasks I am assigned may have nothing to do with the things that I happen to find intellectually interesting or of ethical importance. And the work will likely sap me of most of the energy that I could use to pursue hobbies or other more "intrinsically worthwhile endeavors" (intrinsic to my personal preference ordering). Given that I believe these factors will largely determine whether I feel happy in many future situations and also whether I feel generically happy about the content of my life as a whole, I think it is worthwhile to seek advice from other rationalists in how to choose an appropriate career goal and take steps to pursue it.
What I have found on LessWrong, however, is that ambiguous and open-ended pleas for advice generally steer off course, even if the tangential issues are very interesting and insightful. Rather than query everyone for open advice about preference hacking, vague goal achievement, and wisdom for properly assigning value to some of the factors I have listed above, I propose a simpler informal job survey.
If you are interested, please briefly list the job you have or the job of someone you know very well (well enough that you feel you know relevant details about the job, details that may be hard to gather in less than 1 hour of internet searching). You don't have to reveal the location or name of the employer or anything like that, just the type of job. Optionally, please also include a sentence stating whether you (or your friend, etc.) seem to enjoy the job and why. For example, my entry would be like this:
I am a graduate student studying applied mathematics. I enjoy the access to educational resources and the flexible schedule that my current job offers, but I think my personal displeasure with computer programming and my perception that future jobs doing mathematical theory are scarce cause me to dislike the job overall.
If enough people are willing to participate, my hope is that the stream of small anecdotal remarks will serve as a brainstorming session. I hope to hear about jobs I may never have thought of, and also reasons for liking or disliking a job that I may never have thought of. The goal is to spark additional search on my own and also to gauge my current preferences in light of preferences that others have experienced with specific jobs. Such a survey would be a very helpful resource allowing me to synthesize data about job directions where the initial search will have a higher probability of being helpful for me.
How to hack one's self to want to want to ... hack one's self.
I was inspired by the recent post discussing self-hacking for the purpose of changing a relationship perspective to achieve a goal. Despite my feeling inspired, though, I also felt like life hacking was not something I could ever want to do even if I perceived benefits to doing it. It seems to me that the place where I would need to begin is hacking myself in order to cause myself to want to be hacked. But then I started contemplating whether this is a plausible thing to do.
In my own case, there are two concrete examples in mind. I am a graduate student working on applied math and probability theory in the field of machine vision. I was one of those bright-eyed, bushy-tailed dolts as an undergrad who just sort of floated into grad school believing that as long as I worked sufficiently hard, it was a logical conclusion that I would get a tenure-track faculty position at a desirable university. Even though I am a fellowship award winner and I am working with a well-known researcher at an Ivy League school, my experience in grad school (along with some noted articles) has forced me to re-examine a lot of my priorities. Tenure-track positions are just too difficult to achieve, and getting one depends on networking, politics, and whether the popularity of your research area happens to peak at the same time your productivity in it does.
But the alternatives that I see are: join the consulting/business/startup world, become a programmer/analyst for a large software/IT/computer company, work for a government research lab. I worked for two years at MIT's Lincoln Laboratory as a radar analyst and signal processing algorithm developer prior to grad school. The main reason I left that job was because I (foolishly) thought that graduate school was where someone goes to specifically learn the higher-level knowledge and skills to do theoretical work that transcends the software development / data processing work that is so common. I'm more interested in creating tools that go into the toolbox of an engineer than with actually using those tools to create something that people want to pay for.
I have been thinking deeply about these issues for more than two years now, almost every day. I read everything that I can and I try to be as blunt and to-the-point about it as I can be. Future career prospects seem bleak to me. Everyone is getting crushed by data right now. I was just talking with my adviser recently about how so much of the mathematical framework for studying vision over the last 30 years is just being flushed down the tubes because of the massive amount of data processing and large-scale machine learning we can now tractably perform. If you want to build a cup-detector, for example, you can do lots of fancy modeling, stochastic texture mapping, active contour models, fancy differential geometry, occlusion modeling, etc. Or you can just train an SVM on 50,000,000 weakly labeled images of cups you find on the internet. And that SVM will utterly crush the performance of the expert system based on 30 years of research from amazing mathematicians. And this crushing effect only stands to get much, much worse, and at an increasing pace.
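The brute-force baseline described above — skip the hand-crafted modeling and just fit a linear classifier to huge amounts of weakly labeled data — looks, in miniature, something like the following sketch. Everything here is illustrative (scikit-learn, synthetic feature vectors standing in for image descriptors, a toy label-noise model), not anything from the original post:

```python
# Minimal sketch of the "just train an SVM" baseline: a linear SVM fit to
# noisy ("weak") labels. Random vectors stand in for image features; at the
# 50M-image scale one would use a linear/stochastic solver, not a kernel SVM.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, d = 2000, 128                       # toy scale: 2000 "images", 128-dim features
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true > 0).astype(int)       # hypothetical "cup" / "not cup" labels

# Weak supervision: flip 10% of labels to mimic noisy web annotations.
flip = rng.random(n) < 0.10
y_weak = np.where(flip, 1 - y, y)

X_tr, X_te, y_tr, y_te = train_test_split(X, y_weak, random_state=0)
clf = LinearSVC(C=1.0, max_iter=5000).fit(X_tr, y_tr)
print(clf.score(X_te, y_te))           # well above chance despite the label noise
```

The point of the sketch is only that the pipeline is embarrassingly simple: no geometry, no occlusion modeling, just features, labels, and a convex optimizer — which is exactly why scale makes it so hard to beat.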
In light of this, it seems to me that I should be learning as much as I can about large-scale data processing, GPU computing, advanced parallel architectures, and the gross details of implementing bleeding edge machine learning. But, currently, this is exactly the sort of thing I hate and went to graduate school to avoid. I wanted to study Total Variation minimization, or PDE-driven diffusion models in image processing, etc. And these are things that are completely crushed by large data processing.
So anyway, long story short: suppose that I really like "math theory and teaching at a respected research university" but I see the coming data steamroller and believe that this preference will cause me to feel unhappy in the future when many other preferences I have (and some I don't yet know about) are affected negatively by pursuit of a phantom tenure-track position. But suppose also that another preference I have is that I really hate "writing computer code to build widgets for customers," which can include large-scale data analyses, and thus I feel an aversion to even trying to *want* to hack myself and orient myself toward a more practical career goal.
How does one hack one's self to change one's preferences when the preference in question is "I don't want to hack myself?"
Psychologist making pseudo-claim that recent works "compromise the Bayesian point of view"
I have recently been corresponding with a friend who studies psychology regarding human cognition and the best underlying models for understanding it. His argument, summarized very briefly, is given by this quote:
Lastly, there has been a huge amount of research over the last two decades that shows human reasoning is 1) entirely constituted by emotion, and that it is 2) mostly unconscious and therefore out of our control. A lot of this research has seriously compromised the Bayesian point of view. I am referring to work done by Antonio Damasio, who demonstrated the essential role emotion plays in decision making (Descartes' Error), Timothy Wilson, who demonstrated the vital role of the unconscious (Strangers to Ourselves), and Jonathan Haidt, who demonstrated how moral reasoning is dictated by intuition and emotion (The Emotional Dog and its Rational Tail). I could go on and on here. I assume that you are familiar with this stuff. I'd just like to know how you would respond to this work from the point of view of your studies (in particular, those two points). I don't mean to get in a tit for tat debate here, just want the other side of the story.
I am having trouble synthesizing a response that captures the Bayesian point of view (and is sufficiently backed up by sources so that it will be useful for my friend rather than just gainsaying of the argument) because I am mostly a decision theory / probability person. Are these works of psychology and neuroscience really illustrating that human emotion governs decision making? What are some good neuroscience papers to read that deal with this, and how do Bayesians respond? It may be that everything he mentions above is a correct assessment (I don't know and don't have enough time to read the books right now), but that it is irrelevant if you want to make good decisions rather than just accept the types of decisions we already make.
2011 Buhl Lecture, Scott Aaronson on Quantum Complexity
I was planning to post this in the main area, but my thoughts are significantly less well-formed than I thought they were. Anyway, I hope that interested parties find it nonetheless.
In the Carnegie Mellon 2011 Buhl Lecture, Scott Aaronson gives a remarkably clear and concise review of P, NP, other fundamentals in complexity theory, and their quantum extensions. In particular, beginning around the 46 minute mark, a sequence of examples is given in which intuition from computational complexity theory would have accurately predicted physical results (and in some cases this actually happened, so it wasn't just hindsight bias).
In previous posts we have learned about Einstein's arrogance and Einstein's speed. This pattern of results flowing from computational complexity to physical predictions seems odd to me in that context. Here we are using physical computers to derive abstractions about the limits of computation, and from there we are successfully able to intuit limits of physical computation (e.g. brains computing abstractions of the fundamental limits of brains computing abstractions...) At what point do we hit the stage where individual scientists can rationally know that results from computational complexity theory are more fundamental than traditional physics? It seems like a paradox wholly different than Einstein rationally knowing (from examining bits of theory-space evidence rather than traditional-experiment-space evidence) that relativity would hold true. In what sort of evidence space can physical brain computation yielding complexity limits count as bits of evidence factoring into expected physical outcomes (such as the exponential smallness of the spectral gap of NP-hard-Hamiltonians from the quantum adiabatic theorem)?
Maybe some contributors more well-versed in complexity theory can steer this in a useful direction.
States of knowledge as amplitude configurations
I am reading through the sequence on quantum physics and have had some questions which I am sure have been thought about by far more qualified people. If you have any useful comments or links about these ideas, please share.
Most of the strongest resistance to ideas about rationalism that I encounter comes not from people with religious beliefs per se, but usually from mathematicians or philosophers who want to assert arguments about the limits of knowledge, the fidelity of sensory perception as a means for gaining knowledge, and various (what I consider to be) pathological examples (such as the zombie example). Among other things, people tend to reduce the argument to the existence of proper names a la Wittgenstein and then go on to assert that the meaning of mathematics or mathematical proofs constitutes something which is fundamentally not part of the physical world.
As I am reading the quantum physics sequence (keep in mind that I am not a physicist; I am an applied mathematician and statistician and so the mathematical framework of Hilbert spaces and amplitude configurations makes vastly much more sense to me than billiard balls or waves, yet connecting it to reality is still very hard for me) I am struck by the thought that all thoughts are themselves fundamentally just amplitude configurations, and by extension, all claims about knowledge about things are also statements about amplitude configurations. For example, my view is that the color red does not exist in and of itself but rather that the experience of the color red is a statement about common configurations of particle amplitudes. When I say "that sign is red", one could unpack this into a detailed statement about statistical properties of configurations of particles in my brain.
The same reasoning seems to apply just as well to something like group theory. States of knowledge about the Sylow theorems, just as an example, would be properties of particle amplitude configurations in a brain. The Sylow theorems are not separately existing entities which are of themselves "true" in any sense.
Perhaps I am way off base in thinking this way. Can any philosophers of the mind point me in the right direction to read more about this?