I'm reading NurtureShock by Po Bronson & Ashley Merryman. Several things in the book, esp. the chapter on "Tools of the Mind", an intriguing education program, suggest that our education of young children not only isn't very good even when evaluated with the tests the curriculum was designed for, but is actually worse than just letting kids play. (My analogy and interpretation—don't blame this on the Tools people—is that conventional education may be like a Soviet five-year plan, trying to force children to acquire skills & knowledge that they would have been motivated to learn on their own if there weren't a school, and that early education shouldn't focus entirely on teaching specific facts, but also on teaching how to think, form abstractions, and control impulses.)
Say they're going to play fireman. The Tools teacher teaches the kids about what firemen do and what happens in a fire, and gives the kids different roles to play, then lets them play. They teach facts not because the facts are important, but to make the play session longer and more complicated. Tools does well in increasing test scores, but even better at reducing disruptive behavior. 
Tools has a variety of computer games that are designed to get kids to exercise particular cognitive skills, like focusing on something while being aware of background events. But the games often sound like more-boring ways of teaching kids the same things that video-games teach them.
Tools did no better than the existing curriculum on certain metrics in a recent larger study. But it didn't perform worse, either.
The first study you do with any biological intervention is to compare the intervention to a control group that has no intervention. But in education, AFAIK no one has ever done this. Everyone uses the existing curriculum as the control.
Whatever country you're in, what metrics do you use, and what evidence do you have that your schools are better than nothing at all?
There may be some things that you need to sit kids down and force them to learn—say, arithmetic, math, and typing—but I kinda doubt it's more than 20% of the grade school curriculum. I spent a lot of time practicing penmanship, futilely trying to memorize the capitals and chief exports of all fifty states, and studying the history of Thanksgiving and the American Revolution over and over again. We could have a short-classroom-hours control group, where kids spend a few hours a day learning those few facts they need to know, and the rest of the time playing.
ADDED: There is one kind of control--kids who've not gone to pre-school vs. kids who went to pre-school, or who went to Head Start.
 I fear somebody is going to complain that disruptive behavior is what we need to teach children so they can innovate and question authority. Open to discussion, but if it worked that way, we'd be overwhelmed with innovators and independent thinkers today.
 I actually learned the names of all the states from a song, and learned where they are from a jigsaw puzzle.
"Hjernevask" a well known (in Norway at least) documentary series that I am sure will be interesting to rationalists here is now available with English subtitles online. Produced by Ole Martin Ihle and Harald Eia a Norwegian documentarian and comedian, it casts a light on both ways in which we know people to be different as well as the culture that is academia in the Nordic country and probably elsewhere as well.
- The Gender Equality Paradox - Why do girls tend to go into empathizing professions and boys into systemizing professions? Why does the labor market become more gender segregated the more economic prosperity a country has?
- The Parental Effect - How much influence do parents really have on their children? To what degree is intelligence inherited?
- Gay/Straight - To what extent is sexual preference innate? Are there differences between heterosexual and homosexual brains? Is homosexuality a result of a choice or is it innate?
- Violence - Are people from some cultures more aggressive than others?
- Sex - Are there biological reasons men have a greater tendency than women to want sex without obligation?
- Race - Are there significant genetic differences between different peoples?
- Nature or Nurture - Is personality acquired or inherited?
The links go to the YouTube videos with English subtitles. Because linkrot sucks, I'm providing another source for the videos.
There was very little in the series that I found new, and I disagreed with some of the presentations. But this is not surprising given my eccentric interest in humans. (^_^) I found the interviews with the scientists and academics interesting, and think that overall the series presents a good overview, something well worth watching, especially considering some of the debates I've seen take place here recently. (;_;)
I'm somewhat frustrated by the frequent posts warning us about the dangers of Ev. Psych reasoning. (It seems like we average at least one of these per month).
It seems like a lot of this widespread hostility (the reaction to Harald Eia's Hjernevask is a good example of it) stems from the fact that ev. psych is new. New ideas are held to a much higher standard than old ones. The early reaction to ev. psych within psychology was characteristic of this effect. Behaviorists, Freudians, and social psychologists had all created their own theories of "ultimate causation" for human behaviour. None of those theories would have stood up to the strenuous demands for experimental validation that ev. psych endured.
But science started to suffer. With so much easy money, few wanted to study the hard sciences. And the social sciences suffered in another way: The ties with the government became too tight, and created a culture where controversial issues, and tough discussions were avoided. Too critical, and you could risk getting no more money.
It was in this culture that Harald Eia started his studies, in sociology, early in the nineties. He made it as far as becoming a junior researcher, but then dropped out and started a career as a comedian instead. He has said that he suddenly, after reading some books which were not on the syllabus, discovered that he had been cheated. What he was taught in his sociology classes was not up to date with international research, and was based more on ideology than on science.
The latter wrote that in a 2010 article on the documentary series that I would also recommend reading. HT to iSteve where it is quoted in full.
I have been talking to some people in my dorm (a few specific people I thought would benefit from it and appreciate it) and teaching them rationality. I have been thinking about which skills should be taught first, and that made me think about what skill is most important to me as a rationalist.
I decided to start with the question “What does it mean to be able to test something with an experiment?” which could also mean “What does it mean to be falsifiable?”
To help my point I brought up the thought experiment of the dragon in Carl Sagan's garage, which goes as follows:
Carl: There is a dragon in my garage
Me: I thought dragons only existed in legends and I want to see for myself
Carl: Sure follow me and have a look
Me: I don’t see a dragon in there
Carl: My dragon is invisible
Me: Let me throw some flour in so I can see where the dragon is by the disruption of the flour
Carl: My dragon is incorporeal
And so on
The answer that I was trying to bring about was along the lines that if something can be tested by an experiment, then it must have at least one effect that differs depending on whether it is true or false. Further, if something has at least one effect that differs depending on whether it is true or false, then I can, at least in theory, test it with an experiment.
This led me to the statement:
If something cannot, at least in theory, be tested by experiment, then it has no effect on the world and lacks meaning from a truth standpoint, and therefore from a rational standpoint.
Anthony (the person I was talking to at the time) started his counterargument with the claim that an object in a thought experiment cannot be tested for, but still has meaning.
So I revised my statement: any object that, if brought into the real world, could not be tested for has no meaning. This is under the assumption that if an object could not be tested for in the real world, it also has no effect on anything in the thought experiment; i.e., the story of the dragon would have gone the same way regardless of the dragon's truth value, had it been set in the real world.
Then the discussion continued into whether it could be rational to hold a belief that could not, even in theory, be tested. It became interesting when Anthony argued that if believing in a dragon in your garage gives you happiness, and the world would be the same either way apart from that happiness, then, combined with the principle that rationality is the art of systematized winning, it is clearly rational to believe in the dragon.
I responded that truth trumps happiness: believing in the dragon would force you to hold a false belief, which is not worth the amount of happiness gained by believing it. Even further, I argued that it would in fact be a false belief, because p(world) > p(world)p(impermeable invisible dragon), which is a simple Occam's razor argument.
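To spell out that Occam step (my notation, not part of the original conversation): a conjunction can never be more probable than either of its conjuncts, since

```latex
P(\text{world} \land \text{dragon})
  = P(\text{world}) \cdot P(\text{dragon} \mid \text{world})
  \le P(\text{world}),
```

with strict inequality whenever P(dragon | world) < 1. Adding the dragon to your model can only cost probability; it can never add any.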
My intended direction for this argument with Anthony from this point was to apply these points to theology but we ran out of time and we have not had time again to talk so that may be a future post.
Today, however, Shminux pointed out to me that I held beliefs that were themselves non-falsifiable. I realized then that it might be rational to believe non-falsifiable things for two reasons (I'm sure there are more, but these are the main ones I can think of; please comment with your own):
1) The belief has a beauty to it that flows with falsifiable beliefs and makes known facts fit more perfectly. (This is very dangerous and should not be used lightly, because it focuses too closely on opinion.)
2) You believe that the belief will someday allow you to make an original theory which will be falsifiable.
Both of these reasons, if not used very carefully, will let false beliefs in. As such, I decided that if a belief or new theory meets these conditions well enough to make me want to believe it, I should put it into a special category of my thoughts (perhaps "conjectures"). This category should rank below beliefs in power but still be held as part of how the world works, and anything in this category should always strive to leave it, meaning that I should always strive to make any non-falsifiable conjecture no longer a conjecture, either by turning it into a belief or by disproving it.
Note: This is my first post so as well as discussing the post, critiques simply to the writing are deeply welcomed in PM to me.
As I understand it, Lent is a holiday where we celebrate the scientific method by changing exactly one variable in our lives for 40 days. This seems like a convenient Schelling point for rationalists to adopt, so:
What variable are you going to change for the next 40 days?
(I am really annoyed I didn't think of this yesterday.)
A few good quotes:
The three social psychologists set up a test experiment, then played by current academic methodologies and widely permissible statistical rules. By going on what amounted to a fishing expedition (that is, by recording many, many variables but reporting only the results that came out to their liking); by failing to establish in advance the number of human subjects in an experiment; and by analyzing the data as they went, so they could end the experiment when the results suited them, they produced a howler of a result, a truly absurd finding. They then ran a series of computer simulations using other experimental data to show that these methods could increase the odds of a false-positive result—a statistical fluke, basically—to nearly two-thirds.
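Out of curiosity, here is a minimal sketch (my own toy simulation, not the researchers' code) of just one of the practices described above, optional stopping. Both groups are drawn from the same distribution, so every "significant" result is a false positive; even this one trick alone pushes the rate well past the nominal 5%:

```python
import random
from scipy import stats

def experiment_with_peeking(max_n=100, peek_every=5, alpha=0.05):
    """Run one null experiment, peeking at the data as it accumulates."""
    a, b = [], []
    for _ in range(max_n):
        a.append(random.gauss(0, 1))  # both groups sampled from N(0, 1)
        b.append(random.gauss(0, 1))
        # Peek every few subjects; stop as soon as the result looks "significant".
        if len(a) >= 10 and len(a) % peek_every == 0:
            if stats.ttest_ind(a, b).pvalue < alpha:
                return True  # a guaranteed false positive
    return False

trials = 1000
print(sum(experiment_with_peeking() for _ in range(trials)) / trials)
# Noticeably above 0.05; stacking the other tricks (recording many
# variables, reporting selectively) inflates it further, toward the
# two-thirds figure the article mentions.
```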
Laugh or cry?:"He prefers psychology’s close-up focus on the quirks of actual human minds to the sweeping theory and deduction involved in economics."
Last summer, not long after Sanna and Smeesters left their respective universities, Simonsohn laid out his approach to fraud-busting in an online article called “Just Post It: The Lesson From Two Cases of Fabricated Data Detected by Statistics Alone”. Afterward, his inbox was flooded with tips from strangers. People wanted him to investigate election results, drug trials, the work of colleagues they’d long doubted. He has not replied to these messages. Making a couple of busts is one thing. Assuming the mantle of the social sciences’ full-time Grand Inquisitor would be quite another.
This looks like a clue that there's work available for anyone who knows statistics. Eventually, there will be an additional line of work for how to tell whether a forensic statistician is competent.
[Feel free to read this poor little unrigorous and unsourced post in JK Simmons' voice. That is entirely optional and you are of course free to read it in any voice you like; I only thought it might be interesting in the light of what is mentioned in the edit at the bottom of the text]
Nowadays, it seems that the correlation between sciency stuff, social ineptitude, and uncoolness, is cemented in the mind of the public. But this seems to be very era-specific, even time-specific.
As a lesswronger, I find what follows ironic: in Islamic countries, "scientists" are called by the same word used for religious leaders and other teachers, "olama", literally "knowers"; historically, there's been a huge overlap between the two, and when one of these folks speaks, you're supposed to shut up and listen. This is still true to this day. There might not be much wealth to be gained from marrying a scientist, but there was status; amusingly enough, it's modern-day materialism that is pushing them into irrelevance as money becomes, more and more, the sole measure of status.
In the West, in the XIXth century, Science and Progress were hip and awesome. Being a scientist of some sort was practically a requirement for any pulp hero. In the USA, an era of great works of engineering that had a dramatic impact on life quality made engineers heroes of popular fiction, men of knowledge and rigour who would not bow down to money and lawyer-cushioned bourgeois, or to corrupt and fickle politicians, men who would stand up against injustice and get the job done no matter what. Everyone wanted to call themselves an engineer, and the word was rampantly abused into meaninglessness; florists called themselves "flower engineers"! That's how cool being an engineer was.
In the Soviet Union, as long as they didn't step on the toes of the Party, scientists were highly acclaimed and respected, they got tons of honour and status. There was a huge emphasis on technological progress, on mankind reaching its full potential (at least on paper).
Nowadays, nearly the entire leadership of China is made up of technicians and engineers. Not lawyers, or economists, or literati. These people only care about one thing, getting the job done - and that's what Science does.
So, I've really got to ask: when and *how* did Science and Engineering become "uncool"? Why are they termed "geek", the term used for sideshow circus performers whose speciality was eating chickens alive (or something like that), and which, before that, used to be synonymous with freak and fool? When and how did we become worse than clowns in the eyes of society? Most importantly: how can the process be reversed?
After all, from a utilitarian standpoint, Science being cool and appreciated and respectable is kind of important.
EDIT: There's also the strange relationship, in the public mind, between science and dangerous, callous, abusive insanity, with a long tradition in popular fiction from Victor Von Frankenstein and Captain Nemo to Tony Stark and GLaDOS, and some Real Life counterparts, especially in brutal totalitarian regimes. Wikipedia has an interesting article on the topic, and how the characterization and prevalence of the Mad Scientist related to time-pertinent perceptions of Science.
For some reason, that aspect is often treated as cool and dramatic and impressive (besides being characterized as repulsive), perhaps because it involves displays of power over others, which is a high-status thing to do. Is that one of the existing paths to social prestige? Achieving power, and being inconsiderate about flaunting it? I'd like to hear more constructive alternatives, because that one doesn't seem viable, from where I stand.
Luke has mentioned much of the research that aspiring philosophers ought to read here.
In fact, he delineated a basis upon which good philosophy can be built: a worldview brought by science and experimentation that relates to, and informs, the kinds of facts philosophers need to understand to increase their probability of asking, and giving good answers to, relevant questions.
Some argued that his list is biased; let us assume for the time being that it isn't.
Some argued that the main problem with the list is that it requires either an unmanageable amount of time to go through, or improbable levels of intelligence/motivation to do so. This argument does make sense if the purpose of the list was "Let us create a good Philosophy Course".
But that is not its purpose. Its purpose, as with most of what Luke does publicly, is to save the World. And if doing so requires making people go through an enormous number of pages of content beyond their formal education, well, then so be it. If it has to be a six-year course, then it has to be.
At the end of his post he says:
You might also let them read 20th century analytic philosophy at that point [after going through his Mega-Course] — hopefully their training will have inoculated them from picking up bad thinking habits.
Now 20th century Analytic Philosophy, and some philosophy that isn't strictly analytic, should definitely be at a philosophy course. I urge other LessWronger philosophers to guide people through that.
Here is a list I have published here before, for Philosophy of Mind and Language (sometimes considered subsets or children of Analytic Philosophy). It covers only the minimal reading necessary to grasp the place of computationalism, and so-called computational theories of mind within the larger debate of philosophy.
But the last century has seen a lot of good philosophy that by luck conflicted with neither the science of the day nor the science developed since, up to 2012. Sometimes authors were very careful when writing their philosophy, and well versed in science, like Dennett, Hofstadter, Putnam, Ned Block, and Chalmers. Finally, the topics at hand are frequently sufficiently orthogonal to scientific development that it simply didn't matter that the author didn't know in 1970 what we (after the Mega-Course) know today.
So I ask Luke, Pragmatist, Carl Shulman and others to help build the layer that will sit on top of the science layer in the "Philosophy Given Science" Mega-Course for aspiring philosophers. The course will have four layers. Below the science layer, will be its prerequisites (admittedly large), and atop the one I'm suggesting here, we hope to start building a really good philosophy that is compatible with our scientific understanding, tackles mostly Big Questions which are highly likely to be meaningful, and frequently also useful for the major issues we still have time to solve.
This is the pyramidal structure I suggest we create, 1,2 and 3 being the content of the Mega-Course, and 4 being the likely outcome we expect it to facilitate, made by those who undertake it:
4) Philosophy given 1,2 and 3. Tackling the Big Questions, and making it portable to areas such as AGI, Biotech, etc...
3) Philosophy, up to 2012, that is well informed about or orthogonal to Science so far. Or lucky.
2) Science that is relevant to philosophy. This.
1) Prerequisites for 2.
In this post we begin layer three. I'll start by copying the Mind and Language list I had sent before; afterwards I'll include some of the recommendations within philosophy that Bostrom gave me as an undergrad, my selection of Dennett's, and Dennett's selection of science:
Language and Mind:
- 37 Ways That Words Can Be Wrong - Yudkowsky
- Darwin's Dangerous Idea, Chapters 3, 5, 11, 12 and 14 - Daniel Dennett
- On Denoting - Bertrand Russell
- On What There Is - Quine
- Two Dogmas of Empiricism - Quine
- Naming and Necessity - Kripke OR Two Dimensional Semantics - David Chalmers
- “Is Personal Identity What Matters?” - Derek Parfit
- Breakdown of Will - Part Two (don’t read part 3) George Ainslie
- Concepts of Consciousness 2003 - Ned Block
- Attitudes de dicto and de se - David Lewis- Phil Papers 1
- General Semantics - David Lewis - Phil Papers 1
- The Stuff of Thought, Chapter 3 “Fifty Thousand Innate Concepts” - Steve Pinker
- Beyond Belief - Daniel Dennett in Intentional Stance
- The Content and Epistemology of Phenomenal Belief - David Chalmers
- Quining Qualia OR I Am a Strange Loop OR Consciousness Explained - Dan & Doug
- Intentionality - Pierre Jacob - Stanford Encyclopedia Phil
From Bostrom's suggestions:
- Philosophical Papers - David Lewis
- Frank Arntzenius
- Timothy Williamson
- Brian Skyrms
- Real Patterns
- True Believers
- Kinds of Minds
- Intentional Systems In Cognitive Ethology
- Those mentioned above in the Mind and Language list.
Not previously cited, but in Luke's favorites list:
- Noam Chomsky
- Stephen Stich
- Hilary Kornblith
- Eric Schwitzgebel
- Michael Bishop
Dennett's suggestions on interdisciplinary science (layer 2):
- The Company of Strangers - Paul Seabright
- Not by Genes Alone - Boyd and Richerson
- I Am a Strange Loop. - Hofstadter
- Probably easier to list what should not be read...
This may initially appear overwhelming, but it is probably one order of magnitude less content than Luke's original post about layer 2. Once again I ask philosophers to specify more things within areas that are not well addressed here, such as ethics. Also books by scientists dealing with philosophical topics (such as Sam Harris: The Moral Landscape) can be added here.
The "Philosophy Given Science" MegaCourse may never actually take place, but it will be a very valuable guideline for institutions to influence actual Philosophy courses, for Philosophy teachers to get cohesive and preselected content to teach, and most importantly for diligent aspiring philosophers willing to get to the Big and relevant problems, instead of being the ball in the chaotic Pinball game that academic philosophy has become, despite all good things it brought. When the path is too long, a shortcut is not a shortcut anymore, it is the only way to get there before it is too late.
As I understand it, in some circles, there is a burgeoning hope that practice in this generation will somehow improve performance in the next – based on a word they have heard but do not understand. That word is epigenetics.
Genes can certainly be modified in ways that persist. For example, the cells in your skin produce more skin cells when they divide, rather than muscle cells or neurons. Most of your cells have a copy of the entire human genome, but only certain elements are expressed in a particular type of cell, and that pattern persists when that kind of cell divides. We understand, to a degree, some of the chemical changes that cause these lasting changes in gene expression patterns. One is methylation, a method of suppressing gene activity. It involves attaching a methyl group to a cytosine base. This methylation pattern is copied when somatic cells divide.
The question is whether A. such changes can persist into the next generation and B. if they do, is this some sort of adaptive process, rather than an occasional screwup? We’re interested in whether this happens in humans, so we’ll only consider mammals.
It’s rare, but sometimes it happens. It has only been found to happen at a few sites in the genome, and when it does happen, only a fraction of the offspring are affected. Probably the best known example is the agouti yellow allele in mice. Mice that carry this allele are fat, yellow, and prone to cancer and diabetes – some of them. Yellow mothers tend to have yellow babies, while genetically identical brown mothers mostly have brown babies. The agouti yellow allele is the product of a recent insertion in the genome, about 50 years ago. For the overwhelming majority of genes, the epigenetic markers are reset in a new embryo, which means that epigenetic changes induced by the parent’s experiences disappear. The embryo is back at square one. This agouti yellow allele is screwed up – somehow the reset isn’t happening correctly.
In mice, the mammalian species in which most such investigations have been done, the few other locations in the genome where anything like this happens are mainly retroposons and other repeated elements.
There is another way that you can get transmission across generations without genetic change. Rats that are nurtured by stressed mothers are more likely to be stressed. This isn’t transmitted perfectly, but it happens. Presumably the uterine environment, or maybe maternal behavior, is different in stressed mice in a way that stresses their offspring. This reminds me of a science fiction story that abused this principle. The idea was that alligators (or maybe it was crocodiles) almost have a four-chambered heart, which is generally associated with higher metabolism and friskiness. Our protagonist operates on an alligator and soups up its heart: the now-more-vigorous animal has better blood circulation and lays healthier eggs that develop into babies that also have a working four-chambered heart. So ‘normal’ alligators were like stressed mice: fix the problem and you get to see what they’re really capable of. The problem was that the most interesting consequence was growing wings, flying around and eating people. Alligators turned out to be stunted dragons. Not so good.
Anyhow, what reason is there to believe that reading Gradshteyn and Ryzhik until your eyes bleed will plant the seeds of math to come in your descendants? None. Oh, I can come up with a scenario, if you want: but it requires that civilization (in particular, the key part of civilization, heavy use of weird definite and indefinite integrals and vast reproductive rewards for those skilled in such things) has risen and fallen over and over again at fairly short (but irregular) intervals, so that humans have faced this adaptive problem over and over and over again. A little like the way in which generations of aphids do different things in the summer (parthenogenesis) than in the late fall (sexual reproduction) – although that probably depends on direct cues like length of day rather than epigenetic changes. Something like Motie history, maybe. But I don’t believe it. Not even a little bit.
Nature hasn’t even figured out how to have Jewish boys be born circumcised yet.
So why are people talking about this? Why do people like Tyler Cowen invoke it to ward off evil facts?
Because they’re chuckleheads, what else?
I think we can be a bit more specific than that, so let's take it as an exercise. Motivated cognition, for starters.
If you want to learn why Conan the Barbarian was generated by better priors than modern history books, what the blind idiot god may have in store for you, or how to solve thick problems, check out other articles from the blog, shared under the tag: westhunter
Fifteen years ago John Horgan wrote The End Of Science: Facing The Limits Of Knowledge In The Twilight Of The Scientific Age. I remain skeptical as to the specific details of this book, but Carl’s write-up in The New York Times of a new paper in PNAS on the relative commonness of scientific misconduct in cases of retraction makes me mull over the genuine possibility of the end of science as we know it. This sounds ridiculous on the face of it, but you have to understand my model of and framework for what science is. In short: science is people. I accept the reality that science existed in some form among strands of pre-Socratic thought, or among late antique and medieval Muslims and Christians (not to mention among some Chinese as well). Additionally, I can accept the cognitive model whereby science and scientific curiosity is rooted in our psychology in a very deep sense, so that even small children engage in theory-building.
That is all well and good. The basic building blocks for many inventions and institutions existed long before their instantiation. But nevertheless the creation of institutions and inventions at a given moment is deeply contingent. Between 1600 and 1800 the culture of science as we know it emerged in the West. In the 19th and 20th centuries this culture became professionalized, but despite the explicit institutions and formal titles it is bound together by a common set of norms, an ethos if you will. Scientists work long hours for modest remuneration for the vain hope that they will grasp onto one fragment of reality, and pull it out of the darkness and declare to all, “behold!” That’s a rather flowery way of putting the reality that the game is about fun & fame. Most will not gain fame, but hopefully the fun will continue. Even if others may find one’s interests abstruse or esoteric, it is a special thing to be paid to reflect upon and explore what one is interested in.
Obviously this is an idealization. Science is a highly social and political enterprise, and injustice does occur. Merit and effort are not always rewarded, and on occasion machination truly pays. But overall the culture and enterprise muddle along, and are better at yielding a sense of reality as it is than their competitors. And yet all great things can end, and free-riders can destroy a system. If your rivals and competitors cheat and get ahead, what's to stop you but your own conscience? People will flinch from violating norms initially, even if those actions are in their own self-interest, but eventually they will break. And once they break, the norms have shifted, and once a few break, the rest will follow. This is the logic which drives a vicious positive feedback loop, and individuals in their rational self-interest begin to cannibalize the components of the institutions which ideally would allow all to flourish. No one wants to be the last one in a collapsing building, the sucker who asserts that the structure will hold despite all evidence to the contrary.
Deluded as most graduate students are, they by and large are driven by an ideal. Once the ideal, the illusion, is ripped apart, and eaten away from within, one can't rebuild it in a day. Trust evolves and accumulates organically. One cannot will it into existence. Centuries of capital are at stake, and it would be best to learn the lessons of history. We may declare that history has ended, but we can't unilaterally abolish eternal laws.
Link to original post.
This post is a followup report to this.
At Friday's lecture, I was able to briefly cover several topics as an introduction. They centred around rationality (what it is), truth (what it is and why we should pursue it), and Newcomb's Paradox.
The turnout was as expected (6 out of a total of 7 group members, with 1 having other obligations that day). Throughout the talk I would ask for some proposed definitions before giving them. Unfortunately, when I asked what "truth" is, I got mysterious answers such as "truth is the meaning of life" and "truth is the pursuit of truth". When asked what they meant by their answers, the members either rephrased what they had said with the same vagueness or were unable to give an answer. One member, however, did say that "truth is what is real", only to have other members ask what he meant by "real". It offered a rather nice opportunity for a map-and-territory tangent before giving some version of "The Simple Truth".
I used the definitions given in 'What Do We Mean By "Rationality"?' to describe epistemic and instrumental rationality, and gave several examples of what rationality is not (Mr. Spock, logic/reason, etc.). As practice, I introduced Newcomb's Paradox. There was ample debate, with an even split between one-boxers and two-boxers. Due to time constraints, we weren't able to come to a conclusion (although the one-boxing side was making the stronger argument). By the end of the lunch period, everyone seemed to have a good grasp that rationality is simply making the best decision to achieve one's goals, whatever they may be.
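For anyone wanting a concrete handle on why the one-boxers had the edge, here is a minimal expected-value sketch (my own illustration, using the usual $1,000,000/$1,000 payoffs, for a predictor that is right with probability p):

```python
# Newcomb's problem, naive expected-value comparison.
# Opaque box: $1,000,000 if the predictor predicted one-boxing, else $0.
# Transparent box: always $1,000.
def expected_value(p, one_box):
    if one_box:
        # Predictor right -> opaque box is full; wrong -> it's empty.
        return p * 1_000_000
    else:
        # Predictor right -> opaque box is empty; wrong -> it's full.
        return p * 1_000 + (1 - p) * (1_000_000 + 1_000)

for p in (0.5, 0.9, 0.99):
    print(p, expected_value(p, one_box=True), expected_value(p, one_box=False))
# One-boxing wins once the predictor is right more than ~50.05% of the
# time -- though whether this naive calculation is even the right one
# is exactly what the one-box/two-box dispute is about.
```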
Overall, I'd say it was successful. My next turn is on October 3rd, and apart from a little review, I'm going to go over the 5-second level, and use of words. Saying what they mean is something we as a group need to work on.
The researchers showed monkeys specific images and then trained them to select those images out of a larger set after a time delay. They recorded the monkeys' brain function to determine which signals were important. The experiment tests the monkey's performance on this task in different cases, as described by io9:
Once they were satisfied that the correct mapping had been done, they administered cocaine to the monkeys to impair their performance on the match-to-sample task (seems like a rather severe drug to administer, but there you have it). Immediately, the monkeys' performance fell by a factor of 20%.
It was at this point that the researchers engaged the neural device. Specifically, they deployed a "multi-input multi-output nonlinear" (MIMO) model to stimulate the neurons that the monkeys needed to complete the task. The inputs of this device monitored such things as blood flow, temperature, and the electrical activity of other neurons, while the outputs triggered the individual neurons required for decision making. Taken together, the i/o model was able to predict the output of the cortical neurons — and in turn deliver electrical stimulation to the right neurons at the right time.
And incredibly, it worked. The researchers successfully restored the monkeys' decision-making skills even though they were still dealing with the effects of the cocaine. Moreover, when duplicating the experiment under normal conditions, the monkeys' performance improved beyond the 75% proficiency level shown earlier. In other words, a kind of cognitive enhancement had happened.
This research is a remarkable followup to research that was done in rodents last year.
Just recently at my high school, a group of classmates and I started a science club. A major component of this is listening and giving peer lectures on topics of physics, math, computer science, etc. I picked a topic a bit off to the side: philosophy and decision making. Naturally, this includes rationality. My plan is to start with something based off the sequences, specifically "How to Actually Change Your Mind" and "A Human's Guide to Words".
I was hoping the Less Wrong community could give me some suggestions, tips, or even alternative ways to approach this. There is no end goal, we just want to learn more and think better. All our members are among the top 5% academically of their own grade. Most of us are seniors and have finished high school math, taking AP Calculus this year. We have covered basic statistics and Bayes' Theorem, but only applied it to the Disease Problem.
Any help or ideas are appreciated.
Update: Thank you for all these suggestions! They are incredibly helpful for me. I will attempt to make a recording of the lecture period if possible. I will make another discussion post sometime next weekend (the first lecture is next Friday) to report how it went.
Update 2: Report here.
Summary: The Greeks likely rejected a heliocentric theory because it would conflict with the lack of any visible stellar parallax, not for egotistical, common-sense, or aesthetic reasons.
I had always heard that the Greeks embraced a geocentric universe for common-sense, aesthetic reasons - not scientific ones. But it seems as if the real story is more complicated than that:
Now this is the kicker in your Popperian dirtsack. The Greeks had the right theory (heliocentric solar system) but discarded it on the basis of experimental evidence! Never preach to me about progress-in-science when all you’ve heard is a one-liner about Popper and the communal acceptance of general relativity. Especially don’t follow it up by saying that science marches toward the Truth whilst religion thwarts its progress. According to Astronomer Lisa, it’s not true that the Greeks simply thought they and their Gods were at the centre of the Universe because they were egotistical. They reasoned to the geocentric conclusion based on quantitative evidence. How? They measured parallax. (Difference in stellar appearance from spring to fall, when we’re on opposite sides of the Sun.) Given the insensitivity of their measurement tools at the time, the stars didn’t change positions at all when the Earth moved to the other side of the Sun. Based on that, they rejected the heliocentric hypothesis. If the Earth actually did move around the Sun, then the stars would logically have to appear different from one time to another. But they remain ever fixed in the same place in the Heavens, therefore the Earth must be still (geocentric).
I dug a little bit deeper, and this seems to be more or less accurate. From The Greek Heliocentric Theory and its Abandonment:
This paper then examines possible reasons for the Greek abandonment of the heliocentric theory and concludes that there is no reason to deplore its abandonment. In developing the heliocentric theory the Greeks had run the gamut of theorizing. We are indebted to the Alexandrians and Hipparchus for turning away from speculation to take up the recording of precise astronomical data. Here was laid the foundation upon which modern astronomy was built.
Let us now suppose that Aristarchus’ theory was widely circulated and that it was given careful consideration by leading astronomers. There is one objection that immediately arises when the earth is put in motion, the very difficulty which must have disquieted Copernicus and which caused Tycho Brahe shortly afterwards to renounce Copernicus’ heliocentric system and to put the earth again at rest. (Tycho reverted to a system first suggested by some ancient Greek, who made the planets revolve about the sun and the sun about the earth.) The difficulty is this. As soon as the earth is set in motion in an annual revolution about the sun, the distance between any two of the earth’s positions that are six months apart will be twice as great as the earth’s distance from the sun. Over such vast distances some displacement in the positions of the stars ought to be observed. The more accurate the astronomical instruments and the greater the estimated distance of the sun, the more reason should there be to expect stellar displacement. Now it so happened that Aristarchus reached his conclusions at the very time when interest was keen at Alexandria and elsewhere in the Greek world in accurate observations and when marked improvements were being made in precision instruments. To appreciate these developments we need only recall the careful stellar catalogues of Aristyllus and Timocharis early in the third century B.C., the work of the latter enabling Hipparchus to discover the precession of the equinoxes, and the armillary sphere of Eratosthenes by which he was able to determine the obliquity of the ecliptic and the circumference of the earth. Hipparchus continued to make improvements in the next century. He, as we shall see, had a much better appreciation of the sun’s great distance than Copernicus. Of course it was impossible to observe stellar displacement without the aid of a telescope. Inability to observe it left astronomers with only two alternatives: either the stars were so remote that it was impossible to detect displacement, or the earth would have to remain at rest.
..Heath was of the opinion that Hipparchus was responsible for the death of Aristarchus’ theory, that the adherence of so preeminent an astronomer to a geocentric orientation sealed the doom of the heliocentric theory. This is a reasonable conjecture. Hipparchus was noted for his careful observations, his stellar catalogues, and the remarkable precision of his recordings of solar and lunar motions. According to Ptolemy he was devoted to truth above all else and because he did not possess sufficient data, he refused to attempt to account for planetary motions as he had for those of the sun and moon. His discovery of the precession of the equinoxes attests to the keenness of his observations. He came much closer to appreciating the vast distance of the sun than Copernicus did.
..We do not know whether or not Hipparchus ever seriously entertained Aristarchus’ views about the earth’s motions, but from what we have seen of his cautious and accurate methods, it is likely that he would have quickly rejected the heliocentric theory in the absence of visible stellar displacement.
Aristarchus was successful in explaining variations in brilliance and reverse courses of the planets, but planetary motions are far more complicated than that. Kepler was the first to realize that the planets do not describe circular orbits, but rather ellipses, and that the sun is not in the middle of these orbits but in the foci of the ellipses. That something was wrong might have been suspected as early as 330 B.C., for Callippus noticed that the seasons were not of the same length. He estimated their lengths between solstices and equinoxes to be 94, 92, 89, and 90 days- figures that are very nearly correct. Or to show the irregularities that might result from combining the eccentricities of the orbits of two planets, in some years Mars and the earth at closest approximation are 36 million miles apart and in other years (as in 1948) may be 63 million miles apart at their nearest approach. Now the Alexandrians were pointing their precision sights at the planets and must have been disturbed by these peculiarities. Furthermore they would have been less kindly disposed towards Aristarchus’ explanation of the absence of visible stellar parallax by placing the stars at an almost infinite distance away because they had a better appreciation of the sun’s vast distance and consequently would have stronger reason to expect to find parallax. It would seem that the more precise the instruments, the less likelihood there would be of the earth’s being in motion.
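Plugging in modern numbers (which the Greeks of course did not have) shows how hopeless the measurement was before telescopes. A quick sanity check in Python, assuming the nearest star system and a naked-eye resolution of roughly one arcminute:

```python
# Annual parallax: the apparent shift of a star as the Earth moves from
# one side of its orbit to the other. By definition, a star at 1 parsec
# shows a parallax of 1 arcsecond over a 1 AU baseline, so in arcseconds
# the parallax is simply 1 / distance-in-parsecs.
def parallax_arcsec(distance_parsecs):
    return 1.0 / distance_parsecs

alpha_centauri_pc = 1.34   # nearest star system, modern value
naked_eye_arcsec = 60.0    # ~1 arcminute, a generous estimate of eye resolution

p = parallax_arcsec(alpha_centauri_pc)
print(p)                         # ~0.75 arcseconds
print(naked_eye_arcsec / p)      # the eye is ~80x too coarse to detect it
```

So even the largest stellar parallax in the sky is nearly two orders of magnitude below naked-eye resolution; rejecting heliocentrism on those grounds was, given the instruments available, sound empiricism.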
As I discussed in the above posts, the Journal of Personality and Social Psychology, a leading psych journal, published a deeply flawed parapsychology study (see the second post for details) which had apparently been tortured to produce results. Then they rejected an attempt to replicate that found no effect, citing a sadly typical policy of not publishing replications. Some of you may enjoy reading one enterprising researcher's amusing satire article, purportedly (not actually) "tallying" past confirmations and disconfirmations in JPSP and drawing conclusions.
ETA: To clarify the last sentence, they didn't really find 4800+ confirmations and two disconfirmations. As they say in small print, the data were made up. It's right by the chart.
Updated Version @ LW Wiki: wiki.lesswrong.com/wiki/Programming_resources
The book intends to show a layman the basic mechanical principles of how computers work, instead of merely summarizing how the different parts relate. He starts with basic principles of language and logic and then demonstrates how they can be embodied by electrical circuits, and these principles give him an opening to describe in principle how computers work mechanically without requiring very much technical knowledge. Although a medium-sized book for laymen cannot give a complete technical description of a computer, he describes how and why it is possible that elaborate electronics can act in the ways computers do. In the introduction, he contrasts his own work with those books which "include pictures of trains full of 1s and 0s."
Indeed, the best way to understand how computers work is to build one from scratch, and this textbook leads students through twelve chapters and projects that gradually build a basic hardware platform and a modern software hierarchy from the ground up. In the process, the students gain hands-on knowledge of hardware architecture, operating systems, programming languages, compilers, data structures, algorithms, and software engineering. Using this constructive approach, the book exposes a significant body of computer science knowledge and demonstrates how theoretical and applied techniques taught in other courses fit into the overall picture.
Write Great Code Volume I: Understanding the Machine
This, the first of four volumes, teaches important concepts of machine organization in a language-independent fashion, giving programmers what they need to know to write great code in any language, without the usual overhead of learning assembly language to master this topic. The Write Great Code series will help programmers make wiser choices with respect to programming statements and data types when writing software.
Write Great Code Volume II: Thinking Low-Level, Writing High-Level
...a good question to ask might be "Is there some way to write high-level language code to help the compiler produce high-quality machine code?" The answer to this question is "yes" and Write Great Code, Volume II, will teach you how to write such high-level code. This volume in the Write Great Code series describes how compilers translate statements into machine code so that you can choose appropriate high-level programming language statements to produce executable code that is almost as good as hand-optimized assembly code.
Assembly is a low-level programming language that's one step above a computer's native machine language. Although assembly language is commonly used for writing device drivers, emulators, and video games, many programmers find its somewhat unfriendly syntax intimidating to learn and use.
Since 1996, Randall Hyde's The Art of Assembly Language has provided a comprehensive, plain-English, and patient introduction to assembly for non-assembly programmers. Hyde's primary teaching tool, High Level Assembler (or HLA), incorporates many of the features found in high-level languages (like C, C++, and Java) to help you quickly grasp basic assembly concepts. HLA lets you write true low-level code while enjoying the benefits of high-level language programming.
This work is not about computer programming in the narrow sense, but about the algorithms and methods which lie at the heart of most computer systems.
At the end of 1999, these books were named among the best twelve physical-science monographs of the century by American Scientist, along with: Dirac on quantum mechanics, Einstein on relativity, Mandelbrot on fractals, Pauling on the chemical bond, Russell and Whitehead on foundations of mathematics, von Neumann and Morgenstern on game theory, Wiener on cybernetics, Woodward and Hoffmann on orbital symmetry, Feynman on quantum electrodynamics, Smith on the search for structure, and Einstein's collected papers.
Ruby, Io, Prolog, Scala, Erlang, Clojure, Haskell. With Seven Languages in Seven Weeks, by Bruce A. Tate, you'll go beyond the syntax - and beyond the 20-minute tutorial you'll find someplace online. This book has an audacious goal: to present a meaningful exploration of seven languages within a single book. Rather than serve as a complete reference or installation guide, Seven Languages hits what's essential and unique about each language. Moreover, this approach will help teach you how to grok new languages.
For each language, you'll solve a nontrivial problem, using techniques that show off the language's most important features. As the book proceeds, you'll discover the strengths and weaknesses of the languages, while dissecting the process of learning languages quickly--for example, finding the typing and programming models, decision structures, and how you interact with them.
The ubiquity of computers in everyday life in the 21st century justifies the centrality of programming languages to computer science education. Programming languages is the area that connects the theoretical foundations of computer science, the source of problem-solving algorithms, to modern computer architectures on which the corresponding programs produce solutions. Given the speed with which computing technology advances in this post-Internet era, a computing textbook must present a structure for organizing information about a subject, not just the facts of the subject itself. In this book, Michael Scott broadly and comprehensively presents the key concepts of programming languages and their implementation, in a manner appropriate for computer science majors.
This well-respected text offers an accessible introduction to functional programming concepts and techniques for students of mathematics and computer science. The treatment is as nontechnical as possible, assuming no prior knowledge of mathematics or functional programming. Numerous exercises appear throughout the text, and all problems feature complete solutions.
4. How to Design Programs (An Introduction to Computing and Programming)
This introduction to programming places computer science in the core of a liberal arts education. Unlike other introductory books, it focuses on the program design process. This approach fosters a variety of skills--critical reading, analytical thinking, creative synthesis, and attention to detail--that are important for everyone, not just future computer programmers. The book exposes readers to two fundamentally new ideas. First, it presents program design guidelines that show the reader how to analyze a problem statement; how to formulate concise goals; how to make up examples; how to develop an outline of the solution, based on the analysis; how to finish the program; and how to test. Each step produces a well-defined intermediate product. Second, the book comes with a novel programming environment, the first one explicitly designed for beginners.
Using a dialect of the Lisp programming language known as Scheme, the book explains core computer science concepts, including abstraction, recursion, interpreters and metalinguistic abstraction, and teaches modular programming.
The program also introduces a practical implementation of the register machine concept, defining and developing an assembler for such a construct, which is used as a virtual machine for the implementation of interpreters and compilers in the book, and as a testbed for illustrating the implementation and effect of modifications to the evaluation mechanism. Working Scheme systems based on the design described in this book are quite common student projects.
Mathematician Alan Turing invented an imaginary computer known as the Turing Machine; in an age before computers, he explored the concept of what it meant to be computable, creating the field of computability theory in the process, a foundation of present-day computer programming.
The book expands Turing’s original 36-page paper with additional background chapters and extensive annotations; the author elaborates on and clarifies many of Turing’s statements, making the original difficult-to-read document accessible to present day programmers, computer science majors, math geeks, and others.
2. New Turing Omnibus (66 Excursions in Computer Science)
This text provides a broad introduction to the realm of computers. Updated and expanded, "The New Turing Omnibus" offers 66 concise articles on the major points of interest in computer science theory, technology and applications. New for this edition are: updated information on algorithms, detecting primes, noncomputable functions, and self-replicating computers - plus completely new sections on the Mandelbrot set, genetic algorithms, the Newton-Raphson Method, neural networks that learn, DOS systems for personal computers, and computer viruses.
Udacity is a private educational organization founded by Sebastian Thrun, David Stavens, and Mike Sokolsky, with the stated goal of democratizing education.
It is the outgrowth of free computer science classes offered in 2011 through Stanford University. As of May 2012 Udacity has six active courses.
The first two courses ever launched on Udacity both started on 20th February, 2012, entitled "CS 101: Building a Search Engine", taught by Dave Evans, from the University of Virginia, and "CS 373: Programming a Robotic Car" taught by Thrun. Both courses use Python.
A bold experiment in distributed education, "Introduction to Artificial Intelligence" will be offered free and online to students worldwide from October 10th to December 18th 2011. The course will include feedback on progress and a statement of accomplishment. Taught by Sebastian Thrun and Peter Norvig, the curriculum draws from that used in Stanford's introductory Artificial Intelligence course. The instructors will offer similar materials, assignments, and exams.
Artificial Intelligence is the science of making computer software that reasons about the world around it. Humanoid robots, Google Goggles, self-driving cars, even software that suggests music you might like to hear are all examples of AI. In this class, you will learn how to create this software from two of the leaders in the field. Class begins October 10.
This book introduces the mathematics that supports advanced computer programming and the analysis of algorithms. The primary aim of its well-known authors is to provide a solid and relevant base of mathematical skills - the skills needed to solve complex problems, to evaluate horrendous sums, and to discover subtle patterns in data. It is an indispensable text and reference not only for computer scientists - the authors themselves rely heavily on it! - but for serious users of mathematics in virtually every discipline.
The textbook Algorithms, 4th Edition by Robert Sedgewick and Kevin Wayne surveys the most important algorithms and data structures in use today.
Some books on algorithms are rigorous but incomplete; others cover masses of material but lack rigor. Introduction to Algorithms uniquely combines rigor and comprehensiveness. The book covers a broad range of algorithms in depth, yet makes their design and analysis accessible to all levels of readers. Each chapter is relatively self-contained and can be used as a unit of study. The algorithms are described in English and in a pseudocode designed to be readable by anyone who has done a little programming. The explanations have been kept elementary without sacrificing depth of coverage or mathematical rigor.
Project Euler is a series of challenging mathematical/computer programming problems that will require more than just mathematical insights to solve. Although mathematics will help you arrive at elegant and efficient methods, the use of a computer and programming skills will be required to solve most problems. (A small worked example follows this list.)
Python Challenge is a game in which each level can be solved by a bit of (Python) programming.
CodeChef is a global programming community. We host contests, trainings and events for programmers around the world. Our goal is to provide a platform for programmers everywhere to meet, compete, and have fun.
4. Write your own programs.
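As a taste of the Project Euler problems mentioned above, Problem 1 asks for the sum of all the multiples of 3 or 5 below 1000. In Python it's a one-liner:

```python
# Project Euler, Problem 1: sum of the multiples of 3 or 5 below 1000.
print(sum(n for n in range(1000) if n % 3 == 0 or n % 5 == 0))  # 233168
```

Later problems demand real algorithmic thought; brute force quickly stops being fast enough.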
An open-source Python Integrated Development Environment (IDE)
Introduction to programming and computer science (using Python)
“Invent Your Own Computer Games with Python” is a free book (as in, open source) and a free eBook (as in, no cost to download) that teaches you how to program in the Python programming language. Each chapter gives you the complete source code for a new game, and then teaches the programming concepts from the example.
“Invent with Python” was written to be understandable by kids as young as 10 to 12 years old, although it is great for anyone of any age who has never programmed before.
Have you always wanted to learn how to code but never thought you could? Are you looking to build a foundation for more complex coding? Do you want to challenge your brain in a new way? Then Learn Python the Hard Way is the book for you.
Think Python is an introduction to Python programming for beginners. It starts with basic concepts of programming, and is carefully designed to define all terms when they are first used and to develop each new concept in a logical progression. Larger pieces, like recursion and object-oriented programming, are divided into a sequence of smaller steps and introduced over the course of several chapters.
This book is suitable for use in a university-level first course in computing (CS1), as well as the increasingly popular course known as CS0. It is difficult for many students to master basic concepts in computer science and programming. A large portion of the confusion can be blamed on the complexity of the tools and materials that are traditionally used to teach CS1 and CS2. This textbook was written with a single overarching goal: to present the core concepts of computer science as simply as possible without being simplistic.
Computers are used in every part of science from ecology to particle physics. This introduction to computer science continually reinforces those ties by using real-world science problems as examples. Anyone who has taken a high school science class will be able to follow along as the book introduces the basics of programming, then goes on to show readers how to work with databases, download data from the web automatically, build graphical interfaces, and most importantly, how to think like a professional programmer.
The Quick Python Book, Second Edition, is a clear, concise introduction to Python 3, aimed at programmers new to Python. This updated edition includes all the changes in Python 3, itself a significant shift from earlier versions of Python.
The book begins with basic but useful programs that teach the core features of syntax, control flow, and data structures. It then moves to larger applications involving code management, object-oriented programming, web development, and converting code from earlier versions of Python.
The Haskell Platform is the easiest way to get started with programming Haskell. It comes with all you need to get up and running. Think of it as "Haskell: batteries included".
This page will help you get started as quickly as possible.
Haskell is one of the leading languages for teaching functional programming, enabling students to write simpler and cleaner code, and to learn how to structure and reason about programs. This introduction is ideal for beginners: it requires no previous programming experience and all concepts are explained from first principles via carefully chosen examples. Each chapter includes exercises that range from the straightforward to extended projects, plus suggestions for further reading on more advanced topics. The author is a leading Haskell researcher and instructor, well-known for his teaching skills. The presentation is clear and simple, and benefits from having been refined and class-tested over several years. The result is a text that can be used with courses, or for self-learning. Features include freely accessible PowerPoint slides for each chapter, solutions to exercises and examination questions (with solutions) available to instructors, and downloadable code that's fully compliant with the latest Haskell release.
Learn You a Haskell, the funkiest way to learn Haskell, which is the best functional programming language around. You may have heard of it. This guide is meant for people who have programmed already, but have yet to try functional programming.
This easy-to-use, fast-moving tutorial introduces you to functional programming with Haskell. You'll learn how to use Haskell in a variety of practical ways, from short scripts to large and demanding applications. Real World Haskell takes you through the basics of functional programming at a brisk pace, and then helps you increase your understanding of Haskell in real-world issues like I/O, performance, dealing with data, concurrency, and more as you move through each chapter.
The textbook by Doets and van Eijck puts the Haskell programming language systematically to work for presenting a major piece of logic and mathematics. The reader is taken through chapters on basic logic, proof recipes, sets and lists, relations and functions, recursion and co-recursion, the number systems, polynomials and power series, ending with Cantor's infinities. The book uses Haskell for the executable and strongly typed manifestation of various mathematical notions at the level of declarative programming. The book adopts a systematic but relaxed mathematical style (definition, example, exercise, ...); the text is very pleasant to read due to a small amount of anecdotal information, and due to the fact that definitions are fluently integrated in the running text. An important goal of the book is to get the reader acquainted with reasoning about programs.
Lisp has been hailed as the world's most powerful programming language, but its cryptic syntax and academic reputation can be enough to scare off even experienced programmers. Those dark days are finally over—Land of Lisp brings the power of functional programming to the people!
With his brilliantly quirky comics and out-of-this-world games, longtime Lisper Conrad Barski teaches you the mysteries of Common Lisp. You'll start with the basics, like list manipulation, I/O, and recursion, then move on to more complex topics like macros, higher order programming, and domain-specific languages. Then, when your brain overheats, you can kick back with an action-packed comic book interlude!
Practical Common Lisp presents a thorough introduction to Common Lisp, providing you with an overall understanding of the language features and how they work. Over a third of the book is devoted to practical examples such as the core of a spam filter and a web application for browsing MP3s and streaming them via the Shoutcast protocol to any standard MP3 client software (e.g., iTunes, XMMS, or WinAmp). In other "practical" chapters, author Peter Seibel demonstrates how to build a simple but flexible in-memory database, how to parse binary files, and how to build a unit test framework in 26 lines of code.
Teaching users new and more powerful ways of thinking about programs, this two-in-one text contains a tutorial—full of examples—that explains all the essential concepts of Lisp programming, plus an up-to-date summary of ANSI Common Lisp, listing every operator in the language. Informative and fun, it gives users everything they need to start writing programs in Lisp both efficiently and effectively, and highlights such innovative Lisp features as automatic memory management, manifest typing, closures, and more. Dividing material into two parts, the tutorial half of the book covers subject-by-subject the essential core of Common Lisp, and sums up lessons of preceding chapters in two examples of real applications: a backward-chainer, and an embedded language for object-oriented programming. Consisting of three appendices, the summary half of the book gives source code for a selection of widely used Common Lisp operators, with definitions that offer a comprehensive explanation of the language and provide a rich source of real examples; summarizes some differences between ANSI Common Lisp and Common Lisp as it was originally defined in 1984; and contains a concise description of every function, macro, and special operator in ANSI Common Lisp. The book concludes with a section of notes containing clarifications, references, and additional code.
Paradigms of AI Programming is the first text to teach advanced Common Lisp techniques in the context of building major AI systems. By reconstructing authentic, complex AI programs using state-of-the-art Common Lisp, the book teaches students and professionals how to build and debug robust practical programs, while demonstrating superior programming style and important AI concepts. The author strongly emphasizes the practical performance issues involved in writing real working programs of significant size. Chapters on troubleshooting and efficiency are included, along with a discussion of the fundamentals of object-oriented programming and a description of the main CLOS functions. This volume is an excellent text for a course on AI programming, a useful supplement for general AI courses and an indispensable reference for the professional programmer.
Let Over Lambda is one of the most hardcore computer programming books out there. Starting with the fundamentals, it describes the most advanced features of the most advanced language: COMMON LISP. The point of this book is to expose you to ideas that you might otherwise never be exposed to.
These are Maxwell’s equations. Just four compact equations. With a little work it’s easy to understand the basic elements of the equations – what all the symbols mean, how we can compute all the relevant quantities, and so on. But while it’s easy to understand the elements of the equations, understanding all their consequences is another matter. Inside these equations is all of electromagnetism – everything from antennas to motors to circuits. If you think you understand the consequences of these four equations, then you may leave the room now, and you can come back and ace the exam at the end of semester.
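The four equations themselves don't survive in this excerpt (presumably they were displayed as an image in the original); for reference, the standard differential form in SI units is:

$$\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}, \qquad \nabla \cdot \mathbf{B} = 0,$$

$$\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad \nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}.$$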
RStudio™ is a free and open source integrated development environment (IDE) for R. You can run it on your desktop (Windows, Mac, or Linux) or even over the web using RStudio Server.
1. R Videos
2. R Tutorials
Here is a list of FREE R tutorials hosted on the official websites of universities around the world.
Here you will find daily news and tutorials about R, contributed by over 300 bloggers.
R is the world's most popular language for developing statistical software: Archaeologists use it to track the spread of ancient civilizations, drug companies use it to discover which medications are safe and effective, and actuaries use it to assess financial risks and keep economies running smoothly.
The Art of R Programming takes you on a guided tour of software development with R, from basic types and data structures to advanced topics like closures, recursion, and anonymous functions. No statistical knowledge is required, and your programming skills can range from hobbyist to pro.
Along the way, you'll learn about functional and object-oriented programming, running mathematical simulations, and rearranging complex data into simpler, more useful formats.
The target audience for this book is college students who are required to learn statistics, students with little background in mathematics and often no motivation to learn more.
There is an explosion of interest in Bayesian statistics, primarily because recently created computational methods have finally made Bayesian analysis accessible to a wide audience. Doing Bayesian Data Analysis, A Tutorial Introduction with R and BUGS provides an accessible approach to Bayesian data analysis, as material is explained clearly with concrete examples. The book begins with the basics, including essential concepts of probability and random sampling, and gradually progresses to advanced hierarchical modeling methods for realistic data. The text delivers comprehensive coverage of all scenarios addressed by non-Bayesian textbooks--t-tests, analysis of variance (ANOVA) and comparisons in ANOVA, multiple regression, and chi-square (contingency table analysis).
This book is intended for first year graduate students or advanced undergraduates. It provides a bridge between undergraduate training and modern Bayesian methods for data analysis, which is becoming the accepted research standard. Prerequisite is knowledge of algebra and basic calculus. Free software now includes programs in JAGS, which runs on Macintosh, Linux, and Windows.
ciphergoth just asked what the actual value of Quantified Self/self-experimentation is. This finally tempted me into running value of information calculations on my own experiments. It took me all afternoon, because it turned out I didn’t actually understand how to do it, and I had a hard time figuring out the right values for specific experiments. (I may still not have gotten it right. Feel free to check my work!) Then it turned out to be too long for a comment, and as usual the master versions will be on my website at some point. But without further ado!
In coming decades, engineers will build an entity with intelligence on a level which can compete with humans. This entity will want to improve its own intelligence, and will be able to do so. The process of improvement will repeat, until it reaches a level far above that of humans; the entity will then be able to achieve its goals efficiently. It is thus essential that its goals are good for humanity. To guarantee this, it is necessary to define the correct goals before this intelligence is built.
I bought my niece a Kindle, which just arrived, and I'm about to load it up with books before giving it to her tomorrow for her birthday. I've decided to be a sneaky uncle and include good books that can teach better ways of thinking, or at least get her to consider science cool and interesting. She is currently in the 4th grade, with 5th coming after the summer.
She reads basically at her own grade level so while I'm open to stuffing the Kindle with books to be read when she's ready, I'd like to focus on giving her books she can read now. Ender's Game will be on there most likely. Game of Thrones will not.
What books would you give a youngling? Her interests currently trend toward the young mystery section, Hardy Boys and the like, but in my experience she is very open to trying new books with particular interest in YA fantasy but not much interest in Sci Fi (if I'm doing any other optimizing this year, I'll try to change her opinion on Sci Fi).
See previously “A good volunteer is hard to find”
Back in February 2012, lukeprog announced that SIAI was hiring more part-time remote researchers, and you could apply just by demonstrating your chops on a simple test: review the psychology literature on habit formation with an eye towards practical application. What factors strengthen new habits? How long do they take to harden? And so on. I was assigned to read through and rate the submissions, and Luke could then look at them individually to decide whom to hire. We didn’t get as many submissions as we were hoping for, so in April Luke posted again, this time with a quicker, easier application form. (I don’t know how that has been working out.)
But in February, I remembered the linked post above from GiveWell, where they mentioned that many would-be volunteers did not even finish the test task. I did finish it, and I didn’t find it that bad; it was actually a kind of interesting exercise in critical thinking and being careful. People suggested that perhaps the attrition was due not to low volunteer quality, but to the feeling that the volunteers were not appreciated and were doing useless makework. (The same reason so many kids hate school…) But how to test this?
The plan is to replicate or fail to replicate all 2008 articles from three major Psychology journals.
ETA: http://openscienceframework.org/ is the homepage of the group behind this. It's still in Beta, but will eventually include some nifty looking science toolkits in addition to the reproducibility project.
The following is intended as 1) request for specific criticisms regarding the value of time investment on this project, and 2) pending favorable answer to this, a request for further involvement from qualified individuals. It is not intended as a random piece of interesting pop-sci, despite the subject matter, but as a volunteer opportunity.
Server Sky is an engineering proposal to place thousands (eventually millions) of micron-thin satellites into medium Earth orbit in the near term. It is being put forth by Keith Lofstrom, the inventor of the Launch Loop.
Abstract from the 2009 paper:
It is easier to move bits than atoms or energy. Server-sats are ultralight disks of silicon that convert sunlight into computation and communications. Powered by a large solar cell, propelled and steered by light pressure, networked and located by microwaves, and cooled by black-body radiation. Arrays of thousands of server-sats form highly redundant computation and database servers, as well as phased array antennas to reach thousands of transceivers on the ground.
First generation server-sats are 20 centimeters across ( about 8 inches ), 0.1 millimeters (100 microns) thick, and weigh 7 grams. They can be mass produced with off-the-shelf semiconductor technologies. Gallium arsenide radio chips provide intra-array, inter-array, and ground communication, as well as precise location information. Server-sats are launched stacked by the thousands in solid cylinders, shrouded and vibration-isolated inside a traditional satellite bus.
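As a quick plausibility check (my arithmetic, not from the paper), the quoted mass is consistent with a plain silicon disk of those dimensions:

```python
# Sanity check: does a 20 cm silicon disk, 0.1 mm thick, weigh ~7 g?
# The density of silicon is the only input I've added; real thinsats
# would also include glass and wiring, shifting the figure somewhat.
import math

radius_m = 0.10          # 20 cm across
thickness_m = 100e-6     # 0.1 mm = 100 microns
rho_silicon = 2330       # kg/m^3

mass_kg = math.pi * radius_m**2 * thickness_m * rho_silicon
print(f"mass ≈ {mass_kg * 1000:.1f} g")   # ≈ 7.3 g, matching the abstract
```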
Some mildly negative evidence to start with: I have already had a satellite scientist tell me that this seems unlikely to work. Avoiding space debris and Kessler Syndrome, radio communications difficulties (especially uplink), and the need for precise synchronization are the obstacles he stressed as significant. He did not seem to have studied the proposal closely, but this at least tells us to be careful where to set our priors.
On the other hand, it appears Keith has given these problems a lot of thought already, and solutions can probably be worked out. The thinsats would have optical thrusters (small solar sails) and would thus be able to move themselves and each other around; defective ones could be collected for disposal without mounting an expensive retrieval mission, and the thrusters would also help avoid collisions in the first place. Furthermore, the zone chosen (the m288 orbit) is relatively unused, so collisions with other satellites are unlikely. Also, the satellites have powerful radar capabilities, which should make it easier to detect and eliminate space junk.
For the communications problem, the idea is to use three dimensional phased arrays of thinsats -- basically a bunch of satellites in a large block working in unison to generate a specific signal, behaving as if they were a much larger antenna. This is tricky and requires precision timing and exact distance information. The array's physical configuration will need to be randomized (or perhaps arranged according to an optimized pattern) in order to prevent grating lobes, a problem with interference patterns that is common with phased arrays. They would link with GPS and each other by radio on multiple bands to achieve "micron-precision thinsat location and orientation within the array".
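A minimal numerical sketch of the idea (my own illustration, not Server Sky code; the 10 GHz link, element count, and spacing are assumptions): steer a sparse one-dimensional array by phasing each element, then compare regular spacing, which produces full-strength grating lobes, with randomized positions, which smear them into low sidelobes:

```python
# Array factor of a 1-D phased array: regular vs. randomized spacing.
import numpy as np

wavelength = 0.03            # ~10 GHz microwave link, metres (assumed)
k = 2 * np.pi / wavelength
n_elements = 64
spacing = 4 * wavelength     # sparse, like physically separated thinsats

regular = np.arange(n_elements) * spacing
rng = np.random.default_rng(0)
randomized = regular + rng.uniform(-spacing / 2, spacing / 2, n_elements)

steer = np.deg2rad(10.0)     # desired beam direction off broadside
angles = np.deg2rad(np.linspace(-90, 90, 2001))

def array_factor(positions):
    # Phase each element so its contribution adds coherently toward
    # `steer`, then sum the far-field response over observation angles.
    phases = -k * positions * np.sin(steer)
    response = np.exp(1j * (k * np.outer(np.sin(angles), positions) + phases))
    return np.abs(response.sum(axis=1)) / len(positions)

for name, pos in [("regular", regular), ("randomized", randomized)]:
    af = array_factor(pos)
    # Regular spacing shows several lobes near full strength (1.0);
    # randomized spacing leaves one main lobe and weak sidelobes.
    print(name, "strongest lobes:", np.round(np.sort(af)[-5:], 2))
```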
According to the wiki, the most likely technical show-stopper (which makes sense given the fact that m288 is outside of the inner Van Allen belt) is radiation damage. Proposed fixes include periodic annealing (heating the circuit with a heating element) to repair the damage, and the use of radiation-resistant materials for circuitry.
Has anyone else here researched this idea, or have relevant knowledge? It seems like a great potential source of computing power for AI research, mind uploads, and so forth, but also for all those mundane, highly lucrative near term demands like web hosting and distributed business infrastructures.
From an altruistic standpoint, this kind of system could reduce poverty and increase equitable distribution of computing resources. It could also make solving hard scientific problems like aging and cryopreservation easier, and pave the road to solar power satellites. As it scales, it should also create demand (as well as available funding and processing power) for Launch Loop construction, or some other similarly low-cost form of space travel.
Value of information as to whether it can work or not therefore appears to be extremely high, something I think is crucial for a rationalist project. If it can work, the value of taking productive action (leadership, getting it funded, working out the problems, etc.) should be correspondingly high as well.
Update: Keith Lofstrom has responded on the wiki to the questions raised by the satellite scientist.
Note: Not all aspects of the project have complete descriptions yet, but there are answers to a lot of questions in the wiki.
Here is a summary list of questions raised and answers so far:
- How does this account for Moore's Law? (kilobug)
In his reply to the comments on Brin's post, Keith Lofstrom mentions using obsolete sats as ballast for much thinner sats that would be added to the arrays as the manufacturing process improves. Obsolete sats would not stay in use for long.
- What about ping time limits? (kilobug)
Ping times are going to be limited (70 ms or so), worse than the theoretical 42 ms you could get with a fat pipe, but still much better than GEO (250+ ms); see the back-of-envelope check after this list. This is bad for high-frequency trading, but fine for (parallelizable) number crunching and most other practical purposes.
- What kind of power consumption? Doesn't it cost more to launch than you save? (Vanvier)
It takes roughly 2 months for a 3 gram thinsat to pay for the launch energy if it gets 4 watts, assuming 32% fuel manufacturing efficiency. Blackbody cooling is another benefit.
- Bits being flipped by cosmic radiation is a problem even on Earth; how can it be solved in space? (Vanvier)
Flash memory is acknowledged to be the most radiation sensitive component of the satellite. The solution would involve extensive error correction software and caching on multiple satellites.
- Periodic annealing tends to cause short circuits. Wouldn't this result in very short lifetimes? (Vanvier)
Circuits will be manufactured as two dimensional planes, which don't short as easily. Another significant engineering challenge: Thermal properties in the glass will need to be matched with the silicon and wires (for example, slotted wiring with silicon dioxide between the gaps) to prevent circuit damage. Per Vanvier, it may be less expensive to replace silicon with other materials for this purpose.
- What are the specific advantages of putting servers in space? (ZankerH)
Efficient power/cooling, increased communications, overall scalability, relative lack of environmental impact.
Yet to be answered:
- Is the amount of speculative tech too high? E.g. if future kinds of RAM are needed, costs may be higher. (Vanvier)
- Is it easier to replace silicon with something else than find ways to make the rest of the sat match thermal expansion of silicon? (Vanvier)
- Can we get more data on economics/business plan? (Vanvier)
- Solar sails have been known to stick together. Is this a problem for thinsats, which are shipped stuck together? (Vanvier)
- Do most interesting processes bottleneck on communication efficiency? (skelterpot)
- What decreases in cost might we see with increased manufacturing yield? (skelterpot)
- Launch energy vs energy collection (answer above is more specific, but this was a commendable quick-check). (tgb)
- ECC RAM is standard technology used in server computers. (JoachimShipper)
- Fixing bit errors outside the memory (e.g. in CPU) is harder, something like Tandem Computers could be used, with added expense. (JoachimShipper)
- Some processor-heavy computing tasks, like calculating scrypt hashes, are not very parallelizable. (skelterpot)
- Other approaches like redundant hardware and error-checking within the CPU are possible, but they drive up the die area used. (skelterpot)
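As flagged in the ping-time answer above, here is a back-of-envelope check on the latency numbers (the m288 altitude of roughly 6,400 km is my assumption; the wiki has exact figures):

```python
# Minimum round-trip light time from the ground to a satellite directly
# overhead. Real ping times add routing and processing, hence ~70 ms.
C_KM_PER_S = 299_792

for name, altitude_km in [("m288 (assumed ~6,400 km)", 6_400),
                          ("GEO (35,786 km)", 35_786)]:
    rtt_ms = 2 * altitude_km / C_KM_PER_S * 1000
    print(f"{name}: minimum RTT ≈ {rtt_ms:.0f} ms")
# m288: ≈ 43 ms, consistent with the 42 ms figure; GEO: ≈ 239 ms.
```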
I'm an undergraduate studying molecular biology, and I am thinking of going into science. In Timothy Gowers's "The Importance of Mathematics", he says that many mathematicians just do whatever interests them, regardless of social benefit. I'd rather do something with some interest or technological benefit to people outside of a small group with a very specific education.
Does anybody have any thoughts or links on judging the impact of the work on a research topic?
Clearly, the pursuit of a research topic must be producing truth to be helpful, and I've read Vladimir_M's heuristics regarding this.
Here's something I've tried. My current lab work is on the structure of membrane proteins in bacteria, so I wanted to see where all this work on protein structure goes. I took a paper that I had found to be a very useful reference for my own work, about a protein that forms a pore in the bacterial membrane with a flexible loop, which experimented with the influence of this loop on the protein's structure. I used the Web of Science database to find a list of about two thousand papers that cited papers that cited this loop paper. I looked through this two-steps-away list for the ones that were not about molecules. Without too much effort, I found a few. The farthest from molecules that I found was a paper on a bacterium that sometimes causes meningitis, discussing a particular stage in its colonization of the human body. A few of the two-steps-away articles were about antibiotics discovery; though molecular, this is a topic that has a great deal of impact outside of the world of research on biomolecules.
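The mechanics of that search amount to a two-step walk over the citation graph. A toy sketch (the paper names are invented, since the real data lives in Web of Science):

```python
# paper -> set of papers that cite it (made-up toy data)
citers = {
    "loop_paper": {"A", "B"},
    "A": {"C", "D"},
    "B": {"D", "E"},
}

one_step = citers["loop_paper"]
two_steps = set().union(*(citers.get(p, set()) for p in one_step))
print("papers citing papers that cite the loop paper:",
      sorted(two_steps - one_step))
```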
Though it occurs to me that it might be more fruitful to look the other way around: to identify some social benefits or interests people have, and see what scientific research is contributing the most to them.
LessWrong is not big on discussion of non-AI existential risks. But Neil deGrasse Tyson notes killer asteroids not just as a generic problem, but as a specific one, naming Apophis as an imminent hazard.
So treat this as your exercise for today: what are the numbers, what is the risk, what are the costs, what actions are appropriate? Assume your answers need to work in the context of a society that's responded to the notion of anthropogenic climate change with almost nothing but blue vs. green politics.
Luke's recent post mentioned that The Lancet has a policy encouraging the advance registration of clinical trials, while mine examined an apparent case study of data-peeking and on-the-fly transformation of studies. But how much variation is there across journals on such dimensions? Are there journals that buck the standards of their fields (demanding registration, p=0.01 rather than p=0.05 where the latter is typical in the field, advance specification of statistical analyses and subject numbers, etc)? What are some of the standouts? Are there fields without any such?
I wonder if there is a niche for a new open-access journal, along the lines of PLoS, with standards strict enough to reliably exclude false-positives. Some possible titles:
- The Journal of Real Effects
- (Settled) Science
- Probably True
- Journal of Non-Null Results, Really
- Too Good to Be False
It struck me this morning that a key feature that distinguishes art from science is that art is studied in the context of the artist, while science is not. When you learn calculus, mechanics, or optics, you don't read Newton. Science has content that can be abstracted out of one context - including the context of its creation - and studied and used in other contexts. This is a defining characteristic. Whereas art can't be easily removed from its context - one could argue art is context. When we study art, we study the original work by a single artist, to get that artist's vision.
(This isn't a defining characteristic of art - it wasn't true until the twelfth century, when writers and artists began signing their works. In ancient Greece, through the Middle Ages in Europe, the content, subject, or purpose of art was considered primary, in the same way that the content of science is today. "Homer's" Iliad was a collaborative project, in which many authors (presumably) agreed that the story was the important thing, not one author's vision of it, and (also presumably) added to it in much the way that science is cumulative today. Medieval art generally glorified the church or the state.)
However, because this is the way western society views art today, we can use this as a test. Is it art or science? Well, is its teaching organized around the creators, or around the content?
Philosophy and linguistics are somewhere between art and science by this test. So is symbolic AI, while data mining is pure science.
Those of us who have found Peter Thiel's and Tyler Cowen's arguments for stagnation in our near future pretty convincing usually look only to the information and computer industries as what is keeping us afloat, and perhaps even what can continue to do so. On the excellent West Hunter blog (which he shares with Henry Harpending), Gregory Cochran speculates that there might be room for progress in a seemingly unlikely field.
In The Great Stagnation, Tyler Cowen discusses a real problem – a slowdown in technical innovation, with slow economic growth as a consequence. I think his perspective is limited, since he doesn’t know much about the inward nature of innovation. He is kind enough to make absolutely clear how little he knows by mentioning Tang and Teflon as spinoffs of the space program, which is of course wrong. It is unfair to emphasize this too strongly, since hardly anybody in public life knows jack shit about technology and invention. Try to think of a pundit with a patent.
Anyhow, it strikes me that a certain amount of knowledge may lead to useful insights. In particular, it may help us find low-hanging-fruit, technical innovations that are tasty and relatively easy – the sort of thing that seems obvious after someone thinks of it.
If we look at cases where an innovation or discovery was possible – even easy – for a long time before it was actually developed, we might be able to find patterns that would help us detect the low-hanging fruit dangling right in front of us today.
For now, one example. We know that gastric and duodenal ulcer, and most cases of stomach cancer, are caused by an infectious organism, helicobacter pylori. It apparently causes amnesia as well. This organism was first seen in 1875 – nobody paid any attention.
Letulle showed that it induced gastritis in guinea pigs, 1888. Walery Jaworski rediscovered it in 1889, and suspected that it might cause gastric disease. Nobody paid any attention. Krienitz associated it with gastric cancer in 1906. Who cares?
Around 1940, some American researchers rediscovered it, found it more common in ulcerated stomachs, and published their results. Some of them thought that this might be the cause of ulcers – but Palmer, a famous pathologist, couldn’t find it when he looked in the early 50s, so it officially disappeared again. He had used the wrong stain. John Lykoudis, a Greek country doctor, noticed that a heavy dose of antibiotics coincided with his ulcer’s disappearance, and started treating patients with antibiotics – successfully. He tried to interest pharmaceutical companies – wrote to Geigy, Hoechst, Bayer, etc. No joy. JAMA rejected his article. The local medical society referred him for disciplinary action and fined him.
The Chinese noticed that antibiotics could cure ulcers in the early 70s, but they were Commies, so it didn’t count.
Think about it: peptic and duodenal ulcer were fairly common, and so were effective antibiotics, starting in the mid-40s. Every internist in the world – every surgeon – every GP was accidentally curing ulcers – not just once or twice, but again and again. For decades. Almost none of them noticed it, even though it was happening over and over, right in front of their eyes. Those who did notice were ignored until the mid-80s, when Robin Warren and Barry Marshall finally made the discovery stick. Even then, it took something like 10 years for antibiotic treatment of ulcers to become common, even though it was cheap and effective. Or perhaps because it was cheap and effective.
This illustrates an important point: doctors are lousy scientists, lousy researchers. They’re memorizers, not puzzle solvers. Considering that Western medicine was an ineffective pseudoscience – actually, closer to a malignant pseudoscience – for its first two thousand years, we shouldn’t be surprised. Since we’re looking for low-hanging fruit, this is good news. It means that the great discoveries in medicine are probably not mined out. From our point of view, past incompetence predicts future progress. The worse, the better!
I think Greg is underestimating the slight problems of massive over-regulation and guild-like rent-seeking, which limit medical research and the provision of medical advice quite severely. He does, however, make a compelling case that there is still low-hanging fruit there, which a more scientific and rational approach could easily pluck. I also can't help but wonder if investigating older, supposedly disproved, treatments and theories together with novel research might turn up a few interesting things.
Many on LessWrong share Greg's estimation of the incompetence of the medical establishment, but how many share his optimism that our lack of recent progress isn't just the result of dealing with a really difficult problem set? It may be hard to tell if he is right.
Yes, this is a repost from Hacker News, but I want to point out some books that are of LW-related interest.
The Hacker Shelf is a repository of freely available textbooks. Most of them are about computer programming or the business of computer programming, but there are a few that are perhaps interesting to the LW community. All of these were publicly available beforehand, but I'm linking to the aggregator in hopes that people can think of other freely available textbooks to submit there.
The site is in its beginning explosion phase; in the time it took to write this post, it doubled in size. If previous sites are any indication, it will crest in a month or so. People will probably lose interest after three months, and after a year the site will probably silently close shop.
MacKay, Information Theory, Inference, and Learning Algorithms
I really wish I had an older version of this book; the newer one has been marred by a Cambridge UP ad on the upper margin of every page. Publishers ruin everything.
The book covers reasonably concisely the basics of information theory and Bayesian methods, with some game theory and coding theory (in the sense of data compression) thrown in on the side. The style takes after Knuth, but refrains from the latter's more encyclopedic tendencies. It's also the type of book that gives a lot of extra content in the exercises. It unfortunately assumes a decent amount of mathematical knowledge — linear algebra and calculus, but nothing you wouldn't find on the Khan Academy.
Easley and Kleinberg, Networks, Crowds, and Markets
There's just a lot of stuff in this book, most of it of independent interest. The thread that ties the book together is graph theory, and with it they cover a great deal of game theory, voting theory, and economics. There are lots of graphs and pictures, and the writing style is pretty deliberate and slow-paced. The math is not very intense; all their probability spaces are discrete, so there's no calculus, and only a few touches of linear algebra.
Gabriel, Patterns of Software
This is a more fluffy book about the practice of software engineering. It's rather old, but I'm linking to it anyway because I agree with the author's feeling that the software engineering discipline has more or less misunderstood Christopher Alexander's work on pattern languages. The author tends to ramble on. I think there's some good wisdom about programming practices and organizational management in general that one could abstract away from this book.
Nisan et al., Algorithmic Game Theory
I hesitate to link this because the math level is exceptionally high, perhaps high enough that anyone who can read the book probably knows the better part of its contents already. But game/decision theory is near and dear to LW's heart, so perhaps someone will gather some utility from this book. There's an awful lot going on in it. A brief selection: a section on the relationship between game theory and cryptography, a section on computation in prediction markets, and a section analyzing the incentives of information security.
Recent research in Oxford and elsewhere has shown that one type of brain stimulation in particular, called transcranial direct current stimulation or TDCS, can be used to improve language and maths abilities, memory, problem solving, attention, even movement.
Critically, this is not just helping to restore function in those with impaired abilities. TDCS can be used to enhance healthy people’s mental capacities. Indeed, most of the research so far has been carried out in healthy adults.
The article goes on to discuss the ethics of the technique.
"The point of this demonstration is to show that you can think with numbers in a practical and efficient way without having a statistician in the room. Anyone can handle the windowpane approach with numbers. Just have a clear definition of Changed? (Yes or No) and a clear definition of the Group (Treatment or Control). Then just count and look for percentage differences. A 10% difference is small, 30% is moderate, and 50% is large. And, realize that while “small” may be hard to detect, it can definitely make big practical effect.
Now whether you conceptualize Effect Sizes as windowpanes or jars with marbles, you now understand what the idea, Difference, means. You can count or see No, Small, Medium, or Large Differences and interpret those complex statistical arguments you encounter all the time. Realize again, that this approach is not Statistics for Dummies, Idiots, or Fools, but is a standard and mathematically correct way to present quantitative information."
tl;dr: Natural frequencies (ratios of counts of subjects), rather than conditional probabilities, are easier for people to comprehend.
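A minimal sketch of the counting procedure the quote describes, with made-up counts:

```python
# Tally Changed? (yes/no) by Group (treatment/control), then compare
# simple percentages. All numbers here are invented for illustration.
counts = {
    ("treatment", "changed"): 30, ("treatment", "unchanged"): 20,
    ("control",   "changed"): 10, ("control",   "unchanged"): 40,
}

def percent_changed(group):
    changed = counts[(group, "changed")]
    return 100 * changed / (changed + counts[(group, "unchanged")])

diff = percent_changed("treatment") - percent_changed("control")
print(f"treatment: {percent_changed('treatment'):.0f}% changed")   # 60%
print(f"control:   {percent_changed('control'):.0f}% changed")     # 20%
print(f"difference: {diff:.0f} points")  # ~10 small, ~30 moderate, ~50 large
```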
Jonah Lehrer has up another of his contrarian science articles: "Trials and Errors: Why Science Is Failing Us".
Main topics: the failure of drugs in clinical trials, diminishing returns to pharmaceutical research, doctors over-treating, and Humean causality-correlation distinction, with some Ioannidis mixed through-out.
In completely unrelated news, Nick Bostrom is stepping down from IEET's Chairman of the Board.
Avorn (2004) reports:
In a former British colony, most healers believed the conventional wisdom that a distillation of fluids extracted from the urine of horses, if dried to a powder and fed to aging women, could act as a general tonic, preserve youth, and ward off a variety of diseases. The preparation became enormously popular throughout the culture, and was widely used by older women in all strata of society. Many years later modern scientific studies revealed that long-term ingestion of the horse-urine extract was useless for most of its intended purposes, and that it causes tumors, blood clots, heart disease, and perhaps brain damage.
The former colony is the United States; the time is now; the drug is the family of hormone replacement products that include Prempro and Premarin (manufactured from pregnant mares' urine, hence its name). For decades, estrogen replacement in postmenopausal women was widely believed to have "cardio-protective" properties; other papers in respected medical journals reported that the drugs could treat depression and incontinence, as well as prevent Alzheimer's disease. The first large, well-conducted, controlled clinical trial of this treatment in women was not published until 1998: it found that estrogen replacement actually increased the rate of heart attacks in the patients studied. Another clinical trial published in 2002 presented further evidence that these products increased the risk of heart disease, stroke, and cancer. Further reports a year later found that rather than preventing Alzheimer's disease, the drugs appeared to double the risk of becoming senile.
Armstrong (2006) adds:
The treatment seemed to work because those who used the drug tended to be healthier than those who did not. This was because it was used by people who were more interested in taking care of their health.
This seems like by far the best investment of $300,000 out there, if your metric is revolutionary new physics discovered per dollar. I pointed the founder at Thiel's Breakout Labs, which is probably more suited to this kind of thing than Kickstarter. But there is still a very non-negligible chance that the Kickstarter Grant will come to fruition.
This is a first stab at solving Goodman's famous grue problem. I haven't seen a post on LW about the grue paradox, and this surprised me, since I had figured that if any arguments would be raised against Bayesian LW doctrine, it would be the grue problem. I haven't looked at many proposed solutions to this paradox, besides some of the basic ones in "The New Problem of Induction". So, I apologize now if my solution is wildly unoriginal. I am willing to put you through this, dear reader, because:
- I wanted to see how I would fare against this still largely open, devastating, and classic problem, using only the arsenal provided to me by my minimal Bayesian training, and my regular LW reading.
- I wanted the first LW article about the grue problem to attack it from a distinctly Lesswrongian approach, without the benefit of hindsight knowledge of the solutions of non-LW philosophy.
- And lastly, because, even if this solution has been found before, if it is the right solution, it is to LW's credit that its students can solve the grue problem with only the use of LW skills and cognitive tools.
I would also like to warn the savvy subjective Bayesian that just because I think that probabilities model frequencies, and that I require frequencies out there in the world, does not mean that I am a frequentist or a realist about probability. I am a formalist with a grain of salt. There are no probabilities anywhere in my view, not even in minds; but the theorems of probability theory, when interpreted, share a fundamental contour with many important tools of the inquiring mind, including both the nature of frequency and the set of rational subjective belief systems. There is nothing more to probability than that system which produces its theorems.
Lastly, I would like to say that even if I have not succeeded here (which I think I have), there is likely something valuable that can be made from the leftovers of my solution after the onslaught of penetrating critiques that I expect from this community. Solving this problem is essential to LW's methods, and our arsenal is fit to handle it. If we are going to be taken seriously in the philosophical community as a new movement, we must solve serious problems from academic philosophy, and we must do it in distinctly Lesswrongian ways.
"The first emerald ever observed was green.
The second emerald ever observed was green.
The third emerald ever observed was green.
The nth emerald ever observed was green.
There is a very high probability that a never before observed emerald will be green."
That is the inference that the grue problem threatens, courtesy of Nelson Goodman. The grue problem starts by defining "grue":
"An object is grue iff it is first observed before time T, and it is green, or it is first observed after time T, and it is blue."
So you see that before time T, from the list of premises:
"The first emerald ever observed was green.
The second emerald ever observed was green.
The third emerald ever observed was green.
The nth emerald ever observed was green."
(we will call these the green premises)
it follows that:
"The first emerald ever observed was grue.
The second emerald ever observed was grue.
The third emerald ever observed was grue.
The nth emerald ever observed was grue."
(we will call these the grue premises)
The proposer of the grue problem asks at this point: "So if the green premises are evidence that the next emerald will be green, why aren't the grue premises evidence for the next emerald being grue?" If an emerald is grue after time T, it is not green. Let's say that the green premises bring the probability of "A new unobserved emerald is green." to 99%. In the skeptic's hypothesis, by symmetry they should also bring the probability of "A new unobserved emerald is grue." to 99%. But of course after time T, this would mean that the probability of observing a green emerald is 99%, and the probability of not observing a green emerald is also at least 99%. Since these sentences have no intersection, i.e., they cannot happen together, to find the probability of their disjunction we just add their individual probabilities. This must give us a number at least as big as 198%, which is of course a contradiction of the Kolmogorov axioms. We should not be able to form a statement with a probability greater than one.
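Spelled out, with 0.99 standing in for "a very high probability", the threatened violation is:

$$P(\text{next emerald is green}) = 0.99, \qquad P(\text{next emerald is grue}) = 0.99.$$

$$\text{After } T,\ \text{grue} \Rightarrow \lnot\text{green},\ \text{so}\ P(\text{green} \lor \text{grue}) = P(\text{green}) + P(\text{grue}) = 1.98 > 1,$$

which violates the axiom that no event has probability greater than 1.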
This threatens the whole of science, because you cannot simply keep this isolated to emeralds and color. We may think of the emeralds as trials, and green as the value of a random variable. Ultimately, every result of a scientific instrument is a random variable, with a very particular and useful distribution over its values. If we can't justify inferring probability distributions over random variables based on their previous results, we cannot justify a single bit of natural science. This, of course, says nothing about how it works in practice. We all know it works in practice. "A philosopher is someone who says, 'I know it works in practice, I'm trying to see if it works in principle.'" - Dan Dennett
We may look at an analogous problem. Let's suppose that there is a table, that balls are being dropped onto this table, and that there is an infinitely thin line drawn perpendicular to the edge of the table somewhere, which we are unaware of. The problem is to figure out the probability of the next ball landing right of the line given the previous results. Our first prediction should be that there is a 50% chance of the ball being right of the line, by symmetry. If we get the result that one ball landed right of the line, by Laplace's rule of succession we infer that there is a 2/3 chance that the next ball will be right of the line. After n trials, if every trial gives a positive result, the probability we should assign to the next trial being positive as well is (n+1)/(n+2).
If this line were placed 2/3 of the way down the table, we should expect the ratio of rights to lefts to approach 2:1. This gives us a 2/3 chance of the next ball being a right, and the fraction of rights out of trials approaches 2/3 ever more closely as more trials are performed.
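A quick simulation of this setup (the hidden line placed so that 2/3 of the table lies to its right), comparing the empirical frequency with the general Laplace estimate (s+1)/(n+2) for s positive results in n trials:

```python
import random

random.seed(1)
line = 1 / 3                  # hidden; 2/3 of the table is right of it
n = 10_000
s = sum(random.random() > line for _ in range(n))   # "right" landings

print("rights : lefts ≈", s, ":", n - s)            # ≈ 2 : 1
print("empirical P(right):", s / n)
print("Laplace (s+1)/(n+2):", (s + 1) / (n + 2))
```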
Now let us suppose a grue skeptic approaching this situation. He might make up two terms "reft" and "light". Defined as you would expect, but just in case:
"A ball is reft of the line iff it is right of it before time T when it lands, or if it is left of it after time T when it lands.
A ball is light of the line iff it is left of the line before time T when it lands, or if it is right of the line after time T when it first lands."
The skeptic would continue:
"Why should we treat the observation of several occurrences of Right, as evidence for 'The next ball will land on the right.' and not as evidence for 'The next ball will land reft of the line.'?"
Things for some reason become perfectly clear at this point for the defender of Bayesian inference, because now we have an easily imaginable model. Of course, if a ball landing right of the line is evidence for Right, then it cannot possibly be evidence for ~Right; to be evidence for Reft, after time T, is to be evidence for ~Right, because after time T, Reft is logically identical to ~Right; hence it is not evidence for Reft, after time T, for the same reasons it is not evidence for ~Right. Of course, before time T, any evidence for Reft is evidence for Right, for analogous reasons.
But now the grue skeptic can say something brilliant, that stops much of what the Bayesian has proposed dead in its tracks:
"Why can't I just repeat that paragraph back to you and swap every occurrence of 'right' with 'reft' and 'left' with 'light', and vice versa? They are perfectly symmetrical in terms of their logical realtions to one another.
If we take 'reft' and 'light' as primitives, then we have to define 'right' and 'left' in terms of 'reft' and 'light' with the use of time intervals."
What can we possibly reply to this? Can he/she not do this with every argument we propose, then? Certainly, the skeptic admits that Bayes, and the contradiction between Right and Reft after time T, prohibit previous Rights from being evidence of both Right and Reft after time T; where he is challenging us is in choosing Right as the result which they are evidence for, even though "Reft" and "Right" have a completely symmetrical syntactical relationship. There is nothing about the definitions of reft and right which distinguishes them from each other, except their spelling. So is that it? No, this simply means we have to propose an argument that doesn't rely on purely syntactical reasoning, so that if the skeptic performs the swap on our argument, the resulting argument is no longer sound.
What would happen in this scenario if it were actually set up? I know that seems like a strangely concrete question for a philosophy text, but its answer is a helpful hint. What would happen is that after time T, the ratio Rights:Lefts would continue to behave as expected as more trials were added, while the ratio Refts:Lights would approach the reciprocal of Rights:Lefts. The only way for this not to happen is for us to have been calling the right side of the table "reft", or for the line to have moved. We can only figure out where the line is by knowing where the balls landed relative to it; anything we can figure out about where the line is from knowing which balls landed Reft and which landed Light, we can only figure out because, knowing this and the time, we can know whether each ball landed left or right of the line.
To this I know of no reply which the grue skeptic can make. If he/she says the paragraph back to me with the proper words swapped, it is not true, because in the hypothetical where we have a table, a line, and we are calling one side right and the other side left, the only way for Refts:Lights to behave as expected as more trials are added is to move the line (if even that); otherwise the ratio of Refts to Lights will approach the reciprocal of Rights to Lefts.
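That claim is easy to check by simulation (a sketch under the same toy assumptions as before; "reft" is computed directly from its definition):

```python
# After time T, "reft" flips to mean the left side, so the Refts count
# inverts while the Rights count stays stable.
import random

random.seed(2)
line, T, n = 1 / 3, 5_000, 10_000
before = {"right": 0, "reft": 0}
after = {"right": 0, "reft": 0}

for t in range(n):
    right = random.random() > line          # physical test: which side?
    reft = right if t < T else not right    # "reft" by its definition
    bucket = before if t < T else after
    bucket["right"] += right
    bucket["reft"] += reft

print("before T:", before)   # rights ≈ refts ≈ 2/3 of those trials
print("after T: ", after)    # rights ≈ 2/3, refts ≈ 1/3: the ratio flips
```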
This thin line is analogous to the frequency of emeralds that turn out green out of all the emeralds that get made. This is why we can assume that the line will not move: that frequency has one precise value, which never changes. Its other important feature is reminding us that even if two terms are syntactically symmetrical, they may have semantic conditions for application which are ignored by the syntactical model, e.g., checking to see which side of the line the ball landed on.
Every random variable has, as a part of it, stored in its definition/code, a frequency distribution over its values. By the fact that some things happen sometimes, and others happen other times, we know that the world contains random variables, even if they are never fundamental in the source code. Note that "frequency" is not used here as a state of partial knowledge; it is a fact about a set and one of its subsets.
The reason that:
"The first emerald ever observed was green.
The second emerald ever observed was green.
The third emerald ever observed was green.
The nth emerald ever observed was green.
There is a very high probability that a never before observed emerald will be green."
is a valid inference, but the grue equivalent isn't, is that grue is not a property that the emerald construction sites of our universe deal with. They are blind to the grueness of their emeralds; they only determine whether or not the next emerald will be green. It may be that the rule the emerald construction sites use to produce a green or non-green emerald changes at time T, but the frequency of some particular result out of all trials will never change; the line will not move. As long as we know which symbols we are using for which values, observing many green emeralds is evidence that the next one will be grue only before time T; after time T, every record of an observation of a green emerald is evidence against a grue one. "Grue" changes meaning from green to blue at time T, while the meaning of "green" stays the same, since we are using the same physical test to determine green-hood as before; just as we use the same test to tell whether the ball landed right or left. There is no reft in the universe's source code, and there is no grue. Green is not fundamental in the source code either, but green can be reduced to some particular range of quanta states; if you had the universe's source code, you couldn't write grue without first writing green, while writing green without knowing a thing about grue would be no harder than writing it while knowing grue. Having a consistent physical test, or primary condition for applicability, is what privileges green over grue after time T; to have such a test is the same as to reduce to a specifiable range of physical parameters, and the existence of such a test is what prevents the skeptic from performing his/her swaps on our arguments.
Take this more as a brainstorm than as a final solution. It wasn't originally, but it should have been. I'll write something more organized and concise after I think about the comments more, and make some graphics I've designed that make my argument much clearer, even to myself. But keep those comments coming, and tell me if you want specific credit for anything you may have added to my grue toolkit in the comments.
This American Life episode 450: "So Crazy It Just Might Work". The whole episode is good, but act one (6:48-42:27) is relevant to LW, about a trained scientist teaming up with an amateur on a cancer cure.
It's downloadable until 19 Nov 2011 or so, and streamable thereafter.
(Technical nit: It sounds to me like the reporter doesn't know the difference between sound and electromagnetism.)
Edit: Here's a quick rot13ed summary: Vg qbrfa'g tb jryy. Nagubal Ubyynaq frrf rkcrevzragny pbagebyf naq ercebqhpvovyvgl nf guvatf gung trg va uvf jnl. Ur frrzf gb unir gnxra [gur Penpxcbg Bssre](uggc://yrffjebat.pbz/yj/w8/gur_penpxcbg_bssre/).
Sometimes, scientific discovery is just a matter of sitting down and using the tools of "connected science" already available to us. Stories like this one underscore the need for generalists:
Don Swanson seems an unlikely person to make medical discoveries. A retired but still active information scientist at the University of Chicago, Swanson has no medical training, does no medical experiments, and has never had a laboratory. Despite this, he’s made several significant medical discoveries. One of the earliest was in 1988, when he investigated migraine headaches, and discovered evidence suggesting that migraines are caused by magnesium deficiency. At the time the idea was a surprise to other scientists studying migraines, but Swanson’s idea was subsequently tested and confirmed in multiple therapeutic trials by traditional medical groups.
How is it that someone without any medical training could make such a discovery? Although Swanson had none of the conventional credentials of medical research, what he did have was a clever idea. Swanson believed that scientific knowledge had grown so vast that important connections between subjects were going unnoticed, not because they were especially subtle or hard to grasp, but because no one had a broad enough understanding of science to notice those connections: in a big enough haystack, even a 50-foot needle may be hard to find. Swanson hoped to uncover such hidden connections using a medical search engine called Medline, which makes it possible to search millions of scientific papers in medicine—you can think of Medline as a high-level map of human medical knowledge. He began his work by using Medline to search the scientific literature for connections between migraines and other conditions. Here are two examples of connections he found: (1) migraines are associated with epilepsy; and (2) migraines are associated with blood clots forming more easily than usual. Of course, migraines have been the subject of much research, and so those are just two of a much longer list of connections that he found. But Swanson didn’t stop with that list. Instead, he took each of the associated conditions and then used Medline to find further connections to that condition. He learned that, for example, (1) magnesium deficiency increases susceptibility to epilepsy; and (2) magnesium deficiency makes blood clot more easily. Now, when he began his work Swanson had no idea he’d end up connecting migraines to magnesium deficiency. But once he’d found a few papers suggesting such two-stage connections between magnesium deficiency and migraines, he narrowed his search to concentrate on magnesium deficiency, eventually finding eleven such two-stage connections to migraines. Although this wasn’t the traditional sort of evidence favored by medical scientists, it nonetheless made a compelling case that migraines are connected to magnesium deficiency. Before Swanson’s work a few papers had tentatively (and mostly in passing) suggested that magnesium deficiency might be connected to migraines. But the earlier work wasn’t compelling, and was ignored by most scientists. By contrast, Swanson’s evidence was highly suggestive, and it was soon followed by therapeutic trials that confirmed the migraine-magnesium connection.
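Swanson's procedure is essentially a search for shared intermediates in an association graph. A toy sketch (the associations are invented for illustration, not real Medline output):

```python
from collections import Counter

# condition -> conditions the literature links it to (made-up toy data)
links = {
    "migraine":             {"epilepsy", "easy clotting", "vasospasm"},
    "magnesium deficiency": {"epilepsy", "easy clotting", "vasospasm"},
    "caffeine":             {"vasospasm"},
}

target = "migraine"
# Score each candidate by how many intermediate conditions it shares
# with the target: each shared condition is one two-stage connection.
scores = Counter({c: len(assoc & links[target])
                  for c, assoc in links.items() if c != target})

for candidate, n in scores.most_common():
    print(f"{candidate}: {n} two-stage connection(s) to {target}")
```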
Michael Nielsen's new book Reinventing Discovery is invigorating. Here's one passage on how a small group talked an issue through and had a large impact on scientific progress:
Why is it that biologists share genetic data in GenBank in the first place? When you think about it, it’s a peculiar choice: if you’re a professional biologist it’s to your advantage to keep data secret as long as possible. Why share your data online before you get a chance to publish a paper or take out a patent on your work? In the scientific world it’s papers and, in some fields, patents that are rewarded by jobs and promotions. Publicly releasing data typically does nothing for your career, and might even damage it, by helping your scientific competitors.
In part for these reasons, GenBank took off slowly after it was launched in 1982. While many biologists were happy to access others’ data in GenBank, they had little interest in contributing their own data. But that has changed over time. Part of the reason for the change was a historic conference held in Bermuda in 1996, and attended by many of the world’s leading biologists, including several of the leaders of the government-sponsored Human Genome Project. Also present was Craig Venter, who would later lead a private effort to sequence the human genome. Although many attendees weren’t willing to unilaterally make the first move to share all their genetic data in advance of publication, everyone could see that science as a whole would benefit enormously if open sharing of data became common practice. So they sat and talked the issue over for days, eventually coming to a joint agreement—now known as the Bermuda Agreement—that all human genetic data should be immediately shared online. The agreement wasn’t just empty rhetoric. The biologists in the room had enough clout that they convinced several major scientific grant agencies to make immediate data sharing a mandatory requirement of working on the human genome. Scientists who refused to share data would get no grant money to do research. This changed the game, and immediate sharing of human genetic data became the norm. The Bermuda Agreement eventually made its way to the highest levels of government: on March 14, 2000, US President Bill Clinton and UK Prime Minister Tony Blair issued a joint statement praising the principles described in the Bermuda Agreement, and urging scientists in every country to adopt similar principles. It’s because of the Bermuda Agreement and similar subsequent agreements that the human genome and the HapMap are publicly available.
Journalist David McRaney has recently published a popular book on human rationality. The book, You Are Not So Smart, is currently the 3rd best-selling book in Nonfiction/Philosophy on Amazon.com after less than a week on the market (and the 8th best-selling in Nonfiction/Education).
The tagline of the project is: "A celebration of self-delusion." As such, the book seems less an attempt to give advice on how to act and decide than an attempt to reveal, chapter by chapter, the folly of common sense.
Topics include: Hindsight Bias, Confirmation Bias, The Sunk Cost Fallacy, The Anchoring Effect, The Illusion of Transparency, The Just World Fallacy, The Representativeness Heuristic, The Perils of Introspection, The Dunning-Kruger Effect, The Monty Hall Problem, The Bystander Effect, Placebo Buttons, Groupthink, Conformity, Social Loafing, Learned Helplessness, Cults, Change Blindness, Self-Fulfilling Prophecies, Self-Handicapping, The Availability Heuristic, Self-Serving Bias, The Ultimatum Game, and Inattentional Blindness.
These are topics we enjoy learning about, pride ourselves on knowing a lot about, and, we profess, want more people to know about. A popular book on the subject is now out. This sounds like a good thing.
I will note that the blog features at least one direct quote from LessWrong.
We always know what we mean by our words, and so we expect others to know it too. Reading our own writing, the intended interpretation falls easily into place, guided by our knowledge of what we really meant. It’s hard to empathise with someone who must interpret blindly, guided only by the words.
- Eliezer Yudkowsky, from LessWrong.com
On one hand, You Are Not So Smart could be a boon to Eliezer's popular rationality book by priming the market; his writings on a given topic have rarely been described as redundant. On the other hand, it seems to me that this book closely covers a number of the same topics, seemingly in a style similar to the treatments published on this site and Overcoming Bias, which are intended to be published in book form at a later date. I will try to refrain from speculation here.
Sample book chapters are available on the You Are Not So Smart blog.
I'll save the rest of my review until I have actually read the book.
In the meantime I would like to know your thoughts on this project.
Two talks from the Future of Humanity Institute are now online (this is the first time we've done this, so please excuse the lack of polish). The first is Anders Sandberg talking about brain emulations (a technical overview); the second is me talking about the risks of Oracle AIs (an informal presentation). They can be found here:
Feasibility of whole-brain emulation: http://www.youtube.com/watch?v=3nIzPpF635c&feature=related, initial paper at http://www.philosophy.ox.ac.uk/__data/assets/pdf_file/0019/3853/brain-emulation-roadmap-report.pdf, new paper still to come.
Thinking inside the box: Using and controlling an Oracle AI: http://www.youtube.com/watch?v=Gz9zYQsT-QQ&feature=related, paper at http://www.aleph.se/papers/oracleAI.pdf
The October 2011 Scientific American has an editorial from its board of editors called "Ban chimp testing", that says: "In our view, the time has come to end biomedical experimentation on chimpanzees... Chimps should be used only in studies of major diseases and only when there is no other option." Much of the knowledge described in Luke's recent post on the cognitive science of rationality would have been impossible to acquire under such a ban.
I encourage you to write to Scientific American in favor of chimp testing. Some points that I plan to make:
- The editors obliquely criticized the Institute of Medicine's study of whether chimps are "truly necessary" for biomedical and behavioral research, on the grounds that the NIH instructed it to omit ethics from consideration. But that instruction was the correct approach: the team tasked with gathering evidence about the necessity of chimps for research should not also be making ethical judgements. It gathers the data for someone else to make ethical judgements.
- Saying chimps should be used "only when there is no other option" is the same as saying chimps should never be used. There are always other options.
- This position might be morally defensible if humans were allowed to volunteer themselves for testing. The knowledge to be gained from an experiment is surely worth the harm to the subject, if the subject chooses to undergo the experiment. In many cases there are humans who think an experiment is important enough that they would be willing to participate in it themselves, but they are not allowed to because of restrictions on human testing. Banning chimp testing should thus be done only in conjunction with allowing human testing.
I also encourage you to adopt a tone of moral outrage. Rather than taking the usual apologetic "we're so sorry, but we have to do these awful things in the name of science" tone, get indignant at the editors who intend to do harm to uncountable numbers of innocent people. And, if you find a way, get indignant not just about harm, but about lost potential, by pointing out the ways that our knowledge about how brains work can make our lives better, not just save us from disease.
You can comment on this here, but comments are AFAIK not printed in later issues as letters to the editor. Actual letters, or at least email, probably have more impact. You can't submit a letter to the editor through the website, because letters are magically different from things submitted on a website.
ADDED: Many people responded by claiming that banning chimp experimentation occupies some moral high ground. That is logically impossible.
To behave morally, you have to do two things:
1. Figure out, inherit, or otherwise acquire a set of moral goals. Let's say, for example, the goal is to maximize the sum, over all species s and all individuals i of species s, of w_s * [pleasure(s,i) - pain(s,i)], where w_s is a weight assigned to species s. (A toy sketch of this objective follows the list.)
2. Act in a way directed by those moral goals.
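As a worked example of the objective in step 1, here is a minimal sketch in Python; all species weights and pleasure/pain numbers are invented purely for illustration:

    # Toy evaluation of the example moral objective:
    # maximize sum over species s, individuals i of w_s * (pleasure - pain).
    species_weights = {"human": 1.0, "chimp": 0.4}

    # (species, pleasure, pain) for each individual affected by a policy
    individuals = [
        ("human", 8.0, 1.0),   # e.g. a patient helped by the research
        ("chimp", 2.0, 5.0),   # e.g. a research subject harmed
    ]

    def weighted_utility(individuals, weights):
        """Sum of w_s * (pleasure - pain) over all individuals."""
        return sum(weights[s] * (pleasure - pain)
                   for s, pleasure, pain in individuals)

    print(weighted_utility(individuals, species_weights))  # 1.0*7 + 0.4*(-3) = 5.8

Under these made-up numbers the experiment comes out positive; with a higher chimp weight or greater harm it would come out negative. That is exactly the per-experiment tradeoff discussed next.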
If you ban chimp testing, you are forbidding people from making moral decisions. If you really cared about the suffering of sentient beings, you would also care about the suffering of humans; and you would realize that there is a tradeoff between the suffering of those experimented on and the benefit to others, a tradeoff that is different for every experiment.
People who call for a ban on chimp testing are therefore calling to forbid people from making moral judgements and taking moral actions. There is a wide range of laws and positions that could be argued to be moral. But just saying "We are incapable of making moral decisions, so we will ban moral decision-making" is not one of them.
I recently ran across a link discussing the new president of Harvey Mudd College, and specifically her work in making the computer science major more accessible to women. This seems neat and interesting, except that barely any details are provided.
The article mentions that the introductory computer science course was split into different courses, one of which is taught in Python. Maria Klawe's webpage at Harvey Mudd says these steps were part of a three-part plan, and adds "I encourage you to read more", but there are no obvious links on the page to any specifics.
Is anyone here from Harvey Mudd who knows more, or knows how to find out more? For example, did the increase in female computer scientists go along with an increase in the size of the program (as is implied), or was there a displacement of male computer scientists? Is the success limited to the one department, or are other engineering and science majors attracting more women? I noticed the front page mentions that Harvey Mudd was recently named the top engineering school in the US, so presumably the gains don't result from "dumbing down" the program, but I'd like to see more information.