
[Link] Figureheads, ghost-writers and pseudonymous quant bloggers: the recent evolution of authorship in science publishing

2 Gram_Stone 28 September 2016 11:16PM

[Link] SSC: It's Bayes All The Way Up

2 Houshalter 28 September 2016 06:06PM

Would you notice if science died?

4 Douglas_Knight 08 March 2016 04:04AM


Science is a big deal. It would be worth knowing if it stalled, regressed, or died out, whether in the body of knowledge or in the techniques for generating more knowledge. You could practice by reviewing history and looking for times and places where it stumbled. In this exercise you have the advantage of hindsight, but the disadvantage of much less direct access to the raw data of the scientific practice of the time. But regardless of how it compares to the real task, this is practice. This is an opportunity to test theories and methods before committing to them. There is a limited amount of history to practice on, but it's a lot more than the real event, the present.

Many say that they would notice if science died because engineering would grind to a halt or even regress. What does this heuristic say when applied to history? Does it match other criteria?

Many say that the Greeks were good at science and the Romans at engineering (perhaps also the Han vs the Song). This is not really compatible with the heuristic above. What options do we have to draw a coherent conclusion? Either science did not die, or engineering did not advance, or science is not so necessary for engineering; either we are bad at judging science from history, or we are bad at judging engineering from history, or engineering is not a good heuristic for judging science. None of these is comforting for our ability to judge the future. The third is simply the rejection of the popular heuristic. The first two are the rejection of the exercise of history. But if we cannot judge history, we have no opportunity to practice. Worse, if we are unable to judge history, the present may be no easier.

One recourse is to posit that the past is difficult because of sparse information and that the future we experience ourselves will be easy to judge. But many people lived through the past; what did they think at the time? In particular, how did the Romans think they compared to the Greeks? Did they think that there was progress or regress? Did they agree with modern hindsight? They thought that the Greeks were good at science. Pop science books by Pliny and Seneca are really accounts of Greek knowledge. Similarly, Varro's practical book of agriculture is based on dozens of Greek sources. And the Romans were proud of their engineering. Frontinus urged his readers to compare the Roman aqueducts to the idle pyramids and wonders of the Greeks. Maybe he should be discounted for his professional interest. But Pliny describes the Roman aqueducts as the most remarkable achievement in the world in the midst of an account of Greek knowledge. Indeed, the modern conventional wisdom is probably simply copied from the Romans. Did the Romans endorse the third claim, that science was a prerequisite to engineering? I do not know. Perhaps they held that it was necessary, but could be left to Greek slaves.

I think that this example should make people nervous about the heuristic about science and engineering. But people who don’t hold any such heuristic should be even more nervous.

I think I know what the answer is. I think that engineering did regress, but the Romans did not notice. They were too impressed by size, so they made bigger aqueducts, without otherwise improving on Greek techniques; and they failed to copy much other Greek technology. Perhaps the heuristic is fine, but it just passes the buck: how much can you trust your judgement of the state of engineering? On the other hand, I think that science regressed much more than engineering, so I do not think them as coupled as the heuristic suggests.

Would you notice if science died? How would you notice? Have you tried that method against history?


Some historical test cases: the transition from Greece to Rome; Han vs Tang vs Song; the Renaissance.

This year's biggest scientific achievements

9 Elo 13 December 2015 05:26AM

For our solstice event I tried to put together a list of this year's biggest scientific achievements.  They can likely all be looked up with a bit of searching, and each one is worthy of a celebration in its own right.  But mostly I want to say: we have come a long way this year.  And we have a long way to go.

I tried to include science and technology in this list, but really anything world-scale (excluding politics and natural disasters) is worth celebrating.


  • Rosetta mission lands on a comet

  • Using young blood to fight old age (in rats)

  • Kinghorn human genome sequencing machines (Sydney-relevant)

  • 100,000 Genomes Project

  • The world's oldest cave art, at ~40,000 years old

  • Tesla battery; Tesla also released its patents on its electric engines for use by anyone

  • Virtual reality (Cardboard)

  • Astronauts growing their own food

  • Self-driving cars

  • CubeSats

  • Lab-grown kidneys successfully implanted into animals

  • Synthetic DNA

  • Chicken with a reptile face

  • Nearly an Alzheimer's cure (ultrasound techniques)

  • Dawn orbits Ceres

  • DeepDream machine learning (and twitch-deepdream)

  • Prosthetic limbs that transmit feeling back to the user

  • Autonomous rocket landing, pointy end up

  • LightSail project

  • Ion space-travel engine

  • Anti-aging virus injected into patient 0

  • Super-black substance made

  • Q-carbon

  • High-temperature superconductor (-70°C)

  • 23andMe was allowed to open back up

  • EnChroma colourblindness-adjusting glasses

  • Google releases TensorFlow which, whilst not very good at the moment, has the potential to centralize the deep learning libraries.

  • CRISPR's ability to change the germ line

  • Deep Dreaming, but also image generation: faces, bedrooms, and even a toilet in a field have been generated. It's clear that within the next few years there will be pictures entirely generated by neural nets. (Code: https://github.com/soumith/dcgan.torch)





from https://en.wikipedia.org/wiki/2015

April 29 – The World Health Organization (WHO) declares that rubella has been eradicated from the Americas.

July 14 - NASA's New Horizons spacecraft performs a close flyby of Pluto, becoming the first spacecraft in history to visit the distant world.

September 10 – Scientists announce the discovery of Homo naledi, a previously unknown species of early human in South Africa.

September 28 – NASA announces that liquid water has been found on Mars.


Recommendations from the slack:

China makes a genetically modified micropig and sells it: http://www.theguardian.com/world/2015/oct/03/micropig-animal-rights-genetics-china-pets-outrage

Psych studies can't be reproduced: http://www.theverge.com/2015/8/27/9216565/psychology-studies-reproducability-issues

Zoom contact lenses: http://mic.com/articles/118670/this-painless-eye-implant-could-give-you-superhuman-vision#.4S5ihAKNE

Room-temperature synthetic diamonds: http://phys.org/news/2015-11-phase-carbon-diamond-room-temperature.html


Notable deaths

Terry Pratchett

Malcolm Fraser

John Forbes Nash Jr.

Oliver Sacks

Christopher Lee


Nobel medals this year

Chemistry – Paul L. Modrich, Aziz Sancar and Tomas Lindahl ("for mechanistic studies of DNA repair")

Economics – Angus Deaton ("for his analysis of consumption, poverty, and welfare")

Literature – Svetlana Alexievich ("for her polyphonic writings, a monument to suffering and courage in our time" )

Peace – Tunisian National Dialogue Quartet ("for its decisive contribution to the building of a pluralistic democracy in Tunisia in the wake of the Jasmine Revolution of 2011")

Physics – Takaaki Kajita and Arthur B. McDonald ("for the discovery of neutrino oscillations, which shows that neutrinos have mass")

Physiology or Medicine – William C. Campbell and Satoshi Ōmura ("for their discoveries concerning a novel therapy against infections caused by roundworm parasites") and Tu Youyou ("for her discoveries concerning a novel therapy against malaria")

 

Other:

The dress 

Ebola outbreak

Polio came back

Also this year: the upcoming SpaceX return flight on 19 Dec

Runner-up: vat meat is almost ready

Runner-up: Soylent got a lot better this year

Runner-up: quantum computing had progressive developments, but nothing specific

 

Things that happened 100 years ago (from wikipedia):

  • March 19 – Pluto is photographed for the first time
  • September 11 – The Pennsylvania Railroad begins electrified commuter rail service between Paoli and Philadelphia, using overhead AC trolley wires for power. This type of system is later used in long-distance passenger trains between New York City, Washington, D.C., and Harrisburg, Pennsylvania.
  • November 25 – Einstein's theory of general relativity is formulated.
  • Alfred Wegener publishes his theory of Pangaea.
Birth: 
  • Thomas Huckle Weller, American virologist, recipient of the Nobel Prize in Physiology or Medicine (d. 2008)
  • Charles Townes, American physicist, Nobel Prize laureate (d. 2015)
  • August 27 – Norman F. Ramsey, American physicist, Nobel Prize laureate (d. 2011)
  • Clifford Shull, American physicist, Nobel Prize laureate (d. 2001)
  • November 19 – Earl Wilbur Sutherland Jr., American physiologist, Nobel Prize laureate (d. 1974)
  • Henry Taube, Canadian-born chemist, Nobel Prize laureate (d. 2005)
Deaths:
  • Paul Ehrlich, German scientist, recipient of the Nobel Prize in Physiology or Medicine (b. 1854)
  • December 19 – Alois Alzheimer, German psychiatrist and neuropathologist (b. 1864)
Nobel Prizes:
  • Chemistry – Richard Willstätter
  • Literature – Romain Rolland
  • Medicine – not awarded
  • Peace – not awarded
  • Physics – William Henry Bragg and William Lawrence Bragg

Meta - This list was compiled for Sydney’s Solstice event; I figured I would share this because it’s pretty neat.

Time to compose: 3-4hrs

With comments from the IRC and slack

To see more of my posts visit my Table of contents

As usual, any suggestions are welcome below.

How do you choose areas of scientific research?

5 FrameBenignly 07 November 2015 01:15AM

I've been thinking lately about the optimal way to organize scientific research, both for individuals and for groups. My first idea: research should have a long-term goal. If you don't have a long-term goal, you will end up wasting a lot of time on useless pursuits. For instance, my rough formulation of the goal of economics is "how do we maximize the productive output of society and distribute it in an equitable manner, without preventing the individual from being unproductive if they so choose?"; the goal of political science should be "how do we maximize the government's ability to provide the resources we want while allowing individuals the freedom to pursue their goals without constraint toward other individuals?"; and the goal of psychology should be "how do we maximize the ability of individuals to make the decisions they would choose if their understanding of the problems they encounter were perfect?" These are rough, as I said, but I think they go further than the way most researchers seem to think about such problems.

 

Political science seems to do the worst in this area in my opinion. Very little research seems to have anything to do with what causes governments to make correct decisions, and when they do research of this type, their evaluation of correct decision making often is based on a very poor metric such as corruption. I think this is a major contributor to why governments are so awful, and yet very few political scientists seem to have well-developed theories grounded in empirical research on ways to significantly improve the government. Yes, they have ideas on how to improve government, but they're frequently not grounded in robust scientific evidence.

 

Another area I've been considering is the search parameters for moving through research topics. An assumption I have is that the overwhelming majority of possible theories are wrong, such that only a minority of areas of research will result in something other than a null outcome. Another assumption is that correct theories are generally clustered: if you get a correct result in one place, there will be a lot more correct results in a related area than for any randomly chosen theory. There seem to be two major methods for searching through the landscape of possibilities. One method is to choose an area where you have strong reason to believe there might be a cluster nearby that fits with your research goals, randomly pick isolated areas of that research area until you get to a major breakthrough, then go through the various permutations of that breakthrough until you have a complete understanding of that particular cluster of knowledge. The other method is to take out large chunks of research possibilities and just throw the book at them. If you come back with nothing, then you can conclude that the entire section is empty; if you get a hit, you can then isolate the many subcomponents and figure out what exactly is going on. Technically I believe the chunking approach should be slightly faster than the random approach, but only by a slight amount unless the random approach is overly isolated (a toy simulation of both strategies is sketched below). If the cluster of most important ideas sits at the 10^-10 scale, and you isolate variables at the 10^-100 scale, then time will be wasted going back up to the correct level. You have to guess what level of isolation will result in the most important insights.
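To make the comparison concrete, here is a minimal toy simulation, entirely my own construction rather than anything from the post: true theories form one contiguous cluster in a large hypothesis space, an experiment reports whether the set of hypotheses it tests contains any truth, and we count experiments until a single true hypothesis is isolated. All parameters are illustrative.

```python
import random

N = 10_000        # size of the hypothesis space
CLUSTER = 100     # size of the single cluster of true hypotheses

def make_landscape(rng):
    start = rng.randrange(N - CLUSTER)
    return set(range(start, start + CLUSTER))

def hit(true_set, hypotheses):
    """One experiment: does this set of hypotheses contain a true one?"""
    return any(h in true_set for h in hypotheses)

def isolated_search(true_set, rng):
    """Test randomly chosen single hypotheses until one gives a non-null result."""
    cost = 0
    while True:
        cost += 1
        if hit(true_set, [rng.randrange(N)]):
            return cost

def chunked_search(true_set, rng, chunk=500):
    """Test large chunks at once; on a hit, bisect down to a single hypothesis."""
    cost = 0
    starts = list(range(0, N, chunk))
    rng.shuffle(starts)
    for lo in starts:
        cost += 1
        if hit(true_set, range(lo, lo + chunk)):
            lo_, hi_ = lo, lo + chunk
            while hi_ - lo_ > 1:              # binary search inside the chunk
                mid = (lo_ + hi_) // 2
                cost += 1
                if hit(true_set, range(lo_, mid)):
                    hi_ = mid
                else:
                    lo_ = mid
            return cost

rng = random.Random(0)
trials = 200
iso = sum(isolated_search(make_landscape(rng), rng) for _ in range(trials)) / trials
chk = sum(chunked_search(make_landscape(rng), rng) for _ in range(trials)) / trials
print(f"isolated: ~{iso:.0f} experiments; chunked: ~{chk:.0f} experiments")
```

On these made-up numbers the chunked strategy needs several times fewer experiments, which matches the intuition above; shrink the chunk size toward 1 and its advantage disappears.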

 

One mistake, I think, is to isolate variables and then proceed through the universe of possibilities systematically, one at a time. If you get a null result in one place, it's likely that very similar research will also produce a null result. Another mistake I often see is researchers not bothering to isolate after they get a hit. You'll sometimes see thousands of studies on the exact same thing without any application of reductionism, e.g. the finding that people who eat breakfast are generally healthier. Clinical and business researchers seem to make this mistake of forgetting reductionism most frequently.

 

I'm also thinking through what types of research are most critical, but haven't gotten too far in that vein yet. It seems like long-term research (40+ years until major breakthrough) should be centered around the singularity, but what about more immediate research?

making notes - an instrumental rationality process.

14 Elo 05 September 2015 10:51PM

The value of having notes. Why do I make notes.

 

Story time!

At one point in my life I had a memory crash; which is to say, once upon a time I could remember a whole lot more than I was presently remembering. I recall thinking, "What did I have for breakfast last Monday? Oh no! Why can't I remember?!" I was terrified. It took a while, but eventually I realised that remembering what I had for breakfast last Monday was:

  1. not crucial to the rest of my life

  2. not crucial to being a functional human being

  3. possibly not even a change: I was not sure if I could usually remember what I ate last Monday, or if this was the first time I had tried to recall it with enough stubbornness to notice that I had no idea.


After surviving my first teen-life crisis I went on to realise a few things about life and about memory:

  1. I will not be remembering everything forever.

  2. Sometimes I forget things that I said I would do. Especially when the number of things I think I will do increases past 2-3 and upwards to 20-30.

  3. Don't worry! There is a solution!

  4. As someone in my mid-20s who is already forgetting things, a friendly mid-30-year-old mentioned that in 10 years I will have a third more life to be trying to remember as well. This should also serve as a really good reason to always comment your code as you go, and to definitely write notes. "Past me thought future me knew exactly what I meant, even though past me actually had no idea what they were going on about."


The foundation of science.

Observation

There are many things that could be considered the foundations of science. I believe that one of the earliest foundations you can possibly engage in is observation.


Evidence

In a more-than-goldfish form, observation means holding information. It means keeping things for review later in your life, whether at the end of this week, this month, or this year. Observation is only the start. Writing it down makes it evidence. Biased, personal, scrawled, (bad) evidence, but evidence all the same. If you want to be more effective at changing your mind, you need to know what your mind says.


Review

It's great to make notes; that's exactly what I am saying. But it goes further: take notes and then review them. Weekly, monthly, yearly. Unsure about where you are going? Know where you have come from. With that you can move forward with better purpose.


My note taking process:


1. get a notebook.

This picture includes some types of notebooks that I have tried.

  1. A4 lined paper, cardboard front and back. It became difficult to carry because it was big, and hard to open up and use as well. Side-bound is also something I didn't like, because I am left-handed and it seemed to get in my way.

  2. A bad photo, but it's a pad of grid paper. I found a stack of these in the middle of the road late at night, as if they had fallen off a truck. I really liked them, except that they were held together by essentially nothing and fell to pieces by the time I got to the bottom of the pad.

  3. Lined note paper. I will never go back to a book that doesn't hold together; the risk of losing pages is terrible. I don't mind occasionally ripping out some paper, but losing a page when I didn't want to has never worked out well for me.

  4. Top spiral-bound, 100 pages. This did not have enough pages; I bought it after a 200-pager ran out of paper and I needed a quick replacement. Well, it was quick: I used it up in half the time the last book lasted.

  5. Top spiral-bound 200-page notepad with a plastic cover; these are the type of book I currently use. Number 8 is the book I am writing in right now.

  6. 300 pages, top spiral-bound. As you can see by the tape, it started falling apart by the time I got to the end of it.

  7. Small notebooks. I got these because they were 48c each; they never worked for me. I would bend them, forget them, leave them in the wrong places, and generally not have them around when I wanted them.

  8. I am about halfway through my current book; the first page says 23/7/15 and today it is 1/9/15. Estimate a book every two months, although it really depends on how you use it.

  9. A future book I will try. It holds a pen, so I will probably find that useful.

  10. Also a future one; I expect it to be too small to be useful for me.

  11. A gift from a more organised person than I. It is a Moleskine grid-paper book and I plan to also try it soon.


The important take-away from this is: try several; they might work in different ways and for different reasons. Has your life changed substantially, i.e. you don't sit much at a desk any more? Is the book not working? Maybe another type of book would work better.


I only write on the bottom of the flip-page, and occasionally scrawl diagrams on the other side of the page, but only when they are relevant. This way I can always flip through easily, and not worry about the other side of the paper.

 

2. carry a notebook. Everywhere. Find a way to make it a habit. Don't carry a bag? You could; then you can carry your notepad everywhere with you in it. Or consider a pocket-sized book as a solution to not wanting to carry a bag.


3. when you stop moving, turn the notebook to the correct page and write the date.

Writing the date is almost entirely useless. I really never care what the date is. I sometimes care that when I look back over the book I can see the timeline around which events happened, but really, the date means nothing to me.


What writing the date helps to do:

  • make sure you have a writing implement

  • make sure it works

  • make sure you are on the right page

  • make sure you can see the pad

  • make sure you can write in this position

  • make you start a page

  • make you consider writing more things

  • make it look to others like you know what you are doing (signalling that you are a note-taker is super important: it helps people get used to you as a note-taker and encourages that persona onto you)


This is the reason why I write the date. I can't stress enough that I don't care what the date is, and yet I write it anyway.


4. Other things I write:

  • Names of people I meet. Congratulations: you are one step closer to never forgetting the name of anyone, ever. Also, when you want to think "When did I last see Bob?", you can kind of look it up in a dumb, date-sorted list. (To be covered in my post about names, but it's a lot easier to look a name up 5 minutes later when you have it written down.)

  • Where I am/what event I am at (nice to know what you go to, sometimes)

  • What time I got here, or what time it started (if it's a meeting)

  • What time it ended (or what time I stopped writing things)


At this point, the rest of the things you write are kind of personal choices. Some of mine are:

  • Interesting thoughts I have had

  • Interesting quotes people say

  • Action points that I want to do if I can't do them immediately.

  • Shopping lists

  • diagrams of what you are trying to say.

  • Graphs you see.

  • the general topic of conversation as it changes. (so far this is enough for me to remember the entire conversation and who was there and what they had to say about the matter)


Sexy.

That's right. I said it. It's sexy. There are occasional discussion events near where I live that I go to with a notepad. Am I better than the average dude who shows up to chat? No. But everyone knows me: the guy who takes notes. And damn, they know I know what I am talking about. And damn, they all wish they were me. You know how glasses became a geek-culture signal? Well, this is one too. Like no other. Want to signal being a sharp human who knows what's going down? Carry a notebook, and show it off to people.


The coordinators have said to me: "It makes me so happy to see someone taking notes; it really makes me feel like I am saying something useful." The least I can do is take notes.

 


Other notes about notebooks

The number of brilliant people I know who carry a book of some kind far outweighs the number who don't. I don't usually trust the common opinion, but sometimes you just gotta go with what's right.


If it stops working, at least you tried it. If it works, you have evidence and can change the world in the future.


"I write in my phone". (sounds a lot like, "I could write notes in my phone") I hear this a lot.  Especially in person while I am writing notes. Indeed you do. Which is why I am the one with a notebook out and at the end of talking to you I will actually have notes and you will not. If you are genuinely the kind of person with notes in their phone I commend you for doing something with technology that I cannot seem to have sorted out; but if you are like me; and a lot of other people who could always say they could take notes in their phone; but never do; or never look at those notes... Its time to fix this.


A quote from a friend: "I realized in my mid twenties that I would look like a complete badass in a decade, if I could point people to a shelf of my notebooks." And I love this too.


A friend has suggested that flashcards suit his brain, and notepads do not. I agree that flashcards have benefits, namely around organising and shuffling things. It really depends on what notes you are taking. I quite like having a default chronology to things, but that might not work for you.


In our local Rationality Dojos we give away notebooks.  For the marginal cost of a book of paper, we are making people's lives better.


The big take away

Get a notebook; make notes; add value to your life.

 

 


Meta:

This post took 3 hours to write over a week


Please add your experiences if your note-taking process works differently.


Please fill out the survey on whether you found this post helpful.

Thinking like a Scientist

5 FrameBenignly 19 July 2015 02:43PM
I've often wondered why scientific thinking seems to be so rare.  What I mean by this is dividing problems into theory and empiricism: specifying your theory exactly and then looking for evidence to either confirm or deny it, or finding evidence from which to later form an exact theory.

This is a bit narrower than the broader scope of rational thinking.  A lot of rationality isn't scientific.  Scientific methods don't just allow you to get a solution, but also to understand that solution.

For instance, a lot of early Renaissance tradesmen were rational, but not scientific.  They knew that a certain set of steps produced iron, but the average blacksmith couldn't tell you anything about chemical processes.  They simply did a set of steps and got a result.

Similarly, a lot of modern medicine is rational, but not too scientific.  A doctor sees something and it looks like a common ailment with similar symptoms they've seen often before, so they just assume that's what it is.  They may run a test to verify their guess.  Their job generally requires a gigantic memory of different diseases, but not too much knowledge of scientific investigation.

What's most damning is that our science curricula in schools don't teach much scientific thinking.

What we get instead is mostly useless facts.  We learn what a cell membrane is, or how to balance a chemical equation.  Learning about, say, the difference between independent and dependent variables is often left to circumstance.  You learn about type I and type II errors when you happen upon a teacher who thinks it's a good time to include that in the curriculum, or you learn it on your own.  Some curricula include a required research methods course, but the availability and quality of this course vary greatly between both disciplines and colleges.  Why there isn't a single standardized method of teaching this stuff is beyond me.  Even math curricula are structured around calculus instead of the much more useful statistics and data science, placing ridiculous hurdles in front of the typical non-major that most won't surmount.
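For readers who never got that research methods course, here is a small self-contained simulation of the two error types just mentioned. It is my own example, with arbitrary sample sizes and effect sizes, and a crude z-test rather than anything from a statistics library:

```python
import random
import statistics

def significant(sample_a, sample_b, threshold=1.96):
    """Crude two-sample z-test: is the difference in means 'significant' at ~5%?"""
    n = len(sample_a)
    diff = statistics.mean(sample_a) - statistics.mean(sample_b)
    se = ((statistics.pvariance(sample_a) + statistics.pvariance(sample_b)) / n) ** 0.5
    return abs(diff / se) > threshold

rng = random.Random(42)
trials, n = 2000, 30

# Type I error: both groups come from the same distribution (no real effect),
# yet the test sometimes fires anyway.
type1 = sum(
    significant([rng.gauss(0, 1) for _ in range(n)],
                [rng.gauss(0, 1) for _ in range(n)])
    for _ in range(trials)) / trials

# Type II error: a real effect of 0.5 standard deviations, which the test
# sometimes misses at this sample size.
type2 = sum(
    not significant([rng.gauss(0.5, 1) for _ in range(n)],
                    [rng.gauss(0, 1) for _ in range(n)])
    for _ in range(trials)) / trials

print(f"type I rate (false positives): ~{type1:.2f}")   # about 0.05 by construction
print(f"type II rate (missed effects): ~{type2:.2f}")   # large; n = 30 is a small sample
```

The second number is the quantitative version of "conclusions based on small sample sizes": with thirty subjects per group, a half-standard-deviation effect is missed roughly half the time.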

It should not be surprising then that so many fail at even basic analysis.  I have seen many people make basic errors that they are more than capable of understanding but simply were never taught.  People aren't precise with their definitions.  They don't outline their relevant variables.  They construct far too complex theoretical models without data.  They come to conclusions based on small sample sizes.  They overweight personal experiences, even those experienced by others, and underweight statistical data.  They focus too much on outliers and not enough on averages.  Even professors, who do excellent research otherwise, often suddenly stop thinking analytically as soon as they step outside their domain of expertise.  And some professors never learn the proper method.

Much of this site focuses on logical consistency and eliminating biases.  It often takes this to an extreme: what Yvain refers to as X-Rationality.  But eliminating biases barely scratches the surface of what is often necessary to truly understand a problem.  This may be why it is said that learning about rationality often reduces rationality.  An incomplete, slightly improved, but still quite terrible solution may generate a false sense of certainty.  Unbiased analysis won't fix a lousy dataset.  And it seems rather backwards to focus on what not to do (biases) rather than what to do (analytic techniques).

 

True understanding is often extremely hard.  Good scientific analysis is hard.  It's disappointing that most people don't seem to understand even the basics of science.

The Galileo affair: who was on the side of rationality?

35 Val 15 February 2015 08:52PM

Introduction

A recent survey showed that the LessWrong discussion forums mostly attract readers who are predominantly either atheists or agnostics, and who lean towards the left or far left in politics. As one of the main goals of LessWrong is overcoming bias, I would like to come up with a topic which I think has a high probability of challenging some biases held by at least some members of the community. It's easy to fight against biases when the biases belong to your opponents, but much harder when you yourself might be the one with biases. It's also easy to cherry-pick arguments which prove your beliefs and ignore those which would disprove them. It's also common in such discussions that the side calling itself rationalist makes exactly the same mistakes they accuse their opponents of making. Far too often have I seen people (sometimes even Yudkowsky himself) who are very good rationalists but can quickly become irrational and commit several fallacies when arguing about history or religion. This most commonly manifests when we take the dumbest and most fundamentalist young Earth creationists as an example, win easily against them, then claim that we have disproved all arguments ever made by any theist. No, this article will not be about whether God exists or not, or whether any real-world religion is fundamentally right or wrong. I strongly discourage any discussion of these two topics.

This article has two main purposes:

1. To show an interesting example where the scientific method can lead to wrong conclusions

2. To overcome a certain specific bias: namely, the belief that the pre-modern Catholic Church opposed the concept of the Earth orbiting the Sun with the deliberate purpose of hindering scientific progress and keeping the world in ignorance. I hope this will also prove to be an interesting challenge for your rationality, because it is easy to fight against bias in others, but not so easy to fight against bias in yourselves.

The basis of my claims is that I have read the book written by Galilei himself, and I'm very interested (not a professional, but well-read) in early modern history, especially that of the 16th-17th centuries.

 

Geocentrism versus Heliocentrism

I assume every educated person knows the name of Galileo Galilei. I won't waste the space on the site and the time of the readers presenting a full biography of his life; there are plenty of on-line resources where you can find more than enough biographical information about him.

The controversy?

What is interesting about him is how many people have severe misconceptions about him. Far too often he is celebrated as the one sane man in an era of ignorance, the sole propagator of science and rationality when the powers of that era suppressed any scientific thought and ridiculed everyone who tried to challenge the accepted theories about the physical world. Some even go as far as claiming that people believed the Earth was flat. Although the flat Earth theory was not propagated at all, it's true that the heliocentric view of the Solar System (the Earth revolving around the Sun) was not yet accepted.

However, the claim that the Church was suppressing evidence about heliocentrism "to maintain its power over the ignorant masses" can be disproved easily:

- The common people didn't go to school where they could have learned about it, and those commoners who did go to school just learned to read and write, not much more, so they couldn't have cared less about what orbits what. This differs from 20th-21st century fundamentalists who want to teach young Earth creationism in schools: back then, in the 17th century, there were no classes where either the geocentric or heliocentric views could have been taught to the masses.

- Heliocentrism was not discovered by Galilei. It was first proposed by Nicolaus Copernicus almost 100 years before Galilei. Copernicus never had any trouble with the Inquisition. His theories didn't gain wide acceptance, but he and his followers weren't persecuted either.

- Galilei was only sentenced to house arrest, and mostly for insulting the pope and doing other unwise things. The political climate in 17th century Italy was quite messy, and Galilei made quite a few unfortunate choices regarding his alliances. Actually, Galilei was the one who brought religion into the debate: his opponents were citing Aristotle, not the Bible, in their arguments. Galilei, however, wanted to reinterpret Scripture based on his (unproven) beliefs, and insisted that he should have the authority to push his own views about how people interpret the Bible. Of course this pissed quite a few people off, and his case was not helped by his publicly calling the pope an idiot.

- For a long time Galilei was a good friend of the pope, while holding heliocentric views. So were a couple of other astronomers. The heliocentrism-geocentrism debates were common among astronomers of the day, and were not hindered, but even encouraged by the pope.

- The heliocentrism-geocentrism debate was never an atheism-theism debate. The heliocentrists were committed theists, just like the defenders of geocentrism. The Church didn't suppress science; it actually funded the research of most scientists.

- The defenders of geocentrism didn't use the Bible as a basis for their claims. They used Aristotle and, for the time, good scientific reasoning. The heliocentrists were much more prone to use the "God did it" argument when they couldn't defend the gaps in their proofs.

 

The birth of heliocentrism.

By the 16th century, astronomers had plotted the movements of the most important celestial bodies in the sky. Observing the motion of the Sun, the Moon and the stars, it would seem obvious that the Earth is motionless and everything orbits around it. This model (called geocentrism) had only one minor flaw: the planets would sometimes make a loop in their motion, "moving backwards". Modeling this required a lot of very complicated formulas. Thus, by virtue of Occam's razor, a theory was born which could better explain the motion of the planets: what if the Earth and everything else orbited around the Sun? However, this new theory (heliocentrism) had a lot of issues, because while it could explain the looping motion of the planets, there were a lot of things which it either couldn't explain, or which the geocentric model explained much better.

 

The proofs, advantages and disadvantages

The heliocentric view had only a single advantage over the geocentric one: it could describe the motion of the planets with much simpler formulas.

However, it had a number of severe problems:

- Gravity. Why do objects have weight, and why are they all pulled towards the center of the Earth? Why don't objects fall off the Earth on the other side of the planet? Remember, Newton wasn't even born yet! The geocentric view had a very simple explanation, dating back to Aristotle: it is the nature of all objects to strive towards the center of the world, and the center of the spherical Earth is the center of the world. The heliocentric theory couldn't counter this argument.

- Stellar parallax. If the Earth is not stationary, then the relative position of the stars should change as the Earth orbits the Sun. No such change was observable by the instruments of that time. Only in the first half of the 19th century did we succeed in measuring it, and only then was the movement of the Earth around the Sun finally proven.

- Galilei tried to use the tides as a proof. The geocentrists argued that the tides are caused by the Moon, even if they didn't know by what mechanism, but Galilei said that this was just a coincidence and the tides are not caused by the Moon: just as if we put a barrel of water onto a cart, the water would be still if the cart was stationary and would slosh around if the cart was pulled by a horse, so the tides are caused by the water sloshing around as the Earth moves. If you read Galilei's book, you will discover quite a number of such silly arguments, and you'll see that Galilei was anything but a rationalist. Instead of changing his views in the face of overwhelming proofs, he used all possible fallacies to push his view through.

Actually the most interesting author on this topic was Riccioli. If you study his writings you will get definite proof that the heliocentrism-geocentrism debate was handled with scientific accuracy and rationality, and was not a religious debate at all. He defended geocentrism, and presented 126 arguments on the topic (49 for heliocentrism, 77 against), only two of which (both for heliocentrism) had any religious connotations, and he stated valid responses to both of them. This means that he, as a rationalist, presented both sides of the debate in a neutral way, and used reasoning instead of appeal to authority or faith in all cases. Actually this was what the pope expected of Galilei, and such a book was what he commissioned from Galilei. Galilei instead wrote a book in which he caricatured the pope as a strawman, and instead of presenting arguments for and against both world-views in a neutral way, he wrote a book which can be called anything but scientific.

By the way, Riccioli was a Catholic priest. And a scientist. And, it seems to me, also a rationalist. Studying the works of people like him, you might want to change your mind if you perceive a conflict between science and religion, which is part of today's public consciousness only because of a small number of very loud religious fundamentalists, helped by some committed atheists trying to suggest that all theists are like them.

Finally, I would like to copy a short summary about this book:

Journal for the History of Astronomy, Vol. 43, No. 2, p. 215-226
In 1651 the Italian astronomer Giovanni Battista Riccioli published within his Almagestum Novum, a massive 1500 page treatise on astronomy, a discussion of 126 arguments for and against the Copernican hypothesis (49 for, 77 against). A synopsis of each argument is presented here, with discussion and analysis. Seen through Riccioli's 126 arguments, the debate over the Copernican hypothesis appears dynamic and indeed similar to more modern scientific debates. Both sides present good arguments as point and counter-point. Religious arguments play a minor role in the debate; careful, reproducible experiments a major role. To Riccioli, the anti-Copernican arguments carry the greater weight, on the basis of a few key arguments against which the Copernicans have no good response. These include arguments based on telescopic observations of stars, and on the apparent absence of what today would be called "Coriolis Effect" phenomena; both have been overlooked by the historical record (which paints a picture of the 126 arguments that little resembles them). Given the available scientific knowledge in 1651, a geo-heliocentric hypothesis clearly had real strength, but Riccioli presents it as merely the "least absurd" available model - perhaps comparable to the Standard Model in particle physics today - and not as a fully coherent theory. Riccioli's work sheds light on a fascinating piece of the history of astronomy, and highlights the competence of scientists of his time.

The full article can be found under this link. I recommend it to everyone interested in the topic. It shows that geocentrists at that time had real scientific proofs and real experiments regarding their theories, and for most of them the heliocentrists had no meaningful answers.

 

Disclaimers:

- I'm not a Catholic, so I have no reason to defend the historic Catholic church due to "justifying my insecurities" - a very common accusation against someone perceived to be defending theists in a predominantly atheist discussion forum.

- Any discussion about any perceived proofs for or against the existence of God would be off-topic here. I know it's tempting to show off your best proofs against your carefully constructed straw-men yet again, but this is just not the place for it, as it would detract from the main purpose of this article, as summarized in its introduction.

- English is not my native language. Nevertheless, I hope that what I wrote is comprehensible. If there is any part of my article which you find ambiguous, feel free to ask.

I have great hopes and expectations that the LessWrong community is suitable for discussing such ideas. I have experience presenting these ideas in other, predominantly atheist internet communities, and most often the reaction was outright flaming, a hurricane of unexplained downvotes, and prejudicial ad hominem attacks based on what affiliations people assumed I was subscribing to. It is common for people to decide whether they believe a claim or not based solely on whether the claim suits their ideological affiliations. The best quality of rationalists, however, should be the ability to change their views when confronted with overwhelming proof, instead of coming up with more and more convoluted explanations. In the time I have spent in the LessWrong community, I have come to respect that the people here can argue in a civil manner, listening to the arguments of others instead of discarding them outright.

 

Some recent evidence against the Big Bang

6 JStewart 07 January 2015 05:06AM

I am submitting this on behalf of MazeHatter, who originally posted it here in the most recent open thread. Go there to upvote if you like this submission.

Begin MazeHatter:

I grew up thinking that the Big Bang was the beginning of it all. In 2013 and 2014, a good number of observations threw some of our basic assumptions about the theory into question. There were anomalies observed in the CMB, previously ignored, now confirmed by Planck:

Another is an asymmetry in the average temperatures on opposite hemispheres of the sky. This runs counter to the prediction made by the standard model that the Universe should be broadly similar in any direction we look.

Furthermore, a cold spot extends over a patch of sky that is much larger than expected.

The asymmetry and the cold spot had already been hinted at with Planck’s predecessor, NASA’s WMAP mission, but were largely ignored because of lingering doubts about their cosmic origin.

“The fact that Planck has made such a significant detection of these anomalies erases any doubts about their reality; it can no longer be said that they are artefacts of the measurements. They are real and we have to look for a credible explanation,” says Paolo Natoli of the University of Ferrara, Italy.

... One way to explain the anomalies is to propose that the Universe is in fact not the same in all directions on a larger scale than we can observe. ...

“Our ultimate goal would be to construct a new model that predicts the anomalies and links them together. But these are early days; so far, we don’t know whether this is possible and what type of new physics might be needed. And that’s exciting,” says Professor Efstathiou.

http://www.esa.int/Our_Activities/Space_Science/Planck/Planck_reveals_an_almost_perfect_Universe

We are also getting a better look at galaxies at greater distances, thinking they would all be young galaxies, and finding they are not:

The finding raises new questions about how these galaxies formed so rapidly and why they stopped forming stars so early. It is an enigma that these galaxies seem to come out of nowhere.

http://carnegiescience.edu/news/some_galaxies_early_universe_grew_quickly

http://mq.edu.au/newsroom/2014/03/11/granny-galaxies-discovered-in-the-early-universe/

The newly classified galaxies are striking in that they look a lot like those in today's universe, with disks, bars and spiral arms. But theorists predict that these should have taken another 2 billion years to begin to form, so things seem to have been settling down a lot earlier than expected.

B. D. Simmons et al. Galaxy Zoo: CANDELS Barred Disks and Bar Fractions. Monthly Notices of the Royal Astronomical Society, 2014 DOI: 10.1093/mnras/stu1817

http://www.sciencedaily.com/releases/2014/10/141030101241.htm

The findings cast doubt on current models of galaxy formation, which struggle to explain how these remote and young galaxies grew so big so fast.

http://www.nasa.gov/jpl/spitzer/splash-project-dives-deep-for-galaxies/#.VBxS4o938jg

It seems we don't even have to look so far away to find evidence that galaxy formation is inconsistent with the Big Bang timeline.

If the modern galaxy formation theory were right, these dwarf galaxies simply wouldn't exist.

Merritt and study lead Marcel Pawlowski consider themselves part of a small-but-growing group of experts questioning the wisdom of current astronomical models.

"When you have a clear contradiction like this, you ought to focus on it," Merritt said. "This is how progress in science is made."

http://www.natureworldnews.com/articles/7528/20140611/galaxy-formation-theories-undermined-dwarf-galaxies.htm

http://arxiv.org/abs/1406.1799

Another observation is that lithium abundances are way too low for the theory in other places, not just here:

A star cluster some 80,000 light-years from Earth looks mysteriously deficient in the element lithium, just like nearby stars, astronomers reported on Wednesday.

That curious deficiency suggests that astrophysicists either don't fully understand the big bang, they suggest, or else don't fully understand the way that stars work.

http://news.nationalgeographic.com/news/2014/09/140910-space-lithium-m54-star-cluster-science/

It also seems there is larger scale structure continually being discovered larger than the Big Bang is thought to account for:

"The first odd thing we noticed was that some of the quasars' rotation axes were aligned with each other -- despite the fact that these quasars are separated by billions of light-years," said Hutsemékers. The team then went further and looked to see if the rotation axes were linked, not just to each other, but also to the structure of the Universe on large scales at that time.

"The alignments in the new data, on scales even bigger than current predictions from simulations, may be a hint that there is a missing ingredient in our current models of the cosmos," concludes Dominique Sluse.

http://www.sciencedaily.com/releases/2014/11/141119084506.htm

D. Hutsemékers, L. Braibant, V. Pelgrims, D. Sluse. Alignment of quasar polarizations with large-scale structures. Astronomy & Astrophysics, 2014

Dr Clowes said: "While it is difficult to fathom the scale of this LQG, we can say quite definitely it is the largest structure ever seen in the entire universe. This is hugely exciting -- not least because it runs counter to our current understanding of the scale of the universe.

http://www.sciencedaily.com/releases/2013/01/130111092539.htm

These observations have been made just recently. It seems that in the 1980s, when I was first introduced to the Big Bang as a child, the experts in the field already knew there were problems with it, and devised inflation as a solution. And today, the validity of that solution is being called into question by those same experts:

In light of these arguments, the oft-cited claim that cosmological data have verified the central predictions of inflationary theory is misleading, at best. What one can say is that data have confirmed predictions of the naive inflationary theory as we understood it before 1983, but this theory is not inflationary cosmology as understood today. The naive theory supposes that inflation leads to a predictable outcome governed by the laws of classical physics. The truth is that quantum physics rules inflation, and anything that can happen will happen. And if inflationary theory makes no firm predictions, what is its point?

http://www.physics.princeton.edu/~steinh/0411036.pdf

What are the odds that 2015 will be more like 2014, where we (again) found larger and older galaxies at greater distances, or more like 1983?

[Link] Chalmers on Computation: A first step From Physics to Metaethics?

0 john_ku 18 November 2014 10:39AM

A Computational Foundation for the Study of Cognition by David Chalmers

Abstract from the paper:

Computation is central to the foundations of modern cognitive science, but its role is controversial. Questions about computation abound: What is it for a physical system to implement a computation? Is computation sufficient for thought? What is the role of computation in a theory of cognition? What is the relation between different sorts of computational theory, such as connectionism and symbolic computation? In this paper I develop a systematic framework that addresses all of these questions.

Justifying the role of computation requires analysis of implementation, the nexus between abstract computations and concrete physical systems. I give such an analysis, based on the idea that a system implements a computation if the causal structure of the system mirrors the formal structure of the computation. This account can be used to justify the central commitments of artificial intelligence and computational cognitive science: the thesis of computational sufficiency, which holds that the right kind of computational structure suffices for the possession of a mind, and the thesis of computational explanation, which holds that computation provides a general framework for the explanation of cognitive processes. The theses are consequences of the facts that (a) computation can specify general patterns of causal organization, and (b) mentality is an organizational invariant, rooted in such patterns. Along the way I answer various challenges to the computationalist position, such as those put forward by Searle. I close by advocating a kind of minimal computationalism, compatible with a very wide variety of empirical approaches to the mind. This allows computation to serve as a true foundation for cognitive science.

See my welcome thread submission for a brief description of how I conceive of this as the first step towards formalizing friendliness.

A "Holy Grail" Humor Theory in One Page.

-1 EGarrett 18 August 2014 10:26AM

Alrighty, with the mass downvoters gone, I can make the leap to posting some ideas. Here, in one page, is the humor theory I've been developing over the last few months, have discussed at meet-ups, and have written two SSRN papers about. I've taken the document I posted on the Facebook group and retyped and formatted it here.

I strongly suspect that it's the correct solution to this unsolved problem. There was even a new neurology study released in the last few days that confirms one of the predictions I drew from this theory about the evolution of human intelligence.

Note that I tried to fit as much info as I could on the page, but obviously it's not enough space to cover everything, and the other papers are devoted to that. Any constructive questions, discussion etc are welcome.



 

A "Holy Grail" Humor Theory in One Page.


Plato, Aristotle, Kant, Freud, and hundreds of other philosophers have tried to understand humor. No one has ever found a single idea that explains it in all its forms, or shows what's sufficient to create it. Thus, it's been called a "Holy Grail" of social science. Consider this...


In small groups without language, where we evolved, social orders were needed for efficiency. But fighting for leadership would hurt them. So a peaceful, nonverbal method was extremely beneficial. Thus, the "gasp" we make when seeing someone fall evolved into a rapid-fire version at seeing certain failures, which allowed us to signal others to see what happened, and know who not to follow. The reaction, naturally, would feel good and make us smile, to lower our aggression and show no threat. This reaction is called laughter. The instinct that controls it is called humor. It's triggered by the brain weighing things it observes in the proportion:


Humor = ((Quality_expected − Quality_displayed) × Noticeability × Validity) / Anxiety

 

Or H = ((Qe − Qd) · N · V) / A. When the result of this ratio is greater than 0, we find the thing funny and will laugh, in the smallest amounts with slight smiles, small feelings of pleasure, or small diaphragm spasms. The numerator terms simply state that something has to be significantly lower in quality than what we assumed, and that we must notice it and feel it's real; the denominator states that anxiety lowers the reaction. This is because laughter is a noisy reflex that threatens someone else's status, so if there is a chance of violence from the person, a danger in threatening a loved one's status, or a predator or other threat from making noise, the reflex will be mitigated. The common feeling amongst those situations, anxiety, has come to cause this.
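Read purely as arithmetic, the formula can be transcribed into a function. This is my own illustrative transcription, not part of the theory's papers; the inputs are subjective quantities that the one-page summary leaves unscaled, so only the sign and relative size of the output mean anything:

```python
def humor_score(q_expected, q_displayed, noticeability, validity, anxiety):
    """H = ((Qe - Qd) * N * V) / A; the theory predicts laughter when H > 0."""
    if anxiety <= 0:
        raise ValueError("anxiety must be positive for the ratio to make sense")
    return (q_expected - q_displayed) * noticeability * validity / anxiety

# A noticeable, believable failure with no threat attached:
print(humor_score(5, 1, 1.0, 1.0, 1.0))   # 4.0 -> funny
# The same failure by someone we fear; anxiety divides the reaction down:
print(humor_score(5, 1, 1.0, 1.0, 8.0))   # 0.5 -> a much weaker reaction
```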

This may appear to be an ad hoc hypothesis, but unlike those, this can clearly unite and explain everything we've observed about humor, including our cultural sayings and the scientific observations of the previous incomplete theories. Some noticed that it involves surprise, some noticed that it involves things being incorrect, all noticed the pleasure without seeing the reason. This covers all of it, naturally, and with a core concept simple enough to explain to a child. Our sayings, like "it's too soon" for a joke after a tragedy, can all be covered as well ("too soon" indicates that we still have anxiety associated with the event).

The previous confusion about humor came from a few things. For one, there are at least 4 types of laughter: At ourselves, at others we know, at others we don't know (who have an average expectation), and directly at the person with whom we're speaking. We often laugh for one reason instead of the other, like "bad jokes" making us laugh at the teller. In addition, besides physical failure, like slipping, we also have a basic laugh instinct for mental failure, through misplacement. We sense attempts to order things that have gone wrong. Puns and similar references trigger this. Furthermore, we laugh loudest when we notice multiple errors (quality-gaps) at once, like a person dressed foolishly (such as a court jester), exposing errors by others.

We call this the "Status Loss Theory," and we've written two papers on it. The first is 6 pages, offers a chart of old theories and explains this more, with 7 examples. The second is 27 pages and goes through 40 more examples, applying this concept to sayings, comedians, shows, memes, and other comedy types, and even drawing predictions from the theory that have been verified by very recent neurology studies, to hopefully exhaustively demonstrate the idea's explanatory power. If it's not complete, it should still make enough progress to greatly advance humor study. If it is, it should redefine the field. Thanks for your time.

Common sense quantum mechanics

11 dvasya 15 May 2014 08:10PM

Related to: Quantum physics sequence.

TLDR: Quantum mechanics can be derived from the rules of probabilistic reasoning. The wavefunction is a mathematical vehicle to transform a nonlinear problem into a linear one. The Born rule that is so puzzling for MWI results from the particular mathematical form of this functional substitution.

This is a brief overview of a recent paper in Annals of Physics (recently mentioned in Discussion):

Quantum theory as the most robust description of reproducible experiments (arXiv)

by Hans De Raedt, Mikhail I. Katsnelson, and Kristel Michielsen. Abstract:

It is shown that the basic equations of quantum theory can be obtained from a straightforward application of logical inference to experiments for which there is uncertainty about individual events and for which the frequencies of the observed events are robust with respect to small changes in the conditions under which the experiments are carried out.

In a nutshell, the authors use the "plausible reasoning" rules (as in, e.g., Jaynes' Probability Theory) to recover the quantum-physical results for the EPR and Stern-Gerlach experiments, by adding a notion of experimental reproducibility in a mathematically well-formulated way and without any "quantum" assumptions. Then they show how the Schrodinger equation (SE) can be obtained from the nonlinear variational problem on the probability P for the particle-in-a-potential problem, when the classical Hamilton-Jacobi equation holds "on average". The SE allows one to transform the nonlinear variational problem into a linear one, and in the course of said transformation, the (real-valued) probability P and the action S are combined into a single complex-valued function ~P^(1/2)·exp(iS), which becomes the argument of the SE (the wavefunction).

This casts the "serious mystery" of Born probabilities in a new light. Instead of the observed frequency being the square(d amplitude) of the "physically fundamental" wavefunction, the wavefunction is seen as a mathematical vehicle to convert a difficult nonlinear variational problem for inferential probability into a manageable linear PDE, where it so happens that the probability enters the wavefunction under a square root.

Below I will excerpt some math from the paper, mainly to show that the approach actually works, but outlining just the key steps. This will be followed by some general discussion and reflection.

1. Plausible reasoning and reproducibility

The authors start from the usual desiderata that are well laid out in Jaynes' Probability Theory and elsewhere, and add to them another condition:

  • There may be uncertainty about each event.
  • The conditions under which the experiment is carried out may be uncertain.
  • The frequencies with which events are observed are reproducible and robust against small changes in the conditions.

Mathematically, this is a requirement that the probability P(x|θ,Z) of observation x, given an uncertain experimental parameter θ and the rest of our knowledge Z, is maximally robust to small changes in θ, with the degree of robustness itself independent of θ. Using log-probabilities, this amounts to minimizing the "evidence"

Ev = ln[ P(data|θ+ε,Z) / P(data|θ,Z) ]

for any small ε, so that |Ev| is not a function of θ (but the probability is).

2. The Einstein-Podolsky-Rosen-Bohm experiment

There is a source S that, when activated, sends a pair of signals to two routers R_1 and R_2. Each router then sends the signal to one of its two detectors D_i^+ and D_i^– (i = 1, 2). Each router can be rotated, and we denote as θ the angle between them. The experiment is repeated N times, yielding the data set {x_1,y_1}, {x_2,y_2}, ... {x_N,y_N}, where x and y are the outcomes from the two detectors (+1 or –1). We want to find the probability P(x,y|θ,Z).

After some calculations it is found that the single-trial probability can be expressed as P(x,y|θ,Z) = (1 + xy·E_12(θ))/4, where E_12(θ) = Σ_{x,y=±1} xy·P(x,y|θ,Z) is a periodic function.

From the properties of Bernoulli trials it follows that, for a data set of N trials with n_xy total outcomes of each type {x,y},

Ev = Σ_{x,y=±1} n_xy · ln[ P(x,y|θ+ε,Z) / P(x,y|θ,Z) ],

and expanding this in a Taylor series in ε it is found that

Ev ≈ –(ε²N/2) Σ_{x,y=±1} (1/P(x,y|θ,Z)) (∂P(x,y|θ,Z)/∂θ)² + O(ε³).

The expression in the sum is the Fisher information I_F for P. The maximum robustness requirement means it must be minimized. Writing it down as I_F = (dE_12(θ)/dθ)² / (1 – E_12(θ)²), one finds that E_12(θ) = cos(θ·I_F^(1/2) + φ), and since E_12 must be periodic in the angle, I_F^(1/2) must be a natural number, so the smallest possible value is I_F = 1. Choosing φ = π it is found that E_12(θ) = –cos(θ), and we obtain the result that

P(x,y|θ,Z) = (1 – xy·cos θ)/4,

which is the well-known correlation of two spin-1/2 particles in the singlet state.

Needless to say, our derivation did not use any concepts of quantum theory. Only plain, rational reasoning strictly complying with the rules of logical inference and some elementary facts about the experiment were used.
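As a quick sanity check (mine, not the paper's), it is easy to verify symbolically that E_12(θ) = –cos(θ) does give the minimal Fisher information I_F = 1:

```python
# Verify that E12(theta) = -cos(theta) yields I_F = 1,
# with I_F = (dE12/dtheta)^2 / (1 - E12^2) as defined above.
import sympy as sp

theta = sp.symbols('theta', real=True)
E12 = -sp.cos(theta)
I_F = sp.diff(E12, theta)**2 / (1 - E12**2)
print(sp.simplify(I_F))  # prints 1
```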

3. The Stern-Gerlach experiment

This case is analogous to, and simpler than, the previous one. The setup contains a source emitting a particle with magnetic moment S, a magnet with field in the direction a, and two detectors D+ and D–.

Similarly to the previous section, P(x|θ,Z) = (1 + x·E(θ))/2, where E(θ) = P(+|θ,Z) – P(–|θ,Z) is an unknown periodic function. By complete analogy we seek the minimum of I_F and find that E(θ) = ±cos(θ), so that

P(x|θ,Z) = (1 ± x·cos θ)/2,

In quantum theory, [this] equation is in essence just the postulate (Born’s rule) that the probability to observe the particle with spin up is given by the square of the absolute value of the amplitude of the wavefunction projected onto the spin-up state. Obviously, the variability of the conditions under which an experiment is carried out is not included in the quantum theoretical description. In contrast, in the logical inference approach, [equation] is not postulated but follows from the assumption that the (thought) experiment that is being performed yields the most reproducible results, revealing the conditions for an experiment to produce data which is described by quantum theory.

To repeat: there are no wavefunctions in the present approach. The only assumption is that a dependence of outcome on particle/magnet orientation is observed with robustness/reproducibility.

4. Schrodinger equation

A particle is located at an unknown position θ on a line segment [–L, L]. A second line segment [–L, L] is uniformly covered with detectors. A source emits a signal and the particle's response is detected by one of the detectors.

After going to the continuum limit of infinitely many infinitely small detectors and accounting for translational invariance it is possible to show that the position of the particle θ and of the detector x can be interchanged so that dP(x|θ,Z)/dθ = –dP(x|θ,Z)/dx.

In exactly the same way as before, we need to minimize Ev by minimizing the Fisher information, which is now

I_F = ∫ dx (1/P(x|θ,Z)) (∂P(x|θ,Z)/∂x)².

However, simply solving this minimization problem will not give us anything new, because nothing so far has accounted for the fact that the particle moves in a potential. This needs to be built into the problem, which can be done by requiring that the classical Hamilton-Jacobi equation holds on average. Using the Lagrange multiplier method, we now need to minimize the functional

F = I_F + λ ∫ dx P(x|θ,Z) [ (1/2m)(∂S(x)/∂x)² + V(x) – E ].

Here S(x) is the action (Hamilton's principal function). This minimization yields solutions for the two functions P(x|θ,Z) and S(x). It is a difficult nonlinear minimization problem, but it is possible to find a matching solution in a tractable way using a mathematical "trick". It is known that standard variational minimization of the functional

Q = ∫ dx [ (1/2m) |∂ψ(x)/∂x|² + (V(x) – E) |ψ(x)|² ]

yields the Schrodinger equation for its extrema. On the other hand, if one makes the substitution combining the two real-valued functions P and S into a single complex-valued ψ,

ψ(x|θ,Z) = P(x|θ,Z)^(1/2) · exp(iS(x)),

Q is immediately transformed into F (up to constant factors absorbed into the Lagrange multiplier), concluding the derivation of the Schrodinger equation. Incidentally, ψ is constructed so that P(x|θ,Z) = |ψ(x|θ,Z)|², which is the Born rule.
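The key algebraic step (filled in here by me; constant factors are absorbed into the Lagrange multiplier λ) is that for ψ = P^(1/2)·exp(iS),

```latex
\left|\frac{\partial \psi}{\partial x}\right|^{2}
  = \frac{1}{4P}\left(\frac{\partial P}{\partial x}\right)^{2}
  + P\left(\frac{\partial S}{\partial x}\right)^{2},
\qquad
|\psi|^{2} = P,
```

so the gradient term of Q splits into the Fisher-information density of P plus the kinetic part of the averaged Hamilton-Jacobi constraint, while |ψ|² = P makes the Born rule true by construction rather than by postulate.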

Summing up the meaning of the Schrodinger equation in the present context:

Of course, a priori there is no good reason to assume that on average there is agreement with Newtonian mechanics ... In other words, the time-independent Schrodinger equation describes the collective of repeated experiments ... subject to the condition that the averaged observations comply with Newtonian mechanics.

The authors then proceed to derive the time-dependent SE (independently from the stationary SE) in a largely similar fashion.

5. What it all means

Classical mechanics assumes that everything about the system's state and dynamics can be known (at least in principle). It starts from axioms and proceeds to derive its conclusions deductively (as opposed to inductively). In this respect quantum mechanics is to classical mechanics what probabilistic logic is to classical logic.

Quantum theory is viewed here not as a description of what really goes on at the microscopic level, but as an instance of logical inference:

in the logical inference approach, we take the point of view that a description of our knowledge of the phenomena at a certain level is independent of the description at a more detailed level.

and

quantum theory does not provide any insight into the motion of a particle but instead describes all what can be inferred (within the framework of logical inference) from, or, using Bohr's words, said about, the observed data

Such a treatment of QM is similar in spirit to Jaynes' Information Theory and Statistical Mechanics papers (I, II). Traditionally statistical mechanics/thermodynamics is derived bottom-up from the microscopic mechanics and a series of postulates (such as ergodicity) that allow us to progressively ignore microscopic details under strictly defined conditions. In contrast, Jaynes starts with minimum possible assumptions:

"The quantity x is capable of assuming the discrete values xi ... all we know is the expectation value of the function f(x) ... On the basis of this information, what is the expectation value of the function g(x)?"

and proceeds to derive the foundations of statistical physics from the maximum entropy principle. Of course, these papers deserve a separate post.
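For reference (this is standard textbook material, not specific to these papers), maximizing the entropy −Σ_i p_i ln p_i subject to normalization and the constraint on ⟨f⟩ yields the canonical exponential form:

```latex
p_i = \frac{e^{-\lambda f(x_i)}}{Z(\lambda)},
\qquad
Z(\lambda) = \sum_i e^{-\lambda f(x_i)},
\qquad
\langle f \rangle = -\frac{\partial \ln Z(\lambda)}{\partial \lambda},
```

from which the familiar machinery of statistical physics (partition function, thermodynamic identities) follows.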

This community should be particularly interested in how this all aligns with the many-worlds interpretation. Obviously, any conclusions drawn from this work can only apply to the "quantum multiverse" level and cannot rule out or support any other many-worlds proposals.

In quantum physics, MWI does quite naturally resolve some difficult issues in the "wavefunction-centric" view. However, we see that the concept of the wavefunction is not really central to quantum mechanics. This removes the whole problem of wavefunction collapse that MWI seeks to resolve.

The Born rule is arguably a big issue for MWI. But here it essentially boils down to "x is quadratic in t where t = sqrt(x)". Without the wavefunction (only probabilities) the problem simply does not appear.

Here is another interesting conclusion:

if it is difficult to engineer nanoscale devices which operate in a regime where the data is reproducible, it is also difficult to perform these experiments such that the data complies with quantum theory.

In particular, this relates to the decoherence of a system via random interactions with the environment. Thus decoherence appears not as a physical, intrinsically quantum phenomenon of "worlds drifting apart", but as a property of experiments that are not well isolated from the influence of the environment and are therefore not reproducible. Well-isolated experiments are robust (and described by "quantum inference"), while poorly isolated experiments are not (hence quantum inference does not apply).

In sum, it appears that quantum physics when viewed as inference does not require many-worlds any more than probability theory does.

The Cold War divided Science

21 Douglas_Knight 05 April 2014 11:10PM

What can we learn about science from the divide during the Cold War?

I have one example in mind: America held that coal and oil were fossil fuels, the stored energy of the sun, while the Soviets held that they were the result of geologic forces applied to primordial methane.

At least one side is thoroughly wrong. This isn't a politically charged topic like sociology, or even biology, but a physical science where people are supposed to agree on the answers. This isn't a matter of research priorities, where one side doesn't care enough to figure things out, but a topic that both sides saw to be of great importance, and where they both claimed to apply their theories. On the other hand, Lysenkoism seems to have resulted from the practical importance of crop breeding.

First of all, this example supports the claim that there really was a divide, that science was disconnected into two poorly communicating camps. It suggests that when the two sides reached the same results on other topics, they did so independently. Even if we cannot learn from this example, it suggests that we may be able to learn from other consequences of dividing the scientific community.

My understanding is that although some Russian language research papers were available in America, they were completely ignored and the scientists failed to even acknowledge that there was a community with divergent opinions. I don't know about the other direction.

Some questions:

  • Are there other topics, ideally in physical science, on which such a substantial disagreement persisted for decades (not necessarily between these two parties)?
  • Did the Soviet scientists know that their American counterparts disagreed?
  • Did Warsaw Pact (eg, Polish) scientists generally agree with the Soviets about the origin of coal and oil? Were they aware of the American position? Did other Western countries agree with America? How about other countries, such as China and Japan?
  • What are the current Russian beliefs about coal and oil? I tried running Russian Wikipedia through google translate and it seemed to support the biogenic theory. (right?) Has there been a reversal among Russian scientists? When? Or does Wikipedia represent foreign opinion? If a divide remains, does it follow the Iron Curtain, or some new line?
  • Have I missed some detail that would make me not classify this as an honest disagreement between two scientific establishments?
  • Finally, the original question: what can we learn about the institution of science?

What are some science mistakes you made in college?

5 aarongertler 23 March 2014 05:28AM

Hello, Less Wrong!

This seems like a community with a relatively high density of people who have worked in labs, so I'm posting here.

I recently finished the first draft of something I'm calling "The Hapless Undergraduate's Guide to Research" (HUGR). (Yes, "HUGS" would be a good acronym, but "science" isn't specific enough.) Not sure if it will ever be released, or what the final format will be, but I'll need more things to put in it whatever happens.

Basically, this is meant to be an ever-growing collection of mistakes that new researchers (grad or undergrad) have made while working in labs. Hundreds of thousands of students around the English-speaking world do lab work, and based on my own experiences in a neuroscience lab, it seems like things can easily go wrong, especially when rookie researchers are involved. There's nothing wrong with making mistakes, but it would be nice to have a source of information around that people (especially students) might read, and which might help them watch out for some of the problems with the biggest pain-to-ease-of-avoidance ratios.

Since my experience is specifically in neuroscience, and even more specifically in "phone screening and research and data entry", I'd like to draw from a broad collection of perspectives. And, come to think of it, there's no reason to limit this to research assistants--all scientists, from CS to anthropology, are welcome!

So--what are some science mistakes you have made? What should you have done to prevent them, in terms of "simple habits/heuristics other people can apply"? Feel free to mention mistakes from other people that you've seen, as long as you're not naming names in a damaging way. Thanks for any help you can provide!

 

And here are a couple of examples of mistakes I've gathered so far:

--Research done with elderly subjects. On a snowy day, the sidewalk froze, so subjects couldn't be screened for a day, because no one thought to salt the sidewalks in advance. Lots of scheduling chaos.

--Data entry being done for papers with certain characteristics. Research assistants and principal investigator were not on the same page regarding which data was worth collecting. Each paper had to be read 7 or 8 times by the time all was said and done, and constructing the database took six extra weeks.

--A research assistant clamped a special glass tube too tight, broke it, and found that replacements would take weeks to come in... well, there may not be much of a lesson in that, but maybe knowing that equipment is hard to replace could subconsciously induce more care.

Learn (and Maybe Get a Credential in) Data Science

10 Jayson_Virissimo 01 February 2014 06:39PM

Coursera is now offering a sequence of online courses on data science. They include:

1. The Data Scientist's Toolbox

Upon completion of this course you will be able to identify and classify data science problems. You will also have created your Github account, created your first repository, and pushed your first markdown file to your account.

2. R Programming
In this course you will learn how to program in R and how to use R for effective data analysis. You will learn how to install and configure software necessary for a statistical programming environment, discuss generic programming language concepts as they are implemented in a high-level statistical language. The course covers practical issues in statistical computing which includes programming in R, reading data into R, accessing R packages, writing R functions, debugging, and organizing and commenting R code. Topics in statistical data analysis and optimization will provide working examples.

3. Getting and Cleaning Data
Upon completion of this course you will be able to obtain data from a variety of sources. You will know the principles of tidy data and data sharing. Finally, you will understand and be able to apply the basic tools for data cleaning and manipulation.

4. Exploratory Data Analysis
After successfully completing this course you will be able to make visual representations of data using the base, lattice, and ggplot2 plotting systems in R, apply basic principles of data graphics to create rich analytic graphics from different types of datasets, construct exploratory summaries of data in support of a specific question, and create visualizations of multidimensional data using exploratory multivariate statistical techniques.

5. Reproducible Research
In this course you will learn to write a document using R markdown, integrate live R code into a literate statistical program, compile R markdown documents using knitr and related tools, and organize a data analysis so that it is reproducible and accessible to others.

6. Statistical Inference
In this class students will learn the fundamentals of statistical inference. Students will receive a broad overview of the goals, assumptions and modes of performing statistical inference. Students will be able to perform inferential tasks in highly targeted settings and will be able to use  the skills developed as a roadmap for more complex inferential challenges.

7. Regression Models
In this course students will learn how to fit regression models, how to interpret coefficients, how to investigate residuals and variability.  Students will further learn special cases of regression models including use of dummy variables and multivariable adjustment. Extensions to generalized linear models, especially considering Poisson and logistic regression will be reviewed.

8. Practical Machine Learning
Upon completion of this course you will understand the components of a machine learning algorithm. You will also know how to apply multiple basic machine learning tools. You will also learn to apply these tools to build and evaluate predictors on real data.

9. Developing Data Products
Students will learn how to communicate using statistics and statistical products. Emphasis will be paid to communicating uncertainty in statistical results. Students will learn how to create simple Shiny web applications and R packages for their data products.

You can take the entire sequence for free or pay $49 for each course in order to (upon completion) receive a Specialization Certificate from Johns Hopkins University.

The very popular blog Simply Statistics discusses the program here.

Local truth

13 NancyLebovitz 20 December 2013 05:04PM

New Salt Compounds Challenge the Foundation of Chemistry

The title is overblown (it depends on what you think the foundation is), but get a load of this:

"I think this work is the beginning of a revolution in chemistry," Oganov says. "We found, at low pressures achievable in the lab, perfectly stable compounds that contradict the classical rules of chemistry. If you apply the rather modest pressure of 200,000 atmospheres -- for comparison purposes, the pressure at the center of the Earth is 3.6 million atmospheres -- everything we know from chemistry textbooks falls apart."
Standard chemistry textbooks say that sodium and chlorine have very different electronegativities, and thus must form an ionic compound with a well-defined composition. Sodium's charge is +1, chlorine's charge is -1; sodium will give away an electron, chlorine wants to take an electron. According to chemistry texts and common sense, the only possible combination of these atoms in a compound is 1:1 -- rock salt, or NaCl. "We found crazy compounds that violate textbook rules -- NaCl3, NaCl7, Na3Cl2, Na2Cl, and Na3Cl," says Weiwei Zhang, the lead author and visiting scholar at the Oganov lab and Stony Brook's Center for Materials by Design, directed by Oganov.
"These compounds are thermodynamically stable and, once made, remain indefinitely; nothing will make them fall apart. Classical chemistry forbids their very existence. Classical chemistry also says atoms try to fulfill the octet rule -- elements gain or lose electrons to attain an electron configuration of the nearest noble gas, with complete outer electron shells that make them very stable. Well, here that rule is not satisfied."

And here's the philosophical bit:

"For a long time, this idea was haunting me -- when a chemistry textbook says that a certain compound is impossible, what does it really mean, impossible? Because I can, on the computer, place atoms in certain positions and in certain proportions. Then I can compute the energy. 'Impossible' really means that the energy is going to be high. So how high is it going to be? And is there any way to bring that energy down, and make these compounds stable?"
To Oganov, impossible didn't mean something absolute. "The rules of chemistry are not like mathematical theorems, which cannot be broken," he says. "The rules of chemistry can be broken, because impossible only means 'softly' impossible! You just need to find conditions where these rules no longer hold."
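To make Oganov's point concrete, here is a toy version of "place atoms, compute the energy" (my illustration, not his code), using a Lennard-Jones pair potential for two atoms. "Impossible" arrangements simply come out with very high energy:

```python
# Toy "compute the energy of an arbitrary arrangement" example:
# Lennard-Jones energy of two atoms a distance r apart
# (reduced units: epsilon = sigma = 1).
def lj_energy(r, epsilon=1.0, sigma=1.0):
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

for r in [0.8, 1.0, 2 ** (1 / 6), 2.0]:
    print(f"r = {r:.3f}  E = {lj_energy(r):+.3f}")

# Squeezing the atoms to r = 0.8 costs a lot of energy (E ~ +43, "impossible"),
# while the stable minimum E = -1 sits at r = 2^(1/6).
```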

The obvious example of local truth is relativistic effects being pretty much invisible over the durations and distances that are normal for people, but there's also that the surface of the earth is near enough to flat for many human purposes.

Any suggestions for other truths which could turn out to be local?

I am switching to biomedical engineering and am looking for feedback on my strategy and assumptions

4 [deleted] 16 November 2013 03:42AM

I wrote this post up and circulated it among my rationalist friends. I've copied it verbatim. I figure the more rationally inclined people that can critique my plan the better.

--

TL;DR:

* I'm going to commit to biomedical engineering for a very specific set of reasons related to career flexibility and intrinsic interest.
* I still want to have computer science and design arts skills, but biomedical engineering seems like a better university investment.
* I would like to have my cake and eat it too by doing biomedical engineering, while practicing computer science and design on the side.
* There are potential tradeoffs, weaknesses and assumptions in this decision that are relevant and possibly critical. These include time management, ease of learning, development of problem-solving abilities, and working conditions.

I am posting this here because everyone is pretty clever and likes decisions. I am looking for feedback on my reasoning and the facts in my assumptions so that I can do what's best. This was me mostly thinking out loud, and given the timeframe I'm on I couldn't learn and apply any real formal method other than just thinking it through. So it's long, but I hope that everyone can benefit by me putting this here.

--
So currently I'm weighing going into biomedical engineering as my major over a major in computer science, or the [human-computer interaction/media studies/gaming/industrial design grab bag] major, at Simon Fraser University. Other than the fact that engineering biology is so damn cool, the relevant decision factors include reasons like:

  1. medical science is booming with opportunities at all levels in the system, meaning that there might be a lot of financial opportunity in more exploratory economies like in SV;
  2. the interdisciplinary nature of biomedical engineering means that I have skills with greater transferability as well as insight into a wide range of technologies and processes instead of a narrow few;
  3. aside from molecular biology, biomedical engineering is the field that appears closest to cognitive enhancement and making cyborgs for a living;
  4. compared to most kinds of engineering, it is easier to self-teach computer science and other forms of digital value-making (web design or graphical modelling), due to the availability of educational resources; the approaching-free cost of computing power; established communities based around development; and clear measures of feedback. By contrast, biomedical engineering may require labs to be educated on biological principles, which are increasingly available but still scarce for hobbyists; basic science textbooks vary widely in quality; and there isn't the equivalent of a GitHub for biology, making non-school collaborative learning difficult.

The implication here is that even though I am still interested in computer science, and although biomedical engineering is less upwind than programming and math, it makes more sense to spend a lot of money on a more specialized education to get domain knowledge while doing computer science on the side, than to spend that money on an option whose cost can be made very low through self-study. This conjecture, and the assumptions behind it, is critical to my strategy.

So the best option combination that I figure that I should take is this:

  1. To get the value from Biomedical Engineering, I will do the biomedical engineering curriculum formally at SFU for the rest of my time there as my main focus.
  2. To get the value from computer science, I will make like a hacker and educate myself with available textbooks and look for working gigs in my spare time.
  3. To get the value from the media and design major, I will talk to the faculty directly about what I can do to take their courses on human-computer interaction and industrial design, and otherwise be mentored. As a result I could seize all the really interesting knowledge while ignoring the crap.

Tradeoffs exist, of course. These are a few that I can think of:

  • I don't expect to be making as much as an entry level biomedical engineer as I would as a programmer in Silicon Valley, if that was ever possible; nor do I believe that my income would grow at the same rate. As a counterpoint, my range of potential competencies will be greater than the typical programmer, due to an exposure to physical, chemical, and biological systems, their experimentation, and product development. I feel that this greater flexibility could help with companies or startups that are oriented towards health or technological forecasting, but this is just a guess. In any case that makes me feel more comfortable, having that broader knowledge, but one could argue that programming being so popular and upwind makes it the more stable choice anyway. Don't know.
  • It's difficult to make money as an undergraduate with any of the skills I would pick up in biomedical engineering, at least for a few years. This is important to me because I want to have more-than-minimum-wage jobs as a way of completing my education on a debit. While web and graphic designers can start forming their own employment almost immediately, and while programmers can walk into a business or a bank and hustle, doing so with physics, chemistry or biology seems a bit more difficult. This is somewhat countered by co-op and work placement, and by the fact that it doesn't seem to take too much programming or web design theory and practice before being able to start selling your skills (i.e., on the order of months).
  • Biomedical Engineering has few aesthetic and artistic aspects, the two of which I value. This is what attracted me to the media and design program in the first place. Instead I get to work with technologies which I know will have measurable and practical use, improving the quality of life for the sick and dying. Expressing myself with art and more free-wheeling design is not super urgent, so I'm willing to make this trade. I still hope to be able to orient myself for developing beautiful and useful data visualizations in practical applications, like this guy, and to experiment with maker hacking.

There is still the issue of assuring more-than-dilettante expertise in computer science and design stuff (see Expert Beginner syndrome: http://www.daedtech.com/how-developers-stop-learning-rise-of-the-expert-beginner). I am semi-confident in my ability to network myself into mentorships with members of faculty [at SFU] that are not my own, and if I'm not good at it now I still believe that it's possible. In addition, my dad has recently become a software consultant and is willing to apprentice me, giving a direct education about software engineering (although not necessarily a good one, at least it's somewhat real).

There are potential weaknesses in my analysis and strategy.

  • The time investment in the biomedical engineering faculty at SFU is very high. The requirements are similar to those for a grad student, complete with a 3.00 minimum GPA and a research project. The faculty does everything in its power to allay the burden while still maintaining the standard. However, this crowding out of time reduces the amount of potential time spent learning computer science, which makes the probability of efficient self-teaching go down. (That GPA standard might lead to scholarship access, which is good, but more of an externality in this case.)
  • While we're on the subject of conscientiousness load: conscientiousness is considered to be an invariant personality trait, but I'm not buying it. The typical person may experience on average no change in their conscientiousness, but typical people don't commit to interventions that affect the workload they can take on, whether by strengthening willpower, increasing energy, changing thought patterns (see "The Motivation Hacker") or improving organization through external aids. Still, my baseline level of conscientiousness has historically been quite low. This raises the up-front cost of learning novel material I'm not familiar with, unlike computing, with which I have a stronger familiarity due to lifelong exposure; this lets me cruise by in computing courses but not necessarily ace them. Nevertheless, that's a lower downside risk.
  • Although medical problems are interesting and I have a lot of intrinsic interest in the domain knowledge, there are components of research that interest me and others that I don't currently enjoy as much, as evidenced by my exposure so far. I can see myself getting into the data processing and visualization, drafting ergonomic wearable tech, and circuit design, especially wrt EEGs. Brute-force labwork would be less engaging and takes more out of me, even though systems biology principles are tough but engaging. So there's the possibility that I would only enjoy a limited scope of biomedical engineering work, making the major not worth it or unpleasant.
  • Due to the less steep learning curve and more coherent structure of the computer science field, it seems easier to approach the "career satisfaction" or "work passion" threshold with CS than with BME. Feeling satisfied with your career depends on many factors, but Cal Newport argues that the largest factor is essentially mastery, which leads to involvement. Mastery seems more difficult to gauge with the noisy and prolonged feedback of the engineering sciences, so the motivations with the greatest relative importance might be the satisfaction of turning out product, satisfying factual curiosity or curiosity about established/canonical models (as opposed to curiosity which is more local to your own circumstances, or figuring things out yourself), and, in the case of biomed, saving lives by design. With mathematics and programming, the problem space is such that you can do math and programming for their own sakes.
  • Most biomedical engineering programs around the world are mainly graduate programs. The most often reported experience is that someone getting a PhD in biomedical engineering does so on top of an undergraduate education as a mechanical engineer, an electrical engineer or a computer scientist. The story goes that these problem-solving skills are applied to the biology after being developed - once again a case of some fields being more upwind than others. By contrast, an undergraduate in bioengineering would be taking courses where they are not developing these skills, as our current understanding of biology is not strongly predictive. After talking to one of the faculty heads, the person who designed the program, I found he is very much aware of problems such as these in engineers as they are currently educated. This includes overdoing specialization and under-emphasizing the entire product development process, or a principle of "first, do no harm". He has been working on the curriculum for thirty years as opposed to the seven years of cases like MIT - I consider this moderate evidence that I will not be missing out on the necessary mental toolkit relative to other engineers.
  • In the case where biomedical engineering is less flexible than I believed, I would essentially have a "jack of all trades" education, meaning engineering firms in general would pass over me in favor of a more specialized candidate. This is partially hedged against by learning computer science as an "out", but in the end it points to the possibility that the way I'm perceiving this major's value is incorrect.

So for this "have cake and eat it too" plan to work, there is a larger string of case exceptions in the biomedical option than in the computing options, and definitely than in the media and design option. The reward would be a larger amount of domain-specific knowledge in a field that has held my curiosity for several years now. I would also be playing to one of SFU's comparative advantages: the quality of the biomedical faculty here is high relative to other institutions if the exceptions hold, and potentially the relative quality of the computer science and design faculties as well. (This could be an argument for switching institutions if those two skillsets are a "better fit". However, my intuition is that the cost of doing so is very high and probably wouldn't be worth it.)

Possible points of investigation:

  • What is hooking me most strongly to biomedical engineering are the potentials of cognitive enhancement research and molecular design (like what they have going on at the bio-nano group at Autodesk: http://www.autodeskresearch.com/groups/nano). If these were the careers I was optimizing towards as an end, it might make more sense to actually model what skills and people will be needed to develop these technologies and take advantage of them. After writing this I feel less strongly about these exact fields or careers. Industry research still seems like a good exercise.
  • I will have to be honest that after my experience doing lab work for chemistry at school, I was frustrated by how exhausted I was at the end of each session, physically and mentally. This doesn't necessarily reflect how all lab work will be, especially if it's more intimately tied to something else I want to achieve. And granted, the labs are three hours of standing. It does make me question how I would fare in this work environment, however, and that is worth collecting more information on.
  • To get real evidence of flexibility in skillset, it would be worth polling actual alumni from the program, to see if the convictions about the program hold true.

--

Thoughts, anyone?

LINK: "This novel epigenetic clock can be used to address a host of questions in developmental biology, cancer and aging research."

4 fortyeridania 22 October 2013 07:59AM

The paper is called DNA methylation age of human tissues and cell types and it's from Genome Biology. Here is a Nature article based on the paper.

I have submitted this to LW because of its relevance to the measurement of aging and, hence, to life extension. Here is a bit from the Nature piece:

"Ageing is a major health problem, and interestingly there are really no objective measures of aging, other than a verified birth date," says Darryl Shibata, a pathologist at the University of Southern California in Los Angeles. "Studies like this one provide important new efforts to increase the rigour of human aging studies."

Note: The discrepancy in spelling ("ageing" vs. "aging") is in the original.

[Link] Trouble at the lab

16 [deleted] 21 October 2013 08:51AM

Related: The Real End of Science

From the Economist.

“I SEE a train wreck looming,” warned Daniel Kahneman, an eminent psychologist, in an open letter last year. The premonition concerned research on a phenomenon known as “priming”. Priming studies suggest that decisions can be influenced by apparently irrelevant actions or events that took place just before the cusp of choice. They have been a boom area in psychology over the past decade, and some of their insights have already made it out of the lab and into the toolkits of policy wonks keen on “nudging” the populace.

Dr Kahneman and a growing number of his colleagues fear that a lot of this priming research is poorly founded. Over the past few years various researchers have made systematic attempts to replicate some of the more widely cited priming experiments. Many of these replications have failed. In April, for instance, a paper in PLoS ONE, a journal, reported that nine separate experiments had not managed to reproduce the results of a famous study from 1998 purporting to show that thinking about a professor before taking an intelligence test leads to a higher score than imagining a football hooligan.

 

The idea that the same experiments always get the same results, no matter who performs them, is one of the cornerstones of science’s claim to objective truth. If a systematic campaign of replication does not lead to the same results, then either the original research is flawed (as the replicators claim) or the replications are (as many of the original researchers on priming contend). Either way, something is awry.

...


I recommend reading the whole thing.

 

Supposing you inherited an AI project...

-5 bokov 04 September 2013 08:07AM

Supposing you have been recruited to be the main developer on an AI project. The previous developer died in a car crash and left behind an unfinished AI. It consists of:

A. A thoroughly documented scripting language specification that appears to be capable of representing any real-life program as a network diagram so long as you can provide the following:

 A.1. A node within the network whose value you want to maximize or minimize.

 A.2. Conversion modules that transform data about the real-world phenomena your network represents into a form that the program can read.

B. Source code from which a program can be compiled that will read scripts in the above language. The program outputs a set of values for each node that will optimize the output (you can optionally specify which nodes can and cannot be directly altered, and the granularity with which they can be altered).

It gives remarkably accurate answers for well-formulated questions. Where there is a theoretical limit to the accuracy of an answer to a particular type of question, its answer usually comes close to that limit, plus or minus some tiny rounding error.
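For concreteness, here is a toy version of the kind of interface described in A and B; everything in it (the names, the example network, the choice of optimizer) is my own invention, since the post doesn't specify a syntax:

```python
# Hypothetical toy of the inherited system: a "network" where some nodes are
# directly adjustable and one node's value is to be maximized (item A.1).
from scipy.optimize import minimize

def target_node(v):
    fertilizer, water = v  # adjustable input nodes
    # made-up conversion module (A.2): crop yield with diminishing returns
    return 10 * fertilizer - fertilizer**2 + 8 * water - 2 * water**2

# The optimizer (B): maximize the target node by minimizing its negative.
result = minimize(lambda v: -target_node(v), x0=[1.0, 1.0])
print(result.x)     # values for the adjustable nodes, ~[5.0, 2.0]
print(-result.fun)  # maximized value of the target node, ~33.0
```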

 

Given that, what is the minimum set of additional features you believe would absolutely have to be implemented before this program can be enlisted to save the world and make everyone live happily forever? Try to be as specific as possible.

How probable is Molecular Nanotech?

45 leplen 29 June 2013 07:06AM

About a week ago I posted asking whether bringing up molecular nanotechnology (MNT) as a possible threat avenue for an unfriendly artificial intelligence made FAI research seem less credible, because MNT seemed to me to be not obviously possible. I was told, more or less, to put up and address the science of MNT, or shut up. A couple of people also expressed an interest in seeing a more fact-oriented and less PR-oriented discussion, so I got the ball rolling, and you all have no one to blame but yourselves. I should note before starting that I do not personally have a strong opinion on whether Drexler-style MNT is possible. This isn't something I've researched previously, and I'm open to being convinced one way or the other. If MNT turns out to be likely at the end of this investigation, then hopefully this discussion can provide a good resource for LW/FAI on the topic for people like myself not yet convinced that MNT is the way of the future. As far as I'm concerned, at this point all paths lead to victory.

Nanosystems was the canonical reference mentioned in the last conversation. I purchased it, but about two-thirds of the way through writing this I figured Engines of Creation was giving me enough to work with and cancelled my order. If the science in Nanosystems is really much better than in EoC I can reorder it, but I figured we'd get started for free. 50 bucks is a lot of money to spend on an internet argument.

Before I begin I would like to post the following disclaimers.

1. I am not an expert in many of the claims that border on MNT. I did work at a Nanotechnology center for a year, but that experience was essentially nothing like what Drexler describes. More relevantly I am in the process of completing a Ph.D. in Physics, and my thesis work is on computational modeling of novel materials. I don't really like squishy things, so I'm very much out of my depth when it comes to discussions as to what ribosomes can and cannot accomplish, and I'll happily defer to other authorities on the more biological subjects. With that being said, several of my colleagues run MD simulations of protein folding all day every day, and if a biology issue is particularly important, I can shoot some emails around the department and try and get a more expert opinion.

2. There are several difficulties in precisely addressing Drexler's arguments, because it's not always clear to me at least exactly what his arguments are. I've been going through Engines of Creation and several of his other works, and I'll present my best guess outline here. If other people would like to contribute specific claims about molecular nanotech, I'll be happy to add them to the list and do my best to address them.

3. This discussion is intended to be scientific. As was pointed out previously, Drexler et al. have made many claims about time tables of when things might be invented.  Judging the accuracy of these claims is difficult because of issues with definitions as mentioned in the previous paragraph. I'm not interested in having this discussion encompass Drexler's general prediction accuracy. Nature is the only authority I'm interested in consulting in this thread. If someone wants to make a Drexler's prediction accuracy thread, they're welcome to do so.

4. If you have any questions about the science underlying anything I say, don't hesitate to ask. This is a fairly technical topic, and I'm happy to bring anyone up to speed on basic physics/chemistry terms and concepts.

Discussion


I'll begin by providing some background and highlighting why exactly I am not already convinced that MNT, and especially AI-assisted rapid MNT is the future, and then I'll try and address some specific claims made by Drexler in various publications.

Conservation of energy:

Feynman, and to some extent Drexler, spends an enormous amount of time addressing issues that we are familiar with from dealing with macroscopic pieces of equipment, such as how much space it takes to store things, how parts can wear out, etc. What is not mentioned is how we plan to power these Engines of Creation. Assembling nanotechnology is more than just getting atoms into the individual places you want them; it's a matter of very precise energetic control. The high-resolution energy problem is every bit as difficult as fine-grained control of atom positions, and it is further complicated by the fact that any energy delivery system you contrive for a nano-assembler is also going to impart momentum. In the macroscale world, your factory doesn't start sliding when you hook it up to the grid. At smaller sizes, that may not be true. It's very unclear in most of the discussions I read about these nanofactories what's going to power them. What synthetic equivalent of ATP is going to allow us to out-compete the ribosome? What novel energy source is grey goo going to have access to that will allow it to break and reassemble the bonds necessary for nanofabrication?

Modelling is hard:

Solving the Schrodinger equation is essentially impossible. We can solve it more or less exactly for the Hydrogen atom, but things get very very difficult from there. This is because we don't have a simple solution for the three-body problem, much less the n-body problem. Approximately, the difficulty is that because each electron interacts with every other electron, you have a system where to determine the forces on electron 1, you need to know the position of electrons 2 through N, but the position of each of those electrons depends somewhat on electron 1. We have some tricks and approximations to get around this problem, but they're only justified empirically. The only way we know what approximations are good approximations is by testing them in experiments. Experiments are difficult and expensive, and if the AI is using MNT to gain infrastructure, then we can assume it doesn't already have the infrastructure to run its own physics lab. 
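To illustrate the scale of the problem (a sketch of mine, not from Drexler): even the one-particle Schrodinger equation is typically solved numerically, and the cost explodes combinatorially as particles are added.

```python
# Solve the 1D single-particle Schrodinger equation on a grid by
# diagonalizing a finite-difference Hamiltonian (units: hbar = m = 1).
# A grid of n points suffices for ONE particle in 1D; N interacting
# electrons in 3D would need on the order of n**(3N) grid points,
# which is why approximations (and experiments to validate them)
# are unavoidable.
import numpy as np

n, L = 2000, 10.0
x = np.linspace(-L, L, n)
dx = x[1] - x[0]

# H = -(1/2) d^2/dx^2 + V(x), with V a harmonic well and the second
# derivative approximated by central differences.
diag = 1.0 / dx**2 + 0.5 * x**2
off = -0.5 / dx**2 * np.ones(n - 1)
H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

print(np.linalg.eigvalsh(H)[:3])  # ~[0.5, 1.5, 2.5], the exact levels
```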

A factory isn't the right analogy:

The discussion of nanotechnology seems to me to have an enormous emphasis on Assemblers, or nanofactories, but a factory doesn't run unless it has a steady supply of raw materials and energy resources both arriving at the correct time. The evocation of a factory calls to mind the rigid regularity of an assembly line, but the factory only works because it's situated in the larger, more chaotic world of the economy. Designing new nanofactories isn't a problem of building the factory, but a problem of designing an entire economy. There has to be a source of raw material, an energy source, and means of transporting material and energy from place to place. And, with a microscopic factory, Brownian motion may have moved the factory by the time the delivery van gets there. This fact makes the modelling problem orders of magnitude more difficult. Drexler makes a big deal about how his rigid positional world isn't like the chaotic world of the chemists, but it seems like the chaos is still there; building a factory doesn't get rid of the logistics issue.

Chaos

The reason we can't solve the n-body problem, and lots of other problems such as the double pendulum and the weather, is that it turns out to be a rather unfortunate fact of nature that many systems have a very sensitive dependence on initial conditions. This means that ANY error, any unaccounted-for variable, can perturb a system in dramatic ways. Since there will always be some error (at the bare minimum h/4π), this means that our AI is going to have to do Monte Carlo simulations like the rest of us schmucks and try to eliminate as many degrees of freedom as possible.
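A standard demonstration (mine, not Drexler's) of what sensitive dependence means in practice: two trajectories of the Lorenz system started 10^-9 apart become completely decorrelated within a few dozen time units.

```python
# Two Lorenz-system trajectories with initial conditions 1e-9 apart.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t = np.linspace(0, 40, 4001)
a = solve_ivp(lorenz, (0, 40), [1.0, 1.0, 1.0], t_eval=t, rtol=1e-10, atol=1e-12)
b = solve_ivp(lorenz, (0, 40), [1.0 + 1e-9, 1.0, 1.0], t_eval=t, rtol=1e-10, atol=1e-12)

sep = np.linalg.norm(a.y - b.y, axis=0)
print(sep[0], sep[-1])  # ~1e-9 at t = 0; order-10 (attractor-sized) by t = 40
```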

The laws of physics hold

I didn't think it would be necessary to mention this, but I believe that the laws of physics are pretty much the laws of physics we know right now. I would direct anyone who suggests that an AI has a shot at powering MNT with cold fusion, tachyons, or other physical phenomena not predicted by the standard model to this post. I am not saying there is no new physics, but we understand quantum mechanics really well, and the Standard Model has been confirmed to enough decimal places that anyone who suggests something the Standard Model says can't happen is almost certainly wrong. Even if they have experimental evidence that is supposed to be 99.9999% correct.

 

Specific Claims


Drexler's claims about what we can do now with respect to materials science in general are true. This should be unsurprising; it is not particularly difficult to predict the past. Here are 6 claims he makes about things we can't currently accomplish, which I'll try to evaluate:

  1. Building "gear-like" nanostructures is possible (Toward Integrated Nanosystems)
  2. Predicting crystal structures from first principles is possible (Toward Integrated Nanosystems)
  3. Genetic engineering is a superior form of chemical synthesis to traditional chemical plants. (EoC 6)
  4. "Biochemical engineers, then, will construct new enzymes to assemble new patterns of atoms. For example, they might make an enzyme-like machine which will add carbon atoms to a small spot, layer on layer. If bonded correctly, the atoms will build up to form a fine, flexible diamond fiber having over fifty times as much strength as the same weight of aluminum." (EoC 10)
  5. Proteins can make and break diamond bonds (EoC 11)
  6. Proteins are "programmable" (EoC 11)
1. Maybe. This depends on definitions. We can build molecules that rotate, and indeed they occur naturally, but those are a long way from Drexler's proposals. I haven't run any simulations as to whether specific designs such as the molecular planetary gear he exhibits are actually stable. If anyone has an xyz file for one of those doodads I'll be happy to run a simulation. You might look at the state of the art and imagine that if we can make atomic flip books that molecular gears can't be too far off, but it's not really true. That video is more like molecular feet than molecular hands. We can push a molecule around on the floor, but we can't really do anything useful with it.

2. True. This isn't true yet, but should be possible. I might even work on this after I graduate, if I don't go hedge fund or into AI research.

3. Not wrong, but misleading. The statement "Genetic engineers have now programmed bacteria to make proteins ranging from human growth hormone to rennin, an enzyme used in making cheese." is true in the same sense that copying and pasting someone else's code constitutes programming. Splicing a gene into a plasmid is sweet, but genetic programming implies more control than we have. Similarly, the statement "Whereas engineers running a chemical plant must work with vats of reacting chemicals (which often misarrange atoms and make noxious byproducts), engineers working with bacteria can make them absorb chemicals, carefully rearrange the atoms, and store a product or release it into the fluid around them." implies that bacterial synthesis leads to better yields (false), that bacteria are careful (meaningless), and implies greater control over genetically modified E. coli than we have.

4a. False. Flexible diamond doesn't make any sense. Diamond is sp3-bonded carbon, and those bonds are highly directional. They're not going to flex. Metals are flexible because metallic bonds, unlike covalent bonds, don't confine the electrons in space. Whatever this purported carbon fiber is, it either won't be flexible, or it won't be diamond.

4b. False. It isn't clear that this is even remotely possible. Enzymes don't work like this. Enzymes are catalysts for existing reactions. There is no existing reaction that results in a single carbon atom; that's an enormously energetically unfavorable state. Breaking a single carbon-carbon double bond requires something like 636 kJ/mol (6.5 eV) of energy. That's roughly equivalent to burning 30 units of ATP at the same time. How? How do you get all that energy into the right place at the right time? How does your enzyme manage to hold on to the carbons strongly enough to pull them apart?
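As a back-of-envelope check of the unit conversion quoted above (my arithmetic, not from EoC):

```python
# 636 kJ/mol expressed per bond, in electron-volts.
AVOGADRO = 6.022e23    # bonds per mole
EV_IN_J = 1.602e-19    # joules per electron-volt
print(636e3 / AVOGADRO / EV_IN_J)  # ~6.6 eV, consistent with the ~6.5 eV quoted
```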

5. "A flexible, programmable protein machine will grasp a large molecule (the workpiece) while bringing a small molecule up against it in just the right place. Like an enzyme, it will then bond the molecules together. By bonding molecule after molecule to the workpiece, the machine will assemble a larger and larger structure while keeping complete control of how its atoms are arranged. This is the key ability that chemists have lacked." I'm no biologist, but this isn't how proteins work. Proteins aren't Turing machines. You don't set the state and ignore them. The conformation of a protein depends intimately on its environment. The really difficult part here is that the thing it's holding, the nanopart you're trying to assemble is a big part of the protein's environment. Drexler complains around how proteins are no good because they're soft and squishy, but then he claims they're strong enough to assemble diamond and metal parts. But if the stiff nanopart that you're assembling has a dangling carbon bond waiting to filled then it's just going to cannibalize the squishy protein that's holding it. What can a protein held together by Van der Waals bonds do to a diamond? How can it control the shape it takes well enough to build a fiber?

6. All of these tiny machines are repeatedly described as programmable, but that doesn't make any sense. What programs are they capable of accepting or executing? What set of instructions can a collection of 50 carbon atoms accept and execute? How are these instructions being delivered? This gets back to my factory vs. economy complaint. If nothing else, this seems an enormously sloppy use of language.

 

Some things that are possible

I think we have or will have the technology to build some interesting artificial inorganic structures in very small quantities, primarily using ultra-cold, ultra-high-vacuum laser traps. It's even possible that eventually we could create some functional objects this way, though I can't see any practical way to scale that production up.

"Nanorobots" will be small pieces of metal or dieletric material that we manipulate with lasers or sophisticated magnetic fields, possibly attached to some sort of organic ligand. This isn't much of a prediction, we pretty much do this already. The nanoworld will continue to be statistical and messy.

We will gain some inorganic control over organics like protein and DNA (though not organic over inorganic). This hasn't really been done yet that I'm aware of, but stronger bonds>weaker bonds makes sense. I think there are people trying to read DNA/proteins by pushing the strands through tiny silicon windows. I feel like I heard a seminar along those lines, though I'm pretty sure I slept through it.

 

That brings me through the first 12 pages of EoC or so. More to follow. Let me know if the links don't work or the formatting is terrible or I said something confusing. Also, please contribute any specific MNT claims you'd like evaluated, and any resources or publications you think are relevant. Thank you.

Bibliography


Engines of Creation

Toward Integrated Nanosystems

Molecular Devices and Machines

Education control?

12 PhilGoetz 17 May 2013 04:32PM

I'm reading Nurture Shock by Po Bronson & Ashley Merryman. Several things in the book, esp. the chapter on "Tools of the Mind", an intriguing education program, suggest that our education of young children not only isn't very good even when evaluated using tests that the curriculum was designed for, it's worse than just letting kids play. (My analogy and interpretation—don't blame this on the Tools people—is that conventional education may be like a Soviet five-year plan, trying to force children to acquire skills & knowledge that they would have been motivated to learn on their own if there weren't a school, and that early education shouldn't focus entirely on teaching specific facts, but also on teaching how to think, form abstractions, and control impulses.)

Say they're going to play fireman. The Tools teacher teaches the kids about what firemen do and what happens in a fire, and gives the kids different roles to play, then lets them play. They teach facts not because the facts are important, but to make the play session longer and more complicated.  Tools does well in increasing test scores, but even better at reducing disruptive behavior. [1]

Tools has a variety of computer games that are designed to get kids to exercise particular cognitive skills, like focusing on something while being aware of background events. But the games often sound like more-boring ways of teaching kids the same things that video-games teach them.

Tools did no better than the existing curriculum on certain metrics in a recent larger study. But it didn't perform worse, either.

The first study you do with any biological intervention is to compare the intervention to a control group that has no intervention. But in education, AFAIK no one has ever done this. Everyone uses the existing curriculum as the control.

Whatever country you're in, what metrics do you use, and what evidence do you have that your schools are better than nothing at all?

There may be some things that you need to sit kids down and force them to learn—say, arithmetic, math, and typing—but I kinda doubt it's more than 20% of the grade school curriculum. I spent a lot of time practicing penmanship, futilely trying to memorize the capitals and chief exports of all fifty states, and studying the history of Thanksgiving and the American Revolution over and over again.[2] We could have a short-classroom-hours control group, where kids spend a few hours a day learning those few facts they need to know, and the rest of the time playing.

ADDED: There is one kind of control--kids who've not gone to pre-school vs. kids who went to pre-school, or who went to Head Start.

 

[1] I fear somebody is going to complain that disruptive behavior is what we need to teach children so they can innovate and question authority. Open to discussion, but if it worked that way, we'd be overwhelmed with innovators and independent thinkers today.

[2] I actually learned the names of all the states from a song, and learned where they are from a jigsaw puzzle.

[Video] Brainwashed - A Norwegian documentary series on nature and nurture

15 GLaDOS 02 March 2013 12:34PM

Related: The Blank Slate, The Psychological Diversity of Mankind, Admitting to Bias

"Hjernevask" a well known (in Norway at least) documentary series that I am sure will be interesting to rationalists here is now available with English subtitles online. Produced by Ole Martin Ihle and Harald Eia a Norwegian documentarian and comedian, it casts a light on both ways in which we know people to be different as well as the culture that is academia in the Nordic country and probably elsewhere as well.

 

The Series

  1. The Gender Equality Paradox - Why do girls tend to go into empathizing professions and boys into systemizing professions? Why does the labor market become more gender segregated the more economic prosperity a country has?
  2. The Parental Effect - How much influence do parents really have on their children? To what degree is intelligence inherited?
  3. Gay/Straight - To what extent is sexual preference innate? Are there differences between heterosexual and homosexual brains? Is homosexuality a result of a choice or is it innate?
  4. Violence - Are people from some cultures more aggressive than others?
  5. Sex - Are there biological reasons men have a greater tendency than women to want sex without obligation?
  6. Race - Are there significant genetic differences between different peoples?
  7. Nature or Nurture - Is personality acquired or inherited?

The links go to the YouTube videos with English subtitles. Because linkrot sucks, I'm providing another source for the videos.

 

Some Commentary

There was very little in the series that I found new, and I disagreed with some of the presentations. But this is not surprising given my eccentric interest in humans. (^_^) I found the interviews with the scientists and academics interesting, and I think that overall the series presents a good overview; it is well worth watching, especially considering some of the debates I've seen take place here recently. (;_;)

I'm somewhat frustrated by the frequent posts warning us about the dangers of Ev. Psych reasoning. (It seems like we average at least one of these per month).

It seems like a lot of this widespread hostility (the reaction to Harald Eia's Hjernevask is a good example of it) stems from the fact that ev. psych is new. New ideas are held to a much higher standard than old ones. The early reaction to ev. psych within psychology was characteristic of this effect. Behaviorists, Freudians, and social psychologists had all created their own theories of "ultimate causation" for human behaviour. None of those theories would have stood up to the strenuous demands for experimental validation that ev. psych endured.

-Knb

But science started to suffer. With so much easy money, few wanted to study the hard sciences. And the social sciences suffered in another way: the ties with the government became too tight, and created a culture where controversial issues and tough discussions were avoided. Be too critical, and you could risk getting no more money.

It was in this culture that Harald Eia started his studies, in sociology, early in the nineties. He made it as far as becoming a junior researcher, but then dropped out and started a career as a comedian instead. He has said that he suddenly, after reading some books which were not on the syllabus, discovered that he had been cheated. What he was taught in his sociology classes was not up to date with international research, and was based more on ideology than science.

-Bjørn Vassnes

Vassnes wrote that in a 2010 article on the documentary series, which I would also recommend reading. HT to iSteve, where it is quoted in full.

Falsifiable and non-Falsifiable Ideas

-1 shaih 19 February 2013 02:24AM


I have been talking to some people (a few specific people I thought would benefit from and appreciate it) in my dorm and teaching them rationality. I have been thinking about which skills should be taught first, and that made me think about which skill is most important to me as a rationalist.

I decided to start with the question “What does it mean to be able to test something with an experiment?” which could also mean “What does it mean to be falsifiable?”

To help make my point I brought up the thought experiment of the dragon in Carl Sagan's garage, which goes as follows:

Carl: There is a dragon in my garage
Me: I thought dragons only existed in legends and I want to see for myself
Carl: Sure follow me and have a look
Me: I don’t see a dragon in there
Carl: My dragon is invisible
Me: Let me throw some flour in so I can see where the dragon is by the disruption of the flour 
Carl: My dragon is incorporeal

And so on

The answer that I was trying to bring about was along these lines: if something can be tested by an experiment, then it must have at least one effect that differs depending on whether it is true or false. Further, if something has at least one effect that differs depending on whether it is true or false, then I can, at least in theory, test it with an experiment.

This led me to the statement:
If something cannot, even in theory, be tested by experiment, then it has no effect on the world and lacks meaning from a truth standpoint, and therefore from a rational standpoint.

Anthony (the person I was talking to at the time) started his counterargument with the claim that an object in a thought experiment cannot be tested for, but still has meaning.

So I revised my statement: any object that, if brought into the real world, could not be tested for has no meaning. This is under the assumption that if an object could not be tested for in the real world, it also has no effect on anything in the thought experiment; i.e., the story with the dragon would have gone the same way, independent of its truth value, if it were set in the real world.

Then the discussion continued into whether it could be rational to hold a belief that could not even in theory be tested. It became interesting when Anthony gave the following argument: if believing in a dragon in your garage gave you happiness, and the world would be the same either way apart from the happiness, then, combined with the principle that rationality is the art of systematized winning, it is clearly rational to believe in the dragon.

I responded that truth trumps happiness, and that believing in the dragon would force you to hold a false belief, which is not worth the amount of happiness gained by believing it. Further, I argued that it would in fact be a false belief, because p(world) > p(world)·p(impermeable invisible dragon), which is a simple Occam's razor argument.
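
To make that Occam's razor step concrete, here is a tiny numeric sketch; the probabilities below are purely hypothetical, chosen only for illustration. A conjunction can never be more probable than the plain hypothesis it extends.

    # Made-up numbers, purely for illustration: adding a detail to a hypothesis
    # can only lower (or at best preserve) its probability, since
    # P(A and B) = P(A) * P(B | A) <= P(A).
    p_world = 0.9                 # hypothetical prior for "the world as we know it"
    p_dragon_given_world = 0.001  # hypothetical P(impermeable invisible dragon | world)

    p_both = p_world * p_dragon_given_world
    assert p_both <= p_world
    print(p_world, p_both)  # 0.9 vs 0.0009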

My intended direction for this argument with Anthony was to apply these points to theology, but we ran out of time and have not had time to talk again, so that may be a future post.

 

Today, however, Shminux pointed out to me that I held beliefs that were themselves non-falsifiable. I realized then that it might be rational to believe non-falsifiable things for two reasons. (I'm sure there are more, but these are the main ones I can think of; please comment with your own.)

1)   The belief has a beauty to it that flows with falsifiable beliefs and makes known facts fit more perfectly. (This is very dangerous and should not be used lightly, because it focuses too closely on opinion.)

2)   You believe that the belief will someday allow you to make an original theory which will be falsifiable.

Both of these reasons, if not used very carefully, will let in false beliefs. As such, I decided that if a belief or new theory meets these conditions well enough to make me want to believe it, I should put it into a special category of my thoughts (perhaps "conjectures"). This category should be below beliefs in power but still held as part of how the world works, and anything in this category should always strive to leave it, meaning that I should always strive to make any non-falsifiable conjecture no longer a conjecture, either by turning it into a belief or by disproving it.

 

Note: This is my first post, so as well as discussing the post itself, critiques of the writing are deeply welcome; please PM them to me.

 

Rationalist Lent

44 Qiaochu_Yuan 14 February 2013 06:32AM

As I understand it, Lent is a holiday where we celebrate the scientific method by changing exactly one variable in our lives for 40 days. This seems like a convenient Schelling point for rationalists to adopt, so:

What variable are you going to change for the next 40 days?

(I am really annoyed I didn't think of this yesterday.) 

Statistical checks on some social science

17 NancyLebovitz 17 December 2012 05:23PM

Simonsohn, a social scientist, investigates bad use of statistics in his field.

A few good quotes:

The three social psychologists set up a test experiment, then played by current academic methodologies and widely permissible statistical rules. By going on what amounted to a fishing expedition (that is, by recording many, many variables but reporting only the results that came out to their liking); by failing to establish in advance the number of human subjects in an experiment; and by analyzing the data as they went, so they could end the experiment when the results suited them, they produced a howler of a result, a truly absurd finding. They then ran a series of computer simulations using other experimental data to show that these methods could increase the odds of a false-positive result—a statistical fluke, basically—to nearly two-thirds.

Laugh or cry?:"He prefers psychology’s close-up focus on the quirks of actual human minds to the sweeping theory and deduction involved in economics."

Last summer, not long after Sanna and Smeesters left their respective universities, Simonsohn laid out his approach to fraud-busting in an online article called “Just Post It: The Lesson From Two Cases of Fabricated Data Detected by Statistics Alone”. Afterward, his inbox was flooded with tips from strangers. People wanted him to investigate election results, drug trials, the work of colleagues they’d long doubted. He has not replied to these messages. Making a couple of busts is one thing. Assuming the mantle of the social sciences’ full-time Grand Inquisitor would be quite another.

This looks like a clue that there's work available for anyone who knows statistics. Eventually, there will be an additional line of work: telling whether a forensic statistician is competent.
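
The fishing-expedition recipe in the first quote is easy to reproduce. Below is a minimal simulation sketch (the design and all the numbers are my own assumptions, not Simonsohn's): several pure-noise variables, an interim look at the data every ten subjects, and permission to stop and declare victory whenever anything crosses the significance threshold. The nominal 5% false-positive rate balloons.

    import random
    import statistics

    def looks_significant(xs, z_crit=1.96):
        """Crude z-test of 'mean = 0'; the data are pure noise, so the null is true."""
        se = statistics.stdev(xs) / len(xs) ** 0.5
        return abs(statistics.fmean(xs) / se) > z_crit

    def fishing_trip(n_vars=5, looks=range(10, 101, 10)):
        """One simulated study: record several null variables, peek every 10
        subjects, and stop early the moment anything looks 'significant'."""
        data = [[] for _ in range(n_vars)]
        for n in range(1, 101):
            for xs in data:
                xs.append(random.gauss(0, 1))
            if n in looks and any(looks_significant(xs) for xs in data):
                return True  # a false positive, published with a straight face
        return False

    random.seed(0)
    trials = 2000
    rate = sum(fishing_trip() for _ in range(trials)) / trials
    print(f"false-positive rate with fishing: {rate:.0%}")  # far above the nominal 5%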

 

Science, Engineering, and Uncoolness; Here and Now, Then and There

9 Ritalin 08 December 2012 07:52PM

[Feel free to read this poor little unrigorous and unsourced post in JK Simmons' voice. That is entirely optional and you are of course free to read it in any voice you like; I only thought it might be interesting in the light of what is mentioned in the edit at the bottom of the text]

Nowadays, it seems that the correlation between sciency stuff, social ineptitude, and uncoolness is cemented in the mind of the public. But this seems to be very specific to our own time and place.

As a lesswronger, I find what follows ironic: in Islamic countries, "scientists" are referred to by the same word used for religious leaders and other teachers, "olama", literally "knowers"; historically, there's been a huge overlap between the two, and when one of these folks speaks, you're supposed to shut up and listen. This is still true to this day. There might not be much wealth to be gained from marrying a scientist, but there was status; amusingly enough, it's modern-day materialism that is pushing them into irrelevance, as money becomes, more and more, the sole measure of status.

In the West, in the XIXth century, Science and Progress were hip and awesome. Being a scientist of some sort was practically a requirement for any pulp hero. In the USA, an era of great works of engineering that had a dramatic impact on life quality made engineers heroes of popular fiction, men of knowledge and rigour who would not bow down to money and lawyer-cushioned bourgeois, or to corrupt and fickle politicians, men who would stand up against injustice and get the job done no matter what. Everyone wanted to call themselves an engineer, and the word was rampantly abused into meaninglessness; florists called themselves "flower engineers"! That's how cool being an engineer was.

In the Soviet Union, as long as they didn't step on the toes of the Party, scientists were highly acclaimed and respected, they got tons of honour and status. There was a huge emphasis on technological progress, on mankind reaching its full potential (at least on paper).

Nowadays, nearly the entire leadership of China is made up of technicians and engineers. Not lawyers, or economists, or literati. These people only care about one thing, getting the job done - and that's what Science does.

So, I've really got to ask: when and *how* did Science and Engineering become "uncool"? Why are they termed "geek", the term used for sideshow circus performers whose speciality was eating chickens alive (or something like that), and which, before that, used to be synonymous with "freak" and "fool"? When and how did we become worse than clowns in the eyes of society? Most importantly: how can the process be reversed?

After all, from a utilitarian standpoint, Science being cool and appreciated and respectable is kind of important.

 

EDIT: There's also the strange relationship, in the public mind, between science and dangerous, callous, abusive insanity, with a long tradition in popular fiction from Victor Frankenstein and Captain Nemo to Tony Stark and GLaDOS, and some Real Life counterparts, especially in brutal totalitarian regimes. Wikipedia has an interesting article on the topic, and on how the characterization and prevalence of the Mad Scientist relate to time-pertinent perceptions of Science.

For some reason, that aspect is often treated as cool and dramatic and impressive (besides being characterized as repulsive), perhaps because it involves displays of power over others, which is a high-status thing to do. Is that one of the existing paths to social prestige? Achieving power, and being inconsiderate about flaunting it? I'd like to hear more constructive alternatives, because that one doesn't seem viable, from where I stand.

Complement Luke's Mega-Course for Aspiring Philosophers

8 diegocaleiro 07 December 2012 06:14AM

Luke has mentioned much of the research that aspiring philosophers ought to read here.

In fact, he delineated a basis upon which good philosophy can be built: a worldview brought by science and experimentation that relates to, and informs, the kinds of facts which philosophers need to understand to increase their probabilities of asking, and giving good answers to, relevant questions.

Some argued that his list is biased; let us assume for the time being that it isn't.

Some argued that the main problem with the list is that it requires either an unmanageable amount of time to go through, or improbable levels of intelligence/motivation to do so. This argument would make sense if the purpose of the list were "Let us create a good Philosophy Course".

But this is not the purpose of it. The purpose, as with most of what Luke does publicly, is to save the World. And if doing so requires making people go through an enormous number of pages of content beyond their formal education, well, then so be it. If it has to be a six-year course, then that's what it has to be.

At the end of his post he says:

You might also let them read 20th century analytic philosophy at that point [after going through his Mega-Course] — hopefully their training will have inoculated them from picking up bad thinking habits.

Now 20th century Analytic Philosophy, and some philosophy that isn't strictly analytic, should definitely be part of a philosophy course. I urge other LessWronger philosophers to guide people through it.

Here is a list I have published here before, for Philosophy of Mind and Language (sometimes considered subsets or children of Analytic Philosophy). It covers only the minimal reading necessary to grasp the place of computationalism, and so-called computational theories of mind within the larger debate of philosophy. 

But the last century has seen a lot of good philosophy that by luck conflicted with neither the science of the day nor the science developed up to 2012. Sometimes authors were very careful when writing their philosophy, and well versed in science, like Dennett, Hofstadter, Putnam, Ned Block, and Chalmers. Finally, the topics at hand are frequently orthogonal enough to scientific development that it simply didn't matter that the author didn't know in 1970 what we (after the Mega-Course) know today.

So I ask Luke, Pragmatist, Carl Shulman and others to help build the layer that will sit on top of the science layer in the "Philosophy Given Science" Mega-Course for aspiring philosophers. The course will have four layers. Below the science layer will be its prerequisites (admittedly large), and atop the one I'm suggesting here we hope to start building a really good philosophy that is compatible with our scientific understanding, tackles mostly Big Questions which are highly likely to be meaningful, and is frequently also useful for the major issues we still have time to solve.

This is the pyramidal structure I suggest we create, with 1, 2, and 3 being the content of the Mega-Course, and 4 being the likely outcome we expect it to facilitate, made by those who undertake it:

4) Philosophy given 1, 2, and 3. Tackling the Big Questions, and making it portable to areas such as AGI, Biotech, etc...

3) Philosophy, up to 2012, that is well informed about or orthogonal to Science so far. Or lucky.

2) Science that is relevant to philosophy. This.

1) Prerequisites for 2.

 

In this post we begin layer three. I'll start by copying the Mind and Language list I had sent before. Afterward I'll include some of Bostrom's recommendations within philosophy (made to me as an undergrad), my selection of Dennett's, and Dennett's selection of science:

Language and Mind:

From Bostrom's suggestions:

  • Philosophical Papers - David Lewis
  • Parfit
  • Frank Arntzenius
  • Timothy Williamson
  • Brian Skyrms

By Dennett:

  • Real Patterns
  • True Believers
  • Kinds of Minds
  • Intentional Systems In Cognitive Ethology
  • Those mentioned above in the Mind and Language list.

Not previously cited, but in Luke's favorites list:

  • Noam Chomsky
  • Stephen Stich
  • Hilary Kornblith
  • Eric Schwitzgebel
  • Michael Bishop

Dennett's suggestions on interdisciplinary science (layer 2):

  • The Company of Strangers - Paul Seabright
  • Not by Genes Alone - Boyd and Richerson
  • I Am a Strange Loop - Hofstadter

By Bostrom:

  • Probably easier to list what should not be read...

This may initially appear overwhelming, but it is probably an order of magnitude less content than Luke's original post about layer 2. Once again I ask philosophers to specify more things within areas that are not well addressed here, such as ethics. Also, books by scientists dealing with philosophical topics (such as Sam Harris's The Moral Landscape) can be added here.

The "Philosophy Given Science" MegaCourse may never actually take place, but it will be a very valuable guideline for institutions to influence actual Philosophy courses, for Philosophy teachers to get cohesive and preselected content to teach, and most importantly for diligent aspiring philosophers willing to get to the Big and relevant problems, instead of being the ball in the chaotic Pinball game that academic philosophy has become, despite all good things it brought. When the path is too long, a shortcut is not a shortcut anymore, it is the only way to get there before it is too late.

 

[Link] Epigenetics

5 [deleted] 28 October 2012 11:56AM

Your daily dose of science knowledge will once more be provided by Gregory Cochran, clearing up some misconceptions you may have heard about Epigenetics.

As I understand it, in some circles,  there is a burgeoning hope that practice in this generation will somehow improve performance in the next – based on a word they have heard but do not understand. That word is epigenetics.

Genes can certainly be modified in ways that persist.   For example, the cells in your skin produce more skin cells when they divide, rather than muscle cells or neurons.  Most of your cells have a copy of the entire human genome, but only certain elements are expressed in a particular type of cell, and that pattern persists  when that  kind of cell divides. We understand, to a degree, some of the chemical changes that cause these lasting changes in gene expression patterns.   One is methylation,  a method of suppressing gene activity.  It involves attaching a methyl group to a cytosine base. This methylation pattern is copied when somatic cells divide.

The question is whether A. such changes can persist into the next generation and B. if they do, is this some sort of adaptive process, rather than an occasional screwup?  We’re  interested in whether this happens in humans,  so we’ll only consider mammals.

It’s rare, but sometimes it happens.  It has only been found to happen at a few sites in the genome, and when it does happen,  only a fraction of the offspring are affected. Probably the best known example is the agouti yellow allele in mice.  Mice that carry this allele are fat, yellow, and prone to cancer and diabetes – some of them. Yellow mothers tend to have yellow babies,  while genetically identical brown mothers mostly have brown babies.  The agouti yellow allele is the product of a recent insertion in the genome, about 50 years ago.  For the overwhelming majority of genes, the epigenetic markers are reset in a new embryo, which means that epigenetic changes induced by the parent’s experiences disappear.  The embryo is back at square one.   This agouti yellow allele is screwed up – somehow the reset isn’t happening correctly.

In mice, the mammalian species in which most such investigations have been done,  the few other locations in the genome where anything like this happens are mainly retroposons and other repeated elements.

There is another way that you can get transmission across generations without genetic change.  Rats that are nurtured by stressed mothers are more likely to be stressed.  This isn’t transmitted perfectly, but it happens.  Presumably the uterine environment,  or maybe maternal behavior, is different in stressed mice in a way that stresses their offspring.   This reminds me of a science fiction story that abused this principle.  The  idea was that alligators (or maybe it was crocodiles) almost have a four-chambered heart, which is generally associated with higher metabolism and friskiness. Our protagonist operates on an alligator and soups up its heart: the now-more-vigorous animal has better blood circulation and lays healthier eggs that develop into babies that also have a working four-chambered heart. So ‘normal’ alligators were like stressed mice: fix the problem and you get to see what they’re really capable of. The problem was that the most interesting consequence was growing wings, flying around and eating people. Alligators turned out to be stunted dragons. Not so good.

Anyhow, what reason is there to believe that reading Gradshteyn and Ryzhik until your eyes bleed will plant the seeds of math to come in your descendants?  None. Oh, I can come up with a scenario, if you want: but it requires that civilization (in particular, the key part of civilization, heavy use of weird definite and indefinite integrals and vast reproductive rewards for those skilled in such things) has risen and fallen over and over again at fairly short (but irregular)  intervals, so that humans have faced this adaptive problem over and over and over again.  A little like the way in which generations of aphids do different things in the summer (parthenogenesis) than in the late fall (sexual reproduction) – although that probably depends on direct cues like length of day rather than epigenetic changes.  Something like Motie history, maybe. But  I don’t believe it.  Not even a little bit.

Nature hasn’t even figured out how to have Jewish boys be born circumcised yet.

So why are people talking about this? Why do people like Tyler Cowen invoke it to ward off evil facts?

Because they’re chuckleheads, what else?

I think we can be a bit more specific than that, so let's take it as an exercise. Motivated cognition, for starters.

If you want to learn why Conan the Barbarian was generated by better priors than modern history books, what the blind idiot god may have in store for you, or how to solve thick problems, check out other articles from the blog shared under the tag: westhunter

[Link] The real end of science

14 [deleted] 03 October 2012 04:09PM

From Gene Expression by Razib Khan who some of you may also know from the old gnxp site or perhaps from his BHTV debate with Eliezer.

Fifteen years ago John Horgan wrote The End Of Science: Facing The Limits Of Knowledge In The Twilight Of The Scientific Age. I remain skeptical as to the specific details of this book, but Carl’s write-up in The New York Times of a new paper in PNAS on the relative commonness of scientific misconduct in cases of retraction makes me mull over the genuine possibility of the end of science as we know it. This sounds ridiculous on the face of it, but you have to understand my model of and framework for what science is. In short: science is people. I accept the reality that science existed in some form among strands of pre-Socratic thought, or among late antique and medieval Muslims and Christians (not to mention among some Chinese as well). Additionally, I can accept the cognitive model whereby science and scientific curiosity is rooted in our psychology in a very deep sense, so that even small children engage in theory-building.

That is all well and good. The basic building blocks for many inventions and institutions existed long before their instantiation. But nevertheless the creation of institutions and inventions at a given moment is deeply contingent. Between 1600 and 1800 the culture of science as we know it emerged in the West. In the 19th and 20th centuries this culture became professionalized, but despite the explicit institutions and formal titles it is bound together by a common set of norms, an ethos if you will. Scientists work long hours for modest remuneration for the vain hope that they will grasp onto one fragment of reality, and pull it out of the darkness and declare to all, “behold!” That’s a rather flowery way of putting the reality that the game is about fun & fame. Most will not gain fame, but hopefully the fun will continue. Even if others may find one’s interests abstruse or esoteric, it is a special thing to be paid to reflect upon and explore what one is interested in.

Obviously this is an idealization. Science is a highly social and political enterprise, and injustice does occur. Merit and effort are not always rewarded, and on occasion machination truly pays. But overall the culture and enterprise muddle along, and are better in terms of yielding a better sense of reality as it is than its competitors. And yet all great things can end, and free-riders can destroy a system. If your rivals and competitors cheat and get ahead, what's to stop you but your own conscience? People will flinch from violating norms initially, even if those actions are in their own self-interest, but eventually they will break. And once they break, the norms have shifted, and once a few break, the rest will follow. This is the logic which drives a vicious positive feedback loop, and individuals in their rational self-interest begin to cannibalize the components of the institutions which ideally would allow all to flourish. No one wants to be the last one in a collapsing building, the sucker who asserts that the structure will hold despite all evidence to the contrary.

Deluded as most graduate students are, they by and large are driven by an ideal. Once the ideal, the illusion, is ripped apart, and eaten away from within, one can't rebuild it in a day. Trust evolves and accumulates organically. One cannot will it into existence. Centuries of capital are at stake, and it would be best to learn the lessons of history. We may declare that history has ended, but we can't unilaterally abolish eternal laws.

Update:

Link to original post.

High School Lecture - Report

19 Xece 23 September 2012 02:06AM

This post is a followup report to this.

 

On Friday's lecture, I was able to briefly cover several topics as an introduction. They centred around rationality (what it is), truth (what it is and why we should pursue it), and Newcomb's Paradox.

The turnout was as expected (6 out of a total of 7 group members, with 1 having other obligations that day). Throughout the talk I would ask for some proposed definitions before giving them. Unfortunately, when I asked what "truth" is, I got mysterious answers such as "truth is the meaning of life" and "truth is the pursuit of truth". When asked what they meant by their answers, they either rephrased what they said with the same vagueness or were unable to give an answer. One member, however, did say that "truth is what is real", only to have other members ask what he meant by "real". It offered a rather nice opportunity for a map-and-territory tangent before giving some version of "The Simple Truth".

I used the definitions given in 'What Do We Mean By "Rationality"?' to describe epistemic and instrumental rationality, and gave several examples of what rationality is not (Mr. Spock, logic/reason, etc.). As practice, I introduced Newcomb's Paradox. There was ample debate, with an even split between one-boxers and two-boxers. Due to time constraints, we weren't able to come to a conclusion (although the one-boxing side was making a stronger argument). By the end of the lunch period, everyone seemed to have a good grasp that rationality is simply making the best decision to achieve one's goals, whatever they may be.
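
For anyone who wants a concrete handle on that debate, here is a small expected-value sketch. It assumes the standard payoffs ($1,000,000 in the opaque box iff one-boxing was predicted; $1,000 always in the transparent box) and a predictor accuracy p that I have set to 0.99 purely for illustration; none of this is from the lecture itself.

    # Expected payoffs in Newcomb's problem for a predictor that is right
    # with probability p. Standard payoffs: $1M in the opaque box iff
    # one-boxing was predicted; $1k always sits in the transparent box.
    def expected_payoff(choice, p=0.99):
        if choice == "one-box":
            # Correct prediction: the opaque box is full. Wrong: it's empty.
            return p * 1_000_000 + (1 - p) * 0
        else:  # "two-box"
            # Correct prediction (two-boxing foreseen): only the $1k.
            # Wrong prediction: the $1k plus the $1M.
            return p * 1_000 + (1 - p) * 1_001_000

    for choice in ("one-box", "two-box"):
        print(choice, expected_payoff(choice))
    # one-box 990000.0
    # two-box 11000.0

Of course, causal decision theorists object that conditioning on the prediction rather than on the act is exactly where the philosophical fight lives, which is why a table like this settles nothing by itself.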

Overall, I'd say it was successful. My next turn is on October 3rd, and apart from a little review, I'm going to go over the 5-second level, and use of words. Saying what they mean is something we as a group need to work on.

Scientists make monkeys smarter using brain implants [link]

22 Dreaded_Anomaly 15 September 2012 06:48PM

Article at io9. The paper is available here.

The researchers showed monkeys specific images and then trained them to select those images out of a larger set after a time delay. They recorded the monkeys' brain function to determine which signals were important. The experiment tests the monkeys' performance on this task in different cases, as described by io9:

Once they were satisfied that the correct mapping had been done, they administered cocaine to the monkeys to impair their performance on the match-to-sample task (seems like a rather severe drug to administer, but there you have it). Immediately, the monkeys' performance fell by a factor of 20%.

It was at this point that the researchers engaged the neural device. Specifically, they deployed a "multi-input multi-output nonlinear" (MIMO) model to stimulate the neurons that the monkeys needed to complete the task. The inputs of this device monitored such things as blood flow, temperature, and the electrical activity of other neurons, while the outputs triggered the individual neurons required for decision making. Taken together, the i/o model was able to predict the output of the cortical neurons — and in turn deliver electrical stimulation to the right neurons at the right time.

And incredibly, it worked. The researchers successfully restored the monkeys' decision-making skills even though they were still dealing with the effects of the cocaine. Moreover, when duplicating the experiment under normal conditions, the monkeys' performance improved beyond the 75% proficiency level shown earlier. In other words, a kind of cognitive enhancement had happened.

This research is a remarkable followup to research that was done in rodents last year.

High School Lectures

8 Xece 15 September 2012 06:05AM

Just recently at my high school, a group of classmates and I started a science club. A major component of this is listening and giving peer lectures on topics of physics, math, computer science, etc. I picked a topic a bit off to the side: philosophy and decision making. Naturally, this includes rationality. My plan is to start with something based off the sequences, specifically "How to Actually Change Your Mind" and "A Human's Guide to Words".

I was hoping the Less Wrong community could give me some suggestions, tips, or even alternative ways to approach this. There is no end goal; we just want to learn more and think better. All our members are among the top 5% of their grade academically. Most of us are seniors who have finished high school math and are taking AP Calculus this year. We have covered basic statistics and Bayes' Theorem, but have only applied it to the Disease Problem.
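
For readers who haven't seen it, the Disease Problem is the classic diagnostic exercise in Bayes' Theorem. Here it is worked in Python with the usual illustrative numbers (1% prevalence, 80% true-positive rate, 9.6% false-positive rate; the post doesn't say which variant the club used, so these figures are an assumption):

    # The classic Disease Problem: P(disease | positive test) via Bayes' Theorem.
    prevalence = 0.01            # P(disease)
    p_pos_given_disease = 0.8    # test sensitivity
    p_pos_given_healthy = 0.096  # false-positive rate

    p_pos = (p_pos_given_disease * prevalence
             + p_pos_given_healthy * (1 - prevalence))
    p_disease_given_pos = p_pos_given_disease * prevalence / p_pos
    print(f"P(disease | positive test) = {p_disease_given_pos:.1%}")  # about 7.8%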

Any help or ideas are appreciated.

 

Update: Thank you for all these suggestions! They are incredibly helpful for me. I will attempt to make a recording of the lecture period if possible. I will make another discussion post sometime next weekend (the first lecture is next Friday) to report how it went.

 

Update 2: Report here.

[Link] The Greek Heliocentric Theory

34 hegemonicon 12 June 2012 05:18PM

Summary: The Greeks likely rejected a heliocentric theory because it would conflict with the lack of any visible stellar parallax, not for egotistical, common-sense, or aesthetic reasons.

I had always heard that the Greeks embraced a geocentric universe for common-sense, aesthetic reasons - not scientific ones. But it seems as if the real story is more complicated than that:

From Isomorphismes:

Now this is the kicker in your Popperian dirtsack. The Greeks had the right theory (heliocentric solar system) but discarded it on the basis of experimental evidence! Never preach to me about progress-in-science when all you’ve heard is a one-liner about Popper and the communal acceptance of general relativity. Especially don’t follow it up by saying that science marches toward the Truth whilst religion thwarts its progress. According to Astronomer Lisa, it’s not true that the Greeks simply thought they and their Gods were at the centre of the Universe because they were egotistical. They reasoned to the geocentric conclusion based on quantitative evidence. How? They measured parallax. (Difference in stellar appearance from spring to fall, when we’re on opposite sides of the Sun.) Given the insensitivity of their measurement tools at the time, the stars didn’t change positions at all when the Earth moved to the other side of the Sun. Based on that, they rejected the heliocentric hypothesis. If the Earth actually did move around the Sun, then the stars would logically have to appear different from one time to another. But they remain ever fixed in the same place in the Heavens, therefore the Earth must be still (geocentric).

I dug a little bit deeper, and this seems to be more or less accurate. From The Greek Heliocentric Theory and its Abandonment:

This paper then examines possible reasons for the Greek abandonment of the heliocentric theory and concludes that there is no reason to deplore its abandonment. In developing the heliocentric theory the Greeks had run the gamut of theorizing. We are indebted to the Alexandrians and Hipparchus for turning away from speculation to take up the recording of precise astronomical data. Here was laid the foundation upon which modern astronomy was built.

Let  us now suppose that Aristarchus’ theory was widely circulated and that it was given careful consideration by leading astronomers. There is one objection that immediately  arises when the earth is put in motion, the very difficulty which must  have disquieted Copernicus and which caused Tycho Brahe shortly  afterwards to renounce Copernicus’ heliocentric system and to put the earth again at rest. (Tycho reverted to a system first suggested by some ancient Greek, who made the planets revolve about the sun and the sun about the earth.) The difficulty is this. As soon as the  earth is set in motion  in an annual revolution  about the sun, the distance between any two of the earth’s positions that are six months apart will be twice as  great as the earth’s distance from the sun. Over such vast distances some displacement in the positions of the stars ought to be observed. The more accurate the astronomical instruments and the greater the estimated distance of the sun, the more reason should there be to expect stellar displacement. Now it so happened that Aristarchus reached his conclusions at the very time when interest was keen at Alexandria and elsewhere in the Greek world in accurate observations and when marked improvements were being made in precision instruments. To appreciate these developments we need only recall the careful stellar catalogues of Aristyllus and Timocharis early in the third century B.C., the work of the latter enabling Hipparchus to discover the precession of the equinoxes, and the armillary sphere of Eratosthenes by which he was able to  determine  the obliquity of the ecliptic and the circumference of the  earth. Hipparchus continued to make improvements in the next century. He, as we shall  see, had a much better appreciation of the sun’s great distance than Copernicus. Of course it was impossible to observe stellar displacement without the aid of a telescope. Inability to observe it left astronomers with only two alternatives: either the stars were so remote that it was impossible to detect displacement, or the earth would have to remain at rest.

...Heath was of the opinion that Hipparchus was responsible for the death of Aristarchus’ theory, that the adherence of so preeminent an astronomer to a geocentric orientation sealed the doom of the heliocentric theory. This is a reasonable conjecture. Hipparchus was noted for his careful observations, his stellar catalogues, and the remarkable precision of his recordings of solar and lunar motions. According to Ptolemy he was devoted to truth above all else, and because he did not possess sufficient data, he refused to attempt to account for planetary motions as he had for those of the sun and moon. His discovery of the precession of the equinoxes attests to the keenness of his observations. He came much closer to appreciating the vast distance of the sun than Copernicus did.

...We do not know whether or not Hipparchus ever seriously entertained Aristarchus’ views about the earth’s motions, but from what we have seen of his cautious and accurate methods, it is likely that he would have quickly rejected the heliocentric theory in the absence of visible stellar displacement.

And from The Ancient Greek Astronomers: A Remarkable Record of Ingenuity:

Aristarchus was successful in explaining variations in brilliance and reverse courses of the planets, but planetary motions are far more complicated than that. Kepler was the first to realize that the planets do not describe circular orbits, but rather ellipses, and that the sun is not in the middle of these orbits but in the foci of the ellipses. That something was wrong might have been suspected as early as 330  B.C., for Callippus noticed that the seasons were not of the same length. He estimated their lengths between solstices and equinoxes to  be 94, 92, 89, and 90 days- figures that are very nearly correct. Or to show the irregularities that might result from combining the eccentricities of the orbits of two  planets, in  some years Mars and the earth at closest approximation are 36 million miles apart and in other years (as in 1948) may be 63 million miles apart at their nearest approach. Now the Alexandrians were pointing their precision sights at the planets and must have been disturbed by these peculiarities. Furthermore they would have been less kindly disposed towards Aristarchus’ explanation of the absence of visible stellar parallax by placing the stars at  an almost infinite distance away because they had a better appreciation of the sun’s vast distance and consequently would have stronger reason to expect to find parallax. It would seem that the more precise the  instruments, the  less  likelihood there would be of the earth’s being in motion.
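
A quick back-of-the-envelope check with modern values (which the Greeks of course lacked) shows just how sound their inference was given their instruments:

    import math

    # Rough modern numbers -- the Greeks had neither of them.
    AU_KM = 1.496e8          # Earth-Sun distance
    ALPHA_CEN_KM = 4.0e13    # distance to the nearest star system, ~4.2 light-years

    # Annual parallax: the angle subtended by 1 AU at the star's distance.
    parallax_arcsec = math.degrees(AU_KM / ALPHA_CEN_KM) * 3600
    print(f"{parallax_arcsec:.2f} arcseconds")  # ~0.77"

    # The best pre-telescopic instruments resolved roughly an arcminute
    # (60 arcseconds), so even the largest stellar parallax sat nearly two
    # orders of magnitude below anything the Greeks could have detected.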

Satire of Journal of Personality and Social Psychology's publication bias

26 CarlShulman 05 June 2012 12:08AM

Follow-up to:  Follow-up on ESP study: "We don't publish replications", Using degrees of freedom to change the past for fun and profit

As I discussed in the above posts, the Journal of Personality and Social Psychology, a leading psych journal, published a deeply flawed parapsychology study (see the second post for details) which had apparently been tortured to produce results. Then they rejected an attempt to replicate that found no effect, citing a sadly typical policy of not publishing replications. Some of you may enjoy reading one enterprising researcher's amusing satire article, purportedly (not actually) "tallying" past confirmations and disconfirmations in JPSP and drawing conclusions.

 

ETA: To clarify the last sentence, they didn't really find 4800+ confirmations and two disconfirmations. As they say in small print, the data were made up. It's right by the chart.

Computer Science and Programming: Links and Resources

29 XiXiDu 29 May 2012 01:17PM

Updated Version @ LW Wiki: wiki.lesswrong.com/wiki/Programming_resources


How Computers Work

1. CODE The Hidden Language of Computer Hardware and Software

The book intends to show a layman the basic mechanical principles of how computers work, instead of merely summarizing how the different parts relate. He starts with basic principles of language and logic and then demonstrates how they can be embodied by electrical circuits, and these principles give him an opening to describe in principle how computers work mechanically without requiring very much technical knowledge. Although it is not possible in a medium-sized book for laymen to describe the entire technical workings of a computer, he describes how and why it is possible that elaborate electronics can act in the ways computers do. In the introduction, he contrasts his own work with those books which "include pictures of trains full of 1s and 0s."

2. The Elements of Computing Systems: Building a Modern Computer from First Principles

Indeed, the best way to understand how computers work is to build one from scratch, and this textbook leads students through twelve chapters and projects that gradually build a basic hardware platform and a modern software hierarchy from the ground up. In the process, the students gain hands-on knowledge of hardware architecture, operating systems, programming languages, compilers, data structures, algorithms, and software engineering. Using this constructive approach, the book exposes a significant body of computer science knowledge and demonstrates how theoretical and applied techniques taught in other courses fit into the overall picture.

3. The Write Great Code Series (A Solid Foundation in Software Engineering for Programmers)

Write Great Code Volume I: Understanding the Machine

This, the first of four volumes, teaches important concepts of machine organization in a language-independent fashion, giving programmers what they need to know to write great code in any language, without the usual overhead of learning assembly language to master this topic. The Write Great Code series will help programmers make wiser choices with respect to programming statements and data types when writing software.

Write Great Code Volume II: Thinking Low-Level, Writing High-Level

...a good question to ask might be "Is there some way to write high-level language code to help the compiler produce high-quality machine code?" The answer to this question is "yes" and Write Great Code, Volume II, will teach you how to write such high-level code. This volume in the Write Great Code series describes how compilers translate statements into machine code so that you can choose appropriate high-level programming language statements to produce executable code that is almost as good as hand-optimized assembly code.

4. The Art of Assembly Language Programming

Assembly is a low-level programming language that's one step above a computer's native machine language. Although assembly language is commonly used for writing device drivers, emulators, and video games, many programmers find its somewhat unfriendly syntax intimidating to learn and use.

Since 1996, Randall Hyde's The Art of Assembly Language has provided a comprehensive, plain-English, and patient introduction to assembly for non-assembly programmers. Hyde's primary teaching tool, High Level Assembler (or HLA), incorporates many of the features found in high-level languages (like C, C++, and Java) to help you quickly grasp basic assembly concepts. HLA lets you write true low-level code while enjoying the benefits of high-level language programming.

5. The Art of Computer Programming

This work is not about computer programming in the narrow sense, but about the algorithms and methods which lie at the heart of most computer systems.

At the end of 1999, these books were named among the best twelve physical-science monographs of the century by American Scientist, along with: Dirac on quantum mechanics, Einstein on relativity, Mandelbrot on fractals, Pauling on the chemical bond, Russell and Whitehead on foundations of mathematics, von Neumann and Morgenstern on game theory, Wiener on cybernetics, Woodward and Hoffmann on orbital symmetry, Feynman on quantum electrodynamics, Smith on the search for structure, and Einstein's collected papers.

An Overview of Computer Programming

1. Seven Languages in Seven Weeks: A Pragmatic Guide to Learning Programming Languages

Ruby, Io, Prolog, Scala, Erlang, Clojure, Haskell. With Seven Languages in Seven Weeks, by Bruce A. Tate, you'll go beyond the syntax, and beyond the 20-minute tutorial you'll find someplace online. This book has an audacious goal: to present a meaningful exploration of seven languages within a single book. Rather than serve as a complete reference or installation guide, Seven Languages hits what's essential and unique about each language. Moreover, this approach will help teach you how to grok new languages.

For each language, you'll solve a nontrivial problem, using techniques that show off the language's most important features. As the book proceeds, you'll discover the strengths and weaknesses of the languages, while dissecting the process of learning languages quickly--for example, finding the typing and programming models, decision structures, and how you interact with them.

2. Programming Language Pragmatics

The ubiquity of computers in everyday life in the 21st century justifies the centrality of programming languages to computer science education.  Programming languages is the area that connects the theoretical foundations of computer science, the source of problem-solving algorithms, to modern computer architectures on which the corresponding programs produce solutions.  Given the speed with which computing technology advances in this post-Internet era, a computing textbook must present a structure for organizing information about a subject, not just the facts of the subject itself.  In this book, Michael Scott broadly and comprehensively presents the key concepts of programming languages and their implementation, in a manner appropriate for computer science majors. 

3. An Introduction to Functional Programming Through Lambda Calculus

This well-respected text offers an accessible introduction to functional programming concepts and techniques for students of mathematics and computer science. The treatment is as nontechnical as possible, assuming no prior knowledge of mathematics or functional programming. Numerous exercises appear throughout the text, and all problems feature complete solutions. (For a tiny Python taste of the lambda-calculus style, see the sketch after this list.)

4. How to Design Programs (An Introduction to Computing and Programming)

This introduction to programming places computer science in the core of a liberal arts education. Unlike other introductory books, it focuses on the program design process. This approach fosters a variety of skills--critical reading, analytical thinking, creative synthesis, and attention to detail--that are important for everyone, not just future computer programmers.The book exposes readers to two fundamentally new ideas. First, it presents program design guidelines that show the reader how to analyze a problem statement; how to formulate concise goals; how to make up examples; how to develop an outline of the solution, based on the analysis; how to finish the program; and how to test. Each step produces a well-defined intermediate product. Second, the book comes with a novel programming environment, the first one explicitly designed for beginners.

5. Structure and Interpretation of Computer Programs

Using a dialect of the Lisp programming language known as Scheme, the book explains core computer science concepts, including abstraction, recursion, interpreters and metalinguistic abstraction, and teaches modular programming.

The program also introduces a practical implementation of the register machine concept, defining and developing an assembler for such a construct, which is used as a virtual machine for the implementation of interpreters and compilers in the book, and as a testbed for illustrating the implementation and effect of modifications to the evaluation mechanism. Working Scheme systems based on the design described in this book are quite common student projects.
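
Here is the tiny taste of lambda calculus promised above; it's my own toy sketch in Python, not an excerpt from any of these books. Church numerals encode a number n as "apply a function n times", so arithmetic becomes pure function plumbing:

    # Church numerals: the number n is the function that applies f to x n times.
    zero = lambda f: lambda x: x
    succ = lambda n: lambda f: lambda x: f(n(f)(x))
    add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

    to_int = lambda n: n(lambda k: k + 1)(0)  # decode by counting applications

    two = succ(succ(zero))
    three = succ(two)
    print(to_int(add(two)(three)))  # 5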

Computer Science and Computation

1. The Annotated Turing: A Guided Tour Through Alan Turing's Historic Paper on Computability and the Turing Machine

Mathematician Alan Turing invented an imaginary computer known as the Turing Machine; in an age before computers, he explored the concept of what it meant to be computable, creating the field of computability theory in the process, a foundation of present-day computer programming.

The book expands Turing’s original 36-page paper with additional background chapters and extensive annotations; the author elaborates on and clarifies many of Turing’s statements, making the original difficult-to-read document accessible to present day programmers, computer science majors, math geeks, and others.

2. New Turing Omnibus (New Turing Omnibus: 66 Excursions in Computer Science)

This text provides a broad introduction to the realm of computers. Updated and expanded, "The New Turing Omnibus" offers 66 concise articles on the major points of interest in computer science theory, technology and applications. New for this edition are: updated information on algorithms, detecting primes, noncomputable functions, and self-replicating computers - plus completely new sections on the Mandelbrot set, genetic algorithms, the Newton-Raphson Method, neural networks that learn, DOS systems for personal computers, and computer viruses.

3. Udacity

Udacity is a private educational organization founded by Sebastian Thrun, David Stavens, and Mike Sokolsky, with the stated goal of democratizing education.

It is the outgrowth of free computer science classes offered in 2011 through Stanford University. As of May 2012 Udacity has six active courses.

The first two courses ever launched on Udacity both started on 20th February, 2012, entitled "CS 101: Building a Search Engine", taught by Dave Evans, from the University of Virginia, and "CS 373: Programming a Robotic Car" taught by Thrun. Both courses use Python.

4. Introduction to Artificial Intelligence

A bold experiment in distributed education, "Introduction to Artificial Intelligence" will be offered free and online to students worldwide from October 10th to December 18th 2011. The course will include feedback on progress and a statement of accomplishment. Taught by Sebastian Thrun and Peter Norvig, the curriculum draws from that used in Stanford's introductory Artificial Intelligence course. The instructors will offer similar materials, assignments, and exams.

Artificial Intelligence is the science of making computer software that reasons about the world around it. Humanoid robots, Google Goggles, self-driving cars, even software that suggests music you might like to hear are all examples of AI. In this class, you will learn how to create this software from two of the leaders in the field. Class begins October 10.

Supplementary Resources: Mathematics and Algorithms

1. Concrete Mathematics: A Foundation for Computer Science

This book introduces the mathematics that supports advanced computer programming and the analysis of algorithms. The primary aim of its well-known authors is to provide a solid and relevant base of mathematical skills - the skills needed to solve complex problems, to evaluate horrendous sums, and to discover subtle patterns in data. It is an indispensable text and reference not only for computer scientists - the authors themselves rely heavily on it! - but for serious users of mathematics in virtually every discipline.

2. Algorithms

The textbook Algorithms, 4th Edition by Robert Sedgewick and Kevin Wayne surveys the most important algorithms and data structures in use today.

3. Introduction to Algorithms

Some books on algorithms are rigorous but incomplete; others cover masses of material but lack rigor. Introduction to Algorithms uniquely combines rigor and comprehensiveness. The book covers a broad range of algorithms in depth, yet makes their design and analysis accessible to all levels of readers. Each chapter is relatively self-contained and can be used as a unit of study. The algorithms are described in English and in a pseudocode designed to be readable by anyone who has done a little programming. The explanations have been kept elementary without sacrificing depth of coverage or mathematical rigor.
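
To give a flavor of the pseudocode style the blurb mentions, here is the elementary algorithm the book opens with, insertion sort, transcribed into Python (the transcription is mine):

    def insertion_sort(a):
        """Sort the list a in place by growing a sorted prefix one item at a time."""
        for j in range(1, len(a)):
            key = a[j]
            i = j - 1
            while i >= 0 and a[i] > key:  # shift larger elements right
                a[i + 1] = a[i]
                i -= 1
            a[i + 1] = key                # drop the key into its slot
        return a

    print(insertion_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]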

Practice

1. Project Euler

Project Euler is a series of challenging mathematical/computer programming problems that will require more than just mathematical insights to solve. Although mathematics will help you arrive at elegant and efficient methods, the use of a computer and programming skills will be required to solve most problems. (A sample solution to its first problem appears after this list.)

2. The Python Challenge

Python Challenge is a game in which each level can be solved by a bit of (Python) programming.

3. CodeChef Programming Competition

CodeChef is a global programming community. We host contests, trainings and events for programmers around the world. Our goal is to provide a platform for programmers everywhere to meet, compete, and have fun.

4. Write your own programs.
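
As promised under Project Euler, here is its first problem, which the site poses publicly: find the sum of all natural numbers below 1000 that are multiples of 3 or 5. A one-line Python solution makes a good warm-up before writing your own programs:

    # Project Euler, Problem 1: sum the multiples of 3 or 5 below 1000.
    print(sum(n for n in range(1000) if n % 3 == 0 or n % 5 == 0))  # 233168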

Python

pyscripter

An open-source Python Integrated Development Environment (IDE)

Khan Academy

Introduction to programming and computer science (using Python)

1. Invent Your Own Computer Games with Python

“Invent Your Own Computer Games with Python” is a free book (as in, open source) and a free eBook (as in, no cost to download) that teaches you how to program in the Python programming language. Each chapter gives you the complete source code for a new game, and then teaches the programming concepts from the example.

“Invent with Python” was written to be understandable by kids as young as 10 to 12 years old, although it is great for anyone of any age who has never programmed before.

2. Learn Python The Hard Way

Have you always wanted to learn how to code but never thought you could? Are you looking to build a foundation for more complex coding? Do you want to challenge your brain in a new way? Then Learn Python the Hard Way is the book for you.

3. Python for Software Design: How to Think Like a Computer Scientist

Think Python is an introduction to Python programming for beginners. It starts with basic concepts of programming, and is carefully designed to define all terms when they are first used and to develop each new concept in a logical progression. Larger pieces, like recursion and object-oriented programming, are divided into a sequence of smaller steps and introduced over the course of several chapters.

4. Python Programming: An Introduction to Computer Science

This book is suitable for use in a university-level first course in computing (CS1), as well as the increasingly popular course known as CS0. It is difficult for many students to master basic concepts in computer science and programming. A large portion of the confusion can be blamed on the complexity of the tools and materials that are traditionally used to teach CS1 and CS2. This textbook was written with a single overarching goal: to present the core concepts of computer science as simply as possible without being simplistic.

5. Practical Programming: An Introduction to Computer Science Using Python

Computers are used in every part of science from ecology to particle physics. This introduction to computer science continually reinforces those ties by using real-world science problems as examples. Anyone who has taken a high school science class will be able to follow along as the book introduces the basics of programming, then goes on to show readers how to work with databases, download data from the web automatically, build graphical interfaces, and most importantly, how to think like a professional programmer.

6. The Quick Python Book

The Quick Python Book, Second Edition, is a clear, concise introduction to Python 3, aimed at programmers new to Python. This updated edition includes all the changes in Python 3, itself a significant shift from earlier versions of Python.

The book begins with basic but useful programs that teach the core features of syntax, control flow, and data structures. It then moves to larger applications involving code management, object-oriented programming, web development, and converting code from earlier versions of Python.
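Two of the headline Python 3 changes such a book must cover, sketched for quick reference (an illustration, not an excerpt):

    # print is a function in Python 3, no longer a statement:
    print("hello")
    # The / operator performs true division; // gives the old floor behavior:
    print(7 / 2)   # 3.5
    print(7 // 2)  # 3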

Haskell

The Haskell Platform

The Haskell Platform is the easiest way to get started with programming Haskell. It comes with all you need to get up and running. Think of it as "Haskell: batteries included".

1. Haskell in 5 steps

This page will help you get started as quickly as possible.

2. Learn Haskell in 10 minutes

3. A brief introduction to Haskell

4. Programming in Haskell

Haskell is one of the leading languages for teaching functional programming, enabling students to write simpler and cleaner code, and to learn how to structure and reason about programs. This introduction is ideal for beginners: it requires no previous programming experience and all concepts are explained from first principles via carefully chosen examples. Each chapter includes exercises that range from the straightforward to extended projects, plus suggestions for further reading on more advanced topics. The author is a leading Haskell researcher and instructor, well-known for his teaching skills. The presentation is clear and simple, and benefits from having been refined and class-tested over several years. The result is a text that can be used in courses or for self-study. Features include freely accessible PowerPoint slides for each chapter, solutions to exercises and examination questions (with solutions) available to instructors, and downloadable code that's fully compliant with the latest Haskell release.

5. Learn You a Haskell for Great Good!

Learn You a Haskell is the funkiest way to learn Haskell, which is the best functional programming language around. You may have heard of it. This guide is meant for people who have programmed already, but have yet to try functional programming.

6. Real World Haskell

This easy-to-use, fast-moving tutorial introduces you to functional programming with Haskell. You'll learn how to use Haskell in a variety of practical ways, from short scripts to large and demanding applications. Real World Haskell takes you through the basics of functional programming at a brisk pace, and then helps you increase your understanding of Haskell in real-world issues like I/O, performance, dealing with data, concurrency, and more as you move through each chapter.

7. The Haskell Road to Logic, Maths and Programming

The textbook by Doets and van Eijck puts the Haskell programming language systematically to work for presenting a major piece of logic and mathematics. The reader is taken through chapters on basic logic, proof recipes, sets and lists, relations and functions, recursion and co-recursion, the number systems, polynomials and power series, ending with Cantor's infinities. The book uses Haskell for the executable and strongly typed manifestation of various mathematical notions at the level of declarative programming. The book adopts a systematic but relaxed mathematical style (definition, example, exercise, ...); the text is very pleasant to read due to a small amount of anecdotal information, and due to the fact that definitions are fluently integrated in the running text. An important goal of the book is to get the reader acquainted with reasoning about programs. 

Common Lisp

1. Land of Lisp: Learn to Program in Lisp, One Game at a Time!

Lisp has been hailed as the world's most powerful programming language, but its cryptic syntax and academic reputation can be enough to scare off even experienced programmers. Those dark days are finally over—Land of Lisp brings the power of functional programming to the people!

With his brilliantly quirky comics and out-of-this-world games, longtime Lisper Conrad Barski teaches you the mysteries of Common Lisp. You'll start with the basics, like list manipulation, I/O, and recursion, then move on to more complex topics like macros, higher-order programming, and domain-specific languages. Then, when your brain overheats, you can kick back with an action-packed comic book interlude!

2. Practical Common Lisp

Practical Common Lisp presents a thorough introduction to Common Lisp, providing you with an overall understanding of the language features and how they work. Over a third of the book is devoted to practical examples such as the core of a spam filter and a web application for browsing MP3s and streaming them via the Shoutcast protocol to any standard MP3 client software (e.g., iTunes, XMMS, or WinAmp). In other "practical" chapters, author Peter Seibel demonstrates how to build a simple but flexible in-memory database, how to parse binary files, and how to build a unit test framework in 26 lines of code.

3. ANSI Common LISP

Teaching users new and more powerful ways of thinking about programs, this two-in-one text contains a tutorial—full of examples—that explains all the essential concepts of Lisp programming, plus an up-to-date summary of ANSI Common Lisp, listing every operator in the language. Informative and fun, it gives users everything they need to start writing programs in Lisp both efficiently and effectively, and highlights such innovative Lisp features as automatic memory management, manifest typing, closures, and more.

The tutorial half of the book covers the essential core of Common Lisp subject by subject, and sums up the lessons of the preceding chapters in two examples of real applications: a backward-chainer, and an embedded language for object-oriented programming.

The summary half of the book consists of three appendices: source code for a selection of widely used Common Lisp operators, with definitions that offer a comprehensive explanation of the language and provide a rich source of real examples; a summary of some differences between ANSI Common Lisp and Common Lisp as it was originally defined in 1984; and a concise description of every function, macro, and special operator in ANSI Common Lisp. The book concludes with a section of notes containing clarifications, references, and additional code.

4. Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp

Paradigms of AI Programming is the first text to teach advanced Common Lisp techniques in the context of building major AI systems. By reconstructing authentic, complex AI programs using state-of-the-art Common Lisp, the book teaches students and professionals how to build and debug robust practical programs, while demonstrating superior programming style and important AI concepts. The author strongly emphasizes the practical performance issues involved in writing real working programs of significant size. Chapters on troubleshooting and efficiency are included, along with a discussion of the fundamentals of object-oriented programming and a description of the main CLOS functions. This volume is an excellent text for a course on AI programming, a useful supplement for general AI courses and an indispensable reference for the professional programmer.

5. Let Over Lambda

Let Over Lambda is one of the most hardcore computer programming books out there. Starting with the fundamentals, it describes the most advanced features of the most advanced language: COMMON LISP. The point of this book is to expose you to ideas that you might otherwise never be exposed to.

6. Lisp as the Maxwell’s equations of software

These are Maxwell’s equations. Just four compact equations. With a little work it’s easy to understand the basic elements of the equations – what all the symbols mean, how we can compute all the relevant quantities, and so on. But while it’s easy to understand the elements of the equations, understanding all their consequences is another matter. Inside these equations is all of electromagnetism – everything from antennas to motors to circuits. If you think you understand the consequences of these four equations, then you may leave the room now, and you can come back and ace the exam at the end of semester.
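(The essay displays the equations as an image; for reference, here are the four equations in differential form, in LaTeX notation:)

    \nabla \cdot \mathbf{E} = \rho / \varepsilon_0
    \nabla \cdot \mathbf{B} = 0
    \nabla \times \mathbf{E} = - \partial \mathbf{B} / \partial t
    \nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \, \partial \mathbf{E} / \partial t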

R

RStudio

RStudio™ is a free and open source integrated development environment (IDE) for R. You can run it on your desktop (Windows, Mac, or Linux) or even over the web using RStudio Server.

1. R Videos

2. R Tutorials

3. R Tutorials from Universities Around the World

Here is a list of FREE R tutorials hosted on the official websites of universities around the world.

4. R-bloggers

Here you will find daily news and tutorials about R, contributed by over 300 bloggers.

5. The Art of R Programming: A Tour of Statistical Software Design

R is the world's most popular language for developing statistical software: Archaeologists use it to track the spread of ancient civilizations, drug companies use it to discover which medications are safe and effective, and actuaries use it to assess financial risks and keep economies running smoothly.

The Art of R Programming takes you on a guided tour of software development with R, from basic types and data structures to advanced topics like closures, recursion, and anonymous functions. No statistical knowledge is required, and your programming skills can range from hobbyist to pro.

Along the way, you'll learn about functional and object-oriented programming, running mathematical simulations, and rearranging complex data into simpler, more useful formats.

6. Introduction to Statistical Thinking (With R, Without Calculus)

The target audience for this book is college students who are required to learn statistics, students with little background in mathematics and often no motivation to learn more.

7. Doing Bayesian Data Analysis: A Tutorial with R and BUGS

There is an explosion of interest in Bayesian statistics, primarily because recently created computational methods have finally made Bayesian analysis accessible to a wide audience. Doing Bayesian Data Analysis, A Tutorial Introduction with R and BUGS provides an accessible approach to Bayesian data analysis, as material is explained clearly with concrete examples. The book begins with the basics, including essential concepts of probability and random sampling, and gradually progresses to advanced hierarchical modeling methods for realistic data. The text delivers comprehensive coverage of all scenarios addressed by non-Bayesian textbooks--t-tests, analysis of variance (ANOVA) and comparisons in ANOVA, multiple regression, and chi-square (contingency table analysis).

This book is intended for first year graduate students or advanced undergraduates. It provides a bridge between undergraduate training and modern Bayesian methods for data analysis, which is becoming the accepted research standard. Prerequisite is knowledge of algebra and basic calculus. Free software now includes programs in JAGS, which runs on Macintosh, Linux, and Windows.
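The book works in R and BUGS; purely to illustrate the core move in this list's running example language, here is the simplest possible Bayesian update, the conjugate Beta-Binomial, in Python (made-up numbers, not an example from the book):

    # Prior Beta(a, b) over a coin's bias; observe h heads in n flips.
    a, b = 1, 1    # uniform prior (an assumption for illustration)
    h, n = 7, 10   # hypothetical data
    post_a, post_b = a + h, b + (n - h)   # posterior is Beta(8, 4)
    print(post_a / (post_a + post_b))     # posterior mean: 0.666...

Real problems rarely admit conjugate shortcuts like this, which is exactly where the book's sampling tools (BUGS, and now JAGS) come in.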

Value of Information: 8 examples

48 gwern 18 May 2012 11:45PM

ciphergoth just asked what the actual value of Quantified Self/self-experimentation is. This finally tempted me into running value of information calculations on my own experiments. It took me all afternoon, because it turned out I didn’t actually understand how to do it, and I had a hard time figuring out the right values for specific experiments. (I may still not have gotten it right. Feel free to check my work!) Then it turned out to be too long for a comment, and as usual the master versions will be on my website at some point. But without further ado!
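For readers who want the shape of the computation: the expected value of perfect information is what you would gain, on average, by learning the truth before acting, minus the value of simply acting on your prior. A generic Python sketch with made-up numbers (illustrating the technique, not reproducing gwern's analysis):

    # Should I adopt an intervention whose benefit is uncertain?
    p_works = 0.3   # prior probability it works (assumed)
    benefit = 500   # value if adopted and it works (assumed)
    cost = 100      # cost of adopting, paid either way (assumed)

    # Acting on the prior: adopt iff expected value beats doing nothing.
    ev_adopt = p_works * benefit - cost      # 0.3*500 - 100 = 50
    best_without_info = max(ev_adopt, 0.0)   # 50

    # With perfect information, adopt only in the world where it works.
    ev_with_info = p_works * (benefit - cost)   # 0.3 * 400 = 120

    # EVPI = 70: an upper bound on what any experiment here is worth.
    print(ev_with_info - best_without_info)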

Overview article on FAI in a popular science magazine (Hebrew)

16 JoshuaFox 15 May 2012 11:09AM
A new article of mine has just appeared, in Hebrew and in hardcopy, in Galileo, Israel's top popular-science magazine.
It is titled "Superhuman Intelligence, Unhuman Intelligence" (super- and un- are homophones in Hebrew, a bit of wordplay).
You can read it here. [Edit: Here's an English version on the Singularity Institute site.]
The cover art, the "I, Robot" images, and the tag line ("Artificial Intelligence: Can we rein in the golem?") are a bit off; I didn't choose them, but that's par for the course.
To the best of my knowledge, this is the first feature article overviewing FAI in any popular-science publication (whether online or in hardcopy).
Here is the introduction to the article. (It avoids weasel words, but all necessary caveats are given in the body of the article.)
In coming decades, engineers will build an entity with intelligence at a level that can compete with humans. This entity will want to improve its own intelligence, and will be able to do so. The process of improvement will repeat until it reaches a level far above that of humans; the entity will then be able to achieve its goals efficiently. It is thus essential that its goals are good for humanity. To guarantee this, it is necessary to define the correct goals before this intelligence is built.

[Book Suggestions] Summer Reading for Younglings.

8 Karmakaiser 12 May 2012 04:57PM

I bought my niece a Kindle, which just arrived, and I'm about to load it up with books before giving it to her tomorrow for her birthday. I've decided to be a sneaky uncle and include good books that can teach her to think better, or at least to consider science cool and interesting. She is currently in the 4th grade, with 5th coming after the summer.

She reads basically at her own grade level, so while I'm open to stuffing the Kindle with books to be read when she's ready, I'd like to focus on giving her books she can read now. Ender's Game will be on there most likely. Game of Thrones will not.

What books would you give a youngling? Her interests currently trend toward the young mystery section, Hardy Boys and the like, but in my experience she is very open to trying new books, with particular interest in YA fantasy but not much interest in sci-fi (if I'm doing any other optimizing this year, I'll try to change her opinion on sci-fi).

Experiment: a good researcher is hard to find

29 gwern 30 April 2012 05:13PM

See previously “A good volunteer is hard to find”

Back in February 2012, lukeprog announced that SIAI was hiring more part-time remote researchers, and you could apply just by demonstrating your chops on a simple test: review the psychology literature on habit formation with an eye towards practical application. What factors strengthen new habits? How long do they take to harden? And so on. I was assigned to read through and rate the submissions so that Luke could look at them individually and decide whom to hire. We didn’t get as many submissions as we were hoping for, so in April Luke posted again, this time with a quicker, easier application form. (I don’t know how that has been working out.)

But in February, I remembered the linked post above from GiveWell where they mentioned many would-be volunteers did not even finish the test task. I did finish, and I didn’t find it that bad; it was actually a kind of interesting exercise in critical thinking and being careful. People suggested that perhaps the attrition was due not to low volunteer quality, but to the feeling that the volunteers were not appreciated and were doing useless makework. (The same reason so many kids hate school…) But how to test this?
