You're looking at Less Wrong's discussion board. This includes all posts, including those that haven't been promoted to the front page yet. For more information, see About Less Wrong.

[link] Mass replication of Psychology articles planned.

25 beoShaffer 18 April 2012 04:13PM

http://chronicle.com/blogs/percolator/is-psychology-about-to-come-undone/29045

The plan is to attempt to replicate all articles published in 2008 in three major psychology journals.

ETA: http://openscienceframework.org/ is the homepage of the group behind this. It's still in beta, but will eventually include some nifty-looking science toolkits in addition to the reproducibility project.

Server Sky: lots of very thin computer satellites

3 lsparrish 16 April 2012 11:05AM

The following is intended as (1) a request for specific criticisms regarding whether this project is worth an investment of time, and (2), should the answers to those be favorable, a request for further involvement from qualified individuals. It is not intended as a random piece of interesting pop-sci, despite the subject matter, but as a volunteer opportunity.

Server Sky is an engineering proposal to place thousands (eventually millions) of micron-thin satellites into medium orbit around the earth in the near term. It is being put forth by Keith Lofstrom, the inventor of the Launch Loop.

Abstract from the 2009 paper:

It is easier to move bits than atoms or energy. Server-sats are ultralight disks of silicon that convert sunlight into computation and communications. Powered by a large solar cell, propelled and steered by light pressure, networked and located by microwaves, and cooled by black-body radiation. Arrays of thousands of server-sats form highly redundant computation and database servers, as well as phased array antennas to reach thousands of transceivers on the ground.

First generation server-sats are 20 centimeters across (about 8 inches), 0.1 millimeters (100 microns) thick, and weigh 7 grams. They can be mass produced with off-the-shelf semiconductor technologies. Gallium arsenide radio chips provide intra-array, inter-array, and ground communication, as well as precise location information. Server-sats are launched stacked by the thousands in solid cylinders, shrouded and vibration-isolated inside a traditional satellite bus.


Links:

Papers and Presentations

Slide Show

Wiki Main Page

Help Wanted

Mailing List

Some mildly negative evidence to start with: I have already had a satellite scientist tell me that this seems unlikely to work. Avoiding space debris and Kessler Syndrome, radio communications difficulties (especially uplink), and the need for precise synchronization are the obstacles he stressed as significant. He did not seem to have studied the proposal closely, but this at least tells us to be careful where to set our priors.

On the other hand, it appears Keith has already given these problems a lot of thought, and solutions can probably be worked out. The thinsats would have optical thrusters (small solar sails) and would thus be able to move themselves and each other around; defective ones could be collected for disposal without mounting an expensive retrieval mission, and the thrusters would also help them avoid collisions in the first place. Furthermore, the zone chosen (the m288 orbit) is relatively unused, so collisions with other satellites are unlikely. The satellites also have powerful radar capabilities, which should make it easier to detect and eliminate space junk.

For the communications problem, the idea is to use three dimensional phased arrays of thinsats -- basically a bunch of satellites in a large block working in unison to generate a specific signal, behaving as if they were a much larger antenna. This is tricky and requires precision timing and exact distance information. The array's physical configuration will need to be randomized (or perhaps arranged according to an optimized pattern) in order to prevent grating lobes, a problem with interference patterns that is common with phased arrays. They would link with GPS and each other by radio on multiple bands to achieve "micron-precision thinsat location and orientation within the array".
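To make the grating-lobe point concrete, here is a toy one-dimensional sketch (the 10 GHz carrier, 64 elements, and 2-wavelength spacing are illustrative assumptions of mine, not Server Sky's parameters): a regular grid repeats its full-strength main beam at unwanted angles, while randomly jittering the element positions scatters that energy into a low noise floor.

```python
import numpy as np

c = 3.0e8                      # speed of light, m/s
wavelength = c / 10e9          # assume a 10 GHz carrier (hypothetical)
k = 2 * np.pi / wavelength     # wavenumber
n = 64                         # number of thinsats in the (1-D) array

# Element positions: a regular grid 2 wavelengths apart, versus the
# same grid with random jitter of up to one wavelength per element.
regular = np.arange(n) * 2.0 * wavelength
rng = np.random.default_rng(0)
jittered = regular + rng.uniform(-wavelength, wavelength, n)

def array_factor(positions, sin_theta):
    """Normalized far-field magnitude of n isotropic elements
    transmitting in phase (steered to broadside)."""
    phases = np.exp(1j * k * np.outer(sin_theta, positions))
    return np.abs(phases.sum(axis=1)) / len(positions)

sin_theta = np.linspace(-1.0, 1.0, 4001)
af_regular = array_factor(regular, sin_theta)
af_jittered = array_factor(jittered, sin_theta)

# Away from the main lobe, the regular grid repeats full-strength
# grating lobes (at sin(theta) = m/2 for 2-wavelength spacing);
# the jittered grid leaves only a low incoherent floor there.
off_axis = np.abs(sin_theta) > 0.1
print(af_regular[off_axis].max())   # close to 1.0 (grating lobe)
print(af_jittered[off_axis].max())  # far below 1.0
```

An optimized (rather than purely random) arrangement would push the off-axis floor down further still; the sketch only shows why a perfectly regular lattice is the one configuration to avoid.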

According to the wiki, the most likely technical show-stopper is radiation damage (which makes sense given m288's proximity to the inner Van Allen belt). Proposed fixes include periodic annealing (heating the circuit with a heating element) to repair the damage, and the use of radiation-resistant materials for circuitry.

Has anyone else here researched this idea, or have relevant knowledge? It seems like a great potential source of computing power for AI research, mind uploads, and so forth, but also for all those mundane, highly lucrative near term demands like web hosting and distributed business infrastructures.

From an altruistic standpoint, this kind of system could reduce poverty and make the distribution of computing resources more equitable. It could also make solving hard scientific problems like aging and cryopreservation easier, and pave the way to solar power satellites. As it scales, it should also create demand (as well as available funding and processing power) for Launch Loop construction, or some other similarly low-cost form of space travel.

The value of information about whether this can work therefore appears to be extremely high, which I think makes it a natural rationalist project. If it can work, the value of taking productive action (leadership, getting it funded, working out the problems, etc.) should be correspondingly high as well.


Update: Keith Lofstrom has responded on the wiki to the questions raised by the satellite scientist.

Note: Not all aspects of the project have complete descriptions yet, but there are answers to a lot of questions in the wiki.

Here is a summary list of questions raised and answers so far:

  • How does this account for Moore's Law? (kilobug)
    In his reply to the comments on Brin's post, Keith Lofstrom mentions using obsolete sats as ballast for much thinner sats that would be added to the arrays as the manufacturing process improves. Obsolete sats would not stay in use for long.
  • What about ping time limits? (kilobug)
    Ping times are going to be limited (70ms or so), and worse than you can theoretically get with a fat pipe (42ms), but it is still much better than you get with GEO (250+ ms). This is bad for high frequency trading, but fine for (parallelizable) number crunching and most other practical purposes.
  • What kind of power consumption? Doesn't it cost more to launch than you save? (Vanvier)
    It takes roughly  2 months for a 3 gram thinsat to pay for the launch energy if it gets 4 watts, assuming 32% fuel manufacturing efficiency. Blackbody cooling is another benefit.
  • Bits being flipped by cosmic radiation is a problem on earth, how can it be solved in space? (Vanvier)
    Flash memory is acknowledged to be the most radiation sensitive component of the satellite. The solution would involve extensive error correction software and caching on multiple satellites.
  • Periodic annealing tends to cause short circuits. Wouldn't this result in very short lifetimes? (Vanvier)
    Circuits will be manufactured as two-dimensional planes, which don't short as easily. Another significant engineering challenge: thermal properties of the glass will need to be matched with the silicon and wires (for example, slotted wiring with silicon dioxide between the gaps) to prevent circuit damage. Per Vanvier, it may be less expensive to replace silicon with other materials for this purpose.
  • What are the specific advantages of putting servers in space? (ZankerH)
    Efficient power/cooling, increased communications, overall scalability, relative lack of environmental impact.
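As a rough sanity check on the launch-energy answer above (a back-of-the-envelope sketch; the orbit altitude below is an assumed round number, not a figure quoted from the wiki): "2 months at 4 watts" for a 3 gram thinsat implies an energy cost per launched kilogram about two orders of magnitude above the ideal orbital energy, which is plausible once rocket inefficiency and the 32% fuel-manufacturing efficiency are folded in.

```python
# Back-of-the-envelope check of the "2 months at 4 W pays back the
# launch energy" answer. The orbit altitude is an assumption.
MU = 3.986e14        # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6    # Earth radius, m
ALTITUDE = 6.4e6     # assumed m288-class orbit altitude, m

r = R_EARTH + ALTITUDE
# Ideal specific energy from rest on the surface to a circular orbit
# (ignoring Earth's rotation, drag, and gravity losses):
ideal_j_per_kg = MU / R_EARTH - MU / (2 * r)

# Energy a 4 W thinsat collects in ~2 months, per kg of its 3 g mass:
collected = 4.0 * 60 * 86400           # joules in 60 days at 4 W
collected_j_per_kg = collected / 0.003

print(ideal_j_per_kg / 1e6)       # ~47 MJ/kg ideal orbital energy
print(collected_j_per_kg / 1e9)   # ~6.9 GJ/kg collected in 2 months
```

The ~150x gap between the two figures is where the real launch costs live: fuel chemical energy versus payload energy, staging, and the 32% fuel-manufacturing efficiency.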

Yet to be answered:

  • Is the amount of speculative tech too high? E.g. if future kinds of RAM are needed, costs may be higher. (Vanvier)
  • Is it easier to replace silicon with something else than find ways to make the rest of the sat match thermal expansion of silicon? (Vanvier)
  • Can we get more data on economics/business plan? (Vanvier)
  • Solar sails have been known to stick together. Is this a problem for thinsats, which are shipped stuck together? (Vanvier)
  • Do most interesting processes bottleneck on communication efficiency? (skelterpot)
  • What decreases in cost might we see with increased manufacturing yield? (skelterpot)

Insightful comments:

  • Launch energy vs energy collection (answer above is more specific, but this was a commendable quick-check).  (tgb)
  • ECC RAM is standard technology used in server computers. (JoachimShipper)
  • Fixing bit errors outside the memory (e.g. in CPU) is harder, something like Tandem Computers could be used, with added expense. (JoachimShipper)
  • Some processor-heavy computing tasks, like calculating scrypt hashes, are not very parallelizable. (skelterpot)
  • Other approaches like redundant hardware and error-checking within the CPU are possible, but they drive up the die area used. (skelterpot)

Influence of scientific research

7 alex_zag_al 09 April 2012 07:06AM

I'm an undergraduate studying molecular biology, and I am thinking of going into science. In Timothy Gowers's "The Importance of Mathematics", he says that many mathematicians just do whatever interests them, regardless of social benefit. I'd rather do something of interest or technological benefit to people outside of a small group with a very specific education.

Does anybody have any thoughts or links on judging the impact of the work on a research topic?

Clearly, the pursuit of a research topic must produce truth to be helpful, and I've read Vladimir_M's heuristics regarding this.

Here's something I've tried. My current lab work is on the structure of membrane proteins in bacteria, so this is something I did to see where all this work on protein structure goes. I took a paper that I had found to be a very useful reference for my own work, about a protein that forms a pore in the bacterial membrane with a flexible loop, experimenting with the influence of this loop on the protein's structure. I used the Web of Science database to find a list of about two thousand papers that cited papers that cited this loop paper. I looked through this two-steps-away list for the ones that were not about molecules. Without too much effort, I found a few. The farthest from molecules was a paper on a bacterium that sometimes causes meningitis, discussing a particular stage in its colonization of the human body. A few of the two-steps-away articles were about antibiotics discovery; though molecular, this is a topic with a great deal of impact outside the world of research on biomolecules.

Though it occurs to me that it might be more fruitful to look the other way around: to identify some social benefits or interests people have, and see what scientific research is contributing the most to them.

[LINK] Neil deGrasse Tyson on killer asteroids

2 David_Gerard 03 April 2012 06:32PM

LessWrong is not big on discussion of non-AI existential risks. But Neil deGrasse Tyson notes killer asteroids not just as a generic problem, but as a specific one, naming Apophis as an imminent hazard.

So treat this as your exercise for today: what are the numbers, what is the risk, what are the costs, what actions are appropriate? Assume your answers need to work in the context of a society that's responded to the notion of anthropogenic climate change with almost nothing but blue vs. green politics.

"The Journal of Real Effects"

13 CarlShulman 05 March 2012 03:07AM

Luke's recent post mentioned that The Lancet has a policy encouraging the advance registration of clinical trials, while mine examined an apparent case study of data-peeking and on-the-fly transformation of studies. But how much variation is there across journals on such dimensions? Are there journals that buck the standards of their fields (demanding registration, p=0.01 rather than p=0.05 where the latter is typical in the field, advance specification of statistical analyses and subject numbers, etc)? What are some of the standouts? Are there fields without any such?

I wonder if there is a niche for a new open-access journal, along the lines of PLoS, with standards strict enough to reliably exclude false-positives. Some possible titles:

 

  • The Journal of Real Effects
  • (Settled) Science
  • Probably True
  • Journal of Non-Null Results, Really
  • Too Good to Be False
  • _________________?

 

Art vs. science

5 PhilGoetz 01 March 2012 11:09PM

It struck me this morning that a key feature that distinguishes art from science is that art is studied in the context of the artist, while science is not.  When you learn calculus, mechanics, or optics, you don't read Newton.  Science has content that can be abstracted out of one context - including the context of its creation - and studied and used in other contexts.  This is a defining characteristic.  Whereas art can't be easily removed from its context - one could argue art is context.  When we study art, we study the original work by a single artist, to get that artist's vision.

(This isn't a defining characteristic of art - it wasn't true until the twelfth century, when writers and artists began signing their works.  From ancient Greece through the Middle Ages in Europe, the content, subject, or purpose of art was considered primary, in the same way that the content of science is today.  "Homer's" Iliad was a collaborative project, in which many authors (presumably) agreed that the story was the important thing, not one author's vision of it, and (also presumably) added to it in much the way that science is cumulative today.  Medieval art generally glorified the church or the state.)

However, because this is the way western society views art today, we can use this as a test.  Is it art or science?  Well, is its teaching organized around the creators, or around the content?

Philosophy and linguistics are somewhere between art and science by this test.  So is symbolic AI, while data mining is pure science.

[Link] An argument for Low-hanging fruit in Medicine

11 [deleted] 22 February 2012 03:43PM

Those of us who have found Peter Thiel's and Tyler Cowen's arguments for stagnation in our near future pretty convincing usually look only to the information and computer industries as the thing that is, and perhaps even can, keep us afloat. On the excellent West Hunter blog (which he shares with Henry Harpending), Gregory Cochran speculates that there might be room for progress in a seemingly unlikely field.

Low-hanging fruit

In The Great Stagnation, Tyler Cowen discusses a real problem – a slowdown in technical innovation, with slow economic growth as a consequence. I think his perspective is limited, since he doesn’t know much about the inward nature of innovation. He is kind enough to make absolutely clear how little he knows by mentioning Tang and Teflon as spinoffs of the space program, which is of course wrong. It is unfair to emphasize this too strongly, since hardly anybody in public life knows jack shit about technology and invention. Try to think of a pundit with a patent.

Anyhow, it strikes me that a certain amount of knowledge may lead to useful insights. In particular, it may help us find low-hanging fruit: technical innovations that are tasty and relatively easy – the sort of thing that seems obvious after someone thinks of it.

If we look at cases where an innovation or discovery was possible – even easy – for a long time before it was actually developed, we might be able to find patterns that would help us detect the low-hanging fruit  dangling right in front of us today.

For now, one example. We know that gastric and duodenal ulcers, and most cases of stomach cancer, are caused by an infectious organism, Helicobacter pylori. It apparently causes amnesia as well. This organism was first seen in 1875 – nobody paid any attention.

Letulle showed that it induced gastritis in guinea pigs, 1888. Walery Jaworski rediscovered it in 1889, and suspected that it might cause gastric disease. Nobody paid any attention.  Krienitz associated it with gastric cancer in 1906.  Who cares?

Around 1940, some American researchers rediscovered it, found it more common in ulcerated stomachs, and published their results. Some of them thought that this might be the cause of ulcers – but Palmer, a famous pathologist, couldn’t find it when he looked in the early 50s, so it officially disappeared again. He had used the wrong stain. John Lykoudis, a Greek country doctor, noticed that a heavy dose of antibiotics coincided with his ulcer’s disappearance, and started treating patients with antibiotics – successfully. He tried to interest pharmaceutical companies – wrote to Geigy, Hoechst, Bayer, etc. No joy. JAMA rejected his article. The local medical society referred him for disciplinary action and fined him.

The Chinese noticed that antibiotics could cure ulcers in the early 70s, but they were Commies, so it didn’t count.

Think about it: peptic and duodenal ulcers were fairly common, and so were effective antibiotics, starting in the mid-40s. Every internist in the world – every surgeon – every GP was accidentally curing ulcers – not just once or twice, but again and again. For decades. Almost none of them noticed it, even though it was happening over and over, right in front of their eyes. Those who did notice were ignored until the mid-80s, when Robin Warren and Barry Marshall finally made the discovery stick. Even then, it took something like 10 years for antibiotic treatment of ulcers to become common, even though it was cheap and effective. Or perhaps because it was cheap and effective.

This illustrates an important point: doctors are lousy scientists, lousy researchers.  They’re memorizers, not puzzle solvers.  Considering that Western medicine was an ineffective pseudoscience – actually, closer to a malignant pseudoscience  – for its first two thousand years, we shouldn’t be surprised.    Since we’re looking for low-hanging fruit,  this is good news.  It means that the great discoveries in medicine are probably not mined out. From our point of view, past incompetence predicts future progress.  The worse, the better!

Link to post.

I think Greg is underestimating the problems of massive over-regulation and guild-like rent-seeking, which limit medical research and the provision of medical advice quite severely. He does, however, make a compelling case that there is still low-hanging fruit which a more scientific and rational approach could easily pluck. I also can't help but wonder if investigating older, supposedly disproved, treatments and theories together with novel research might turn up a few interesting things.

Many on LessWrong share Greg's estimate of the medical establishment's incompetence, but how many share his optimism that our lack of recent progress isn't just the result of dealing with a really difficult problem set? It may be hard to tell whether he is right.

[LINK] The Hacker Shelf, free books.

9 [deleted] 14 February 2012 04:52PM

Yes, this is a repost from Hacker News, but I want to point out some books that are of LW-related interest.

The Hacker Shelf is a repository of freely available textbooks. Most of them are about computer programming or the business of computer programming, but there are a few that are perhaps interesting to the LW community. All of these were publicly available beforehand, but I'm linking to the aggregator in hopes that people can think of other freely available textbooks to submit there.

The site is in its beginning explosion phase; in the time it took to write this post, it doubled in size. If previous sites are any indication, it will crest in a month or so. People will probably lose interest after three months, and after a year the site will probably silently close shop.

MacKay, Information Theory, Inference, and Learning Algorithms

I really wish I had an older version of this book; the newer one has been marred by a Cambridge UP ad on the upper margin of every page. Publishers ruin everything.

The book covers reasonably concisely the basics of information theory and Bayesian methods, with some game theory and coding theory (in the sense of data compression) thrown in on the side. The style takes after Knuth, but refrains from the latter's more encyclopedic tendencies. It's also the type of book that gives a lot of extra content in the exercises. It unfortunately assumes a decent amount of mathematical knowledge — linear algebra and calculus, but nothing you wouldn't find on the Khan Academy.

Hacker Shelf review, book website.

Easley and Kleinberg, Networks, Crowds, and Markets

There's just a lot of stuff in this book, most of it of independent interest. The thread that ties the book together is graph theory, and with it they cover a great deal of game theory, voting theory, and economics. There are lots of graphs and pictures, and the writing style is pretty deliberate and slow-paced. The math is not very intense; all their probability spaces are discrete, so there's no calculus, and only a few touches of linear algebra.

Hacker Shelf review, book website.

Gabriel, Patterns of Software

This is a more fluffy book about the practice of software engineering. It's rather old, but I'm linking to it anyway because I agree with the author's feeling that the software engineering discipline has more or less misunderstood Christopher Alexander's work on pattern languages. The author tends to ramble on. I think there's some good wisdom about programming practices and organizational management in general that one could abstract away from this book.

Hacker Shelf link, book website (scroll down).

Nisan et al., Algorithmic Game Theory

I hesitate to link this because the math level is exceptionally high, perhaps high enough that anyone who can read the book probably knows the better part of its contents already. But game/decision theory is near and dear to LW's heart, so perhaps someone will gather some utility from this book. There's an awful lot going on in it. A brief selection: a section on the relationship between game theory and cryptography, a section on computation in prediction markets, and a section analyzing the incentives of information security.

Hacker Shelf review, book.

[LINK] Learning enhancement using "transcranial direct current stimulation"

7 Alex_Altair 26 January 2012 04:18PM

Article here:

http://www.ox.ac.uk/media/science_blog/brainboosting.html

Recent research in Oxford and elsewhere has shown that one type of brain stimulation in particular, called transcranial direct current stimulation or TDCS, can be used to improve language and maths abilities, memory, problem solving, attention, even movement.

Critically, this is not just helping to restore function in those with impaired abilities. TDCS can be used to enhance healthy people’s mental capacities. Indeed, most of the research so far has been carried out in healthy adults.

The article goes on to discuss the ethics of the technique.

Visualizing effect sizes

0 EvelynM 04 January 2012 05:18AM

http://healthyinfluence.com/wordpress/steves-primer-of-practical-persuasion-3-0/intro/windowpane/

"The point of this demonstration is to show that you can think with numbers in a practical and efficient way without having a statistician in the room. Anyone can handle the windowpane approach with numbers. Just have a clear definition of Changed? (Yes or No) and a clear definition of the Group (Treatment or Control). Then just count and look for percentage differences. A 10% difference is small, 30% is moderate, and 50% is large. And realize that while “small” may be hard to detect, it can definitely have a big practical effect.

Now whether you conceptualize Effect Sizes as windowpanes or jars with marbles, you now understand what the idea, Difference, means.  You can count or see No, Small, Medium, or Large Differences and interpret those complex statistical arguments you encounter all the time.  Realize again, that this approach is not Statistics for Dummies, Idiots, or Fools, but is a standard and mathematically correct way to present quantitative information."
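The counting procedure the quote describes can be sketched in a few lines (the subject counts below are made up for illustration):

```python
def windowpane(changed_treatment, n_treatment, changed_control, n_control):
    """Percent changed in each group, and the treatment-control gap."""
    pct_treatment = 100.0 * changed_treatment / n_treatment
    pct_control = 100.0 * changed_control / n_control
    return pct_treatment, pct_control, pct_treatment - pct_control

# Hypothetical study: 63 of 90 treated subjects changed, vs. 28 of 85 controls.
pt, pc, diff = windowpane(63, 90, 28, 85)
print(round(pt), round(pc), round(diff))  # 70 33 37
```

By the quote's anchors (10% small, 30% moderate, 50% large), this 37-point gap would count as a moderate-to-large difference.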

http://www.psychologicalscience.org/journals/pspi/pspi_8_2_article.pdf

tl;dr: Natural frequencies (ratios of counts of subjects) are easier for people to comprehend than conditional probabilities.
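To illustrate the point, here is the familiar screening-test example recast as natural frequencies (the 1% base rate, 80% sensitivity, and 9.6% false-positive rate are stock illustrative numbers, not figures from the linked paper):

```python
def as_natural_frequencies(base_rate, sensitivity, false_pos_rate,
                           population=1000):
    """Re-express a Bayes problem as rough counts out of a population."""
    affected = base_rate * population
    true_positives = sensitivity * affected
    false_positives = false_pos_rate * (population - affected)
    posterior = true_positives / (true_positives + false_positives)
    return round(affected), round(true_positives), round(false_positives), posterior

affected, hits, false_alarms, p = as_natural_frequencies(0.01, 0.80, 0.096)
# "Of 1000 people, 10 are affected; 8 of them test positive, as do
# 95 of the 990 unaffected" -- so only 8 of ~103 positives are real.
print(affected, hits, false_alarms, round(100 * p))  # 10 8 95 8
```

Stated as counts, the surprisingly low posterior (~8%) is obvious; stated as conditional probabilities, most people guess far too high.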

"Trials and Errors: Why Science Is Failing Us"

7 gwern 19 December 2011 06:48PM

Jonah Lehrer has up another of his contrarian science articles: "Trials and Errors: Why Science Is Failing Us".

Main topics: the failure of drugs in clinical trials, diminishing returns to pharmaceutical research, doctors over-treating, and the Humean distinction between causation and correlation, with some Ioannidis mixed throughout.

See also "Why epidemiology will not correct itself"


In completely unrelated news, Nick Bostrom is stepping down as Chairman of IEET's Board of Directors.

Why we need better science, example #6,281

32 lukeprog 10 December 2011 11:25PM

Avorn (2004) reports:

In a former British colony, most healers believed the conventional wisdom that a distillation of fluids extracted from the urine of horses, if dried to a powder and fed to aging women, could act as a general tonic, preserve youth, and ward off a variety of diseases. The preparation became enormously popular throughout the culture, and was widely used by older women in all strata of society. Many years later modern scientific studies revealed that long-term ingestion of the horse-urine extract was useless for most of its intended purposes, and that it causes tumors, blood clots, heart disease, and perhaps brain damage.

The former colony is the United States; the time is now; the drug is the family of hormone replacement products that include Prempro and Premarin (manufactured from pregnant mares' urine, hence its name). For decades, estrogen replacement in postmenopausal women was widely believed to have "cardio-protective" properties; other papers in respected medical journals reported that the drugs could treat depression and incontinence, as well as prevent Alzheimer's disease. The first large, well-conducted, controlled clinical trial of this treatment in women was not published until 1998: it found that estrogen replacement actually increased the rate of heart attacks in the patients studied. Another clinical trial published in 2002 presented further evidence that these products increased the risk of heart disease, stroke, and cancer. Further reports a year later found that rather than preventing Alzheimer's disease, the drugs appeared to double the risk of becoming senile. 

Armstrong (2006) adds:

The treatment seemed to work because those who used the drug tended to be healthier than those who did not. This was because it was used by people who were more interested in taking care of their health.

Kickstarter fundraising for largest Tesla Coils in history

1 Kevin 28 November 2011 05:48AM

"If the government is not willing to fund the building of two 10-story tall Tesla Coils, then why the hell do I even pay taxes?"

This seems like by far the best investment of $300,000 out there, if your metric is revolutionary new physics discovered per dollar. I pointed the founder at Thiel's Breakout Labs, which is probably more suited to this kind of thing than Kickstarter. But there is still a very non-negligible chance that the Kickstarter Grant will come to fruition.

Bayes Slays Goodman's Grue

0 potato 17 November 2011 10:45AM

This is a first stab at solving Goodman's famous grue problem. I haven't seen a post on LW about the grue paradox, and this surprised me, since I had figured that if any arguments would be raised against Bayesian LW doctrine, it would be the grue problem. I haven't looked at many proposed solutions to this paradox, besides some of the basic ones in "The New Riddle of Induction". So, I apologize now if my solution is wildly unoriginal. I am willing to put you through this, dear reader, because:

  1. I wanted to see how I would fare against this still largely open, devastating, and classic problem, using only the arsenal provided to me by my minimal Bayesian training, and my regular LW reading.
  2. I wanted the first LW article about the grue problem to attack it from a distinctly Lesswrongian approach, without the benefit of hindsight knowledge of the solutions of non-LW philosophy.
  3. And lastly, because, even if this solution has been found before, if it is the right solution, it is to LW's credit that its students can solve the grue problem with only the use of LW skills and cognitive tools.

I would also like to warn the savvy subjective Bayesian that just because I think that probabilities model frequencies, and that I require frequencies out there in the world, does not mean that I am a frequentist or a realist about probability. I am a formalist with a grain of salt. There are no probabilities anywhere in my view, not even in minds; but the theorems of probability theory, when interpreted, share a fundamental contour with many important tools of the inquiring mind, including both the nature of frequency and the set of rational subjective belief systems. There is nothing more to probability than that system which produces its theorems.

Lastly, I would like to say that even if I have not succeeded here (which I think I have), there is likely something valuable that can be made from the leftovers of my solution after the onslaught of penetrating critiques that I expect from this community. Solving this problem is essential to LW's methods, and our arsenal is fit to handle it. If we are going to be taken seriously in the philosophical community as a new movement, we must solve serious problems from academic philosophy, and we must do it in distinctly Lesswrongian ways.


"The first emerald ever observed was green.
The second emerald ever observed was green.
The third emerald ever observed was green.
… etc.
The nth emerald ever observed was green.
(conclusion):
There is a very high probability that a never before observed emerald will be green."

That is the inference that the grue problem threatens, courtesy of Nelson Goodman.  The grue problem starts by defining "grue":

"An object is grue iff it is first observed before time T, and it is green, or it is first observed after time T, and it is blue."

So you see that before time T, from the list of premises:

"The first emerald ever observed was green.
 The second emerald ever observed was green.
 The third emerald ever observed was green.
 … etc.
 The nth emerald ever observed was green."
 (we will call these the green premises)

it follows that:

"The first emerald ever observed was grue.
The second emerald ever observed was grue.
The third emerald ever observed was grue.
… etc.
The nth emerald ever observed was grue."
(we will call these the grue premises)

The proposer of the grue problem asks at this point: "So if the green premises are evidence that the next emerald will be green, why aren't the grue premises evidence for the next emerald being grue?" If an emerald is grue and first observed after time T, it is not green. Let's say that the green premises bring the probability of "A new unobserved emerald is green" to 99%. By the skeptic's symmetry argument, they should also bring the probability of "A new unobserved emerald is grue" to 99%. But after time T, this would mean that the probability of observing a green emerald is 99%, and the probability of not observing a green emerald is at least 99%. Since these sentences have no intersection, i.e., they cannot happen together, the probability of their disjunction is just the sum of their individual probabilities. This gives us a number at least as big as 198%, which is of course a contradiction of the Kolmogorov axioms: we should not be able to form a statement with a probability greater than one.

This threatens the whole of science, because you cannot simply keep the problem isolated to emeralds and color. We may think of the emeralds as trials, and green as the value of a random variable. Ultimately, every result of a scientific instrument is a random variable, with a very particular and useful distribution over its values. If we can't justify inferring probability distributions over random variables based on their previous results, we cannot justify a single bit of natural science. This, of course, says nothing about how it works in practice. We all know it works in practice. "A philosopher is someone who says, 'I know it works in practice, I'm trying to see if it works in principle.'" - Dan Dennett

We may look at an analogous problem. Suppose that balls are being dropped onto a table, and that somewhere on the table an infinitely thin line has been drawn perpendicular to its edge, at a position we do not know. The problem is to figure out the probability of the next ball landing right of the line, given the previous results. Our first prediction should be a 50% chance of the ball landing right of the line, by symmetry. If one ball lands right of the line, then by Laplace's rule of succession we infer a 2/3 chance that the next ball will land right of it too. After n trials, if every trial gives a positive result, the probability we should assign to the next trial being positive as well is (n+1)/(n+2).
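Laplace's rule is simple enough to state as code; this sketch just computes (s+1)/(n+2) exactly:

```python
from fractions import Fraction

def rule_of_succession(successes, trials):
    """Laplace's rule: P(next trial succeeds) = (successes + 1) / (trials + 2)."""
    return Fraction(successes + 1, trials + 2)

print(rule_of_succession(0, 0))    # 1/2 -- no data yet, symmetry alone
print(rule_of_succession(1, 1))    # 2/3 -- one ball landed right of the line
print(rule_of_succession(10, 10))  # 11/12 -- n positive trials out of n
```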

If the line were placed two-thirds of the way across the table, we should expect the ratio of Rights to Lefts to approach 2:1. Each ball then has a 2/3 chance of landing on the right, and the fraction of Rights out of all trials approaches 2/3 ever more closely as more trials are performed.

Now let us suppose a grue skeptic approaching this situation. He might make up two terms, "reft" and "light", defined as you would expect, but just in case:

"A ball lands reft of the line iff it lands right of it before time T, or left of it after time T.
 A ball lands light of the line iff it lands left of it before time T, or right of it after time T."

The skeptic would continue:

"Why should we treat the observation of several occurrences of Right, as evidence for 'The next ball will land on the right.' and not as evidence for 'The next ball will land reft of the line.'?"

Things become perfectly clear at this point for the defender of Bayesian inference, because now we have an easy-to-imagine model. If a ball landing right of the line is evidence for Right, then it cannot possibly be evidence for ~Right. After time T, Reft is logically identical to ~Right, so to be evidence for Reft after time T is to be evidence for ~Right; hence previous Rights are not evidence for Reft after time T, for the same reason they are not evidence for ~Right. Before time T, of course, any evidence for Reft is evidence for Right, for analogous reasons.

But now the grue skeptic can say something brilliant, that stops much of what the Bayesian has proposed dead in its tracks:

"Why can't I just repeat that paragraph back to you and swap every occurrence of 'right' with 'reft' and 'left' with 'light', and vice versa? They are perfectly symmetrical in their logical relations to one another.
If we take 'reft' and 'light' as primitives, then we have to define 'right' and 'left' in terms of 'reft' and 'light' with the use of time intervals."

What can we possibly reply to this? Can't the skeptic do this with every argument we propose? Certainly, the skeptic admits that Bayes, and the contradiction between Right and Reft after time T, prohibits previous Rights from being evidence for both Right and Reft after time T; where he challenges us is in choosing Right as the result the evidence supports, even though "Reft" and "Right" have a completely symmetrical syntactical relationship. Nothing in the definitions of reft and right distinguishes them from each other, except their spelling. So is that it? No. It simply means we must propose an argument that does not rely on purely syntactical reasoning, so that if the skeptic performs the swap on our argument, the resulting argument is no longer sound.

What would happen in this scenario if it were actually set up? That may seem like a strangely concrete question for a philosophy text, but its answer is a helpful hint. After time T, the ratio Rights:Lefts would continue to behave as expected as more trials were added, while the ratio Refts:Lights would approach the reciprocal of Rights:Lefts. The only way for this not to happen is for us to have been calling the right side of the table "reft" all along, or for the line to have moved. We can only figure out where the line is by knowing where the balls landed relative to it; anything we can figure out about the line's position from knowing which balls landed Reft and which landed Light, we can figure out only because, knowing this and the time, we can recover whether each ball landed left or right of the line.
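The behavior described above can be simulated directly. This is only a sketch: the line position (2/3 of the table lying to its right), the value of T, and the trial count are all hypothetical parameters chosen for illustration.

```python
import random

random.seed(0)
P_RIGHT = 2/3    # hypothetical: the line sits so 2/3 of the table is right of it
T = 200          # hypothetical time T
N = 10_000       # total number of trials

rights = 0
refts_after_T = 0
for t in range(N):
    is_right = random.random() < P_RIGHT
    rights += is_right
    # "reft": right of the line before time T, left of it afterwards
    is_reft = is_right if t < T else (not is_right)
    if t >= T:
        refts_after_T += is_reft

print(rights / N)               # stays near 2/3: the line never moves
print(refts_after_T / (N - T))  # near 1/3: Refts now accumulate at the
                                # reciprocal rate, just as the text predicts
```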

To this I know of no reply the grue skeptic can make. If he or she says the paragraph back to me with the proper words swapped, it is simply false: in the hypothetical where we have a table, a line, and we are calling one side right and the other left, the only way for Refts:Lights to behave as expected as more trials are added is for the line to move (if even that); otherwise the ratio of Refts to Lights will approach the reciprocal of Rights to Lefts.

This thin line is analogous to the frequency of emeralds that turn out green out of all the emeralds that get made. This is why we can assume that the line will not move: that frequency has one precise value, which never changes. The line's other important feature is to remind us that even if two terms are syntactically symmetrical, they may have semantic conditions of application which the syntactical model ignores, e.g., checking to see which side of the line the ball landed on.

In conclusion:

Every random variable has as a part of it, stored in its definition/code, a frequency distribution over its values. By the fact that some things happen sometimes, and others happen other times, we know that the world contains random variables, even if they are never fundamental in the source code. Note that "frequency" here is not a state of partial knowledge; it is a fact about a set and one of its subsets.

The reason that:

"The first emerald ever observed was green.
The second emerald ever observed was green.
The third emerald ever observed was green.
… etc.
The nth emerald ever observed was green.
(conclusion):
There is a very high probability that a never before observed emerald will be green."

is a valid inference, but the grue equivalent isn't, is that grue is not a property that the emerald construction sites of our universe deal with. They are blind to the grueness of their emeralds; they determine only whether the next emerald will be green. It may be that the rule the emerald construction sites use to produce a green or non-green emerald changes at time T, but the frequency of some particular result out of all trials will never change; the line will not move. Before time T, observing many green emeralds is evidence that the next one will be grue; after time T, every record of an observation of a green emerald is evidence against a grue one. "Grue" changes its meaning from green to blue at time T, while "green" keeps its meaning, since we use the same physical test to determine green-hood throughout, just as we use the same test to tell whether a ball landed right or left. There is no reft in the universe's source code, and there is no grue. Green is not fundamental in the source code either, but green can be reduced to some particular range of quanta states. If you had the universe's source code, you could not write grue without first writing green, whereas writing green without knowing a thing about grue would be no harder than writing it while knowing grue. Having a consistent physical test, a primary condition of applicability, is what privileges green over grue after time T; to have such a test is to reduce to a specifiable range of physical parameters, and the existence of such a test is what prevents the skeptic from performing his or her swaps on our arguments.
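The asymmetry claimed above ("you couldn't write grue without first writing green") can be made vivid in code. This is a toy sketch, not physics: T, the wavelength ranges, and the emerald representation are all hypothetical stand-ins for the physical test the text refers to.

```python
# Toy "source code" sketch: grue is parasitic on green.
T = 100  # hypothetical time T

def is_green(emerald):
    # stands in for a physical test over a range of quanta states
    return emerald["wavelength_nm"] in range(495, 570)

def is_blue(emerald):
    return emerald["wavelength_nm"] in range(450, 495)

def is_grue(emerald, first_observed_at):
    # grue cannot be defined without invoking green (and a clock)
    if first_observed_at < T:
        return is_green(emerald)
    return is_blue(emerald)

emerald = {"wavelength_nm": 520}
print(is_grue(emerald, 50))   # True: before T, grue coincides with green
print(is_grue(emerald, 150))  # False: after T, grue means blue
```

Note that `is_green` needs no reference to `is_grue` or to any clock, while `is_grue` must call both color tests and consult the time; that is the asymmetry the syntactical model hides.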


Take this more as a brainstorm than as a final solution. It wasn't originally, but it should have been. I'll write something more organized and concise after I think about the comments more, and after I make some graphics I've designed that make my argument much clearer, even to myself. But keep those comments coming, and tell me if you want specific credit for anything you may have added to my grue toolkit in the comments.

Cancer scientist meets amateur (This American Life)

1 arundelo 15 November 2011 01:59AM

This American Life episode 450: "So Crazy It Just Might Work". The whole episode is good, but act one (6:48-42:27) is relevant to LW, about a trained scientist teaming up with an amateur on a cancer cure.

It's downloadable until 19 Nov 2011 or so, and streamable thereafter.

(Technical nit: It sounds to me like the reporter doesn't know the difference between sound and electromagnetism.)

Edit: Here's a quick rot13ed summary: Vg qbrfa'g tb jryy. Nagubal Ubyynaq frrf rkcrevzragny pbagebyf naq ercebqhpvovyvgl nf guvatf gung trg va uvf jnl. Ur frrzf gb unir gnxra [gur Penpxcbg Bssre](uggc://yrffjebat.pbz/yj/w8/gur_penpxcbg_bssre/).

The promise of connected science

21 lukeprog 12 November 2011 06:22PM

Sometimes, scientific discovery is just a matter of sitting down and using the tools of "connected science" already available to us. Stories like this one underscore the need for generalists:

Don Swanson seems an unlikely person to make medical discoveries. A retired but still active information scientist at the University of Chicago, Swanson has no medical training, does no medical experiments, and has never had a laboratory. Despite this, he’s made several significant medical discoveries. One of the earliest was in 1988, when he investigated migraine headaches, and discovered evidence suggesting that migraines are caused by magnesium deficiency. At the time the idea was a surprise to other scientists studying migraines, but Swanson’s idea was subsequently tested and confirmed in multiple therapeutic trials by traditional medical groups.

How is it that someone without any medical training could make such a discovery? Although Swanson had none of the conventional credentials of medical research, what he did have was a clever idea. Swanson believed that scientific knowledge had grown so vast that important connections between subjects were going unnoticed, not because they were especially subtle or hard to grasp, but because no one had a broad enough understanding of science to notice those connections: in a big enough haystack, even a 50-foot needle may be hard to find. Swanson hoped to uncover such hidden connections using a medical search engine called Medline, which makes it possible to search millions of scientific papers in medicine—you can think of Medline as a high-level map of human medical knowledge. He began his work by using Medline to search the scientific literature for connections between migraines and other conditions. Here are two examples of connections he found: (1) migraines are associated with epilepsy; and (2) migraines are associated with blood clots forming more easily than usual. Of course, migraines have been the subject of much research, and so those are just two of a much longer list of connections that he found. But Swanson didn’t stop with that list. Instead, he took each of the associated conditions and then used Medline to find further connections to that condition. He learned that, for example, (1) magnesium deficiency increases susceptibility to epilepsy; and (2) magnesium deficiency makes blood clot more easily. Now, when he began his work Swanson had no idea he’d end up connecting migraines to magnesium deficiency. But once he’d found a few papers suggesting such two-stage connections between magnesium deficiency and migraines, he narrowed his search to concentrate on magnesium deficiency, eventually finding eleven such two-stage connections to migraines. 
Although this wasn’t the traditional sort of evidence favored by medical scientists, it nonetheless made a compelling case that migraines are connected to magnesium deficiency. Before Swanson’s work a few papers had tentatively (and mostly in passing) suggested that magnesium deficiency might be connected to migraines. But the earlier work wasn’t compelling, and was ignored by most scientists. By contrast, Swanson’s evidence was highly suggestive, and it was soon followed by therapeutic trials that confirmed the migraine-magnesium connection.

From Reinventing Discovery by Michael Nielsen (a past Singularity Summit speaker).

Sometimes, talking the issue through *works*

16 lukeprog 11 November 2011 09:51PM

Michael Nielsen's new book Reinventing Discovery is invigorating. Here's one passage on how a small group talked an issue through and had a large impact on scientific progress:

Why is it that biologists share genetic data in GenBank in the first place? When you think about it, it’s a peculiar choice: if you’re a professional biologist it’s to your advantage to keep data secret as long as possible. Why share your data online before you get a chance to publish a paper or take out a patent on your work? In the scientific world it’s papers and, in some fields, patents that are rewarded by jobs and promotions. Publicly releasing data typically does nothing for your career, and might even damage it, by helping your scientific competitors.

In part for these reasons, GenBank took off slowly after it was launched in 1982. While many biologists were happy to access others’ data in GenBank, they had little interest in contributing their own data. But that has changed over time. Part of the reason for the change was a historic conference held in Bermuda in 1996, and attended by many of the world’s leading biologists, including several of the leaders of the government-sponsored Human Genome Project. Also present was Craig Venter, who would later lead a private effort to sequence the human genome. Although many attendees weren’t willing to unilaterally make the first move to share all their genetic data in advance of publication, everyone could see that science as a whole would benefit enormously if open sharing of data became common practice. So they sat and talked the issue over for days, eventually coming to a joint agreement—now known as the Bermuda Agreement—that all human genetic data should be immediately shared online. The agreement wasn’t just empty rhetoric. The biologists in the room had enough clout that they convinced several major scientific grant agencies to make immediate data sharing a mandatory requirement of working on the human genome. Scientists who refused to share data would get no grant money to do research. This changed the game, and immediate sharing of human genetic data became the norm. The Bermuda agreement eventually made its way to the highest levels of government: on March 14, 2000, US President Bill Clinton and UK Prime Minister Tony Blair issued a joint statement praising the principles described in the Bermuda Agreement, and urging scientists in every country to adopt similar principles. It’s because of the Bermuda Agreement and similar subsequent agreements that the human genome and the HapMap are publicly available.

You Are Not So Smart (Pop-Rationality Book)

7 betterthanwell 01 November 2011 07:42PM

Journalist David McRaney has very recently published a popular book on human rationality. The book, You Are Not So Smart, is currently the 3rd best selling book in Nonfiction/Philosophy on Amazon.com after less than a week on the market. (Eighth best selling book in Nonfiction/Education)

The tag-line of the project is: "A celebration of self-delusion." As such the book seems less an attempt at giving advice on how to act and decide, than an attempt to reveal, chapter by chapter, the folly of common sense.

Topics include: Hindsight Bias, Confirmation bias, The Sunk Cost Fallacy, Anchoring Effect, The Illusion of Transparency, The Just World Fallacy, Representativeness Heuristic, The Perils of Introspection, The Dunning-Kruger Effect, The Monty Hall Problem, The Bystander Effect, Placebo Buttons, Groupthink, Conformity, Social Loafing, Helplessness, Cults, Change Blindness, Self-Fulfilling Prophecies, Self Handicapping, Availability Heuristic, Self-Serving Bias, The Ultimatum Game, Inattentional Blindness.


These are topics we enjoy learning about, pride ourselves on knowing a lot about, and, we profess, would want more people to know about. A popular book on this subject is now out. This sounds like a good thing.

I will note that the blog features at least one direct quote from LessWrong.

We always know what we mean by our words, and so we expect others to know it too.  Reading our own writing, the intended interpretation falls easily into place, guided by our knowledge of what we really meant.  It’s hard to empathise with someone who must interpret blindly, guided only by the words.

- Eliezer Yudkowsky, from LessWrong.com

On one hand, You Are Not So Smart could be a boon to Eliezer's popular rationality book by priming the market; his writings on a given topic have rarely been described as redundant. On the other hand, it seems to me that this book closely covers a number of topics, in a style seemingly similar to the treatments published on this site and Overcoming Bias, which are themselves intended to be published in book form at a later date. I will try to refrain from speculation here.

Sample book chapters from You Are Not So Smart:

For more material, here's a list of all posts at youarenotsosmart.com

 

I'll save the rest of my review until I have actually read the book.

In the meantime I would like to know your thoughts on this project.

Brain emulations and Oracle AI

7 Stuart_Armstrong 14 October 2011 05:51PM

Two talks from the Future of Humanity Institute are now online (this is the first time we've done this, so please excuse the lack of polish). The first is Anders Sandberg talking about brain emulations (technical overview), the second is myself talking of the risks of Oracle AIs (informal presentation). They can be found here:

Feasibility of whole-brain emulation: http://www.youtube.com/watch?v=3nIzPpF635c&feature=related, initial paper at http://www.philosophy.ox.ac.uk/__data/assets/pdf_file/0019/3853/brain-emulation-roadmap-report.pdf, new paper still to come.

Thinking inside the box: Using and controlling an Oracle AI: http://www.youtube.com/watch?v=Gz9zYQsT-QQ&feature=related, paper at http://www.aleph.se/papers/oracleAI.pdf

Don't ban chimp testing

15 PhilGoetz 01 October 2011 05:17PM

The October 2011 Scientific American has an editorial from its board of editors called "Ban chimp testing", that says:  "In our view, the time has come to end biomedical experimentation on chimpanzees... Chimps should be used only in studies of major diseases and only when there is no other option."  Much of the knowledge described in Luke's recent post on the cognitive science of rationality would have been impossible to acquire under such a ban.

I encourage you to write to Scientific American in favor of chimp testing.  Some points that I plan to make:

  • The editors obliquely criticized the NIH for telling the Institute of Medicine to omit ethical considerations from their study of whether chimps are "truly necessary" for biomedical and behavioral research.  But the team tasked with gathering evidence about the necessity of chimps for research shouldn't be making ethical judgements.  They're gathering the data for someone else to make ethical judgements.
  • Saying chimps should be used "only when there is no other option" is the same as saying chimps should never be used.  There are always other options.
  • This position might be morally defensible if humans were allowed to subject themselves for testing.  The knowledge to be gained from experiment is surely worth the harm to the subject if the subject chooses to undergo the experiment.  Humans are often willing to be test subjects, but aren't allowed to be because of restrictions on human testing.  Banning chimp testing should thus be done only in conjunction with allowing human testing.

I also encourage you to adopt a tone of moral outrage.  Rather than taking the usual apologetic "we're so sorry, but we have to do this awful things in the name of science" tone, get indignant at the editors who intend to harm uncountable numbers of innocent people.  For advanced writers, get indignant not just about harm, but about lost potential, pointing out the ways that our knowledge about how brains work can make our lives better, not just save us from disease.

You can comment on this here, but comments are AFAIK not printed in later issues as letters to the editor.  Actual letters, or at least email, probably have more impact.  You can't submit a letter to the editor through the website, because letters are magically different from things submitted on a website.

ADDED:  Many people responded by claiming that banning chimp experimentation occupies some moral high ground.  That is logically impossible.

To behave morally, you have to do two things:

1. Figure out, inherit, or otherwise acquire a set of moral goals - let's say, for example, to maximize the sum over all individuals i of all species s of w_s*[pleasure(s,i)-pain(s,i)], where w_s is a weight for species s.

2. Act in a way directed by those moral goals.

If you really cared about the suffering of sentient beings, you would also care about the suffering of humans, and you would realize that there's a tradeoff between the suffering of those experimented on, and of those who benefit, which is different for every experiment.  That's what a moral decision is—deciding how to make a tradeoff of help and harm. People who call for a ban on chimp testing are really demanding we forbid (other) people from making moral judgements and taking moral actions.  There are a wide range of laws and positions that could be argued to be moral.  But just saying "We are incapable of making moral decisions, so we will ban moral decision-making" is not one of them.

[link] Women in Computer Science, Where to Find More Info?

3 magfrump 23 September 2011 09:11PM

I recently ran across the following link:

A Campus Champion for Women in Computer Science

Which discusses a new president at Harvey Mudd College, and specifically her work in making the computer science major more accessible to women.  This seems neat and interesting except... barely any details are provided whatsoever.

They mention that the introductory computer science course was split into different courses, one of which is taught in Python.  On her Harvey Mudd webpage, Maria Klawe mentions that these steps were taken as part of a three-part plan and says "I encourage you to read more", but there are no obvious links on the page to any specifics.

Is anyone here from Harvey Mudd who knows more, or does anyone know how to find out more?  For example, did the increase in female computer scientists go along with an increase in the size of the program (as is implied), or was there a displacement of male computer scientists?  Is the success limited to the one department, or are other engineering and science majors attracting more women as well?  I noticed the front page mentioning that Harvey Mudd was recently named the top engineering school in the US, so presumably the gains don't result from "dumbing down" the program, but I'd like to see more information.

Pressure to publish increases scientists' vulnerability to positive bias

9 lukeprog 08 September 2011 08:49PM

More evidence for this hypothesis:

The growing competition and “publish or perish” culture in academia might conflict with the objectivity and integrity of research, because it forces scientists to produce “publishable” results at all costs. Papers are less likely to be published and to be cited if they report “negative” results (results that fail to support the tested hypothesis). Therefore, if publication pressures increase scientific bias, the frequency of “positive” results in the literature should be higher in the more competitive and “productive” academic environments. This study verified this hypothesis by measuring the frequency of positive results in a large random sample of papers with a corresponding author based in the US. Across all disciplines, papers were more likely to support a tested hypothesis if their corresponding authors were working in states that, according to NSF data, produced more academic papers per capita. The size of this effect increased when controlling for state's per capita R&D expenditure and for study characteristics that previous research showed to correlate with the frequency of positive results, including discipline and methodology. Although the confounding effect of institutions' prestige could not be excluded (researchers in the more productive universities could be the most clever and successful in their experiments), these results support the hypothesis that competitive academic environments increase not only scientists' productivity but also their bias. The same phenomenon might be observed in other countries where academic competition and pressures to publish are high.

Fanelli (2010). Do Pressures to Publish Increase Scientists' Bias? An Empirical Support from US States Data. PLoS ONE 5(4): e10271.

Are there better ways of identifying the most creative scientists?

10 gwern 31 August 2011 07:42PM

Marginal Revolution linked today an old 1963 essay by Isaac Asimov, who argues that a very cheap test for scientific capability in children & adolescents is to see whether they like science fiction and in particular, harder science fiction, "The Sword of Achilles".

I copied it out and made an HTML version of the essay: http://www.gwern.net/docs/1963-asimov-sword-of-achilles

I'd be interested if anyone knows of better tests for such scientific aptitude.

I think it'd also be interesting to see how well the SF test's predictive power has held up. Asimov's numbers seem reasonable for 1963, but may be very different these days: perhaps SF readers back then were <1% of the population and >50% of scientists, which made the test very informative, but what about now? SF seems more popular, even discounting the comic books and Hollywood material as Asimov explicitly does, but the SF magazines are mostly dead, and my understanding is that scientists are a vastly larger group in 2011 than in 1963, both in absolute numbers and per capita.

Why no archive of refuted research?

25 Kaj_Sotala 26 August 2011 08:14AM

From The Atlantic's Lies, Damned Lies and Medical Science:

Still, Ioannidis anticipated that the community might shrug off his findings: sure, a lot of dubious research makes it into journals, but we researchers and physicians know to ignore it and focus on the good stuff, so what’s the big deal? The other paper headed off that claim. He zoomed in on 49 of the most highly regarded research findings in medicine over the previous 13 years, as judged by the science community’s two standard measures: the papers had appeared in the journals most widely cited in research articles, and the 49 articles themselves were the most widely cited articles in these journals. These were articles that helped lead to the widespread popularity of treatments such as the use of hormone-replacement therapy for menopausal women, vitamin E to reduce the risk of heart disease, coronary stents to ward off heart attacks, and daily low-dose aspirin to control blood pressure and prevent heart attacks and strokes. Ioannidis was putting his contentions to the test not against run-of-the-mill research, or even merely well-accepted research, but against the absolute tip of the research pyramid. Of the 49 articles, 45 claimed to have uncovered effective interventions. Thirty-four of these claims had been retested, and 14 of these, or 41 percent, had been convincingly shown to be wrong or significantly exaggerated. If between a third and a half of the most acclaimed research in medicine was proving untrustworthy, the scope and impact of the problem were undeniable.

[...]

But even for medicine’s most influential studies, the evidence sometimes remains surprisingly narrow. Of those 45 super-cited studies that Ioannidis focused on, 11 had never been retested. Perhaps worse, Ioannidis found that even when a research error is outed, it typically persists for years or even decades. He looked at three prominent health studies from the 1980s and 1990s that were each later soundly refuted, and discovered that researchers continued to cite the original results as correct more often than as flawed—in one case for at least 12 years after the results were discredited.

Here's a suggested solution to the problem of refuted research still being cited. Have some respected agency maintain an archive of studies that have failed to replicate or that have otherwise been found lacking. Once such an archive existed, medical journals could adopt a policy of checking all citations in a proposed article against the archive, rejecting submissions that tried to cite refuted research as valid. This might also alleviate the problem of people not doing replications because replications don't get many cites. Once such an archive was established, getting your results there might become quite prestigious.

The one major problem that I can see with this proposal is that it's not always obvious when a study should be considered refuted. But even erring on the side of only including firmly refuted studies should be much better than nothing at all.

Such a fix seems obvious and simple to me, and while maintaining the archive and keeping it up to date would be expensive, it should be easily affordable for an organization such as a major university or the NIH. Similar archives could also be used for fields other than medicine. Is there some reason that I'm missing for why this isn't done?

Topic Search Poll Results and Short Reports

6 Nic_Smith 09 August 2011 06:28AM

At the end of June, I asked Less Wrong to vote for "What topic[s] would be best for an investigation and brief post?" in order to direct a search for topics to examine here. My thanks to everyone that participated (especially since the comments hint that the poll format was not well-liked). The most-wanted topics follow, and the complete list can be found on Google Docs -- maps and graphs related to the poll are also available on All Our Ideas. A score for a topic in the results below is an "estimated [percent] chance that it will win against a randomly chosen idea."

  1. Systems theory -- 71.6
  2. Leadership -- 70.7
  3. Linguistics (general) -- 70.7
  4. Finance -- 67.0
  5. Bayesian approach to business -- 60.7
  6. Lisp (Programming language) -- 59.7
  7. Anthropology (general) -- 59.4
  8. Sociology (general) -- 59.2
  9. Political Science (general) -- 58.5
  10. Historiography (the methods of history) -- 58.3
  11. Logistics -- 56.8
  12. Sociology of Political Organizations -- 56.0
  13. Military Theory -- 52.1
  14. Diplomacy -- 51.1

Systems theory, in first place, is a topic that I found while rummaging through online sources, including Wikipedia, for items to add to the poll; it's described there as the "study of systems in general, with the goal of elucidating principles that can be applied to all types of systems in all fields of research. [....] In this context the word systems is used to refer specifically to self-regulating systems, i.e. that are self-correcting through feedback." Leadership seems to fall into both the social and "being effective" categories of interest, but has only lightly been touched on in previous discussion here despite a lot of ink spilled on the topic elsewhere -- the top Google results for "leadership" on this site are currently Calcsam's post on community roles and a book review for the Arbinger Institute's Leadership and Self Deception. "To Lead, You Must Stand Up" also comes to mind.

How to Use It

The spreadsheet includes columns for "Currently Investigated By" and "Writeup URLs" -- feel free to add your name or writeup links. If you already know a thing or two about one of the above topics, share your knowledge in a comment below or in a discussion post as appropriate, similar to the earlier "What can you teach us?" If you want to survey what currently exists on a topic, grab a few books, investigate, and then let us know what you found. When a related post instead of just a comment is appropriate, I recommend the tag "topic_search". As mentioned previously, even an investigation that ends in a comment to this post saying a topic isn't useful for LW is itself useful for the search.

Best Textbook List Expansion

5 magfrump 08 August 2011 11:17AM

A while back, Lukeprog set up an article to list the best textbooks in every subject.  It currently contains a fairly large list of books in a variety of subjects.

I just got an e-mail from Amazon advertising "Up to 90% off textbooks" and I thought "This seems like a good opportunity to check out a bunch of cheap, good textbooks in subjects I want to learn about!"

When I went over to Luke's post, I discovered recommendations for philosophy, psychology, all sorts of math, but almost none in basic science.

I assume that someone here must have read one or a few basic textbooks on physics, biology, and chemistry.  If so, what were they?  How were they?  Would I be better off just trying to take a basic lecture course in the subject, or going through Khan Academy?

Polarized gamma rays and manifest infinity

16 rwallace 30 July 2011 06:56AM

 

Most people (not all, but most) are reasonably comfortable with infinity as an ultimate (lack of) limit. For example, cosmological theories that suggest the universe is infinitely large and/or infinitely old are not strongly disbelieved a priori.

By contrast, most people are fairly uncomfortable with manifest infinity, actual infinite quantities showing up in physical objects. For example, we tend to be skeptical of theories that would allow infinite amounts of matter, energy or computation in a finite volume of spacetime.

continue reading »

A study in Science on memory conformity

8 dvasya 15 July 2011 05:30PM

I believe this may be a good addition to the cognitive bias literature:

Following the Crowd: Brain Substrates of Long-Term Memory Conformity

  1. Micah Edelson (1,*)
  2. Tali Sharot (2)
  3. Raymond J. Dolan (2)
  4. Yadin Dudai (1)

1. Department of Neurobiology, Weizmann Institute of Science, Israel.

2. Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London, London, UK.

ABSTRACT

Human memory is strikingly susceptible to social influences, yet we know little about the underlying mechanisms. We examined how socially induced memory errors are generated in the brain by studying the memory of individuals exposed to recollections of others. Participants exhibited a strong tendency to conform to erroneous recollections of the group, producing both long-lasting and temporary errors, even when their initial memory was strong and accurate. Functional brain imaging revealed that social influence modified the neuronal representation of memory. Specifically, a particular brain signature of enhanced amygdala activity and enhanced amygdala-hippocampus connectivity predicted long-lasting but not temporary memory alterations. Our findings reveal how social manipulation can alter memory and extend the known functions of the amygdala to encompass socially mediated memory distortions.

http://www.sciencemag.org/content/333/6038/108.full

http://ifile.it/v76wsi5

Khan Academy: Introduction to programming and computer science

11 XiXiDu 02 July 2011 09:44AM

Khan Academy now also features a Computer Science category. There are not many lessons yet, but about three new videos are being added each day. They are going to add CS exercises soon, too.

If you don't want to wait for the exercises, there is always the incredible Project Euler that you can use to hone your math and programming skills.

Please vote -- What topic would be best for an investigation and brief post?

4 Nic_Smith 30 June 2011 04:50AM

Followup to: Systematic Search for Useful Ideas

I've set up a pairwise poll for this question and additional suggestions are welcome. My original proposal was to examine topics that haven't already been covered here, but instead of that, I'd like to ask people to consider the existing level of discussion on a topic in evaluating what would be "best."

ETA: There are currently over 500 pairs. You don't have to go through all of them -- answer as many or as few as you like.

Biomedical engineers analyze—and duplicate—the neural mechanism of learning in rats [link]

16 Dreaded_Anomaly 27 June 2011 06:35PM

Restoring Memory, Repairing Damaged Brains (article @ PR Newswire)

Using an electronic system that duplicates the neural signals associated with memory, they managed to replicate the brain function in rats associated with long-term learned behavior, even when the rats had been drugged to forget.

This series of experiments, as described, sounds very well-constructed and thorough. The scientists first recorded specific activity in the hippocampus, where short-term memory becomes long-term memory. They then used drugs to inhibit that activity, preventing the formation of and access to long-term memory. Using the information they had gathered about the hippocampus activity, they constructed an artificial replacement and implanted it into the rats' brains. This successfully restored the rats' ability to store and use long-term memory. Further, they implanted the device into rats without suppressed hippocampal activity, and demonstrated increased memory abilities in those subjects.

"These integrated experimental modeling studies show for the first time that with sufficient information about the neural coding of memories, a neural prosthesis capable of real-time identification and manipulation of the encoding process can restore and even enhance cognitive mnemonic processes," says the paper.

It's a truly impressive result.

Do-it-yourself-science Wiki

4 Drahflow 22 June 2011 10:25AM

While I was busy procrastinating, I produced http://38020.vs.webtropia.com/sciencewiki/index.php/Helium_balloon (what I built is the wiki extension that generates the plot; the data was lying around anyway). This could (if enough people use it) become quite a useful collection of evidence on various simple sciency questions, and it might also motivate some more people to actually do experiments themselves.

Before going further with this (in particular wrt. telling people about it) however, I have a few questions:

  1. Is anybody aware of a similar effort to collect data from hobby scientists in formalized, yet wiki enabled form? I googled, but found nothing, in that case I'd rather not start a new project.
  2. I would like to calculate a metric evaluating which models are "better", i.e. which explain the data best without being overfitted. Can anybody recommend a paper or book about this problem? In particular, I need a metric which can handle errors in all variables (not just the dependent ones), and I would rather not assume a normal distribution globally.
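As a hedged starting point for question 2, an information criterion such as AIC is the textbook way to trade goodness of fit against overfitting. The toy sketch below assumes Gaussian residuals in the dependent variable only, so it does not yet meet the errors-in-all-variables, non-normal requirements stated above; it only shows the mechanics on invented data:

```python
# Toy AIC comparison: a line vs. a cubic fit to noisy linear data.
# AIC rewards fit (low residual sum of squares) but penalizes each
# extra parameter; this sketch assumes Gaussian residuals.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0.0, 1.0, x.size)

def aic(degree):
    """AIC for a polynomial fit of the given degree."""
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    n = x.size
    k = degree + 1  # number of fitted parameters
    # Gaussian log-likelihood reduces (up to a constant) to n*log(RSS/n)
    return n * np.log(np.sum(resid ** 2) / n) + 2 * k

print("line  AIC:", aic(1))
print("cubic AIC:", aic(3))  # the two extra parameters are penalized
```

Total least squares (errors-in-variables regression) and Bayesian model comparison are the usual directions for relaxing the remaining assumptions.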

General feedback is obviously also welcome. If anybody has data which needs something different from a scatterplot, just throw it in, I'll see that a decent plot gets implemented.

Less Wrong DC Experimental Society

3 atucker 13 June 2011 03:56AM

During our latest meetup, the DC Less Wrong group has decided that we are interested in experimentally testing various lifehacks on ourselves (on an opt-in volunteer basis of course).

We need two things:

  • Metrics (to actually tell if there's a difference or not, rather than convince ourselves that there is)
  • Things to test

Do any other groups have any measurements that they take to track their various attributes? Anything that they'd be interested in testing?

Proposal: Systematic Search for Useful Ideas

6 Nic_Smith 01 June 2011 12:09AM

LessWrong is a font of good ideas, but the topics and interests usually expressed and explored here tend to cluster in a few areas. As such, high-value topics for the community may still be waiting in other fields, which can be explored systematically rather than by random encounter. Additionally, there seems to be interest here in examining a wider variety of topics. In order to do this, I suggest creating a community list of areas to look into (besides the usual AI, Cog Sci, Comp Sci, Econ, Math, Philosophy, Psych, Statistics, etc.) and then reading a bit on the basics of these fields. In addition to potentially uncovering useful ideas per se, this might also offer the opportunity to populate the textbook resource list and engage in not-random acts of scholarship.

Everyone Split Up, There’s a Lot of Ideosphere to Cover

A rough sketch of how I think the project will work follows. I’ll be proceeding with this and tackling at least one or two subjects as long as there’s at least a few other people interested in working on it too.

Step 1, Community Evaluation: Using All Our Ideas or similar, generate a list of fields to investigate.
Step 2, Sign-Up: People have the best sense of what they already know and their abilities, so at this point anyone that wants to can pick a subject that’s best for them to look into.
Step 3, Study: I imagine this will mostly involve self-directed reading of a handful of texts, watching some online videos, and maybe calling up one or two people -- in other words, nothing too dramatic. If a vein of something interesting is found, it’s probably better that it’s “marked” for further follow-up rather than further examined alone.
Step 4, Post: Some of these investigations will not reveal anything -- that’s actually a good thing (explained below); for these, a short “Looked into it, nothing here” sort of comment should suffice. Subjects with bigger findings should get bigger, more detailed comments/posts.

Evaluation of Proposal

As a first step, I’ll use a variation of the Heilmeier questions which is an (admittedly idiosyncratic) mix of the original version and gregv’s enhanced version.

  • What are you trying to do? Articulate your objectives using absolutely no jargon.
    Produce comments or posts providing very brief overviews of fields of knowledge, not previously discussed here, with notes pertaining to Less Wrong topics and interests.
  • Who cares? How many people will benefit?
    This post is partially an attempt to determine that, but there seems to be at least some interest in more variety on the site (see above). Additionally, the posts should be a good general resource for anyone that stumbles across them, and might even make good content for search purposes.
  • Why hasn't someone already solved this problem? What makes you think what stopped them won't stop you?
    The idea is roughly book club meets Wikipedia, but with an emphasis on creating a small evaluative body of knowledge rather than a massive descriptive encyclopedia, and with a LessWrong twist. The sharper focus should make the results more useful to go through than just hitting “random page” in yon encyclopedia.
  • How much have projects like this cost (time equivalent)?
    Some have the ability to take on “whole fields of knowledge in mere weeks” but that’s not typical -- investigating a subject in this case is roughly comparable in complexity to taking an introductory class or two, which people without any previous training normally accomplish over a period of about three to four months at a pace which is not especially strenuous, and with fairly light monetary costs beyond tuition/fees (which aren't applicable here).
  • What are the midterm and final "exams" to check for success?
    For each individual investigation, a good “midterm” check would be for the person looking into a field to have an list of resources or texts they’re working on. The final “exam” is a posting indicating if anything useful or interesting was found, and if so, what.
  • If y [this community search] fails to solve x [uncover useful knowledge in fields previously under-examined on LessWrong], what would that teach you that you (hopefully) didn't know at the beginning?
    Quite possibly, this could be a good thing -- it indicates that the mix of topics on LessWrong is approximately right, and things can continue on. In this case, we’d end up seeing a bunch of short “nothing interesting here” comments, and can rest more or less assured that further investigation into even more minute detail is unnecessary. This is conditional on not-terrible scholarship and a reasonably good priority list from step 1.

Seeking suggestions: Less Wrong Biology 101

35 virtualAdept 20 May 2011 03:28PM

I’ve been a reader and occasional commenter here for a while now, but previously have not had a solid idea of what I could or wanted to contribute to the community in posting.  In light of recent comments stating an interest in more posts that offer concrete, factual information as well as remembering lukeprog’s call for such things in his Back to the Basics of Rationality post, I am considering a series of condensed posts about biology.  As someone who has spent my formal education on biologically-focused engineering (bioengineering BS, now studying bioinformatics under a chemical engineering department for my PhD) but has always had the bulk of my friends in electrical engineering, computer science, and more traditional chemical engineering, I’ve gotten used to offering such condensed explanations whenever biology works its way into a discussion.  From what I’ve seen on LW thus far, the community educational base leans more in those (non-biology) directions, so I believe this is a niche that could use filling. 

Since biology is a rather broad subject, and you could all go read Wikipedia or a textbook if you wanted a very detailed survey course, my intent is to pick targeted topics that are relevant to current events and scientific developments.  Each post would focus on one such event/Awesome New Study, discussing the biological background and potential implications, including either short explanations or links to the basics needed to understand the subject.  If there are any political ties to the subject, I will withhold my explicit opinions on those aspects unless asked in the comments. 

My questions, then, are the following:

  • Is this something that people here would find interesting/useful in the general sense?  (While I do enjoy talking to myself, doing so on this topic has gotten a bit old, so I really do want to know if no one really thinks this will be helpful.)
  • How long/in-depth would you like?  This question is intended to gauge what my background explanation: background links ratio should be.
  • And most importantly, what are some topics you would like to see discussed?


UPDATE: Having followed the comments so far and done some preliminary outlining, I'm leaning toward a more organized progression of topics that will still tie into current interests and developments, but not be centered on them.  A bit more thought and putting ideas to text indicated that I could group the interest areas into biological categories (molecular, populations, developmental, neuro, etc) fairly easily, which would then allow for a 'foundations' post to introduce each major category, followed by posts that go over What We Know Now, Why We Care, and Where It's Going.  

Explanation found for the Pioneer anomaly

11 arundelo 27 April 2011 04:25AM

Paper here. Lay summary here. Some bits from the latter:

The problem is this. The Pioneer 10 and 11 spacecraft were launched towards Jupiter and Saturn in the early 1970s. After their respective flybys, they continued on escape trajectories out of the Solar System, both decelerating under the force of the Sun's gravity. But careful measurements show that the spacecraft are slowing faster than they ought to, as if being pulled by an extra unseen force towards the Sun.

Spacecraft engineers' first thought was that heat emitted by the spacecraft could cause exactly this kind of deceleration. But when they examined the way heat was produced on the craft, by on board plutonium, and how this must have been emitted, they were unable to make the numbers add up.

Now Frederico Francisco at the Instituto de Plasmas e Fusao Nuclear in Lisbon Portugal, and a few pals, say they've worked out where the thermal calculations went wrong.

Book reviews

3 PhilGoetz 14 April 2011 01:50PM

I'd like to see book reviews of books of interest to LW.  Some suggestions:

  • Dan Ariely (2010).  The Upside of Irrationality: The unexpected benefits of defying logic at work and at home.
  • Sam Harris (2010).  The Moral Landscape: How science can determine human values.
  • Dan Ariely (2009).  Predictably Irrational: The Hidden Forces That Shape Our Decisions.
  • Timothy Ferris (2010).  The Science of Liberty: Democracy, Reason, and the Laws of Nature.
  • Joel Garreau (2005).  Radical Evolution.  Book about genetic mods, intelligence enhancement, and the singularity.

ADDED:  I don't mean I'd like to see reviews in this thread.  I'd like each review to have its own thread.  In discussion or on the "new" page is up to you.

Cryptanalysis as Epistemology? (paging cryptonerds)

11 SilasBarta 06 April 2011 07:06PM

Short version: Why can't cryptanalysis methods be carried over to science, which looks like a trivial problem by comparison, since nature doesn't intelligently remove patterns from our observations?  Or are these methods already carried over?

Long version: Okay, I was going to spell this all out with a lot of text, but it started ballooning, so I'm just going to put it in chart form.

Here is what I see as the mapping from cryptography to science (or epistemology in general).  I want to know what goes in the "???" spot, and why it hasn't been used for any natural phenomenon less complex than the most complex broken cipher.  (Sorry, couldn't figure out how to center it.)

 

EDIT: Removed "(cipher known)" requirement on 2nd- and 3rd-to-last rows because the scientific analog can be searching for either natural laws or constants.

Link: "Health Care Myth Busters: Is There a High Degree of Scientific Certainty in Modern Medicine?"

8 CronoDAS 01 April 2011 05:25AM

A feature in Scientific American magazine casts some light on the troubled state of modern medicine.

Health Care Myth Busters: Is There a High Degree of Scientific Certainty in Modern Medicine?

Short excerpt:

We could accurately say, "Half of what physicians do is wrong," or "Less than 20 percent of what physicians do has solid research to support it." Although these claims sound absurd, they are solidly supported by research that is largely agreed upon by experts.

Scientific American often gates its online articles after some time has passed, so I don't know how long it will be available.

Sean Carroll: Does the Universe Need God? [link]

15 Dreaded_Anomaly 23 March 2011 07:31PM

Does the Universe Need God? (essay by Sean Carroll)

In this essay, Sean Carroll:

  • Dissolves the problem of "creation from nothing":

    A provocative way of characterizing these beginning cosmologies is to say that "the universe was created from nothing." Much debate has gone into deciding what this claim is supposed to mean. Unfortunately, it is a fairly misleading natural-language translation of a concept that is not completely well-defined even at the technical level. Terms that are imprecisely defined include "universe," "created," "from," and "nothing." (We can argue about "was.")

    The problem with "creation from nothing" is that it conjures an image of a pre-existing "nothingness" out of which the universe spontaneously appeared – not at all what is actually involved in this idea. Partly this is because, as human beings embedded in a universe with an arrow of time, we can't help but try to explain events in terms of earlier events, even when the event we are trying to explain is explicitly stated to be the earliest one. It would be more accurate to characterize these models by saying "there was a time such that there was no earlier time."

    To make sense of this, it is helpful to think of the present state of the universe and work backwards, rather than succumbing to the temptation to place our imaginations "before" the universe came into being. The beginning cosmologies posit that our mental journey backwards in time will ultimately reach a point past which the concept of "time" is no longer applicable. Alternatively, imagine a universe that collapsed into a Big Crunch, so that there was a future end point to time. We aren't tempted to say that such a universe "transformed into nothing"; it simply has a final moment of its existence. What actually happens at such a boundary point depends, of course, on the correct quantum theory of gravity.

    The important point is that we can easily imagine self-contained descriptions of the universe that have an earliest moment of time. There is no logical or metaphysical obstacle to completing the conventional temporal history of the universe by including an atemporal boundary condition at the beginning. Together with the successful post-Big-Bang cosmological model already in our possession, that would constitute a consistent and self-contained description of the history of the universe.

    Nothing in the fact that there is a first moment of time, in other words, necessitates that an external something is required to bring the universe about at that moment. As Hawking put it in a celebrated passage:

    So long as the universe had a beginning, we could suppose it had a creator. But if the universe is really self-contained, having no boundary or edge, it would have neither beginning nor end, it would simply be. What place, then, for a creator?
  • Uses Bayesian reasoning to judge possible explanations:

    Nevertheless, for the sake of playing along, let's imagine that intelligent life only arises under a very restrictive set of circumstances. Following Swinburne, we can cast the remaining choices in terms of Bayesian probability. The basic idea is simple: we assign some prior probability – before we take into account what we actually know about the universe – to each of the three remaining scenarios. Then we multiply that prior probability by the probability that intelligent life would arise in that particular model. The result is proportional to the probability that the model is correct, given that intelligent life exists.[17] Thus, for option #2 (a single universe, no supernatural intervention), we might put the prior probability at a relatively high value by virtue of its simplicity, but the probability of life arising (we are imagining) is extremely small, so much so that this model could be considered unlikely in comparison with the other two.

    We are left with option #3, a "multiverse" with different conditions in different regions (traditionally called "universes" even if they are spatially connected), and #4, a single universe with parameters chosen by God to allow for the eventual appearance of life. In either case we can make a plausible argument that the probability of life arising is considerable. All of the heavy lifting, therefore, comes down to our prior probabilities – our judgments about how a priori likely such a cosmological scenario is. Sadly, prior probabilities are notoriously contentious objects.

    I will consider more carefully the status of the "God hypothesis," and its corresponding prior probability, in the final section. For now, let's take a look at the multiverse.
  • Correctly describes parsimony in terms of Kolmogorov complexity:

    What prior likelihood should we assign to such a scenario? One popular objection to the multiverse is that it is highly non-parsimonious; is it really worth invoking an enormous number of universes just to account for a few physical parameters? As Swinburne says:

    To postulate a trillion trillion other universes, rather than one God in order to explain the orderliness of our universe, seems the height of irrationality.

    That might be true, even with the hyperbole, if what one was postulating were simply "a trillion trillion other universes." But that is a mischaracterization of what is involved. What one postulates are not universes, but laws of physics. Given inflation and the string theory landscape (or other equivalent dynamical mechanisms), a multiverse happens, whether you like it or not.

    This is an important point that bears emphasizing. All else being equal, a simpler scientific theory is preferred over a more complicated one. But how do we judge simplicity? It certainly doesn't mean "the sets involved in the mathematical description of the theory contain the smallest possible number of elements." In the Newtonian clockwork universe, every cubic centimeter contains an infinite number of points, and space contains an infinite number of cubic centimeters, all of which persist for an infinite number of separate moments each second, over an infinite number of seconds. Nobody ever claimed that all these infinities were a strike against the theory. Indeed, in an open universe described by general relativity, space extends infinitely far, and lasts infinitely long into the future; again, these features are not typically seen as fatal flaws. It is only when space extends without limit and conditions change from place to place, representing separate "universes," that people grow uncomfortable. In quantum mechanics, any particular system is potentially described by an infinite number of distinct wave functions; again, it is only when different branches of such a wave function are labeled as "universes" that one starts to hear objections, even if the mathematical description of the wave function itself hasn't grown any more complicated.

    A scientific theory consists of some formal (typically mathematical) structure, as well as an "interpretation" that matches that structure onto the world we observe. The structure is a statement about patterns that are exhibited among the various objects in the theory. The simplicity of a theory is a statement about how compactly we can describe the formal structure (the Kolmogorov complexity), not how many elements it contains. The set of real numbers consisting of "eleven, and thirteen times the square root of two, and pi to the twenty-eighth power, and all prime numbers between 4,982 and 34,950" is a more complicated set than "the integers," even though the latter set contains an infinitely larger number of elements. The physics of a universe containing 10^88 particles that all belong to just a handful of types, each particle behaving precisely according to the characteristics of its type, is much simpler than that of a universe containing only a thousand particles, each behaving completely differently.
  • Discusses "meta-explanatory accounts":

    For convenience I am brutally lumping together quite different arguments, but hopefully the underlying point of similarity is clear. These ideas all arise from a conviction that, in various contexts, it is insufficient to fully understand what happens; we must also provide an explanation for why it happens – what might be called a "meta-explanatory" account.

    It can be difficult to respond to this kind of argument. Not because the arguments are especially persuasive, but because the ultimate answer to "We need to understand why the universe exists/continues to exist/exhibits regularities/came to be" is essentially "No we don't." That is unlikely to be considered a worthwhile comeback to anyone who was persuaded by the need for a meta-explanatory understanding in the first place.

    Granted, it is always nice to be able to provide reasons why something is the case. Most scientists, however, suspect that the search for ultimate explanations eventually terminates in some final theory of the world, along with the phrase "and that's just how it is." It is certainly conceivable that the ultimate explanation is to be found in God; but a compelling argument to that effect would consist of a demonstration that God provides a better explanation (for whatever reason) than a purely materialist picture, not an a priori insistence that a purely materialist picture is unsatisfying.

    Why are some people so convinced of the need for a meta-explanatory account, while others are perfectly happy without one? I would suggest that the impetus to provide such an account comes from our experiences within the world, while the suspicion that there is no need comes from treating the entire universe as something unique, something for which a different set of standards is appropriate.

    ...

    States of affairs only require an explanation if we have some contrary expectation, some reason to be surprised that they hold. Is there any reason to be surprised that the universe exists, continues to exist, or exhibits regularities? When it comes to the universe, we don't have any broader context in which to develop expectations. As far as we know, it may simply exist and evolve according to the laws of physics. If we knew that it was one element of a large ensemble of universes, we might have reason to think otherwise, but we don't. (I'm using "universe" here to mean the totality of existence, so what would be called the "multiverse" if that's what we lived in.)

    ...

    Likewise for the universe. There is no reason, within anything we currently understand about the ultimate structure of reality, to think of the existence and persistence and regularity of the universe as things that require external explanation. Indeed, for most scientists, adding on another layer of metaphysical structure in order to purportedly explain these nomological facts is an unnecessary complication. This brings us to the status of God as a scientific hypothesis.
  • Points out the theory-saving in and the predictive issues of God as a hypothesis:

    Similarly, the apparent precision of the God hypothesis evaporates when it comes to connecting to the messy workings of reality. To put it crudely, God is not described in equations, as are other theories of fundamental physics. Consequently, it is difficult or impossible to make predictions. Instead, one looks at what has already been discovered, and agrees that that's the way God would have done it. Theistic evolutionists argue that God uses natural selection to develop life on Earth; but religious thinkers before Darwin were unable to predict that such a mechanism would be God's preferred choice.

    ...

    This is a venerable problem, reaching far beyond natural theology. In numerous ways, the world around us is more like what we would expect from a dysteleological set of uncaring laws of nature than from a higher power with an interest in our welfare. As another thought experiment, imagine a hypothetical world in which there was no evil, people were invariably kind, fewer natural disasters occurred, and virtue was always rewarded. Would inhabitants of that world consider these features to be evidence against the existence of God? If not, why don't we consider the contrary conditions to be such evidence? 
  • And more!
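The probability bookkeeping in the Bayesian bullet above can be sketched numerically: multiply each hypothesis's prior by the probability that intelligent life arises under it, then normalize. Every number below is invented purely to show the mechanics, not to take a side in the argument:

```python
# Hedged sketch of the prior-times-likelihood comparison Carroll quotes
# from Swinburne. All priors and likelihoods here are made up.
priors = {"single universe": 0.6, "multiverse": 0.3, "designed": 0.1}
p_life = {"single universe": 1e-10, "multiverse": 0.5, "designed": 0.9}

# Posterior is proportional to prior * P(life | hypothesis).
unnorm = {h: priors[h] * p_life[h] for h in priors}
total = sum(unnorm.values())
posterior = {h: unnorm[h] / total for h in unnorm}

for h, p in posterior.items():
    print(f"{h}: {p:.3g}")
# With these made-up numbers the "simple" hypothesis is swamped by its
# tiny likelihood -- exactly the structure of the argument in the essay.
```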

See also his blog entry for more discussion of the essay.

Edit: added the bullet point about "meta-explanatory accounts."
