Is there an automatic Chrome-to-Anki-2 extension or solution?
I'd like to be able to click unfamiliar words in Chrome and automatically create notes in Anki 2 using an online dictionary. It'd also be nice to have an automatic method for sending text and images to Anki notes straight from Chrome. For example, if I read an article here that I want to remember, I'd be able to highlight the title, send it to Anki, and when I review, I'd see the title on the card's front with the reverse being a link to the source if I forgot what the post was about.
I found some Chrome extensions that purport to do this sort of thing, but didn't get any of them to work with Anki 2. Is anyone currently doing this, and if so, what is the solution?
What is the Mantra of Polya?
The other day at dinner, someone showed me this video of a slinky dropping. It shows that the bottom of the slinky stays perfectly stationary for a while after it's been dropped. (The link goes to the interesting 10-second part.)
I spent some time trying to figure out why that happens, but didn't get it. The next day, I spent half an hour writing down the differential equations that describe the slinky's motion and staring at them, with no idea how to proceed. Eventually, I watched the video again with sound, and learned the simple answer, which is that the speed of waves traveling in a slinky is very slow - a few meters per second - and the bottom half sits still until a wave can travel down and inform it that the slinky's been dropped.
The strange thing is that I already knew this, or at least the idea was familiar to me. Also, while at dinner, someone mentioned the "pole-in-the-barn" paradox from special relativity, and mentioned the same speed-of-information-in-materials idea in resolving the paradox, but I still didn't make the connection to the problem I was considering.
I want a simple phrase, similar to "check consequentialism", "take the outside view", or "worth it?", for checking your own thought process while solving problems, so that you don't rev your engine in the wrong direction for too long. I realized I've read a book about what to do in such situations: George Polya's How to Solve It. (Amazon, Wikipedia, Google Books) I don't have a copy of the book anymore, and I would like to crowdsource a short phrase that captures the general mindset it endorses. Some questions I remember the book suggesting are
- Have you seen a similar problem before?
- What are the unknowns?
- What information do you have?
- Is it obvious that the unknowns are enough information?
If calorie restriction works in humans, should we have observed it already?
Although there are no long-term scientific studies of calorie restriction in humans, there are religious groups, cults, and ascetics who voluntarily practice calorie restriction or intermittent fasting. Presumably there have been tens or hundreds of thousands of people who have practiced calorie restriction throughout most of their adult lives. There were/are probably also groups that involuntarily practiced calorie restriction - servants, slaves, prisoners, or people who simply regularly don't have enough to eat.
If calorie restriction has a dramatic effect on life expectancy in humans, shouldn't we expect to observe extended life expectancy in at least some groups? Or would each of these groups likely have some mitigating circumstances that would shorten their lifespans, such as lack of medicine?
With an hour on Google, I found some references to Okinawa, to monks on Mount Athos, and to similar groups. In no case was there a reasonable claim of life expectancy over 90 (which would represent just a 10% improvement over life expectancy in Japan).
This paper reviews the evidence on calorie restriction in humans and other animals, including discussion of religious fasting, but there's no evidence there of fasting extending lifespan.
I found a few other sources where people asked this question (or made this point as an attack on CR), but I haven't yet found any good answers on the subject, and didn't find any discussion on LessWrong yet.
Should "latest insights" appear in the front page blurb?
The front page states
We use the latest insights from cognitive science, social psychology, probability theory, and decision theory to improve our understanding of how the world works and what we can do to achieve our goals.
The use of "latest" is a minor turn-off for me. It reminds me of blogs' and mass-media news' uncritical approach to recent papers in health and psychology. It makes me think that LessWrong will be about grabbing a paper, summarizing its result, and explaining how to apply it in daily life.
LessWrong is cognizant of the problems in scientific publishing. We know that any individual paper is likely to be wrong - more so if the conclusions are highly unexpected. (The more sensational the headline, the less likely it is to be true).
LessWrong usually focuses on higher-level summaries when discussing scientific findings, especially in psychology. A typical top-level science post has lots of references, many to review articles and meta-analyses. That's not the impression I get from the front page, though, so I think we could communicate the community's goals better by dropping the word "latest".
This point is similar to one made by Gabriel in this comment.
Question on math in "A Technical Explanation of Technical Explanation"
"A Technical Explanation of Technical Explanation" (http://yudkowsky.net/rational/technical) defines a proper rule for a betting game as one where the payoff is maximized by betting an amount proportional to the probability of success.
The first example rule given is that the payoff is one minus the squared error, so, for example, if you make a bet of .3 on the winner, your payoff is 1 - (1 - .3)^2 = .51.
This doesn't seem like a good example. It works if there are only two options, but I don't think it works if there are three or more. For example, imagine P(red) = .5, P(blue) = .2, P(green) = .3. If I place bets of .5, .2, and .3 respectively, the expected return is .6. (edit: Fixed a mistake pointed out by Douglas Knight.)
However, if I place bets of .51, .19, and .3, the expected return is .60173. I find that the condition for maximization is
(1-R)P(R) = (1-B)P(B) = (1-G)P(G),
which I got by taking partial derivatives of the expectation and setting them equal. ("R" stands for the bet placed on red and "P(R)" for the probability of red, etc.) This is different than simply R=P(R), etc.
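The numbers above are easy to check numerically. Here is a small sketch (the `expected_payoff` helper and the bet vectors are my own names, not from the article) that computes the expected return under the squared-error rule for the honest bets and for the slightly skewed bets:

```python
# Squared-error payoff rule from the post: if you have bet b on the
# outcome that actually occurs, you receive 1 - (1 - b)^2.

def expected_payoff(probs, bets):
    """Expected payoff when outcome i occurs with probability probs[i]
    and bets[i] was placed on it."""
    return sum(p * (1 - (1 - b) ** 2) for p, b in zip(probs, bets))

probs = [0.5, 0.2, 0.3]     # P(red), P(blue), P(green)

honest = [0.5, 0.2, 0.3]    # bets equal to the true probabilities
skewed = [0.51, 0.19, 0.3]  # shifted slightly toward red

print(expected_payoff(probs, honest))  # approximately .6
print(expected_payoff(probs, skewed))  # approximately .60173
```

Since the skewed bets beat the honest ones, the rule is not proper for three outcomes, matching the partial-derivative argument above.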
So does the article have a mistake, or do I, or did I miss part of the context?