PMed all of them. Does anyone else also want to volunteer?
So far only one person (Randaly) has replied. Does any native speaker want to volunteer? Edit: so far two people have replied (Randaly and Normal_Anomaly).
I think it is important to ask why people come to LessWrong: is it perceived primarily as a place where you go to improve your rationality that happens to be an internet forum, or as an internet forum where you can read interesting things, rationality among them? (I think the experience of an intellectual journey falls somewhere in between, but probably closer to the latter.) The distinction matters because there are plenty of large forums where you can read interesting things: r/askscience and r/askhistorians, for example, have hundreds of thousands of subscribers and many contributors who produce huge quantities of interesting content.
A place where people go to improve their rationality can take many forms. It doesn't even have to be a blog, a forum, or a wiki. If I allowed myself to be a bit starry-eyed, I would say it would be really interesting if, for example, LessWrong had its own integrated Good Judgement Project, or if LW had its own karma-denominated (or cryptocurrency-denominated) prediction markets. Of course, ideas like these would take a lot of effort to implement.
Perhaps you didn't notice, but the paper is gated. It's not possible for me or most people to check the paper.
Most people in the general population can't check the paper but on LW, I don't think that's the case. If you don't have access to a university network http://lesswrong.com/lw/ji3/lesswrong_help_desk_free_paper_downloads_and_more/ explores a variety of ways to access papers.
Perhaps you didn't notice, but the paper is gated; it's not possible for me or most people to check it. The description doesn't mention the other two studies, and the study it does describe doesn't sound like a strong result. I never suggested it wasn't statistically significant; if it weren't, it shouldn't be used to adjust one's views at all. I assumed it had achieved significance.
It's also odd for you to criticize me and then come to a conclusion that could be read as identical to my own, or close to it. What do you mean by "too soon to be confident of what the results mean"? That could be interpreted as "adjust your prior by 3%", which was my interpretation. If you think a number higher than 15% is warranted, that's an odd phrasing to choose, because it makes it sound like we're not far apart. Given that I was going by one study and you have three to look at, it shouldn't be surprising that you would recommend a greater adjustment of one's prior. Going by just the facial-expression study, what adjustment would you recommend? Do you think that adjustment is large enough for most people to know what to do with? And what adjustment to one's prior do you recommend after reviewing all three studies?
Choking Under Social Pressure: Social Monitoring Among the Lonely, Megan L. Knowles, Gale M. Lucas, Roy F. Baumeister, and Wendi L. Gardner
This is a crazy idea that I'm not at all convinced about, but I'll go ahead and post it anyway. Criticism welcome!
Rationality and common sense might be bad for your chances of achieving something great, because you need to irrationally believe that it's possible at all. That might sound obvious, but such idealism can make the difference between failure and success even in science, and even at the highest levels.
For example, Descartes and Leibniz saw the world as something created by a benevolent God and full of harmony that can be discovered by reason. That's a very irrational belief, but they ended up making huge advances in science by trying to find that harmony. In contrast, their opponents (Hume, Hobbes, Locke, etc.) held a much more LW-ish position called "empiricism". They all failed to achieve much outside of philosophy, arguably because they lacked a strong irrational belief that harmony could be found.
If you want to achieve something great, don't be a skeptic about it. Be utterly idealistic.
Am I correct to paraphrase you this way: maximizing E[X] and maximizing P(X > a) are two different problems?
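A toy simulation (my own illustration, not from the thread; the payoff distributions are made up) shows how the two objectives can pull in opposite directions: a "safe" strategy can have the higher mean while a "long-shot" strategy has the higher chance of clearing a threshold.

```python
import random

random.seed(0)

def simulate(strategy, threshold=90.0, trials=100_000):
    """Estimate E[X] and P(X > threshold) for a payoff strategy."""
    total = 0.0
    exceed = 0
    for _ in range(trials):
        x = strategy()
        total += x
        if x > threshold:
            exceed += 1
    return total / trials, exceed / trials

# "Safe" strategy: always pays 50 -> maximizes the mean, never beats 90.
safe = lambda: 50.0
# "Long-shot" strategy: pays 100 with probability 0.1, else 0.
long_shot = lambda: 100.0 if random.random() < 0.1 else 0.0

print(simulate(safe))       # mean 50.0, P(X > 90) = 0.0
print(simulate(long_shot))  # mean ~10, P(X > 90) ~ 0.1
```

So the strategy that maximizes E[X] and the one that maximizes P(X > a) need not be the same, which is why they are genuinely different optimization problems.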
Hello from Canada! I study computer science and philosophy at the University of Waterloo. Above anything, I love mathematics. The certainty that comes from a mathematical proof is amazing, and it fuels my current position about epistemology (see below). My favourite mathematics courses so far have been the introductory course on proofs and a course on formal logic (the axioms of first-order logic, deduction rules, etc.). Philosophy has always been very interesting to me: I've taken courses on epistemology, ethics, and the philosophy of language; I am also currently taking a course on political philosophy and am reading Nietzsche on the side. I also love to debate. Although I don't practice Christianity anymore, I loved debating religion with my friends.
I have come to Less Wrong to talk about my epistemological views. They amount to a form of skepticism. I view (i.e. define) truth exclusively as the outcome of some rational system. I reject all claims unless they are given in terms of a rational system by which they can be deduced. Even when such a system is given, I would call the claim true only in the context of the rational system at hand, and not (necessarily) under any other system.
For example, "2 + 2 = 4" is true when we use the conventional meanings of 2, 4, +, and =, along with a deductive system that takes expressions such as "2 + 2 = 4" and spits out true or false. By contrast, "2 + 2 = 4" is false when we keep the usual definitions of 2, 4, and = but define x + y as the (regular) sum of x and y minus one. This illustrates that the truth of a claim only makes sense once it has a precise meaning, axioms that are assumed to be true, and some system of deduction.
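The point can be made concrete with a tiny evaluator (a sketch of my own, not part of the original comment): the same sentence "2 + 2 = 4" gets a different truth value depending on which definition of "+" the system uses.

```python
# Conventional system: "+" is ordinary addition.
def plus_conventional(x, y):
    return x + y

# Alternative system: "+" means the ordinary sum minus one.
def plus_alternative(x, y):
    return x + y - 1

# The same claim, evaluated under the two systems:
print(plus_conventional(2, 2) == 4)  # True
print(plus_alternative(2, 2) == 4)   # False: 2 "+" 2 evaluates to 3
```

Nothing about the string "2 + 2 = 4" is true or false in itself; the verdict only appears once a system of definitions and rules is fixed.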
When a toddler sees the blue sky and asks his mother why the sky is blue, and she responds with something about the scattering of light, he has a choice: either he accepts the claim that scattering implies blueness, or he can ask again: "Why?" She might reply with something about molecules, and so on. Eventually, the toddler seems to have two choices: either accept that the axioms of the scientific method are true just because, or reject the whole thing for not being justified all the way through.
My view on epistemology is distinct from both of the above options. It wouldn't reject the whole system (useless; no knowledge), nor would it truly believe in the axioms of the scientific method (naive; they could be wrong). It would appreciate the intrinsic nature of the ideas: that the scattering of light can imply that the sky is blue. It would view rational systems as tools that can be used and then put away, rather than things that have to be carried around your whole life.
What do you think about this? Can you suggest any related readings?
This sounds similar to the coherence theory of truth.
Not sure that generalises outside of math. Is it really better to solve one problem really, really thoroughly, than to have a good-enough fix for five? Depends on the problems, perhaps - but without knowing anything else, I'd rather solve five than one.
I don't know the exact context of this particular quote, but George Pólya wrote a few books about how to become a better problem solver (at least in mathematics). In that context the quote is very reasonable.
Is it possible to tame an octopus? Could humanity over several generations tame octopuses and breed them into work animals?
Octopuses are solitary animals, whereas most working animals are social. Which leads to another interesting question - is it possible to breed octopuses to become social animals?
It is better to solve one problem five different ways, than to solve five problems one way
George Pólya (or at least it is attributed to him; I am unable to find the exact source, despite its being widely quoted in texts on mathematics education and problem solving in general).
I just came across this: "You're Not As Smart As You Could Be", about Dr. Samuel Renshaw and the tachistoscope. This is a device used for exposing an image to the human eye for the briefest fraction of a second. In WWII he used it to train navy and artillery personnel to instantly recognise enemy aircraft, apparently with great success. He also used it for speed reading training; this application appears to be somewhat controversial.
I remember the references to Renshaw in some of Heinlein's stories, and I knew he was a real person, but this is the first time I've seen a substantial account of his work.
A few more references:
Wikipedia is rather brief.
Open access review article about work with the tachistoscope, in the Journal of Behavioral Optometry, 2003. This is the closest thing I've found to a modern reference.
An academic paper by Renshaw himself from 1945. Despite its antiquity, it is paywalled. I have not been able to access the full text.
This information is mostly rather old and musty, and there appears to be little modern interest. With current computers it should be easy to duplicate the technology, although some low-level graphics expertise is likely needed to get very short, precise exposure times.
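As a rough sketch of why the timing is the hard part (my own illustration, assuming an image displayed through a standard fixed-refresh monitor pipeline): the exposure can only be a whole number of frames, so the refresh rate sets a floor on how brief and how precise the flash can be.

```python
def min_exposure_ms(refresh_hz):
    """Shortest exposure a fixed-refresh display can show: one frame."""
    return 1000.0 / refresh_hz

def exposure_ms(refresh_hz, frames):
    """Exposure time when the image is held for a whole number of frames."""
    return frames * min_exposure_ms(refresh_hz)

for hz in (60, 144, 240):
    print(f"{hz} Hz: one frame = {min_exposure_ms(hz):.2f} ms")
# 60 Hz: one frame = 16.67 ms
# 144 Hz: one frame = 6.94 ms
# 240 Hz: one frame = 4.17 ms
```

On an ordinary 60 Hz monitor, nothing shorter than about 17 ms is possible without bypassing the normal display path, which is presumably where the low-level expertise comes in.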
The Visual Perception and Reproduction of Forms by Tachistoscopic Methods, Samuel Renshaw