We don't have an open quotes thread on the main page, but this made me chuckle:
"mathematician thinks in numbers, a lawyer in laws, and an idiot thinks in words." from Nassim Taleb in
I think this paper implies that rare harmful genetic mutations explain much of the variation in human intelligence. Since it will soon be easy to use CRISPR to eliminate such mutations in embryos, I think this paper's results, if true, mean that genetic engineering for super-intelligence will be relatively easy.
Is it currently legal to run a for-money prediction market in Canada? I assume the answer is "no," but surprisingly I was unable to find a clear ruling anywhere on the Internet. All I can find is this article, which suggests that binary options (which probably include prediction markets) exist in a legally nebulous state right now.
Request for programmers: I have developed a new programming trick that I want to package up and release as open-source. The trick gives you two nice benefits: it auto-generates a flow-chart diagram description of the algorithm, and it gives you steppable debugging from the command line without an IDE.
The main use case I can see is when you have some code that is used infrequently (maybe once every 3 months), and by default you need to spend an hour reviewing how the code works every time you run it. Or maybe you want to make it easier for coworkers to get...
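The post doesn't say how the trick is implemented, but as a purely illustrative sketch (my own, not the poster's package): in Python, a `@step` decorator can record the order in which annotated functions run, dump that trace as a Graphviz DOT flow-chart description, and optionally drop into the stdlib `pdb` debugger before each step for IDE-free command-line stepping. All names here (`step`, `to_dot`, `TRACE`, `DEBUG`) are hypothetical.

```python
import functools
import pdb

TRACE = []       # names of @step functions, in execution order
DEBUG = False    # set True to drop into pdb before each step

def step(fn):
    """Record this function as a node in the flow chart; optionally break into pdb."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        TRACE.append(fn.__name__)
        if DEBUG:
            pdb.set_trace()   # steppable debugging from the command line, no IDE
        return fn(*args, **kwargs)
    return wrapper

def to_dot():
    """Render the recorded step sequence as a Graphviz DOT flow chart."""
    edges = "\n".join(f'  "{a}" -> "{b}";' for a, b in zip(TRACE, TRACE[1:]))
    return "digraph flow {\n" + edges + "\n}"

@step
def load():
    return [1, 2, 3]

@step
def process(xs):
    return [x * 2 for x in xs]

@step
def save(xs):
    return sum(xs)

save(process(load()))
print(to_dot())
```

Running the code once regenerates the diagram description from the code itself, which fits the "run it once every 3 months" use case: the flow chart is always up to date, with no separate documentation to maintain.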
(some previous discussion of predictionbook.com here)
[disclaimer: I have only been using the site seriously for around 5 months]
I was looking at the growth of predictionbook.com recently; there has been a pretty stable addition of about 5 new public predictions per day since 2012 (counting only new predictions, not additional wagers on existing ones). I was curious why the site does not seem to be growing, and why it is so rarely mentioned or linked on LessWrong and related blogs.
(sidebar: Total predictions (based on the IDs of ...
Has there been any discussion or thought of modifying the posting of links to support a couple paragraphs of description? I often think that the title alone is not enough to motivate or describe a link. There are also situations where the connection of the link content to rationality may not be immediately obvious and a description here could help clarify the motivation in posting. Additionally, it could be used to point readers to the most valuable portions of sometimes long and meandering content.
Looks like the 'RECENT ON RATIONALITY BLOGS' section on the sidebar is still broken.
Is this a difficult fix?
What advice would you give to a 12-year-old boy who wants to become great at drawing and painting?
(Let's assume that "becoming great at drawing and painting" is a given, so please no advice like "do X instead".)
My thoughts: There is the general advice about spending "10 000 hours", for example by allocating a fixed slot in your schedule (e.g. each day between 4 AM and 5 AM, whether you feel like doing it or not). And the time is best spent learning and practicing new-ish stuff, as opposed to repeating what you are already...
Does anyone have a backup of that one scifi short story from Raikoth about future AGI and acausal trade with simulated hypothetical alien AGI? The link is broken. http://www.raikoth.net/Stuff/story1.html
"Why Boltzmann Brains Are Bad" by Sean M. Carroll https://arxiv.org/pdf/1702.00850.pdf
Two excerpts: "The data that an observer just like us has access to includes not only our physical environment, but all of the (purported) memories and knowledge in our brains. In a randomly-fluctuating scenario, there’s no reason for this “knowledge” to have any correlation whatsoever with the world outside our immediate sensory reach. In particular, it’s overwhelmingly likely that everything we think we know about the laws of physics, and the cosmological...
Consistency in Arithmetic
Double the debt: 2 × (-1) = -2 *Ok
But: (-2) × (-1) = 2 *Ok?
Who will allow you to multiply your debt by another's debt to get rid of it?
2 × (-1) + (-2) × (-1) = (2 + (-2)) × (-1) = 0 × (-1) = 0
But...
2 × (-1) + (-2) × (-1) = -2 + (-2) × (-1) = 0
Therefore...
(-2) × (-1) = 2
Ian Stewart, Professor Stewart’s Cabinet of Mathematical Curiosities, Profile Books, 2008, pages 37-38;
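Stewart's argument can be written out compactly: assuming only the distributive law and that 2 × (-1) = -2, the sign rule for multiplying two negatives follows.

```latex
\begin{aligned}
2\cdot(-1) + (-2)\cdot(-1) &= \bigl(2 + (-2)\bigr)\cdot(-1) = 0\cdot(-1) = 0,\\
-2 + (-2)\cdot(-1) &= 0 \quad\Longrightarrow\quad (-2)\cdot(-1) = 2.
\end{aligned}
```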
So mathematics is mentally created, and it looks objective because of primordial choices we have made? A kind of subconscious of the species, and we've created computers because we think that way...
Reposting this from last week's open thread because it seemed to get buried
Is Newcomb's Paradox solved? I don't mean from a decision standpoint, but the logical knot of "it is clearly, obviously better to one-box, and it is clearly, logically proven better to two-box." I think I have a satisfying solution, but it might be old news.
It's solved for anyone who doesn't believe in magical "free will". If it's possible for Omega to correctly predict your action, then it's only sane to one-box. Only decision systems that deny this ability to predict will two-box.
Causal Decision Theory, because it assumes single-direction-causality (a later event can't cause an earlier one), can be said to deny this prediction. But even that's easily solved by assuming an earlier common cause (the state of the universe that causes Omega's prediction also causes your choice), as long as you don't demand actual free will.
From "A Bitter Ending":
At a conference back in the early 1970s, Danny [Kahneman] was introduced to a prominent philosopher named Max Black and tried to explain to the great man his work with Amos [Tversky]. "I’m not interested in the psychology of stupid people," said Black, and walked away.
Danny and Amos didn’t think of their work as the psychology of stupid people. Their very first experiments, dramatizing the weakness of people’s statistical intuitions, had been conducted on professional statisticians. For every simple problem that fooled undergraduates, Danny and Amos could come up with a more complicated version to fool professors. At least a few professors didn’t like the idea of that. "Give people a visual illusion and they say, ‘It’s only my eyes,’ " said the Princeton psychologist Eldar Shafir. "Give them a linguistic illusion. They’re fooled, but they say, ‘No big deal.’ Then you give them one of Amos and Danny’s examples and they say, ‘Now you’re insulting me.’ "
In late 1970, after reading early drafts of Amos and Danny’s papers on human judgment, Edwards [former teacher of Amos] wrote to complain. In what would be the first of many agitated letters, he adopted the tone of a wise and indulgent master speaking to his naïve pupils. How could Amos and Danny possibly believe that there was anything to learn from putting silly questions to undergraduates? "I think your data collection methods are such that I don’t take seriously a single ‘experimental’ finding you present," wrote Edwards. These students they had turned into their lab rats were "careless and inattentive. And if they are confused and inattentive, they are much less likely to behave more like competent intuitive statisticians." For every supposed limitation of the human mind Danny and Amos had uncovered, Edwards had an explanation. The gambler’s fallacy, for instance. If people thought that a coin, after landing on heads five times in a row, was more likely, on the sixth toss, to land on tails, it wasn’t because they misunderstood randomness. It was because "people get bored doing the same thing all the time."
An Oxford philosopher named L. Jonathan Cohen raised a small philosophy-sized ruckus with a series of attacks in books and journals. He found alien the idea that you might learn something about the human mind by putting questions to people. He argued that because man had created the concept of rationality, he must, by definition, be rational. "Rational" was whatever most people did. Or, as Danny put it in a letter that he reluctantly sent in response to one of Cohen’s articles, "Any error that attracts a sufficient number of votes is not an error at all."
He argued that because man had created the concept of rationality, he must, by definition, be rational.
Oh my.
Or, as Danny put it in a letter that he reluctantly sent in response to one of Cohen’s articles, "Any error that attracts a sufficient number of votes is not an error at all."
Wondering how many computation cycles humanity has wasted since the beginning of time debating words will give me nightmares. In four thousand years of history, have we accumulated even a month of creative, uninterrupted thought about truth that wasn't about definitions?
If it's worth saying, but not worth its own post, then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "