The following is a writeup (pursuant to Mingyuan's proposal) of the discussion at the Austin LW/SSC Meetup on November 16, 2019, at which we discussed six different SlateStarCodex articles. We meet every Saturday at 1:30pm - if you're in the area, come join us!
You are welcome to use the comments below to continue discussing any of the topics raised here. I also welcome meta-level feedback: How do you like this article format? What sorts of meetups lead to interesting writeups?
Disclaimer: I took pains to make it clear before, during, and after the meetup that I was taking notes for posting on LessWrong later. I do not endorse posting meetup writeups without the knowledge and consent of those present!
The Atomic Bomb Considered As Hungarian High School Science Fair Project
There was a Medium post on John von Neumann, which was discussed on Hacker News, which linked to the aforementioned SSC article on why there were lots of smart people in Budapest 1880-1920.
Who was John von Neumann? - One of the founders of computer science, a founder of game theory, and a nuclear strategist. For all his brilliance, he's relatively unknown to the general public. Everyone who knew him said he was an even quicker thinker than Einstein - so why didn't he achieve as much as Einstein? Perhaps because he died of cancer at 53.
Scott Alexander says: {Ashkenazi Jews are smart. Adaptations can have both up- and down-sides (e.g. sickle cell anemia / malaria resistance); likewise some genes cause genetic disorders and also intelligence. These are common in Ashkenazim.}
Jews were forced into finance because Christians weren't allowed to charge interest on loans, but it turned out interest was really useful.
Scott Alexander says: {And why this time period? Because restrictions on Jews only started being lifted just before this period, and they needed a generation or so to pass before they could be successful. And afterward, Nazis happened. Why Hungary and not Germany? Hungary has a "primate city" (Budapest), i.e. a city that's much more prominent than others in its area, so intellectuals will tend to gather there. Germany, by contrast, is less centralized.}
Simulation of idea-sharing and population density - cities are more likely to incubate ideas (Hacker News discussion). Does that mean we'll get more progress if everyone in a certain field gathers in one place? Perhaps. It's helpful to get feedback for your ideas to get your thinking on the right track, rather than going down a long erroneous path without colleagues to correct you.
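The "more density, more idea-sharing" intuition can be put in a toy model (entirely my own illustration, not the simulation linked above): if pairwise contacts scale roughly with the square of population and each contact has a small chance of sparking an idea, one large hub out-produces the same number of people split into isolated towns.

```python
def expected_ideas(population, contact_rate=0.1, spark_prob=0.001):
    """Toy model: pairwise contacts scale with population^2, and each
    contact sparks a new idea with some small probability. Both rate
    parameters are arbitrary placeholders."""
    contacts = contact_rate * population * (population - 1) / 2
    return contacts * spark_prob

# One city of 1000 people vs. the same people in ten isolated towns of 100:
big = expected_ideas(1000)        # ~49.95 expected ideas
small = 10 * expected_ideas(100)  # ~4.95 expected ideas
```

Under these (made-up) parameters the single hub generates roughly ten times the ideas, purely because contacts grow quadratically while population only splits linearly.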
Building Intuitions On Non-Empirical Arguments In Science
Scott Alexander says: {Should we reject the idea of a multiverse if it doesn't make testable predictions? No, because it's more parsimonious, contra the "Popperazi" who say that new theories must have new testable predictions.} This article is interesting because it goes as far as you can into the topic without getting into actual advanced physics.
Similar to Tegmark's argument.
What kinds of multiverse are there? Everett (quantum) multiverse, and cosmological multiverse (different Big Bangs with different physical laws coming from them, etc.). This article applies to both (although maybe you could argue that these are both the same thing).
Related LessWrong article: Belief in the Implied Invisible.
But how do you think about the probability of being in a multiverse, if that multiverse might contain an infinite number of beings? Should we totally discount finite-population universes (as having almost-zero probability) because infinity always outweighs any finite number? See Nick Bostrom's Ph.D. dissertation (the link is not to the dissertation itself, but it likely covers substantially the same material).
The reason for accepting the Everett multiverse is Occam's razor, because it makes the math simpler. Is that accurate? - Yes, but there's a fundamental disagreement about what "simpler" means. On the one hand, Schrödinger's equation naturally predicts the Many-Worlds Interpretation (MWI). On the other hand, MWI doesn't explain where the probabilities come from. MWIers have been trying to figure this out for a while.
Generally probability refers to your state of knowledge about reality. But quantum mechanics overturns that by positing fundamental uncertainty that is not merely epistemic.
Re MWI probabilities, see Robin Hanson's "Mangled Worlds": {Multiverse branches that don't obey the 2-norm probability rule (a.k.a. the "Born rule") can be shown to decline in measure "faster" than branches that do, and if a branch falls below a certain limit it ceases to exist in any meaningful sense because it merges into the background noise, etc.}
Robin Hanson's an economist, right? - Yes, but he may have studied physics at one point.
Scott Aaronson's 2003 paper: {Maybe it's natural to use the 2-norm to represent probability, because it's the only conserved quantity. If we didn't, we could arbitrarily inflate a particular branch's probability.}
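For concreteness, the conserved quantity in question can be written out in standard quantum notation (my gloss, not a quote from the paper):

```latex
% State expanded in a basis, with Born-rule probabilities:
|\psi\rangle = \sum_i c_i \,|i\rangle, \qquad
P(i) = |c_i|^2, \qquad
\sum_i |c_i|^2 = 1
% Unitary evolution |\psi'\rangle = U|\psi\rangle preserves this 2-norm:
% \langle\psi'|\psi'\rangle = \langle\psi|U^\dagger U|\psi\rangle
%                           = \langle\psi|\psi\rangle = 1,
% whereas a 1-norm like \sum_i |c_i| is not, in general, conserved.
```

That is the sense in which the 2-norm is special: it's the quantity that unitary dynamics automatically keeps summing to 1, so it's the natural candidate for probability.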
Autism And Intelligence: Much More Than You Wanted To Know
Tower-vs-foundation model - intelligence is composed of a "tower" and a "foundation", and if you build up the tower too much without also building up the foundation, the tower collapses and you end up autistic. Analogy: MS Word and PowerPoint got better with each update until eventually they became so complex that they're no longer usable.
What mechanisms could explain the tower-vs-foundation model?
Is intelligence linear? You can have e.g. a musical prodigy, or someone who's exceptionally good at specific tasks despite being autistic.
How is intelligence defined here? - By IQ tests, in the cited studies. But these are designed for neurotypical people.
People with autism have higher-IQ families. But maybe such families are simply more likely to take their kids to doctors to get diagnosed with autism - a major confounder.
The studies look mostly at males and the father's genes, but you'd think the mother's genes are equally important.
Facebook post (archive) similar to the tower-vs-foundation concept.
Maybe you could do surveys of lower-income communities to check autism incidence there - but this is difficult, particularly because residents may be more mistrustful of strangers asking about such things. Or maybe not; maybe lower-income people are more likely to accept payment for participating in scientific studies.
Testing for autism is questionable - why is there a 3:1 male:female ratio? Is this reflective of reality, or of bias in diagnosis? One way to tell: check whether diagnosis rates rise over time at the same rate for males and females. If females are generally diagnosed later than males, that might indicate a diagnostic bias that makes males with autism more likely to be identified than females with autism.
How fuzzy is the category of autism? "It's a spectrum" - or more of a multivariate space?
Article in The Guardian says: {The move to accept (and not treat) autism has been harmful for people with severe autism.}
Scott Alexander says: {If you want to call something a disease, it should have a distinct cluster/separation from non-diseased cases, rather than just a continuum with an arbitrary line drawn on it.} This is particularly important in psychology, because oftentimes we can only observe symptoms and only guess as to the cause (in contrast to e.g. infectious diseases).
Samsara (short story)
In a world where everyone has attained enlightenment, one man stands alone as being unenlightened... He gets more and more stubborn the more the enlightened ones try to reach him, and founds his own school of unenlightenment. We'll stop the discussion here to avoid spoilers, but you should read it.
This is the type of story that would benefit from having padding material added to the end so that you don't know when the ending is about to come, à la Gödel, Escher, Bach.
It's like that Scottish movie Trainspotting (which requires subtitles for Americans because of the heavy Scottish dialect) - "What if I don't want to be anything other than a heroin addict"?
Financial Incentives Are Weaker Than Social Incentives But Very Important Anyway
Scott Alexander says: {A survey asked people if they would respond to a financial incentive, and if they thought others would respond to the same incentive. People said that others would be more likely to respond to incentives than they themselves were.}
It could be entirely true that most people wouldn't respond to incentives, but some people would, and so when you ask them if "other people" would respond, they answer as if you're asking if "anyone" would. The survey question is unclear.
Social desirability bias - you don't want to be known as someone who accepts incentives easily, because that puts you in a bad negotiating position. Always overstate your asking price.
"Would you have sex with me for a billion dollars..." joke.
Speaking of salary negotiations: Always have a good second option you can tell the employer about. But if a candidate claims that "Amazon and Google" are contacting them, that doesn't mean they're any more desirable - Amazon and Google contact everyone!
You could look at sin taxes to see if they have any effect.
Predictably Irrational by Dan Ariely - a daycare started fining parents who were late in picking up their kids, but this resulted in even more parents being late.
Incentives occur at the margin, so it can be effective to have incentives even if "most" people don't respond.
Social incentives are powerful. Can you set up social incentives deliberately? One example: make public commitments to do something, and get shamed if you later don't do it. (Though see Derek Sivers's TED talk Keep your goals to yourself - did that research consider the effect of publicly checking in on your progress later?)
With purely financial incentives e.g. Beeminder, you might treat it transactionally like in the daycare example.
Aside: {Multi-armed bandit problem: There are a bunch of slot machines with different payouts. What's the best strategy? Explore vs. Exploit tradeoff. Algorithms to Live By - book by Brian Christian and Tom Griffiths, who were also on Rationally Speaking. E.g. If you find a good restaurant in a city you're visiting for just a few days, you should go back there again, but in your hometown you should explore more different restaurants.}
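A minimal sketch of one standard bandit strategy, epsilon-greedy (my own illustration - the book surveys several algorithms, and the payouts and parameter values here are arbitrary):

```python
import random

def epsilon_greedy(true_means, epsilon=0.1, rounds=10000, seed=0):
    """Toy epsilon-greedy strategy for the multi-armed bandit problem.

    With probability epsilon we explore (pull a random arm); otherwise
    we exploit the arm with the best observed average payout so far.
    Each arm pays 1 with its (hidden) true probability, else 0.
    """
    rng = random.Random(seed)
    n_arms = len(true_means)
    counts = [0] * n_arms    # pulls per arm
    totals = [0.0] * n_arms  # accumulated reward per arm

    for _ in range(rounds):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)  # explore: random arm
        else:
            # Exploit: best observed average (unvisited arms count as
            # infinitely promising, so every arm gets tried at least once).
            averages = [totals[i] / counts[i] if counts[i] else float("inf")
                        for i in range(n_arms)]
            arm = max(range(n_arms), key=lambda i: averages[i])
        reward = 1.0 if rng.random() < true_means[arm] else 0.0
        counts[arm] += 1
        totals[arm] += reward

    return counts, totals

counts, totals = epsilon_greedy([0.2, 0.5, 0.8])
# The best arm (index 2) ends up with the overwhelming majority of pulls.
```

Exploration keeps every arm's estimate honest while exploitation concentrates pulls on the best-looking arm - the same explore/exploit tradeoff as the restaurant example.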
Hypothesis explaining the survey: you have more information about yourself. Someone who estimates they have a 30% chance of e.g. moving to another city will answer "No" to the survey 100% of the time - so self-reports understate the aggregate response rate even when everyone answers honestly about their own most likely behavior.
Aside: {Yes Minister TV show features Minister Jim Hacker, a typical well-meaning politician concerned about popularity and getting stuff done; and Sir Humphrey, his secretary, a 30-year civil servant who knows how things actually work and is always frustrating the minister's plans. "The party have had an opinion poll done; it seems all the voters are in favour of bringing back National Service. - Well, have another opinion poll done showing the voters are against bringing back National Service!"}
Scott Alexander concludes: {Skeptical of the research, because we do see people respond to financial incentives. Even if most people don't, it could still be important.}
Too Much Dark Money In Almonds
Scott Alexander says: {Why is there so little money in politics? Less than $12 billion/year in the US, which is less than the amount of money spent on almonds. Hypothesis: this is explained by coordination problems.}
Other ideas: People want to avoid escalation since if they spend money their political opponents will just spend more, etc. But this is implausible because it itself requires a massive degree of coordination.
What if money in politics doesn't actually make much difference? If the world is as depicted in Yes Minister, the government will keep doing the same thing regardless of political spending anyway.
Maybe a better comparison: almond advertising is to total almond sales as political spending is to all government spending.
Spending directly on a goal is more effective than lobbying the government to spend on that goal, e.g. Elon Musk and SpaceX.
What would have more political spending, an absolute monarchy or a direct democracy? (Disagreement on this.)
Why is bribery more common in some places than others? Maybe you just can't get anything done at all without bribes. Or maybe some places hide it better by means of e.g. revolving-door lobbyist deals, "We'll go easy on your cousin who's in legal trouble", etc.
Aside: {Scott Alexander asks: {Is someone biased simply because they have a stake in something?} Total postmodern discourse would entirely discount someone's argument based on their stake in the matter; but we aren't so epistemically helpless that we can't evaluate the actual contents of an argument.}
Aside: {Administrative clawback: If you fix problems, you'll get less money next year - perhaps by more than enough to cancel out the benefits of the fix. They'll expect you to make just as much progress again, which may not be possible. Don't excel because that'll raise expectations for the future.}
Or maybe almonds are a bigger deal than you think!