To me the most important graph is the one that shows both mothers and fathers started spending much more time on child-care in the 90s. What the heck happened? Did children suddenly become that much more difficult to manage? If kids really consume that much time and effort, it's no wonder that people don't want to have kids - it's too much damn work!
The Japanese value stability much, much more than Americans. This harms their economy in various ways:
How much did the supposedly severe decline in Google's organizational health contribute to your decision to change jobs?
Defined benefit pension schemes like Social Security are grotesquely racist and sexist, because of life expectancy differences between demographic groups.
African American males have a life expectancy of about 73 years, while Asian American females can expect to live 89 years. The percentage difference between those numbers may not seem that large, but it means that the latter group gets 24 years of pension payouts (assuming a retirement age of 65), while the former gets only 8, a 3x difference. So if you look at a black man and an Asian woman who have the...
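The payout arithmetic above can be sketched in a few lines (the life-expectancy figures are the rough ones quoted here, not official actuarial data):

```python
# Sketch of the payout arithmetic above. Life-expectancy figures are the
# approximate ones quoted in the comment, not official actuarial data.
RETIREMENT_AGE = 65

life_expectancy = {
    "African American male": 73,
    "Asian American female": 89,
}

# Expected years of pension payouts for each group.
payout_years = {group: le - RETIREMENT_AGE for group, le in life_expectancy.items()}

ratio = payout_years["Asian American female"] / payout_years["African American male"]
print(payout_years)  # {'African American male': 8, 'Asian American female': 24}
print(ratio)         # 3.0
```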
Judging by the hammering that Meta's stock has taken over the last 5 years, the market really disagrees with you.
Here's an argument against radical VR transformation in the near term: some significant proportion of people have a strong aversion to VR. But the benefit of VR for meetings has strong network effects: if you have 6 friends you want to meet with, but 2 of the 6 hate VR, that's going to derail the benefit of VR for the whole group.
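A back-of-the-envelope way to see how fragile this is: if each person independently dislikes VR with some probability p, the chance that an n-person group can all meet in VR is (1 - p)^n, which falls off quickly with group size. (The 20% aversion rate below is a made-up illustrative number, not data.)

```python
# Toy model of the network effect: each person independently dislikes VR
# with probability p, so a group of n can all meet in VR with probability
# (1 - p) ** n.
def group_ok(p: float, n: int) -> float:
    """Probability that no one in an n-person group refuses VR."""
    return (1 - p) ** n

# Even a modest (assumed) 20% aversion rate sinks most 7-person groups:
print(round(group_ok(0.2, 7), 3))  # 0.21
```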
The situation is not ‘handled.’ Elites have lost all credibility.
I think it's worth caveating that not all elites have lost credibility. Elites in places like Singapore, Switzerland, and Finland retain a lot of credibility.
Two possibilities:
I don't buy the housing cost / homelessness causation. There are many poor cities in the US that have both low housing costs and high homelessness. This page mentions Turlock, CA, Stockton, CA, and Springfield, MA as among the top 15 places with the highest homelessness rates; a quick Zillow search indicates they all have a fair bit of cheap housing.
The relationship between homelessness and state-wide housing costs is probably caused by a latent variable: degree of urbanization. Cities are both more expensive and have more homelessness, and states vary w...
On the state level, the correlation between urbanization and homelessness is small (R^2 = 0.13) and drops to zero when you control for housing costs, while the reverse is not true (the R^2 of the residual is 0.56). States like New Jersey, Rhode Island, Maryland, Illinois, Florida, Connecticut, Texas, and Pennsylvania are among the most urbanized but have relatively low homelessness rates, while Alaska, Vermont, and Maine have higher homelessness despite being very rural. There's also, like, an obvious mechanism where expensive housing causes homelessness (...
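The "control for a variable" step described above can be illustrated with synthetic data (these are made-up numbers, not the real state-level figures): regress out housing costs from homelessness, then check whether urbanization still explains the residual.

```python
# Illustration, with synthetic data (NOT the real state-level figures),
# of controlling for housing cost: in this toy world, housing cost depends
# on urbanization, and homelessness depends only on housing cost (the
# claimed mechanism), so urbanization's apparent effect should vanish
# once housing cost is regressed out.
import numpy as np

rng = np.random.default_rng(0)
n = 50
urbanization = rng.normal(size=n)
housing_cost = 0.8 * urbanization + rng.normal(scale=0.5, size=n)
homelessness = 1.0 * housing_cost + rng.normal(scale=0.5, size=n)

def r_squared(x, y):
    """R^2 of a simple one-variable linear regression of y on x."""
    X = np.column_stack([x, np.ones_like(x)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return 1 - resid.var() / y.var()

print(r_squared(urbanization, homelessness))   # substantial raw correlation

# Residual of homelessness after controlling for housing cost:
X = np.column_stack([housing_cost, np.ones_like(housing_cost)])
coef, *_ = np.linalg.lstsq(X, homelessness, rcond=None)
resid = homelessness - X @ coef
print(r_squared(urbanization, resid))          # near zero after the control
```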
Copied from a previous comment on Hacker News
I wish you well and I hope you win (ed, here I mean I hope the proposal is approved)
I am pessimistic though. I don't think people really understand how much current homeowners do not want additional housing to be built. It makes sense if you consider that the net worth of a typical homeowner is very substantially made up of a highly leveraged long position in real estate. If that position goes south - because of an increase in housing supply, or because of undesirable new people moving into the neighborhood - th...
End Social Security and Other Defined-Benefit Pension Schemes

They are intrinsically racist and sexist.
Consider two people, Alice and Bob. Alice is an Asian-American female, while Bob is an African-American male. From the point of view of Social Security, they are identical in every respect: they are the same age, they make the same contributions of the same amount on the same date, and retire at the same time. For the sake of argument, suppose they begin taking SS payments at age 70.
Given that Alice and Bob have made exactly equivalent contributions to ...
Having a budget where initial creation is essentially free (fun!) while maintenance is extremely expensive (drudgery!) is a dramatic exaggeration for most software development.
My feeling is that most software development has exactly the same cost parameters; the difference is just that BigTech companies have so much money that they can pay thousands of engineers handsome salaries to do the endless drudgery required to keep their tech stacks working.
The SQLite devs pledge to support the product until 2050.
Thanks for the positive feedback and interesting scenario. I'd never heard of Birobidzhan.
Thanks for the tip about Kusto - it actually does look quite nice.
My prediction is that the main impact is to make it easier for people to throw together quick MVPs and prototypes. It might also make it easier for people to jump into new languages or frameworks.
I predict it won't impact mainstream corporate programming much. The dirty secret of most tech companies is that programmers don't actually spend that much time programming. If I only spend 5 hours per week writing code, cutting that time down to 4 hours while potentially reducing code quality isn't a trade anyone will really want to make.
Why isn't this an argument for banning all politically powerful people from Twitter?
One very important observation related to this issue is that we often observe specific cognitive deficits (e.g. people who can't use nouns), but those specific deficits are almost always related to brain trauma (stroke, etc.). If significant cognitive logic were coded into the genome, we should see specific cognitive deficits caused by mutations in otherwise healthy young people.
I'm not sure exactly what you mean, but I'll guess you mean "how do you deal with the problem that there are an infinite number of tests for randomness that you could apply?"
I don't have a principled answer. My practical answer is just to use good intuition and/or taste to define a nice suite of tests, and then let the algorithm find the ones that show the biggest randomness deficiencies. There's probably a better way to do this with differentiable programming - I finished my PhD in 2010, before the deep learning revolution.
In my PhD thesis I explored an extension of the compression/modeling equivalence that's motivated by Algorithmic Information Theory. AIT says that if you have a "perfect" model of a data set, then the bitstream created by encoding the data using the model will be completely random. Every statistical test for randomness applied to the bitstream will return the expected value. For example, the proportion of 1s should be 0.5, the proportion of 1s following the prefix 010 should be 0.5, etc. Conversely, if you find a "randomness deficiency", you have...
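A toy version of the randomness tests described above can be sketched as follows: check the overall proportion of 1s in a bitstream, and the proportion of 1s immediately following the prefix "010". For a truly random stream both should be close to 0.5; a structured stream shows a "randomness deficiency".

```python
# Toy randomness tests in the spirit of the ones described above: for a
# truly random bitstream, the proportion of 1s overall, and the proportion
# of 1s following any fixed prefix, should both be close to 0.5.
import random

def prefix_one_rate(bits: str, prefix: str) -> float:
    """Proportion of 1s immediately following each occurrence of `prefix`."""
    follows = [bits[i + len(prefix)]
               for i in range(len(bits) - len(prefix))
               if bits[i:i + len(prefix)] == prefix]
    return follows.count("1") / len(follows)

random.seed(42)
stream = "".join(random.choice("01") for _ in range(100_000))
print(stream.count("1") / len(stream))  # close to 0.5
print(prefix_one_rate(stream, "010"))   # close to 0.5

# A highly structured stream exhibits a glaring randomness deficiency:
biased = "010" * 30_000
print(prefix_one_rate(biased, "010"))   # 0.0 - every "010" is followed by "0"
```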
Cool concepts! What tech stack did you use? Was it painful to get the Facebook API working?
Not a stupid question; this issue is actually addressed in the essay, in the section on interior modeling vs unsupervised learning. The latter is very vague and general, while the former is much more specific and also intrinsically difficult. The difficulty and precision of the objective make it a much better goal for a research community.
I started this essay last year and procrastinated on completing it for a long time, until the recent GPT-3 announcement gave me the motivation to finish it up.
If you are familiar with my book, you will notice some of the same ideas, expressed with different emphasis. I congratulate myself a bit on predicting some of the key aspects of the GPT-3 breakthrough (data annotation doesn't scale; instead learn highly complex interior models from raw data).
I would appreciate constructive feedback and signal-boosting.
I would add two ideas:
Holden is a smart guy, but he's also operating under a severe set of political constraints, since his organization depends so strongly on its ability to raise funds. So we shouldn't make too much of the fact that he thinks academia is pretty good - obviously he's going to say that.
Interesting analysis. I hadn't heard of Goodman before so I appreciate the reference.
In my view the problem of induction has been almost entirely solved by the ideas from the literature on statistical learning, such as VC theory, MDL, Solomonoff induction, and PAC learning. You might disagree, but you should probably talk about why those ideas prove insufficient in your view if you want to convince people (especially if your audience is up-to-date on ML).
One particularly glaring limitation with Goodman's argument is that it depends on natural l...
Good article. I would advise less emphasis on traditional schooling (reading, writing, 'rithmetic) and more emphasis on relationship intelligence and embodied intelligence (making things with your hands).