I'm not at all privy to the financial planning approaches of the very wealthy, but now that people tend to live into their 80s, how are options #1 and #2 not also forcing the future inheritors to mostly provide for themselves until age 50 or 60? They may not be so worried about saving for retirement, and know there's a level below which their parents won't let them fail, so it's not the same. But, it seems like without a trust fund they're not really going to inherit significant wealth until their own kids have graduated college?
This is really helpful, and definitely a problem I've had in my line of consulting work. Not so much with hardtech - "We make better inverters" is the kind of thing that at least gets written down somewhere, and the hard part is figuring out exactly what they mean by "better." But with software, descriptions are so vague, and companies pivot a dozen times and claim a hundred target markets.
In my own conversations with people developing software platforms, part of the reason is that at an abstract mathematical level, many problems have very similar shapes, and differ only in implementation details, interface details, and the questions a customer wants to use the math to answer. If you just say "We can tell you where to put energy storage to make the distribution grid work better," then you're only going to get interest from utilities, and no one will realize (or believe) that the same approach will help with water and oil and gas and traffic and shipping. So instead you come up with vague words about digital logistics and routing and infrastructure solutions/insights/optimizations or whatever, and no one has any idea what makes one company different from a dozen others.
Also: companies often lie, or are confused, about what the various parts of themselves want, need, and are willing to pay for. So pricing can only really crystallize later in the product development process, once the provider has gotten some real feedback on which of the things they could do real customers actually see as needs.
how many K12 schools test students on math and reading when they enter, then place them in classes according to the level they’re at?
Sometimes they try. I'd love to learn they've gotten better over time, but it doesn't seem so? When I was in 4th grade (in 1995) they put 5 PCs in every elementary school classroom in my district and made us all spend ~4 hrs/wk on math and reading programs that quizzed us and advanced us based on the results. By midyear it started giving me 5th grade math... and my teacher didn't know how to do the 5th grade arithmetic problems. When I reached the end of the reading program, I had to wait a month for the school to get me an account on the next-level reading program... and the following year they reset me to the beginning of it again.
Remember ‘what is good for GM is good for America’? We’re really doing this?
We always seem to forget that sometimes 'what is good for GM' is a good swift kick in the @$$.
I find it so interesting how often this kind of thing keeps happening, and I can't tell how much of it is honest mistakes, versus lack of interest in the core questions, versus willful self-delusion. Or maybe they're right, but aren't noticing that they're actually proving (and exemplifying) that humans also lack generalizable reasoning capability.
My mental analogy for the Tower of Hanoi case: Imagine I went to a secretarial job interview, and they said they were going to test my skills by making me copy a text by hand with zero mistakes. Harsh, odd, but comprehensible. If they then said, "Here's a copy of Crime and Punishment, along with a pencil and five sheets of loose leaf paper. You have 15 minutes," then the result does not mean what they claim it means. Even if they gave me plenty of paper and time, but I said, "#^&* you, I'm leaving," or, "No, I'll copy page one, and prove that if I can copy page n then I can copy page n+1, so that I have demonstrated the necessary skill by induction," then the result still does not mean what they claim it means.
I do really appreciate the chutzpah of talking about children solving Towers of Hanoi. Yes, given an actual toy and enough time to play, they often can solve the puzzle. Given only a pen, paper, a verbal description of the problem, and a demand for an error-free, specified-format written procedure for producing the solution, not so much. These are not the same task. There's a reason many of us had to write elementary school essays on things like precisely laying out all the steps required to make a sandwich: this ability is a dimension along which humans vary greatly. If that's not compelling, think of all the examples of badly written instruction manuals you've come across in your life.
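To make the gap concrete (this is just an illustrative Python sketch, not anything from the paper under discussion): the rule for solving the puzzle is a tiny recursion, but the written-out procedure it produces has 2^n - 1 moves, so "play with the toy until it's solved" and "write out every move without error" scale very differently.

```python
def hanoi(n, source, target, spare, moves=None):
    """Generate the full move list for n disks via the standard recursion.

    The rule itself is three lines, but the written-out procedure has
    2**n - 1 moves -- the difference between being able to solve the
    puzzle by playing and being able to write out every step flawlessly.
    """
    if moves is None:
        moves = []
    if n == 1:
        moves.append((source, target))
        return moves
    hanoi(n - 1, source, spare, target, moves)   # move the top n-1 disks out of the way
    moves.append((source, target))               # move the largest disk
    hanoi(n - 1, spare, target, source, moves)   # stack the n-1 disks back on top
    return moves

print(len(hanoi(3, "A", "C", "B")))   # 7 moves
print(len(hanoi(10, "A", "C", "B")))  # 1023 moves
```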
I also appreciate the chutzpah of that 'goalpost shifting' tweet. It just kind of assumes away any possibility that there is a difference between AGI and ASI, and in the process inadvertently implies a claim that humans also lack reasoning capability? And spreadsheets do crash and freeze up when you put more rows and columns of data in them than your system can handle - I'd much rather they be aware of that limit and warn me I was about to trigger a crash.
Yes, that video is excellent. It is also over 16 minutes long to give a very cursory explanation that x-risk is even a thing. It isn't literally true that you only get five words, but it's also true that most people won't watch a video that long unless they're already quite interested in the topic or the person.
I do agree there is a lot of potential for better, simpler explanations of x-risk, targeted at a wider variety of audiences, and that social media is likely an important part of how that should be distributed. However, that seems to me to be only about as much of a suggestion as "television" or "radio" would have been fifty or ninety years ago.
I find that the typical range of social media content is so frequently overwrought in its claims of massive (good or catastrophic) impacts of all kinds of things that people naturally discount what they hear to a huge degree. Talk of global extinction gets rounded down to 'something kinda bad might happen somewhere eventually.' I also think that many of the people who 'dislike' AI discuss it in a way that does not give me much confidence that they understand x-risk, or are willing to invest in developing ways of accurately conveying an understanding of x-risk.
There is a similar effect (without the salary considerations) from working remote instead of commuting. Before I went full time remote my commute had gone up to about 1.5 hrs each way. For me, going remote meant an extra 15 hrs/wk available for other things.
I always wonder if those people change their minds once they retire.
I agree with that, yes.
Cheaper sodium production will/would also be great for reducing the cost of sodium-ion batteries, which with some more development and scaling I could easily see outperforming lithium for stationary applications.
On homework: It's been about 25 years since I first learned about flipped classrooms, where the only homework you assign is reading the material, watching lectures or other videos, and taking notes, and then all in-class time goes to discussion and collaborative assignments. How does this not sidestep the entire AI problem? Presumably while also opening up the whole field of teaching to massive potential for better-quality readings and lectures made by the best providers.
I am assuming the answer to why we don't do this is something like, "But the kids won't do the readings and watch the videos." Which seems functionally irrelevant for learning, since the current approach already has just about the same problem when kids don't do their homework. If you show up to class without having taken the notes, without knowing the material, and without a set of questions to ask about the things you didn't understand, you get a bad grade for the day; doing any one of those should be sufficient to avoid failing on anything short of major exams, since you shouldn't be penalized or shamed for not being a perfect autodidact on a specific lesson from specific sources. (As always, for me, I think back to my sophomore year wave mechanics class with Howard Georgi. His grading formula had separate effort points and achievement points, so that trying harder to learn made the assignment and exam grading more lenient, and acing the final exam could always make up any points lost during the term - rough sketch of that kind of scheme below.)
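I don't remember Georgi's actual formula, so this is purely a hypothetical sketch of a scheme with those properties - effort and achievement tracked separately, more effort buying more lenient term grading, and a strong final able to recover anything lost during the term. All weights here are made up for illustration.

```python
def course_grade(effort, achievement, final_exam):
    """Hypothetical grading scheme (illustrative only, not Georgi's actual formula).

    effort, achievement, final_exam: fractions in [0, 1].
    """
    # More effort buys back a larger share of the achievement points
    # missed during the term, i.e. more lenient term grading.
    term = achievement + 0.5 * effort * (1.0 - achievement)
    # Acing the final can always make up points lost during the term:
    # the course grade is whichever is higher.
    return max(term, final_exam)

# A student who struggled during the term but worked hard and aced the final:
print(course_grade(effort=1.0, achievement=0.6, final_exam=0.95))  # 0.95
```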
I also just can't help but notice how differently e.g. Covid school closures might have gone, if we'd started doing this in 2010-2020. Interactive group discussions, not staring at a screen being talked at by teachers who don't know how to do that.