It seems to me that such "unhealthiness" is pretty normal for labor and property markets: when I read books from different countries and time periods, the fear of losing one's job and home is a very common theme. Things were easier in some times and places, but these were rare.
So it might make more sense to focus on reasons for "unhealthiness" that apply generally. Overregulation can be the culprit in today's US, but I don't see it applying equally to India in the 1980s, Turkey in the 1920s, or England in the early 1800s (these are the settings of some books on my shelf whose protagonists had very precarious jobs and housing). And even if you defeat overregulation, the more general underlying reasons might still remain.
What are these general reasons? In the previous comment I said "exploitation", but a more neutral way of putting it is that markets don't always protect one particular side. Markets are two-sided: there's no law of economics saying a healthy labor market must be a seller's market, while housing must be a buyer's market. Things could just as easily go the other way. So if we want to make the masses less threatened, it's not enough to make markets healthier overall; we need to empower the masses' side of the market in particular.
I think questions of power differences between the "elites" and the "masses" are very relevant to the AI transition, both as a model for intuitions and as a way to choose policy directions now, because AI will tend to amplify and lock in these power differences, and at some point it'll be too late. For more context, see these comment threads of mine: 1, 2, 3, or this book review.
Yeah, I wouldn't have predicted this response either. Maybe it's a case of something we talked about long ago - that if a person's "true values" are partly defined by how the person themselves would choose to extrapolate them, then different people can end up on very divergent trajectories. Like, it seems I'm slightly more attached to some aspects of human experience that you don't care much about, and that affects the endpoint a lot.
Despite our superior technology, there are many things that Western countries could do in the past that we can’t today—e.g. rapidly build large-scale infrastructure, maintain low-crime cities, and run competent bureaucracies.
Why do you focus on these problems? I mean, sure, the average person in the West can feel threatened by crime, infrastructure decay, or incompetent bureaucracy. But they live every day under much bigger threats, like the threat of losing their job, getting evicted, getting denied healthcare, or getting billed and fee'd into poverty. These seem to be the biggest societal (non-health, non-family) threats for our hypothetical average person. And the common pattern in these threats isn't decay or incompetence, it's exploitation by elites.
That tweet doesn't sound right to me. Or at least, to me there's a simpler and more direct explanation of bubbles in terms of real resources, without having to mention money supply or central banks at all.
During a bubble, people are having fun because resources are being misallocated: misallocated to their fun. Some rich chumps are throwing their resources at something useless, like buying tulips. That bankrolls the good times for everyone else: the tulip growers, the hairdressers who serve the tulip growers, and so on. But at some point the rich chumps realize that tulips aren't that great, and that they burned their resources just to make a big bonfire and keep everyone warm for a while. When that happens, the tulip growers lose their jobs, and then the hairdressers who served them, and so on. That's the pain of the bubble ending, and it's unavoidable, central bank or no.
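To make the chain concrete, here's a toy sketch in Python - all the numbers (the chumps' spending, the pass-through fraction, the number of layers) are made up purely for illustration, not real estimates:

```python
# Toy model of bubble-driven activity: rich chumps spend on tulips,
# tulip growers pass a fraction of that income on to hairdressers,
# hairdressers to their own suppliers, and so on down the chain.

CHUMP_SPEND = 100.0   # resources the chumps burn on tulips each period (made up)
PASS_THROUGH = 0.6    # fraction of income each layer spends on the next (made up)

def total_activity(chump_spend: float, passthrough: float, layers: int = 20) -> float:
    """Sum the spending that cascades down the chain of service providers."""
    total, income = 0.0, chump_spend
    for _ in range(layers):
        total += income
        income *= passthrough  # each layer spends a fraction of what it earned
    return total

# While the bubble lasts: 100 of tulip spending supports ~250 of activity,
# i.e. roughly 100 / (1 - 0.6) via the geometric series.
print(total_activity(CHUMP_SPEND, PASS_THROUGH))

# When the chumps wise up, their spending drops to zero, and every layer of
# jobs built on it unwinds with it - central bank or no.
print(total_activity(0.0, PASS_THROUGH))
```

The point of the sketch is just that the whole chain of activity is a multiple of the chumps' spending, so when that spending stops, the contraction propagates layer by layer regardless of what happens to the money supply.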
(This thread is getting a bit long, and we might not be convincing each other very much, so I hope it's ok if I only reply with points I consider interesting - not just push-pull.)
With the concert pianist thing I think there's a bit of a type error going on. The important skill for a musician isn't having fast fingers, it's having something to say. Same as: "I'd like to be able to write like a professional writer" - does that mean anything? You either have things you want to write, in the way that you want to write them, or there's no point being a writer at all, much less asking an AI to make you one. With music or painting it's the same. There's some amount of technique required, but you need to have something to say, otherwise there's no point doing it.
So with that in mind, maybe music isn't the best example in your case. Let's take an area where you have something to say, like philosophy. Would you be willing to outsource that?
Well, there's no point in asking the AI to make me good at things if I'm the kind of person who will just keep asking the AI to do more things for me! That path just leads to the consumer blob again. The only alternative is if I like doing things myself, and in that case, why not start now? After all, Leonardo himself wasn't motivated by the wish to become a polymath; he just liked doing things and did them, even when they were a bit difficult ("chores").
Anyway, that was the theoretical argument; the practical argument is that this isn't what's being offered now. We started talking about outsourcing the task of understanding people to AI, right? That doesn't seem like a step toward Leonardo to me! It would make me stop using a pretty important part of my mind. Moreover, it's being offered by corporations that would love to make me dependent, and that have a bit of a history of getting people addicted to stuff.
There's no "line" per se. The intuition goes something like this. If my value system is only about receiving stuff from the universe, then the logical endpoint is a kind of blob that just receives stuff and doesn't even need a brain. But if my value system is about doing stuff myself, then the logical endpoint is Leonardo da Vinci. To me that's obviously better. So there are quite a lot of skills - like doing math, playing musical instruments, navigating without a map, or understanding people as in your example - that I want to keep doing myself even if there are machines that could do them cheaper and better.
This seems like one-shot reasoning though. If you extend it to more people, the end result is a world where everyone treats understanding people as a chore to be outsourced to AI. To me this is somewhere I don't want to go; I think a large part of my values are chores that I don't want to outsource. (And in fact this attitude of mine began quite a few steps before AI, somewhere around smartphones.)
I agree this distinction is very important, thank you for highlighting it. I'm in camp B and just signed the statement.