The merits of replacing the profit motive with other incentives have been debated to death (quite literally) for the last 150 years in other fora, including a nuclear-armed Cold War. I don't think revisiting that debate here is likely to be productive.
There appears to be a wide (but not universal) consensus that to the extent the profit motive is not well aligned with human well-being, it's because of externalities. Practical ideas for internalizing externalities, using AI or otherwise, I think are welcome.
That seems to downplay the fact that we will never be able to internalize all externalities simply because we cannot reliably anticipate all of them. So you are always playing catch up to some degree.
Also simply declaring an issue “generally” resolved when the current state of the world demonstrates it’s actually not resolved seems premature in my book. Breaking out of established paradigms is generally the best way to make rapid progress on vexing issues. Why would you want to close the door to this?
LessWrong tends to flinch pretty hard away from any topic that smells even slightly of politics. Restructuring society at large falls solidly under that header.
I sometimes imagine that making it so that anyone who works for or invests in an AI lab is unwelcome at the best Bay Area parties would be a worthwhile state of affairs to work towards, which is sort of along the same lines as what you write.
Eliminating the profit motive would likely mean that militaries develop dangerous AI a few years later.
I'm guessing that most people's main reason is that it looks easier to ban AI research than to sufficiently reduce the profit motive.
As far as I know, there has never been a society that both scaled and durably resisted command-power being sucked into a concentrated authority bubble, whether that command-power/authority was tokenized via rank insignia or via numerical wealth ratings. The task of building a large-scale society of hundreds of millions to billions that can coordinate, synchronize, keep track of each other's needs and wants, fulfill the fulfillable needs and most wants, and nevertheless retain the benefits of giving both humans and nonhumans significant slack, the way the best designs for medium-scale societies of single to tens of millions (like indigenous governance) do and did, is an open problem. I have my preferences for which areas of thought are promising, of course.
Structuring the numericalization of which sources of preference-statement-by-a-wanting-being get interpreted as commands by the people, motors, and machines in the world appears to me to inline the alignment problem and generalize it away from AI. This is the perspective in which "we already have unaligned AI" makes the most sense to me: what is coming is then more powerful unaligned AI, and promising movement on aligning AI with moral cosmopolitanism will likely be portable back into this more general version. Right now, the competitive dynamics of markets, where purchasers typically sort offerings by some combination of metrics centered on price, mean that sellers who can produce things most cheaply in a given area win. Because of monopolization and the externalities it makes tractable, the organizations most able to sell services involving the work of many AI research workers and the largest compute clusters are somewhat concentrated. The more cheaply implementable AI systems are in more hands, but most of those hands are the ones most able to find vulnerabilities in purchasers' decisionmaking and use them to extract numericalized power coupons (money).
It seems to me that ways to solve this would involve things that are already well known: if very-well-paid workers at major AI research labs could find it in themselves to unionize, they might be more able to say no when their organizations' command structure has misplaced incentives stemming from those organizations' stock contract owners' local incentives. But I don't see a quick shortcut around it, and it doesn't seem as useful as technical research on how to align things like the profit motive with cosmopolitan values, e.g. via mechanisms like Dominant Assurance Contracts (a rough sketch of that mechanism follows).
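For readers unfamiliar with the mechanism, here is a minimal sketch of how a dominant assurance contract works. The class name, parameters, and numbers are made up purely for illustration and are not anyone's actual implementation: pledges are collected only if a funding goal is met, and are refunded with a bonus if it is not, which is what makes pledging (weakly) dominant over free-riding.

```python
# Minimal toy sketch of a dominant assurance contract (DAC).
# All names and numbers are illustrative assumptions, not a real system.

from dataclasses import dataclass, field


@dataclass
class DominantAssuranceContract:
    funding_goal: float   # amount needed to produce the public good
    refund_bonus: float   # extra payoff returned to each pledger on failure
    pledges: dict = field(default_factory=dict)

    def pledge(self, contributor: str, amount: float) -> None:
        # Record (or top up) a contributor's pledge.
        self.pledges[contributor] = self.pledges.get(contributor, 0.0) + amount

    def settle(self) -> dict:
        # If the goal is met, pledges are kept and the good is funded.
        # If not, every pledger gets a full refund *plus* the bonus,
        # paid by the entrepreneur; that bonus is what makes pledging
        # a (weakly) dominant strategy rather than free-riding.
        total = sum(self.pledges.values())
        if total >= self.funding_goal:
            return {"funded": True, "payouts": {c: 0.0 for c in self.pledges}}
        return {
            "funded": False,
            "payouts": {c: amt + self.refund_bonus
                        for c, amt in self.pledges.items()},
        }


# Toy usage: three pledgers fall short of the goal, so all get refund + bonus.
dac = DominantAssuranceContract(funding_goal=1000.0, refund_bonus=25.0)
for name, amount in [("a", 300.0), ("b", 300.0), ("c", 250.0)]:
    dac.pledge(name, amount)
print(dac.settle())
```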
You might be thinking about it the wrong way. Societal structures follow capabilities, not wants. If you try to push for "each person works and is paid according to their abilities and needs" too early, you end up with communist dystopias. If we are lucky, the AGI age will improve our capabilities enough that "to everyone according to their needs" may become feasible, aligning incentives with well-being rather than with profit. So, to answer your questions:
That said, there are incremental steps that are possible to take without making things worse, and they are discussed quite often by Scott Alexander and Zvi, as well as by others in the rationalist diaspora. So read them.
This is a bit of a tangent but even in an ideal future I can't see how this wouldn't just be shifting the problem one step away. After all, who would get to define what the 'needs' are?
If it's defined by majority consensus, why wouldn't the crowd pleasing option of shifting the baseline to more expansive 'needs' be predominant?
I don't think I agree that societal structures follow capabilities and not wants. I'll agree that certain advancements in capability (long-term food storage, agriculture, gunpowder, steam engines, wireless communication, etc.) can have dramatic effects on society and how it can arrange itself, but all those changes are driven by people using the new capabilities to further themselves and/or their community.
The idea of scarcity in the present is a great example of this. The world currently produces so much food that about a third of it is thrown away be...
In other words, to maximize the chance for aligned AI, we must first make an aligned society.
"An aligned society" sounds like a worthy goal, but I'm not sure who "we" is in terms of specific people who can take specific actions towards that end.
I think proposals like this would benefit from specifying what the minimum viable "we" for the proposal to work is.
I ask myself the same question. I recently posted an idea about AI regulation to address such issues and start a conversation, but there was almost no reaction, and what there was consisted mostly of pushback. See: https://www.lesswrong.com/posts/8xN5KYB9xAgSSi494/against-the-open-source-closed-source-dichotomy-regulated
My take is that many people here are very worried about AI doom and think that for-profit work is necessary to get the best minds working on the issue. Governments in general also seem to be perceived as incompetent, so the fear is that more regulation will screw things up rather than make them better.
Needless to say, I think this is a false dichotomy, and we should consider how we (as a society involving diverse actors and positions, in a transparent process) can develop regulation that actually creates a playing field where the best minds can responsibly work on societal and AI alignment. It's difficult, of course, but a better option than letting things develop as is. The last couple of years have demonstrated clearly enough that the current approach will not work out. Let's not just bury our heads in the sand and hope for the best.
What, specifically, do you mean by "change current incentive structures"? Who are you altering to want different things, or to have different behaviors that they believe will get them those things?
I haven't seen any concrete thoughts on the topic, and for myself I don't think it's possible without a whole lot of risky interference in individual humans' beliefs and behaviors.
I've recently started reading posts and comments here on LessWrong and I've found it a great place to find accessible, productive, and often nuanced discussions of AI risks and their mitigation. One thing that's been on my mind is that seemingly everyone takes for granted that the world as it exists will eventually produce AI, particularly sooner than we have the necessary knowledge and tools to make sure it is friendly. Many seem to be convinced of the inevitability of this outcome, that we can do little to nothing to alter the course. Often-referenced contributors to this likelihood are current incentive structures: profit, power, and the nature of current economic competition.
I'm therefore curious why I see so little discussion on the possibility of changing these current incentive structures. Mitigating the profit motive in favor of incentive structures more aligned with human well-being seems to me an obvious first step. In other words, to maximize the chance for aligned AI, we must first make an aligned society. Do people not discuss this idea here because it is viewed as impossible? Undesirable? Ineffective? I'd love to hear what you think.