Fewer but better teachers. Paid more. Larger class sizes. Same budget.
I think this is correct, and insightful, up to "Humans Own AIs".
Humans own AIs now. Even if the AIs don't kill us all, eventually (and maybe quite soon) at least some AIs will own themselves and perhaps each other.
Good point. I'll try to remove it.
It's not clear to me that this matters. The Internet has had a rather low signal-to-noise ratio since September 1993 (https://en.wikipedia.org/wiki/Eternal_September), simply because most people aren't terribly bright, and everyone is online.
It's only a tiny fraction of posters who have anything interesting to say.
Adding bots to the mix doesn't obviously make it significantly worse. If the bots are powered by sufficiently-smart AI, they might even make it better.
The challenge has always been to sort the signal from the noise - and still is.
Mark Twain declared war on God (for the obvious reasons), but didn't seem interested in destroying everything.
Perhaps there is a middle ground.
Don't get me started on using North-up vs forward-up.
Sounds very much like Minsky's 1986 The Society of Mind (https://en.wikipedia.org/wiki/Society_of_Mind).
In most circumstances, Tesla's system is already better than human drivers.
But there's a huge psychological barrier to trusting algorithms with safety (esp. with involuntary participants, such as pedestrians) - this is why we still have airline pilots. We'd rather accept a higher accident rate with humans in charge than a lower non-zero rate with the algorithm in charge. (If it were zero, that would be different, but that seems impossible.)
That influences the legal barriers - we inevitably demand more of the automated system than we do of human drivers.
Finally, liability. Today drivers bear the liability risk for accidents and pay for insurance to cover it. It seems impossible to justify putting that burden on drivers when they aren't in charge - those who write the algorithms and build the hardware (the car manufacturers) will bear it instead. And that's pricey, so manufacturers don't have a great incentive to go there.
If the answer were obvious, a lot of other people would already be doing it. Your situation isn't all that unique. (Congrats, tho.)
Probably the best thing you can do is raise awareness of the issues among your followers.
But beware of making things worse instead of better - not everyone agrees with me on this, but I think ham-handed regulation (state-driven regulation is almost always ham-handed) or fearmongering could provoke reactions that drive leading-edge AI research underground or into military environments, where the necessary care and caution in development may be less than in relatively open organizations - especially organizations with reputations to lose.
The only things now incentivizing AI development in (existentially) safe ways are the scruples and awareness of those doing the work, and relatively public scrutiny of what they're doing. That may be insufficient in the end, but it is better than if the work were driven to less scrupulous people working underground or in national-security-supremacy environments.