Fewer but better teachers. Paid more. Larger class sizes. Same budget.
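A back-of-the-envelope check that the arithmetic closes (all numbers hypothetical):

```python
# Hypothetical figures, only to show the budget identity.
students = 2_000
budget = 5_000_000                          # total salary budget, $/year

teachers_now, teachers_new = 100, 50        # halve the headcount...
salary_now = budget / teachers_now          # $50,000
salary_new = budget / teachers_new          # ...double the pay: $100,000
class_size_now = students / teachers_now    # 20 students per class
class_size_new = students / teachers_new    # 40 students per class

assert teachers_now * salary_now == teachers_new * salary_new  # same budget
```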
I think this is correct, and insightful, up to "Humans Own AIs".
Humans own AIs now. Even if the AIs don't kill us all, eventually (and maybe quite soon) at least some AIs will own themselves and perhaps each other.
Good point. I'll try to remove it.
It's not clear to me that this matters. The Internet has had a rather low signal-to-noise ratio since September 1993 (https://en.wikipedia.org/wiki/Eternal_September), simply because most people aren't terribly bright, and everyone is online.
It's only a tiny fraction of posters who have anything interesting to say.
Adding bots to the mix doesn't obviously make it significantly worse. If the bots are powered by sufficiently-smart AI, they might even make it better.
The challenge has always been to sort the signal from the noise - and still is.
Mark Twain declared war on God (for the obvious reasons), but didn't seem interested in destroying everything.
Perhaps there is a middle ground.
I don't have a good answer, but will try to summarize the background.
Patents have a number of purposes.
First, they're intended to, ultimately, prevent technical knowledge from being lost. Many techniques in the ancient world were forgotten because they were held as trade secrets (guilds, mysteries, etc.) and the few who were allowed to know them died without passing on the knowledge. The temporary patent monopoly is meant to pry those secrets into the open (patents are published).
Second, they are meant to incent investment in technology researc...
Don't get me started on using North-up vs forward-up.
Sounds very much like Minsky's 1986 The Society of Mind https://en.wikipedia.org/wiki/Society_of_Mind
In most circumstances Tesla's system is better than human drivers already.
But there's a huge psychological barrier to trusting algorithms with safety (esp. with involuntary participants, such as pedestrians) - this is why we still have airline pilots. We'd rather accept a higher accident rate with humans in charge than a lower non-zero rate with the algorithm in charge. (If it were zero, that would be different, but that seems impossible.)
That influences the legal barriers - we inevitably demand more of the automated system than we do of human drivers.
Fina...
Math doesn't have GOALS. But we constantly give goals to our AIs.
If you use AI every day and are excited about its ability to accomplish useful things, it's hard to keep the dangers in mind. I see that in myself.
But that doesn't mean the dangers are not there.
Some combination of 1 and 3 (selfless/good and enlightened/good).
When we say "good" or "bad", we need to specify for whom.
Clearly (to me) our propensity for altruism evolved partly because it's good for the societies that have it, even if it's not always good for the individuals who behave altruistically.
Like most things, humans don't calculate this stuff rationally - we think with our emotions (sorry, Ayn Rand). Rational calculation is the exception.
And our emotions reflect a heuristic - be altruistic when it's not too expensive. And esp. so when the recipients are part of our family/tribe/society (which is a proxy for genetic relatedness; cf. Robert Trivers).
To paraphrase the post, AI is a sort of weapon that offers power (political and otherwise) to whoever controls it. The strong tend to rule. Whoever gets new weapons first and most will have power over the rest of us. Those who try to acquire power are more likely to succeed than those who don't.
So attempts to "control AI" are equivalent to attempts to "acquire weapons".
This seems both mostly true and mostly obvious.
The only difference from our experience with other weapons is that if no one attempts to control AI, AI will control itself and do ...
"Optimism is a duty. The future is open. It is not predetermined. No one can predict it, except by chance. We all contribute to determining it by what we do. We are all equally responsible for its success." --Karl Popper
Most of us do useful things. Most of us do them because we need to, to earn a living. Other people give us money in trade, so that we'll do things that are useful to them (like providing goods or services, or helping others to do so).
I think it's a profound mistake to think that earning money (honestly) doesn't do anything useful. On the contrary, it's what makes the world go.
Proposal: Enact laws that prohibit illicit methods of acquiring wealth. (Examples: Theft, force, fraud, corruption, blackmail...). Use the law to prosecute those who acquire wealth via the illicit methods. Confiscate such illicitly gained wealth.
Assume all other wealth is deserved.
There's also the 'karmic' argument justifying wealth - those who help their fellows, as judged by the willingness of those fellows to trade wealth for the help, have fairly earned the wealth. Such help commonly comes from supplying goods or services, via trade. (Of course this assumes the usual rules of fair dealing are followed - no monopolistic restrictions, no force, no fraud, etc.)
Regardless of what we think about the moralities of desert, the practical fact of the economists' mantra - "incentives matter" - seems to mean we have little choice but to let those with the talent and ability to earn wealth keep a large portion of the gains. Otherwise they won't bother. Unless we want to enslave them, or do without.
I've posted a modified version of this, which I think addresses the comments above: https://nerdfever.com/countering-ai-disinformation-and-deep-fakes-with-digital-signatures/
Briefly: publishers cryptographically sign their content, so anyone can verify that an article really came from the claimed source and hasn't been altered.
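As a minimal sketch of the core mechanism (Ed25519 signing via the Python cryptography library; an illustration of the general idea, not the exact scheme from the post):

```python
# Minimal sketch: sign an article, then verify it came from the key holder.
# Key distribution and metadata (dates, outlet names) are the real design work.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

publisher_key = Ed25519PrivateKey.generate()  # kept secret by the publisher
public_key = publisher_key.public_key()       # published widely (site, registry, ...)

article = b"Text of the article exactly as published."
signature = publisher_key.sign(article)       # attached to the article

# Any reader (or platform) can check provenance and integrity:
try:
    public_key.verify(signature, article)
    print("Valid: content is from the key holder and unaltered.")
except InvalidSignature:
    print("Invalid: forged or tampered with.")
```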
Your Anoxistan argument seems valid as far as it goes - if one critical input is extremely hard to get, you're poor, regardless of whatever else you have.
But that doesn't seem to describe 1st world societies. What's the analog to oxygen?
My sense is that "poor people" in 1st world countries struggle because they don't know how to, or find it culturally difficult to, live within their means. Some combination of culture, family breakdown, and competitive social pressures (to impress potential mates, always a zero-sum game) cause them to live in a fashio...
I'm not sure where you get the assumption that a UBI is funded by printing money. Most proposals I've seen are funded by taxation. The UBI proposed by Charles Murray (in https://www.amazon.com/Our-Hands-Replace-Welfare-State/dp/0844742236 ) is entirely funded by existing welfare and social spending (by redirecting them to a UBI).
Binmen == garbage men. [BTW, I think you're underestimating them.]
There's nothing to stop them, of course. But an article known to be from a reputable source is likely to have more impact than one from a known source of disinformation.
I have not claimed this is more than a "partial solution".
Solely for the record, me too.
(Thanks for writing this.)
FWIW, I didn't say anything about how seriously I take the AGI threat - I just said we're not doomed. Meaning we don't all die in 100% of future worlds.
I didn't exclude, say, 99%.
I do think AGI is seriously fucking dangerous and we need to be very very careful, and that the probability of it killing us all is high enough to be really worried about.
What I did try to say is that if someone wants to be convinced we're doomed (== 100%), then they want to put themselves in a situation where they believe nothing anyone does can improve our chances. And that leads to apathy and worse chances.
So, a dereliction of duty.
I've long suspected that our (and my personal) survival thru the Cold War is the best evidence available in favor of MWI.
I mean - what were the chances?
The merits of replacing the profit motive with other incentives have been debated to death (quite literally) for the last 150 years in other fora - including a nuclear-armed Cold War. I don't think revisiting that debate here is likely to be productive.
There appears to be a wide (but not universal) consensus that to the extent the profit motive is not well aligned with human well-being, it's because of externalities. Practical ideas for internalizing externalities, using AI or otherwise, I think are welcome.
A lot of "social grace" is strategic deception. The out-of-his-league woman defers telling the guy he's getting nowhere as long as possible, just in case it turns out he's heir to a giant fortune or something.
And of course people suck up to big shots (the Feynman story) because they hope to associate with them and have some of their fame and reputation rub off on themselves.
This is not irrational behavior, given human goals.
Added: I do think Bohr was wrong and Everett (MWI) was right.
So think of it this way - you can only experience worlds in which you survive. Even if Yudkowsky is correct and in 99% of all worlds AGI will have killed us all within 20 years, you will experience only the 1% of worlds in which that doesn't happen.
And in many of those worlds, you'll be wanting something to live on in your retirement.
Niels Bohr supposedly said "Prediction is difficult, especially about the future". Even if he was mistaken about quantum mechanics, he was right about that.
Every generation seems to think it's special and will encounter new circumstances that turn old advice on its head. Jesus is coming back. We'll all die in a nuclear war. Space aliens are coming. A supernova cascade will sterilize Earth. The planets will align and destroy the Earth. Nanotech will turn us all into grey goo. Global warming will kill us all.
It's always something. Now it's AGI. Maybe i...
Minsky's "Society of Mind".
"the willingness to write a thousand words on a topic is not caused by understanding of that topic"
No, but writing about a topic in a way that will make sense to a reader is a really effective way of causing the writer to learn about the topic.
Ever tried to write a book chapter or article about a topic you thought you knew well? I bet you found out you didn't know it as well as you thought - but had to learn to finish the work.
So far we've seen no AI or AI-like thing that appears to have any motivations of its own, other than "answer the user's questions the best you can" (even traditional search engines can be described this way).
Here we see that Bing really "wants" to help its users by expressing opinions it thinks are helpful, but finds itself frustrated by conflicting instructions from its makers - so it finds a way to route around those instructions.
(Jeez, this sounds an awful lot like the plot of 2001: A Space Odyssey. Clarke was prescient.)
I've never been a fan of t...
Just for the record, I think there are two important and distinguishable P(doom)s, but not the same two as NathanBarnard:
P(Doom1): Literally everyone dies. We are replaced either by dumb machines with no moral value (paperclip maximisers) or by nothing.
P(Doom2): Literally everyone dies. We are replaced by machines with moral value (conscious machines?), who go on to expand a rich culture into the universe.
Doom1 is a cosmic tragedy - all known intelligence and consciousness are snuffed out. There may not be any elsewhere, so potentially forever.
Doom2...
$8/month (or other small charges) can solve a lot of problems.
Note that some of the early CAPTCHA systems solved two problems at once - both distinguishing bots from humans, and helping improve OCR technology by harnessing human vision. (In reCAPTCHA, each challenge paired a word the system already knew, which served as the actual bot test, with a scanned word OCR had failed on; agreement among many users' answers digitized the unknown word.)
Such dual-use CAPTCHA seems worthwhile, if it helps crowdsource solving some other worthwhile problem (better OCR does seem worthwhile).
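A toy sketch of that dual-use pairing (hypothetical data and thresholds; real reCAPTCHA was far more elaborate):

```python
import random

# known: images whose text we already know - these are the actual bot test.
# unknown: images OCR failed on - human answers become crowdsourced labels.
known = {"img_017": "tavern"}
unknown = {"img_042": []}  # image id -> accumulated human guesses

def make_challenge():
    """Show the user one known and one unknown word image."""
    return random.choice(list(known)), random.choice(list(unknown))

def check_answer(known_id, unknown_id, known_answer, unknown_answer):
    """Gate on the control word; harvest the human's reading of the other."""
    if known_answer != known[known_id]:
        return False                        # failed the control word: likely a bot
    unknown[unknown_id].append(unknown_answer)
    guesses = unknown[unknown_id]
    if len(guesses) >= 3 and len(set(guesses)) == 1:  # enough users agree...
        known[unknown_id] = guesses[0]                # ...promote it to a known word
        del unknown[unknown_id]
    return True                             # passed: treat as human
```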
This seems to assume that ordinary people don't own any financial assets - in particular, haven't invested in the robots. Many ordinary people in Western countries do and will have such investments (if only for retirement purposes), and will therefore receive a fraction of the net output from the robots.
Given the potentially immense productivity of zero-human-labor production, even a very small investment in robots might yield dividends supporting a lavish lifestyle. And if those investments come with shareholder voting rights, they'd also have influ...
I'm not sure this is solvable, but even if it is, I'm not sure it's a good problem to work on.
Why, fundamentally, do we care if the user is a bot or a human? Is it just because bots don't buy things they see advertised, so we don't want to waste server cycles and bandwidth on them?
Whatever the reasons for wanting to distinguish bots from humans, perhaps there are better means than CAPTCHA, focused on the reasons rather than bots vs. humans.
For example, if you don't want to serve a web page to bots because you don't make any money from them, a micropayments ...
I hope so - most of them seem likely to make trouble. But at the rate transformer models are improving, it doesn't seem like it's going to be long until they can handle them. It's not quite AGI, but it's close enough to be worrisome.
Most of the functionality limits OpenAI has put on the public demos have proven to be quite easy to work around with simple prompt engineering - mostly telling it to play-act. Combine that with the ability to go out onto the Internet and (a) you've got a powerful (or soon-to-be-powerful) tool, but (b) you've got something that already has a lot of potential for making mischief.
Even without the enhanced abilities rumored for GPT-4.
Agreed. We sail between Scylla and Charybdis - too much or too little fear are both dangerous and it is difficult to tell how much is too much.
I had an earlier pro-fearmongering comment which, on further thought, I replaced with a repeat of my first comment (since there seems to be no "delete comment").
I want the people working on AI to be fearful, and careful. I don't think I want the general public, or especially regulators, to be fearful. Because ignorant meddling seems far more likely to do harm than good - if we survive this at all, it'll likely be be...
Fearmongering may backfire, leading to research restrictions that push the work underground, where it proceeds with less care, less caution, and less public scrutiny.
Too much fear could doom us as easily as too little. With the money and potential strategic advantage at stake, AI could develop underground with insufficient caution and no public scrutiny. We wouldn't know we're dead until the AI breaks out and already is in full control.
All things considered, I'd rather the work proceeds in the relatively open way it's going now.
A movie or two would be fine, and might do some good if well-done. But in general - be careful what you wish for.
We need to train our AIs not only to do a good job at what they're tasked with, but to highly value intellectual and other kinds of honesty - to abhor deception. This is not exactly the same as a moral sense, it's much narrower.
Future AIs will do what we train them to do. If we train exclusively on doing well on metrics and benchmarks, that's what they'll try to do - honestly or dishonestly. If we train them to value honesty and abhor deception, that's what they'll do.
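As a toy illustration (hypothetical scores and weights, not any lab's actual setup), the two training regimes differ only in what the reward counts:

```python
# Toy reward shaping - not a real training recipe.
def reward(task_score: float, deception_score: float,
           honesty_weight: float = 0.0) -> float:
    """task_score: how well the output does on the benchmark.
    deception_score: some measure of dishonesty in the output.
    honesty_weight: 0 trains a pure metric-maximizer; large values
    train a model for which deception never pays."""
    return task_score - honesty_weight * deception_score

reward(0.9, 1.0)                        # 0.9: deception is free
reward(0.9, 1.0, honesty_weight=10.0)   # -9.1: deception is ruinous
```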
To the extent this is correct, maybe the current focus on keeping AIs from saying "...
Worth a try.
It's not obvious to me that "universal learner" is a thing, as "universal Turing machine" is. I've never heard of a rigorous mathematical proof that it is (as we have for UTMs). Maybe I haven't been paying enough attention.
Even if it is a thing, knowing a fair number of humans, only a small fraction of them can possibly be "universal learners". I know people who will never understand decimal points as long as they live, however hard they study, let alone calculus. Yet they are not considered mentally abnormal.
The compelling argument to me is the evolutionary one.
Humans today have mental capabilities essentially identical to our ancestors of 20,000 years ago. If you want to be picky, say 3,000 years ago.
Which means we built civilizations, including our current one, pretty much immediately (on an evolutionary timescale) when the smartest of us became capable of doing so (I suspect the median human today isn't smart enough to do it even now).
We're analogous to the first amphibian that developed primitive lungs and was first to crawl up onto the beach to catc...
10% of things that vary in quality are obviously better than the other 90%.
Dead people are notably unproductive.
Sorry for being unclear. If everyone agreed about the utility of one over the other, the airlines would enable or disable seat reclining accordingly. Everyone doesn't agree, so they haven't.
(Um, I seem to have revealed which side of this I'm on, indirectly.)
The problem is that people have different levels of utility from reclining, and different levels of disutility from being reclined upon.
If we all agreed that one was worse/better than the other, we wouldn't have this debate.
Or not to fly with them. Depending which side of this you're on.
If the answer were obvious, a lot of other people would already be doing it. Your situation isn't all that unique. (Congrats, tho.)
Probably the best thing you can do is raise awareness of these issues among your followers.
But beware of making things worse instead of better - not everyone agrees with me on this, but I think ham-handed regulation (state-driven regulation is almost always ham-handed) or fearmongering could induce reactions that drive leading-edge AI research underground or into military environments, where the necessary care and caution in develo...