There could be knock-on effects that increase demand for non-AI-generated analogues, thereby increasing harm.
How long will it take until high-fidelity, AI-generated porn becomes an effective substitute for person-generated porn?
Here are some important factors: Is it ethical? Is it legal? Does the output look genuine? Is it cost-effective?
Possible benefits:
Problems to look out for:
A really unpleasant case:
I'm not sure how relevant the slowdown in the decline of compute prices is to this chart, since the chart starts in 2018 and the slowdown began 6-8 years ago; likewise, AlexNet, the breakout moment for deep learning, was 9 years ago. So if compute price is the primary rate-limiter, I'd expect it to have a more gradual, consistent effect as models get bigger and bigger. The slowdown may mean that models cost quite a lot to train, but huge companies like Nvidia and Microsoft clearly haven't yet shied away from spending absurd amounts of money to keep growing their models.
I'd hesitate to make predictions based on the slowdown from GPT-3 to Megatron-Turing, for two reasons.
First, GPT-3 represents the fastest, largest increase in model size in this whole chart. If you only look at the models before GPT-3, the drawn trend line tracks well. Note how far off the trend GPT-3 itself is.
Second, GPT-3 was released almost exactly when COVID became a serious concern in the world beyond China. I have to imagine that this slowed down model development, but it should be less of a factor going forward.
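To put the first point in rough numbers, here's a minimal sketch of the sanity check I have in mind: fit a log-linear size trend only to models released before GPT-3, then see how far above that trend GPT-3 lands. The parameter counts and release dates below are approximate public figures I'm supplying myself, not values read off the chart.

```python
# Rough sketch: fit a log-linear parameter-count trend to models released
# before GPT-3, then check how far above that trend GPT-3 sits.
# Dates (decimal years) and parameter counts are approximate public figures
# that I'm supplying myself -- they are NOT values read off the chart.
import numpy as np

models = {
    "BERT-large":  (2018.8, 3.4e8),
    "GPT-2":       (2019.1, 1.5e9),
    "Megatron-LM": (2019.7, 8.3e9),
    "T5":          (2019.8, 1.1e10),
    "Turing-NLG":  (2020.1, 1.7e10),
    "GPT-3":       (2020.4, 1.75e11),
}

# Fit log10(parameters) vs. release date using only the pre-GPT-3 models.
pre = [(t, np.log10(p)) for name, (t, p) in models.items() if name != "GPT-3"]
t, logp = map(np.array, zip(*pre))
slope, intercept = np.polyfit(t, logp, 1)

# Compare GPT-3 against what that trend predicts for its release date.
t3, p3 = models["GPT-3"]
predicted = 10 ** (slope * t3 + intercept)
print(f"Pre-GPT-3 trend predicts ~{predicted:.1e} params in mid-2020; GPT-3 has {p3:.1e}")
print(f"GPT-3 sits roughly {p3 / predicted:.1f}x above the pre-GPT-3 trend")
```

Even with these rough stand-in numbers, GPT-3 comes out a few times larger than the pre-GPT-3 trend would predict, which is why I wouldn't anchor on the apparent slowdown right after it.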
On your question about Hitler getting eugenic ideas from the US—yes, there's some evidence that he did. Although I haven't read it yet, the book "Hitler's American Model: The United States and the Making of Nazi Race Law" looks like a readable introduction to this concept.
Yup, it's a problem. As an American I've had an optometrist not want to give me my prescription!
Indeed! It wasn't rare by any means. A great book about this is "Illiberal Reformers."
That's definitely fair, though it's plausible that some benefits of education do not depend solely on increases in income or social connections. For example, a meta-analysis by Ritchie et al. suggests that education may itself improve intelligence. I do agree, however, that more fine-grained (and more difficult to measure) metrics than "number of years of education" would help sharpen the argument.
Can we model almost all of the money choices in our lives as ethical offsetting problems?
Example 1: You do not give money to a homeless person on the street, or to a friend who's struggling financially and maybe doesn't show the best sense when it comes to money management. You give the money you save to a homeless shelter or to politicians promoting basic income or housing programs.
Example 2: You buy cheaper clothes from a company that probably treats its workers worse than other companies do. You give the money you save to an organization that promotes ethical global supply chains or gives cash directly to people in poverty.
(Note: In all these examples, you might choose to give the money to an organization that you believe produces a larger net positive than the direct offset organization would. So you might not give money to homeless people, and instead give it to the Against Malaria Foundation, etc. This is a modification of the offsetting problem that ignores questions of the fungibility of well-being among possible beneficiaries.)
The argument for: In the long term, you might promote systems that prevent these problems from happening in the first place.
The argument against: For Example 1, social cohesion. You might suck as a friend, get a reputation for sucking as a friend, and feel less safe in your community, knowing that if everyone acted the same way you wouldn't get support. For Example 2, the market mechanism might just work better: maybe you should vote directly with your money? It's fuzzy, though, since paying less to companies that already pay horribly might just drive pay down further. Some studies on this would be helpful.
Critical caveat: Are you actually shuttling the money you're saving by doing the thing that's probably negative into the thing that's more probably positive? It's very easy to do the bad thing, say you're going to do the good thing, and then forget to do the good thing or otherwise rationalize it away.