I have found it productive to ask people about what they want to happen, about what kind of future sounds good to them, instead of debating which side effect of the AI pill is the most annoying/deadly. I feel like we are presently more in need of targets than antitargets.
I asked about that here and got some interesting answers: https://www.lesswrong.com/posts/wxbRqep7SuCFwxqkv/what-exactly-did-that-great-ai-future-involve-again
There are ways to address the problem of cheating in school without nuking the datacenters.
For instance: If a student gets an LLM to write their homework essay or term paper, that's not really very different from having another student or an essay-writing service write it — and those are problems that schools and colleges faced before LLMs came along. In these cases, the student will not be able to discuss their work very effectively in class. So, structure the class as a seminar or workshop, in which students are expected to discuss their work. In math classes, have students discuss proofs and constructions, do work on whiteboards, etc.
If the class is structured as a homework-based password-guessing exercise, then the LLM cheaters win. But if the class is structured as an in-person discussion, the LLM cheaters lose.
There are ways to address all of these, but the key question is whether we will. I agree that education is easier to address than some of the other risks I mentioned, but in the 2.5y since ChatGPT came out I've seen very little progress in restructuring education to address these issues.
Perhaps one common theme in the objections you find more compelling is that existing systems of accountability are unprepared to effectively allocate responsibility for outcomes that are (at least partially) generated by AI?
The most concerning mundane risks that come to mind are unemployment, concentration of power, and adversarial forms of RL (I'm missing a better phrase here; basically what TikTok/Meta/the recent o4 model were already doing). The problems in education are partially downstream of that (what's the point if it's not going to help prepare you for work) and otherwise honestly don't seem too serious in absolute terms? Granted, the system may completely fail to adapt, but that seems more like an issue with the system already being broken than with AI in particular.
About twice as many Americans think AI is likely to have a negative effect as a positive one. At a high level I agree: we're talking about computers that are smart in ways similar to people, and quickly getting smarter. They're also faster and cheaper than people, and again getting more so.
There are a lot of ways this could go, and many of them are seriously bad. I'm personally most worried about AI removing the technical barriers that keep regular people from creating pandemics, removing human inefficiencies and moral objections that have historically made totalitarian surveillance and control difficult to maintain, and gradually being put in control of critical systems without effective safeguards that keep them aligned with our interests. I think these are some of the most important problems in the world today, and quit my job to work on one of them.
Despite these concerns, I'm temperamentally and culturally on the side of better technology, building things, and being confident in humanity's ability to adapt and to put new capabilities to beneficial use. When I see people pushing back against rapid deployment of AI, it's often with objections I think are minor compared to the potential benefits. Common objections I find unconvincing include:
Energy and water: the impact is often massively overstated, and we can build solar and desalination.
Reliability: people compare typical-case AI judgement to best-case human judgement, ignoring that humans often operate well below best-case performance.
Art: technological progress brought us to a world with more artists than ever before, and I'd predict an increase in human-hours devoted to art as barriers continue to lower.
Tasks: it's overall great when we're able to automate something, freeing up humans to work elsewhere. In my own field, a large fraction of what programmers were spending their time on in 1970 has been automated. Now, at companies that draw heavily on AI, the majority of what programmers were doing just 3-5 years ago has been automated as well. The role is shifting quickly to look a lot more like management.
I'm quite torn on how to respond when I see people making these objections. On the one hand, we agree on which direction we'd like to move a big "AI: faster or slower" lever, which puts us on the same side. Successful political movements generally require accepting compatriots with very different values. On the other hand, reflexively emphasizing the negative aspects of change in ways that keep people from building has been really harmful (housing, nuclear power, GMO deployment). This isn't an approach I feel good about supporting.
Other criticisms, however, are very reasonable. A few examples:
Employment: it's expensive to have employees, and companies are always looking to cut costs. Initially I expect AI to increase employment, the same way the railroad initially increased demand for horses: in some areas humans (or horses) excel, in others AI (or mechanized transport) does. Over time, however, and possibly pretty quickly, I expect humans to become economically marginal as their competition gets cheaper and more capable, just as horses did once trucking took over.
Scams: these have historically been limited by labor, both its cost and how many people were willing to take the job. AI loosens both constraints dramatically.
Education: cheating in school is another thing that has historically been limited by cost and ethics. But when the AI can do your homework better than you can, cheating is nearly inevitable. You'll be graded on a curve against classmates who are using the AI, your self-control is still developing, and teachers are mostly not adapting to the new reality. Learning suffers massively.
I'd love it if people thought hard about potential futures and where we should go with AI, and took both existential (pandemic generation) and everyday (unemployment) risks seriously. I'm very conflicted, though, on how much to push back on arguments where I agree with the bottom line while disagreeing with the specifics. For now I'm continuing to object when I see arguments that seem wrong, but I'm going to try to put more thought into emphasizing the ways we do agree and not being too adversarial.
Comment via: facebook, lesswrong, mastodon, bluesky, substack