A brief and pleasant exploration of how AI is shaping the world around us, with tips and suggestions for educators, workers, and students on using AI as a Coworker, Tutor, Coach, or Person. It is best approached as a business book, or as a book for those with little AI exposure.
Admittedly, given how much I enjoy Mollick's tweets about AI, I was hoping for a bit more (but it is a general audience business book, so I understand).
I appreciated the refrain that the AI model you're using now will be the worst one you ever use. Additionally, at the very end, he briefly mentions four possibilities for the future:
1. No AI growth
2. Slow AI growth
3. Exponential AI growth
4. Machine God (end of human dominance)
I think it is useful to describe the main futures we are likely to inhabit, so this was a welcome addition. That said, he very much does not focus on AI as a larger threat to humanity. That's fine in itself, since a book can't be about everything, but his reasons for avoiding the topic were not compelling. Paraphrasing, he said that larger existential concerns carry a lot of uncertainty, and that if there were a real issue, dwelling on it would be quite disempowering. It's that last part that is the problem. Discussing AI risk doesn't have to be disempowering; it could be galvanizing. We can actually get people engaged and more concerned about AI safety. Mollick seems to believe this to some extent himself, because he urges his audience to become more engaged on (less dramatic) AI issues lest their future be decided for them. The same companies and complications are involved (to a large extent, anyway) in both forms of advocacy, so I don't know why he codes them differently.
If someone wrote a book about air pollution and wildfires, that would be perfectly fine. But if they said they wouldn't discuss climate change because it would be disempowering, that would be an odd move.
Finally, I sympathize with the difficulties of narrating an audiobook and the inclination to have the author read their own work, but, sorry to say, the listening experience would have been better with another narrator.