Stumbled across a book in the new section of the library: "AI For Humanity," by Andeed Ma, James Ong (founder of the think tank AIII, which is also the sound I make when thinking about AI risk), and Siok Siok Tan. It's a mass-market-ish book about, well, AI for humanity, by a sampler of Singapore-centered technologists.
Chapter-by-chapter summary:
Different people have different opinions along the two axes of "Is AI impactful?" and "Is AI risky?"; we'll try to take all of them seriously. Geoff Hinton is the example given of someone talking about AI risk.
The nearest/most important 'AI Trap' is misinformation and general destruction of the epistemic environment.
Covers some sort of history of AI, with an emphasis on how vague the category 'AI' is and how functionally equivalent technology merely went "in camouflage" during putative AI Winters.
Meanders through roughly three "dilemmas": "Do we use AI at all?" "How to deal with the epistemic AI Trap?" and "How to align AI to human values [as yet unclear if this really means all human values or "fairness and transparency" sort of stuff] even though it's 'organic' [-ly adapting and improving]."
Four principles are supposed to deal with everything: Humans should be in control, AI should be contained, the human-AI relationship should be symbiotic, and humans should nurture the AI. A mixture of empty-feeling buzzword-based reasoning and non-empty discussion of historical chatbots ensues.
Starts talking about AGI, introduces the new buzzword "Sustainable AI" with three pillars: "governing AI to align with human values," "combining human desirability, technological feasibility, and business viability to weather AI boom-and-bust cycles," and "delivering on the greater good of sustainability and impact, such as the UN SDGs [United Nations sustainable development goals, e.g. ensuring clean water, affordable clean energy]." This section (pp. 131-134) has some actual hot takes:
"Governance" (unclear if this means governments or internal corporate policy) should mandate AI that's aligned with humans and AI that helps predict and manage threats from AI.
Just fuse with the AI lol.
There should be massive impact-based funding of public-private partnerships using AI for the benefit of humankind.
...before whipping back to corporate-speak to say IBM could have done better with its foray into healthcare AI by using their framework. Then, surprise again: they say the point is actually about AI governance. And also there's something in here about drone warfare.
The opening quote for Ch. 7 is from AI as a Positive and Negative Factor in Global Risk by Eliezer (this chapter also cites the post Ten Levels of AI Safety Difficulty by Sammy Martin), and the chapter is tangentially related; it's about their vision of AI governance:
IBM's problem (supposedly) was that people didn't like or trust their product, because IBM didn't cultivate the culture of alignment and transparency needed to produce AI products that are good for humanity. Another example given of bad AI is the Dutch welfare fraud detection system. Also, the UN SDGs somehow factor into the culture of alignment they have in mind.
Current risk frameworks, e.g. the EU AI Act, are too 'static'/category-based, rather than evaluated dynamically/inside-view/responsively. They issue a call for governments (probably targeted at the Singaporean government) to move forward, and also say using AI to oversee AI is key for their vision of regulation.
Time to be futurists!
They're really excited about a vision of the future they call "Human-AI Symbiotic Intelligence" (HASI). Humans and AI nurturing each other rings all their aesthetic bells.
They're also excited about using neural nets as components in a classical framework, as a way to build AI that they think will lead toward their desired future.
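To make that concrete, here's a toy sketch of the kind of architecture I take them to mean (my illustration, not the book's; every name and threshold here is made up): a learned black-box scorer embedded as one component inside explicit, auditable classical logic.

```python
# Toy illustration (mine, not the book's): a neural net boxed into a narrow
# scoring role inside a classical, inspectable decision pipeline.
import torch
import torch.nn as nn

class ToxicityScorer(nn.Module):
    """Hypothetical neural component: maps text features to a score in [0, 1]."""
    def __init__(self, n_features: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 16), nn.ReLU(),
            nn.Linear(16, 1), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def moderate(features: torch.Tensor, scorer: ToxicityScorer) -> str:
    """Classical outer logic: explicit, human-set rules decide the action;
    the neural net only supplies one signal."""
    score = scorer(features).item()
    if score > 0.9:
        return "block"                     # hard rule, auditable threshold
    if score > 0.5:
        return "escalate to human review"  # humans stay in control
    return "allow"

scorer = ToxicityScorer()  # untrained stand-in, for illustration only
print(moderate(torch.rand(8), scorer))
```

The appeal, as I read them, is that the parts you most need to trust and audit stay classical, while the neural net is confined to a role where its opacity matters less.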
People are too focused on improving black-box AI, even though that won't lead to the authors' desired future; they should instead be building AI that people can trust, guided by the authors' SDG-infused good-AI ethos.
It would be really, really nice if we could somehow find the money and the incentive structure to get AI companies to 'treat humanity as their customer'. This is kind of analogous to greenhouse gas emissions reduction, where governments, nonprofits, and corporations all have to work together to change the incentive landscape of an entire industry.
Also, AI might take over our governments or manipulate us into nuking ourselves. So support the Artificial Intelligence International Institute (AIII) (affiliated with author James Ong)!
Also, support our partners WAIC (China's World AI Conference) and SWITCH (Singapore Week of Innovation and TeCHnology). Cue a list of events that (much like the book) amalgamate talks on how your business can navigate AI with talks on ethics and governance standards for building ethical and 'Sustainable' AI.
Also kudos (they're giving themselves kudos in the book, but sure, I'll also give kudos) to author Andeed Ma, chair of the Risk and Insurance Management Association of Singapore (RIMAS), who wants to take AI risk seriously from an insurance perspective.
And kudos to OpenAI, who at the time of writing (early November 2023 or so?) had recently committed 20% of their compute to building trustworthy AI.
Anyhow, their three-step plan is: arrange international summits, focus AI development on transparent / scalably-supervisable / trustworthy / symbiotic AI, and change the funding model of the entire cutting-edge-AI industry to be more humanity-centric.
Their call to action in a grey box at the end: Think about AI's impact, join an AI community, and spend an hour a week championing AI for good.
On the negative side, I noticed quite a few quality-control oversights throughout the book, some maybe due to having three authors, and some maybe due to overuse of ChatGPT-written text.
The tone see-saws between talking to corporations about how to secure their long-term profits and talking to the public or government about how not to get pwned by AI. I don't think this is just an artifact of the multiple authors (though that's perhaps some of it); I think it mostly reflects a combination of their actual thoughts and what's rhetorically convenient for getting a diverse group of people on board.
Overall? Kind of based, certainly way more than I guessed when I grabbed a new library book based on nothing but its title. It's true the book is laser-targeted at the demographic "businessperson who wants to learn about AI for good through the medium of buzzwords," which counts against it in my opinion, but the underlying people and ideas seem like good seeds, latent members of a still-forming global coalition.