Health New Media Res > Volume 9(1); 2025 > Article
Kim: AI containment problem: it’s time to act before it’s too late

Abstract

The Coming Wave: AI, Power, and Our Future is Suleyman’s effort to start a broader conversation with people who remain indifferent to the risks of the technologies now arriving in the coming wave. These technologies are double-edged: Suleyman contends that the two core technologies, AI and synthetic biology, could dramatically improve our lives, yet the same capabilities could lead to catastrophic outcomes. Without a robust containment plan, their characteristics make such devastating scenarios increasingly probable. In non-technical terms, the book walks us through the history of general-purpose technologies, explains how the coming wave differs from earlier technologies, and shows why it is even more difficult to contain. At the end, the author proposes a multi-layered plan consisting of ten steps toward containment.

The Coming Wave: AI, Power, and Our Future by Mustafa Suleyman with Michael Bhaskar was published two years ago. Introducing it now may seem belated, since two years can be long enough for content to become outdated, especially given the rapid progress of artificial intelligence (AI). Nevertheless, the book is still well worth reading. Many of its arguments and suggestions remain valid; what is troubling is that they have not been taken seriously or widely enough. For those who have been closely following AI development, the arguments in the book may feel familiar. For educators or team leaders in the field, the book will help them refresh and organize their thoughts. If you are new to the field and looking for a foundational understanding, it can be a great starting point.
There are two opposing schools of thought regarding AI technology and its impact. The techno-optimist faction, on one side, focuses on the prosperity and material abundance that AI can deliver. On the other side, the AI alarmist faction focuses on the drawbacks and existential threats posed by the technology and places corresponding weight on prevention and caution. This book provides a well-balanced overview of the dilemmas posed by AI technology and outlines proposals for containing the coming wave of technology.
The author shares the techno-optimistic perspective, claiming that “the technologies of the wave will make life easier, healthier, more productive, and more enjoyable for billions” (p. 140). At the same time, he acknowledges the potential catastrophic risks of new technologies. He contends that suppressing new technologies out of fear is not a viable option, as history shows that waves are nearly impossible to stop in the long term. He is also skeptical of techno-solutionism, which assumes that while technology may create problems, newer technologies will eventually solve them. Instead, the book suggests we must explore how to address these risks collectively through what Suleyman calls a containment plan. He defines containment as a comprehensive approach for controlling, limiting, and, if needed, shutting down wave technologies at any point from development to implementation. Thus, his containment plan includes “regulation, better technical safety, new governance and ownership models, and new modes of accountability and transparency, all as necessary (but not sufficient) precursors to safer technology” (p. 37).

Irresistible technologies that cannot be unlearned: General-purpose technology

In Part I of the book, Suleyman argues that proliferation is inherent to technology’s nature. He illustrates this by laying out examples of past general-purpose technologies, such as fire, the wheel, the combustion engine, the printing press, and electricity, which profoundly influenced human life and history. These examples show a consistent pattern: once introduced, the technologies become nearly impossible to resist. He explains the underlying mechanism. When a technology provides clear utility, it generates demand. As people see others gaining advantages from the new technology, they quickly embrace and copy it. Then competition drives improvement and reduces costs, accelerating its expansion. Inventions cannot be uninvented. Knowledge cannot be unlearned. Once antibiotics were discovered, for example, no society could abandon such life-saving medicine. Likewise, ChatGPT cannot be uninvented once people feel empowered by its utility. As Suleyman acknowledges, diffusion is not easy, but “The seeming inevitability of waves comes not from the absence of resistance but from demand overwhelming it” (p. 41).
Despite their benefits, these innovations can bring risks and unintended harms. Antibiotics, which have saved countless lives, have also given rise to drug-resistant bacteria, presenting new threats. Technologies often have unforeseen applications and produce unexpected consequences as individual users find different ways to use them. For example, Nobel’s explosives were originally intended for mining and construction but became weapons of war. The future path of a technology is not determined by its inventors; rather, it depends on how users adopt and apply it. Users’ choices can change the trajectory of a technology and make it susceptible to misuse by bad actors. As technological capabilities grow, so does the scale of potential damage: the more powerful the tool, the greater the consequences of its misuse. Drawing on examples like Aum Shinrikyo’s chemical attack in 1995 and deepfakes in India’s 2020 election, Suleyman claims that AI and synthetic biology can be weaponized by bad actors, with potentially devastating consequences. Given the unprecedented speed of AI’s proliferation, however, he warns that we may not have sufficient time to understand, regulate, or contain its negative consequences before they become irreversible. This is the crux of the issue and the core of our concern.
Another virtue of this book is that the author clearly explains complex concepts, such as how AI technologies work and the entire process of synthetic biology, in accessible, non-technical terms without compromising the gist. For instance, he explains backpropagation, the foundational technique behind deep learning, as a technique that “adjusts the weights to improve the neural network; when an error is spotted, adjustments propagate back through the network to help correct it” (p. 59). He also explains the attention mechanism of large language models in everyday terms that general readers can understand.
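For readers who want to see the mechanism Suleyman describes in concrete form, the following is a minimal illustrative sketch (not from the book): a tiny two-layer network learning XOR, in which the output error is propagated backward and each weight is nudged to reduce it. The network size, learning rate, and iteration count are arbitrary choices for demonstration.

```python
import numpy as np

# Toy problem: learn XOR with a 2-4-1 sigmoid network.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5

for _ in range(5000):
    # Forward pass: compute the network's current predictions.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: the error at the output propagates back through
    # the network, and each weight is adjusted to help correct it.
    d_out = (out - y) * out * (1 - out)          # error signal at output
    d_h = (d_out @ W2.T) * h * (1 - h)           # error signal at hidden layer
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);  b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))  # predictions after training
```

After enough iterations the predictions typically move toward the targets [0, 1, 1, 0], which is the “adjustments propagate back through the network” idea in Suleyman’s plain-language description, reduced to a few lines of arithmetic.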
Along with AI, Suleyman identifies synthetic biology as the other key technology in the coming wave. He asserts that vast possibilities for DNA creation and manipulation have opened up thanks to the successful Human Genome Project. As the cost of human genome sequencing falls and gene-editing techniques like CRISPR advance, the cycle of designing, building, testing, and iterating can now happen at an incredibly fast pace. Furthermore, the protein-folding problem has been advanced with the aid of Google DeepMind’s AlphaFold. AlphaFold 2, powered by deep learning, achieved a breakthrough in protein-structure prediction in 2020, and by 2022 its predictions covered more than 200 million protein structures, work that could otherwise have taken centuries. As seen in COVID-19 vaccine development, knowing the accurate structure of a protein can guide the design of improved vaccine immunogens and antigens. Generative AIs, trained on biochemical data, can suggest new molecules, proteins, and DNA/RNA sequences that scientists can later verify in the lab. Indeed, the sheer complexity of biology requires researchers to deal with vast amounts of data that exceed the capacity of conventional techniques. Thus, AI-powered tools have become indispensable in the field.
The convergence of AI and the biological sciences offers enormous benefits. Since AlphaFold 2 became publicly available in 2022, it has helped address “questions from antibiotic resistance to the treatment of rare diseases to the origins of life itself” (p. 90). The author argues that while nature evolves slowly over generations, this bio-revolution allows humans to design biological systems that are self-replicating, self-healing, and capable of evolving. In addition, synthetic biology techniques can tackle long-sought goals ranging from personalized medicine tailored to each individual’s DNA to “viruses that produce batteries, proteins that purify dirty water, organs grown in vats, algae that draw down carbon from the atmosphere, plants that consume toxic waste” (pp. 84-85). However, these capabilities also pose substantial risks, from synthetic viruses to biological weapons, as well as serious ethical concerns. Just as AI can be misused by bad actors, synthetic biology can be abused by rogue scientists. In particular, the case of Lulu and Nana, the first gene-edited twins born in 2018, caused international outrage among scientists and prompted calls for a moratorium on such genetic engineering.

Why is the containment of wave technologies so difficult?

Suleyman goes on to explain what makes the containment of the coming wave so difficult. The book identifies four characteristics that set these technologies apart from others. First, they are omni-use, meaning they can be applied across a wide range of sectors—from healthcare and education to warfare and finance. The more potential use cases, the more difficult they are to contain. Second, they hyper-evolve, often faster than societies can understand and adapt. For instance, generative AI tools can now produce highly realistic deepfakes, yet laws and regulations have lagged, resulting in a lack of adequate safeguards against fraud and misinformation. Third, they create asymmetric impacts, where an individual actor can produce massive effects on society and global stability. For example, one pathogenic experiment could trigger a pandemic, turning a microscopic manipulation into a massive catastrophe. Just like AI, synthetic biology technologies such as DNA synthesizers have become simple and cheap. This means someone with self-taught biology knowledge can engineer synthetic pathogens using equipment purchased online in their garage, following step-by-step instructions from generative AI tools. Fourth, the new technologies are becoming more autonomous, interacting with their surroundings and taking actions independently without direct human intervention and oversight. The autonomous nature of AI may be a key factor behind growing concerns about AI technologies because it raises uncertainty about how they will behave and whether humans will be able to control them.
One of the challenges in containing the coming wave of technologies is convincing people that its risks are imminent and dangerous. The risks from AI and synthetic biology can be perceived as uncertain, temporally distant, abstract, and personally irrelevant. Such perceptions may leave techno-optimists indifferent to containment efforts, which is precisely what motivated this book. Suleyman cautions against the pessimism-aversion trap, which refers to “the misguided analysis that arises when people are overwhelmed by a fear of confronting potentially dark realities, and the resulting tendency to look the other way” (p. 13), rather than addressing the risks directly. To harness the full potential of new technologies, we must proactively contain their risks and establish safeguards against unforeseen misuse.
The collective action problem presents another significant barrier to containment. Even if we succeed in creating workable containment plans through enormous effort, a single self-interested actor can undermine any pact or regulation. To effectively contain these technologies, all stakeholders, from individual scientists and businesses to nation-states, must reach a consensus and act together. We have already seen the failure to implement a temporary moratorium on AI research and development in 2023, even though many prominent AI scientists and entrepreneurs signed the call for such a pause. This collective action problem becomes even more pronounced among the nation-states leading the AI race.

Containment action plan

Toward the end of the book, the author proposes ten measures for constraining AI technology, including technical safety measures, audits, choke points to buy time, responsible developers, business self-regulation, governmental regulation, international alliances, cultural norms, and public movements. No single measure is sufficient on its own; rather, their combined application can effectively contain the potential threats posed by AI technology. Suleyman also advocates a three-pillar containment approach, delay, detect, and defend, proposed by biotechnologist Kevin Esvelt. According to Esvelt (2022), advances in biotechnology, particularly gene synthesis, are democratizing the means to create new pathogens. Over the next decade, thousands of skilled individuals could possess the knowledge and tools to engineer pandemic-class viruses, threats comparable in lethality to nuclear weapons. To address these risks, we must slow the development and misuse of pandemic-class agents by restricting access to critical information and materials (Delay). We must also build reliable early-warning systems, such as metagenomic surveillance, for real-time detection of biological threats (Detect). Lastly, we must increase societal resilience by building effective defense systems, including developing broad-spectrum vaccines, safeguarding essential workers, and maintaining vital services, such as food, water, power, and law enforcement, during biological emergencies (Defend).

Conclusion: Limitations and riding the wave

The dilemma posed by the coming wave of technologies makes us oscillate between the opposite ends of the spectrum in the AI discourse. This book provides a moral and normative frame of reference for readers who do not know what to make of the technology’s risks or how to deal with the dilemma. AI is neither inherently good nor evil; it depends on how we use it. While Suleyman’s warning approach has merit and we should not underestimate the scale of the coming wave, it may lean toward technological determinism by understating how human agency can shape technological development through deliberate choices. Furthermore, although the author mentions ‘people power’ as one of the ten key areas, the book primarily focuses on top-down containment strategies, emphasizing what engineers, developers, and governance structures ought to implement. This macro-level approach is undoubtedly necessary, but the discussion tends to overlook the role of individual agency. There is little guidance on what each user can do, which may leave readers feeling informed but not empowered to act as responsible agents in the containment effort. Individual users can significantly influence how AI evolves by providing feedback, flagging issues, or creating novel usage patterns and applications that developers may not anticipate. By keeping in mind that human agency can make meaningful differences, we can ride the wave and steer toward where we want to go rather than being swept away by it.
Grounded in his background as a tech entrepreneur, Suleyman offers an encyclopedic view of how AI could develop and affect our lives based on the historical evolution of significant general-purpose technologies. He tackles various domains of AI impact, from deepfakes and misinformation to political power shifts. It should be noted that this generalist approach may result in oversimplification and limited depth when discussing specialized areas, such as media, international relations, or ethics. Despite these limitations, this book remains a valuable read for those who seek a big-picture understanding of where AI will take us.

Reference

Esvelt, K. M. (2022). Delay, detect, defend: Preparing for a future in which thousands can release new pandemics. GCSP Geneva Papers.



Copyright © 2025 by Health & New Media Research Institute.
