Press Archives - AI Now Institute
What Are the Implications if the AI Boom Turns to Bust? This episode considers whether today’s massive AI investment boom reflects real economic fundamentals or an unsustainable bubble, and how a potential crash could reshape AI policy, public sentiment, and narratives about the future that are embraced and advanced not only by Silicon Valley billionaires, but also by politicians and governments. Tech Policy Press Jan 13, 2026 Atoms for Algorithms:’ The Trump Administration’s Top Nuclear Scientists Think AI Can Replace Humans in Power Plants 404 Media Dec 4, 2025 Power Companies Are Using AI To Build Nuclear Power Plants Both Guerra and Khlaaf are proponents of nuclear energy, but worry that the proliferation of LLMs, the fast tracking of nuclear licenses, and the AI-driven push to build more plants is dangerous. “Nuclear energy is safe. It is safe, as we use it. But it’s safe because we make it safe and it’s safe because we spend a lot of time doing the licensing and we spend a lot of time learning from the things that go wrong and understanding where it went wrong and we try to address it next time,” Guerra said. 404 Media Nov 14, 2025 You May Already Be Bailing Out the AI Business Is an artificial-intelligence bubble about to pop? The question of whether we’re in for a replay of the 2008 housing collapse—complete with bailouts at taxpayers’ expense—has saturated the news cycle. For every day that passes without disaster, AI companies can more persuasively insist that no such market correction is coming. But the federal government is already bailing out the AI industry with regulatory changes and public funds that will protect companies in the event of a private sector pullback. The Wall Street Journal Nov 13, 2025 The fusing of AI firms and the state is leading to a dangerous concentration of power With all of this in mind, Hard Reset spoke with researcher Sarah West, the co-executive director of a think tank advocating for an AI that benefits the public interest, not just a select few. We discuss this consolidation of power among a few AI players—and how the government is actually hindering the development of healthier competition and consumer-friendly AI products, while flirting with financial disaster. Hard Reset Oct 31, 2025 The Destruction in Gaza Is What the Future of AI Warfare Looks Like “AI systems, and generative AI models in particular, are notoriously flawed with high error rates for any application that requires precision, accuracy, and safety-criticality,” Dr. Heidy Khlaaf, chief AI scientist at the AI Now Institute, told Gizmodo. “AI outputs are not facts; they’re predictions. The stakes are higher in the case of military activity, as you’re now dealing with lethal targeting that impacts the life and death of individuals.” Gizmodo Oct 31, 2025 ChatGPT safety systems can be bypassed to get weapons instructions “That OpenAI’s guardrails are so easily tricked illustrates why it’s particularly important to have robust pre-deployment testing of AI models before they cause substantial harm to the public,” said Sarah Meyers West, a co-executive director at AI Now, a nonprofit group that advocates for responsible and ethical AI usage. NBC News Oct 31, 2025 The Rise and Fall of Nvidia’s Geopolitical Strategy China’s Cyberspace Administration last month banned companies from purchasing Nvidia’s H20 chips, much to the chagrin of its CEO Jensen Huang. This followed a train wreck of events that unfolded over the summer. Tech Policy Press Oct 31, 2025 Anthropic Has a Plan to Keep Its AI From Building a Nuclear Weapon. Will It Work? For Heidy Khlaaf, the chief AI scientist at the AI Now Institute with a background in nuclear safety, Anthropic’s promise that Claude won’t help someone build a nuke is both a magic trick and security theater. She says that a large language model like Claude is only as good as its training data. And if Claude never had access to nuclear secrets to begin with, then the classifier is moot. Oct 20, 2025 How AI safety took a backseat to military money AI firms are now working with weapons makers and the military. Safety expert Heidy Khlaaf breaks down what that means. The Verge Sep 25, 2025 ASML-Mistral AI: It’s the Geopolitics, Stupid While subsidies and an EU Chips Act have failed to move the needle, this deal is a blueprint for something better: It plays to Europe’s existing strengths, shows there are alternatives to what AI researcher Leevi Saari calls the “voracious pressures” of US venture capital and strengthens EU suppliers. Bloomberg Sep 11, 2025 Decision in US vs. Google Gets it Wrong on Generative AI Gesturing towards the importance of generative AI in the search engine market then dismissing its actual effects is a dangerous precedent. It is true that tech markets are being shaped by generative AI. But in this case the court failed to accurately examine the broader AI market and the effects of consolidated power. Tech Policy Press Sep 11, 2025 AI is costing jobs, but not always the way you think Demand for AI is strong, but there’s no guarantee this gamble will pay off according to Sarah Myers West at the AI Now Institute. Marketplace Sep 9, 2025 She’s investigating the safety and security of AI weapons systems. Besides being error-prone, there’s another big problem with large language models in these kinds of situations: They’re vulnerable to being compromised, which could allow adversaries to hijack systems and impact military decisions.
Despite these known issues, militaries all over the world are increasingly using AI—an alarming reality that now drives the work of pioneering AI safety researcher Heidy Khlaaf. MIT Technology Review Sep 8, 2025 What’s the real cost of chasing AGI? Power consolidation is just the start, says the AI Now Institute. On today’s episode of Equity, Rebecca Bellan caught up with Amba Kak and Dr. Sarah Myers West from the AI Now Institute, a think tank focused on the social implications of AI and the consolidation of power in the tech industry. Their recent report, dubbed Artificial Power, lays out the political economy driving today’s AI frenzy and what’s at stake for everyone else. TechCrunch Sep 3, 2025 Is the AI Bubble Too Big to Fail? “We’re now locked into a particular version of the market and the future where all roads lead to big tech,” says Amba Kak, co-executive director of the AI Now Institute, which studies AI development and policy. Indeed, the success of major stock indexes—and perhaps your 401(k)—is resting on the continued growth of AI: Meta, Amazon, and the chipmakers Nvidia and Broadcom have accounted for 60 percent of the S&P 500’s returns this year. Inc. Aug 28, 2025 Did Sam Altman Accidentally Admit That the AI Bubble Is Here? Did Sam Altman Accidentally Admit That the AI Bubble Is Here?
The economics seem fuzzy at best, but the largest AI companies are unlikely to take a financial hit because of U.S. government subsidies, says Amba Kak, co-executive director of the AI Now Institute, a policy organization. Inc. Aug 12, 2025 Experts worry about transparency, unforeseen risks as DOD forges ahead with new frontier AI projects “We’ve particularly warned before that commercial models pose a much more significant safety and security threat than military purpose-built models, and instead this announcement has disregarded these known risks and boasts about commercial use as an accelerator for AI, which is indicative of how these systems have clearly not been appropriately assessed,” Khlaaf explained. DefenseScoop Aug 4, 2025 1 / 15
Executive Summary
The article discusses the potential implications of an AI investment boom turning to bust, and how this could reshape AI policy, public sentiment, and narratives about the future. Experts express concerns about the proliferation of large language models, the fast-tracking of nuclear licenses, and the AI-driven push to build more nuclear power plants, citing safety risks and the potential for a concentration of power. The article also highlights the federal government's role in bailing out the AI industry and the need for robust pre-deployment testing of AI systems.
Key Points
- ▸ The AI investment boom may be unsustainable and could lead to a crash
- ▸ The proliferation of large language models and AI-driven nuclear power plant construction poses safety risks
- ▸ The federal government is bailing out the AI industry with regulatory changes and public funds
Merits
Timely Discussion
The article raises important questions about the sustainability of the AI investment boom and its potential implications
Demerits
Lack of Concrete Solutions
The article primarily focuses on highlighting the problems and risks associated with the AI boom, without providing concrete solutions or recommendations
Expert Commentary
The article raises crucial questions about the long-term sustainability of the AI investment boom and its potential implications for public safety, economic stability, and social welfare. As the AI industry continues to grow and evolve, it is essential to prioritize robust regulation, rigorous testing, and transparency to mitigate the risks associated with AI development and deployment. Furthermore, policymakers must consider the potential consequences of an AI investment boom crash and develop strategies to mitigate its impact on the economy and society.
Recommendations
- ✓ Policymakers should establish clear regulations and guidelines for AI development and deployment
- ✓ The AI industry should prioritize robust pre-deployment testing and transparency to ensure public safety and trust