
A New York man used artificial intelligence to build bombs and planned to detonate them in Manhattan, shaking the city and igniting fierce debate over technology, criminal intent, and public safety.
At a Glance
- Michael Gann, 55, charged with plotting to detonate AI-assisted bombs in Manhattan
- Law enforcement credits rapid interagency action for preventing a potential catastrophe
- Case highlights growing concerns about criminals exploiting AI and online resources
- Incident spurs renewed calls for tech regulation and scrutiny of AI providers
AI, Bombs, and Manhattan: A Recipe for Disaster Thwarted
Federal prosecutors have charged Michael Gann, a 55-year-old from Inwood, Long Island, with manufacturing and attempting to use improvised explosive devices across Manhattan. Gann, who has a record of instability and periods of homelessness, used artificial intelligence and online searches to design and construct at least seven bombs. These devices, loaded with flash powder and chlorine, were found on rooftops and subway tracks; one was even tossed off the Williamsburg Bridge. The threat was chillingly real: this wasn’t a “what if” scenario, but a “what nearly happened” moment for New York City.
Law enforcement officials say a 55-year-old New York man reportedly used AI to help him build bombs that he planned to detonate in Manhattan. https://t.co/zFcr6ckbzq
— Breitbart News (@BreitbartNews) July 27, 2025
Law enforcement was tipped off earlier this year when Gann had chemicals and bomb-making materials shipped to a Nassau County address. Investigators discovered that he was leveraging online AI tools to fine-tune his bomb designs, effectively lowering the barrier to engineering lethal devices. The investigation culminated in Gann’s arrest in SoHo on June 5, 2025, as he carried another device. He now faces a laundry list of federal charges, including manufacturing explosives and unlawful possession of destructive devices, with a possible forty-year prison sentence hanging over his head.
Tech Gone Rogue: When AI Is a Weapon in the Wrong Hands
Let’s be clear: the internet has always been a double-edged sword, but this case marks a dangerous new frontier. Law enforcement officials from the FBI, NYPD, and the U.S. Attorney’s Office stress that Gann used AI-assisted searches, not just generic instructions, in his campaign to build and deploy bombs. This isn’t the hacker fantasy of old—it’s real, it’s happening now, and it’s proof that when technology is unchained from common sense and moral guardrails, it becomes a tool for chaos. The tech giants who churn out these AI platforms keep telling us they’re building a better world, but who’s stopping them from building a more dangerous one?
Federal and city officials lauded the speed and coordination that foiled Gann’s plot. FBI Assistant Director Christopher Raia and NYPD Commissioner Jessica Tisch both credited quick intelligence sharing for averting disaster. But behind the scenes, there’s growing unease. If one man, barely on the radar, can orchestrate a bomb plot with a laptop and a few online orders, what’s stopping the next copycat? The “see something, say something” mantra suddenly feels quaint when the threat is code, not a suspicious bag.
Who’s Accountable When AI Empowers Evil?
The aftermath of Gann’s arrest is already rippling through the tech industry, law enforcement, and political circles. Safety in Manhattan neighborhoods—especially around the Williamsburg Bridge and SoHo—has been shaken. Business disruptions and subway delays were the immediate price, but the longer-term costs are only starting to emerge. Residents now look over their shoulders, wondering what other threats are lurking in the code that runs our world. And once again, politicians are clamoring for tighter controls on AI and stricter oversight of online marketplaces selling dangerous chemicals. Conservatives, of course, have been warning about the unchecked power of Big Tech for years—now, even the most naive can see where this road leads.
The AI industry, predictably, is circling the wagons. Its defenders argue that overregulation will “stifle innovation,” but ask anyone in Manhattan whether they’d rather have a little less innovation or a few fewer bombs on their subway tracks. Law enforcement, meanwhile, is pushing for expanded surveillance and investigative powers, even as the usual suspects cry about “privacy” and “civil liberties.” When the right to code outweighs the right to safety, you know the pendulum has swung too far in the wrong direction.
From Crisis to Consequence: The Next Front in Domestic Security
Experts and law enforcement agree: this case is a wake-up call. The FBI, NYPD, and U.S. Attorney’s Office are now deep-diving into Gann’s online activity, trying to map out exactly how AI helped him skirt traditional barriers to bomb-making. The prosecution, led by U.S. Attorney Jay Clayton, is treating this as a national security priority, not just a local crime spree. And with every new detail revealed, the public’s patience for tech industry excuses is wearing dangerously thin.
This is more than just a crime story; it’s a warning shot. Technology in the wrong hands is a force multiplier for evil. Conservatives have always believed in the sanctity of law and order, the importance of vigilant policing, and the necessity of technology that serves—not sabotages—American values. As the dust settles in Manhattan, it’s clear that the next great battle for public safety won’t just be fought on the streets, but in the algorithms running quietly behind closed doors.