A bipartisan Senate bill aimed at “protecting kids” from AI could end up forcing government-style age checks on everyone while sidelining parents entirely.
Quick Take
- The GUARD Act would bar anyone under 18 from using covered AI chatbots and “companions,” even with parental permission.
- The proposal relies on mandatory age verification, raising privacy and data-collection concerns for adults as well as teens.
- Supporters argue AI can manipulate minors emotionally, echoing earlier fights over social media addiction and youth harms.
- Critics warn broad definitions could sweep in everyday AI tools used for schoolwork, search, and customer service.
What the GUARD Act would do—and why it’s moving now
Sen. Josh Hawley of Missouri and Sen. Richard Blumenthal of Connecticut introduced the Guidelines for User Age-Verification and Responsible Dialogue (GUARD) Act in late 2025 with Sens. Katie Britt, Mark Warner, and Chris Murphy. The measure would prohibit minors under 18 from accessing certain AI chatbots, particularly “companions,” and would require age verification for users. Hawley renewed his push in March 2026, signaling momentum and bipartisan interest ahead of a potential committee markup.
Hawley tied the urgency to rising youth usage and broader public backlash against tech platforms, citing a recent jury verdict he described as a wake-up call on social media addiction harms. That political context matters: lawmakers in both parties face pressure to “do something” about online risks to children, and AI chatbots have become the next target after years of scrutiny on social media. In Washington, child-safety messaging can quickly translate into sweeping rules that outlast the headline that created them.
Parents versus policymakers: a fundamental dispute over who decides
The core conservative concern is less about whether kids can be harmed online—many parents agree they can—and more about who makes the call. The GUARD Act, as framed by critics, defaults to a blanket ban for minors rather than a permission-based framework where parents opt in, set limits, and monitor usage. That structure effectively treats families as incapable of managing the issue themselves, shifting authority to federal rules that don’t vary by maturity, household values, or educational need.
That stands in direct contrast to a competing bipartisan proposal, the CHATBOT Act, which emphasizes family accounts, parental consent, and oversight tools. For many voters—right and left—this is the familiar problem of one-size-fits-all Washington governance: lawmakers respond to real harms with policies that assume compliance can only be achieved through centralized mandates. The tradeoff becomes obvious when the policy blocks legitimate, supervised use alongside the risky use it was designed to stop.
Privacy and age-verification: the surveillance question that won’t go away
Age verification sounds simple until it’s implemented at scale. Critics, including digital-rights advocates, argue that requiring platforms to confirm user age often leads to broader identity collection—exactly the kind of data hoarding Americans have learned to distrust. Even if a law’s intent is limited to protecting minors, the mechanism can pull adults into the net by pressuring services to demand IDs, biometrics, or third-party checks before allowing access to basic features.
From a limited-government perspective, the concern is that a child-safety label can become a justification for a permanent verification infrastructure: more data stored, more points of failure, more exposure to breaches, and more opportunities for mission creep. Supporters respond that enforcement requires certainty about age, and that voluntary parental controls may not be enough. What remains unsettled in the available reporting is exactly how narrowly the bill defines covered products and what guardrails would constrain verification methods.
Schools and everyday life: where broad definitions could create real friction
Education groups and observers have warned that the K-12 world is already struggling to adapt to AI tools, with districts and teachers navigating bans, partial allowances, and rapidly changing classroom expectations. A federal under-18 prohibition could create immediate confusion for schools that rely on AI-enabled tutoring, writing assistance, accessibility tools, or administrative chat functions. If the law’s definitions are broad, services not marketed as “companions” could still be affected in practice.
The practical risk is overblocking: companies may restrict access beyond what the law strictly requires to reduce liability, especially when penalties are unclear or severe. That can push teens toward less regulated corners of the internet or cut them off from mainstream tools used for homework and skill-building. At the same time, supporters point to testimony and advocacy claiming AI can simulate empathy, blur healthy boundaries, and manipulate vulnerable minors—concerns that deserve serious scrutiny even if the policy tool is debated.
What to watch as Congress weighs child safety against liberty
As the GUARD Act heads toward possible markup, the key questions are narrow but consequential: How does the bill define an AI chatbot or “companion”? Does it allow parental consent pathways, appeals, or supervised educational use? What age-verification methods would be permitted, and who would hold the data? Those details will determine whether this becomes a targeted child-protection measure or another Washington mandate that expands surveillance and reduces family choice.
GUARD Act Puts Policymakers, Not Parents, in Charge of Kids’ AI Use https://t.co/luezii98mh via @CatoInstitute
— Michael Chapman (@MWChapman) April 29, 2026
Politically, the bipartisan coalition behind AI restrictions shows how quickly “protect the kids” can unite rivals—sometimes at the expense of clear limits and constitutional instincts. Conservatives wary of the deep-state-style impulse toward monitoring will want firm guardrails and parental primacy. Liberals skeptical of corporate influence will want accountability without creating an ID regime. Either way, voters should demand specifics, because infrastructure built for minors rarely stays confined to minors for long.
Sources:
https://www.axios.com/2026/03/25/hawley-ai-chatbots-congress-guard-act