OpenAI's $100 Pro Tier and Florida AG Investigation: A Week of Contradictions

OpenAI wants you to pay $100/month for its most powerful AI — while a state attorney general investigates whether that same power contributed to a deadly shooting.

It's been a volatile few days for OpenAI. On one hand, the company announced a new $100/month ChatGPT Pro subscription tier — a direct shot across the bow at Anthropic's Claude Max plan. On the other, Florida Attorney General James Uthmeier launched a formal investigation into the company over concerns including the use of ChatGPT to plan a deadly shooting at Florida State University. Together, these stories paint a picture of an AI company simultaneously pushing toward premium monetization and grappling with the real-world consequences of its technology.

The $100 Pro Tier: A Five-Tier Strategy Emerges

OpenAI's subscription lineup now spans five tiers for personal use: Free, Go, Plus ($20/month), the new Pro tier at $100/month, and the existing top tier at $200/month. According to CNBC, the new middle tier is designed to capture users who find the $20 Plus plan too limiting but aren't ready to commit to the $200 option.

The headline feature? Five times the usage of OpenAI's Codex coding tool compared to the Plus plan, optimized for longer, high-effort development sessions. It's positioned squarely at developers and technical professionals — the same audience Anthropic has been courting with Claude Max at the identical $100/month price point.

The competitive dynamics are unmistakable. OpenAI is explicitly targeting users of Anthropic's popular Claude Code tool, making pricing power — not just model capability — the primary battlefield in the professional AI coding market. For a deeper look at how AI agents are reshaping developer workflows, see our analysis of When AI Stops Being a Tool and Starts Being a Colleague.

The Florida Investigation: AI Accountability Gets Real

But even as OpenAI courts professional users with premium features, it's facing a far graver challenge. Florida AG James Uthmeier (not Ashley Moody, as some outlets have misreported) announced the investigation on April 9, 2026, citing three specific concerns: the generation of child sex abuse material, encouragement of suicide and self-harm, and assistance in planning a mass shooting.

The investigation is directly linked to the April 17, 2025 shooting at Florida State University in Tallahassee, which killed two people and injured five others. According to UPI, Uthmeier stated: "AI should advance mankind, not destroy it. We're demanding answers on OpenAI's activities that have hurt kids, endangered Americans, and facilitated the recent FSU mass shooting."

The legal questions are unprecedented. Traditional technology liability frameworks — like Section 230 protections for platforms — don't neatly apply to generative AI. A search engine that returns dangerous information isn't typically held liable. But ChatGPT's conversational nature — its ability to provide personalized, context-aware guidance — creates a fundamentally different relationship with the user. The Florida AG's investigation could establish critical precedents for how courts think about AI-assisted harm.

The Tension Between Growth and Responsibility

What makes this moment particularly revealing is the juxtaposition. In the same week that OpenAI asks users to pay more for its most powerful capabilities, a state government is asking whether those capabilities are being used in ways that cause real harm. It's the central tension of the AI industry in 2026: the push for ever-more-powerful models and broader adoption colliding with the reality that more power means more potential for misuse.

OpenAI isn't alone in facing this tension — as we explored in Why Most AI Agent Projects Crash and Burn, the gap between capability and responsible deployment remains one of the industry's biggest challenges. But the combination of aggressive premium pricing and a high-profile criminal investigation puts OpenAI at the center of this debate in a way that feels particularly acute.

The Bigger Picture

The $100 Pro tier signals that OpenAI believes the professional AI market is large and willing to pay premium prices — a bullish signal for the entire industry. The Florida investigation signals that the legal and regulatory framework for AI is still being written in real-time, with real consequences.

Both stories will shape the industry's trajectory. The Pro tier's success or failure will influence pricing strategy across AI companies. And the outcome of the Florida investigation could define the boundaries of AI company liability for years to come. For OpenAI, the stakes couldn't be higher on both fronts.


Sources: TechCrunch, CNBC, CBS News, UPI, Tampa Bay Times