Why This Move Matters Right Now
A new pro‑AI super PAC, Leading the Future, backed by Andreessen Horowitz, OpenAI President Greg Brockman, and other Silicon Valley figures, is mounting a multi‑million‑dollar campaign against New York Assembly member Alex Bores and his congressional bid. Bores is the chief sponsor of New York’s bipartisan RAISE Act, which would require large AI developers to implement and follow safety plans, disclose “critical safety incidents,” and avoid releasing models that pose unreasonable risks, backed by civil penalties of up to $30 million. This signals a decisive escalation: the industry is willing to spend heavily to block state‑level AI safety regimes while pushing for a single federal framework.
Key Takeaways
- Capital at stake is significant: the PAC launched with a commitment exceeding $100 million, and its leaders have signaled a multi‑million‑dollar effort targeting Bores.
- If enacted, the RAISE Act would create clear obligations (safety plans, incident disclosure) and steep penalties (up to $30 million); third‑party audit requirements were notably dropped during drafting.
- Expect intensified lobbying for federal preemption to block state AI rules; efforts to limit state action already surfaced in federal budget negotiations earlier this year.
- Enterprises should prepare for a dual‑track world: either comply with emerging state standards or face a long, messy fight over jurisdiction.
- Public concerns, from energy use and climate impact to youth mental health and job displacement, are shaping the political narrative, not just technical safety.
Breaking Down the Announcement
Leading the Future formed in August with the stated goal of supporting policymakers who favor a light‑touch or no‑touch approach to AI regulation. Backers include a16z, Brockman, Palantir co‑founder and 8VC managing partner Joe Lonsdale, and AI search engine Perplexity. The PAC’s leaders, Zac Moffatt and Josh Vlasto, say they will work to sink Bores’s campaign, arguing that bills like the RAISE Act would undercut U.S. competitiveness and national security and create a patchwork of state rules that invite foreign manipulation.
Bores counters that states must move where Congress has not, and he has coordinated with other states to reduce the “patchwork” risk. Importantly, he removed some provisions—like mandatory third‑party audits—after consulting with large AI firms, but the remaining obligations still bite: documented safety plans, compliance with those plans, disclosures of critical safety incidents (e.g., model theft), and a prohibition on releasing models that carry unreasonable risks of critical harm.

What This Changes for Operators
If the RAISE Act becomes law, New York would establish a concrete compliance floor for “large AI labs.” Even without formal third‑party audits, the required safety plans and incident disclosures will force enterprises to operationalize risk management, not just publish principles. The $30 million penalty ceiling is large enough to get board attention, and the “unreasonable risk” clause creates legal exposure if release gates and red‑teaming are inadequate.

For vendors training or deploying advanced models, this could mean:
- Standing up auditable safety plans tied to technical controls (evals, alignment tests, red‑team reports) with clear go/no‑go release criteria; a minimal sketch of such a gate follows this list.
- Building incident response playbooks specifically for model exfiltration, misuse at scale, and other “critical safety incidents,” with timelines and triggers for disclosure.
- Clarifying accountability: who signs off that a release is not an “unreasonable risk,” and based on which quantitative thresholds.
- Considering feature geofencing or staggered rollouts if multi‑state requirements diverge, though this adds product complexity and PR risk.
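To make the first item above concrete, here is a minimal, hypothetical sketch of a go/no‑go release gate driven by documented eval thresholds. The metric names, scores, and thresholds are invented for illustration; the RAISE Act does not prescribe any particular metrics or code, and a real gate would sit inside a broader review and sign‑off process.

```python
# Hypothetical illustration only: a release gate that compares internal eval
# results against documented go/no-go thresholds from a safety plan. Metric
# names and threshold values are invented for this sketch.
from dataclasses import dataclass

@dataclass
class EvalResult:
    metric: str          # e.g. "bio_misuse_uplift", "cyber_autonomy" (placeholder names)
    score: float         # red-team or benchmark score for this release candidate
    threshold: float     # maximum score the safety plan allows for release

def release_gate(results: list[EvalResult]) -> tuple[bool, list[str]]:
    """Return (approved, failed_metrics). A single failure blocks release and
    should trigger the escalation path named in the safety plan."""
    failures = [r.metric for r in results if r.score > r.threshold]
    return (len(failures) == 0, failures)

if __name__ == "__main__":
    candidate = [
        EvalResult("bio_misuse_uplift", score=0.12, threshold=0.20),
        EvalResult("cyber_autonomy", score=0.31, threshold=0.25),  # exceeds threshold
    ]
    approved, failed = release_gate(candidate)
    print("release approved" if approved else f"blocked by: {failed}")
```

The point of encoding the gate this way is auditability: thresholds live next to results, so demonstrating that the safety plan was actually followed becomes a matter of record rather than recollection.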
Industry and Policy Context
The clash exposes a broader strategic bet. Many in Silicon Valley want a single federal framework and view state experimentation as an economic and security liability. Some federal lawmakers have already tried to curb state AI laws via budget riders; those efforts were removed but are resurfacing. Meanwhile, states see themselves as policy laboratories while Congress stalls, and Bores says he’s working to standardize language across states and avoid overlap with the EU AI Act.
For enterprises, the near‑term reality is uncertainty. We could see a handful of states adopt baseline safety‑plan and incident‑disclosure requirements before any federal preemption arrives. Compared with the EU AI Act’s broad risk tiers and conformity assessments, the RAISE approach is narrower but more directly enforceable against frontier developers. The dropped third‑party audit clause reduces audit burden but shifts scrutiny onto internal controls—and, ultimately, onto enforcement agencies and courts to interpret “unreasonable risk.”

Risks and Open Questions
- Scope and thresholds: Which entities qualify as “large AI labs,” and how will thresholds map to compute, capability, or deployment scale?
- Disclosure mechanics: What constitutes a “critical safety incident,” how fast must it be reported, and to whom? Over‑disclosure risks reputational harm; under‑disclosure risks fines and litigation.
- Preemption timing: A late‑breaking federal preemption could upend state regimes, stranding compliance investments already made while also establishing a national baseline.
- Political backlash: Heavy spending against state sponsors could galvanize counter‑mobilization, raising the odds of stricter proposals in other jurisdictions.
Recommendations for Executives
- Stand up a safety plan now: Tie it to concrete eval metrics, release gates, and red‑team thresholds; ensure the plan is followed in practice, not just on paper.
- Build an incident disclosure playbook: Define “critical safety incidents,” designate owners, set internal SLAs, and rehearse response. Assume model theft and misuse scenarios; a minimal sketch follows this list.
- Adopt a “highest common denominator” approach: Map New York’s requirements against EU AI Act obligations and internalize the stricter standard to avoid product fragmentation.
- Engage early with policymakers: Government affairs should advocate for clarity on definitions and reporting timelines while avoiding a posture that reads as hostile to safety or corrosive to public trust.
- Budget for compliance and comms: Plan for legal review, measurement infrastructure, and transparent public messaging to maintain user and regulator trust.
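As a companion to the incident‑disclosure recommendation above, here is a minimal, hypothetical sketch of a playbook encoded as data so it can be reviewed, versioned, and rehearsed. The incident categories, owner roles, and SLA hours are placeholders, not statutory terms; actual reporting deadlines and recipients would depend on the final law and your counsel’s reading of it.

```python
# Hypothetical sketch: an incident-disclosure playbook as data. Categories,
# owners, SLAs, and recipients are placeholders, not requirements from the
# RAISE Act or any other statute.
from dataclasses import dataclass

@dataclass
class DisclosureRule:
    incident_type: str       # e.g. "model_weight_exfiltration" (placeholder)
    internal_owner: str      # role that triages and signs off
    internal_sla_hours: int  # time allotted for internal assessment
    notify: list[str]        # who is informed once the rule fires

PLAYBOOK = [
    DisclosureRule("model_weight_exfiltration", "CISO", 24,
                   ["general_counsel", "regulator_contact"]),
    DisclosureRule("large_scale_misuse", "head_of_trust_safety", 48,
                   ["general_counsel", "exec_risk_committee"]),
]

def rules_for(incident_type: str) -> list[DisclosureRule]:
    """Look up the disclosure steps defined for a reported incident type."""
    return [r for r in PLAYBOOK if r.incident_type == incident_type]
```

Keeping the playbook in version control alongside the safety plan makes it easier to rehearse and to show regulators that disclosure triggers were defined before an incident, not after.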
Bottom line: regardless of whether New York’s governor signs the RAISE Act, the size and speed of this political push mean AI governance is no longer a future concern—it’s a present operational requirement. Treat it accordingly.