How New York’s RAISE Act is reshaping AI regulation and vendor risk for business leaders
- CEO snapshot: New York’s RAISE Act forces the largest AI developers to publish AI safety plans, report critical incidents, and submit to state oversight — and major tech backers have spent millions opposing the lawmaker who authored it.
- Why it matters: The law raises immediate vendor risk, procurement exposure, and compliance costs for firms embedding AI agents and foundation models into products.
- What to do now: Map model dependencies, renegotiate SLAs for incident reporting and audits, invest in provenance and anonymization, and prepare public-facing safety communications.
The clash every C-suite should care about
New York’s RAISE Act forces the biggest AI developers to publish AI safety plans and submit to state oversight — and Silicon Valley backers are spending millions to oppose the lawmaker who wrote it. That fight is more than politics: it changes how enterprises buy, deploy, and govern AI automation today.
What the RAISE Act requires (plain language)
The RAISE Act, passed in New York in 2025, targets very large AI developers. According to reporting, it requires covered firms to:
- Publish and follow public AI safety plans that explain how models are governed and tested.
- Disclose “critical safety incidents” — serious harms or near-misses — to a state agency and the public on an annual basis.
- Submit to ongoing state oversight, including audits and annual reporting tied to model complexity and operational scale.
The law uses thresholds designed to capture major vendors (reportedly a model-complexity test plus a roughly $500M/year revenue bar), so most startups are excluded while OpenAI, Anthropic, Google, and Meta would be covered.
Quick technical primer for executives
Short definitions to keep meetings efficient:
- Chain-of-thought: models’ internal step-by-step reasoning that can reveal how they reach answers — useful for transparency but also a source of new risks.
- RLHF (reinforcement learning from human feedback): a training method where human ratings shape model behavior — it improves alignment but also influences capabilities.
- De-anonymization: re-identifying people in datasets by linking leftover identifiers to public records or other datasets.
- AI agents: autonomous or semi-autonomous model-driven processes that perform tasks, often by chaining multiple model calls and tools (a minimal loop is sketched after this list).
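To make the agent definition concrete, here is a minimal sketch of the pattern: a loop that alternates model calls and tool invocations until the task completes. The model call is stubbed, and every name here is a hypothetical placeholder, not a specific vendor API.

```python
# Minimal sketch of the agent loop: chain model calls and tool invocations
# until the model returns a final answer. All names are hypothetical
# placeholders, not a specific vendor API.

TOOLS = {
    "lookup_order": lambda order_id: {"order_id": order_id, "status": "shipped"},
}

def call_model(messages: list) -> dict:
    # Stubbed model for illustration: first requests a tool, then answers.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "lookup_order", "args": {"order_id": "A123"}}
    return {"content": "Order A123 has shipped."}

def run_agent(task: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = call_model(messages)
        if "tool" in reply:  # the model asked to invoke a tool
            result = TOOLS[reply["tool"]](**reply["args"])
            messages.append({"role": "tool", "content": str(result)})
        else:  # final answer reached
            return reply["content"]
    return "Stopped: step budget exhausted."

print(run_agent("Where is order A123?"))
```

The step budget and the explicit tool allowlist are the kinds of basic controls a RAISE-style safety plan would be expected to document.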
Political escalation: why tech money is flowing into the fight
The law’s author, Alex Bores, is a former Palantir engineer turned New York assemblymember who left Palantir on ethical grounds and has pushed several safety-forward bills. His RAISE framework emphasizes public safety plans, labor impact mitigation, and standards for deepfakes and age verification.
Industry pushback crystallized in late 2025, when a super PAC called Leading the Future began spending reported millions to oppose Bores’ congressional run. Publicized backers reportedly include major tech figures and investors — Greg Brockman, Palantir cofounder Joe Lonsdale, and Andreessen Horowitz among them — signaling that part of the industry perceives policy risk to product roadmaps and growth.
At the same time, federal pushback has arrived: a presidential executive order directed at state AI laws threatened to leverage federal funding streams and to challenge “inconsistent” state regulations. The result is a constitutional and commercial tug-of-war between state experimentation and national coordination.
What this really means for procurement and vendor risk management
RAISE-style rules don’t only affect big vendors — they change downstream contracts and operational risk for buyers. Expect these practical shifts:
- New contract clauses: vendors will face demands for faster incident notification, audit rights tied to model complexity, and contractual guarantees about safety testing.
- Higher due diligence bar: procurement teams must validate vendor safety plans, audit histories, and data governance practices before buying AI-enabled services.
- Operational controls: enterprises embedding AI agents will need provenance metadata, documented model cards, and demonstrable anonymization techniques to limit de-anonymization risk (a schema sketch follows this list).
- Insurance and liability: insurers may demand evidence of third-party audits, incident response capabilities, and KPIs around time-to-detection and mitigation.
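For the provenance controls above, the practical starting point is a simple, enforced record attached to every production model. A minimal sketch — all field names are illustrative assumptions, not an industry standard:

```python
# Illustrative provenance record an enterprise might attach to every
# production model. Field names are assumptions, not a standard schema.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelProvenance:
    model_name: str
    version: str
    vendor: str
    model_card_url: str          # link to the vendor-supplied model card
    training_data_summary: str   # high-level description, not raw data
    last_safety_audit: date
    anonymization_controls: list = field(default_factory=list)

record = ModelProvenance(
    model_name="support-agent",
    version="2.3.1",
    vendor="Vendor Y",
    model_card_url="https://example.com/model-card",  # hypothetical URL
    training_data_summary="Licensed support transcripts, PII scrubbed",
    last_safety_audit=date(2025, 11, 1),
    anonymization_controls=["PII redaction", "differential privacy"],
)
```

Even a lightweight record like this gives procurement and audit teams one place to check vendor, version, and audit status per model.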
Mini case: how a vendor contract might change
Company X licenses a customer-service AI agent from Vendor Y. Under RAISE-style pressure, Company X’s legal and procurement teams will likely add the following terms (a deadline-tracking sketch follows the list):
- Incident reporting: vendor must notify within 24 hours of any critical safety incident and provide a root-cause analysis within 10 business days.
- Audit rights: Company X may request annual third-party audits of the deployed model, with remediation timelines for any findings that impact user safety.
- Provenance and explainability: vendor must supply model cards and provenance metadata for any model updates affecting production services.
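To make such clauses operational rather than shelfware, buyers can encode the deadlines directly. A minimal sketch, assuming the 24-hour and 10-business-day figures from the example clause above (adjust to whatever the contract actually says):

```python
# Compute contractual deadlines from an incident's discovery time, using
# the example clause's 24-hour notice and 10-business-day RCA windows.
from datetime import datetime, timedelta

NOTIFY_WINDOW = timedelta(hours=24)

def add_business_days(start: datetime, days: int) -> datetime:
    current = start
    while days > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday-Friday only
            days -= 1
    return current

def sla_deadlines(discovered_at: datetime) -> dict:
    return {
        "notify_by": discovered_at + NOTIFY_WINDOW,
        "root_cause_due": add_business_days(discovered_at, 10),
    }

print(sla_deadlines(datetime(2025, 6, 2, 9, 0)))
```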
Practical 90-day roadmap for the C-suite
Implementable actions to reduce both regulatory and reputational exposure, with KPIs to track.
- Map dependencies (Days 1–14): Inventory all models and third-party AI agents. KPI: 100% of AI endpoints cataloged with owner, vendor, and purpose.
- Contract triage (Days 7–30): Prioritize high-risk contracts (external-facing, high-sensitivity data). KPI: top 20% of vendor contracts flagged, with a renegotiation plan in place.
- Safety baseline (Days 14–45): Require vendors to provide AI safety plans and model cards. KPI: 90% of strategic vendors provide safety documentation or a remediation timeline.
- Technical investments (Days 30–90): Deploy provenance metadata, enable logging for incident detection, and implement anonymization/differential privacy where needed (a minimal differential-privacy sketch follows this list). KPI: percentage of production models with provenance tags and standard logging enabled.
- Public narrative (Days 30–90): Publish a customer-facing safety statement and update SLAs. KPI: public safety statement published and referenced in sales materials.
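Of the technical investments above, differential privacy is the least familiar to most teams. The sketch below shows the core idea — the Laplace mechanism, which adds calibrated noise to an aggregate statistic before release. The epsilon value is an illustrative assumption; production deployments should rely on a vetted DP library rather than hand-rolled code.

```python
# Laplace mechanism sketch: release an aggregate count with noise scaled
# to sensitivity / epsilon, so no single record can be reverse-engineered.
import numpy as np

def dp_count(true_count: int, epsilon: float = 0.5, sensitivity: float = 1.0) -> int:
    """Smaller epsilon = stronger privacy, noisier answer."""
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return round(true_count + noise)

# Example: publishing how many sessions tripped a safety filter last week,
# without exposing whether any one user's session is in the tally.
print(dp_count(true_count=1284))
```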
Estimated budget guidance: for a mid-size enterprise, initial compliance readiness (inventory, contract updates, and basic provenance tooling) typically runs from the low six figures to a few million dollars, depending on scale and data sensitivity. Independent audits and deeper technical controls add incremental cost.
Sample contract clause headers to discuss with counsel
- Critical Incident Notification: Vendor to notify Customer within X hours of discovery and provide a remediation plan within Y days.
- Audit and Inspection Rights: Customer may require annual third-party safety audits, with vendor-funded remediation for material noncompliance.
- Provenance and Model Cards: Vendor shall provide model provenance metadata and updated model cards 30 days prior to material model changes.
- Data Handling & De-anonymization Risk: Vendor must disclose re-identification risk assessments and mitigation techniques (e.g., differential privacy).
Legal note: This is analysis, not legal advice. Companies should consult counsel to tailor language to jurisdiction and risk profile.
Broader policy context and international implications
RAISE-style state laws are part of a patchwork. The EU’s AI Act imposes risk-based categorization and obligations across the bloc; New York’s approach targets the most capable U.S. vendors through size and capability thresholds. Multinational firms need a strategy for regional alignment versus regional forks: map divergent requirements, consolidate controls where possible, and treat the strictest applicable standard as the baseline — often the safest operational choice.
Export controls and competition with China add geopolitical texture. Some executives argue lighter domestic rules speed commercialization; others counter that trusted, aligned AI — with verifiable safety standards — is the competitive differentiator in global markets.
What to watch next
- Litigation timelines: expect legal challenges to state-level thresholds and preemption arguments.
- Federal coordination: potential congressional frameworks or executive directives that harmonize or preempt state laws.
- PAC activity and elections: industry political spending may shift candidate viability and therefore the legislative agenda.
- Standards bodies: look for ISO-like or industry-led governance standards that reduce friction for cross-jurisdiction compliance.
Questions leaders ask
Will state AI laws affect my procurement?
Yes. Even if your company isn’t directly covered, vendors will change contracts and operational practices; buyers must verify vendor compliance and may face new service disruptions or price changes.
Who does the RAISE Act target?
The law is designed to capture the largest model developers through a combination of model complexity and revenue thresholds — reported to be roughly $500M/year — so primarily major vendors of foundation models.
Are industry-funded political campaigns normal here?
Political spending is common, but the scale of tech-funded activity against a regulator highlights how strategic policy fights can become proxy battles over product roadmaps and market structure.
How should boards frame this?
Ask the CIO and GC whether the company has inventoried AI dependencies, updated vendor SLAs for incident reporting, and budgeted for third-party audits. These are governance priorities that boards should monitor quarterly.
Final posture: prepare, don’t panic
Regulatory momentum around AI is accelerating, and state-level experiments like the RAISE Act are shaping vendor behavior faster than many organizations expected. The most defensible strategy for leaders is proactive preparedness: technical controls, contract hygiene, transparent communications, and a clear roadmap for governance. That approach reduces compliance costs, protects customers, and builds trust — which may be the decisive commercial advantage as AI for business moves from experimental to essential.
Sources: reporting on the RAISE Act and related developments, public statements by involved parties, and coverage of political spending and executive actions. Verify campaign finance and legal timelines with official filings and counsel.