RAISE Act

Also known as: Responsible AI Safety and Education Act, New York AI safety law

Tags: policy, beginner

What is the RAISE Act?

The Responsible AI Safety and Education Act (RAISE Act) is a New York State law — the first major AI safety legislation in the United States. Authored by Assemblyman Alex Bores and modeled after California’s SB 53, it was passed by both chambers of the New York legislature in June 2025 and signed into law by Governor Hochul in December 2025.

The law applies to companies that have spent $100 million or more on training frontier AI models — currently five companies: OpenAI, Anthropic, xAI, Google, and Meta. It requires these companies to publish safety plans, adhere to them, and report critical safety incidents to state authorities. “Critical safety incidents” are defined specifically in the bill as events involving imminent or actual injury or death — a deliberately high bar.

Key Characteristics

  • Narrow scope: Only applies to the largest frontier model developers (5 companies as of 2026)
  • Safety plans: Companies must create, publish, and adhere to safety protocols
  • Incident reporting: Critical safety events must be disclosed to a new AI oversight office within the NY Department of Financial Services
  • Rising standard: Designed as a template for national legislation, similar to how California often sets environmental standards that become federal ones
  • First-mover: The first enforceable AI safety law in the US, predating any federal framework

Why the RAISE Act Matters

The RAISE Act matters less for what it requires — which is relatively modest — and more for the precedent it sets. It establishes the principle that AI companies must have public, enforceable safety commitments rather than voluntary ones.

Its passage came at a critical moment: the major AI labs had all made voluntary safety pledges in 2023-2024, but by early 2026, most were abandoning those commitments as competition intensified. Anthropic replaced its hard safety commitments with “non-binding publicly declared targets.” OpenAI researchers were resigning over safety concerns. The voluntary model had failed, making the case for enforceable legislation.

The political significance is also notable. A super PAC called Leading the Future — backed by OpenAI co-founder Greg Brockman and Andreessen Horowitz — committed $125 million to defeat Bores in his 2026 Congressional race, explicitly to send a message to other lawmakers about the cost of regulating AI.

Mentioned In


Alex Bores:

“The RAISE Act requires the absolutely largest AI developers to have safety plans that they make public and actually stick to, and disclose critical safety incidents.”