S. 2081 (119th Congress)


RISE Act of 2025

Introduced: Jun 12, 2025
Policy Area: Science, Technology, Communications

Bill Statistics

Actions: 2
Cosponsors: 0
Summaries: 0
Subjects: 1
Text Versions: 1
Full Text: Yes


Latest Action

Jun 12, 2025
Read twice and referred to the Committee on Commerce, Science, and Transportation.

Actions (2)

Read twice and referred to the Committee on Commerce, Science, and Transportation.
Type: IntroReferral | Source: Senate
Jun 12, 2025
Introduced in Senate
Type: IntroReferral | Source: Library of Congress | Code: 10000
Jun 12, 2025

Subjects (1)

Science, Technology, Communications (Policy Area)

Text Versions (1)

Introduced in Senate

Jun 12, 2025

Full Bill Text

Length: 10,606 characters | Version: Introduced in Senate | Version Date: Jun 12, 2025 | Last Updated: Nov 12, 2025 6:22 AM
[Congressional Bills 119th Congress]
[From the U.S. Government Publishing Office]
[S. 2081 Introduced in Senate (IS)]

<DOC>

119th CONGRESS
1st Session
S. 2081

To establish immunity from civil liability for certain artificial
intelligence developers, and for other purposes.

_______________________________________________________________________

IN THE SENATE OF THE UNITED STATES

June 12, 2025

Ms. Lummis introduced the following bill; which was read twice and
referred to the Committee on Commerce, Science, and Transportation

_______________________________________________________________________

A BILL

To establish immunity from civil liability for certain artificial
intelligence developers, and for other purposes.

Be it enacted by the Senate and House of Representatives of the
United States of America in Congress assembled,
SECTION 1. SHORT TITLE.

This Act may be cited as the ``Responsible Innovation and Safe
Expertise Act of 2025'' or the ``RISE Act of 2025''.
SEC. 2. FINDINGS.

Congress finds the following:

(1) Artificial intelligence systems have rapidly advanced
in capability and are increasingly being deployed across
professional services, including healthcare, law, finance, and
other sectors critical to the economy.

(2) Industry leaders have publicly acknowledged the
development of increasingly powerful artificial intelligence
systems, with some discussing the potential for artificial
general intelligence and superintelligence that could
fundamentally reshape the society of the United States.

(3) The current lack of clarity regarding liability for
artificial intelligence errors creates uncertainty that impedes
the responsible integration of these beneficial technologies
into professional services and economic activity.

(4) Many artificial intelligence systems operate with
limited transparency regarding their capabilities, limitations,
and default instructions, making it difficult for professional
users to assess appropriate use cases and for legal systems to
fairly allocate responsibility when errors occur.

(5) Learned professionals who utilize artificial
intelligence tools in serving clients have professional
obligations to understand the capabilities and limitations of
the tools they employ, requiring access to clear information
about system specifications and performance characteristics.

(6) Establishing clear standards for artificial
intelligence transparency, coupled with appropriate liability
frameworks, will promote responsible innovation while ensuring
that the benefits and risks of artificial intelligence systems
are properly understood and managed as these technologies
continue to advance.

(7) The development of artificial intelligence systems that
may significantly impact the future of human civilization
warrants a governance approach that balances innovation
incentives with robust transparency requirements and
appropriate allocation of responsibility among developers,
professional users, and other stakeholders.
SEC. 3. DEFINITIONS.

In this Act:

(1) Artificial intelligence.--The term ``artificial
intelligence'' has the meaning given the term in section 5002
of the National Artificial Intelligence Initiative Act of 2020
(15 U.S.C. 9401).

(2) Client.--The term ``client'' means a person that--
(A) engages the services of a learned professional;
(B) relies upon the expertise, judgment, and advice
of the learned professional; and
(C) has a relationship with the learned
professional that is governed by professional
standards, codes of conduct, or regulations.

(3) Developer.--The term ``developer'' means a person
that--
(A) creates, designs, programs, trains, modifies,
or substantially contributes to the creation or
modification of an artificial intelligence product;
(B) exercises control over the design
specifications, functionality, capabilities,
limitations, or intended uses of an artificial
intelligence product; or
(C) markets, distributes, licenses, or makes
available an artificial intelligence product under
their own name, brand, or trademark, regardless of
whether the person creates the original underlying
technology of the artificial intelligence product.

(4) Error.--The term ``error'' means--
(A) any output, action, recommendation, or material
omission by an artificial intelligence product that is
false, misleading, fabricated, deceptive, or incomplete
in a manner that a reasonable developer could foresee
would cause harm; or
(B) any failure of an artificial intelligence
product to perform a function or task that the
artificial intelligence product expressly or implicitly
represents itself as capable of performing.

(5) Learned professional.--The term ``learned
professional'' means an individual who--
(A) possesses specialized education, training,
knowledge, or skill in a profession;
(B) is licensed, certified, or otherwise authorized
by an appropriate Federal or State authority to
practice in that profession;
(C) is bound by professional standards, ethical
obligations, and a duty of care to clients; and
(D) exercises independent professional judgment
when using tools, including artificial intelligence
products, in the course of rendering professional
services.

(6) Model card.--The term ``model card'' means a publicly
available technical document in which a developer describes,
consistent with industry standards and as rigorously as or more
rigorously than industry peers, the training data sources,
evaluation methodology, performance metrics, intended uses,
limitations, and risk mitigations, including detection,
evaluation, management, and safeguards against errors, of an
artificial intelligence product.

(7) Model specification.--The term ``model
specification''--
(A) means the text or other configuration
instructions of an artificial intelligence product--
(i) supplied by a developer;
(ii) that establish the intended base
behavior, tone, constraints, or goals of the
artificial intelligence product; and
(iii) that materially influence the outputs
of the artificial intelligence product across
users or sessions, including the system prompt
provided to the model before engaging with user
queries; and
(B) includes--
(i) the system prompt and any other text or
images that the artificial intelligence product
receives that are not visible to the end user;
(ii) any constitution or analogous guiding
document used when training or fine-tuning of
an artificial intelligence product, including
in automated schemes in which an artificial
intelligence system trains another artificial
intelligence system; and
(iii) the instructions, rubrics, or other
guidance provided to human raters or evaluators
of an artificial intelligence product the
feedback of whom is used to train or fine-tune
the artificial intelligence product.
SEC. 4. IMMUNITY FROM CIVIL LIABILITY FOR CERTAIN ARTIFICIAL
INTELLIGENCE DEVELOPERS.

(a) Safe Harbor Eligibility.--A developer shall be immune from
civil liability for errors generated by an artificial intelligence
product when used by a learned professional in the course of providing
professional services to a client if the developer--

(1) prior to deployment of the artificial intelligence
product, publicly releases and continuously maintains--
(A) the model card for the artificial intelligence
product; and
(B) the model specification for the artificial
intelligence product, which may include redactions--
(i) only relating to information that would
reveal trade secrets unrelated to the safety of
the artificial intelligence product; and
(ii) only if the developer furnishes
contemporaneously with each redaction a written
justification for the redaction identifying the
basis for withholding the information as a
trade secret; and

(2) provides clear and conspicuous documentation to learned
professionals describing the known limitations, failure modes,
and appropriate domains of use for the artificial intelligence
product.

(b) Scope of Immunity.--The immunity provided under subsection (a)
shall be conferred to a developer only for acts or omissions that do
not constitute recklessness or willful misconduct by the developer.
(c) Duty To Update.--Immunity under subsection (a) relating to an
artificial intelligence product shall not apply to a developer--

(1) that does not update the model card, model
specification, and documentation with respect to the artificial
intelligence product as described in subsection (a)(1) by the
date that is 30 days after the date on which the developer--
(A) deploys a new version of the artificial
intelligence product; or
(B) discovers a new and material failure mode
affecting the artificial intelligence product; and

(2) of which the failure to make an update described in
paragraph (1) by the applicable date described in that
paragraph proximately causes a harm occurring after that date.
(d) Preemption.--

(1) Express preemption.--This section shall apply to any
claim arising under State law against a developer for an error
arising from the use of an artificial intelligence product by a
learned professional in providing professional services if the
developer is immune from civil liability under subsection (a).

(2) Claims not preempted.--Nothing in this section shall
apply to a claim arising under State law against a developer
based on fraud, knowing misrepresentation, or conduct outside
the scope of professional use of an artificial intelligence
product by a learned professional.
SEC. 5. RULE OF CONSTRUCTION.

Nothing in this Act shall be construed to affect any immunity from
civil liability established by Federal or State law or available at
common law that is not related to the immunity established under
section 4(a).
SEC. 6. EFFECTIVE DATE.

This Act--

(1) shall take effect on December 1, 2025; and

(2) shall apply to acts or omissions occurring on or after
the date described in paragraph (1).
<all>