
AI is Infringing on Your Civil Rights. Here's How We Can Stop That

By supporting the AI Civil Rights Act of 2025, Congress can take a monumental step to protect our civil rights in the digital age.
Jo Gasior-Kavishe (she/her), Intern, ACLU NPAD Democracy and Technology
December 3, 2025

Searching for an apartment online, applying for a loan, going through airport security, or looking up a question on a search engine: these exchanges may seem like mundane things you do, but in many of them you're actually interacting with artificial intelligence (AI).

Avoiding AI in our quotidian activities feels impossible nowadays, especially now that public and private organizations use it to make decisions about us in hiring, housing, welfare, and other high-stakes areas. While proponents of AI boast about how efficient the technology is, the decisions it makes about us are oftentimes uncontestable and discriminatory, and they infringe on our civil rights.

However, inequity and injustice from artificial intelligence need not be our status quo. Senator Ed Markey and Congresswoman Yvette Clarke have just re-introduced the AI Civil Rights Act of 2025, which will help ensure AI developers and deployers do not violate our civil rights. The ACLU strongly urges Congress to pass this bill so we can prevent AI systems from undermining the equal opportunities our civil rights laws secured decades ago.


Why Do We Need the AI Civil Rights Act of 2025?

The AI Civil Rights Act shores up existing civil rights laws so that their protections apply to artificial intelligence.

Whether you are looking at the Civil Rights Act of 1964, the Fair Housing Act, the Voting Rights Act, the Americans with Disabilities Act, or a multitude of other statutes, current civil rights laws may not be easily enforced against discriminatory AI. In many cases, individuals may not even know AI was used, deployers may not be aware of its discriminatory impact, and developers may not have tested the AI model for discriminatory harms. By covering AI harms in several consequential areas (employment, education, housing, utilities, health care, financial services, insurance, criminal justice, identity verification, and government welfare benefits), the AI Civil Rights Act provides interlocking protections for people whose civil rights are eroded by AI systems: prohibitions on discrimination, testing protocols, and notice requirements across numerous sectors.


Ensuring AI Doesn't Become a Tool for Discrimination

One of the most important aspects of the AI Civil Rights Act is that it will allow us to better defend against discriminatory AI outputs. A decision from an AI model can often appear objective, but when you open up the algorithm, it can have a disparate impact on protected groups. Disparate impact, in the context of artificial intelligence, is a form of discrimination in which an AI model's decisions disproportionately harm one group over another; it has been seen in health care, financial services, criminal justice, and other significant areas.
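To make the idea concrete, here is a minimal, hypothetical sketch of one common way auditors quantify disparate impact: the "four-fifths rule" from federal employment guidance, which compares favorable-decision rates across groups. This illustration is not drawn from the bill itself, and the group data and 0.8 threshold are purely illustrative:

# A minimal sketch (not from the AI Civil Rights Act) of a disparate impact
# check using the four-fifths rule of thumb: compare each group's rate of
# favorable outcomes to the most-favored group's rate.

def selection_rate(decisions: list[bool]) -> float:
    """Fraction of applicants in a group who received a favorable decision."""
    return sum(decisions) / len(decisions)

def adverse_impact_ratio(protected: list[bool], reference: list[bool]) -> float:
    """Ratio of the protected group's selection rate to the reference group's."""
    return selection_rate(protected) / selection_rate(reference)

# Hypothetical loan-approval outcomes from an AI model (True = approved).
group_a = [True, True, False, True, True, True, False, True]    # reference group
group_b = [True, False, False, True, False, False, True, False] # protected group

ratio = adverse_impact_ratio(group_b, group_a)
print(f"Adverse impact ratio: {ratio:.2f}")

# Under the four-fifths rule of thumb, a ratio below 0.8 is commonly treated
# as evidence of disparate impact that warrants closer scrutiny.
if ratio < 0.8:
    print("Potential disparate impact: ratio falls below the 0.8 benchmark.")

In this hypothetical, the protected group's approval rate (3 of 8) is half the reference group's (6 of 8), yielding a ratio of 0.50, well below the 0.8 benchmark. Real audits use far larger samples and statistical significance tests, but the core comparison is this simple.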

Unfortunately, disparate impact claims can be onerous to bring forward. For one, to prevail on a disparate impact claim, plaintiffs must demonstrate both that an algorithm disproportionately harms a protected group and that a less discriminatory alternative exists. Meeting this burden becomes even harder when AI companies refuse to disclose their algorithms for evaluation, claiming they are trade secrets. For another, not all civil rights laws permit disparate impact claims, and President Donald Trump is constantly rolling back the use of disparate impact in civil rights enforcement. This continual weakening of disparate impact protection makes it even more difficult to bring AI-related discrimination claims.

To help with this, the AI Civil Rights Act addresses algorithmic discrimination by making it explicitly unlawful for AI developers or deployers to offer, license, promote, sell, or use an algorithm that causes or contributes to a disparate impact in critical life areas like housing and employment. Centering disparate impact in the AI Civil Rights Act ensures that concrete protections exist for individuals harmed by discriminatory AI models.


Transparency and Accountability in AI Systems

Beyond safeguarding against AI-powered discrimination with disparate impact protections, the AI Civil Rights Act gives us the transparency we desperately need from AI developers and deployers. It requires developers, deployers, and independent auditors to conduct pre-deployment evaluations, impact assessments, and annual reviews of their algorithms. These evaluations will be critical in determining whether a model harms people's civil rights and where, if at all, it can be deployed in a specific sector.

The AI Civil Rights Act also brings clarity to the long-debated question of who should be held accountable for the civil rights harms caused by algorithmic systems. If passed, it will make developers and deployers the parties responsible for taking reasonable steps to ensure their AI models do not violate our civil rights. These steps can include documenting any harms that can arise from a model, being fully transparent with independent auditors, consulting with stakeholders who are impacted by AI models, guaranteeing that the benefits of using an algorithm outweigh its harms, and more. Developers and deployers found violating the act risk civil penalties, fees, and other consequences at the federal, state, and individual levels. The act's accountability mechanisms are pivotal to empowering individuals against algorithmic harm while ensuring that AI developers and deployers understand it is their duty to build low-risk models.


What's Next?

If we want our AI systems to be safe, trustworthy, and non-discriminatory, the AI Civil Rights Act is how we start.

"AI is shaping access to opportunity across the country," says Cody Venzke, ACLU senior policy counsel. "'Black box' systems make decisions about who gets a loan, receives a job offer, or is eligible for parole, often with little understanding of how those decisions are made. The AI Civil Rights Act makes sure that AI systems are transparent and give everyone a fair chance to compete."
