Amid the crush of executive orders and agency directives issued during Donald Trump’s first weeks in office, his administration has begun to demolish the foundations for ensuring that artificial intelligence (AI) in the U.S. is safe and responsible. The president is not only set to completely roll back the fledgling protections Joe Biden’s administration instituted, but also to further accelerate the spread of unchecked AI across American life.
How Is President Trump Dismantling AI Protections?
President Trump has undone existing AI protections at a breathtaking pace. One of his first actions was to repeal the Biden administration’s Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. That repeal also ordered a review of a presidential directive, known as a National Security Memorandum, governing national security uses of AI. Days later, President Trump issued his own AI executive order, directing the Office of Management and Budget, which coordinates agencies across the federal government, to overhaul its existing directive on federal uses of AI. Federal agencies have followed the president’s lead, scrubbing their websites of AI guidelines, protections for jobseekers, and more. The goal is clear: undo anything that could slow breakneck AI development and deployment.
That emphasis on speed is dangerous. Artificial intelligence and other automated tools have already been rapidly adopted in the private and public sectors, without first ensuring that the tools are fair or appropriate. In the absence of strong guardrails, those tools are creating real-world harms when companies and government agencies use them to help decide who gets a job, who gets a loan, who goes to jail, and a host of other sensitive decisions.
Why Are AI Safeguards Essential for Civil Rights and Public Safety?
During the Biden administration, federal agencies began to develop guardrails to protect people when AI threatened their civil rights or safety. Those measures included critical protections for at least some of the federal government’s uses of AI and commonsense guidance from agencies on steps the private sector should take to ensure that AI use complies with existing civil rights and other laws.
But President Trump is already rolling back these modest measures with little to replace them. This is a grave mistake. Many of the Biden administration’s directives were basic, commonsense steps the government should take any time agencies are experimenting with and deploying a powerful new technology. These steps include robust public transparency and internal oversight (such as agency chief AI officers), as well as regular testing requirements to ensure that AI tools follow existing laws protecting civil rights and civil liberties, accurately perform the tasks they’re given, and don’t waste agency resources. There’s no reason for the Trump administration to jettison those protections.
Who Benefits from Rolling Back AI Regulations?
Rolling back AI protections signals the pronounced power Big Tech holds in the new administration. That power includes deploying AI to probe and slash critical government programs and grants. We are also seeing Big Tech’s outsized influence in key personnel decisions and in a new executive order that directs federal agencies to “integrate modern technology” into hiring and to “leverag[e] digital platforms to improve candidate engagement,” which we fear is veiled language for unproven products, such as gamified assessments, automated video interviews, and chatbots, that technology vendors often pitch with exactly such claims. These technologies have repeatedly been shown to produce discriminatory harms, and many workers report that today’s digital application platforms are particularly confusing, inaccessible, and opaque. Without safeguards, this influence will translate directly into real-world harms.
The Trump administration is primed to accelerate AI’s development and deployment without critical guardrails to protect people from harm. Supercharging AI deployment without those guardrails will also supercharge the well-documented harms that are already happening: more people denied jobs because an AI ranked them lower than an equally qualified candidate, and more people having their benefits cut or flagged for fraud based on erroneous or unfair AI determinations. We have also seen numerous instances in which automated systems deployed in government contexts without appropriate guardrails produce costly, inaccurate, and inefficient outcomes for everyone, achieving the direct opposite of the oft-stated goals of adopting AI in the first place.
Commonsense guardrails are not an impediment to AI innovation; they’re necessary to ensure that innovation makes our lives better rather than worse. Progress means greater fairness, safety, opportunity, and convenience for everyone, not worsening existing discrimination and creating more roadblocks for underrepresented and marginalized people.