April 17, 2025 (Washington, D.C.) – U.S. Representatives Don Beyer (D-VA), Mike Levin (D-CA), and Melanie Stansbury (D-NM) were joined by 45 additional Members of Congress, including House Science, Space, and Technology Committee Ranking Member Zoe Lofgren (D-CA) and Oversight and Government Reform Committee Ranking Member Gerry Connolly (D-VA), in calling for the immediate termination of the “Department of Government Efficiency’s” (DOGE) use of unauthorized AI systems, emphasizing the significant security risks and potential criminal liability involved. The lawmakers also expressed deep concerns about the lack of oversight of AI usage, the sharing of non-public or sensitive data, and Elon Musk’s conflicts of interest as a federal contractor and the founder and owner of xAI.
The lawmakers wrote:
“We write to express concern about the use of artificial intelligence (AI) systems within this Administration’s ‘Department of Government Efficiency’ (DOGE) without standards or regard for sensitive data. We understand AI’s potential for modernization and efficiency improvements within the federal government, and we support implementation of AI technologies in a manner that complies with existing laws on data security and on software development, acquisition, and usage, and that provides proper transparency, vetting, and oversight of such AI technologies. We are specifically concerned about reports of Elon Musk and DOGE monitoring and sharing federal employee data and non-public federal data using AI tools, and about reported intentions to use sensitive data to train private AI models. These practices present serious security risks, raise self-dealing concerns, and carry potential criminal liability if not handled correctly, and they have the potential to undermine successful and appropriate AI adoption.
“In addition, DOGE’s reported use of AI technologies on sensitive information raises significant concerns about data security. Musk’s DOGE team at the Office of Personnel Management reportedly used AI systems to analyze emails in which much of the two-million-person federal workforce described their previous week’s accomplishments, without model transparency and without addressing major concerns about security or conflicts of interest. Alarmingly, sensitive data from across the Department of Education was also reportedly fed into an AI system, including personally identifiable information for people who manage grants as well as sensitive internal financial data. Without proper protections, feeding sensitive data into an AI system puts it into the possession of the system’s operator: a massive breach of public and employee trust and an increase in the cybersecurity risks surrounding that data. Generative AI models also frequently make errors and show significant biases; the technology simply is not ready for use in high-risk decision-making without proper vetting, transparency, oversight, and guardrails in place.
“Sharing such data would constitute a major data privacy and data security risk. Specifically, we are concerned that sharing such data outside of federal systems or lawfully vetted contracts may violate laws such as the Privacy Act of 1974, the E-Government Act of 2002, and the Federal Information Security Modernization Act of 2014. These laws set requirements for the federal government’s collection and use of personal information and sensitive data, including by establishing limits on agency information sharing and requirements for data minimization, disclosure limitations, cybersecurity, transparency, and privacy impact assessments when developing or procuring information technology. In addition, the federal government is legally obligated to comply with codified requirements for vetting software and cloud products and services through programs such as the Federal Risk and Authorization Management Program (FedRAMP).
“It is clear that DOGE’s use of AI does not meet the standards set by the previous memoranda. Worse, existing AI systems like CamoGPT have been used in the misguided purging from federal materials of references to the achievements of Americans of color and women, including the Navajo Code Talkers and the Tuskegee Airmen. It is not clear how the use of CamoGPT meets the Congressional authorization for AI usage provided in the 2021 National Defense Authorization Act, and it is alarming that the result of such usage by this Administration was referred to as an error, raising questions about both the appropriateness of its use and the lack of sufficient oversight.
“While we support the federal government integrating new, approved AI technologies that can improve efficiency or efficacy, we cannot sacrifice security, privacy, and appropriate-use standards when interacting with federal data. Nor can we condone the use of AI systems, which are often prone to hallucinations and bias, in decisions regarding the termination of federal employment or federal funding without sufficient transparency and oversight of those models; the risk of losing talent and critical research to flawed technology, or to flawed uses of such technology, is simply too high. We ask that you immediately terminate any use of AI systems that have not been approved through FedRAMP or equivalent formal approval procedures, or that do not comply with existing laws. In addition, we ask that you not use any AI system to make employment termination decisions relating to civil servants.”