Trump Admin Strips “Safety” from AI Oversight Institute in Move to Rebrand

The Trump administration announced a rebrand of the US Artificial Intelligence (AI) Safety Institute, stripping the word “safety” from the organization’s title and mission.

The institute, once tasked with developing standards to ensure AI model transparency, robustness and reliability, will now be known as the Center for AI Standards and Innovation (CAISI). According to the announcement, its focus will be on enhancing US competitiveness and guarding against foreign threats, not constraining the industry with regulations.

The decision, announced on Tuesday (June 3) by US Secretary of Commerce Howard Lutnick, marks a sharp departure from the Biden-era posture on AI governance.

“For far too long, censorship and regulations have been used under the guise of national security. Innovators will no longer be limited by these standards,” Lutnick said in a statement.

“CAISI will evaluate and enhance US innovation of these rapidly developing commercial AI systems while ensuring they remain secure to our national security standards.”

Established in November 2023 under President Joe Biden’s executive order on AI, the original AI Safety Institute was housed within the National Institute of Standards and Technology (NIST). It aimed to assess AI risks, publish safety benchmarks and convene stakeholders in a consortium focused on responsible AI development.

But with the Trump administration’s return to the White House, the emphasis has shifted.

Instead of curbing AI risks through regulation and safety protocols, the renamed CAISI will now prioritize “pro-innovation” objectives, including the evaluation of foreign AI threats, the mitigation of potential backdoors and malware in adversarial models, and the avoidance of what the administration sees as regulatory overreach from foreign governments.

According to the commerce department, CAISI’s primary tasks will include collaborating with NIST laboratories to help the private sector develop voluntary standards that enhance the security of AI systems, particularly in areas like cybersecurity, biosecurity and the misuse of chemical technologies. The center will also establish voluntary agreements with AI developers and evaluators, and lead unclassified evaluations of AI capabilities that may pose national security risks.

In addition to those directives, CAISI will lead comprehensive assessments of both domestic and foreign AI systems, focusing on how adversary technologies are being adopted and used, and identifying any vulnerabilities, such as backdoors or covert malicious behavior, that could pose security threats.

The center is also expected to work closely with the Department of Defense, the Department of Energy, the Department of Homeland Security, the Office of Science and Technology Policy, and the intelligence community.

CAISI will remain housed within NIST and will continue to work with NIST’s internal organizations, including the Information Technology Laboratory, as well as the Department of Commerce’s Bureau of Industry and Security.

Rise of foreign AI spurs national security concerns

The restructuring of the institute reflects Trump’s broader AI strategy: loosen domestic oversight while doubling down on global AI dominance. Within his first week back in office, Trump signed an executive order revoking Biden’s prior directives on AI governance and removed his AI policy documents from the White House website.

That same week, he announced the US$500 billion Stargate initiative — a massive public-private partnership involving OpenAI, Oracle and SoftBank Group (OTC Pink:SOBKY,TSE:9984) that is intended to make the US the global leader in AI.

The Trump administration’s pivot has been partly catalyzed by growing concerns over foreign AI competition, particularly from China. In January, Chinese tech firm DeepSeek unveiled a powerful AI assistant app, raising alarms in Washington due to its technical sophistication and uncertain security architecture.

Trump called the app a “wake-up call,” and lawmakers quickly moved to introduce legislation banning DeepSeek from all government devices. The Navy also issued internal guidance advising its personnel not to use the app “in any capacity.”

Signs of an impending transformation had emerged earlier in the year.

Reuters reported in February that no one from the original AI Safety Institute attended that month’s high-profile AI summit in Paris, even though Vice President JD Vance led the US delegation.

Trump’s One Big Beautiful Bill reshaping US AI governance

Trump’s massive One Big Beautiful Bill, which includes much of the aforementioned legislation, is poised to dramatically reshape the landscape of AI regulation in the US. The bill introduces a 10-year moratorium on state-level AI laws, effectively centralizing regulatory authority at the federal level.

The move aims to eliminate the patchwork of state regulations; the administration claims that a uniform national framework would bolster American competitiveness in the global AI arena.

The bill’s provision to preempt state AI regulations has sparked significant controversy.

A bipartisan coalition of 260 state lawmakers from all 50 states has urged Congress to remove the clause, arguing that it undermines state autonomy and hampers states’ ability to address local AI-related concerns. Critics also warn that the moratorium could delay necessary protections, potentially endangering innovation, transparency and public trust. They argue that it may isolate the US from global AI norms and reinforce monopolies within the industry.

Despite the backlash, proponents within the Trump administration assert that the bill is essential for maintaining US leadership in AI. The One Big Beautiful Bill is currently being debated in the US Senate.

Securities Disclosure: I, Giann Liguid, hold no direct investment interest in any company mentioned in this article.

This post appeared first on investingnews.com