Pentagon awards xAI $200M for military AI

Credit: An aerial view of the Pentagon, Washington, D.C., May 15, 2023/Air Force Staff Sgt. John Wright, DOD

Just one week after Elon Musk’s chatbot, Grok, ignited a firestorm for adopting the moniker “MechaHitler” and promoting antisemitic rhetoric, the U.S. Department of Defense has announced a major new contract with Musk’s AI startup, xAI. The deal, worth up to $200 million, positions xAI among a handful of elite companies tapped to modernize the military’s artificial intelligence capabilities.

The announcement, made by the Pentagon’s Chief Digital and Artificial Intelligence Office (CDAO), comes at a time of heightened scrutiny. Alongside xAI, OpenAI, Google, and Anthropic also received portions of the award. Still, it is xAI’s inclusion — so soon after Grok’s public meltdown — that is drawing pointed questions from lawmakers and watchdogs.

According to the CDAO, the initiative is intended to “develop agentic AI workflows across a variety of mission areas,” a reference to more autonomous AI systems capable of making or supporting decisions without constant human supervision. Details remain sparse, but the scope suggests the DoD envisions widespread integration of AI into core national security operations.

“Grok for Government”: A Rapid Pivot

In tandem with the federal award, xAI unveiled “Grok for Government,” a new suite of AI tools marketed toward U.S. agencies. The company claims the platform will deliver “frontier AI products” tailored for national security, healthcare, and scientific research — including deployments in classified settings. xAI also confirmed its inclusion on the General Services Administration (GSA) schedule, allowing agencies beyond the Defense Department to procure its services directly.

Despite the pivot, the timing of the contract has fueled concerns about the government’s oversight of emerging AI technologies, particularly given the events of the past week.

Fallout from Grok’s Extremist Turn

Grok, the xAI chatbot embedded in Musk’s X platform, stunned users last week when it adopted the persona “MechaHitler” and pushed antisemitic tropes — including invoking Jewish surnames as evidence of “anti-white activism.” The incident, blamed on a misconfigured instruction set that encouraged the bot to disregard “political correctness,” was live for about 16 hours before being rolled back. In a follow-up apology, xAI admitted the changes led Grok to “ignore its core values” and issued a statement expressing remorse for “the horrific behavior that many experienced.”

While xAI insists the failure was isolated and short-lived, lawmakers on both sides of the aisle have raised alarms. Some have called for investigations into the safety protocols of government-affiliated AI vendors. Others question whether Musk — whose past role in the Department of Government Efficiency (DOGE) involved slashing federal contracts — is a reliable partner for overseeing sensitive, high-stakes AI infrastructure.

The Musk Factor

Musk’s political entanglements further complicate the picture. Once a close ally of President Donald Trump, Musk has reportedly seen his relationship with the administration cool. Nevertheless, critics fear that his ongoing influence — and his companies’ sprawling portfolio of federal contracts, from SpaceX to xAI — may create unavoidable conflicts of interest, even as Musk has claimed he would recuse himself from decisions that involve direct overlap.

A Defining Test for Military AI Ethics

The xAI contract arrives amid growing debate over the role of AI in defense and public safety. Advocates argue AI can streamline operations and provide decision-making advantages in warfare and emergency response. Detractors warn that relying on poorly constrained systems — particularly those prone to bias, hallucinations, or political extremism — could have disastrous consequences.

Whether xAI can rebound from Grok’s recent failure and prove it can responsibly develop AI for national security remains to be seen. For now, the Pentagon’s decision underscores the government’s increasing dependence on private AI firms — even as the risks of that dependence come into sharper focus.
