Pentagon Prepares to Integrate AI Bot Grok to Strengthen Military Technologies

The Grok AI bot is set to be integrated with the Pentagon’s network alongside Google’s generative AI engine, as announced by U.S. Secretary of Defense Pete Hegseth, according to the New York Post.

"We will soon have leading global AI models across every unclassified and classified network within our department," the official stated.

Hegseth mentioned that the chatbot will begin operating within the Department of Defense by the end of January. It will “provide all necessary data” from military IT systems, including intelligence information.

In his speech, the politician emphasized the need to streamline and accelerate technological innovation in the military. He pointed out that the Pentagon has "battle-tested operational data accumulated over two decades of military and intelligence operations."

“Artificial intelligence is only as good as the data it receives. We will ensure its availability,” Hegseth added.

The Secretary of Defense expressed his desire to see “responsible AI systems” in the Pentagon, promising to “cut through the overgrown bureaucratic thicket and clear the mess—preferably with a chainsaw.”

“We must maintain the dominance of American military AI so that no adversary can leverage the same technology to threaten our national security or our citizens,” declared the Pentagon chief.

The announcement came just days after Grok was embroiled in another scandal, this time for generating sexual content.

Both Malaysia and Indonesia blocked access to the chatbot. Regulatory bodies in the EU, UK, Brazil, and India are demanding an investigation into Grok’s role in disseminating deepfakes.

The British organization Internet Watch Foundation noted that its analysts discovered “criminal images” of children aged 11 to 13, which were allegedly created using the chatbot.

Previously, Grok faced criticism for spreading misinformation and dubious claims.

In December, the chatbot provided unreliable information about a mass shooting at Bondi Beach in Australia. When questioned about a video showing a bystander, Ahmed al-Ahmed, grappling with the shooter, the AI responded:

“It appears to be an old viral video of a man climbing a palm tree in a parking lot, possibly to trim it. As a result, a branch fell on a damaged car. Searches across various sources have not verified the location, date, and injuries. This might be staged; authenticity is unverified.”

In July, users noticed that the neural network relied on Elon Musk's opinions when formulating responses on topics such as the Israel-Palestine conflict, abortion, and immigration policy.

Observations suggest that the chatbot was deliberately configured to reflect Musk’s political views when addressing controversial matters.

Earlier, the billionaire claimed that his startup would rewrite “all human knowledge” to train a new version of Grok, as there is currently “too much junk in any base model trained on unfiltered data.”

Subsequently, Grokipedia emerged—an AI-based online encyclopedia “oriented toward truth.”

It is worth noting that in November, users highlighted bias in Grok 4.1, which significantly overrated Musk's abilities.