
The Pentagon has approved Elon Musk’s Grok AI for classified military operations while threatening Anthropic with penalties for refusing to remove ethical safeguards from its Claude AI.
Commenting on this development, Jurgita Lapienytė, chief editor at Cybernews, said: “Safety rules are being thrown out”.
She explained: “Fearing that Claude could be used to surveil American citizens or to develop weapons of mass destruction, the leading US AI company has backed out of its deal with the Pentagon and is now facing penalties for standing its ground.
Yes, the government shouldn’t allow any company to dictate the terms of defence operations. But should AI companies be punished for having safety rules? If the biggest market players are forced to their knees, smaller companies will stop having safety rules, too. Will being ‘safe’ become bad for business?”
Machines are making kill decisions
Lapienytė added that AI is currently not only untrustworthy but also dangerous when left unsupervised. In military operations, it can also dehumanise decision-making by offering officers and soldiers gamified experiences and by shifting personal responsibility away from them.
Approval based on politics, not security
“You’d expect your government to pick the best technology and go to great lengths to discuss the best possible solutions for American citizens and defence goals. What seems to have happened here is that, in the heat of public debate, another company got fast-tracked, while at the same time facing hefty fines and even bans in other countries.”
This might be a security issue for other countries, too
“When the world’s most powerful military starts using AI without being transparent about exactly how, one can begin to wonder just how much US operations overseas are influenced by the algorithm. Every country in conflict with the US should keep a close eye on this development.”

