- New communications systems will give commanders faster battlefield information and speed up decision-making.
- Ministry of Defence (MOD) awards a contract worth up to £86 million to a British-based SME for advanced tactical communication systems, such as radios and tablets.
- Contract creates 12 UK defence industry jobs and builds on successful deployment in Estonia.



AI radios
Jokes aside, this is a common thing in tech / software at the moment.
You can make fantastic software and systems, but unless you slap an AI label on it, big companies and organisations will not want to pay for it, or will pass you over for a product that says it has it, even if it’s dogshit.
AI (or, more accurately, machine learning) can bring value, but so can a lot of other features.
Yeah, we’re currently having discussions at my company about how we’re going to respond if potential clients start asking about AI or putting it in their RFPs.
And this isn’t a new problem. We make a product that can be hosted on a cloud server if you want to. Because of the nature of the product, this is the stupidest idea imaginable. We straight up tell people not to do it. This is something that absolutely needs to be on-prem. But we made it able to run remotely, because sometimes buyers will put out an RFP that says “System must be cloud native.”
That line gets put there by a CTO who can barely open their email, but keeps seeing the word “Cloud” in Business Insider and WSJ, and thinks it must be the future because that’s where their photos get backed up to. No one in their right mind wants it, but we have to offer it or else someone else gets the sale.
Just don’t define the term.
Do you have a remote call center or Dev team? Then your software uses AI (abroad Indians).
If not, maybe you could try chewing on your computer. That’s gross? Then your software is AI (actually inedible).
Object recognition and classification is more narrowly AI, and from the description this thing might have it.
I’m not sure it’d be a good thing, of course - it’s very unlikely it can reliably classify everything, which will create a contrast between what the combatant perceives with their own senses and what the screen hints at. That’s a very ergonomically debilitating effect, the way night lighting blinds you to everything outside the illuminated area. Or try playing a flight simulator with a realistic interactive cockpit and an arcade HUD with less information overlaid on top of it: it’s guaranteed you’ll mostly ignore the former and the information it gives you.
Sure, and they’re talking about that like something they might add to it down the line, because at the end of the day these systems are usually just android apps, so you can theoretically add anything.
In practice, what’ll most likely happen is that they’ll try that capability out, decide that it sucks, and quietly ditch it. Or, they’ll roll it out anyway in order to keep the government happy, and then commanders will just tell their troops not to use it. Militaries have always known how to work with and around bad equipment.
If they have to shove in a dumb AI app to get the funding for some actually very useful military equipment approved, well, that’s military procurement for you. It would be nice if the current UK government weren’t so hell-bent on shoving AI into everything, but the realistic alternatives currently are “Nazis” and “Sparkling fascists.”
Yes, and in that light one can think that defense corruption traditions have an evolutionary reason: they maintain this ability. It doesn’t really matter whether something stupid is mandated for a purely honest, merely misguided reason or for a corrupt one - either way, corruption requires the ability to avoid the outcomes.
My job now is fielding daily messages from our VP about when I can deliver AI projects.
If we don’t, others will poach our clients. Okay, but if we deliver shit, they’ll use it once and stop paying for it.