Recently, TEAM LEWIS VP Matt Robbins gave his analysis of the communications lessons to be drawn from the US Department of War/Anthropic fallout. While this very public bust-up may feel uniquely American in its scale and intensity, the underlying issues need to be considered seriously in the UK and Europe, as governments accelerate defence modernisation and integrate artificial intelligence into national security decision-making and delivery.
Proactive communications, before the crisis
The usual adage is “act in haste, repent at leisure,” but we should give more weight to a communications corollary: fail to be sufficiently proactive, and repent when reactivity is all that’s available to you. With low public understanding of AI in the UK and EU, but increasing concerns about what it means for jobs, ethical decision-making, and sovereignty, there is an opening to inform and build confidence by communicating about ethical positions and safeguards. That opportunity only really exists before a crisis erupts: it is hard to explore ideas while firefighting. And with any fast-moving, revolutionary technology, it’s a question of “when” not “if” there will be controversy.
Building trust in a wary polity
The UK and EU approach the intersection of AI, defence, and public trust from a different cultural and political starting point than the United States. European publics have historically been more sceptical of both large technology companies and military expansion. Consequently, governments tend to place greater emphasis on regulation, privacy protections, and ethical oversight. This scepticism, combined with high energy costs, is the commonly cited reason why UK- and EU-based companies form such a small share of the AI market, despite Google DeepMind's UK origin.
So the core impetus of governments engaging with AI may be reversed. Where the US government demands full autonomy, evidence of strong safeguards and a corporate, ethics-based embrace of AI governance and restriction may be actively appealing to European politicians.
This approach chimes with their previously held assumptions, building trust based on shared priors. It is always important to remember what “your stakeholders’ stakeholders” are demanding: for UK and EU politicians, this is their voters. Our research at TEAM LEWIS shows that the UK public is concerned about preparedness and believes the prospect of being at war in the near future is rising.
Equally, they are uncertain which organisations deserve trust, and see strong transparency and governance measures as key to building that trust. This is a step "behind" (or, more sceptical than) where American voters are on accepting AI's role in defence: American voters increasingly accept that it will play a major role, but ask questions about control. Those pursuing AI integration in the UK and Europe must help the politicians setting policy and awarding funds to make their voters comfortable with those decisions and awards.
What this means for defence tech companies
The tension is reflected in Europe’s policy landscape. The European Union’s AI Act reflects a broad instinct to classify AI systems by risk and impose safeguards around transparency, accountability, and human oversight. The UK has taken a somewhat lighter-touch regulatory approach than Brussels. However, British policymakers have still repeatedly stressed the importance of “responsible AI” and democratic accountability in military applications.
So, while the “sword and shield” approach that Matt highlighted matters, European audiences are often less persuaded by arguments rooted purely in technological superiority or national competitiveness. Instead, public support tends to depend on whether institutions can convincingly demonstrate restraint, proportionality, and governance. In practice, that means defence technology companies operating in the UK and EU face an even greater burden to explain not just what their systems can do, but what boundaries they are prepared to maintain.
This means the execution of the “sword and shield” framework that Matt described (what elements fit into the “sword” versus the “shield”) changes. In Europe, ethical positioning can absolutely strengthen a company’s reputation, but only if it appears consistent and operationally credible. Companies that present themselves as values-driven but then appear opaque about military partnerships risk accusations of hypocrisy from activists, journalists, and policymakers. Those that openly articulate guardrails and governance structures and lead the way on setting transparency and accountability standards, will find that becomes a competitive advantage, rather than a procurement obstacle.
European governments themselves are balancing contradictory pressures. Russia's invasion of Ukraine fundamentally changed the defence conversation across the UK and Europe, accelerating military spending and creating stronger political support for advanced defence technologies. AI-enabled intelligence, logistics, cyber operations, and autonomous systems have rocketed up the priority list. Voters who may once have been instinctively cautious about defence investment are now more accepting of the security imperative, as our TEAM LEWIS Defence Legitimacy and Investment Index quantifies for the first time.
At the same time, though, European citizens remain deeply sensitive to issues of surveillance, civil liberties, and corporate power. Concerns about facial recognition, algorithmic bias, and automated decision-making are more politically mainstream in Europe than in much of the United States. A Parliamentary petition against UK Prime Minister Keir Starmer's planned introduction of digital ID attracted almost 3 million signatures, one of the largest ever. Any perception that governments or defence contractors are deploying AI without democratic oversight can rapidly become politically toxic, especially when it is used in domestic security applications.
Proactive communication in these markets is, therefore, increasingly inseparable from risk management. Many firms offering dual-use technologies have until recently been reluctant to openly market their defence and security technologies. But with public agreement on the need for defence rising, this is changing – witness Anduril's high-profile UK capability marketing in 2025. Those who continue to prioritise risk aversion (i.e. little or no public communication) in their strategies will lose the substantial market opportunity. It will go instead to those who deploy politically aware communications and marketing as an essential part of risk mitigation. In a market where support for innovation is jostling alongside deep institutional scepticism, trust is becoming as strategically important as the technology.
Ready to put trust, transparency and governance at the centre of your narrative? Contact one of our defence and advanced technology experts to discuss how we can support your communications strategy.