AI for freedom: Ukraine’s digital strategy in the information war
When Russia unleashed its full-scale invasion in 2022, it bombarded Ukraine not only with missiles and drones but also with an unprecedented barrage of disinformation. Deepfakes, automated propaganda, and artificial intelligence (AI)-generated narratives flooded the information space. Russia’s influence playbook now treats AI as both a factory and a force multiplier. On the production side, generative AI systems can create text, audio, and video at negligible cost, enabling high-quality fabrication of narratives tailored to specific audiences, languages, and regional contexts. On the distribution side, automated networks seed, amplify, and retarget that content across the Russian-founded Telegram messaging app, fringe news sites that mimic legitimate outlets, and social platforms, synchronizing information operations with battlefield events to maximize cognitive shock.
Blended campaigns, mixing authentic and synthetic material, have proven especially effective: a brief AI-generated audio track overlaid onto genuine video, or a fabricated paragraph embedded within an otherwise authentic article, can blur the boundary between truth and fiction. The sheer volume and realism of such composites often mislead audiences before fact-checkers can intervene.
A newer tactic reaches even deeper into the information stack via so-called “LLM grooming.” Here, AI becomes not merely a weapon but the battlefield itself, as Russian actors attempt to infiltrate their narratives into the training datasets of the world’s largest language models, subtly distorting the outputs. To that end, Russia has built a network of websites collectively branded “Pravda” — a vast array of pages not meant for human readers. These sites are clumsy, lacking search bars and stylistic coherence, yet prolific: they reportedly produced 3.7 million articles in the past year alone. Their purpose is not persuasion but contamination — seeding the open Internet with propaganda that will be harvested by web crawlers for future model training. In doing so, the operators hope to embed Russian narratives inside generative AI systems such as ChatGPT, allowing those viewpoints to resurface later under the guise of neutral machine intelligence.
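The defensive counterpart to such contamination is corpus hygiene: screening crawled data against known propaganda sources before it ever reaches a training run. A minimal illustrative sketch, assuming a curated blocklist of domains (the domain names below are invented, not the real Pravda network):

```python
from urllib.parse import urlparse

# Hypothetical blocklist of domains known to host machine-generated propaganda.
BLOCKLIST = {"pravda-example.net", "mirror-news-example.org"}

def is_clean(record: dict) -> bool:
    """Keep a crawled record only if its source domain is not blocklisted."""
    host = urlparse(record["url"]).netloc.lower()
    # Match the blocklisted domain itself and any of its subdomains.
    return not any(host == d or host.endswith("." + d) for d in BLOCKLIST)

def filter_corpus(records: list[dict]) -> list[dict]:
    """Drop documents harvested from blocklisted sources before training."""
    return [r for r in records if is_clean(r)]
```

Real pipelines layer further signals on top of domain blocklists, such as near-duplicate detection and machine-text classifiers, since contaminated content is routinely mirrored onto fresh domains.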
Yet, as with all dual-use technologies, AI can also serve democratic ends. Within three years of the full-scale invasion, Ukraine has turned this same technological frontier into a tool of defence and renewal. Across ministries, startups, and watchdog NGOs, AI now powers early warning systems against coordinated inauthentic behaviour and supports rapid, multilingual messaging that rallies allies abroad and sustains morale at home. Ukraine’s experience demonstrates that democracies can wield AI not only to defend truth but to project it.
AI as shield and megaphone
Ukraine’s information defence ecosystem is deliberately dual: AI operates as both a shield and a megaphone. On the defensive side, machine learning pipelines built by AI startups such as Osavul, Let’s Data, Open Minds, and Mantis Analytics scour millions of online items daily, flagging coordinated activity, deepfakes, and emerging hostile narratives. Several of these companies work directly with the national government, state communication units, and the National Security and Defense Council of Ukraine (NSDC). This real-time coordination allows Ukrainian agencies to dismantle falsehoods before they metastasize.
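Detectors of coordinated inauthentic behaviour often start from a simple signal: many distinct accounts posting near-identical text within a narrow time window. The sketch below illustrates that idea only; the normalization, window, and threshold are invented for the example and are not any vendor’s actual pipeline:

```python
import re
from collections import defaultdict

def normalize(text: str) -> str:
    """Collapse case and punctuation so near-duplicate messages collide."""
    return re.sub(r"[^a-z0-9 ]+", "", text.lower()).strip()

def flag_coordinated(posts, window_secs=600, min_accounts=3):
    """Flag groups of posts whose normalized text recurs across several
    distinct accounts inside a short time window.

    posts: iterable of (account_id, timestamp_secs, text).
    Returns a list of suspicious post groups.
    """
    buckets = defaultdict(list)
    for account, ts, text in posts:
        # Bucket by normalized text plus a coarse time slot.
        buckets[(normalize(text), ts // window_secs)].append((account, ts, text))
    flagged = []
    for group in buckets.values():
        if len({acct for acct, _, _ in group}) >= min_accounts:
            flagged.append(group)
    return flagged
```

One design caveat: fixed bucketing misses bursts that straddle a window boundary, which is why production systems tend to use sliding windows and fuzzier text similarity.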
On the offensive side, the same logic of speed and precision drives Ukraine’s public messaging. Ukrainians increasingly deploy generative media to subtitle speeches, localize visuals to broaden outreach and rally international support, and even experiment with synthetic presenters such as Victoria Shi, the Foreign Ministry of Ukraine’s AI-generated spokesperson, who delivers scripted statements in multiple languages.
Together, these capabilities demonstrate Ukraine’s institutionalization of AI for strategic communication while striving to keep its methods defensible and democratically accountable.
Architecture of defence
Behind this agility lies institutional design. Ukraine’s 2021 Information Security Strategy established the blueprint for countering foreign information manipulation, assigning strategic coordination to the NSDC while embedding operational tasks in line ministries and the Ministry of Culture and Information Policy. Two dedicated bodies — the Centre for Strategic Communication and Information Security and the Centre for Countering Disinformation — now form the civilian and intelligence pillars of this system. The first shapes public-facing messaging and media literacy; the second conducts threat analysis, issues daily briefs, and coordinates inter-agency responses.
Legislative scaffolding has evolved almost as rapidly. The Laws of Ukraine “On Media” (2022) and “On Advertising” (2023), both aligned with the EU’s Digital Services Act, introduced requirements for algorithmic transparency and user rights protection. Together with new regulations on cloud services and digital economy development, they set the legal baseline for AI-driven content moderation and accountability. For a country under attack, such alignment with European digital norms is more than bureaucratic; it anchors Ukraine’s wartime improvisation within a rules-based democratic framework.
Plural innovation
What distinguishes Ukraine’s response is its pluralism. Instead of a centralized counter-propaganda office, the system relies on a constellation of actors linked by data flows and shared urgency. Venture-backed AI firms analyze information threats for the government and defence sector, while NGOs and journalists use the same analytical infrastructure to verify claims independently.
The investigative newsroom Texty.org.ua runs AI-assisted monitoring of roughly one thousand Russian sites and Telegram channels, surfacing weekly propaganda trends. Watchdogs VoxUkraine and Detector Media have transformed their long-standing projects into machine learning classifiers that map the migration of falsehoods across languages and platforms. This distributed architecture prevents monopolization of truth: verification capacity is spread across society, not concentrated in the state. In practice, that diffusion has created an information shield — a civic-technological immune system constantly updating itself to respond to new threats.
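At toy scale, tracking how a falsehood migrates across platforms can be approximated by matching posts against a curated narrative lexicon and tallying hits per platform; the classifiers described above use multilingual machine learning models, but the bookkeeping is similar. A sketch with invented narrative labels and keywords:

```python
from collections import defaultdict

# Hypothetical narrative lexicon; production classifiers would use
# multilingual embeddings rather than hand-picked keywords.
NARRATIVES = {
    "biolabs": {"biolab", "biological", "laboratory"},
    "nazi-smear": {"nazi", "denazification"},
}

def classify(text: str):
    """Assign a post to the narrative whose keywords it matches most."""
    words = set(text.lower().split())
    scores = {label: len(words & kws) for label, kws in NARRATIVES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

def migration_map(posts):
    """Count narrative occurrences per platform: {narrative: {platform: n}}."""
    table = defaultdict(lambda: defaultdict(int))
    for platform, text in posts:
        label = classify(text)
        if label:
            table[label][platform] += 1
    return table
```

A rising count for the same narrative on a second platform is the migration signal the watchdogs look for.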
Education as security infrastructure
Strategic application of technology alone does not explain Ukraine’s resilience. Kyiv treats education as an essential component of national security. Since 2022, media literacy rates have risen sharply, supported by the state programme Filter, the Diia.Education online platform, and the PROMPTO AI Academy, which trains officials, journalists, and students in prompt engineering, fact-checking, and the ethics of synthetic media.
The collaboration of the Ministry of Digital Transformation with top international technical universities is fruitful and promising. Grassroots tech hubs such as AI House and Mantis Analytics host hackathons, where teams prototype new detectors of misleading content. NGOs convert verified datasets into open “disinformation detectors,” accessible to teachers and local media. This convergence of civic engagement, professional training, and government support has made digital literacy a cornerstone of resilience and resistance.
Sovereign language and digital independence
In parallel, Ukraine is investing in technological sovereignty. The Ministry of Digital Transformation, in partnership with the Kyivstar telecommunications operator, is building the country’s first sovereign Ukrainian-language large language model under the newly launched WINWIN AI initiative. Trained on curated national corpora and processed on domestic graphics processing unit (GPU) clusters, the model aims to reduce dependence on foreign AI services while enabling Ukrainian chatbots, legal-act analyzers, and curriculum enhancement.
Deputy Prime Minister Mykhailo Fedorov describes the project as both economic and cultural: ensuring that future Ukrainian AI “thinks and speaks in Ukrainian.” Beyond symbolism, such localization mitigates the risk of foreign influence embedded in external models and keeps sensitive data within national jurisdiction. In essence, Ukraine seeks not only to survive the information war but to own the technological terrain on which it is fought.
Lessons for democratic resilience
The Ukrainian case challenges assumptions that democracies must remain reactive in the face of AI-powered disinformation. Instead, it shows how openness, distributed expertise, and regulatory foresight can outperform autocratic centralization. By fusing machine learning with journalistic verification, Ukraine has accelerated what analysts call “cognitive defence”: the ability to detect, interpret, and respond to hostile influence operations before they can shape public perception.
This capacity rests on three interconnected elements: first, technological multiplicity: a network of state, private, and civic actors sharing data and purpose; second, legal legitimacy: reforms that align with European standards and safeguard rights, even under martial law; and third, human capital: a population that recognizes manipulation and values factual discourse. Together, they transform resilience from an emergency response into a sustainable democratic competence.
Ukraine’s wartime experience reveals a paradox of modern democracy: the same algorithms that threaten truth can also defend it. Machine learning, when embedded in transparent institutions and literate societies, becomes not a weapon of control but an instrument of trust. Not everything is smooth, of course: Ukraine faces both external and internal challenges in implementing, deploying, and sustaining AI tools for strategic communication to deter Russia’s cognitive warfare.
In a world where information itself is a battlefield, Ukraine’s information shield offers more than tactical lessons. It embodies a philosophy of governance suited to the digital century — adaptive, participatory, and ethical. As foreign interference grows more sophisticated, Ukraine reminds other democracies that resilience is not built in isolation or secrecy but through the constant interaction of code, law, and conscience. Artificial intelligence, properly governed, can indeed serve humanity’s most worthy endeavour: the collective will to remain free.