Al Jazeera: What do Ukraine’s robot soldiers mean for the future of warfare?


Remote-controlled weapons have been used for some time, but AI is now on the cusp of making battlefield decisions.

Attendees watch a robot on the second day of the Road to URC – Security and Defense Dimension conference at the Rzeszow University of Technology in Rzeszow, Poland, on April 27, 2026 [Wojtek Jargilo/EPA]

By Nils Adler

Published on 1 May 2026

In a scene reminiscent of a computer war game, three battle-fatigued soldiers, dressed in white snow camouflage, emerge from a war-torn alley with their hands raised above their heads.

They crouch down, following the orders being blasted at them, fear and shock etched across their faces as they stare down the barrel of a machine gun mounted on a so-called ground robot.


This footage, released in January by Ukrainian defence company DevDroid, is said to show the moment Russian soldiers were captured by a Ukrainian robot using artificial intelligence.

In April, Ukrainian President Volodymyr Zelenskyy said that, for the “first time in the history of this war, an enemy position was taken exclusively by unmanned platforms – ground systems and drones”.

“Ground robotic systems have already carried out more than 22,000 missions on the front in just three months,” he wrote in a post on X, alongside images of green machines with tank tracks and weapons mounted on top.

But for analysts who have studied the intersection of artificial intelligence (AI) and warfare, the footage reflects an expected evolution – one that will unfold far beyond the front lines in Ukraine as the world wrestles with the ethical implications of autonomous weapons and how to control them.

    UAVs, naval drones and robot dogs

    For years, militaries have used ground robots primarily for bomb disposal and reconnaissance.

    But in Ukraine, their role has expanded rapidly, with some brigades reporting that up to 70 percent of front-line supplies are now delivered by robotic systems rather than soldiers.

    These machines transport ammunition, food and medical supplies, and evacuate wounded troops from dangerous positions.


    Yet the sight of robotic systems moving across the battlefield is part of a much broader shift in warfare – one that has been building for decades.

    The modern debate about AI in warfare was largely driven by the rise of US unmanned aerial vehicle (UAV) operations in the early 2000s.

    In 2002, the US used the MQ-1 Predator drone to carry out one of the first targeted drone strikes in Afghanistan, marking a turning point in how wars could be fought remotely.

    Its use expanded rapidly throughout the 2000s and peaked in the late 2000s to mid-2010s, particularly in Pakistan, Yemen and Somalia.

    As AI has advanced, the debate has moved beyond remote-control operations.

    The focus shifted towards systems which can help identify targets, prioritise strikes and guide battlefield decisions, raising deeper questions about how much autonomy should be delegated to machines.

    Analysts say the question of autonomy must remain central, rather than being overshadowed by rapid technological developments, however striking the sight of increasingly anthropomorphic machines on the battlefield may be.

    “These technologies are here to stay,” Toby Walsh, an AI expert at the University of New South Wales, told Al Jazeera. He described AI-driven military operations as “the third revolution of warfare”.

    The transformation is also spreading beyond land targets.

    Naval drones packed with explosives have already reshaped battles in the Black Sea, while autonomous underwater systems are being developed for surveillance, mine clearance and sabotage missions by militaries worldwide.

    Robotic dogs, meanwhile, are already being tested for surveillance, reconnaissance and bomb-disposal missions, with some experimental versions even fitted with weapons.

    Human involvement

    In recent years, the emergence of fully autonomous drones or so-called “killer robots” has triggered a fierce debate after a United Nations report suggested that Turkish-made Kargu-2 loitering munition drones, operating in fully autonomous mode, had identified and attacked fighters in Libya in 2020.

    The incident prompted intense discussions among experts, activists and diplomats worldwide, as they grappled with the moral and ethical implications of a machine making – and executing – the decision to take a human life.

    However, regulatory debate needs to focus more on the use of semi-autonomous weapon systems, “where humans are still so-called in the loop”, Anna Nadibaidze, a postdoctoral researcher in international politics at the Centre for War Studies, University of Southern Denmark, told Al Jazeera.

    A major concern, she said, is whether “enough time and space” is being given to the “exercise of human judgement that’s necessary in the context of warfare”.

    The extent of human involvement is often something observers have to take militaries at their word on; a difficult task when their actions leave trust in short supply, Walsh said.

    In the case of ground robotics in Ukraine, a human operator has, so far, remained in control, directing machines that can still be halted by obstacles such as uneven terrain.

    However, when AI is involved in the decision-making process, as is the case in Israel’s attacks on Gaza and the wider region, the scale of attacks which have resulted in “huge collateral damage and civilian casualties for a small number of military targets” challenges the rules of international humanitarian law and, in particular, the idea of proportionality, Walsh said.

    The issue, Nadibaidze said, is that it is hard to enforce rules on the use of AI in warfare because it is essentially “a matter of each military to decide” what role the human should play, “and there isn’t enough international debate on that”.

    An April report by the Stockholm International Peace Research Institute warned that the AI supply chain is also fragmented, global and heavily dependent on civilian technologies, further complicating efforts to govern or control military uses of AI.

    The United States Department of Defense is steadily incorporating privately developed software systems into its war apparatus.

    In the middle of last year, the Defense Department awarded OpenAI a $200m contract to integrate generative AI into the US military, alongside similar $200m contracts for xAI and Anthropic.

    “If we’re not careful, warfare will be much more terrible, much more deadly, a much quicker, much faster thing that humans can no longer actually really be participants in, because humans won’t have the speed, won’t have the accuracy or the ability to respond,” Walsh warned.


    Ukraine as a testing ground

    Technology and AI are not inherently harmful, experts say – it is how they are used that matters.

    In Ukraine, ground robotic systems have also been used to rescue civilians and provide logistical support in heavily mined and treacherous conditions.

    Yet what is unfolding on the front line is, in many ways, a testing ground, and the international community will need to look ahead to how these technologies might be applied and regulated in future conflicts.

    There is also room for cautious optimism. Despite the “moral failure” over Israel’s actions in Gaza, Walsh said, there is a recognition in the international community that these issues must be addressed, reflected in a series of UN meetings focused on regulating lethal autonomous weapons systems.

    The United Nations Institute for Disarmament Research (UNIDIR), an autonomous body within the UN which conducts independent research on disarmament and international security, is set to meet in June to examine the implications of AI for international peace and security.

    It is not the first time new weapons technologies have threatened to upend the rules-based order, said Walsh, pointing to chemical weapons as an example. While imperfect, international agreements were eventually put in place to bring those under some level of control.

    “There are a lot of actors based in the Global South that do want regulation, so there might be regional initiatives forming,” said Nadibaidze, adding that even if such efforts do not initially include major powers or leading tech developers, they could still help to shape emerging norms.
