War is moving into cities — how to protect civilian residents

Serviceman of the Ukrainian Armed Forces with a drone. Photo: General Staff of the Armed Forces of Ukraine

Warfare is increasingly moving into cities and becoming high-tech, relying on drones, algorithms, and autonomous systems. Under these circumstances, democratic states face a difficult dilemma: how to protect their citizens while complying with international humanitarian law, particularly when the enemy deliberately operates among civilians.

This was stated by Aviv Bar Zohar, an expert on UAV countermeasures and airspace defense and a reserve colonel in the Israeli Air Force, in a column for Novyny.LIVE.

Operational dilemmas in dense urban areas

When a democratic state deploys robotic platforms in a densely populated city while trying to adhere to international humanitarian law (IHL) and military ethics, several levels of risk arise.

Aerial or ground vehicles in unarmed roles such as reconnaissance, surveillance, and data collection usually create little legal pressure. The likelihood of violating basic principles at this stage is low, and the view from above can even help reduce harm to civilians.

Remotely piloted systems, however, are a different matter. When a human remains in the loop, these systems can be treated, in legal terms, as classic manned weapons. In Israel, such vehicles are often called remotely piloted aircraft rather than drones: the emphasis is on the fact that a human actually controls the decision to use force, and the operator's location does not change that. In this case, the structure of the legal analysis hardly changes.

The real dilemma begins with lethal autonomous platforms. It is especially acute when the enemy deliberately embeds itself in residential neighborhoods, stores munitions in the basements of apartment buildings, or launches attacks from the grounds of schools and hospitals. In such a scenario, there is a serious risk that the algorithm will not reliably distinguish combatants from civilians, and the basic IHL principle of distinction may not be upheld.

Therefore, any democracy considering fully autonomous strike platforms, in which the machine selects and attacks targets on its own, must first answer whether distinction is ensured in practice. Only then can it move on to analyzing proportionality, precautionary measures, and the other principles of humanitarian law.

Red lines and guidelines for commanders and operators

Aviv Bar Zohar outlined several principles that, in his view, should guide decisions at all levels of decision-making.

The first is a sound relationship between people and technology, perhaps one of the most difficult challenges for modern armies. That relationship cannot be built on blind enthusiasm for "smart" technology; it must rest on predictability, that is, on understanding how the platform behaves in typical situations, where it makes mistakes, and how it acts in extreme scenarios.

The second principle is meaningful human involvement. Even if an autonomous system performs most of a task, including the potential use of force, a human must remain part of the decision-making process. This makes it possible to trace the last human action, compare the commander's intent with the machine's actual behavior, and explain why the system acted as it did. For that, every key action must be logged and stored in a way that survives the operation, even if the onboard media are destroyed.

The third benchmark is the ability to interrupt the mission at any time. The commander or operator must have real control over stopping the process, not merely the appearance of it. If a platform has moved beyond this level of control, the acceptable limit of its use has been exceeded.
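Taken together, these three principles translate into concrete engineering requirements. The following is a minimal sketch in Python of how a human-in-the-loop engagement gate, an abort switch, and an append-only audit log might fit together; every class, field, and threshold here is hypothetical, invented for illustration rather than drawn from any fielded system.

```python
import json
import threading
import time
from dataclasses import dataclass, asdict

@dataclass
class ProposedEngagement:
    """A target proposal produced by the platform's autonomy stack (hypothetical)."""
    target_id: str
    classification: str   # e.g. "combatant" or "unknown"
    confidence: float     # classifier confidence, 0.0 to 1.0
    location: tuple       # (lat, lon)

class AuditLog:
    """Append-only record of every key decision, written to storage that
    outlives the mission even if the vehicle itself is destroyed."""

    def __init__(self, path: str):
        self._path = path
        self._lock = threading.Lock()

    def record(self, event: str, payload: dict) -> None:
        entry = {"t": time.time(), "event": event, **payload}
        with self._lock, open(self._path, "a") as f:
            f.write(json.dumps(entry) + "\n")

class EngagementGate:
    """Human-in-the-loop gate: the machine may only propose, a human approves,
    and an abort signal stops the process at any point."""

    def __init__(self, log: AuditLog):
        self._log = log
        self._abort = threading.Event()  # the commander's kill switch

    def abort(self, who: str) -> None:
        """Interrupt the mission; takes effect before any pending engagement."""
        self._abort.set()
        self._log.record("abort", {"by": who})

    def request_approval(self, proposal: ProposedEngagement, operator: str) -> bool:
        self._log.record("proposed", asdict(proposal))
        if self._abort.is_set():
            self._log.record("suppressed_by_abort", {"target": proposal.target_id})
            return False
        # A real system would block on an operator console here;
        # the decision rule below is a stub for illustration only.
        approved = (proposal.classification == "combatant"
                    and proposal.confidence >= 0.95)
        self._log.record("decision", {"target": proposal.target_id,
                                      "operator": operator,
                                      "approved": approved})
        return approved
```

The essential design choice is that the log records the proposal, the human decision, and any abort as separate events, so that every use of force can later be reconstructed even if the vehicle does not return.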

Where the law lags behind the urban battle

Legally speaking, the biggest gaps arise not because the tools are new but because of the nature of modern conflicts, which are no longer classic clashes between state armies. The enemy deliberately mingles with civilians, uses human shields, wears no uniform, and launches missiles or drones from hospitals, schools, and government buildings.

There are formal rules in IHL that can cover many of these situations. However, the basic assumption of clear separation between civilians and combatants often does not correspond to the reality of hostilities. The large-scale destruction of residential areas in the Gaza Strip in recent years shows how difficult it is to implement the principles of distinction and proportionality when civilians and combatants are physically intermingled.

The second issue is whether autonomous strike platforms require technology-specific regulatory regimes. In 2024, the UN Secretary-General submitted a report on lethal autonomous weapons systems to the General Assembly; in the submissions it compiled, states emphasized problems of reliability and accuracy, as well as the ability of such systems to distinguish between military and civilian targets. Also in 2024, within the framework of the Convention on Certain Conventional Weapons, the Group of Governmental Experts reached a preliminary consensus on the operational characterization of lethal autonomous weapons systems as systems capable of selecting and engaging targets without operator intervention.

According to the expert, existing rules should generally be applied and interpreted rather than a new regime constructed for each technology. There will, however, be situations in which narrow, targeted rules are necessary, as recent discussions on autonomous weapons confirm.

Rules of engagement and C2 for autonomous solutions

Even when autonomous platforms are present on the battlefield, states can and should prescribe rules of engagement and command procedures.

This requires a clear understanding of the limitations: operators must undergo structured training and know what the system can and cannot do. An analogy with large language models is apt here: they can invent facts, yet if the user understands that limitation, the tool remains useful.

Trust between the user and the technology is key to the effective combat use of these systems. Meaningful human involvement must be maintained, and responsibility cannot stop with the operator who presses the button; it extends to commanders and to the defense industry that builds these solutions. Commanders must do more than "sign the IHL document"; they must apply its principles when planning and executing operations.

At the same time, we must honestly acknowledge that civilian casualties cannot always be avoided when the enemy deliberately uses human shields and hides behind civilian infrastructure. In some cases, military necessity takes precedence, and people who are not participating in hostilities may unfortunately become part of the permissible collateral damage under humanitarian law. This reality exists regardless of whether an autonomous platform or a "classical" weapon is used.

Key risks of urban combat with autonomous systems

  • The extent of algorithmic errors often remains hidden until the consequences become politically and morally unacceptable. Investigations into digital tools used to automate the selection of thousands of targets in Gaza showed how the drive to accelerate the targeting cycle can end in minimal human intervention and high casualties.
  • High collateral damage becomes normalized. According to human rights activists and UN experts, some digital targeting tools used in recent operations may have relied on incomplete data and overly broad tolerances for civilian deaths, which calls into question whether such practices comply with the principles of distinction and proportionality.
  • Responsibility becomes blurred between developers, commanders, and algorithms. When a technical system effectively makes the decision, the temptation to say "the machine decided" becomes very dangerous. This is why some states, in their submissions to the UN, explicitly stress the need for clear accountability standards for autonomous weapons.
  • The arms race in autonomy puts additional pressure on democracies. They watch non-democratic regimes experiment with fewer restrictions and find themselves caught between the desire to keep up technologically and their obligation to their own societies to adhere to IHL.

Accountability and verification of decisions

The key elements of accountability appear long before the first combat use. Developers must be given a clear framework covering the rules of engagement to be encoded in the algorithm, the interfaces operators will work with, and the way humanitarian law restrictions are technically encoded.
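One way to make that framework concrete is to express such restrictions as declarative, machine-checkable data rather than burying them in control code. Below is a hypothetical, deliberately simplified sketch; the rule names, thresholds, and coordinates are invented for illustration, and passing such a check is necessary but never sufficient, since proportionality and precautions still require human judgment.

```python
import math

# Hypothetical, deliberately simplified rules-of-engagement configuration.
# In practice such constraints would be drafted with legal advisers and
# versioned together with the software that enforces them.
RULES_OF_ENGAGEMENT = {
    "min_classification_confidence": 0.95,  # below this, escalate to a human
    "forbidden_target_classes": ["civilian", "medical", "unknown"],
    "no_strike_zones": [  # illustrative coordinates only
        {"name": "hospital_district", "center": (32.08, 34.78), "radius_km": 1.0},
    ],
}

def _distance_km(a: tuple, b: tuple) -> float:
    # Equirectangular approximation; adequate at short, urban distances.
    lat = math.radians((a[0] + b[0]) / 2)
    dx = math.radians(b[1] - a[1]) * math.cos(lat) * 6371.0
    dy = math.radians(b[0] - a[0]) * 6371.0
    return math.hypot(dx, dy)

def roe_violations(target_class: str, confidence: float,
                   position: tuple, roe: dict) -> list:
    """Return human-readable reasons a proposed strike would violate the
    configured rules; an empty list means no coded rule was tripped."""
    reasons = []
    if target_class in roe["forbidden_target_classes"]:
        reasons.append(f"target class '{target_class}' is forbidden")
    if confidence < roe["min_classification_confidence"]:
        reasons.append(f"confidence {confidence:.2f} is below the threshold")
    for zone in roe["no_strike_zones"]:
        if _distance_km(position, zone["center"]) <= zone["radius_km"]:
            reasons.append(f"position lies inside no-strike zone '{zone['name']}'")
    return reasons
```

Expressed this way, the restrictions can be reviewed by legal advisers without reading the control code, diffed between software versions, and tested in simulation before any deployment.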

Units should be trained to use AI-enabled solutions and to understand the law of armed conflict at a practical level. The technology itself should be as explainable and traceable as possible: it must be possible to stop it at any time, to register anomalies, and to record every key decision.
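What "registering anomalies" might mean in practice is comparing the platform's live telemetry against the envelope the commander actually approved and flagging any deviation for review. A minimal hypothetical sketch, with all names invented for this example:

```python
from dataclasses import dataclass

@dataclass
class IntentEnvelope:
    """Commander-approved bounds for one mission leg (hypothetical)."""
    max_speed_mps: float
    area: tuple        # (lat_min, lat_max, lon_min, lon_max)
    weapons_free: bool

def anomalies_against_intent(telemetry: dict, intent: IntentEnvelope) -> list:
    """Compare live telemetry with the commander's stated intent and return
    a list of anomalies to be logged and surfaced to the operator."""
    found = []
    if telemetry["speed_mps"] > intent.max_speed_mps:
        found.append("speed exceeds the approved envelope")
    lat, lon = telemetry["position"]
    lat_min, lat_max, lon_min, lon_max = intent.area
    if not (lat_min <= lat <= lat_max and lon_min <= lon <= lon_max):
        found.append("platform has left the approved operating area")
    if telemetry["weapon_armed"] and not intent.weapons_free:
        found.append("weapon armed without weapons-free authorization")
    return found
```

Any non-empty result would be written to the same kind of audit trail discussed above and pushed to the operator, keeping the human, not the algorithm, as the point of accountability.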

Trust in such systems is built over time. Democracies should not jump straight to fully autonomous lethal platforms. The logical progression is to start with simpler functions, such as surveillance and detection in unarmed roles, and only then move, step by step, to more sensitive applications.

Autonomy does not cancel responsibility. Rather, it makes three things even more important: an honest look at the reality of urban combat, clear rules for algorithms and people alike, and a willingness to explain every shot after the fact, even when it was fired not by a pilot but by code.
