
Military Emerging Disruptive Technologies: Compliance with International Law and Ethical Standards

Marco Marsili
Writing – Original Draft Preparation
2023-01-01

Abstract

The major powers focus on science and technology development in order to build military power with strategic impact. High-technology weapons, also available to non-state actors, are assumed to shape the nature of warfare in the twenty-first century. Semiconductors, cloud computing, robotics, and big data are among the components needed to develop the AI that will model and define the future battlespace. Artificial intelligence will be applied to nuclear, aerospace, aviation, and shipbuilding technologies to provide future combat capabilities. The incorporation of AI into military systems and doctrines will shape the nature of future warfare and, implicitly, decide the outcome of future conflicts. Before fielding a weapon system, military and political leaders should consider how it can be used and whether it should be used in a certain manner. A strong and clear regulatory framework is needed. The automatic processing of plans and orders (automatic control) requires policy control. Autonomous machines need some level of human control and accountability. Imagine what could happen if a system like HAL 9000 or the WarGames supercomputer could make an autonomous decision. Some fictional stories have imagined a dystopian future in which machine intelligence grows until it surpasses human intelligence and machines exert control over humans. As Freedman concludes in The Future of War, most claims of the military futurists are wrong, but they remain influential nonetheless. Humans tend to give more and more responsibility to machines in collaborative systems. In the future, the automatic design and configuration of military operations will be increasingly entrusted to machines. Given human nature, if we recognize the autonomy of machines, we cannot expect anything better from them than the behavior of their creators. So why should we expect a machine to ‘do the right thing’?
In the light of what has been discussed here, it could be argued that some military applications of EDTs may jeopardize human security. The total removal of humans from the navigation, command, and decision-making processes in the control of unmanned systems, and thus from participation in hostilities, makes humans obsolete and dehumanizes war. Because of the nature and the technological implications of automated weapons and AI-powered intelligence-gathering tools, boots on the ground will likely become the exception. The cyber soldier will probably be a human vestige behind the machine. The rules that will apply to the battlespace are unknown. Increased machine autonomy in the use of lethal force raises ethical and moral questions. Is an autonomous system safe from error? Who will bear the responsibility and accountability for a wrong decision: politicians, lawmakers, policymakers, engineers, or the military? Guidelines are needed, and ethical and legal constraints should be considered. A lexicon and definitions of terms are essential, and the international community should find common, undisputed, and unambiguous legal formulations. The distinctions between conventional/unconventional, traditional/non-traditional, kinetic/non-kinetic, and lethal/non-lethal seem outdated. A knife, the neck of a broken bottle (if it cuts your jugular), even a fork, a hammer, a baseball bat, or a stone – according to the biblical story, David kills Goliath by hurling a stone from his sling and striking him in the center of the forehead – are all unconventional, kinetic, and potentially lethal weapons. Nevertheless, distinguishing between weapons, their effects, and their consequences is necessary in order to avoid a cascade effect and undesirable outcomes. LAWS can accelerate a new arms race, lead to proliferation among illegitimate actors – non-state actors and terrorist groups – enable cyber-attacks and hacking, and lower the threshold for the use of force.
The debate on the application of technology to warfare should cover international law, including IHL, as well as ethics, neuroscience, robotics, and computer science: it requires a holistic approach. It is necessary to investigate whether the new domains are actually comparable to the classical ones, and whether current rules are applicable or new ones are necessary. Further considerations, deriving from the extension of the battlefield to the new domains of warfare, concern the use of artificial intelligence in the decision-making process, which, in a fluid security environment, needs to be on target and on time in both the physical and virtual informational spaces. This is not just a legal debate but also a moral and ethical one that should be deepened. A multidisciplinary approach would be useful for designing the employment framework for new warfare technologies.
2023
Intelligent and Autonomous: Transforming Values in the Face of Technology
Files in this record:

9789004547223_VIBS_390_Ch02_Marco Marsili.pdf (not available)
Type: Publisher's version
License: Publisher's copyright
Size: 285.23 kB
Format: Adobe PDF

Intelligent & Autonomous (Marco Marsili).pdf (open access)
Type: Pre-print
License: Open access (no restrictions)
Size: 425 kB
Format: Adobe PDF

Intelligent & Autonomous (Marco Marsili).pdf (open access)
Type: Post-print
License: Open access (no restrictions)
Size: 425 kB
Format: Adobe PDF

Documents in ARCA are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10278/5043792