Digital transformation and the sustainable and ethical use of Artificial Intelligence in trauma and emergency surgery. Results from a World Society of Emergency Surgery worldwide investigation
Francesca Dal Mas; Maurizio Massaro; Stefano Campostrini
2025-01-01
Abstract
Context: Artificial Intelligence (AI) has the potential to accelerate the implementation of the United Nations Sustainable Development Goals (SDGs) across a variety of industries. In the health sector, AI can augment clinical decision-making, especially in highly specialized disciplines like trauma and emergency surgery, where several factors, such as the patient's identity, the causes of the trauma, and the patient's care preferences, may be unknown, and time pressure is high. Still, the attitudes of medical professionals, patients, technology providers, developers, and policymakers toward the effective development and use of digital AI-based tools are significantly shaped by ethical concerns. Methods: This investigation examines in depth the ethical challenges of using AI in surgical decision-making in trauma and emergency contexts, including data privacy and transparency, technical robustness and safety, responsibility, and human agency. The study was conducted through a survey endorsed by the World Society of Emergency Surgery (WSES). A full research protocol was developed by the principal investigators, starting from the most recent academic and practice literature. The European Commission's Ethics Guidelines for Trustworthy Artificial Intelligence and the Technology Acceptance Theory were the primary sources used to develop the protocol and survey structure. Besides clinicians, experts in social and health statistics, epidemiology, public health and healthcare management, law, innovation, medical ethics, and information technology were invited to join the leading team. The survey protocol was published after a blind review process. The investigation, advertised among the WSES's 900+ members, collected responses from 650 physicians operating in 72 countries across five continents.
Results: The findings emphasize the necessity of data privacy, transparency, and explainability, as well as robust governance, collaborative efforts among stakeholders, and accountability in all decision-making processes, to promote the appropriate, sustainable, and responsible use of AI in surgery. Discussion: The results enabled the development of a conceptual model that reconciles the ethical obligations to safeguard patients and guarantee sustainable healthcare outcomes with the technological advancements of AI. The conceptual model developed from the study may help policymakers, health institutions, and universities encourage the sustainable use of AI-based applications in critical health disciplines, such as surgery, and foster health innovation and digital transformation, bearing in mind the need to meet the SDGs.
Documents in ARCA are protected by copyright and all rights are reserved, unless otherwise indicated.



