Feasibility of Criminal Liability for AI-Powered War Weapons and the Challenge of Impunity before the International Criminal Court

Document Type : Original Article

Authors

1 Assistant Professor, Faculty of Law and Political Science, University of Tehran, Tehran, Iran.

2 Ph.D. Candidate in Criminal Law and Criminology, Faculty of Law and Political Science, University of Tehran, Tehran, Iran.

3 Ph.D. Student in Criminal Law and Criminology, University of Tehran, Tehran, Iran.

Abstract

Delegating the authority to kill human beings to artificial intelligence in wartime is one of the most challenging emerging legal issues. For the first time, legal scholarship confronts a phenomenon that, although created by humans, thinks and decides independently of them and may, after several stages of self-learning, operate in ways that even its creator can no longer comprehend or control. This article therefore examines the obstacles the International Criminal Court must overcome in order to apply direct criminal responsibility and the doctrine of command responsibility in cases where artificial intelligence weapons have led to the killing of civilians and to war crimes, as well as the steps that should be taken to remove those obstacles. Using a documentary approach and a descriptive-analytical method, the article concludes that the current provisions of the Rome Statute need to be revised in light of artificial intelligence, because the unique nature of such systems gives rise to what has been called the responsibility gap. To adapt to these new conditions, the international community must require manufacturers to provide maximum transparency and must prohibit the production of certain types of artificial intelligence. As for killings caused by AI weapons capable of self-learning after production, the Court's Statute must explicitly and exhaustively specify the rules of criminal responsibility so that the use of AI weapons does not lead to impunity.

References


Abouzari, M. (1400/2021). Law and Artificial Intelligence. Tehran: Mizan Publishing. [In Persian]
Takhshid, Z. (1400/2021). An introduction to the challenges of artificial intelligence in the field of civil liability. Private Law Studies, 18(1), 227-250. [In Persian]
Rajabi, A. (1398/2019). Liability in artificial intelligence. Comparative Law Studies, 10(2), 449-466. [In Persian]
Seifi, B. (1399/2020). Criminal liability for the use of autonomous unmanned vehicles from the perspective of international humanitarian law. Afagh-e Amniyat, 13(49), 29-54. [In Persian]
Salehabadi, R. (1398/2019). Determining the limits of artificial intelligence research from the perspective of rights and the public interest. Master's thesis, Faculty of Law, Shahid Beheshti University. [In Persian]
Abdollahi, M., Shahriari, A., Babaei, Y., & Yaghoubi, E. (1396/2017). The International Criminal Court: Reflections on the Accession of the Islamic Republic of Iran (1st ed.). Tehran: Expediency Discernment Council Secretariat Publications. [In Persian]
Azizi Basati, M., & Sokouti, M. (1394/2015). An analysis of the impact of autonomous weapons on international peace and security. Foreign Policy, 29(3), 35-56. [In Persian]
Foroughinia, H. (1398/2019). Obstacles and implementation challenges of the International Criminal Court in strengthening the enforcement of international humanitarian law. Ta'ali-ye Hoquq Quarterly, 35(1), 37-74. [In Persian]
Kesmati, Z. (1387/2008). Principles of attributing individual criminal responsibility within the framework of the Statute of the International Criminal Court. Doctoral dissertation, Faculty of Law and Political Science, Allameh Tabataba'i University. [In Persian]
Hallevy, G. (1398/2019). Criminal Liability of Robots: Artificial Intelligence in the Realm of Criminal Law (F. Shahideh & T. Ghavanloo, Trans.). Tehran: Mizan Publishing. [In Persian]
Vaseghi, M., Hekmatnia, M., & Mohammadi, M. (1398/2019). Civil liability arising from the production of autonomous AI-based robots. Islamic Law, 16(60), 231-258. [In Persian]
Valipour, A., & Esmaeili, M. (1400/2021). Feasibility of civil liability of general artificial intelligence for damages in civil law. Legal Thought Quarterly, 2(6), 1-16. [In Persian]
Acquaviva, G. (2022). Autonomous weapons systems controlled by artificial intelligence: A conceptual roadmap for international criminal responsibility. The Military Law and the Law of War Review, 60(1), 89-121.
Asaro, P. (2012). On banning autonomous weapon systems: Human rights, automation, and the dehumanisation of lethal decision-making. International Review of the Red Cross, 94(886), 687-709.
Barber, I. A. (2020). Autonomous weapons systems & accountability: Rethinking criminal responsibility for war crimes at the ICC. SOAS Law Journal, 7, 5.
Bathaee, Y. (2017). The artificial intelligence black box and the failure of intent and causation. Harvard Journal of Law & Technology, 31(2), 889-938.
Yi, C. (2019). The concept of international criminal responsibility for individuals and the foundational transformation of international law. In Philosophical Foundations of International Criminal Law: Foundational Concepts (pp. 65-139). Brussels: Torkel Opsahl Academic EPublisher.
Hallevy, G. (2015). Liability for Crimes Involving Artificial Intelligence Systems. New York, NY: Springer International Publishing.
International Human Rights Clinic (2015). Mind the Gap: The Lack of Accountability for Killer Robots. Zurich: IHRC.
Lansford, T., & Muller, T. (2012). Political Handbook of the World. SAGE Publications.
Larina, E. S., & Ovchinsky, V. S. (2018). Artificial Intelligence. Big Data. Crime. Moscow: Knizhny Mir.
Matthias, A. (2004). The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and Information Technology, 6(3), 175-183.
Mosechkin, I. N. (2019). Artificial intelligence and criminal liability: Problems of becoming a new type of crime subject. Bulletin of St. Petersburg University Law, 10(3), 461-476.
Mueller, J. P., & Massaron, L. (2021). Artificial Intelligence for Dummies. New Jersey: John Wiley & Sons.
Scharre, P. (2018). Army of None: Autonomous Weapons and the Future of War. W. W. Norton & Company.
Schmitt, M. N. (2013). Autonomous weapon systems and international humanitarian law: A reply to the critics. Harvard National Security Journal Features, 1(4), 1-37. Available at http://centaur.reading.ac.uk/89864/
Selbst, A. D. (2020). Negligence and AI's human users. Boston University Law Review, 100, 1315.
Song, S. H. (2012). The role of the International Criminal Court in ending impunity and establishing the rule of law. UN Chronicle, 49(4).
Swanson, G. (2018). Non-autonomous artificial intelligence programs and product liability: How new AI products challenge existing liability models and pose new financial burdens. Seattle University Law Review, 42(3), 1201-1222.