Introduction. States currently proclaim the need for active development of scientific research in the field of artificial intelligence and for the creation of corresponding technologies, to be widely deployed in all areas of human activity. Some leaders of major countries declare that the quality of national security depends directly on the level of development and use of artificial intelligence technologies. At the same time, no one officially acknowledges the global risks posed by an artificial intelligence capable of making and implementing decisions beyond human control. For this reason, no legal mechanisms have yet been formed to mitigate the threats posed by efforts to create a fully autonomous artificial intelligence.

Objective. The present study was conducted to outline the global risks arising from the uncontrolled development and spread of artificial intelligence technology, and to substantiate the need to criminalize certain actions in the development of artificial intelligence technologies.

Research methodology, methods and techniques. The research employed a variety of general scientific and specialized scientific methods traditionally used in the humanities. The dialectical and formal-logical methods provided a comprehensive study of artificial intelligence, establishing not only the positive results of its implementation but also the significant risks to society stemming from its uncontrolled spread. Among the specialized methods used were the system-structural and comparative-legal methods, surveys, expert evaluation, and others.

Results. The study determined that in the medium term a full-fledged analogue of natural intelligence will be created, possessing the capabilities of independent analytical thinking and learning. In a changing environment, this does not exclude the complete or partial destruction of the planet's population. The introduction of real legal responsibility, primarily criminal, for the creation of an "autonomous intelligence" is not envisaged in either international or national law, leaving relations in the field of public safety unprotected.

Scientific novelty. The study substantiates the social danger of the uncontrolled spread of artificial intelligence technologies worldwide and proposes establishing criminal responsibility for such acts, by analogy with the prohibition on the creation and proliferation of weapons of mass destruction.

Practical significance. The proposals formulated here may be taken into account in drafting bills amending and supplementing existing criminal law on crimes against public safety, as well as against the peace and security of mankind.
Keywords: artificial intelligence, natural intelligence, human intelligence, threat to public safety, legal regulation