In recent years, artificial intelligence (AI) has evolved from an experimental technology into one of the key drivers of transformation in the pharmaceutical industry and the healthcare system as a whole. Machine learning and big data analysis algorithms are used at all stages of the drug life cycle—from molecular target discovery and molecule design to clinical trials, pharmacovigilance, and personalized therapy. At the same time, AI is increasingly being used directly in medical practice: in diagnosis, disease progression prediction, clinical decision support, and robotic surgery.
Such widespread implementation of AI is accompanied by significant legal, ethical, and regulatory challenges. These include the opacity of algorithms (the "black box" problem), the risk of systematic errors and data bias, the allocation of liability between the physician, the medical organization, and the developer, as well as the protection of patients' personal medical data. This review combines an analysis of AI regulation in pharmaceuticals and medicine in Russia, the European Union, and the United States with practical cases of AI use and case law, forming a comprehensive picture of the current state of regulation and its development trends.
1. The role and areas of application of AI in pharmaceuticals and medicine
AI is used throughout the entire pharmaceutical industry chain:
- Preclinical research: virtual screening of compounds, identification of molecular targets, prediction of toxicity and pharmacokinetics (illustrated in the sketch after this list).
- Drug development: optimization of chemical structures, modeling of molecular interactions.
- Clinical research: optimization of study design, patient selection and stratification, analysis of real-world data.
- Manufacturing: quality control, predictive equipment maintenance, automation of production processes.
- Pharmacovigilance: identification of safety signals based on large data sets.
- Diagnostics: analysis of medical images (radiology, pathology), detection of early signs of disease.
- Clinical decision support: recommendations for diagnosis and treatment, risk assessment, therapy selection.
- Personalized medicine: taking into account the patient's genetic, clinical, and behavioral data.
- Robotic surgery and telemedicine: improving the accuracy of interventions and the accessibility of medical care.
These areas overlap: AI systems developed in the pharmaceutical industry are increasingly influencing clinical decisions and patient safety.
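To make the preclinical items above concrete, below is a minimal sketch of a toxicity-prediction workflow in Python. It is an illustration under stated assumptions, not a production pipeline: real systems would derive molecular descriptors from chemical structures (for example, with cheminformatics toolkits such as RDKit), whereas here the feature matrix and labels are synthetic.

```python
# Minimal sketch of a toxicity-prediction workflow of the kind described above.
# The descriptors and labels below are synthetic, purely for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 32))  # hypothetical molecular descriptors
# Hypothetical toxic / non-toxic labels driven by two descriptors plus noise.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Held-out performance: the kind of evidence regulators increasingly expect.
print(f"ROC AUC: {roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]):.3f}")
```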
2. AI regulation in Russia
2.1. Framework and strategic regulation
The Russian model of AI regulation is framework-based and is formed through a set of strategic and experimental regulatory acts. The key ones are:
- Decree of the President of the Russian Federation No. 490 (2019)[1], which approved the National Strategy for the Development of Artificial Intelligence until 2030. This document set priorities for the implementation of AI in the economy and social sphere, including healthcare, but contains no industry-specific requirements for pharmaceuticals.
- Federal Law No. 258-FZ (2020)[2], which regulates experimental legal regimes (ELRs) for digital innovations. It allows AI solutions to be tested, including in medicine and pharmaceuticals, with a temporary exemption from certain mandatory requirements.
- Federal Law No. 123-FZ (2020)[3], which established a special legal regime for the use of AI in Moscow, making the city a pilot site for the introduction of medical and pharmaceutical AI systems.
Together, these acts form the institutional framework for AI regulation in Russia, but they are not specialized regulations for the pharmaceutical industry and healthcare.
2.2. Industry regulation: healthcare and medical devices
Sectoral regulation of AI in medicine is developing primarily through ethical and technical standards. The Code of Ethics for the Use of AI in Healthcare[4] classifies AI as a high-risk technology. It emphasizes that such systems must not operate fully autonomously and require constant human oversight, reproducibility of results, and ongoing monitoring.
Key requirements include:
- ensuring high quality and representativeness of data;
- validation and clinical validity of models;
- transparency and explainability of algorithms;
- management of the life cycle of AI systems, including their decommissioning.
These provisions are detailed in PNST 961-2024[5], which is dedicated to the ethical aspects of AI use in healthcare. The standard stipulates that:
- AI must be trained exclusively on accurate, verifiable, and representative biomedical data;
- algorithms must be interpretable, and results must be reproducible and accessible for independent verification;
- AI cannot function without human involvement in decisions affecting patient safety and drug therapy (see the sketch after this list);
- developers, owners, and users are responsible for the accuracy, safety, and monitoring of AI at all stages of its life cycle;
- strict data anonymization, minimization of risks to patients, and prohibition of misuse are required.
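As a hedged illustration of the human-involvement requirement referenced in the list above, the Python sketch below routes every AI recommendation through a clinician before it takes effect. All names (Recommendation, issue_recommendation, clinician_approve) are hypothetical, invented for this example rather than taken from any real system or standard.

```python
# Minimal sketch of a human-in-the-loop gate: an AI recommendation is never
# applied automatically; it is routed to a clinician for confirmation.
from dataclasses import dataclass

@dataclass
class Recommendation:
    patient_id: str
    therapy: str
    confidence: float
    model_version: str

def issue_recommendation(rec: Recommendation, review_queue: list) -> None:
    # The system only *proposes*; a human decision-maker must approve.
    review_queue.append(rec)
    print(f"Recommendation for patient {rec.patient_id} queued for clinician review "
          f"(model {rec.model_version}, confidence {rec.confidence:.2f}).")

def clinician_approve(rec: Recommendation, approved_by: str) -> dict:
    # Approval is logged with the responsible physician, supporting later audit
    # and the allocation of liability discussed in this review.
    return {"patient_id": rec.patient_id, "therapy": rec.therapy,
            "approved_by": approved_by, "model_version": rec.model_version}

queue: list = []
issue_recommendation(Recommendation("P-001", "drug A, 10 mg", 0.87, "v2.3"), queue)
decision = clinician_approve(queue.pop(0), approved_by="Dr. Ivanova")
print(decision)
```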
A practically significant innovation is the introduction of the concept of AI entities (developer, owner, operator, user, regulator), as well as the obligation to inform patients about the use of AI in the provision of medical care.
2.3. Regulation of AI in the pharmaceutical industry
In the pharmaceutical sector, AI is regulated primarily within the framework of general standards and related legislation:
- GOST R 59921.6-2021[6], which establishes the basic definitions and characteristics of AI systems;
- Federal Law No. 61-FZ "On the Circulation of Medicines"[7];
- regulatory acts governing the conduct of clinical trials (Ministry of Health of the Russian Federation, Eurasian Economic Commission).
In practice, AI in the Russian pharmaceutical industry is most often used for:
- preclinical modeling and virtual screening;
- predicting toxicity and efficacy;
- analysis of large arrays of real clinical data;
- decision support systems for doctors, which in some cases are classified as medical devices.
If an AI system is recognized as a medical device, it is subject to mandatory registration with Roszdravnadzor and further regulatory control.
2.4. Application practices and trends
In the Russian Federation, AI is most actively developing in the following segments of pharmaceuticals and related fields:
- analysis of clinical practice data;
- pilot projects within the framework of experimental legal regimes;
- production automation;
- pharmacovigilance and identification of safety signals in big data (see the sketch after this list).
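As an illustration of the safety-signal work mentioned in the last item, here is a minimal Python sketch of disproportionality analysis using the proportional reporting ratio (PRR), a standard pharmacovigilance screening measure. The report counts and the alert threshold are assumptions made for the example, not real adverse-event data.

```python
# Minimal sketch of safety-signal detection via the proportional reporting ratio
# (PRR), one standard disproportionality measure used in pharmacovigilance.
def prr(a: int, b: int, c: int, d: int) -> float:
    """a: reports with drug AND event; b: same drug, other events;
    c: other drugs, same event; d: other drugs, other events."""
    return (a / (a + b)) / (c / (c + d))

# Hypothetical 2x2 report counts for one drug-event pair (invented numbers).
a, b, c, d = 40, 960, 200, 48800
value = prr(a, b, c, d)

# A common screening heuristic flags PRR >= 2 (with at least 3 cases) for
# expert review; the threshold here is illustrative.
print(f"PRR = {value:.2f} -> "
      f"{'signal: route to assessors' if value >= 2 and a >= 3 else 'no signal'}")
```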
A key trend is the gradual formation of requirements for the "evidential value" of AI systems, following a logic similar to the European approach, which indicates the convergence of Russian regulatory practices with international standards. As for liability, in Russia financial liability to the patient is usually borne by the medical organization. The use of AI as a "second opinion" does not change the standard of proof, and the key problem remains establishing a causal link between the AI recommendation and the harm caused.
3. European Union: a comprehensive and rigorous approach
The European Union has built the most systematic model of AI regulation, embodied in a document known as the AI Act[8].
AI systems used in medicine and pharmaceuticals are classified as high-risk. They are subject to strict requirements for risk management, data quality, documentation, transparency, and human oversight. Special attention is paid to general-purpose AI models, which are increasingly used in pharmaceutical research.
In addition, the European Health Data Space Regulation (EHDS)[9] establishes uniform principles for the processing of medical data and facilitates cross-border access to information for research and drug development. It entered into force on March 26, 2025, and related implementing acts are expected to follow. The EHDS also strengthens personal data protection requirements. The regulation has a significant impact on the pharmaceutical industry by providing:
- cross-border access to medical data,
- lawful use of data for research and drug development,
- opportunities for the secondary use of data.
3.1. Liability and the role of the European Medicines Agency (EMA): guidance on the use of AI[10]
The updated EU liability rules extend to software and adaptive AI systems, and the EMA's guidance directly addresses the use of AI at all stages of the drug life cycle:
- preclinical studies,
- development and manufacturing,
- clinical trials,
- pharmacovigilance.
The EMA requires:
- interpretability of algorithms,
- data quality assessments,
- bias prevention,
- validation on independent datasets (see the sketch after this list),
- control of continuously learning models.
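The sketch below illustrates two of the EMA expectations listed above: validation on an independent dataset and a simple subgroup bias check. The data, the 'site' split, and the demographic attribute are synthetic assumptions for the example; a real submission would use properly curated clinical datasets.

```python
# Minimal sketch of two checks named above: validation on an independent
# (held-out) dataset and a simple subgroup bias assessment. All data synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.8, size=2000) > 0).astype(int)
site = rng.integers(0, 2, size=2000)  # two hypothetical trial sites

# Independent validation: train on site 0, validate on the never-seen site 1.
train, valid = site == 0, site == 1
model = LogisticRegression().fit(X[train], y[train])
auc = roc_auc_score(y[valid], model.predict_proba(X[valid])[:, 1])
print(f"Held-out AUC (site 1): {auc:.3f}")

# Bias assessment: compare performance across subgroups in the validation set.
group = rng.integers(0, 2, size=2000)  # hypothetical demographic attribute
for g in (0, 1):
    mask = valid & (group == g)
    g_auc = roc_auc_score(y[mask], model.predict_proba(X[mask])[:, 1])
    print(f"Subgroup {g}: AUC = {g_auc:.3f}")  # large gaps warrant investigation
```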
Overall, the European approach appears to be the most rigorous of all the approaches considered.
4. United States: flexibility and focus on innovation
There is no single law on AI in the US, but the US Food and Drug Administration (FDA) is actively shaping practice through recommendations and guidelines.
The key principle is the Total Product Lifecycle (TPLC)[11], which involves monitoring AI systems from development to decommissioning. The FDA allows adaptive algorithms but requires transparency, clinical validity, and continuous monitoring.
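As one hedged example of what lifecycle monitoring can look like in practice, the sketch below checks a deployed model's input feature for distribution drift with a two-sample Kolmogorov-Smirnov test; the data and the alert threshold are assumptions for illustration, not an FDA-prescribed method.

```python
# Minimal sketch of post-deployment monitoring in the spirit of lifecycle
# oversight: detecting drift in a model's input distribution.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
reference = rng.normal(loc=0.0, size=5000)   # feature values at validation time
production = rng.normal(loc=0.3, size=5000)  # hypothetical drifted live data

stat, p_value = ks_2samp(reference, production)
if p_value < 0.01:  # alert threshold is an assumption for this example
    print(f"Drift detected (KS={stat:.3f}, p={p_value:.1e}): "
          "trigger review and possible retraining.")
else:
    print("No significant drift; continue routine monitoring.")
```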
The American model of AI application is characterized by a high rate of innovation and the significant role of judicial practice in shaping the boundaries of liability[12].
5. Cases of AI application in medicine and pharmaceuticals
Cases involving the use of robotic and software-algorithmic systems in medicine demonstrate the complex distribution of responsibility between the doctor, the medical organization, and the technology developer. In common law jurisdictions, such systems are generally not considered independent decision-makers, but algorithm errors may be subject to judicial review under medical care standards and product liability doctrines.
5.1. Clinical decision support and the standard of care
Courts in the United States consistently hold that AI does not replace the physician. The physician remains responsible for complying with the standard of medical care even when relying on an algorithm's recommendations; following AI does not exempt a physician from liability if the decision contradicts generally accepted clinical standards[13].
5.2. Pharmaceuticals and robotic surgery
A notable case is Skounakis v. Sotillo[14], which examined liability for harm caused to a patient as a result of following the recommendations of the Dr. G clinic's proprietary software. The program approved a combination of drugs (phendimetrazine and liothyronine) which, according to the plaintiff, led to the patient's death. The appellate court pointed out that the key issue was not the technical evaluation of the software itself, but whether the recommended treatment complied with clinical standards. The court accepted a clinician's expert opinion that the drug therapy was inappropriate, emphasizing that medical standards apply regardless of whether the recommendation came from a doctor or a computerized decision support system.
Similar approaches can be seen in cases directly related to robotic surgery. For example, in Mracek v. Bryn Mawr Hospital[15], following an operation using the da Vinci surgical system, the plaintiff filed claims against the medical organization and the robot manufacturer, pointing to technical malfunctions during the operation and the subsequent development of erectile dysfunction. The court retained the system's manufacturer as a defendant, as the party bearing heightened responsibility for product quality, but dismissed the claim for lack of a proven causal link between the robot's malfunctions and the harm to the patient's health.
At the same time, in Singh v. Edwards Lifesciences[16], the court took the opposite position, finding that a software defect caused harm during cardiac surgery and holding the developer company liable.
Recent practice confirms that lawsuits against manufacturers of robotic systems remain relevant. In the US, a number of lawsuits have been filed against Intuitive Surgical, the developer of the da Vinci surgical robot, including a case involving the death of a patient after surgery in 2021. The plaintiffs point to design defects in the robot's instruments (in particular, compromised electrical insulation), as well as insufficient information and training for medical personnel. According to open sources, the company is a defendant in dozens of product liability cases, and courts still require plaintiffs to strictly prove a causal link between the system defect and the harm caused[17].
In general, judicial practice shows that neither the doctor nor the developer of the robotic system is automatically exempt from liability. To hold the manufacturer liable, however, evidence of a defect in the technology and its direct impact on the outcome of treatment remains key, while the actions of the doctor and the medical organization continue to be assessed through the lens of compliance with medical care standards and proper control over the use of high-tech instruments. Hospitals may be directly liable for the selection and implementation of AI systems, insufficient staff training, and lack of proper oversight. AI is treated only as part of the healthcare infrastructure.
In summary, AI has become an integral part of modern pharmaceuticals and medicine. The regulatory models of Russia, the EU, and the US differ in strictness and detail but converge on one key point: AI is treated as a high-risk tool that requires human control, evidence, and accountability. Practical application and court cases show that responsibility ultimately lies with individuals and medical organizations. In the coming years, medical care standards and regulatory requirements are expected to integrate AI ever more deeply into clinical and pharmaceutical practice, while simultaneously raising the bar for its safety and reliability.
[1] Decree of the President of the Russian Federation No. 490 of October 10, 2019 "On the Development of Artificial Intelligence in the Russian Federation" (together with the National Strategy for the Development of Artificial Intelligence for the Period until 2030).
[2] Federal Law No. 258-FZ of July 31, 2020 (as amended on July 31, 2025) "On Experimental Legal Regimes in the Field of Digital and Technological Innovations in the Russian Federation."
[3] Federal Law No. 123-FZ of April 24, 2020 (as amended on August 8, 2024) "On conducting an experiment to establish special regulation in order to create the necessary conditions for the development and implementation of artificial intelligence technologies in the constituent entity of the Russian Federation - the city of federal significance Moscow, on the specifics of personal data processing when forming regional data sets and providing access to regional data sets, and amending Articles 6 and 10 of the Federal Law "On Personal Data."
[4] "Code of Ethics for the Use of Artificial Intelligence in Healthcare. Version 2.1" (approved by the Interdepartmental Working Group under the Ministry of Health of Russia on the creation, development, and implementation in clinical practice of medical devices and services using artificial intelligence technologies, protocol No. 90/18-0/117 dated February 14, 2025).
[5] Preliminary national standard in healthcare. Artificial intelligence systems in healthcare. Ethical aspects. Approved and put into effect by Order of the Federal Agency for Technical Regulation and Metrology dated October 25, 2024, No. 68-pnst.
[6] GOST R 59921.6-2021. Artificial intelligence systems in clinical medicine. Part 6. General requirements for operation. Federal Agency for Technical Regulation and Metrology. Date of entry into force: 01.03.2022.
[7] Federal Law No. 61-FZ of April 12, 2010 (as amended on July 23, 2025) "On the Circulation of Medicines."
[8] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonized rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (Text with EEA relevance).
[9] European Health Data Space Regulation (EHDS): https://health.ec.europa.eu/ehealth-digital-health-and-care/european-health-data-space-regulation-eh...
[10] Reflection paper on the use of Artificial Intelligence (AI) in the medicinal product lifecycle. Adopted. Reference Number: EMA/CHMP/CVMP/83833/2023: https://www.ema.europa.eu/en/use-artificial-intelligence-ai-medicinal-product-lifecycle-scientific-g...
[11] Dolores R. Serrano, Francis C. Luciano, Brayan J. Anaya, Baris Ongoren, Aytug Kara, Gracia Molina, Bianca I. Ramirez, Sergio A. Sánchez-Guirales, Jesus A. Simon, Greta Tomietto, Chrysi Rapti, Helga K. Ruiz, Satyavati Rawat, Dinesh Kumar, Aikaterini Lalatsa. Artificial Intelligence (AI) Applications in Drug Discovery and Drug Delivery: Revolutionizing Personalized Medicine. Pharmaceutics, Volume 16, Issue 10, p. 4. Available at: https://www.mdpi.com/1999-4923/16/10/1328.
[12] Price, W. Nicholson, II, Sara Gerke, and I. Glenn Cohen. "Liability for Use of Artificial Intelligence in Medicine." In Research Handbook on Health, AI and the Law, edited by Barry Solaiman and I. Glenn Cohen, p. 150. Cheltenham, U.K.: Edward Elgar Publishing, 2024. Available at: https://repository.law.umich.edu/book_chapters/564/.
[13] Price, W. Nicholson, II, Sara Gerke, and I. Glenn Cohen. "Liability for Use of Artificial Intelligence in Medicine." In Research Handbook on Health, AI and the Law, edited by Barry Solaiman and I. Glenn Cohen, p. 152-166. Cheltenham, U.K.: Edward Elgar Publishing, 2024. Available at: https://repository.law.umich.edu/book_chapters/564/.
[14] Skounakis v. Sotillo, A-2403-15T2 (N.J. Super. Ct. App. Div. Mar. 19, 2018).
[15] Mracek v. Bryn Mawr Hospital, 610 F. Supp. 2d 401 (E.D. Pa. 2009).
[16] Singh v. Edwards Lifesciences, 151 Wn. App. 137, 151 Wash. App. 137, 210 P.3d 337 (Wash. Ct. App. 2009).
[17] In the US, a surgical robot manufacturer is being sued over the death of a patient. Business FM. February 16, 2024. Available at: https://www.bfm.ru/news/544297