Abstract
- The use of artificial intelligence (AI) in medicine and surgery is currently viewed as highly promising.
- However, AI has the potential to change the doctor’s role and the doctor–patient relationship. It can support people’s desire for health, but it can also nudge or push people to behave in a certain way.
- To understand these potentials, we must see AI in the light of social developments that have changed how the role of medicine in a given society is understood.
- The trends of ‘privatisation of medicine’ and ‘public-healthisation of the private’ are proposed as a contextual backdrop to explain why AI raises ethical concerns that differ from those previously raised by new medical technologies and therefore need to be addressed specifically for AI.
Introduction: technology, past and future
Artificial intelligence (AI) is on the rise in almost all areas of society. Medicine is no exception, and orthopaedic surgery, too, is beginning to engage with AI. AI technologies are being tested for diagnostics, robotic surgery, prognosis, outcome prediction, rehabilitation monitoring and surgical training (1). These applications are viewed with optimism and enthusiasm, but at the same time, critical voices warn of limitations and ethical implications (2). At the heart of most of these considerations is the fact that AI is not human, and the question arises as to whether its non-human qualities have the potential to harm patients, physicians or societies at various levels.
AI is a catch-all term encompassing a wide range of approaches and technologies. A complex network of stakeholders is involved, ranging from technical experts, computer scientists and engineers at one end to medical and health practitioners at the other. AI in medicine has been discussed since around the 1970s. Early knowledge-based or expert systems used ontologies, conditional probabilities and Bayesian modelling. However, between 1985 and 2015, there was an important shift in the methodological approach to AI in medicine, which strongly affected how the potential, benefits and risks of AI would be assessed (3). On the one hand, the underlying methods changed from knowledge-based approaches to data-driven ones (4), meaning that the underlying knowledge base shifted from what was perceived as canonical expert knowledge to statistics and probabilities. On the other hand, machine learning (ML) and deep learning (DL) have led to a fundamental shift from statistical support systems based on expert knowledge, hypotheses and models to systems that learn from the available data, aiming to optimise pattern recognition and the predictions that follow from it (5, 6). From the expert's point of view, ML and DL could therefore have the power to eclipse human knowledge by surpassing it. For example, chatbots already give health advice or link to health websites (7). This is causing professional unease and fear of a fundamental change in the patient–doctor relationship. Above all, patient autonomy, the human concerns of the patient beyond quantifiable characteristics, and the professional role of the physician seem to be at stake (8).
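Purely as an illustration of this methodological shift – and not as a description of any system discussed in the cited literature – the following minimal sketch in Python contrasts a hand-written expert rule with a classifier fitted to data. All feature names, thresholds and data points are hypothetical.

```python
# Minimal, hypothetical sketch: knowledge-based rule vs. data-driven model.
from sklearn.linear_model import LogisticRegression

# Knowledge-based approach: an explicit rule encoding expert knowledge.
def expert_rule(systolic_bp: float, age: float) -> bool:
    """Flag elevated cardiovascular risk using a fixed, human-defined rule."""
    return systolic_bp > 140 or (age > 65 and systolic_bp > 130)

# Data-driven approach: the decision boundary is learned from labelled cases,
# not written down by an expert; it is only as good (and as biased) as the data.
X = [[120, 40], [160, 70], [135, 68], [110, 30], [150, 55], [145, 80]]  # [systolic_bp, age]
y = [0, 1, 1, 0, 1, 1]                                                  # 0 = low risk, 1 = high risk
model = LogisticRegression(max_iter=1000).fit(X, y)

print(expert_rule(138, 72))           # verdict of the hand-written rule
print(model.predict([[138, 72]])[0])  # verdict derived from the data alone
```

The point of the contrast is that, in the second case, the decision logic is derived from the data alone and therefore inherits whatever patterns, or distortions, those data contain.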
Criticism of technological innovation in medicine is not new. In the past, new medical technologies have often been viewed with mixed feelings by the medical community. When the sphygmomanometer was introduced to measure blood pressure in order to prevent ‘apoplexy’, The British Medical Journal, which was generally supportive of the approach, noted critically that ‘it may reasonably be doubted whether instrumental research will ever be as useful in the investigation of blood pressure in man as the fingers and ears of a cultivated observer. There is a certain risk that the multiplication of instruments tends to pauperise the senses and to weaken their clinical acuity’ (9, 10). At the same time, other doctors reported an increase in patient demand and their own use of technical devices in diagnostics. According to these proponents, the new technologies were augmenting human senses in a positive way, enhancing physicians’ skills and leading to improved diagnostics and therapies (11, 12).
This tension between the (possible) added value of new technologies on the one hand and, on the other, the fear of de-professionalisation through technology and of de-qualification (the loss of skills that are no longer practised) has been a basso continuo since at least the early 19th century. Nevertheless, AI seems to have added new and different concerns to these reservations. While classical medical technology still requires the human mind to apply and interpret what is perceived as medical knowledge, machine learning is projected as an unsupervised creation of knowledge and an augmentation of capabilities beyond human imagination. There is a fear of losing control of the technology because of its complexity, and it is unclear where responsibility lies if the technology fails or patients are harmed. In addition to these general techno-critical concerns, several applied ethical considerations have been raised, focussing on patient autonomy, data protection or social justice (13, 14, 15).
In this paper, I argue that, although the debate about the ethical implications of AI in medicine shares similarities with older debates about medical technologies, the current debate has a new quality. My thesis is that, over the last 50 years, Western societies have experienced crucial changes in their conceptions of medicine, health and individual responsibility for one's health, which influence the framing and perception of ethics related to AI applications beyond classical bioethical considerations and the disruptive potential of ML and DL. After a brief reflection on current debates about the opportunities and risks of AI at different levels, I explain two antiparallel trends which I label the ‘privatisation of medicine’ and the ‘public-healthisation of the private’. The aim is to understand and explain overarching developments in medicine which, beyond the assessment of their opportunities and risks, influence the ethics debate on AI in surgery.
From risk to ethics
In orthopaedic surgery, the use of AI to process clinical data for prognostic purposes, image interpretation, augmenting surgical skills or improving or automating surgical tasks is already under way and offers promising avenues for the future. The number of articles focussing on potential applications is growing, with the knee, spine and hip being the most studied body regions in which AI could assist in surgery (16). For example, AI may be useful in classifying implants from X-rays (17). At the same time, there are no standard clinical applications yet, and authors reviewing the research to date caution that, despite its bright future, more research is needed before AI can become an integral part of routine clinical practice and research (18, 19, 20). In addition, most authors point out AI’s fundamental limitations. Concrete practical limitations include, for example, systematic biases (caused by the underlying data or mislabelled data), lack of interpretability of the output or results of algorithmic processes, unclear responsibilities or questions of accountability in case of surgical failure (21), and misinterpretation or a lack of harmonised evaluation formats (22).
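How such a systematic bias can arise purely from the composition of the training data may be illustrated with a minimal, hypothetical sketch (not drawn from the cited studies): a classifier trained on data in which one implant type is heavily under-represented will tend to miss exactly that type, even when tested on a balanced population.

```python
# Minimal, hypothetical sketch: a selection bias in the training data
# propagates into the model's predictions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(42)

# Hypothetical single imaging-derived feature; implant type B (label 1) shifts its mean.
def sample(n_a, n_b):
    x = np.concatenate([rng.normal(0.0, 1.0, n_a), rng.normal(1.0, 1.0, n_b)])
    y = np.concatenate([np.zeros(n_a, dtype=int), np.ones(n_b, dtype=int)])
    return x.reshape(-1, 1), y

# Training data with a selection bias: implant type B is heavily under-represented.
X_train, y_train = sample(n_a=190, n_b=10)
model = LogisticRegression().fit(X_train, y_train)

# Test data from a population in which both implant types are equally common:
# the under-represented type is now systematically missed.
X_test, y_test = sample(n_a=500, n_b=500)
print("recall for implant type B:", recall_score(y_test, model.predict(X_test)))  # typically close to 0
```

Such subgroup failures are easily masked by aggregate accuracy figures, which is one reason why harmonised evaluation formats are being called for.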
Most of these anticipated shortcomings are framed in terms of medical risk perception (calculable risk as opposed to uncontrollable danger) (23). Risks are located at different levels: the technical, the legal, the ethical and the social. Physicians using or developing AI need to navigate carefully at the intersection of these four levels, asking whether the technology works well, obeys the law, and respects moral and social values. To provide guidance in this situation, institutional stakeholders have, in recent years, published guidelines on AI use that address these very levels. For example, the European Commission’s Ethics Guidelines for Trustworthy Artificial Intelligence serve the purpose of controlling AI risks in order to deem AI trustworthy (https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai). The guidelines explicitly call for the application of AI to be technically robust, lawful, ethical and respectful of the social environment.
At the heart of medical ethics is respect for other human beings. The canonical guiding questions to be addressed echo the four-principles approach to bioethics proposed by Beauchamp and Childress. These questions are: does the intended AI tool potentially harm a patient, is it beneficial to the patient’s health, does the patient want it to be used, and is its application just and fair (14, 24)?
In line with this orientation, with a broader perspective at both the individual and societal levels, the World Health Organization (WHO) has proposed six ‘key ethical principles for the use of AI for health’ which include: (1) protecting human autonomy, (2) promoting ‘human well-being and safety and the public interest’, (3) ensuring ‘transparency, explainability and intelligibility’, (4) promoting ‘responsibility and accountability’, (5) ensuring ‘inclusivity and equity’ and (6) promoting ‘AI that is responsive and sustainable’ (25). Other authors highlight liability, data protection and privacy as important areas to consider (e.g. 3). Oniani et al. proposed to adopt and extend ‘ethical principles for generative artificial intelligence from military to healthcare’, mapping the WHO guidelines to ethical principles proposed by the US Department of Defense and the North Atlantic Treaty Organisation (NATO). As a result, they proposed considering nine ethical principles when assessing the moral implications of AI technologies, addressing governability, reliability, justice, accountability, traceability, privacy, lawfulness, empathy and autonomy (22).
Privatisation and public-healthisation
This extension and adaptation of the classic four principles to incorporate values such as transparency, public interest, accountability, responsibility, privacy or sustainability cannot be explained by the technology per se. Such an extension seems reasonable only in the light of the social developments that have taken place in the Western world in recent years and that have led to changes in the understanding of the role of medicine in a given society. In what follows, I argue that two ongoing developments are of particular importance. They are closely related to, or part of, the ‘technoscientific transformations’ of health that Clarke et al. have summarised under the term ‘biomedicalisation’. This transformation includes a ‘focus on health itself and the elaboration of risk and surveillance biomedicines’, the ‘production of new individual and collective technoscientific identities’ and an increasing commodification of health services (26).
The first development I would like to consider here is the ‘privatisation of medicine’. By this, I do not mean merely that public funding of individual healthcare is being replaced by private insurance or private funding. In a broader sense, I am referring to the phenomenon that medical practices, procedures and approaches – which used to be either state-directed or carried out in more or less public arenas – have, over the last 50 years, become a private matter. In other words, the shared responsibility for health and healthy living (27) has gradually shifted from the public to the individual. An example of this process is the way in which hypertension and blood pressure measurement have been dealt with. After the introduction of blood pressure monitoring, hypertension was medicalised as a biomedical problem. With the results of the Framingham study, blood pressure became a public issue, with anti-hypertension campaigns and the possibility of measuring it in the family doctor’s office or in healthcare facilities (28). The focus on ideal values led to a growing market for measurement devices. It also shifted the responsibility for measurement and monitoring from doctors to patients. Self-measurement was recommended by physicians and politicians. For example, a health report published by the German government in 2008 stated: ‘... further advantages of self-monitoring are the possibility to control therapies cost-effectively or the involvement and empowerment of the patient in the management of his or her disease’ (29). As a result, devices became smaller and acquired simpler interfaces, leaving the doctor’s office and entering the patient's home. Blood pressure and its measurement thus moved from a shared public–private domain to the individual responsibility of the patient, a private matter in private rooms. However, the simplified user interface also removed the user’s technical involvement. The measurement itself was black-boxed; users were excluded from a dialogue with the instrument. Even routine calibrations and system status indicators are now often reserved for a visiting technician from the manufacturer; the user has no way of knowing whether the instrument reads true. He or she simply has to trust the measurement.
The second development is a somewhat opposite but simultaneous process. I call it the ‘public-healthisation of the private’. Borrowing from Peter Conrad’s idea of medicalisation as the description of previously non-medical problems in medical terms (28, 30), I mean by this that there are private issues that have become subject to public health considerations. An ideal example is the management of body weight. Body weight has been a social and health issue since the 19th century, with a wide variety of physical conditions and interpretations of them as desirable or undesirable. This changed in the early 20th century, when fat and thin were increasingly seen as ‘health opposites’, with heavy bodies, in particular, becoming both a stigma and a medical problem (31). At the same time, around 1900, the scale became a tool of public interest, the quintessential instrument for measuring the body. It was publicly displayed and used for fun at fairs, and chocolate manufacturers sold scales that dispensed chocolate as they weighed the user. This playful public use became a serious domestic matter, especially after the Second World War. Bathroom scales were brought into the home, and personal body weight could be monitored in these private spaces. However, private use coincided with public interest in body weight, and being overweight became both stigmatised and framed as unhealthy (32, 33). Since the 1970s at the latest, obesity has been seen not only as a matter of personal well-being but also as a chronic disease, ideally managed by private means in the home, including measuring and recording one’s weight. Yet it has not been fully medicalised and is still seen as a public health issue (34). While measured in private, individual weight is under constant public scrutiny.
Thus, in the face of both the ‘privatisation of medicine’ and the ‘public-healthisation of the private’, medicine becomes private and the private becomes a public issue. Both processes are supported by technologies of measurement and control (in these cases, the devices for measuring blood pressure and weight). In these partly contradictory but simultaneous social developments, AI tools extend the power of existing technologies beyond measuring and counting. Until now, the quantification of health and health risks has been seen by doctors as a tool for advising patients on the basis of data and probabilities. Concepts of health and illness, patient preferences and the doctor's expertise regularly converged and influenced each other, and the next course of action was negotiated. AI can now trigger a loop effect that amplifies its power: it could become an advisory tool whose advice is not negotiated because it is seen as superior to human judgement; and because its advice is not negotiable, it is taken for granted and no longer questioned. Thus, AI tools have the potential to support people’s desires for health, along with the potential to nudge or push people to behave in a certain way.
Resulting tensions
In other words, AI is gaining some control over human behaviour, and if it is not the AI itself, it is the minds behind the AI who, at the same time, may no longer feel responsible or accountable. In bariatric surgery, for example, AI could be used to assess risks, improve monitoring during surgery, and manage post-operative complications and outcomes (35). At the same time, each user runs the risk of subscribing to the technical, social and ethical norms inherent in the algorithm’s definition of good vs bad, desirable vs undesirable behaviour and body weight. In addition, one must be aware that machine learning applications in healthcare can be seen as part of a data ecosystem that connects multiple datasets and the underlying interests of diverse stakeholders (36). As a result, there is a constant risk of health information being connected, reinterpreted and misused, for example to nudge or even punish behaviour that is considered unhealthy. However, once the technology is in place, it may become virtually impossible to opt out or not use it – either because that would lead to worse treatment, so that it would be considered unethical not to use the technology, or because the social pressure to use the existing AI tools for one’s own sake becomes so strong that people feel compelled to subscribe to norms inherent in the algorithm (for example, ideal weight or blood pressure). Incidentally, these risks are not specific to AI, but to any measurement and quantification of health. What is specific to AI is that it becomes difficult to account for responsibility for advice and nudging and that ML and DL may develop health logics that are far removed from human behaviour and needs.
This leads to some fundamental tensions and dilemmas. To enable patient well-being and autonomy, there may be good reasons to use AI technology. At the same time, like many other data-driven surveillance or monitoring technologies, temporal AI technologies (active over a long period of time and addressing longitudinal patterns of behaviour or physical status) tend to compromise patient privacy and to control patient behaviour because their findings and recommendations are taken for granted. As a result, patients may change their behaviour to conform to the rules of the algorithms, which may benefit their health status even if it undermines their personal needs and preferences (37). The data used by AI are abstractions from reality, the result of a virtualisation that requires interpretation. Data that do not conform to a once-defined standard are read as an alarm signalling an unhealthy state. The idea behind the data is an ideal type of human being. Those who do not fit this ideal may be stigmatised or forced to become ideal in public. In private, they adopt the ideal type, may stigmatise themselves and adjust their behaviour to conform to the algorithm. This is where AI also becomes a question of power and fairness.
In a nutshell, users and developers of AI technology need to be aware that the data fed to AI and the results of AI learning processes are not value-free. The social pressures and agendas underlying the data are reflected in the recommendations made by AI. Nevertheless, it is clear that any doctor who does not use AI will also allow his or her own social and moral values to influence treatment recommendations and practices. Indeed, it would be unfair to accuse AI of being opaque in this respect while implying that human thinking is value-free and transparent. However, just as one would ask a doctor to explain his or her reasoning, AI technologies should be as accessible and explainable as possible with regard to their decisions and recommendations. It would be an unsatisfactory situation if a doctor had to explain to a patient that the algorithm recommended surgery but could not say why.
Conclusion
Although the medical community's reactions to AI resemble its reactions to innovative technologies in the past, the context for evaluating AI in medicine differs from earlier phases of implementing medical technologies in medical practice. The differences affect not only the professional role and self-image of physicians (e.g. the extent to which they trust and negotiate health data and their consequences) but also how we consider the ethical implications of AI use for individual patients and society as a whole.
First of all, ML and DL seem to be perceived as an insult or affront to human expertise. Fears of de-professionalisation, which have occasionally accompanied innovative technologies in medicine in the past, now concern a new kind of eclipse of human expertise by self-learning computer systems. However, given that most systems are currently trained under supervision, some objectors may be attacking a straw man.
Secondly, and perhaps more importantly, not only is AI technology new to medicine, but the use, understanding and perception of medicine and its role in human well-being have changed as a result of at least two somewhat antiparallel developments occurring simultaneously. The consequences of privatisation and public-healthisation in medicine affect the evaluation of AI technologies. In addition to the classical principles of modern bioethics, which focus on the patient–doctor relationship, other values are being addressed in moral analyses of the consequences of AI. I argue that these values come to the fore because the social developments described above meet the presumed power of ML and DL tools. Patient safety, well-being, autonomy and equitable access to beneficial technologies remain paramount. However, other values such as transparency and privacy are being introduced in the light of changing perceptions of health and the felt, implied or urged need to use AI technologies.
Consequently, from a normative perspective, AI-induced changes in the doctor–patient relationship and in the understanding of medicine, health and behaviour should be monitored and evaluated beyond the honeymoon phase of AI applications. The danger of epistemic narrowing should also be considered: a doctor may have access to patient information that the AI system does not take into account (e.g. posture, mood, hair condition, spots on the skin, asymmetric gait, suggestions implicit in the patient's statements). Particularly in practical, skills-intensive disciplines such as surgery, the consequences of delegating tasks to AI need to be analysed in terms of the role of the doctor and issues of accountability. Finally, the extent to which AI learning algorithms reflect assumptions about disease prevention and human behaviour, and how this might affect human autonomy, including the freedom to choose lifestyles that do not conform to society’s expectations of healthy behaviour, needs to be monitored.
ICMJE Conflict of Interest Statement
The author declares that there is no conflict of interest that could be perceived as prejudicing the impartiality of the instructional lecture.
Funding Statement
This instructional lecture did not receive any specific grant from any funding agency in the public, commercial or not-for-profit sector.
References
1. Lisacek-Kiosoglous AB, Powling AS, Fontalis A, Gabr A, Mazomenos E, & Haddad FS. Artificial intelligence in orthopaedic surgery. Bone and Joint Research 2023 12 447–454. (https://doi.org/10.1302/2046-3758.127.BJR-2023-0111.R1)
2. Cobianchi L, Verde JM, Loftus TJ, Piccolo D, Dal Mas F, Mascagni P, Garcia Vazquez A, Ansaloni L, Marseglia GR, Massaro M, et al. Artificial intelligence and surgery: ethical dilemmas and open issues. Journal of the American College of Surgeons 2022 235 268–275. (https://doi.org/10.1097/XCS.0000000000000242)
3. Matsuzaki T. Ethical issues of artificial intelligence in medicine. California Western Law Review 2018 55 Article 7. Accessible at https://scholarlycommons.law.cwsl.edu/cwlr/vol55/iss1/7
4. Peek N, Combi C, Marin R, & Bellazzi R. Thirty years of artificial intelligence in medicine (AIME) conferences: a review of research themes. Artificial Intelligence in Medicine 2015 65 61–73. (https://doi.org/10.1016/j.artmed.2015.07.003)
5. Bicer EK, Fangerau H, & Sur H. Artificial intelligence use in orthopedics: an ethical point of view. EFORT Open Reviews 2023 8 592–596. (https://doi.org/10.1530/EOR-23-0083)
6. Chang AC, & Limon A. Chapter 1: Introduction to artificial intelligence for cardiovascular clinicians. In Intelligence-Based Cardiology and Cardiac Surgery, pp. 3–120. Chang AC, & Limon A, Eds. Academic Press 2024. (https://doi.org/10.1016/B978-0-323-90534-3.00010-X)
7. Hopkins AM, Logan JM, Kichenadasse G, & Sorich MJ. Artificial intelligence chatbots will revolutionize how cancer patients access information: ChatGPT represents a paradigm-shift. JNCI Cancer Spectrum 2023 7. (https://doi.org/10.1093/jncics/pkad010)
8. Heyen NB, & Salloch S. The ethics of machine learning-based clinical decision support: an analysis through the lens of professionalisation theory. BMC Medical Ethics 2021 22 112. (https://doi.org/10.1186/s12910-021-00679-3)
11. Evans H. Losing touch: the controversy over the introduction of blood pressure instruments into medicine. Technology and Culture 1993 34 784–807. (https://doi.org/10.1353/tech.1993.0006)
12. Martin M, & Fangerau H. Listening to the heart's power: designing blood pressure measurement. Icon 2007 13 86–104.
13. Sonar A, & Weber K. Lernen aus der Vergangenheit. Die prägende Rolle der frühen Jahre der KI-Entwicklung für heutige Debatten (auch in der Medizin). In Künstliche Intelligenz und Gesundheit: Ethische, Philosophische und Sozialwissenschaftliche Explorationen, pp. 133–154. Sonar A, & Weber K, Eds. Stuttgart: Steiner 2022.
14. Keskinbora KH. Medical ethics considerations on artificial intelligence. Journal of Clinical Neuroscience 2019 64 277–282. (https://doi.org/10.1016/j.jocn.2019.03.001)
15. Neri E, Coppola F, Miele V, Bibbolino C, & Grassi R. Artificial intelligence: who is responsible for the diagnosis? Radiologia Medica 2020 125 517–521. (https://doi.org/10.1007/s11547-020-01135-9)
16. Federer SJ, & Jones GG. Artificial intelligence in orthopaedics: a scoping review. PLoS One 2021 16 e0260471. (https://doi.org/10.1371/journal.pone.0260471)
17. Ren M, & Yi PH. Artificial intelligence in orthopedic implant model classification: a systematic review. Skeletal Radiology 2022 51 407–416. (https://doi.org/10.1007/s00256-021-03884-8)
18. Benzakour A, Altsitzioglou P, Lemée JM, Ahmad A, Mavrogenis AF, & Benzakour T. Artificial intelligence in spine surgery. International Orthopaedics 2023 47 457–465. (https://doi.org/10.1007/s00264-022-05517-8)
19. Yin J, Ngiam KY, & Teo HH. Role of artificial intelligence applications in real-life clinical practice: systematic review. Journal of Medical Internet Research 2021 23 e25759. (https://doi.org/10.2196/25759)
20. Guni A, Varma P, Zhang J, Fehervari M, & Ashrafian H. Artificial intelligence in surgery: the future is now. European Surgical Research 2024 1. (https://doi.org/10.1159/000536393)
21. Hashimoto DA, Rosman G, Rus D, & Meireles OR. Artificial intelligence in surgery: promises and perils. Annals of Surgery 2018 268 70–76. (https://doi.org/10.1097/SLA.0000000000002693)
22. Oniani D, Hilsman J, Peng Y, Poropatich RK, Pamplin JC, Legault GL, & Wang Y. Adopting and expanding ethical principles for generative artificial intelligence from military to healthcare. NPJ Digital Medicine 2023 6 225. (https://doi.org/10.1038/s41746-023-00965-x)
23. Schlich T. Objectifying uncertainty: history of risk concepts in medicine. Topoi 2004 23 211–219. (https://doi.org/10.1007/s11245-004-5382-9)
24. Beauchamp TL, & Childress JF. Principles of Biomedical Ethics, 7th ed. New York: Oxford University Press 2013.
25. World Health Organization. Ethics and Governance of Artificial Intelligence for Health: WHO Guidance. Geneva: World Health Organization 2021. Accessible at https://www.who.int/publications/i/item/9789240029200
26. Clarke AE, Shim JK, Mamo L, Fosket JR, & Fishman JR. Biomedicalization: technoscientific transformations of health, illness, and U.S. biomedicine. American Sociological Review 2003 68 161–194. (https://doi.org/10.1177/000312240306800201)
27. Resnik DB. Responsibility for health: personal, social, and environmental. Journal of Medical Ethics 2007 33 444–445. (https://doi.org/10.1136/jme.2006.017574)
28. Conrad P, & Kawachi I. Medicalization and the pharmacological treatment of blood pressure. In Contested Ground: Public Purpose and Private Interest in the Regulation of Prescription Drugs, pp. 26–41. Davis P, Ed. New York, Oxford: Oxford University Press 1996.
29. Janhsen K, Strube H, & Starker A. Gesundheitsberichterstattung des Bundes, Heft 43: Hypertonie. Berlin: Robert Koch-Institut 2008. (https://www.rki.de/DE/Content/Gesundheitsmonitoring/Gesundheitsberichterstattung/GBEDownloadsT/hypertonie.pdf?__blob=publicationFile)
30. Conrad P. The shifting engines of medicalization. Journal of Health and Social Behavior 2005 46 3–14. (https://doi.org/10.1177/002214650504600102)
31. Hutson DJ. Plump or corpulent? Lean or gaunt? Historical categories of bodily health in nineteenth-century thought. Social Science History 2017 41 283–303. (https://doi.org/10.1017/ssh.2017.4)
32. Frommeld D. Die Personenwaage. Ein Beitrag zur Geschichte und Soziologie der Selbstvermessung. Bielefeld: Transcript 2019.
33. Bivins R, & Marland H. Weighting for health: management, measurement and self-surveillance in the modern household. Social History of Medicine 2016 29 757–780. (https://doi.org/10.1093/shm/hkw015)
34. Ciciurkaite G, Moloney ME, & Brown RL. The incomplete medicalization of obesity: physician office visits, diagnoses, and treatments, 1996–2014. Public Health Reports 2019 134 141–149. (https://doi.org/10.1177/0033354918813102)
35. Bellini V, Valente M, Turetti M, Del Rio P, Saturno F, Maffezzoni M, & Bignami E. Current applications of artificial intelligence in bariatric surgery. Obesity Surgery 2022 32 2717–2733. (https://doi.org/10.1007/s11695-022-06100-1)
36. Gerhards H, Weber K, Bittner U, & Fangerau H. Machine learning healthcare applications (ML-HCAs) are no stand-alone systems but part of an ecosystem: a broader ethical and health technology assessment approach is needed. American Journal of Bioethics 2020 20 46–48. (https://doi.org/10.1080/15265161.2020.1820104)
37. Fangerau H, Hansson N, & Rolfes V. Electronic health and ambient assisted living: on the technisation of ageing and responsibility. In Cultural Perspectives on Aging, pp. 49–62. Andrea H-E, Ed. Berlin, Boston: De Gruyter 2022. (https://doi.org/10.1515/9783110683042-004)