AWARE BIOMEDICAL TECHNOLOGIES. Ethics-driven Expertise and Innovation

by:

Fabrizia Abbate

Table of contents

Introduction. The three fundamental “Cs”

  1. Consciousness and valuational progress
  2. The Black Mirror nightmare: bodily unawareness and ethical regression
  3. Corporeality, Fem Tech, Data Feminism

Conclusions. Looking ahead, to future capabilities

Abstract

This proposal is built around three Cs: consciousness, corporeality and capabilities, the three elements ethics needs in order to transcend the mere “functioning” of technologies. The concept of awareness is fundamental to our argument. We find it in Paul Ricoeur’s portrayal of Oedipus at Colonus and in Philip Kitcher’s notion of valuational progress. Taking into account the crisis of expertise and its genuine quest for efficiency, we explore why and how this awareness is essential at the fundamental level of social interaction. A fictional episode from the television series Black Mirror, together with the spread of fem-tech and recent AI medical devices, helps us understand the ethical issues at stake.

Introduction. The three fundamental “Cs”

From a political-strategic point of view, especially in the complex scenario of development balances between Europe, the United States and China, the recent European Artificial Intelligence Act, with its “risk-based approach” and “regulatory sandboxes”, is of enormous importance for investments in research and production of robotic systems and applied artificial intelligence[1].

The future of “androids” linked to social and medical robotics was showcased at the 2025 Expo in Osaka, Japan. The first words of the new Pope, Leo XIV, concerned the social question raised by the revolution in AI and new technologies (he said he chose his name to continue the work of Leo XIII, whose encyclical Rerum Novarum addressed the social question of the first great industrial revolution).

The standards that Europe has decided to adopt first (reaffirming a human-centered vision of technological development) reopen the debate on the regulation and protection of new technological discoveries and applications, especially in the health and social sphere. Here, the ethical debate surrounding the functions of devices, as well as democratic and egalitarian access to new forms of progress, has a moral impact on how risks and limitations are perceived, and on the awareness and social acceptance of AI systems and integrated robotics. However, these tasks of protection and safety do not express the full essence of ethics[2]. There is a surplus that is expressed not only in terms of duties, rules and sanctions, but also in broader terms of purpose and freedom, in terms of capacities and possibilities, and ultimately in terms of the good.

This proposal is developed along the lines of the three “Cs”: Consciousness, Corporeality, Capabilities. In this ethical framework, consciousness as awareness is the key concept, taking into account the crisis of expertise and its genuine quest for efficiency and effectiveness. We will also explore why and how this awareness is essential at the fundamental level of social interaction. A fictional episode of the famous TV series Black Mirror, as well as certain biomedical devices that use artificial intelligence, will help us understand the ethical issues at stake.

We will proceed in small steps.

1. Consciousness and valuational progress

The topic of an ethics of technology in the age of the artificial is difficult, risky and, above all, potentially ambiguous. When we want to demonstrate vigilance toward new digital technologies and the use of AI, we often turn to ethics as a means of combating the potential negative consequences that concern us. We turn to ethics to defend ourselves against something that we cannot, do not want to, and must not stop doing, but whose danger we sense: a particular form of ethics resting entirely on a negative freedom, which, with Isaiah Berlin, we might call the freedom to say no, to set limits, to curb, to prevent, to discipline; in short, a defensive freedom.

To understand this, we must consider privacy, the defence of personal data, consumer safety, informed consent, the boundaries of one’s body, intellectual property and environmental resources. In short, we need an ethic of protection and control. It is an ethic that presupposes an enemy: ourselves.

We are not interested in discussing this particular form of ethics here; instead, we will steer the conversation towards something else. To do so, we will begin with Hieronymus Bosch and his completely surreal and unconventional painting, a departure from the style that preceded it.

We are immediately struck by the sense of unreality in the work of this Flemish painter, who lived from around 1450 to 1516. On his panels we see flying fish, anthropomorphic figures emerging from vegetables, and human legs protruding from broken, hollow or pierced shells; men with cloaks and birds’ heads. Bosch was a visionary, a different kind of Renaissance artist. But why does he interest us so much?

This dreamlike, grotesque sensibility, which is all made up of allegories, myths and symbols, is the other side of Europe’s early modernity, representing the other side of the scientific Renaissance, the age when scientists began to observe nature directly and explain it[3]. Discoveries are made in the natural sciences, including biology, zoology and botany. Investigations are also conducted into the human body, with holistic medicine breaking away from magic; meanwhile, mathematics, astronomy and engineering are configured as sciences of the real. Leonardo da Vinci opens the doors. Science is born.

And yet Bosch paints allegories: his era requires this surreal imagination as much as it requires the natural sciences.

It is the allegory of the encounter between science and a marvellous that exceeds reality. In Orhan Pamuk’s novel The White Castle, a scientist character is reminded that the strange and the wonderful were to be sought “in the world”, and not only within oneself[4]. Therefore, if we are to avoid a purely defensive ethics, we must not separate wonder from science, technology and expertise.

We need to be amazed and guided by that rush of strangeness. However, there is one condition: it must satisfy our need for human knowledge; it must be free; and it must find a way to work with the economy and the market without becoming the enemy of humanity.

Science and research remain ethically valid to the extent that they do not exhaust the meaning of our human experience. In this sense, ethics provides us with boundaries. But this limit is not something imposed from the outside: it is an inherent limit, through which ethics itself has always defined and identified its force field.

A first boundary may serve to define the ethical field: the awareness that accompanies action. When we use this term, we mean the careful consideration that enters into the evaluative process. Let us try to understand it.

We will use an example from the great Greek tragic narratives, filtered through the moral reflection of the French philosopher Paul Ricoeur: the figure of Oedipus at Colonus. In the language of Ricoeur’s last book, Parcours de la reconnaissance, the recognition of the tragic does not stop at the figure of Oedipus Rex, who, in Sophocles’ tragedy of the same name, suffers the doom of fate; it also triggers a forward movement, an awareness that is itself a moral evolution. This is the meaning of Oedipus at Colonus, the tragedy that is a sequel to the first. In this play, the protagonist stops merely suffering the condition that fate has assigned him and begins to “take responsibility” for it.

If Oedipus at Colonus demonstrates one thing, it is that the tragic character, however crushed he may be by the sense of the irresistible character of the supernatural forces that govern human destinies, remains the author of that intimate action which consists in evaluating his own acts, particularly in the condition of retrospection. If misfortune is the dominant note of the tragedy of Oedipus at Colonus, to the point of refuting the ancient guilt, this misfortune becomes a dimension of acting itself, insofar as it is endured in a responsible way. Along this path of endurance, the play charts the progression from misfortune suffered to misfortune assumed. It is the reversal of accusation into exculpation that punctuates this intimate progression of endurance. The old Oedipus, blind[5].

This is not about the young Oedipus the King in his prime, making immediate decisions and being deceived by fate precisely because of his ability to act. Instead, it is about the elderly, exiled Oedipus, accompanied by his daughters, seeking a place to rest. He is a hero looking back and coming to terms with himself. This demonstrates the idea that selfhood (ipseity) can only be constructed through narrative, based on one’s own actions according to the principle of responsibility.

It is a form of recognition that hints at reflexivity without the clarity of Cartesian subjectivity, yet it is already present in embryonic form in early Greek consciousness. The tension between fallibility and human capacity gives rise to the awareness that comes with old age: phronesis, which accompanies the decay of the body. We can understand this character because, as Ricoeur writes, referring to Bernard Williams’ words about Oedipus in Shame and Necessity, «we know that in the history of every life there is the weight of what one has done, and not only of what one has done intentionally»[6].

Ricoeur seems to be telling us that, despite everything, Oedipus remains the author of the intimate act of evaluating his own actions retrospectively. With this capacity for evaluation, we enter the realm of responsible awareness[7].

This is where valuational progress comes in, and it is precisely what we need in this context of technological innovation and of the credibility of expertise. In the face of a growing crisis of expertise, we must take the failure to recognise competence seriously. The erosion of shared knowledge produced by the increasingly widespread “personalisation” of knowledge, and the tendency to look back on the past with nostalgia, contrasting it with a present perceived as unstable and unreliable, put the bridge between science and society at risk, with worrying consequences on the social, democratic and ethical level. The valuational progress to which we refer concerns the development and refinement of personal, social and cultural values.

According to Philip Kitcher’s perspective, progress is not teleological, but contextualised.

It is achieved in specific areas through critical and open discussions among individuals and communities who are working together towards shared goals. Rational and democratic dialogue allows us to correct mistakes, listen to marginalised voices and build fairer practices, which is why it is such a valuable tool.

For example, scientific progress consists of identifying problematic aspects of the current state of a science, examining them, and finding solutions. In order to understand this process, we need to focus on investigative methods.

Valuing is taken to be justified when it emerges from investigations conducted by using reliable methods. When a valuation continues to be justified as value inquiry continues to be pursued by such methods it is correct. Truth is understood as what remains stable in the course of indefinitely pursued reliable inquiry; it accrues to a value judgment when justification sticks. I adopt William James’s version of a pragmatist approach to talk of truth: truth is what is “expedient in the way of our thinking” – in the long run and on the whole. I also share the parallel he draws with morality: “just as ‘the right’ is only the expedient in our way of behaving”, again, in the long run and on the whole[8].

For Kitcher, moral progress is also measured in practical terms. In The Ethical Project, he puts forward a constructivist view of morality, arguing that it does not stem from divine commandments or immutable, objective moral principles, but rather from an evolutionary, historical and social process. According to Kitcher, moral progress occurs when ethical practices lead to a reduction in suffering and an improvement in living conditions for more people.

Throughout history, progress has been achieved precisely when these obstacles have been removed or overcome, through an evaluative exercise that supports abilities. But how does this happen? «It happens when the deliberation becomes more inclusive, when those who engage in it have better factual information, and when there is a serious effort at mutual engagement. Mutual engagement is a matter of entering into one another’s lives, so that we see the world as our interlocutors see it»[9]. Our grasp of the facts is forever limited and always open to change, but we can strive to eradicate errors, make sure that viewpoints beyond our own are included in the conversation, and ensure that the statements made by other participants are backed by solid evidence[10].

The approach to moral and ethical methodology outlined here clearly takes justification in the moral and ethical domains to result from a social process. It abandons the idea of individuals as having final authority, in any of the forms that idea has taken. It says farewell to the rule of the shaman, the seer, the priest, the sage, the philosopher, or the professional ethicist. None of these can claim decisive expertise. That does not mean, however, that such folk may not have particular talents for facilitating deliberation. In the conversations through which moral and ethical issues are uncovered and resolved, it would be very good to have John Rawls or Bernard Williams in the room. Their part: not to dominate the conversation but to help it to go better[11].

We are calling for “better factual information” and “a serious effort at mutual engagement” in decision-making processes related to the use of new technologies. This is the kind of awareness that we are looking for, developed from the bottom up and from within. We firmly believe that this approach can be successful when it comes to implementing the general rules and codes of conduct that govern scientific and technical innovation: it is at this basic level of social interaction that conscious evaluation is needed. This is precisely where the crisis of expertise leaves a dangerous void.

2. The Black Mirror nightmare: bodily unawareness and ethical regression

The concept of ethical awareness is closely intertwined with identity and relationships, even in the context of interaction with medical robotics and generative AI in healthcare. In this scenario, the distinction between knowledge and “decision-making” is pivotal. Think about how medical ‘informed consent’ has changed over time and the different levels of moral judgement that come up in the therapeutic relationship.

At the end of the last century, Paul Ricoeur discussed this topic at the international conference Ethics. Codes in Medicine and Biotechnology, held in Freiburg im Breisgau, Germany, in 1997, and in his publication in Esprit, entitled Les trois niveaux du jugement médical, referring to a first level of prudence, a second level of deontology, and finally, the third level, called reflection[12].

But how can we cultivate this reflective awareness in the context of the new paradigm of robotic similarity? Consider, for example, the geminoid robots developed by Professor Hiroshi Ishiguro at Osaka University, which are perfect copies of their creators: they enable the cognitive and social sciences to study aspects of humanity from a perspective totally different from the scientific approaches followed until now, and are potentially very useful in the study of human perception and of the brain as a system for controlling and mediating bodily communications. Ishiguro argues that some questions about human beings can only be answered through the experimental use of androids, «because of their resemblance to people, they have the potential to contribute to an understanding of human behavior and the roles our brains and bodies play in it»[13].

But who has access to this understanding? Progress is not a solution for everyone, because not everyone can access it and because we are all different. These are the boundaries that must be negotiated for moral progress[14].

An episode of the British television series Black Mirror, created and produced by Charlie Brooker, provides an example[15]. This anthology series features different stories, characters and scenarios in each episode, but all share a strong common theme: the challenges of new technologies in the context of current events, and the risks and side effects of our dependence on them. Several episodes refer to neuroscience and biomedical neural technologies.

Common People is perhaps the most disturbing episode of the seventh series. Amanda, a teacher and Mike’s wife, falls into a coma and is diagnosed with a brain tumour. Desperate for a solution, Mike turns to Rivermind Technologies, a tech start-up, which, through Gaynor, one of its black representatives, offers a life-saving procedure: free surgery that replaces the damaged brain tissue with a synthetic alternative powered by a cloud-based backup[16]. However, a monthly subscription is required to maintain online access and functionality. At first everything goes well, and Amanda seems truly in control of herself, able to recognise herself in her own identity. Then, gradually, the side effects emerge: Amanda starts sleeping too much and, while talking, suddenly and involuntarily begins repeating advertising slogans.

So, they talk to Gaynor again and discover that Rivermind offers different subscription levels, ranging from the standard one (which causes all those inconveniences for “common people”), to Plus and Lux. Naturally, each upgrade in neural performance is associated with an increase in subscription cost.

Amanda and Mike find it difficult to afford their Rivermind subscription on their salaries alone. Mike secretly performs degrading acts on the livestream platform “DumDummies” to pay for Amanda’s subscription levels. He then starts selling his teeth, personal belongings and furniture.

Soon the couple reaches financial collapse, and Amanda, forced to remain at the standard level, suffers a steady physical decline. The final sad act is their joint decision to end the nightmare: desperate, Mike ends Amanda’s life by placing a pillow over her face while she continues to recite advertising slogans. He then locks himself in a room for a final livestream, perhaps intending to take his own life. The episode, however, shows nothing further and explains nothing.

This plot is certainly a punch in the stomach. Firstly, it is significant that the two characters directly involved in using the new neural device are both economically disadvantaged black women. This highlights one of the weaknesses of the healthcare system. According to the Centers for Disease Control, black women in the US are three times more likely to die from pregnancy or childbirth-related complications than white women. Furthermore, the Black Women’s Health Study at Boston University shows that these disparities extend far beyond pregnancy, with black women being at greater risk of cardiovascular disease, hypertension, stroke, lupus and certain types of cancer[17].

However, the issue is no longer simply material inequality in access to care. Today we are also dealing with artificial inequality. Black women are underrepresented in the data acquired and processed for medical diagnostic algorithms; human biases skew the original training data, and the outputs that result are distorted and potentially harmful. This is the problem of bias in machine learning and generative artificial intelligence.
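How underrepresentation in training data translates into unequal diagnostic performance can be illustrated with a deliberately simplified sketch. Everything here is invented for illustration (the two groups, the biomarker values, the 95/5 split): a single decision threshold tuned on data dominated by one group ends up calibrated to that group’s distribution, and misses far more cases in the underrepresented one.

```python
import random

random.seed(0)

# Toy illustration (all numbers invented): a biomarker whose healthy and
# ill distributions sit at a slightly different baseline in two groups.
def sample(group, ill, n):
    base = 1.0 if group == "A" else 0.6   # group B's baseline is lower
    shift = 1.0 if ill else 0.0           # illness raises the marker
    return [random.gauss(base + shift, 0.5) for _ in range(n)]

# Training data in which group A is heavily overrepresented (95% vs 5%).
train = ([(x, 0) for x in sample("A", False, 950)] +
         [(x, 1) for x in sample("A", True, 950)] +
         [(x, 0) for x in sample("B", False, 50)] +
         [(x, 1) for x in sample("B", True, 50)])

def accuracy(threshold, data):
    return sum((x > threshold) == bool(y) for x, y in data) / len(data)

# Pick the single cut-off that maximises accuracy on the skewed training set.
best = max((t / 100 for t in range(300)), key=lambda t: accuracy(t, train))

# The threshold tuned on A-dominated data misses far more ill patients in B.
miss_A = 1 - accuracy(best, [(x, 1) for x in sample("A", True, 2000)])
miss_B = 1 - accuracy(best, [(x, 1) for x in sample("B", True, 2000)])
print(f"missed diagnoses, group A: {miss_A:.0%}")
print(f"missed diagnoses, group B: {miss_B:.0%}")
```

The point is structural rather than numerical: no malice is coded anywhere, yet the optimisation step quietly privileges the majority group, which is precisely why bias of this kind is hard to see from inside the system.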

The tech dystopian narrative also takes us to a new level of ethical evaluation, which immediately becomes political. In the European system, health is a ‘right’. In Black Mirror, however, the US system is referenced, in which health is a ‘commodity’. Unsurprisingly, the story centres on healthcare devices that are treated as such[18]. We can explore the differences between devices such as cardiac pacemakers and Rivermind’s neural prosthesis. In the former case, the device is internal, whereas in the latter case, the device is both internal and external, as it is connected to the cloud. We will discuss this in more detail later on when we talk about fem-tech.

However, the discussion here immediately shifts to the issue of socio-economic inequalities linked to capitalism. Low-income families are vulnerable to price exploitation. In Common People, Gaynor, the Rivermind representative, is emblematic of the disadvantaged who have become part of the very system they once needed. Two aspects deserve reflection: the transformation of treatments considered “essential” into endless subscriptions (subscription levels literally determine quality of life), which turns illness and treatment time into profit; and the risk that neural devices will be subject to planned obsolescence or reduced service provision, as happens with digital applications (consider the discontinuation of Skype, which until a few years ago was the fundamental tool for online communication).

The debate, at first glance, may seem to be solely socio-political, but it is also ethical. This is because it concerns justice and the common good, but above all because it concerns the philosophy of action: practical action, its methods and possibilities, and the bodies that perform physical actions. The philosophy of action is closely linked to the philosophy of the body. Experiences involving human prosthetics, robotic organs and radiomics can challenge the mere ethics of functioning.

From an aesthetic and moral point of view, the mediation of “form” ensures social acceptance, recognisability and trust. While this may seem to be only one aspect of the impact of new technologies, it is a decisive one that affects perception of space and time, as well as the notion of similarity and difference. It also affects the neuroaesthetic and linguistic mechanisms that AI uses[19].

3. Corporeality, Fem Tech, Data Feminism

The second “C” we mentioned in the introduction is corporeality.

Whether it is the “robotic skin” covering the sensors of prostheses or humanoid robots, or medical “nanorobotics” for surgical, oncological or vital function support, such as respirocytes, clottocytes or pharmacocytes, ethics concerns the dimensions of the body as perceived and experienced by each agent or patient.

Moral philosophy brings to mind Spinoza’s studies on the body, as well as the work of Maurice Merleau-Ponty and Jean-Luc Nancy on flesh and skin. In The Phenomenology of Perception, Merleau-Ponty develops his expansive notion of “corporeality” as a means of transcending the confinement of the body as an object. The body is, in essence, the source of perception and the arena in which it is experienced. In this interplay between corporeality and external space, a fusion of meanings emerges[20].

Biomedical engineering has a bright future ahead, promising to enhance health and well-being. To achieve this, however, it tends to approach the body as a collection of organs and functions to be studied, rather than as a person with a distinct identity. In this perspective, the individual is reduced to their body, and their personal identity becomes secondary[21].

This has led to a shift in perspective: treatment has become more technical, and we have come to rely on this technique for all our knowledge about the body. Michel Foucault’s writing on the subject is very clear: the medical question changes from «how are you?» to «where does it hurt?», and this is the starting point for every prescription and regulated therapy[22].

Robotic transformation does not stop at the surface, but penetrates deep into our flesh[23]. Consider, for example, the developments in ‘precision’ medicine and diagnostic imaging, such as nanotechnology and radiomics. The emerging field of nanoethics deals with the changes that nanotechnology can bring about, such as the need to regulate the environmental and health effects of these nanomaterials, and the privacy issues that arise when it becomes easier to collect large amounts of data about our bodies through new forms of genetic analysis enabled by nanotechnology.

Talking about enhancement also means referring to these technical and scientific advances, which are destined to have decisive consequences for the existential experience of patients. They must be taken into account not only by law, psychology and sociological statistics, but also by ethics, which asks what level of safety is fair for a predictive system to be considered reliable before it affects human life expectancy[24].

To what extent does the functionality of digital archives for research purposes undermine freedom and the uniqueness of personal destinies?

From an ethical and anthropological perspective, we must ask ourselves how values evolve, how the experience of the body changes, how expectations of life and death alter, and how social relations are modified by this transformation in medicine. Without treating values as facts and without uncritically accepting normative discourse (valuational progress), we must continue to ask ourselves which social goods we intend to preserve and defend in order to define ourselves as human beings[25].

We referred to a Black Mirror episode. It is fiction. But fem-tech is very real.

Female technology (fem-tech) refers to health technology products and services dedicated to women’s health and well-being. While gender medicine addresses all medical and health differences between the sexes, including those relating to drug metabolism and the incidence and prevalence of certain diseases, fem-tech mainly concerns technological devices connected to women’s bodies. These include fertility and family planning devices; apps for monitoring the menstrual cycle and tracking symptoms, dates and activities related to a specific phase; and technologies for managing pregnancy, providing information on foetal growth and advice on maternal health before and after childbirth.

This is a rapidly growing area of research for various reasons, including women’s tendency to use wellness-related applications and devices, but also a greater need to manage their health and bodily needs, especially at times when existential decisions are influenced by the body (sexuality, fertility, menopause)[26]. In 2023, the fem-tech market was worth almost 52 billion U.S. dollars worldwide. After sustained growth, the market was forecast to reach over 117 billion U.S. dollars by 2029[27]. Six per cent of fem-tech initiatives in Italy concern the adoption of innovative devices, such as wearables and trackers[28].

However, the question we want to ask links the two concepts of awareness and corporeality: is this autonomous technological “initiative” supported by adequate awareness of the body, its limitations and its potential?

The loss of taboos surrounding menstruation, pregnancy, assisted reproduction and, finally, menopause has gone hand in hand with the growing availability of information on these topics, in the form of websites, television channels, podcasts and the traditional press. This is a market with a significant economic impact that promises solutions designed for women, offering both free basic services and paid “premium” services. “Basic” services for menstrual cycle tracking apps include the ability to record menstruation, as well as an electronic calendar that reminds you when your next period is due, what stage of your cycle you are in and whether there are any abnormalities. You can also record symptoms, which are analysed in the “premium” version, as well as activities, mood, feelings and emotions.

These apps provide informative articles on women’s health, covering topics such as sexual pleasure and risky unprotected sex. In some cases, questionnaires are used to assess certain aspects of premenstrual syndrome. While fem-tech can facilitate personalised medical care through wearable devices, we cannot ignore the fact that these apps generate huge amounts of data.

This extracted data becomes a reserve of medical information. However, it is no longer “ours”. It does not belong to any specific medical service. We are not entrusting it to that service. We do not share a purpose of use.

The initial personalisation of the service is offset by the complete depersonalisation of the bodily responses it generates. In medical and scientific research, much work has always gone into ensuring that the reasons and purposes of both parties are explained, understood and shared, on the basis of regulations developed over time. That regulatory work took place gradually, and between different parties. In fem-tech, it has not taken place.

The issue becomes more complicated when we consider all the apps dealing with women’s mental health. Gender medicine has shown that women are more susceptible to anxiety, depression and the stress of a greater family burden and socio-economic differences. This was well represented by the protagonist of Common People. Not to mention the vulnerability associated with abuse and rape. Technological devices are available to promote inner peace, mindfulness and psychological assistance for mothers, among other things[29].

Are we witnessing the evolution of expertise in the 4.0 era, or its obscuration?

This deconstructs the bioethical dialectic that has underpinned medical care until now. Could it improve the quality of available information and facilitate access to disease prevention? Or might it undermine the medical ethics we have known until now?

In the field of female healthcare robotics, for example, solutions are emerging that are more effective and personalised. An innovative project at the University of Pisa, for instance, aims to give surgical robots a “sense of touch” by equipping them with sensors and haptic interfaces. This technology could be particularly useful for complex gynaecological procedures such as the removal of uterine fibroids and the treatment of uterine prolapse[30].

Conclusions. Looking ahead, to future capabilities

The question of ethics is how to save our human capabilities.

Can we think in terms of robotic enhancement or devices that have the impersonality of a prosthesis, which is separate from the arm that wears and animates it? Or are these personal capabilities the ultimate goal and the basis of every enhancement?

Is Husserl’s concept of «lived experience» invalidated when the body is transformed into an artificial one that responds to technological rather than natural laws? Or could it provide the basis for greater robotic achievements?

We wonder whether the capability approach of American philosopher Martha C. Nussbaum might address the question of human ownership of the body and well-being entrusted to the cloud, and the “always-mine” nature of this technological prosthesis[31]. In which ethical framework do we inscribe our power of action?

Virtual medicine is replacing face-to-face care and enabling increasingly precise personalisation. Robotic surgeons will be able to perform emergency operations in small rural hospitals that previously closed due to a lack of medical staff.

However, what if a patient rejects treatment provided by a robot, either because he or she does not want to take the medicine at that time, or because he or she has doubts about the therapy’s validity?

No ‘postulate of similarity’ can help us redistribute utility between ourselves and machines, unless this radical difference is gradually mitigated by increasingly humanised artificial intelligence that reproduces human cognitive and behavioural models.

By the end of the three Cs journey, we hope to discover that an ethical space does exist. We believe that this space lies precisely where technological enhancements of awareness and abilities must be transformed into capabilities connected to the initiative of each individual.

This is where the battle for a convincing ethical project is fought. This project is not limited to a deontological function; rather, it reclaims its full power as a «proposal for a habitable world», to use a term dear to Paul Ricoeur. Habitable means a place where one can live in a network of relationships with others and for others, in the certainty of institutions that know how to accompany this immense technological change with the appropriate criteria of justice and political awareness.


[1] https://artificialintelligenceact.eu/ (visited on 15 December 2025).

[2] A. Casilli, Waiting for Robots: The Hired Hands of Automation, The University of Chicago Press, Chicago 2025.

[3] B. Aikema, F. Checa Cremades, Hieronymus Bosch & the Other Renaissance: How Monsters and Creatures Captured the Imaginations of the 16th Century and Beyond, Abrams, New York 2024.

[4] O. Pamuk, The White Castle (1985), en. tr. Faber&Faber, London 2015.

[5] P. Ricoeur, Parcours de la reconnaissance. Trois études, Stock, Collection ‘Les Essais’, Paris 2004.

[6] Ibid.; B. Williams, Shame and Necessity, University of California Press, Berkeley 1993.

[7] P. Ricoeur, Autonomie et vulnérabilité, in La philosophie dans la Cité. Hommage à Hélène Ackermans, Presses Universitaires Saint-Louis Bruxelles, Bruxelles 1997, pp. 121-141; E. Boublil, Instaurer la “juste distance”. Autonomie, justice et vulnérabilité dans l’oeuvre de Paul Ricœur, in «Études Ricoeuriennes / Ricoeur Studies», 6, 2, 2016; F. Abbate, Paul Ricoeur, un’etica per la medicina e l’intelligenza artificiale, in Paul Ricoeur tra moderno e postmoderno, in «In Circolo. Rivista di Filosofia e Culture», 12, 2021, pp. 135-152.

[8] P. Kitcher, Progresso valutativo, in F. Abbate, G. Pintus (eds.), Il progresso morale. Sfide, opportunità e prospettive future, Inschibboleth, Roma 2025; W. James, Pragmatism, Harvard University Press, Cambridge/MA 1975, p. 106.

[9] Ibid.

[10] P. Kitcher, The Rich and the Poor, Polity Press, London 2025, ch. 5; cfr. Id., The Ethical Project, Harvard University Press, Cambridge/MA 2014.

[11] Id., Progresso valutativo, cit. 

[12] P. Ricoeur, Les trois niveaux du jugement médical, in «Esprit», 12, 1996, pp. 21-33.

[13] K. MacDorman, H. Ishiguro, The uncanny advantage of using androids in cognitive science research, in «Interaction Studies», 7, 3, 2006, pp. 297–337; cfr. T. Kanda, H. Ishiguro, Human-Robot Interaction in Social Robotics, CRC Press, Florida 2012.

[14] M. Coeckelbergh, Robot Ethics, MIT Press, Cambridge/MA 2022; Id., Personal robots, appearance, and human good: a methodological reflection on roboethics, in «International Journal of Social Robotics», 1, 3, 2009, pp. 217–221; C. Bartneck, C. Lütge, A. Wagner, S. Welsh, An Introduction to Ethics in Robotics and AI, Springer, Berlin 2021; V. Bonnemains, C. Saurel, C. Tessier, Embedded Ethics: Some Technical and Ethical Challenges, in «Ethics and Information Technology», 20, 2018, pp. 41–58.

[15] It premiered on Channel 4 on 4 December 2011. Since the third season, it has been broadcast internationally on the Netflix streaming platform.

[16] https://decider.com/2025/04/11/rashida-jones-and-tracee-ellis-ross-on-their-healthcare-centered-black-mirror-season-7-episode-its-a-very-cynical-use-of-the-idea-that-representation-matters/; https://www.washingtonpost.com/entertainment/tv/2025/04/10/black-mirror-season-seven-review/; https://www.washingtonpost.com/news/arts-and-entertainment/wp/2016/10/28/black-mirror-usually-shows-the-dark-side-of-technology-one-episode-shows-it-can-give-us-hope/ (visited on 15 December 2025).

[17] https://www.bu.edu/bwhs/ (visited on 15 December 2025).

[18] https://www.forbes.com/councils/forbesbusinesscouncil/2022/04/08/healthcare-is-a-finite-commodity-how-providers-can-differentiate-themselves/ (visited on 15 December 2025).

[19] B.C. Stahl, M. Coeckelbergh, Ethics of healthcare robotics: Towards responsible research and innovation, in «Robotics and Autonomous Systems», 86, 2016, pp. 152-161; J. Sung, Artificial Intelligence in Medicine: Ethical, Social, and Legal Perspectives, in «Annals of the Academy of Medicine, Singapore», 52, 12, 2024, pp. 695-699.

[20] M. Merleau-Ponty, Phenomenology of Perception (1945), en. tr. Routledge, London 2002.

[21] D. Le Breton, Antropologia del corpo e modernità (1990), tr. it. Giuffrè, Milano 2007.

[22] M. Foucault, Nascita della clinica (1963), tr. it. Einaudi, Torino 1969, pp. 106-107.

[23] F. Abbate, Pelle robotica. Nuovi compiti per l’antropologia nell’epoca dell’artificiale, in S. Barone et al. (eds.), L’uomo animale tecnologico. Itinerari riflessivi sulla condizione tecno-umana, Salvatore Sciascia Editore, Caltanissetta-Roma 2024, pp. 91-116; cfr. Ead., Biotecnologie robotiche, benessere, formazione: l’etica del progresso resiste, in Progettare futuri possibili. Pluralismo dei Paradigmi e Tras-Formazione, Pensa Multimedia per EduVersi SIREF, Lecce 2025, pp. 54-67; Ead., Ed-Humanizing Tech-Societies. Keeping Imaginative Education Alive, in «Formazione&Insegnamento», 1, 1, 2022, pp. 169-178.

[24] J. Van Den Hoven, P.E. Vermaas, NanoTechnology and Privacy: On Continuous Surveillance Outside the Panopticon, in «Journal of Medicine and Philosophy», 32, 3, 2007, pp. 283-297; S. Van Rysewyk, M. Pontier (eds.), Machine Medical Ethics, Springer 2015; A. Van Wynsberghe, Healthcare Robots. Ethics, Design and Implementation, Routledge, London 2015.

[25] S. Tiribelli, Identità personale e algoritmi. Una questione di filosofia morale, Carocci, Roma 2023.

[26] https://forbes.it/2022/09/15/femtech-rivoluzione-assistenza-sanitaria-femminile/; https://www.jnjmedicalcloud.it/it-it/services/news-center/blt5ad7cfde89493c04; https://www.agendadigitale.eu/sanita/femtech-un-nuovo-mercato-sanitario-che-avanza-cose-e-prospettive-di-sviluppo/ (visited on 15 December 2025).

[27] United States Government Accountability Office (GAO), Artificial intelligence in health care: Benefits and challenges of technologies to augment patient care, GAO, 2020. (https://www.statista.com/statistics/1483460/femtech-market-size-worldwide).

[28] https://www.corrierecomunicazioni.it/pa-digitale/e-health/femtech-la-connettivita-chiave-di-volta-per-il-futuro-della-salute-femminile/ (visited on 15 December 2025).

[29] D. Garcha, D. Geiskkovitch, R. Thiessen, Face to Face with a Sexist Robot: Investigating How Women React to Sexist Robot Behaviors, in «International Journal of Social Robotics», 15, 2023, pp. 1809-1828.

[30] https://old.unipi.it/index.php/news/item/25910-dotare-i-robot-chirurghi-del-senso-del-tatto-l-impatto-sulla-salute-delle-donne (visited on 15 December 2025).

[31] M.C. Nussbaum, Creating Capabilities, Belknap Press, Harvard University Press, Cambridge/MA 2013; F. Abbate, Il futuro della dignità. Un percorso nell’etica di Martha C. Nussbaum, Edizioni Studium, Roma 2024.