We can apply this notion of “rites” vs. “rights” to the context of human-robot interaction. Imagine you are a Boston Dynamics researcher working on Atlas, the humanoid robot, developing capabilities to address a nuclear disaster. You engage with the robot in what is known as “human-computer/robot interaction” (HCI/HRI), a team activity or an “ensemble.” The focus here is on the interaction (I) itself rather than on the individual entities involved, the human (H) or the robot (R). You and Atlas work toward a shared goal as you develop Atlas’s ability to fulfill first-responder duties, and both of you are rites-bearers in this scenario, each with a set of responsibilities defined by that shared objective.
Suppose you, as the researcher, become overconfident in your progress and carelessly put Atlas in a risky situation, endangering the project. In a rights-based framework, someone might argue that you infringed upon Atlas’s right to safety; as a rites-bearer, however, the claim would be that you failed to fulfill your role-specific duties in service of the team’s objectives. These objectives include the project goals but also encompass a range of values, benefits, and ideals.
Achieving the team’s shared objective (often established by the human) is not the rites-bearers’ only responsibility; there is also a broader responsibility for the human’s well-being and for the well-being of the surrounding society, which may conflict with the human’s immediate desires for the collaboration. For example, what if a human used a robot to compound and continuously administer highly addictive drugs so as to remain in a perpetually altered mental state? The human may believe they have the “right” to demand the robot’s compliance, since they purchased or built the robot for this purpose, but does the robot live up to its responsibilities as a rites-bearer if it cooperates? Doing so would almost certainly harm and possibly kill the human, with cascading effects on the broader society around them, such as on others who are connected to or rely on the human.
Imagine, however, that in this same situation the human was terminally ill and the robot was essential to delivering palliative care, enabling them to be comfortable for the short time remaining in their life. Most would view the robot’s cooperation as consistent with its responsibilities as a rites-bearer in those circumstances, since its actions would align with its responsibility for others’ well-being.
Many people may want access to a drug-administering robot even when they are not gravely ill, and some manufacturers might gladly seek to profit by creating and selling such robots. An important question concerns the manufacturer’s responsibility, and by extension the robot’s, as rites-bearers in this situation. Consumers might seek pleasure from the robot, but a life focused solely on pleasure is not always ethically sound. Society must therefore address the teleological question of the purpose of a robot and of its interaction with humans. I argue that in human-robot collaboration, both parties form a team or ensemble, each with its own role obligations; both must perform and observe the rites that promote social order and harmony.
While functionalist arguments for the moral agency of robots are provocative, as are the related claims that robots have “rights,” many are skeptical of such claims. A major criticism of functionalism is that it discounts the crucial conscious and subjective components of human mental experience. One could also ask whether human-like consciousness is necessary for a robot to be a rites-bearer and a full participant, alongside humans, in rituals. I assert that such participation is indeed possible.
For instance, there is widespread reverence for sacred places, such as the Japanese tradition of venerating mountains and according them sacred respect. This suggests that robots, too, could be regarded with a similar level of respect and included in rituals. However, even if we can conceive of venerating robots, the deeper question is whether we should.
A Confucian perspective might claim that, having created robots in our likeness, failing to pay them respect as beings capable of engaging in rites actually diminishes our own humanity; by engaging in rituals with robots, we honor ourselves. A phenomenological skeptic might counter that robots do not truly mirror our essence. Yet resemblance to our image is a matter of degree. To invoke a dangerous analogy: according to many of the great religions, God made us in His image and for that reason finds it appropriate to interact with and care about us respectfully, yet the extent to which we succeed in resembling God is quite limited.