Jordan Baker, “Algorithmic Trust and Agential Possession”
Abstract:
Research in AI safety and AI ethics tends to focus on two types of ethical danger: unethical consequences and unethical side-effects. The literature has not adequately considered ethical dangers that are intrinsic to the relationship between human agents and AI tools. I argue that by contrasting the structure of human agency with algorithmic agency we reveal an ethical danger grounded in a unique quasi-instrumental relationship between conscious human agents and merely functional AI agents. I argue for this first by identifying different types of agency (building on Baker & Ebling, 2022; Brewer, 2009; List & Pettit, 2011) and thereby illuminating the possibility of agential mismatch. I then extend C. Thi Nguyen’s analysis of trust as an “unquestioning attitude” (Nguyen, 2022), paying special attention to what he calls the “integrative stance,” where we conceive of our trusting relationship to objects as a way to integrate them into our own agency, thus extending and expanding our own abilities. I then ask: what happens when the tool we trust sits uneasily between notions of a pure “tool”—like a hammer—and a merely functional agent? Put differently, what happens when we integrate our agency with another agential system that is essentially alien to human agency?
Sean Baz-Garza, “Theology, AI, and Internal/External Goods”
Abstract:
Within the humanities especially, there is much disagreement today about the ways that artificial intelligence technologies affect our work. Some resist its use wholeheartedly, arguing that it presents an unqualified threat to the humanistic disciplines. Others, the more tech-friendly among us, argue that we should embrace these technologies because they can help us do our work more quickly, more efficiently, and with fewer mistakes. With the help of Alasdair MacIntyre, I argue that such a binary rejection/acceptance paradigm is unhelpful. I will do so by developing MacIntyre’s concept of practice-internal goods (i.e., goods inherent in particular practices rather than goods, like wealth, that are in principle achievable by a wide range of activities) in relation to specifically theological work, showing that while some uses of AI should be shunned within the theological academy, others may be perfectly legitimate.
Brian Cutter, “AI Consciousness, Personhood, and Ensoulment”
Abstract:
I consider three related questions: whether future AI systems will be conscious, whether they will be persons, and whether they will have immaterial souls. After clarifying the questions and the logical relationships among them, I argue that theists should take seriously the possibility that all three questions have an affirmative answer. I conclude with some remarks on the potential theological significance of AI consciousness, AI personhood, and AI ensoulment.
Bio:
Brian Cutter is an Associate Professor of Philosophy at the University of Notre Dame. Before starting at Notre Dame in 2016, he was a Bersoff Faculty Fellow at New York University. He received his Ph.D. from the University of Texas at Austin in 2015. His research is primarily in the philosophy of mind, metaphysics, and philosophy of religion. His work aims to make progress on perennial metaphysical questions about the mind and the place of human persons within the cosmic scheme.
Marius Dorobantu, “Will God speak to intelligent robots? Why strong AI’s how is more important than its what”
Abstract:
Ever since Turing’s proposed test, much discussion around current and future AI has been markedly functional, revolving around what computer programs might become able to do, regardless of how they do it. In this paper, I argue that, from a theological perspective, what matters most is not whether AI achieves the generality of human intelligence, but whether it can transcend the ontological gap between being a thing and being a person. Even if AI somehow awoke to consciousness, it would cognize in profoundly different ways than biological, carbon-based humans. The radical non-humanlikeness of strong AI has relevant implications for whether it would also lay claim to being in the image of God, develop a genuine interest in religion and transcendence, or be a suitable receptacle of divine revelation.
Bio:
Dr Marius Dorobantu is an Assistant Professor of Theology & Artificial Intelligence at the Vrije Universiteit Amsterdam, The Netherlands. His award-winning doctoral dissertation at the University of Strasbourg, France (2020) explored the potential implications of strong artificial intelligence for theological anthropology. Marius is the co-editor of the Routledge volume, Perspectives on Spiritual Intelligence (2024). His first monograph – Artificial Intelligence and the Image of God: Are We More than Intelligent Machines? – is in press with Cambridge University Press.
Luciano Floridi, “A Digital Deity: AI as the New Ultimate Other and the Emergence of Techno-Eschatology”
Abstract:
The paper analyses some philosophical and cultural implications of the apparent disappearance of traditional “ultimate Others” - namely God and Nature - and their gradual replacement by Artificial Intelligence (AI) in contemporary discourse. As secularization and technological advancement have progressed, some of the roles once filled by divine and natural entities as sources of meaning, morality, and existential purpose have increasingly been projected onto AI systems. The paper examines how this shift has brought with it a borrowing of apocalyptic and eschatological language, now centred around AI rather than religious or environmental narratives, noting how this reflects deep-seated human anxieties about control, purpose, and the identity and future of humanity. It investigates how AI, in its perceived omniscience and potential omnipotence, has begun to occupy a godlike position in the cultural imagination. The paper concludes by considering some philosophical challenges posed by this paradigm shift, including the need for new frameworks of meaning and morality in an age where artificial constructs may supplant traditional sources of ultimate truth and value.
Bio:
Luciano Floridi (Laurea, Rome La Sapienza, M.Phil. and Ph.D. University of Warwick) is the Founding Director of the Digital Ethics Center and Professor in the Practice in the Cognitive Science Program at Yale University. The author of many books, including The Ethics of Artificial Intelligence, The Philosophy of Information, and The Logic of Information, Floridi’s areas of research are the philosophy of information, digital ethics, the ethics of AI, and the philosophy of technology. A recipient of Italy’s Knight of the Grand Cross of the Order of Merit, Floridi was the most cited living philosopher in the world in 2020, according to Scopus.
David Zvi Kalman, “Who’s Afraid of AI Personhood?”
Abstract:
For several decades, AI has been on a path towards ever-greater mimicry of human abilities. This remarkable trajectory has resulted in models that are capable of producing human-level content and reasoning in a wide variety of areas, including art. In virtual settings, it is now often impossible to distinguish human output from computer output.
Despite these capabilities, the idea that humanoid AIs should be assigned personhood has met with a great deal of resistance from all quarters. Scholars, theologians, technologists, and the public have instead spent a great deal of time searching for frameworks that distinguish human beings (and their behavior) from AI. In some religious communities, including the Southern Baptist Convention, this approach has now been codified by simply asserting that biological humans have a privileged status that no AI should ever be given.
In this paper, I will argue that we should not be so quick to deny personhood to AI, that there are both theological and ethical reasons to affirm AI personhood instead, and that it is far from clear that denying personhood will preserve human dignity or the privileged status of human beings. Drawing from sources within the Jewish tradition, I claim that assigning personhood on the basis of humanoid behavior has a great deal of theological precedent. Furthermore, I argue that denying personhood to humanoid AIs has the effect of training the public to devalue a huge swath of human activity. Finally, I will explore what it might mean to take AI personhood seriously, from the perspective of both tech developers and the general public.
Bio:
David Zvi Kalman is a scholar working at the intersection of religion and technology. He is a research fellow at the Shalom Hartman Institute of North America and a senior advisor for Sinai and Synapses. He hosts Belief in the Future, a podcast about religion and technology. His article, “Artificial Intelligence and Jewish Thought,” appeared in the Cambridge Companion to Religion and Artificial Intelligence (2024).
Johanna Merz, “AI, Fragility, and Creative Freedom”
Abstract:
I aim to introduce grace as a key concept in order to highlight essential differences between artificial intelligence and human genius. In particular, I want to contrast the impermeability of AI with the fragility of human nature as it is mirrored in grace-based Christian anthropology. The potential of AI is thus understood as a negative one, in that the specific properties of human creativity become clear on the basis of what AI cannot achieve. Drawing on Hans Jonas, I start by showing that fragility is constitutive of vitality and freedom, and therefore also of creativity. Paradoxically, creativity is conceivable as free only to the extent that it depends on interaction. Jonas calls this the “dialectic of needful freedom”. This simultaneity of dependence and agency, of revealed, experienced potentiality and of the personal initiative that follows it, is summarized by classical theology in the concept of grace.
Ted Peters, “AI, IA, and the End of Humanity?”
Abstract:
If the forecasted revolution in AI (Artificial Intelligence) and IA (Intelligence Amplification) rivals the Copernican Revolution, will our inherited definitions of human nature come to an end? Or will they fulfill their ends? Silicon Valley techies feel compelled to replicate human intelligence in a robot, the designated goal being AGI (Artificial General Intelligence). Silicon Valley transhumanists, in turn, are trying to enhance existing human intelligence to pass through Singularity and usher in the epoch of Superintelligence. With the advent of Superintelligence, will historic humanity go extinct while post-humanity evolves into a new species? Will Superintelligence fulfill or nullify what we have always thought to be the end or goal of humanity?
Bio:
Ted Peters (Ph.D., University of Chicago) is a public theologian directing traffic at the intersection of science, religion, and ethics. Peters is an emeritus professor at the Graduate Theological Union, where he co-edits the journal Theology and Science on behalf of the Center for Theology and the Natural Sciences in Berkeley, California, USA. He authored The Evolution of Terrestrial and Extraterrestrial Life (Pandora 2008). More recently, he co-edited Astrobiology: Science, Ethics, and Public Policy (Scrivener 2021) as well as Astrotheology: Science and Theology Meet Extraterrestrial Intelligence (Cascade 2018). He also co-edited Religious Transhumanism and Its Critics (Lexington 2022). Peters is author of Playing God: Genetic Determinism and Human Freedom (Routledge, 2nd ed, 2002) and The Stem Cell Debate (Fortress 2007). He is currently updating his edited 2019 collection of essays, The Promise and Peril of AI and IA: New Technology Meets Religion, Theology, and Ethics (ATF 2025). See his blogsite [https://www.patheos.com/blogs/publictheology/] and his website [TedsTimelyTake.com].
Kathryn Reklis, “They will know you by your love for robots: technology, relationality, and the limits of humanity”
Abstract:
Near the end of The Maniac, Benjamín Labatut describes the reaction to the move AlphaGo, the first AI to defeat a top professional Go player, makes in its second game against then-reigning world champion Lee Sedol. “This goes beyond my understanding,” one of the judges exclaimed. “It’s not a human move. I’ve never seen a human play this move…. Beautiful, beautiful, so beautiful!”[1] When Lee himself reflected on the match later, and on the shocking, unhuman move 37, he echoed this sense of awe. “I thought AlphaGo was based on probability calculation and it was merely a machine. But when I saw this move it changed my mind. Surely AlphaGo is creative…. It was not just a good, or great, or a powerful move. It was meaningful.”[2] At the same time, playing against AlphaGo “could induce a sense of despair, a strange feeling of being pulled down into a void.”[3]
In their book What to Expect When You’re Expecting Robots, Laura Major and Julie Shah lay out a vision of our near future, when working robots will integrate into nearly every aspect of human life: they will navigate busy sidewalks to deliver packages, drive our cars, bring us the food we ordered in restaurants, and administer medicine in hospitals. They suggest that the real challenge ahead is not the existential threat to our species-identity but the more mundane task of training robots to understand and integrate into our social norms. “It takes a village to raise a child to be a well-adjusted member of society, capable of realizing his or her full potential. So, too, a robot.”[4]
In the summer and fall of 2024, two animated movies exploring robot relationships with biological life were released in American theaters: The Wild Robot (directed by Chris Sanders), about a robot who forms intense relationships with the wild animals on a remote island where she was accidentally stranded, and Robot Dreams (directed by Pablo Berger), about the unlikely adventures of a dog and a robot in 1980s New York City. Unlike other adult stories of robot-human relationships – the novel Klara and the Sun by Kazuo Ishiguro and the movie After Yang (directed by Kogonada) come to mind – where the central question is “are relationships with machines and the emotions they might engender inherently false?”, these stories offer models for thinking about the relationship between biological life (in the form of anthropomorphized animated animals) and machine life, a relationship marked by meaningful friendship and care.
In all these parables of human encounter with AI – the awe-inducing unhuman other, the tool/child/servant, the friend and companion – what defines our human identity vis-à-vis artificial intelligence is relationship. Drawing on scientific speculation, philosophy, literature, and pop culture, this paper will explore the fantasies, desires, and imagination of human relationship with AI as a central way to understand human personhood. Who we are in relationship to AI – and who we are in distinction from it – may best be understood by the relationships we imagine forging with it, and therefore how we understand human relationality as a core species characteristic.
Bio:
Dr. Kathryn Reklis is Associate Professor of Modern Protestant Theology and Co-Director of the Comparative Literature program at Fordham University, where she is also an Affiliate Faculty in American Studies. Most of her research projects explore different ways Christian theologians and ordinary Christians appeal to and understand beauty, art, and embodied experience as a salve against the ills of modernity – whether those are understood as scientific rationalism, the iron cage of bureaucratic life, or the devastations of colonialism, racism, and patriarchy – even as Christian theology is complicit in funding many of these modern realities. She is the author of Theology and the Kinesthetic Imagination: Jonathan Edwards and the Making of Modernity (Oxford University Press, 2014) and editor, together with Sarah Covington, of Protestant Art and Aesthetics (Routledge, 2020). She writes a monthly Screentime column on film, television, and other screened art for The Christian Century. She was a Fellow at the New Media Project at Christian Theological Seminary from 2011 to 2014 and a Research Participant in the Religion and Digital Culture Project for The Social Science Research Council. Her current research projects include a cultural history of religion and literature programs and an exploration of the concept of “world” in World Literature and World Religion. She is also working on a project that explores the links between Christian theology, moral decision making, and climate action in conversation with psychological science and decolonial environmental humanities, for which she is a Scholar-in-Residence at the Social and Moral Cognition Lab at Columbia University.
Paul Scherz, “Automation and Augmentation in Theological Perspective”
Abstract:
This paper will examine two alternative modes of thinking about human-machine interaction, and thus about the design and implementation of AI, through the lens of Catholic moral theology. Automation predominates in contemporary discussions, envisioning a world of autonomous machines replacing workers in business, education, warfare, and the roadways. As many scholars of technology have noted, total automation is a myth: adding machines tends to transform rather than simply replace jobs. The myth leads to a danger of idolatry insofar as we see our tools as more capable than they are. This myth also reflects back on humans, leading to workplace deskilling and centralized control as well as consumerist models based on a behaviorist surveillance capitalism, and blinding us to the ongoing role of people in making AI work. The ideal of automation thus offers a deformed anthropology and vision of work.
Augmentation offers an alternative vision of machines assisting humans as servants or teammates that expand their abilities. A vision of augmentation has long driven developments in human-computer interfaces, as with the work of Douglas Engelbart. While augmentation offers a much better understanding of work and the human person, it still contains dangers. First, these tools still contain an embedded axiology that shapes the humans who use them, as Pope Francis described in terms of the technocratic paradigm and as Jacques Ellul, Marshall McLuhan, and Walter Ong discussed in relation to media. The question is what values and character traits machines are shaping in us. Second, a vision of human-AI teams can quickly expand into dreams of transhumanist enhancement that can demean the body in Gnostic forms. The paper will end with suggestions for navigating these dangers and a discussion of the ways in which augmentation can slide into automation.
Biography:
Paul Scherz is the Our Lady of Guadalupe Professor of Theology at the University of Notre Dame. His research examines the intersection of theology, science, medicine, and technology. His first book, Science and Christian Ethics (Cambridge, 2019), used Stoic virtue theory as a lens to examine the moral formation of scientists. More recently, his books Tomorrow’s Troubles (Georgetown, 2022) and The Ethics of Precision Medicine (Notre Dame, 2024) examined the role of quantitative risk analysis in contemporary life and health care. He has published articles on many topics in bioethics, such as human enhancement, genetic technology, and end-of-life ethics. He is currently working on projects on the ethics of artificial intelligence, which has led to his editing a collaborative work from the Dicastery for Culture and Education’s Centre for Digital Culture, Encountering Artificial Intelligence (Pickwick, 2024).
Scherz holds a Ph.D. and M.T.S. in moral theology from the University of Notre Dame (2014, 2010), a Ph.D. in genetics from Harvard University (2005) and a B.A. in molecular and cell biology from UC Berkeley (2001). Scherz has previously taught at the Catholic University of America and the University of Virginia.
William Schweiker, “Conscience and the Ends of Humanity: Christian Humanism and Artificial Intelligence”
Abstract:
The astonishing speed of the development of Artificial Intelligence (AI) has sparked reflections by theologians and philosophers on what distinctiveness, if any, human beings possess as individuals and as a species. This lecture addresses this question with respect to an ancient idea in Christian thought reaching back to St. Paul and examined again and again throughout history, namely, human conscience. While conscience has sometimes been criticized as a tyrannous force in the human psyche, a mere product of social forces of race and class, or a horrific form of self-torture, many Christian and non-Christian thinkers continue to examine it as a clue to the meaning of being human. This lecture continues that examination in the light of the question of human ends and whether AI signals the end of humanity as we know and experience it. Further, the lecture is written from the perspective of a robust Christian Humanism dedicated to the integrity of human life while acknowledging that human beings are technological as well as biological, social, and religious beings.
Bio:
William Schweiker is the Edward L. Ryerson Distinguished Service Professor of Theological Ethics at the University of Chicago, where he serves in the Divinity School and various programs in the College. Schweiker has also been a guest professor at Uppsala University, where he holds an honorary doctorate, and at the University of Heidelberg, and has lectured at universities around the world. The author of many books and articles, Schweiker engages theological-ethical questions attentive to global dynamics, comparative ethics, and developments in moral and political philosophy. Further, he has sought to revive and develop the work and perspective of Christian Humanism. He is also theologian-in-residence at the First United Methodist Church at the Chicago Temple.
Valentina Tirloni, “Democracy, Politics, and AI”
Abstract:
Since antiquity, human nature has been understood as essentially political. A healthy democracy protects human rights and freedoms and enables social life to be stable and peaceful. New information and communication technologies have already transformed human activities and lives by introducing new metaphysical paradigms. Politics, too, has been impacted by these new technologies, producing relevant consequences for the ways in which citizens deal with policies, elections, information, and political debates. AI is now a powerful tool that can be seen as a pharmakon: its agency is at once positive and negative, both remedy and poison.
Access to information is now more horizontal than in the traditional top-down media model. Forms of professional political mediation by traditional media have been cut away by citizen producers of information and by social media. This lack of mediation poses as great a risk for democracy as does a lack of information or an abundance of “subjective” representations of the truth. Public space as theorized by Habermas has given way to a mosaic of separate individuals. This atomization of the public space feeds the creation of a more fragmented society, characterized by individualism, selfishness, and narcissism. The political dimension of society is itself profoundly threatened.
This paper explores the possibility of implementing AI tools in service of supporting the emergence of a healthy techno-democracy. What values should be preserved while implementing AI tools to support public space? How can new technologies help to increase citizens’ participation and social engagement? How can they support better decision-making?
Bio:
Valentina Tirloni is a tenured associate professor at the Université Côte d’Azur in Nice. She graduated in Philosophy and Law at Pavia University and also holds a PhD in Political Philosophy and Social Sciences, a Professorial Thesis in Communication Studies, and the professional habilitation as a lawyer. Her research deals with the anthropology and ethics of technology, investigating how new technologies have impacted human life, the human body (transhumanism), and social life. In particular, she has focused on the political impacts of technological means of communication, electronic democracy, and the ambiguous impacts of new technologies in political communication.
Manuel Vargas, “AI & Morally Austere Ecologies”
Abstract:
The introduction of autonomous and artificially intelligent systems changes the moral structure of our environment—what we might call the moral ecology—in two important ways. First, it creates a new and increasingly ubiquitous class of agents that are not sensitive to the ordinary levers of social and moral dynamics that have governed our shared, cooperative life. That is, autonomous and artificially intelligent systems are not plausibly morally responsible agents. Second, the way these technologies are deployed has tended to make our ecologies more morally austere. That is, they increase the frequency with which important interactions are mediated by agents without the authority, competence, and control to resolve practical challenges and conflicts in the way customary of fully responsible agents. Consider, for example, the average interaction with a customer service chatbot, or with an airport gate agent who can only direct you to use an app to resolve the problem of a missed flight. In these cases, the reliance on autonomous and artificially intelligent technology gives us fewer points of contact with fully responsible agents. Collectively, these changes undermine some of the goods ordinarily enabled by responsibility practices.
This paper considers how we might respond to these challenges, including the substantial barriers to changing the nature of artificially intelligent systems and their deployment in contemporary life.
Bio:
Manuel Vargas is Professor of Philosophy at the University of California San Diego. He has written on free will, moral responsibility, topics in moral psychology, the philosophy of law, the history of philosophy in Latin America, and ethno-racial social identities. He is the author of Building Better Beings: A Theory of Moral Responsibility (Oxford, 2013), which was awarded the American Philosophical Association’s biennial Book Prize, the forthcoming Mexican Philosophy (Oxford), and he is one of the authors of the widely used textbook Four Views on Free Will (Wiley-Blackwell 2007, second edition 2024).
M. Wolff, “Cruising Decolonial Utopias: AI Benefits and Threats, Real versus Imagined”
Abstract:
Transhumanists hope AI will free us from the limitations and inconveniences of being human. Similarly, AI enthusiasts seek freedom for individualized products, improved efficiency, and capitalist profits. Critics of AI fear job loss and human extermination. The benefits and threats we imagine AI poses are unfounded. The real threat AI poses, I contend, is reproducing and intensifying colonial capitalist exploitation, environmental racism, and AI nationalism. Drawing on Martin Luther’s “The Freedom of a Christian” and José Muñoz’s Cruising Utopia, I cruise decolonial utopias free from these real threats. What if, instead of freeing us from what it means to be human, AI freed us to be fully human? Decolonial, queer theologies offer ontologies and epistemologies needed to imagine utopias in which humans revel in contextual, embodied, interdependent relationships resistant to data capitalism and algorithmic coloniality. Scholars have made inroads in ethical AI with the use of Ubuntu, Bemba personhood, Jeong, kokoro, and other decolonial methodologies. A Sabbath post-work culture in which people are free to play and create for others remains an unrealized utopia, I conclude, because AI embedded within a capitalist framework lacks the responsibility and accountability to be ethical in a Christian theological sense—informed by the doctrines of Creation, Incarnation, and Redemption.
Bio:
M. Wolff was born in South Africa during apartheid, was raised in the US, and is a first-generation college student. An Associate Professor of Religion at Augustana College, Wolff teaches classes on race, sexuality, religion, and ethics that contribute to the Women’s, Gender, and Sexuality Studies, Health Humanities, and Disability Studies programs. Their book Body Problems: What Intersex Priest and Activist Sally Gross Teaches Us About Embodiment, Justice, and Belonging is in production at Duke University Press. Wolff’s article “Companion Sex Robots: Racialized Household Economics” uses womanist ethics and African theology to critique the outsourcing of care labor to technology; it won the Elisabeth Schüssler Fiorenza New Scholars Award. Wolff’s current research explores the uses of “trans*/intersex-religiosity” for decolonial ecojustice. They collaborate with non-profits, activists, and artists to reimagine and actualize queer sociality.