
As the number and autonomy of robots in military conflicts continue to rise, it’s becoming increasingly urgent for Christians to grapple with the ethical questions surrounding the automation of warfare.
We must consider: What are the implications for Just War doctrine? Traditionally, this doctrine has been grounded in the belief that combatants are human. But with the advent of increasingly autonomous machines, that presumption is being challenged. So, what does that mean for our understanding of just warfare?
We must also ask: Can a robot make a moral decision? Can it truly grasp the complexities of proportionality in conflict? If the answer is no, how can we ethically justify the increasing autonomy granted to these machines?
Beyond the philosophical considerations, there are more practical questions to explore:
- Can a robot kill humans in self-defence, or must a human life be at risk?
- Can a robot exercise autonomy where collateral damage risks exist?
I am reminded of Asimov’s laws of robotics, but I am not sure they are going to cut it in the real world. As we navigate this new landscape, we’re confronted with an ethical minefield unlike any we’ve seen before. The robotisation of war invites us to rethink our assumptions and deepen our understanding of morality in a rapidly changing world.
So, what now? How do we, as a community, engage with these pressing questions in a meaningful way?