This evening I came across a provocative and thought-provoking comment by robotics ethicist Wendell Wallach, suggesting that AI research has the potential to scientize ethics by making it subject to experimental verification:
Successfully building a moral machine, however we might do so, is no proof of how human beings behave ethically. At best, a working machine could stand as an existence proof of one way humans could go about things. But in a very real and salient sense, research in machine morality provides a test bed for theories and assumptions that human beings (including ethicists) often make about moral behavior. If these cannot be translated into specifications and implemented over time in a working machine, then we have strong reason to believe that they are false or, in more pragmatic terms, unworkable. In other words, robot ethics forces us to consider human moral behavior on the basis of what is actually implementable in practice. It is a perspective that has been absent from moral philosophy since its inception.
But is this valid? What counts as a ‘workable’ ethic? Is it possible to test for a ‘workable’ ethic without predetermining the answer before you ask the question?