Got an alert for an interesting new book entitled Moral Machines: Teaching Robots Right from Wrong.
“Computers are already approving financial transactions, controlling electrical supplies, and driving trains. Soon, service robots will be taking care of the elderly in their homes, and military robots will have their own targeting and firing protocols. Colin Allen and Wendell Wallach argue that as robots take on more and more responsibility, they must be programmed with moral decision-making abilities, for our own safety. Taking a fast-paced tour through the latest thinking about philosophical ethics and artificial intelligence, the authors argue that even if full moral agency for machines is a long way off, it is already necessary to start building a kind of functional morality, in which artificial moral agents have some basic ethical sensitivity. But the standard ethical theories don’t seem adequate, and more socially engaged and engaging robots will be needed. As the authors show, the quest to build machines that are capable of telling right from wrong has begun. Moral Machines is the first book to examine the challenge of building artificial moral agents, probing deeply into the nature of human decision making and ethics.”
Now, given what I’ve said on the inseparability of Christian ethics from Christian soteriology, what the heck does that mean for machine ethics? That’s the question I have for myself.
Insofar as the machine is NOT a moral agent, its actions are the responsibility of the one who deploys it. So you would have the machine programmed not to do anything you would not do. If you have the morality programmed by another, this would be an issue of trust. Kind of like trusting a dog trainer. It is wrong to introduce a dangerous dog to a situation where it is likely to do harm. But a trustworthy dog is not considered a problem.
“So you would have the machine programmed not to do anything you would not do.”
And how exactly do you go about doing that? If you’re putting a machine into a combat situation (an example Matt is reasonably fond of), there are all kinds of situations, with virtually countless courses of action. Programming a machine with every possible situation so that it knows exactly what you would do in each of them is, for all practical purposes, impossible.
I can imagine similar situations in the realm of health care, and just about any other area where we might let machines loose.
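Just to put some rough numbers on how hopeless the “enumerate every situation” approach is, here’s a toy back-of-the-envelope sketch (every feature and value below is invented for illustration, not drawn from any real system):

```python
# The naive "program in exactly what you would do" approach amounts to a
# lookup table from fully described situations to actions. Even a crude,
# hypothetical description of a combat situation explodes quickly.

features = {
    "target_identity": ["combatant", "noncombatant", "unknown"],
    "target_behavior": ["firing", "fleeing", "surrendering", "stationary"],
    "civilians_nearby": ["none", "few", "many"],
    "visibility": ["clear", "obscured"],
    "comms_with_operator": ["available", "lost"],
    "rules_of_engagement": ["weapons_free", "weapons_hold"],
}

# One hand-authored "what I would do" entry is needed per combination.
combinations = 1
for values in features.values():
    combinations *= len(values)

print(f"Distinct situations to enumerate: {combinations}")
# Six coarse, made-up features already give 288 cases; real situations are
# continuous and open-ended, so the table can never actually be finished.
```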
Rick, if only it were so easy. Multiple programs involved. Multiple programmers involved in each program. Multiple operators in the field. Any one of these could fail, and that alone makes it hard to assign moral responsibility. But above and beyond that there is also the challenge of the emergent properties of complex systems: perfectly functional programs may not interact perfectly. Robotics experts working for the military have already acknowledged that this problem is much, much deeper than most pundits realize.
And that’s before we even begin to discuss the more advanced AI.
Furthermore, your dog analogy fails because the benefit of these machines is precisely their ability to do harm. What is being aimed for are machines which can outthink, outmaneuver and outkill humans. Forget Asimov; they are trusted to be dangerous. There was a notable incident a few years back when an automated sentry malfunctioned and began shooting in circles, killing 9 soldiers and wounding a similar number, if memory serves me right. The manufacturer determined it was a software glitch and issued a patch. Now, what if it had killed some noncombatants instead? Where could the families of the collateral damage have sought justice? Say it was your son who’d been killed: how would you prove negligence if a programmer had been negligent?
Is it wrong to ask some questions? Is it inappropriate for us to test the ethics?
Jarred, you make a good point: health care is another field where questions are coming up, particularly in the area of aged care. Programmers are foreseeing situations where they would actually want machines to ignore instructions for occupational health and safety reasons, because you can’t pre-program for every contingency. But what if an accident happens as a consequence? You’ve already given the robot permission to ignore humans under some circumstances. This raises the question of programming rudimentary ethical decision matrices.
Now imagine the thorny issues that could come up in child care.
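I’m no programmer, but roughly what I have in mind by a “decision matrix” is something like this toy sketch (all the options, criteria and weights are invented purely for illustration; nothing here comes from the book or any real care robot):

```python
# A toy "rudimentary ethical decision matrix" for an aged-care robot.
# Every option, criterion and weight below is invented for illustration.

WEIGHTS = {"patient_safety": 5, "obey_instruction": 2, "patient_autonomy": 3}

OPTIONS = {
    # each action is scored 0.0 (bad) to 1.0 (good) on each criterion
    "follow_instruction":     {"patient_safety": 0.2, "obey_instruction": 1.0, "patient_autonomy": 0.9},
    "refuse_and_alert_staff": {"patient_safety": 0.9, "obey_instruction": 0.0, "patient_autonomy": 0.3},
    "delay_and_ask_again":    {"patient_safety": 0.7, "obey_instruction": 0.5, "patient_autonomy": 0.6},
}

def score(option_scores):
    """Weighted sum of an option's scores across the criteria."""
    return sum(WEIGHTS[criterion] * s for criterion, s in option_scores.items())

for name, scores in OPTIONS.items():
    print(f"{name}: {score(scores):.1f}")

best = max(OPTIONS, key=lambda name: score(OPTIONS[name]))
print(f"Chosen action: {best}")
```

Of course, everything morally interesting is hidden in whoever chose those weights, which brings us straight back to Rick’s point about where responsibility sits.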
Matt,
As a software engineer, I probably have a pretty good insight into the woes of trying to program anything as complex as decision making. I’ve also had limited exposure to automated train systems (where the safety guys lose sleep wondering whether all of their methods of ensuring that two trains don’t “accidentally” get incorrect instructions or interpret instructions incorrectly are sufficient) and medical devices (the amount and complexity of software and hardware mechanisms put in place to ensure that a malfunctioning insulin pump can’t randomly inject a lethal dose of insulin is mind-boggling), and I have a strong appreciation for how difficult programming for safety is when the potential for disaster exists. And the kinds of machines they’re talking about now are capable of far more complex behavior.
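To give a flavour of what those mechanisms look like, here’s a stripped-down sketch of the general “defence in depth” pattern around a safety-critical actuator (the names and limits are invented; this is not any real device’s code):

```python
# A stripped-down sketch of layered safety checks around a dosing actuator.
# All names and limits are invented for illustration only.

MAX_SINGLE_DOSE_UNITS = 10.0      # hard ceiling enforced in software
MAX_DOSE_PER_HOUR_UNITS = 25.0    # rolling cumulative limit

class DoseRejected(Exception):
    pass

def request_dose(units, delivered_last_hour, sensor_ok, watchdog_ok):
    """Every request must pass several independent checks before delivery."""
    if not sensor_ok:
        raise DoseRejected("glucose sensor fault: refuse to dose blind")
    if not watchdog_ok:
        raise DoseRejected("watchdog timer expired: controller state untrusted")
    if units <= 0 or units > MAX_SINGLE_DOSE_UNITS:
        raise DoseRejected(f"requested {units} units exceeds single-dose limit")
    if delivered_last_hour + units > MAX_DOSE_PER_HOUR_UNITS:
        raise DoseRejected("hourly cumulative limit would be exceeded")
    return f"deliver {units} units"  # a separate hardware interlock checks again

# Example: a malfunctioning controller requesting a huge bolus gets refused.
try:
    request_dose(80.0, delivered_last_hour=5.0, sensor_ok=True, watchdog_ok=True)
except DoseRejected as reason:
    print(f"Blocked: {reason}")
```

And that’s for a device with one actuator and one job; the machines being discussed here have far more ways to go wrong.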
And this is precisely why I focus on military robotics. There is huge potential for catastrophic malfunction, and what will the likely response be to the relatives of the dead? Oops, that wasn’t supposed to happen, but don’t worry, version 2.0 will be much better. In the name of keeping the troops safe we’re exposing civilians to huge risk. And yet many Christians who claim to be interested in ethics seem indifferent and even defensive.
I’m a bit surprised that I was read as suggesting this was easy. My main point was that the moral decision is made by people, not machines.
“Furthermore, your dog analogy fails because the benefit of these machines is precisely their ability to do harm. What is being aimed for are machines which can outthink, outmaneuver and outkill humans.”
So computers which make banking transactions and service robots that care for the elderly are being made with the intention of outkilling people? This was somehow left out of the original post.
Since my point was that the person who deploys the machine is responsible for what is done by it, I don’t see how my dog trainer analogy fails. My point about the dog is that it is intended to be helpful, but you don’t have full control of its behavior, nor an ability to predict all of its actions. When it bites, you don’t generally blame the dog for its lack of morality. You blame the person who introduced it into the situation. But there are situations in which we do find the risk worthwhile.