“There are no objective values.”

J. L. Mackie, *Ethics: Inventing Right and Wrong* (1977), p. 15

Humanity has no single moral canon from which a common set of ethics can stem. There is conflict all around us, and most conflicts, if not all, arise from differences of opinion, which in turn exist because each person holds a unique set of morals.

These differences in moral opinion have fueled many heated debates, most visibly around policy and law-making. One person celebrates a healthy baby while another insists on the right to terminate a pregnancy. Someone who has watched a loved one suffer from a debilitating condition for years views euthanasia differently from someone who has not. A law-enforcement officer in a violent neighborhood who has seen fellow officers shot sees the question of whether to fire when in doubt differently from a human-rights activist who has not.

Debates between parties with different sets of moral values are indisputably endless, so I will dwell on them no longer than necessary. The aspect of this debate I intend to focus on in this article is the labyrinthine world of machine ethics. Machine ethics, also known as computational ethics, is the branch of the ethics of artificial intelligence concerned with adding moral behavior to machines that use artificial intelligence, otherwise known as artificially intelligent agents. As technology advances, we are seeing more autonomous machines than ever, and this has presented programmers with a problem that is large-scale, unprecedented, and intricate.

Let’s take the example of self-driving cars, an innovation that is rapidly taking off. You are in a self-driving car traveling down the road, “boxed in” by vehicles on all sides: a motorcycle on your right, an ordinary SUV on your left, and two large trucks in front of and behind you. Before acting, the smart car weighs the safety of every party. It knows the following: swerving right means severe harm to the motorcyclist; braking hard or accelerating means compromising your own safety; swerving left into the SUV may cause slightly less net harm overall. What should the smart car do?
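The trade-off described above can be made concrete with a naive utilitarian rule: score every possible maneuver by its estimated total harm and pick the lowest. The maneuver names and harm scores below are invented for illustration; no real autonomous-driving system is this simple:

```python
# A minimal sketch of harm-minimizing choice for the "boxed-in" scenario.
# All harm scores are made-up illustrative numbers, not real crash data.

def choose_maneuver(estimated_harm):
    """Return the maneuver whose estimated total harm is lowest."""
    return min(estimated_harm, key=estimated_harm.get)

# Hypothetical scores mirroring the text: swerving right badly harms the
# motorcyclist, braking or accelerating risks the passenger, and swerving
# left into the SUV does the least net harm.
scenario = {
    "swerve_right_into_motorcycle": 9,
    "brake_into_rear_truck": 7,
    "accelerate_into_front_truck": 7,
    "swerve_left_into_suv": 4,
}

print(choose_maneuver(scenario))  # -> swerve_left_into_suv
```

Even this toy version exposes the real difficulty: someone still has to assign the harm scores, and that assignment is itself a moral judgment.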

Another scenario: you find yourself in the same situation, but with motorcyclists on both sides. The one on your left is wearing a helmet; the one on your right is not. What will your robot car crash into? Forward, endangering your own life? Left, toward the safer, helmeted rider? Or right, toward the exposed, law-breaking motorcyclist? “If you say the biker because she’s more likely to survive, then aren’t you penalizing the motorist? If instead, you swerve towards the biker without the helmet because he’s acting irresponsibly, then you’ve gone way beyond the initial design principle about minimizing harm, and the robot car is now meting out street justice.” These scenarios are posed in a TED-Ed video on the ethics of self-driving cars.

If a human were driving the same car, whatever they did could be written off as impulse, and they might not be punished. The smart car’s action, however, is premeditated, which raises legal confusion. How can this be resolved?

Coming back to our smart car: what will it do? Obviously, what it has been programmed to do. What, then, will the programmer do? That is a tricky question, and any answer can cause an outcry. As discussed above, there are no objective values, so no one can say there is a correct answer. Programming the smart car to act a certain way in various situations will always raise questions like: is the life of a younger person more valuable than that of an older one? Would you kill one person to save five, even though your inaction would leave that one person safe? Dilemmas like these present themselves all the time in machine ethics, and it is unclear whether there can ever be an unambiguous answer.

How can researchers equip a robot to react when it is “making the decision between two bad choices”? Computer scientists working on rigorously programmed machine ethics today favour code that uses logical statements, such as “If a statement is true, move forward; if it is false, do not move.” Logic is the ideal choice for encoding machine ethics, argues Luís Moniz Pereira, a computer scientist at the Nova Laboratory for Computer Science and Informatics in Lisbon: “Logic is how we reason and come up with our ethical choices.” But how can this be done when programmers all have different morals? Isn’t this somewhat like placing justice on those programmers’ plates?
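The “logical statements” style described above can be sketched as a list of condition-action rules evaluated in order, with the first matching condition deciding the action. The rules and world state here are my own illustrative assumptions, not any published machine-ethics code:

```python
# A minimal sketch of rule-based ("logic statement") decision-making:
# each rule pairs a condition with an action; the first rule whose
# condition holds in the current world state fires.

def decide(world, rules, default="do_nothing"):
    """Return the action of the first rule whose condition is true."""
    for condition, action in rules:
        if condition(world):
            return action
    return default

# Illustrative rules echoing the article's example:
# "If a statement is true, move forward; if it is false, do not move."
rules = [
    (lambda w: w["path_clear"], "move_forward"),
    (lambda w: not w["path_clear"], "stop"),
]

print(decide({"path_clear": True}, rules))   # -> move_forward
print(decide({"path_clear": False}, rules))  # -> stop
```

The sketch also shows where the moral burden lands: the rules themselves encode whatever values the programmer chose, which is exactly the worry the paragraph raises.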

McDonald and Pak (1996), researching the cognitive frameworks people use to make decisions, identified eight frameworks that influence ethical decisions, including:

  • Self-interest: selfishly gaining the greatest degree of personal satisfaction
  • Utilitarianism: the decision to produce the greatest ratio of good over bad for everyone
  • Categorical Imperative: regardless of the consequences, the decision is either morally right or wrong
  • Duty: the decision may be inherently right because of the duty one has
  • Justice: concern for the fairness of the decision
  • Neutralization: the decision to reduce the possible impact of norm-violating behaviors upon self-concept and social relationships
  • Light of day: the decision to consider the question, “What if this information went public?”
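To see how sharply these frameworks can diverge, consider scoring the same two options under self-interest versus utilitarianism. The option names and harm numbers below are invented purely for illustration:

```python
# A toy comparison of two of the frameworks above applied to the same
# choice. Harm values are invented for illustration only.

options = {
    "option_a": {"harm_to_self": 1, "harm_to_others": 5},
    "option_b": {"harm_to_self": 4, "harm_to_others": 1},
}

def self_interest(opts):
    """Minimize harm to oneself, ignoring everyone else."""
    return min(opts, key=lambda o: opts[o]["harm_to_self"])

def utilitarian(opts):
    """Minimize total harm across all parties."""
    return min(opts, key=lambda o: opts[o]["harm_to_self"]
                                   + opts[o]["harm_to_others"])

print(self_interest(options))  # -> option_a (total harm 6)
print(utilitarian(options))    # -> option_b (total harm 5)
```

The two frameworks recommend opposite actions on identical facts, which is precisely why choosing a framework for a machine is itself an ethical act.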

Since the degree to which each of these frameworks applies obviously varies from person to person, how is the problem of machine ethics to be solved? Is it right to program robots with a customer’s own personal moral values? Is it even feasible? Should a government-regulated set of rules apply here, similar to ordinary law-making?

These are a few questions that I think we, as a society, should discuss and reflect upon. The pace of development is such that these difficulties will soon affect health-care robots, military drones, and other autonomous devices capable of making decisions that could help or harm humans.

Note: the words ‘morals’ and ‘values’ have been used interchangeably, as have ‘computer scientists’ and ‘programmers’. This article is a discussion piece more than anything else, intended to open up further meaningful dialogue.

Credits: medium.com, wikipedia.com, Springer Link

This article was written by Bharati Challa, a new contributor and soon-to-be writer.
