Introducing Uncertainty In AI For More Ethical Decision-Making

In a recently published paper, Peter Eckersley, the director of research for the Partnership on AI, proposed introducing a sense of uncertainty into AI algorithms to make their decisions more ethical.

Arguing that AI as we know it today is built to pursue only a single mathematical objective, he questions how ethical the decisions of such algorithms can be. In practice, many problems involve multiple competing objectives, and this becomes a serious issue when some of those objectives are ethical ones. For example: if a self-driving car cannot avoid killing one of two pedestrians, how should the car’s control software choose who lives and who dies?

Countless situations like this can arise in reality, and AI systems have to be able to respond to them appropriately.

That’s why, in his paper, Eckersley proposes injecting uncertainty into the decision-making process of AI. He suggests two mathematical tools for this: partially ordered preferences and probability distributions over total orders.

The first tool orders most of the preferences but deliberately leaves a small subset of them unordered, i.e., incomparable. This partial ordering builds a degree of uncertainty directly into the objective.
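
To make this concrete, here is a minimal Python sketch of partially ordered preferences. The outcome names and preference pairs are hypothetical, chosen only to illustrate the idea: two outcomes are deliberately left incomparable, and the system can detect that no ordering exists between them.

```python
from itertools import combinations

# Explicit preferences as (better, worse) pairs. Note that "swerve_left"
# and "swerve_right" are deliberately NOT compared to each other.
PREFERENCES = {
    ("brake", "swerve_left"),
    ("brake", "swerve_right"),
    ("swerve_left", "crash"),
    ("swerve_right", "crash"),
}

def prefers(a, b, prefs=PREFERENCES):
    """Return True if `a` is (transitively) preferred to `b`."""
    frontier, seen = {a}, set()
    while frontier:
        x = frontier.pop()
        seen.add(x)
        for better, worse in prefs:
            if better == x:
                if worse == b:
                    return True
                if worse not in seen:
                    frontier.add(worse)
    return False

def incomparable(a, b):
    """True when neither outcome is preferred to the other."""
    return not prefers(a, b) and not prefers(b, a)

outcomes = ["brake", "swerve_left", "swerve_right", "crash"]
for a, b in combinations(outcomes, 2):
    if incomparable(a, b):
        print(f"No ordering defined between {a!r} and {b!r} -> defer to humans")
```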

The second tool keeps several fully ordered preference lists (total orders), each with a probability attached to it. This expresses uncertainty directly: the system is committed not to a single ranking, but to a weighted mixture of plausible rankings.
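
The sketch below illustrates this second tool, again with hypothetical actions and probabilities: each total order ranks every action and carries a weight, and we can compute how likely each action is to come out on top.

```python
ACTIONS = ["brake", "swerve_left", "swerve_right"]

# Each total order ranks ALL actions, best first, with an attached probability.
TOTAL_ORDERS = [
    (0.5, ["brake", "swerve_left", "swerve_right"]),
    (0.3, ["brake", "swerve_right", "swerve_left"]),
    (0.2, ["swerve_left", "brake", "swerve_right"]),
]

def best_action_distribution(orders=TOTAL_ORDERS):
    """Probability that each action is ranked best under the distribution."""
    dist = {a: 0.0 for a in ACTIONS}
    for p, order in orders:
        dist[order[0]] += p
    return dist

print(best_action_distribution())
# -> {'brake': 0.8, 'swerve_left': 0.2, 'swerve_right': 0.0}
```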

Ultimately, the goal is to make the decisions deliberately uncertain, so that in genuinely hard cases the system remains unsure and hands the dilemma back to human experts for a final decision. Eckersley believes that in this way we can obtain more ethical decisions from our AI systems.
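
A simple decision rule shows how this deferral might work in practice (the 0.9 threshold is an assumption for illustration, not from the paper): act only when one action is clearly best under the distribution, and otherwise return no decision so a human can take over.

```python
def decide(dist, threshold=0.9):
    """Act only when one action is clearly best; otherwise defer to humans."""
    action, prob = max(dist.items(), key=lambda kv: kv[1])
    if prob >= threshold:
        return action
    return None  # too uncertain -> escalate to human experts

dist = {"brake": 0.8, "swerve_left": 0.2, "swerve_right": 0.0}
print(decide(dist))  # None: no action clears the 0.9 threshold
```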
