We can train AI to identify good and evil, and then use it to teach us morality

Can we code right and wrong?

When it comes to tackling humanity’s most complex questions, can AI make the world more moral?

Morality is among the most deeply human of considerations. The very nature of the human condition pushes us to try to distinguish right from wrong, and living alongside other humans pushes us to treat one another according to those values.

What is good and what is right are questions usually reserved for philosophers and religious or cultural leaders. But as artificial intelligence weaves itself into nearly every aspect of our lives, it is time to consider the implications of AI for morality, and of morality for AI.

There are many conversations around the importance of making AI moral, or of programming morality into AI. For example, how should a self-driving car handle the terrible choice of which of two people in the road to hit? These are interesting questions, but they presuppose that we’ve agreed on a clear moral framework. Though some universal maxims exist in most modern cultures (don’t murder, don’t steal, don’t lie), there is no single “perfect” system of morality with which everyone agrees.

But AI could help us create one.

In his 1986 book Law’s Empire, the late renowned legal philosopher and scholar Ronald Dworkin described the idea of Judge Hercules, an imaginary, idealized jurist with exceptional, superhuman abilities to understand the law in its fullest form. Not only does Judge Hercules understand how to best apply the law in a specific instance, but he also understands how that application might have implications in other aspects of the law and future decisions. According to Dworkin, Judge Hercules epitomizes the sort of legal understanding we strive for in our continued study and application of legal frameworks.

Inherent in this theory is the presumption that the law is an extension of consistent moral principles, especially justice and fairness. By extension, Judge Hercules has the ability to apply a consistent morality of justice and fairness to any question before him. In other words, Hercules has the perfect moral compass.

Like the Hercules of Roman mythology, Judge Hercules will never exist. But perhaps AI and machine-learning tools can help us approach something like it.

Let us assume that because morality is a product of humanity, a perfect moral system exists somewhere within our collective consciousness. Deriving that perfect moral system would therefore simply be a matter of collecting and analyzing massive amounts of data on human opinions and conditions and computing the correct result.

What if we could collect data on what each and every person thinks is the right thing to do? And what if we could track those opinions as they evolve over time and from generation to generation? What if we could collect data on what goes into moral decisions and on their outcomes? With enough inputs, we could use AI to analyze these massive data sets—a monumental, if not Herculean, task—and drive ourselves toward a better system of morality.
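
To make that aggregation step concrete, here is a minimal sketch in Python of one way it might work: crowd-sourced verdicts on pairwise moral dilemmas are fitted with a simple preference model whose learned weights summarize which factors the crowd treats as morally salient. The dilemma features, the simulated respondents, and the model itself are all hypothetical illustrations, not a description of any real system.

```python
# A minimal sketch of the aggregation idea: fit a simple preference model to
# crowd-sourced verdicts on moral dilemmas. All data, features, and weights
# are invented for illustration.
import math
import random

random.seed(0)

def simulate_verdict(option_a, option_b):
    # Hypothetical "ground truth": respondents mostly weigh lives spared,
    # with noise standing in for genuine moral disagreement.
    # Each option is (lives spared, a normalized second feature).
    score = 2.0 * (option_a[0] - option_b[0]) + 0.5 * (option_a[1] - option_b[1])
    prob_a = 1.0 / (1.0 + math.exp(-score))
    return 1 if random.random() < prob_a else 0  # 1 = respondent chose option A

# Simulate a large survey of pairwise dilemma judgments.
dataset = []
for _ in range(5000):
    a = (random.uniform(0, 5), random.uniform(-1, 1))
    b = (random.uniform(0, 5), random.uniform(-1, 1))
    dataset.append((a, b, simulate_verdict(a, b)))

# Logistic regression on feature differences (a Bradley-Terry-style model):
# the learned weights summarize what the crowd treats as morally salient.
w = [0.0, 0.0]
lr = 0.5
for _ in range(300):
    for a, b, chose_a in dataset:
        diff = [a[0] - b[0], a[1] - b[1]]
        p = 1.0 / (1.0 + math.exp(-(w[0] * diff[0] + w[1] * diff[1])))
        for i in range(2):
            w[i] += lr * (chose_a - p) * diff[i] / len(dataset)

print("learned moral weights:", [round(x, 2) for x in w])
```

A preference model this simple is deliberately crude; the point is only that stated moral judgments, gathered at scale, can be distilled into an explicit, inspectable set of weights rather than left implicit.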

For example, AI could help us defeat biases in our decision-making. Biases generally arise when we take only our own considerations into account. If we were to recognize and act upon the wants, needs, and concerns of every group affected by a decision, we’d presumably avoid making a biased one. Consider what that might mean for handling mortgage applications or hiring decisions—or for designing public policy, like a healthcare system, or enacting new laws. Perhaps an AI Hercules could even drive us closer to Judge Hercules and make legal decisions with the highest possible level of fairness and justice.
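
As a toy illustration of that point, the sketch below contrasts a decision scored against a single group’s interests with the same decision scored against the stated preferences of every affected group. The groups, policy options, and scores are all invented.

```python
# A minimal sketch of the debiasing idea: score a decision by aggregating
# the stated concerns of every affected group rather than only one group's.
# "Utility" scores for two hypothetical policy options, as reported by each
# affected group (say, via surveys); all numbers are illustrative.
group_scores = {
    "applicants": {"policy_a": 0.8, "policy_b": 0.3},
    "lenders":    {"policy_a": 0.4, "policy_b": 0.9},
    "regulators": {"policy_a": 0.7, "policy_b": 0.6},
}

def biased_choice(scores, my_group):
    # The biased baseline: consider only one group's interests.
    mine = scores[my_group]
    return max(mine, key=mine.get)

def inclusive_choice(scores):
    # The proposed fix: weight every group equally and pick the option
    # with the best aggregate score.
    options = next(iter(scores.values())).keys()
    totals = {o: sum(g[o] for g in scores.values()) for o in options}
    return max(totals, key=totals.get)

print(biased_choice(group_scores, "lenders"))  # policy_b (0.9 beats 0.4)
print(inclusive_choice(group_scores))          # policy_a (1.9 beats 1.8)
```

Equal weighting is itself a moral choice, of course; the value of writing it down is that the choice becomes explicit and open to debate.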

To be sure, because this AI Hercules would rely on human inputs, it would also be susceptible to human imperfections. Unsupervised data collection and analysis could have unintended consequences and produce a system of morality that actually represents the worst of humanity. But that line of thinking treats AI as an end in itself. We can’t rely on AI to solve our problems for us, but we can use it to help us solve them.

If we could use AI to improve our morality, we could program that improved moral framework back into AI systems—a moral AI machine that builds on itself over and over, refining and spreading our moral capabilities. In that sense, we could eventually even have AI that monitors other AI and prevents it from acting immorally.
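
As a rough sketch of that monitoring idea, and assuming we had a trained moral model to hand, the snippet below wraps a hypothetical action-proposing system in a guard that blocks any action the model scores as likely immoral. The scoring function here is a trivial stand-in for such a model.

```python
# A minimal sketch of "AI monitoring other AI": a guard screens another
# system's proposed actions before they execute. Everything here is a
# hypothetical illustration, not a real safety mechanism.
from typing import Callable

def moral_guard(score_action: Callable[[str], float], threshold: float = 0.5):
    # Wrap any action-proposing system so that low-scoring (judged immoral)
    # actions are blocked and flagged for human review.
    def review(proposed_action: str) -> str:
        score = score_action(proposed_action)
        if score < threshold:
            return f"BLOCKED for review: {proposed_action!r} (score {score:.2f})"
        return f"APPROVED: {proposed_action!r} (score {score:.2f})"
    return review

def toy_score(action: str) -> float:
    # Stand-in for a trained moral model, like the one sketched earlier.
    return 0.1 if "deceive" in action else 0.9

review = moral_guard(toy_score)
print(review("send the user an accurate summary"))
print(review("deceive the user about fees"))
```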

A theoretically perfect AI morality machine is just that: theoretical. But there is hope for using AI to improve our moral decision-making and our overall approach to important global issues. AI could make a big difference in how society makes and justifies decisions. If we could paint a clearer picture of how our actions will affect people, from everyday choices to massive social and international programs, we could likely improve humanity and make decisions better rooted in justice and fairness.