In March 2016, the AI program AlphaGo shocked the world by defeating world champion Lee Sedol in a widely publicized event watched by more than 200 million people. Go, an ancient game that most likely originated in China, had until then been believed to be beyond the power of any machine. But in the end, the program developed by DeepMind—a UK-based company funded by American Alphabet dollars—soundly defeated the legendary South Korean champion.
When it comes to AI, there is a tension between East and West. In AlphaGo’s case, an ancient game of the East was pitted against a software program developed in the West. The defeat sparked a surge of interest in AI across many Asian countries, and China is now hot on the US’s robotic heels in AI funding.
Society is witnessing a blossoming of the cultural traditions underlying both the design of, and the expectations for, AI. The hope is that a new confluence of ethical thinking will arise from this collective blossoming—one directed toward a common good. But that’s not guaranteed.
Today’s AIs are ubiquitous, with the potential and power to change the world in ways we can scarcely imagine, for better or worse. This exponential surge in AI has thankfully prompted interest in its ethics. However, much of this thinking is dominated by Western theories.
As the field has become more global, more cultures, particularly Eastern ones, have become active players in it. The thinking behind AI ethics therefore needs to include the traditional thinking systems of these cultures. Though China is close to dominating global business and research in the field, other cultural influences are finding a place too, including those from Africa, the Middle East, and the rest of the Global South.
We can no longer simply apply a Western value set to AI—but we shouldn’t start applying a wholly Eastern set, either. As AI becomes more global, the theories that underpin its ethics must therefore also take on global dimensions.
The difference between Western and Eastern philosophy
What are the main differences between Western and non-Western underpinnings in AI ethics?
We should start by acknowledging that both the West and the East (including Africa and the Middle East) are vast regions, each containing a huge variety of traditions. But despite the numerous differences among these varied traditions, we can find common ethical ground in the development and design of AI.
Western ethics is dominated by theories such as deontology and utilitarianism. The first emphasizes the use of reason and logic to find what is believed to be the right answer to ethical problems, and then demands adherence to the resulting moral decisions irrespective of the consequences. The second seeks the greatest good for the greatest number of people. This seems a simple and practical solution, but the difficult part is working out what exactly the good is and how to measure its quantity.
Both dominant theories treat the individual as the determining factor in judging whether an action is good or bad; they’re individualistic in the sense that both reason and utilities belong to particular individuals. There is a third theory—virtue ethics—which regards the cultivation of moral character as the key to ethical judgment. Of the three, virtue ethics appears to be closest to the theories of the East.
The perspective is different when we consider the dominant traditions of ethics in the East, however. For example, the Ubuntu tradition in Africa advocates a focus not on the individual, but on the community. The word “ubuntu” is a Bantu term meaning “humanity,” or more precisely “the bond that binds all of humanity together in a single whole.” This bond is given preference over single individuals: the contrast is between humanity bound together as a single whole and humanity as a collection of atomic individual persons. Dominant Western theories emphasize the latter, with the result that individuals build walls against one another, both literally and metaphorically. With the Ubuntu concept, by contrast, individuals seek out others and hold hands together.
Buddhism also has a profound view of ethics. An ethical action is one that leads its agent toward the ultimate goal, which is supreme happiness. This is the kind of happiness that results when one is completely attuned with nature: It is not the same as the individual preferences or utilities of utilitarianism. In other words, supreme happiness in Buddhism is not the same as supreme pleasure, and it is not individualistic, because it is the same for everybody.
Furthermore, the indigenous spiritual system of Japan—Shinto—holds that there are spirits everywhere: in forests, wind, water, and so on. Its ethics are thus based on harmony with nature; an unethical action is one that breaks the bond that humans already have with their natural environment.
Applying the similarities to AI
Even with the great differences among these traditions, however, we can still find similarities. It is these intersections that pave the way toward a truly global ethics in the age of AI.
Honesty, truthfulness, compassion, and altruism are examples of virtues praised across all traditions, East or West. One should be honest because doing so follows the universal maxim, according to the Kantian strand of Western ethics. On the other side of the world, the Ubuntu and Shinto traditions hold that one should also be honest, because honesty reinforces the bonds that tie us all together. As philosophers and computer scientists collectively and globally ponder the question of the “good life,” these shared virtues are the basis for a global AI ethics.
In the end, we need manufacturers of AI to focus on designs that embody values like honesty, loyalty, truthfulness, and altruism. If embedded correctly, these values will be manifest to end users. The environment that surrounds AI needs to be immersed in human-centric concerns common to all people, not just those in the East or the West.
After all, AIs are manufactured by people, and their design needs to reflect human virtues. A development process grounded in the ethical awareness of everyone involved in a system’s creation and implementation will create a technological future that accounts for culturally varied virtues.