There is no doubt that AIs are biased. But many declare that AI’s inequalities exist because we humans are flawed, not the machines themselves. “Are machines doomed to inherit human biases?” the headlines read. “Human bias is a huge problem for AI. Here’s how we’re going to fix it.” But these narratives perpetuate a dangerous algorithm-first fallacy that needs to be nixed.
Yes, humans are subjectively biased. Yes, despite conscious and unconscious efforts not to, we discriminate, stereotype, and make all sorts of value judgements about people, products, and politics. But our biases aren’t being maliciously measured or modeled by the machines. No, machine biases are due to the very logic of data collection: the binary system.
The binary system is the string of 0s and 1s that serves as the foundation of all computer systems. This mathematical method does two things. Firstly, it enables large numbers to be reduced to compact strings of digits and calculated efficiently. Secondly, it enables letters and punctuation to be encoded as numbers via ASCII (the American Standard Code for Information Interchange), which the machine then stores as 0s and 1s.
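Both reductions can be sketched in a few lines of Python; the particular number and the helper name here are illustrative, not part of any specific system:

```python
def to_bits(n, width=8):
    """Render an integer as a fixed-width binary string."""
    return format(n, f"0{width}b")

# A number reduced to binary digits:
print(to_bits(201))         # 11001001

# A letter reduced to its ASCII code point, then to binary digits:
code = ord("A")             # ASCII assigns "A" the number 65
print(code, to_bits(code))  # 65 01000001
```

Every character on this page ultimately passes through an encoding step like the second one before a machine can process it.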
Don’t be fooled, though: These 0s and 1s don’t mean that the machine understands the world and languages like we do. “Most of us, most of the time, are following instructions delivered to us by computers rather than the other way around,” says technology historian George Dyson. To be able to communicate with computers, we are being fitted and biased toward their logic; they are not fitting into ours.
Binary reduces everything to meaningless 0s and 1s, when life and intelligence operate, like X and Y, in tandem. Binary makes it more convenient, efficient, and cost-effective for machines to read and process quantitative data, but it does so at the expense of the nuances, richness, context, dimensions, and dynamics in our languages, cultures, values, and experiences.
But we shouldn’t bemoan Silicon Valley developers for the biased binary system—we should blame Aristotle.
When you think of Aristotle, you probably think of the Ancient Greek philosopher as one of the founding fathers of democracy, not the progenitor of centuries of flawed machine logic and scientific methods. But his theory of “dualism”—whereby something is one or the other, true or false, logical or illogical—is what landed us in this sticky situation in the first place.
Around 350 BC, Aristotle wanted to reduce and structure the complexity of the world. To do this, he borrowed from Pythagoras’s Table of Opposites, in which two items are compared:
- finite, infinite
- odd, even
- one, many
- right, left
- rest, motion
- straight, crooked
But instead of applying this dualism to value-neutral geometry, as Pythagoras had, Aristotle applied this dualism to people, animals, and society. By doing so, he socially engineered a hierarchical patriarchy and divisive polarity that was rooted in his internal values and biases against others: The items he ordained to have more worth became 1s, and those of lesser importance 0s. When it came to women, for example, he wrote, “The relation of male to female is by nature a relation of superior to inferior and ruler to ruled.”
Alas, Aristotle’s hierarchical classification system got implemented into AI, load-weighting it in favor of men like him. The very system on which all modern technology is built contains the artifacts of a sexism that is more than 2,000 years old.
- 1 = true = rational = right = male
- 0 = false = emotional = left = female
If Aristotle had created democracy—and democracy is meant to be about true representation—women and people of color should have had equal access to education, voices in the forums, and the right to vote back in 350 BC. There’d have been no need to fight until 1920 for the female vote to be ratified in the US. There’d have been no slavery and no need for the Civil Rights Movement. Everyone would have been classified and considered equal from the start.
Aristotle should have read the memos from his predecessor, Socrates. According to Plato’s recollections, Socrates credited the female oracles at Delphi as “an essential guide to personal and state development.” Furthermore, in Plato’s Symposium, Socrates recalls his time as a student of Diotima of Mantinea, a female philosopher whose intellect he held in high regard. In Book V of Plato’s Republic, Socrates is credited with suggesting that women are equally qualified to lead and govern: “There is no practice of a city’s governors which belongs to a woman because she’s a woman, or to a man because he’s a man.”
But instead of Socrates’ ideas of equality taking root in Western notions of intelligence, we wound up with Aristotle’s logic. Aristotle’s biased ranking is now being looped and reinforced by more than 15 million engineers, most of them unaware of its binary and undemocratic origins.
But let’s not lay the blame only on Aristotle. Two other villains contributed to these social and scientific problems: Descartes and Leibniz.
In the case of Descartes—the 17th-century French philosopher who coined the phrase “I think, therefore I am”—he planted the idea that a subject has no matter or value other than what the observer assigns and infers. (If he had said “We think, therefore we are,” it would have better reflected how we’re symbiotically informed by each other’s perceptions.)
Moreover, Descartes proposed a further separation of mind from the body and emotions in his 1641 treatise, Meditations on First Philosophy. He argued that our minds are in the realm of the spiritual while our bodies and emotions are in the realm of the physical, and the two realms can’t influence each other. This has caused problems in AI because now we’re stacking emotion units on top of binary classification layers in an unnatural and non-integrated way.
Descartes’ deductive-inductive logic, which he explored in his 1637 Discourse on the Method, was conceived because he became disillusioned with the unsystematic methods used by scientists of his time. He argued that mathematics was built on a “solid foundation,” and so he sought to establish a new system of truth based on Aristotle’s 1 = true = valid, and 0 = false = invalid. The difference was that he put Aristotle’s lines of syllogistic logic into a tree structure. Tree structures like these now underpin natural language processing (NLP), from parse trees to tree-structured neural networks.
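As a rough illustration of that dichotomous tree structure, here is a toy true/false decision tree in Python; the questions and verdict labels are invented for the example:

```python
# Each node asks a yes/no question; every answer forces a branch.
# There is no room for "partly" or "it depends" anywhere in the tree.
tree = {
    "question": "Is it rational?",
    True:  {"question": "Is it demonstrable?",
            True: "accept as knowledge",
            False: "treat as opinion"},
    False: "reject as invalid",
}

def decide(node, answers):
    """Walk the tree using an iterator of True/False answers."""
    while isinstance(node, dict):
        node = node[next(answers)]
    return node

print(decide(tree, iter([True, True])))  # accept as knowledge
print(decide(tree, iter([False])))       # reject as invalid
```

Whatever falls between the branches simply has no place in the structure, which is the author’s point about dualist logic.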
Then there’s Leibniz, the German philosopher and lawyer who invented calculus independently from his contemporary, Newton. He created the binary system between 1697 and 1701 as a way to get to “yes/no” verdicts faster and to reduce big numbers into the more manageable units of 0s and 1s.
Unlike the others, Leibniz was a Sinophile. In 1703, the Jesuit priest Joachim Bouvet sent him a copy of the I Ching (the Book of Changes), a Chinese cultural artifact that can be traced back 5,000 years. He was fascinated by the apparent similarities between the horizontal lines and gaps of the book’s hexagrams and the 0 and 1 vertical lines of his binary system. He misinterpreted the gaps as being about nothingness and zeros, so he (wrongly) believed the hexagrams confirmed that his binary system was the right basis for a universal logic system.
Leibniz made three further major errors. Firstly, he rotated the hexagrams from their natural horizontal positions into vertical ones to match his binary lines. Secondly, he separated them from the context of the accompanying Chinese symbols and corresponding numbers. Thirdly, since he wasn’t Chinese and didn’t understand the philosophical heritage or the language, he assumed the hexagrams represented the numbers 0 and 1 when they actually represent negative and positive energies, the male and female of YinYang. These mistakes meant Leibniz lost a lot of information and insight about the I Ching’s codes and its hexagrams’ true meanings.
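Leibniz’s reading can be sketched in Python. The mapping below follows the standard account of his interpretation (broken line = 0, solid line = 1, the six lines read as a six-bit number); the function name and reading order are illustrative assumptions:

```python
def hexagram_to_number(lines):
    """lines: six entries, 'broken' or 'solid', read bottom to top."""
    bits = ["1" if line == "solid" else "0" for line in reversed(lines)]
    return int("".join(bits), 2)

# Hexagram 2 (Kun), six broken lines, became Leibniz's 0:
print(hexagram_to_number(["broken"] * 6))  # 0

# Hexagram 1 (Qian), six solid lines, became Leibniz's 63:
print(hexagram_to_number(["solid"] * 6))   # 63
```

Note how much the function discards: the lines’ orientation, the accompanying symbols, and the YinYang meanings all vanish, leaving only an integer from 0 to 63.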
Instead of creating a coherent universal system, Leibniz’s binary system reinforced Descartes’ Western models of thinking and added to Aristotle’s biased basis, further locking us, and the machines we created, into unnatural logic.
Aristotle’s binary classifications are now manifest throughout today’s data systems, serving, preserving, propagating, and amplifying biases up and across the machine-learning stack.
Examples of binary bias in front-end user interfaces and data processing include:
- swipe right = 1, swipe left = 0
- clicking “like” on Facebook = 1, not clicking like = 0
- our complex emotions being crudely assigned as positive = 1, negative = 0 in NLP frameworks
- converting pairs of objects being compared and their features into 0 or 1, such as apple = 1, orange = 0, or smooth = 1, bumpy = 0
- rows and columns full of 0s and 1s in giant “big data” graphs
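A few of the bullet points above can be made concrete. Everything in this sketch, from the function names to the tiny “positive” word list, is illustrative of how such pipelines flatten input to a single bit:

```python
def record_swipe(direction):
    """Whatever the user's reasons, only one bit survives."""
    return 1 if direction == "right" else 0

def record_sentiment(feeling):
    """Crude NLP-style polarity: positive = 1, everything else = 0."""
    positive = {"joy", "relief", "grudging approval"}
    return 1 if feeling in positive else 0

# An enthusiastic yes and a reluctant, barely-yes become the same 1:
print(record_sentiment("joy"))                # 1
print(record_sentiment("grudging approval"))  # 1

# A genuinely mixed feeling is forced into the 0 bucket:
print(record_sentiment("bittersweet"))        # 0

print(record_swipe("right"))                  # 1
```

The why behind each choice never enters the dataset; only the bit does.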
But the problem with binary logic is that it provides no scope for understanding and modeling why and how people have chosen one option over another. The machines are simply registering that people have made a choice, and there’s an outcome.
The machines are therefore benchmarking to their binary biases, not ours. Sure, we’re filled with our own very human flaws and weaknesses—but the existing frameworks of computing are set up to be incapable of righting those wrongs (and engineers are only writing code that fits the limitations of the legacy logic).
Thankfully, there is an alternative. Aristotle’s, Descartes’, and Leibniz’s Western philosophies are the opposite of Eastern ones based on natural balance, coherency, and integration. The Chinese concept of YinYang, for example, emphasizes the equal and symbiotic dynamics of male and female in us and the universe. These ideas were written down in the I Ching, but Leibniz did not recognize them.
Nature also rejects a binary system. Billions of years before Aristotle’s bias was imprinted into Western computer logic, nature codified intelligence as the entwined co-existence of female X and male Y in our DNA. Moreover, quantum research has shown that particles can exist in superposition states in which they are both 0 and 1 at the same time, just like YinYang. Nature doesn’t pigeonhole itself into binaries—not even with pigeons. So why do we do it in computing?
We don’t classify and qualify the world around us with Aristotle’s hierarchical binary biases. But the way data is collected is black (0) and white (1), with shades of gray provided by percentages of confidence. Meanwhile, nature and Eastern philosophies show that our perceptions are whole waves of rainbow colors.
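That “shades of gray” caveat is thinner than it sounds: the confidence score usually just decorates a decision that is still forced into 0 or 1. A minimal sketch, with an illustrative threshold and scores:

```python
def classify(score, threshold=0.5):
    """Collapse a continuous confidence score into a 0/1 label."""
    return 1 if score >= threshold else 0

# A 51% confidence and a 99% confidence yield the same label...
print(classify(0.51))  # 1
print(classify(0.99))  # 1

# ...while 49% flips to the opposite category entirely.
print(classify(0.49))  # 0
```

The gray exists only upstream of the threshold; what gets stored, served, and amplified downstream is the binary label.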
Until we design non-binary and more holistic modes of categorization into AI, computers won’t be able to model the technicolor moving picture of our intelligence. Only then will the machines represent our diverse human languages, reasoning, values, cultures, qualities, and behaviors.