MIT developed a course to teach tweens about the ethics of AI

Redesigning YouTube.
Image: Blakeley Payne

This summer, Blakeley Payne, a graduate student at MIT, ran a week-long course on ethics in artificial intelligence for 10- to 14-year-olds. In one exercise, she asked the group what they thought YouTube’s recommendation algorithm was used for.

“To get us to see more ads,” one student replied.

“These kids know way more than we give them credit for,” Payne said.

Payne created an open source, middle-school AI ethics curriculum to make kids aware of how AI systems mediate their everyday lives, from YouTube and Amazon’s Alexa to Google search and social media. By starting early, she hopes the kids will become more conscious of how AI is designed and how it can manipulate them. These lessons also help prepare them for the jobs of the future, and potentially to become AI designers rather than just consumers.

“Kids today are not just digital natives, they are AI natives,” said Cynthia Breazeal, Payne’s advisor and the head of the personal robots group at the MIT Media Lab. Her group has developed an AI curriculum for preschoolers.

Payne is keen to open up the AI field, which many say is rife with bias. According to the latest AI Index (pdf) affiliated with Stanford University, the applicant pool for jobs in AI is 71% male. The sooner you open up the black box of AI, the more accessible it may become to future engineers, the thinking goes. “It’s important for the diversity and the inclusivity battle,” Breazeal said.

Not everyone thinks AI deserves special attention for students so young. Some argue that developing kindness, citizenship, or even a foreign language might serve students better than learning AI systems that could be outdated by the time they graduate. But Payne sees middle school as a unique time to start kids understanding the world they live in: it’s around ages 10 to 14 that kids start to experience higher-level thoughts and grapple with complex moral reasoning. And most of them have smartphones loaded with all sorts of AI.

What can you teach 10-year-olds about AI?

Payne majored in computer science and math, and teamed up with researchers and teachers at the Harvard Graduate School of Education to design the AI ethics curriculum. The course now comprises eight modules that can be incorporated into everything from English to science classes. Students learn technical concepts, such as how to train a simple classifier, along with the ethical implications those concepts entail, such as algorithmic bias.

For example, students have to write an algorithm for the “best peanut butter and jelly sandwich,” which can piggyback on a popular technical writing exercise that asks the same question.

Kids debate what “best” means. Best tasting? Best looking? “Isn’t this just my opinion?” one child asked, revealing how quickly kids can grasp that bias gets built into algorithms.
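The exercise works because even a toy “best sandwich” algorithm has to encode someone’s opinion. Here is a minimal Python sketch of the idea (the scoring function and its weights are hypothetical illustrations, not part of Payne’s actual curriculum):

```python
# A toy "best PB&J" scorer. The point of the exercise: the weights
# below encode one hypothetical judge's taste, not an objective
# definition of "best."

def sandwich_score(peanut_butter_tbsp, jelly_tbsp, bread="white"):
    """Score a sandwich according to one person's preferences."""
    score = 2.0 * peanut_butter_tbsp   # this judge loves peanut butter
    score += 0.5 * jelly_tbsp          # ...and merely tolerates jelly
    if bread == "whole wheat":         # a purely personal preference
        score += 1.0
    return score

# Two sandwiches, ranked by one person's definition of "best":
print(sandwich_score(2, 1))                       # 4.5
print(sandwich_score(1, 2, bread="whole wheat"))  # 4.0
```

Change the weights and the “best” sandwich changes with them, which is the students’ “isn’t this just my opinion?” insight in code form.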

Knowledge is power.
Image: Blakeley Payne

Another activity that explains algorithmic bias is the training of a cat-dog classifier. Kids are given images of both and use Google’s Teachable Machine to train a model on them. The kids hypothesize what it will be able to do, and are then surprised when the AI is better at recognizing cats than dogs. The reason: Payne gave them a biased data set. The students discuss why, alter the system, and then discuss it some more.
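Teachable Machine hides the training loop, but the effect is straightforward to reproduce. The sketch below is a rough scikit-learn simulation, not the camp’s actual materials: synthetic numbers stand in for image features, the model trains on many more cat examples than dog examples, and a balanced test set exposes the gap.

```python
# A rough simulation of the camp's cat-dog exercise (not Teachable
# Machine itself). Synthetic 5-number feature vectors stand in for
# image features; labels: 0 = cat, 1 = dog.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)

# Biased training set: 200 cat examples but only 20 dog examples.
cats = rng.normal(loc=0.0, scale=1.0, size=(200, 5))
dogs = rng.normal(loc=1.0, scale=1.0, size=(20, 5))
X_train = np.vstack([cats, dogs])
y_train = np.array([0] * 200 + [1] * 20)

# Balanced test set: 100 of each class.
X_test = np.vstack([rng.normal(0.0, 1.0, size=(100, 5)),
                    rng.normal(1.0, 1.0, size=(100, 5))])
y_test = np.array([0] * 100 + [1] * 100)

model = LogisticRegression().fit(X_train, y_train)
pred = model.predict(X_test)

# Per-class accuracy: the skewed training data should make the model
# noticeably better at spotting cats than dogs.
print("cat accuracy:", recall_score(y_test, pred, pos_label=0))
print("dog accuracy:", recall_score(y_test, pred, pos_label=1))
```

Rebalancing the training set, which is roughly what the kids do when they alter the system, largely closes the gap between the two numbers.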

One project exposes kids to a GAN (generative adversarial network, a type of AI system) and then asks them to write a fictional piece about the benefits it might offer and the dangers it could pose. Payne described the prompt as asking, “What is the Black Mirror episode of this?” for kids who don’t know what Black Mirror is.

Beta test

Payne piloted the program with 225 students in fifth to eighth grade at the David E. Williams school outside Pittsburgh last autumn. The innovative middle school is replacing all its typical electives with AI-inspired ones: media literacy during library became the AI and ethics course; music class is being replaced with AI-generated music; and geography will use AI to track changes on the surface of the earth.

When Payne arrived, the school was just starting to alter its curriculum. “The kids thought AI was cool but they had no idea what it was,” she said. Most thought it was related to robots, and were disappointed to learn they would not be building Jarvis or Wall-E.

Payne ran into plenty of problems with the course, from the disappointment over the lack of robots to the tween mindset on ethics. Groups discussed the trolley dilemma: a runaway trolley is set to kill five people, but if you pull a lever it changes course and kills only one. This scenario is often adapted for self-driving cars: if the car’s brakes fail, should the autopilot be programmed to stay its course and kill five pedestrians, or swerve and kill the driver?

The kids all wondered why the car couldn’t go in reverse, or whether they could just jump out of it.

“They want to get out of the ethical dilemma,” Payne said. As a result, she altered the discussions to focus more on a middle-schooler’s experience: “We can’t talk about self-driving cars; we can talk about YouTube.”

This summer, Payne took her adapted course and teamed up with a STEM nonprofit called Empow to offer an AI ethics camp for 28 kids. It was held at the Media Lab, and cost $150 for the week. By the end of it, she said kids were able to think of AI in more sophisticated ways.

For example, at the beginning of the workshop students were asked: “Who is a stakeholder in YouTube?” Students identified an average of 1.25 stakeholders, with the top three responses being “I don’t know,” “parents,” and “viewers.” By the end of the week that figure had risen to 3.19 stakeholders, with the top responses changing to “viewers,” “content creators,” and “the YouTube company.”

AI curricula are gaining steam—or STEAM, to name-check the increasingly popular acronym for the sought-after educational mixture of science, technology, engineering, arts, and math. According to EdWeek, AI4ALL, a non-profit trying to increase diversity and inclusion in AI education, research, and policy, ran AI camps on 11 college campuses, including Stanford and Princeton. “It’s easy to portray the danger [of AI] as killer robots,” Tess Posner, AI4ALL’s chief executive officer, told the publication. But the real danger of new technology is that it “might be stopping someone from getting a job because of a biased algorithm that we don’t even know is a biased algorithm,” she added. Fully 78% of last summer’s participants at the AI camps were women and 84% were students of color.

For all the buzz about AI, teachers say there are few materials for them to use in class. “Right now it’s a real challenge to find authentic, meaningful, engaging work” for middle schoolers, said Saber Khan, who teaches computer science to middle and high schoolers in Brooklyn, and helped found #ethicalCS on Twitter to gather and share resources after Obama’s White House announced its “Computer Science for All” initiative in 2016.

He said Payne’s curriculum is “one of the rare ones that is classroom-ready.” One thing he particularly likes is that it allows kids to consider ethics in the context of building AI—the peanut butter and jelly sandwich or the cat-dog classifier—rather than passively reading about it and then reflecting on it. “You can use computation to think ethically,” he added.

Road blocks

It is unclear where to fit another subject into school these days. Teachers are meant to cover a range of academic subjects alongside social and emotional skills, a growth mindset, shooter drills, hygiene, and sex ed, among other things. Teaching AI ethics will inevitably be part of a larger movement to teach more computational thinking, sometimes pitched as computer science, or coding.

Payne believes that, at minimum, kids need to know what they are digesting as consumers. “In the same way you have media literacy initiatives and we teach kids how bias can appear in a news article, we should teach them that in this Google search there’s a negotiation going on,” she said. “Google wants to give you the best information so you come back to Google, but Google also wants to get you to click on more ads.”

Khan, the teacher from Brooklyn, agrees. Everyone, including kids (sometimes especially kids), thinks about fairness and equity, he said. When we put kids in front of screens but don’t teach them to think critically and ethically, they will feel helpless—that their privacy is lost, or that certain voices are naturally magnified and others muffled. “I want them to imagine a better and kinder world,” Khan said, “and thinking ethically is the way to do that.”