Artificial intelligence can say yes to the dress

Only two of these images were taken by a camera. (Spoiler: Row 1, C and Row 2, D)

The sometimes glamorous job of modeling may be going the way of the elevator operator.

An online fashion tech startup is selling technology that analyzes a piece of clothing and automatically generates an image of the garment on a person of any size or shape, wearing any kind of shoes. The company is currently talking to retailers about replacing the continuous stream of photo shoots they arrange for each new run of clothing.

Instead of hiring a professional photographer, models, and a studio, retailers only have to take a picture of the garment laid out on a plain surface. The AI can generate a human figure, then predict how the garment would fit it. Since there is no real-life model, the AI can render any body shape or skin tone. This isn’t a death knell for high fashion photography or artistic cover shoots, but the days of a headless model photographed against a white background may soon be over.

The technology, developed by engineers Anand Chandrasekaran and Costa Colbert, uses a machine-learning approach called generative adversarial networks, or GANs. The system pits two AIs against each other: a generator and a critic. The generator tries to make an image that looks good, and the critic decides whether it looks good enough. GANs are a relatively new concept, credited to Google’s Ian Goodfellow in 2014, and work especially well for generating images. The startup’s innovation lets it specify how each image should be generated.

Neural networks, the technology GANs are built on, are a rough approximation of how our brain works: millions of tiny, distributed neurons process data and pass it along to the next neuron. By breaking an image into millions of pieces at different levels of abstraction, each neuron learns a little about the data it’s meant to process: what we would see as the shape of an elbow, a hip, or a color. These networks are trained on thousands of images, and the neurons learn to distinguish different kinds of elbows, hips, and colors.
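To make the generator-versus-critic dynamic concrete, here is a minimal, self-contained sketch of a GAN-style training loop on one-dimensional data. This is purely illustrative and is not the startup’s code: a one-parameter generator learns to produce numbers that look like "real" samples clustered near 4.0, while a tiny logistic critic learns to tell real from fake. All the numbers and names are assumptions chosen for the toy example.

```python
import math
import random

random.seed(0)

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def real_sample():
    # "Real" data: scalars clustered around 4.0.
    return 4.0 + 0.1 * random.uniform(-1, 1)

# Generator: a single parameter b, mapping noise z to a fake sample.
# Critic: logistic regression D(x) = sigmoid(w*x + c).
b, w, c = 0.0, 0.0, 0.0
lr, decay = 0.05, 0.5  # small weight decay keeps the adversarial game stable

for step in range(3000):
    z = random.uniform(-1, 1)
    x_fake = b + 0.1 * z
    x_real = real_sample()

    # Critic update: push D(real) toward 1 and D(fake) toward 0
    # (gradients of binary cross-entropy through the sigmoid).
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w -= lr * ((d_real - 1.0) * x_real + d_fake * x_fake + decay * w)
    c -= lr * ((d_real - 1.0) + d_fake + decay * c)

    # Generator update: push D(fake) toward 1, i.e. fool the critic.
    d_fake = sigmoid(w * x_fake + c)
    b -= lr * (d_fake - 1.0) * w  # chain rule through the critic

print(f"generator output is now near {b:.2f} (real data sits near 4.0)")
```

The same push-and-pull happens in an image GAN, just with millions of parameters on each side instead of a handful of scalars.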

But the millions of little neurons the signals travel through make the network more complex than even its creator fully understands. As a rough analogy, software programmers don’t need to know how a computer processor works to write code.

Chandrasekaran and Colbert reasoned that if they could identify the specific connections correlated with generating a specific part of the image, they could gain more control over what was generated. So they started to tinker. At first, Colbert says, he altered half the neurons in one of the network’s layers and tested what the network would produce. Through trial and error, the two engineers pinned down exactly the right neurons to alter the size, weight, or shape of a person, or, the hardest part, the shoes they’re wearing. Modify a few neurons, and the person in the generated image gets larger. Change a few more, and boots become sandals. The startup is now working with North American retailers to implement the technology; it can’t say which companies it is working with, but you might have already seen one of its AI-generated creations.
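The tinkering described above can be pictured with a toy model. In this purely hypothetical sketch (not the team’s actual network), a generator’s final layer maps hidden-unit activations to output attributes; if a particular unit happens to feed mostly one attribute, scaling that unit’s activation changes that attribute while leaving the others alone. The attribute names and the wiring here are invented for illustration.

```python
import random

random.seed(1)

HIDDEN = 8
# Hypothetical attribute names for the generator's outputs.
ATTRS = ["body_size", "shoe_style"]

# Random hidden-layer activations for one generated sample.
hidden = [random.uniform(-1, 1) for _ in range(HIDDEN)]

# Output weights, wired so that (by construction) unit 3 feeds only
# "body_size" and unit 5 feeds only "shoe_style".
weights = {a: [0.0] * HIDDEN for a in ATTRS}
weights["body_size"][3] = 1.0
weights["shoe_style"][5] = 1.0

def decode(h):
    # Final layer: each attribute is a weighted sum of hidden activations.
    return {a: sum(w * x for w, x in zip(weights[a], h)) for a in ATTRS}

before = decode(hidden)

# The "tinkering": amplify one neuron's activation and regenerate.
edited = list(hidden)
edited[3] *= 3.0
after = decode(edited)

print(before, "->", after)  # body_size changes; shoe_style does not
```

In a real GAN the mapping from neurons to visual attributes is entangled and had to be discovered experimentally, which is why finding the right neurons took the two engineers so much trial and error.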