By Tom Fleischman for the Cornell Chronicle
When Lee Humphreys ’99 noticed that models for a retail company – one known for its diversity – appeared lighter-skinned in still photos than in videos, she didn’t think it was intentional.
She reached out to the CEO, who promised to get back to her. She was eventually told that the discrepancy must have been because of her own computer.
But she knew that wasn’t true – she’d asked colleagues to check on their computers, too. The experience sparked an investigation by Humphreys, professor and chair of the Department of Communication in the College of Agriculture and Life Sciences, and doctoral student Chelsea Butkowski into how models’ skin color is depicted in the online retail environment.
Their study, “Computing Colorism: Skin Tone in Online Retail Imagery,” published March 13 in Visual Communication, found that models’ skin tones were statistically lighter in still images than in videos of the same product and model. They also found evidence of “tokenism” – that is, many of the websites had one model who was considerably darker-skinned than the others, serving as “a kind of stand-in for a wide range of diversity,” said Butkowski, the study’s lead author.
To conduct the research, they needed a way to quantitatively measure skin lightness and darkness and to capture inconsistencies in how the same model was depicted across platforms. To characterize that difference accurately, they enlisted Utkarsh Mall, a doctoral student in computer science, and together they developed a visual analysis procedure.
“We wanted to take what we were seeing and support that through a more quantifiable method, beyond just what we were seeing,” Butkowski said. “We did expect to find that there was going to be a difference, and that it would be statistically significant.”
Working with product listings from August 2019, they sampled the first photo and video from 10 women’s dress listings on three retailer websites – Banana Republic, Gap and Old Navy, all subsidiaries of Gap Inc.
Their method involved analyzing two regions of the images: the chin, chosen because it most often faces forward and because one of the retailers crops its still images above the nose; and all visible skin, used to compare skin tones between the still and video formats.
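The paper’s own code is not reproduced here, but the comparison can be pictured with a short, hypothetical sketch: measure the mean grayscale lightness of a hand-marked chin region in a still photo and in a frame taken from the matching video, then look at the difference. The file names, coordinates and the use of the Pillow and NumPy libraries below are illustrative assumptions, not the authors’ actual pipeline.

```python
# Illustrative sketch only: compare mean grayscale lightness of a
# hand-specified chin region in a still photo and a video frame.
# File names and the region box are hypothetical placeholders.
from PIL import Image
import numpy as np

def region_lightness(path, box):
    """Mean grayscale intensity (0 = black, 255 = white) of a cropped region."""
    gray = Image.open(path).convert("L")          # convert to 8-bit grayscale
    crop = np.asarray(gray.crop(box), dtype=float)
    return crop.mean()

# (left, upper, right, lower) pixel coordinates of the chin area,
# marked by hand for this image pair in the illustration.
CHIN_BOX = (210, 340, 290, 400)

still_tone = region_lightness("still_photo.jpg", CHIN_BOX)
video_tone = region_lightness("video_frame.jpg", CHIN_BOX)

print(f"still: {still_tone:.1f}  video frame: {video_tone:.1f}")
print(f"difference (still - video): {still_tone - video_tone:+.1f}")
```

Repeating that measurement across listings and retailers yields paired lightness values that can then be tested for a statistically significant gap between the two formats.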
Greyscale histograms – characterizations of how pixel intensities are distributed – were created for all 30 images to visualize the intensity and clustering of pixels across the range of possible tones.
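Such a histogram can be generated with standard image-processing tools. The sketch below, again an illustration rather than the researchers’ code and with a placeholder file name, bins the pixels of a grayscale image into 256 intensity levels so the clustering of lighter and darker tones becomes visible.

```python
# Illustrative sketch: plot a grayscale histogram for one image,
# showing how pixel intensities cluster across the 0-255 range.
# The file name is a hypothetical placeholder.
from PIL import Image
import numpy as np
import matplotlib.pyplot as plt

pixels = np.asarray(Image.open("model_still.jpg").convert("L")).ravel()

plt.hist(pixels, bins=256, range=(0, 255), color="gray")
plt.xlabel("Pixel intensity (0 = black, 255 = white)")
plt.ylabel("Pixel count")
plt.title("Greyscale histogram")
plt.show()
```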
What also stood out to Humphreys was the diversity in the images, and the ambiguity when it came to ethnicity.
“We took that as generally a good thing,” she said, “that we were seeing greater diversity of ethnicity, to some degree. But when it came down to the skin tones, they were still relatively light.”
The difference in skin tones was clear to see, but what is less evident, the researchers said, is which tones most accurately reflect reality.
“We don’t know if the videos were darker but are closer to what the models actually look like, or if the lighter photos accurately represent the models,” Humphreys said. “Maybe the videos just aren’t well-lit, and so they end up being darker than what the model actually is.”
“We don’t know what we don’t know,” she said, “but what becomes really interesting is that the discrepancy itself becomes problematic, and a potential indicator of photo manipulation or technological bias.”
This story originally appeared in the Cornell Chronicle.