Any marketer working with or for a fairly well-known brand knows the sad truth – it’s just not possible to sift through all of the online photos posted about their brand. Of course, it would be great to look at all of those photos, as they’re a great way to see how consumers really view and use the product or service. But there are only so many hours in a day!
That’s where Carnegie Mellon University comes in. Eric Xing, an associate professor of machine learning, computer science, and language technologies, and Gunhee Kim, a post-doctoral researcher at Disney Research, have created an algorithm that mines photos from social sites like Flickr, Pinterest, and TwitPic to figure out what people think of a brand based on variables in the photos. Unfortunately, the researchers couldn’t include Facebook, because it blocks public crawling. That’s a letdown, since Facebook is the social site with the largest number of shared photos.
But, even without Facebook, the data is pretty exciting. Digiday spoke to Gunhee Kim about the photo algorithm, what the data mining can mean for brands, and how marketers can use the data to their advantage. Here are some of the highlights from the interview.
Data mining photos just makes sense.
The researchers’ goal was to incorporate photos into the data pool that marketers use to determine how their brand is being perceived online. “Our thought was that text and images complement each other,” Kim said. “The picture can show the connection between users and products, and the context in which people use the brand.”
How the algorithm works.
Brands have relied on surveys in the past, but surveys are expensive and unreliable. An algorithm, by contrast, can quickly gauge general opinion without anyone having to create a survey and sift through the responses. Plenty of people photograph the products they’ve purchased and how they use them in real life. “Looking through the photos, you can see the public perception of the brand on the Web,” Kim said.
The algorithm is essentially software that crawls images on websites like Pinterest and Flickr to identify brand logos. It then groups the images into clusters. Kim uses Louis Vuitton as an example. “Louis Vuitton might have several groups – like bags or clothing,” Kim explains. “So, through those clusters, we detect what most images are presenting; then, we look for where the product is most likely located.” Once the clusters are identified, the researchers can see how the public generally views various product lines.
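The clustering step Kim describes can be sketched in miniature. This is a hedged illustration, not the researchers’ actual system: real pipelines extract visual features and run logo detectors over crawled photos, whereas here each “image” is a hypothetical, pre-computed feature vector, grouped with a tiny k-means so that cluster sizes stand in for product-line popularity.

```python
import math
import random

def euclidean(a, b):
    """Distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def kmeans(vectors, k, iters=20, seed=0):
    """Minimal k-means: assign each vector to its nearest centroid,
    then recompute centroids, repeating for a fixed number of passes."""
    rng = random.Random(seed)
    centroids = rng.sample(vectors, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in vectors:
            nearest = min(range(k), key=lambda c: euclidean(v, centroids[c]))
            clusters[nearest].append(v)
        centroids = [
            [sum(dim) / len(c) for dim in zip(*c)] if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return clusters

# Hypothetical feature vectors for crawled brand photos: one group of
# points stands in for "bags" images, the other for "clothing" images.
images = [[1.0, 0.1], [0.9, 0.2], [1.1, 0.0],   # "bags"
          [0.1, 1.0], [0.0, 0.9], [0.2, 1.1]]   # "clothing"

clusters = kmeans(images, k=2)
sizes = sorted(len(c) for c in clusters)
print(sizes)  # → [3, 3]: two product-line clusters of equal size
```

In a real deployment the vectors would come from image descriptors rather than hand-made coordinates, but the idea is the same: once photos fall into clusters, the relative size of each cluster hints at which product lines consumers photograph most.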
Trends definitely emerge.
When looking at Nike and Reebok, Kim notes, you can see some major differences. Nike images tend toward jogging, while Reebok skews toward American football and NFL jerseys. Nike also has more variety and a higher volume of images, so you can infer that Nike is more popular than Reebok.
Kim says that another (probably obvious) trend is that images change quite a bit with the season. The same goes for events, since people take a lot of photos at them. Kim and his team saw images associated with Rolex around horse racing, car events, and yacht events, because Rolex sponsors many of those types of events.
Marketers can really use this data.
It’s obviously great to have this data. But the researchers also thought ahead to how exactly marketers can use the algorithm. For example? “One possibility is to understand the interaction between a person and product,” Kim said. “We then detect the activity of the person.” This is where it gets interesting: by seeing what else a person is doing (via their photos), marketers can serve up more relevant ads. It’s an easy way to get a snapshot of consumers’ personalities.