October 6, 2015
Marketers have a huge blind spot when they’re monitoring and analysing social media conversations.
For years we’ve been using text analytics to track and understand what’s being said about brands, or topics, within social media discussions. We use keyword searches to find the mentions that are relevant to us, and then we analyse the full text of the content we find, whether it’s tweets, blog or forum posts, news articles or anything else. This enables us to gain all sorts of useful insight, but it has a big limitation: it doesn’t help us understand the millions of photos that are shared on social channels every year. Most people aren’t particularly meticulous about accurately tagging the photos they share online, which means it’s hard for brands to identify relevant images.
On Instagram alone, 80 million photos are posted every day, with similar figures for Twitter and Facebook, so there are vast swathes of potentially valuable user-generated content going unnoticed. It’s also a problem for crisis comms, because potentially damaging photos can spread virally to a large audience before a brand realises that it needs to take some kind of damage-limitation action.
Images are more impactful, shareable and engaging than text, but they’re also really hard for brands to accurately monitor and analyse. If somebody tweets “Enjoying an ice-cold beer in the sun!” that’s pretty inconsequential from a brand perspective. But if they post a photo of themselves drinking a bottle of beer, we can see what brand they’re drinking, where they are, who they’re with, what they’re doing and all sorts of other useful contextual information, if only we can tap into it somehow.
The good news is that with recent developments in artificial intelligence this is all changing, and brands will finally be able to automatically mine the rich seam of insight buried in social media images. Companies like Indian start-up Gaze Metrix (which Sysomos recently acquired) have developed technologies which can analyse the content of photographs, even without any textual clues from tags or descriptions.
This means that marketers are now able to ask their social listening tool to find all images that contain their logo or branding, and can analyse the returned images in sophisticated ways. The same algorithms that can identify logos in pictures can also be used to identify almost anything else: people, objects, environments.
So you might want to find images that contain your products alongside people smiling (or maybe just a specific gender or age group), and perhaps if you’re doing some summer promotional activity you might want to focus only on outdoor environments like beaches and parks. You could also use this information for analytics, to answer questions such as “where are women most likely to use our product in winter?”
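To make the idea concrete, here’s a minimal, hypothetical sketch of this kind of label-based image filtering. The brand names, labels and the detect_labels() function are illustrative stand-ins for a real computer-vision model, not any vendor’s actual API:

```python
# Illustrative sketch: filtering social images by the labels a vision
# model has detected in them. detect_labels() is a stub standing in for
# a real image-recognition system; the sample data is invented.

SAMPLE_DETECTIONS = {
    "img_001.jpg": {"logo:acme-beer", "person:smiling", "scene:beach"},
    "img_002.jpg": {"logo:acme-beer", "scene:indoor"},
    "img_003.jpg": {"person:smiling", "scene:park"},
}

def detect_labels(image_id):
    """Stub for a vision model returning the labels found in an image."""
    return SAMPLE_DETECTIONS.get(image_id, set())

def find_images(image_ids, required_labels):
    """Return images whose detected labels include all required labels."""
    required = set(required_labels)
    return [img for img in image_ids if required <= detect_labels(img)]

# e.g. find brand photos showing smiling people at the beach
matches = find_images(SAMPLE_DETECTIONS,
                      ["logo:acme-beer", "person:smiling", "scene:beach"])
print(matches)  # ['img_001.jpg']
```

In practice the labels would come from a trained model rather than a lookup table, but the querying layer a marketer interacts with works along these lines: intersecting detected labels against the criteria they care about.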
There are other applications too, such as easily spotting misappropriation of your trademarks or instances of brand-jacking. You might also want to identify any relevant images which are starting to go viral, as this could present either an opportunity for your brand or an emerging crisis that requires an urgent response.
The ability to identify the content of images provides marketers with a whole new level of insight. It’s not just about knowing when your brand is featured in a photo; it’s about really understanding the context of those images in far more sophisticated ways than we can with text analytics. Those of us in the social analytics space are just beginning to implement the technologies which make this possible, but the real innovation will happen once our users start finding interesting ways to use the tools.
Lance Concannon is the UK marketing manager at social media analytics provider, Sysomos