
Blacklisting keywords to keep ads on relevant and appropriate content is a simple technique, but it is not a foolproof solution.

Ask any advertiser, “How do you handle Brand Safety?” Most will answer that they blacklist keywords that are inappropriate or do not resonate with their brand’s image and philosophy. It sounds very simple to handle. In reality, it is not that straightforward.

Recently, a client of ours, a leading D2C player, approached us for a Brand Safety audit, even though it was very confident in its existing solution. To its surprise, our tool captured a very damaging finding, and this was no simple Brand Safety issue.

In this case, we found that the brand’s ads were appearing on a channel promoting terrorism in India. The channel hosted loads of videos spreading venom and anti-national rhetoric. For any brand, this is the worst of nightmares. Being a home-grown brand, there was an added layer of patriotism, and the brand felt like a ‘party to the crime’.

Advertisers relying on rudimentary Brand Safety checks become easy victims of the loopholes exploited by such anti-national, hate-spreading propaganda channels. Such channels are widely available on UGC platforms like YouTube.

Let’s explain a bit further how the basic Brand Safety check embedded in such platforms fails. The platform allows an advertiser to configure a keyword blacklist, which blocks or filters any channel or content meta-tagged with those keywords. In the example above, keywords like ‘terrorism’, ‘terrorist’, ‘war’, and a few similar ones were already configured to be blocked. But the channel on which the ads started appearing used descriptions and keywords in ‘Hinglish’ (Hindi/Urdu words written in English script). The configured filter saw nothing wrong with the channel’s content, considered it potentially high-engagement inventory for the advertiser, and hence started showing ads on the channel.
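To make the failure mode concrete, here is a minimal sketch of the kind of naive blacklist check described above. The keywords, channel metadata, and Hinglish strings are illustrative examples only, not the actual platform configuration or the real channel’s data.

```python
# Minimal sketch of a naive keyword-blacklist check (illustrative only).

BLACKLIST = {"terrorism", "terrorist", "war"}  # the kind of list configured above

def is_channel_blocked(channel: dict) -> bool:
    """Block a channel only if its metadata contains a blacklisted
    English keyword -- the rudimentary check the platform applies."""
    text = " ".join([
        channel.get("title", ""),
        channel.get("description", ""),
        " ".join(channel.get("tags", [])),
    ]).lower()
    return any(keyword in text for keyword in BLACKLIST)

# A channel describing the same content in Hinglish (Hindi/Urdu words written
# in English script) never matches the English blacklist, so it is treated as
# safe, high-engagement inventory and ads get served on it.
hinglish_channel = {
    "title": "Desh ke khilaf propaganda",                      # hypothetical metadata
    "description": "aatankwad aur hinsa ko badhawa dene wale videos",
    "tags": ["jung", "hinsa"],
}

print(is_channel_blocked(hinglish_channel))  # False -> the ad slips through
```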

mFilterIt’s Brand Safety solution, built specifically to manage Brand Safety, goes deeper than keywords. It performs a ‘scenario analysis’, sensing the contextuality of both the medium (the channel) and the content (the message), and only then allows the advertisement to be served, even if, based on targeting parameters alone, the channel seems one of the most promising places to find the audience.
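The inner workings of that scenario analysis are not described here, so the following is only an illustrative sketch of the general idea: screen both the channel and the individual piece of content, regardless of script or transliteration, before an ad is allowed to serve. The toy term map and the `classify_context` and `allow_ad` functions are hypothetical placeholders, not mFilterIt’s actual system.

```python
# Illustrative sketch only: contextual ("scenario") screening that evaluates
# both the medium (channel) and the content (message) before an ad is served.
# The toy classifier maps a few transliterated terms to unsafe categories and
# stands in for a real multilingual model.

UNSAFE_CATEGORIES = {"terrorism", "hate_speech", "adult"}

# Toy term-to-category map covering English and Hinglish spellings (examples only).
TERM_CATEGORIES = {
    "terrorism": "terrorism",
    "aatankwad": "terrorism",   # Hinglish transliteration of "terrorism"
    "jung": "terrorism",        # Hinglish for "war", illustrative mapping
}

def classify_context(text: str) -> set:
    """Return the unsafe categories detected in the text, whether the terms
    are written in English or as Hinglish transliterations."""
    words = text.lower().split()
    return {TERM_CATEGORIES[w] for w in words if w in TERM_CATEGORIES}

def allow_ad(channel_text: str, content_text: str) -> bool:
    """Serve the ad only when neither the channel nor the specific piece of
    content falls into an unsafe category, however promising the targeting."""
    detected = classify_context(channel_text) | classify_context(content_text)
    return not (detected & UNSAFE_CATEGORIES)

print(allow_ad("aatankwad ko badhawa dene wala channel", "jung par video"))  # False
```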

While association with adult and porn content is a growing concern for ‘socially responsible’ brands, the rising instances of advertisements landing on terrorist and anti-national propaganda channels are a bigger one: they make an advertiser inadvertently fund propaganda against the nation. No brand will tolerate this, as it not only risks their reputation but could also jeopardize their entire business operations, especially as IT and other related laws become more stringent in tackling anti-national sentiment, fake news, and the like across the webspace.

 
