Better Media Measurement through Content Resonance

TONIK+
Oct 10, 2019


From Estimation to ‘Perfect’ Measurement

The evolution of advertising since the advent of digital ads has been nothing short of revolutionary. For decades, performance was based entirely on estimation: the estimated number of viewers passing a billboard each day, estimated listeners during a radio broadcast, estimated readers of a page 2 ad in the local paper. Nielsen leapt ad performance analytics forward with its “Nielsen Households,” a selected cross-section of households across the country whose TV viewing habits were tracked and extrapolated to reflect the nation as a whole. But despite excellent design and execution, Nielsen ratings are still an estimate: a sample, however well constructed, never correlates perfectly with reality.

The rise of digital advertising over the past 20 years has led to the next great leap forward in evaluating ad performance. Digital ad platforms allow near 1:1 tracking of user actions from ads, providing a bevy of metrics that are as close to perfect as we’ve yet come. This level of accuracy has led to a massive diversification in optimizing to specific outcomes, especially in the social ad market. While CPM used to be the standard-bearer, we can now run ads that optimize for clicks, engagements, views, conversions, and more. This diversification of objectives has led to a corresponding diversification of media: still images, video, slideshow, canvas, and beyond, all in the hope of finding the ad type that pairs best with a particular objective and pushes performance even further. As an example, a video ad might give you a 6x Return on Ad Spend (ROAS), while a slideshow ad might give you 7x. But this raises the question of whether performance is really being driven by the media type, or whether something else is providing that improvement. What’s the driving force here, and can it be isolated and used to improve ad performance?

A Peek Under the Hood

At TONIK+, we’ve been diving into video data to identify this underlying factor in content performance. TONIK+ Video Intelligence (TVI) combines video retention data with machine-learning identification of visual aspects (characters, scene type, styles, etc.) to produce a “TVI Score”: a 0–100 rank of each scene’s performance relative to the rest of the video. This allows TONIK+ to trim away poor-performing parts of the analyzed video, paring it down to a streamlined, shorter remix containing the best scenes from the original. As you’d expect, this process yields content that greatly outperforms non-TONIK+ creative, with TONIK+ remixes typically outperforming by 20%+ in View-based objective campaigns.
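To make the scoring idea concrete, here is a minimal sketch of ranking scenes 0–100 by their relative retention. This illustrates only the retention half of the concept; the actual TVI Score also folds in machine-learning visual features, which are omitted here, and all names and numbers below are hypothetical.

```python
def retention_scores(scene_retention):
    """Map each scene's retention (fraction of viewers who stay through
    the scene) to a 0-100 score relative to the other scenes."""
    lo, hi = min(scene_retention), max(scene_retention)
    if hi == lo:  # all scenes retain equally: give everything a middling score
        return [50.0] * len(scene_retention)
    # Min-max scale so the weakest scene scores 0 and the strongest 100
    return [round(100 * (r - lo) / (hi - lo), 1) for r in scene_retention]

# Hypothetical per-scene retention for a five-scene video
retention = [0.95, 0.70, 0.88, 0.60, 0.90]
scores = retention_scores(retention)  # scene 4 (0.60) is the weakest link
```

A real scoring model would blend several signals rather than min-max scaling a single one, but the output shape is the same: a per-scene rank that makes the weakest scenes easy to cut.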

However, TONIK+ remixes also outperform non-TONIK+ equivalents in non-View objective campaigns. TONIK+ delivered a 36% increase in Landing Page Views along with a 26% decrease in cost per Landing Page View for an auto company, while a campaign for a recently released film drove a 37% increase in Showtime Lookups. Neither set of TONIK+ remixes had the benefit of conversion optimization, yet both still outperformed typical conversion-oriented content. These results, among others, point to an underlying determinant of ad performance that is not tethered to a specific objective or ad type. Conceptually, it is the idea that an ad’s resonance has a major impact on performance: high-resonance ads deliver outsized performance relative to normal- or low-resonance ads. Simply put, the quality of the ad content itself can be a larger determinant of performance than objective, type, and even audience. This rings true logically: good ads will outperform bad ads. But how can we define “good” and “bad” within the ad itself, and how can we remove the “bad” parts and emphasize the “good”?

A Path Forward

This exclusion of “bad” parts and emphasis of “good” parts is, in a nutshell, what TONIK+ Video Intelligence does. Audience retention for a specific scene reflects how engaging that scene is to viewers. If a scene has a high dropoff rate, it has not only lost the user in that moment but also removed the possibility that they’ll see any subsequent scene in the video, representing a huge opportunity cost. High-quality scenes will have a low dropoff rate, especially relative to the overall dropoff rate for the piece of content. By combining these top-performing scenes in a way that makes narrative sense for the message, we can greatly increase viewer retention across the new piece of content. This provides the typical TONIK+ improvement to video completion rate, but it also improves overall video quality and resonance. That quality drives outperformance across objectives at every level of the funnel, from awareness to purchases. So while it isn’t feasible to isolate a “resonance” metric and track Cost per Resonance or Resonance Rate, Video Completions provides a solid proxy for content quality.
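The trimming step described above can be sketched in a few lines: compute each scene’s dropoff rate from viewer counts, keep the low-dropoff scenes, and preserve their original order so the remix still makes narrative sense. The viewer counts and threshold here are illustrative, not real campaign data, and the real process also weighs narrative fit rather than applying a bare cutoff.

```python
def dropoff_rates(viewers_at_start, viewers_at_end):
    """Fraction of viewers lost during each scene."""
    return [(start - end) / start
            for start, end in zip(viewers_at_start, viewers_at_end)]

def select_scenes(rates, max_dropoff):
    """Indices of scenes whose dropoff is at or below the threshold,
    kept in original (narrative) order."""
    return [i for i, r in enumerate(rates) if r <= max_dropoff]

starts = [1000, 900, 850, 600, 550]  # viewers entering each scene
ends   = [900,  850, 600, 550, 500]  # viewers finishing each scene
rates = dropoff_rates(starts, ends)  # scene 2 sheds roughly 29% of its viewers
keep = select_scenes(rates, max_dropoff=0.10)  # scene 2 is cut from the remix
```

Note that the high-dropoff scene is doubly costly: it loses its own viewers and shrinks the audience entering every scene after it, which is exactly the opportunity cost described above.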

Content quality being reflected in Video Completions also explains why TONIK+ remixes do well across many KPIs. High Video Completion Rates indicate highly resonant content: the more users who watch to the end, the more users are receptive to the message. This provides a dual benefit: 1) more users reach the end of the video to receive a call to action, and 2) those users are more likely to follow through on that CTA or next step in their relationship with the brand. A TONIK+ remix with a 20% improvement in VCR delivers 20% more users to the CTA messaging at the end of the video and ensures those users are more likely to continue down the path, be it buying movie tickets, test-driving cars, or trialing new products. Video Completion Rate also holds up well across different campaign objectives. Two campaigns that run in parallel with the same audiences and content but different campaign objectives (Clicks and Video Views, for example) might show two different View Rates, because one optimizes toward audience members most likely to click while the other optimizes toward those most likely to view. Video Completion Rates, however, will remain very similar across both campaigns, since they reflect the quality of the content itself.
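The first half of that dual benefit is simple arithmetic, worked through below with hypothetical numbers (the second half, higher follow-through per completer, would compound on top of it).

```python
# Back-of-the-envelope arithmetic for the VCR lift described above.
# All inputs are hypothetical, chosen only to make the math easy to follow.
impressions = 100_000
baseline_vcr = 0.25   # 25% of viewers reach the end-of-video CTA
vcr_lift = 0.20       # a 20% relative improvement in completion rate

baseline_completers = impressions * baseline_vcr             # 25,000 see the CTA
remix_completers = baseline_completers * (1 + vcr_lift)      # 30,000 see the CTA

extra_cta_viewers = remix_completers - baseline_completers   # 5,000 more viewers
```

If those extra completers also convert at a higher rate, the two effects multiply, which is why a completion-rate lift can show up as an outsized lift in downstream KPIs.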

What does this all mean? Essentially, producing content that engages users will have an additive effect on any KPI. A high-quality video ad can drive far more people into the store to purchase products, at a much lower cost. Novel ad types have their use cases, but they should never be the keystone of an ad strategy. At the end of the day, quality content is quality content.

The vast expansion of digital advertising has left advertisers with an overabundance of choice when it comes to ad creation. Instead of attempting to deploy the perfect ad type for the perfect situation, we believe that focusing ad strategy on the creation of high-quality video ads is the most effective starting point for any objective. At TONIK+, we focus on creating high-quality remixes through data-driven content creation. By determining the most resonant scenes of your content and the aspects that make them work, we distill your videos into high-quality remixes that increase Video Completions by more than 20%, driving more high-quality users to your desired endpoint per dollar spent.

Bryan Williams, VP of Data Science at TONIK+


TONIK+ is a video intelligence and editing solution that utilizes Machine Learning & performance data to maximize the impact of targeted video campaigns.