
Turning Attention Metrics Into Performance KPIs in CTV and Display

There is a fundamental disconnect in how the industry talks about attention metrics versus how it uses them.

In boardrooms and whitepapers, attention is hailed as the solution to the “viewability is dead” crisis. It is pitched as the precise measure of whether a human being actually looked at your ad. Yet, in practice, attention scores often end up as vanity metrics—a nice-to-have slide at the end of a quarterly business review that sits totally separate from the hard data used to buy media.

For senior performance marketers and programmatic teams, the value of attention data isn’t in the retroactive report. It’s in the optimization rules.

If attention metrics are to justify their cost, they must move beyond “insights” and into the operational layer of your stack. This means translating high-level attention scores into specific buying thresholds, supply path optimizations (SPO), and performance KPIs that correlate directly with ROAS.

Attention Metrics vs. Traditional Metrics

To operationalize attention, we first need to distinguish it from the legacy metrics currently governing bid logic.

  • Viewability (MRC Standard): Measures if the ad had a chance to be seen (e.g., 50% of pixels in view for 2 continuous seconds on CTV). It is a technical baseline, not a measure of impact.
  • Video Completion Rate (VCR): Measures if the file played to the end. On CTV, this is often a function of the user’s inability to skip rather than their genuine interest.
  • Attention Metrics: These vary by vendor (e.g., Adelaide’s AU, DoubleVerify’s Attention Index, IAS’s Quality Attention) but generally synthesize data signals to predict or measure human focus. This includes exposure (time in view, screen real estate), engagement (touch, volume control, pause/rewind), and eye-tracking/presence panels (verifying if eyes are actually on screen).

The Operational Difference: Viewability is binary (pass/fail). Attention is a spectrum. A placement can be 100% viewable but have an attention score of zero if the user is looking at their phone. Conversely, a placement might have lower traditional viewability but command high-intensity attention from the target audience.

Optimizing for viewability often pushes spend toward “safe” but low-impact inventory (like small player video on long-tail sites). Optimizing for attention pushes spend toward inventory that captures mental availability.

Where Attention Metrics Break Down

Most brands fail to get value from attention data because they treat it as a standalone trophy.

The common mistake is looking at an aggregated “Campaign Attention Score” of 75 and patting the team on the back. This number is useless for decision-making. An average score hides the variance between your best-performing CTV app and your worst-performing display domain.

Furthermore, comparing attention scores across formats without normalization is dangerous. A 15-second CTV spot will almost always have a higher raw attention score than a 300×250 display banner. Does that mean you should cut all display spend? No. It means you need format-specific benchmarks.
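One way to build those format-specific benchmarks is to normalize each placement's score against its own format's distribution rather than the campaign average. A minimal sketch, assuming illustrative placement records and field names (nothing here reflects a specific vendor's schema):

```python
# Sketch: score each placement as a z-score relative to its own format's
# benchmark, so CTV and display are never compared on raw scores.
from statistics import mean, stdev

placements = [
    {"id": "ctv_app_1", "format": "ctv",     "attention": 72},
    {"id": "ctv_app_2", "format": "ctv",     "attention": 58},
    {"id": "display_a", "format": "display", "attention": 31},
    {"id": "display_b", "format": "display", "attention": 19},
]

# Collect scores per format, then compute each format's mean and stdev.
by_format = {}
for p in placements:
    by_format.setdefault(p["format"], []).append(p["attention"])

stats = {f: (mean(v), stdev(v)) for f, v in by_format.items()}

# A display banner can now "beat" a CTV spot if it outperforms its own
# format's benchmark by more standard deviations.
for p in placements:
    mu, sigma = stats[p["format"]]
    p["attention_z"] = (p["attention"] - mu) / sigma if sigma else 0.0
```

With this framing, the question stops being "is display lower than CTV?" (it always is) and becomes "which placements outperform their format's norm?"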

If attention data doesn’t change where you buy or how much you bid, it is just expensive noise.

Translating Attention Into Optimization Signals

To make this data work, you need to map attention signals to specific levers in your DSP. Here is how to operationalize the data:

1. Qualification and Disqualification Rules

Instead of just blocking non-viewable inventory, establish an Attention Floor.

  • Analysis: Review historical data to identify the attention score threshold below which conversion rates drop off significantly.
  • Action: Create pre-bid filters that exclude inventory sources (domains, apps, or specific ad units) that historically perform below this threshold. This immediately redirects budget away from “zombie” inventory—placements that are technically viewable but historically ignored.
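The analysis-then-action step above can be sketched in a few lines. The historical records, the specific floor value, and the half-of-median-CVR rule are all illustrative assumptions, not a vendor API:

```python
# Sketch: derive an exclusion list for a pre-bid filter from historical
# delivery data. Domains below the attention floor AND below half the
# median conversion rate are treated as "zombie" inventory.

history = [
    # (domain, avg_attention_score, conversion_rate)
    ("premium-ctv-app", 68, 0.012),
    ("news-site",       41, 0.007),
    ("longtail-blog",   22, 0.001),
    ("zombie-widget",   15, 0.0004),
]

rates = sorted(cvr for _, _, cvr in history)
median_cvr = rates[len(rates) // 2]

# Chosen where conversion rates visibly drop off in your own data.
ATTENTION_FLOOR = 25

exclusion_list = [
    domain for domain, attention, cvr in history
    if attention < ATTENTION_FLOOR and cvr < 0.5 * median_cvr
]
# exclusion_list now feeds the DSP's pre-bid blocklist.
```

Requiring both conditions (low attention and low CVR) avoids cutting placements that convert well despite a modest score.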

2. Supply Path Optimization (SPO)

In programmatic, you can often buy the same impression through multiple exchanges. However, the rendering of that ad (player size, load time, audio settings) can differ, affecting attention.

  • Action: Analyze attention scores by SSP (Supply Side Platform). You may find that SSP A delivers an average attention score of 40 for a specific publisher, while SSP B delivers a 60 for the same publisher due to better technical integration. Consolidate spend onto the high-attention paths.
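The SSP comparison above reduces to a group-by on (publisher, supply path). A minimal sketch with illustrative impression records:

```python
# Sketch: average attention per (publisher, SSP) pair, then keep the
# highest-attention path for each publisher.
from collections import defaultdict

impressions = [
    {"publisher": "pub.com", "ssp": "ssp_a", "attention": 42},
    {"publisher": "pub.com", "ssp": "ssp_a", "attention": 38},
    {"publisher": "pub.com", "ssp": "ssp_b", "attention": 61},
    {"publisher": "pub.com", "ssp": "ssp_b", "attention": 59},
]

totals = defaultdict(lambda: [0, 0])  # (score_sum, count) per path
for imp in impressions:
    key = (imp["publisher"], imp["ssp"])
    totals[key][0] += imp["attention"]
    totals[key][1] += 1

avg = {key: s / n for key, (s, n) in totals.items()}

# For each publisher, consolidate spend onto the winning SSP.
best_path = {}
for (publisher, ssp), score in avg.items():
    if publisher not in best_path or score > best_path[publisher][1]:
        best_path[publisher] = (ssp, score)
```

In practice you would also require a minimum impression count per path before trusting the average, and re-run the comparison periodically as SSP integrations change.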

3. Frequency Capping via Attention

Standard frequency caps (e.g., 3 per day) assume every impression has equal weight. They don’t.

  • Action: Move to Attention-Based Frequency. If a user has been exposed to high-attention placements (e.g., two unskippable CTV spots with high eyes-on-screen metrics), cap them sooner. If they have only been exposed to low-attention display banners, extend the frequency cap to ensure message penetration.
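One way to express attention-based frequency is to cap on cumulative attention "load" rather than raw impression count. The weights and the cap of 3.0 below are illustrative assumptions:

```python
# Sketch: weight each exposure by its attention quality and cap once the
# cumulative load reaches roughly "three fully attended exposures".

ATTENTION_WEIGHTS = {
    "ctv_unskippable_high_eyes_on": 1.5,
    "ctv_standard": 1.0,
    "display_banner_low_attention": 0.25,
}
ATTENTION_CAP = 3.0

def should_serve(exposure_history: list[str]) -> bool:
    """True if the user's cumulative attention load is still under the cap."""
    load = sum(ATTENTION_WEIGHTS.get(e, 1.0) for e in exposure_history)
    return load < ATTENTION_CAP

# Two high-attention CTV spots already consume 3.0 of load -> capped.
should_serve(["ctv_unskippable_high_eyes_on"] * 2)   # False
# Eight low-attention banners only consume 2.0 of load -> keep serving.
should_serve(["display_banner_low_attention"] * 8)   # True
```

The same cap thus stops delivery early for users who have genuinely seen the message, while extending reach against users who have only skimmed past it.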

Turning Attention Into Performance KPIs

Attention metrics should serve as a multiplier in your performance models, bridging the gap between media quality and outcomes.

The “Effective CPM” of Attention

A cheap CPM often masks a high cost of attention.

  • Formula: CPAa = CPM ÷ Attention Probability
  • Example:
    • Inventory A: $10 CPM, 20% Attention Probability = $50 Cost Per Attentive Thousand (CPAa)
    • Inventory B: $20 CPM, 60% Attention Probability = $33.33 Cost Per Attentive Thousand (CPAa)

By optimizing for the Cost Per Attentive Thousand (CPAa), you may justify paying higher CPMs for premium CTV inventory because the actual cost of mental engagement is lower.
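The CPAa arithmetic above fits in a small helper. The function name and input guardrail are my own additions; the two inventory examples match the figures in the text:

```python
# Sketch: effective cost of 1,000 *attentive* impressions.

def cost_per_attentive_thousand(cpm: float, attention_prob: float) -> float:
    """CPAa = CPM divided by the probability an impression earns attention."""
    if not 0 < attention_prob <= 1:
        raise ValueError("attention_prob must be in (0, 1]")
    return cpm / attention_prob

inventory_a = cost_per_attentive_thousand(10.0, 0.20)  # 50.0
inventory_b = cost_per_attentive_thousand(20.0, 0.60)  # ~33.33
# Despite the 2x higher CPM, Inventory B is the cheaper source of attention.
```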

Weighting Attribution Models

In multi-touch attribution (MTA), not all touchpoints are created equal. A fleeting banner view shouldn’t receive the same credit as a high-attention video view.

  • Action: Ingest attention scores into your attribution modeling. Weight exposures based on their attention quality. This provides a more accurate picture of which touchpoints are actually driving the conversion, allowing you to cut low-value partners that look good on paper but contribute little to the user journey.
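As a toy illustration of that weighting (the touchpoint data is invented, and real MTA models are far more involved), credit can be allocated in proportion to each exposure's attention score rather than split evenly:

```python
# Sketch: attention-weighted attribution credit for one conversion path.

touchpoints = [
    {"channel": "display", "attention": 10},
    {"channel": "ctv",     "attention": 70},
    {"channel": "display", "attention": 20},
]

total_attention = sum(t["attention"] for t in touchpoints)

# Even-split MTA would hand each touch 1/3 of the credit. Weighting by
# attention share gives the high-attention CTV view 70% of it instead.
for t in touchpoints:
    t["credit"] = t["attention"] / total_attention
```

Partners whose exposures carry little attention weight will see their attributed credit shrink accordingly, which is exactly the "looks good on paper" effect described above.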

Practical Buying Rules Marketers Can Apply

Here are three concrete rules you can implement in your next campaign:

  1. The “High-Impact” CTV Threshold:
    For CTV buys, set a custom algorithm rule to bid up 20% on inventory where the “Eyes-on-Screen” rate exceeds 60% (based on panel data calibration). This ensures you aren’t paying premium CTV rates for ads playing to empty rooms.
  2. The Display “Duration” Filter:
    For display, viewability is not enough. Filter for “Time-in-View > 5 seconds” combined with “Active User State” (user is scrolling/clicking, not idle). While this reduces scale, it significantly increases the quality of the retargeting pool.
  3. The Creative Rotation Rule:
    Monitor attention drop-off rates for creatives. If a specific creative asset sees its average attention score drop by 15% week-over-week, trigger an automatic creative swap. This detects ad fatigue faster than CTR declines.
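Rule 3 is the easiest to automate. A minimal sketch of the fatigue trigger, assuming weekly average attention scores per creative (the score history is illustrative):

```python
# Sketch: flag a creative for rotation when its average attention score
# falls 15%+ week over week.

FATIGUE_DROP = 0.15  # 15% week-over-week decline triggers a swap

def needs_rotation(weekly_scores: list[float]) -> bool:
    """True if the latest week's avg attention fell >=15% vs. the prior week."""
    if len(weekly_scores) < 2:
        return False
    prev, curr = weekly_scores[-2], weekly_scores[-1]
    return prev > 0 and (prev - curr) / prev >= FATIGUE_DROP

needs_rotation([62, 61, 50])  # True: 50 is ~18% below 61
needs_rotation([62, 61, 58])  # False: ~5% decline
```

A production version would also require a minimum impression volume per week so a thin sample can't trigger a false swap.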

How to Test and Validate Impact

Do not blindly accept that higher attention equals better performance. You must validate this for your specific brand.

The Split Test Framework:

  1. Control Group: Standard optimization (optimizing toward Viewability, CPC, or VCR).
  2. Test Group: Attention optimization (optimizing toward high-attention domains/placements using pre-bid segments).
  3. Measurement: Compare the CPA and ROAS between the two groups.

The Incrementality Test:
Take your high-attention inventory segments and run a holdout test. Does the group exposed to high-attention media show a statistically significant lift in site visits or conversions compared to the group exposed to standard viewable media?
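The "statistically significant" check in the holdout comparison can be done with a standard two-proportion z-test. A stdlib-only sketch with illustrative counts:

```python
# Sketch: one-sided two-proportion z-test for exposed vs. holdout
# conversion rates.
from math import erf, sqrt

def lift_significance(conv_test, n_test, conv_ctrl, n_ctrl):
    """Return (absolute lift, one-sided p-value) for test vs. control."""
    p1, p2 = conv_test / n_test, conv_ctrl / n_ctrl
    pooled = (conv_test + conv_ctrl) / (n_test + n_ctrl)
    se = sqrt(pooled * (1 - pooled) * (1 / n_test + 1 / n_ctrl))
    z = (p1 - p2) / se
    p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))  # one-sided upper tail
    return p1 - p2, p_value

lift, p = lift_significance(conv_test=540, n_test=50_000,
                            conv_ctrl=450, n_ctrl=50_000)
# Treat the lift as real only if p < 0.05 (or your chosen threshold).
```

This only establishes that the lift is unlikely to be noise; whether the lift is large enough to justify the attention vendor's fees is a separate, commercial judgment.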

If the test group drives a lower CPA or higher incremental lift, you have built the business case to integrate attention metrics permanently into your buying logic.

Attention as a Tool, Not a Trophy

Attention metrics are technically impressive, but they are not a silver bullet. They are a dataset—neutral until applied.

When operationalized correctly, they act as a high-fidelity filter, stripping away the waste that plagues programmatic display and CTV. They allow buyers to value media based on its ability to capture the human mind, rather than just the device screen.

However, the goal remains unchanged: business growth. If “high attention” inventory doesn’t correlate with sales or leads for your brand, drop it. Use attention metrics to inform your strategy, but let performance be the final judge.
