Digital Attribution. What is it and can it be stolen?

Read time: 3–4 minutes

Digital attribution involves determining which online touchpoints (e.g., ads, searches, emails, social media posts, website interactions) contributed to the desired user action — purchase, registration, or lead acquisition — and assigning these touchpoints an appropriate portion of value to assess their relative impact.

Since customers usually go through many interactions before converting, attribution systems collect signals about these events (clicks, impressions, page views, sessions) and apply a specific model to distribute the conversion value among these touchpoints.

Recent data shows that digital attribution is widely used to measure the effectiveness of marketing activities – 51.5% of marketers use it, of whom 64.3% use it consistently (1).

What models are used in digital attribution?

The most commonly used models are: last click (all credit to the last touchpoint), first click (all credit to the first contact), linear model (equal value assignment to all contacts), time decay (greater weight for newer interactions), and positional model (greater weight for first and last interactions).
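The rule-based models listed above can be sketched in a few lines. This is a minimal, hypothetical illustration — the function name, the decay factor, and the 40/20/40 positional split are common conventions assumed here, not any vendor's actual implementation:

```python
def attribute(touchpoints, value, model="last_click", decay=0.5):
    """Split a conversion's value across an ordered list of touchpoints.

    Returns {channel: credit} under one of the rule-based models.
    """
    n = len(touchpoints)
    if model == "last_click":
        weights = [0.0] * (n - 1) + [1.0]
    elif model == "first_click":
        weights = [1.0] + [0.0] * (n - 1)
    elif model == "linear":
        weights = [1.0 / n] * n
    elif model == "time_decay":
        raw = [decay ** (n - 1 - i) for i in range(n)]  # newer = heavier
        weights = [w / sum(raw) for w in raw]
    elif model == "position":
        # Assumed split: 40% first, 40% last, 20% spread over the middle.
        if n == 1:
            weights = [1.0]
        elif n == 2:
            weights = [0.5, 0.5]
        else:
            mid = 0.2 / (n - 2)
            weights = [0.4] + [mid] * (n - 2) + [0.4]
    else:
        raise ValueError(f"unknown model: {model}")

    credit = {}
    for tp, w in zip(touchpoints, weights):
        credit[tp] = credit.get(tp, 0.0) + w * value
    return credit

path = ["paid_search", "email", "organic", "paid_search"]
print(attribute(path, 100.0, model="linear"))
# paid_search appears twice, so it receives two equal shares
```

Running the same path through different models makes the core problem visible: the "contribution" of a channel is a modeling choice, not a measurement.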

More advanced methods include statistical and algorithmic approaches — e.g., multi-touch, probabilistic, or data-driven attribution — which draw conclusions about the contribution of individual channels based on patterns in the data.
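One simple data-driven idea behind such methods is the "removal effect": estimate a channel's contribution as the share of conversions lost when paths containing that channel are removed. The sketch below is purely illustrative — the paths and conversion flags are invented data, not a real algorithm from any specific platform:

```python
# Each path is (list of channels touched, did it convert?). Invented data.
paths = [
    (["search", "email"], True),
    (["social"], False),
    (["search"], True),
    (["email", "social"], True),
    (["social", "search"], False),
]

def removal_effect(paths, channel):
    """Share of total conversions lost if paths touching `channel` vanish."""
    total = sum(conv for _, conv in paths)
    without = sum(conv for p, conv in paths if channel not in p)
    return (total - without) / total

for ch in ("search", "email", "social"):
    print(ch, round(removal_effect(paths, ch), 2))
```

Full data-driven attribution (e.g., Markov-chain or Shapley-value approaches) is considerably more involved, but this captures the shift in perspective: credit comes from patterns across many paths, not from a fixed rule.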

However, it is important to remember the limitations: rule-based models and data reported by platforms tend to over-attribute value to lower-funnel channels. Additionally, attribution windows, cross-device transitions, and signal loss due to privacy constraints can distort results. Relying solely on these signals can result in incorrect budget allocation and overlook the actual causal role of branding activities or offline touchpoints.

Can attribution be stolen?

It turns out it can be — and what's more, it is a common, real problem. First, there is the so-called single-tool bias: one tool (e.g., Google Analytics) consolidates and favors the signals it sees best — usually onsite ones — which shifts the weights in attribution. Onsite tools "see" all clicks and events in a session, while external sources (ad platforms, organic referrals) may be underrepresented due to data-transmission gaps (lost UTM parameters, redirects, cross-device journeys).

Additionally, implementation errors — such as missing deduplication, inconsistent transaction IDs, or delayed and duplicate pings — make the onsite tool appear to be the sole final cause of conversion. Ad blockers and third-party cookie restrictions only deepen the onsite/first-party advantage.
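The duplicate-ping problem mentioned above has a straightforward fix at the event level: keep only one conversion event per transaction ID, so a double-fired purchase ping cannot inflate onsite credit. A minimal sketch, assuming a hypothetical event shape:

```python
def dedupe(events):
    """Keep the first event per (name, transaction_id) pair; drop repeats."""
    seen = set()
    out = []
    for ev in events:
        key = (ev["name"], ev["transaction_id"])
        if key not in seen:
            seen.add(key)
            out.append(ev)
    return out

events = [
    {"name": "purchase", "transaction_id": "T1", "source": "onsite"},
    {"name": "purchase", "transaction_id": "T1", "source": "onsite"},  # double ping
    {"name": "purchase", "transaction_id": "T2", "source": "onsite"},
]
print(len(dedupe(events)))  # 2
```

In practice this logic lives server-side (or in a CDP), before events fan out to analytics and ad platforms, so every downstream system counts the same conversions.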

This is actual "attribution theft" — but it is manageable: the keys are data integration, deduplication, and unified measurement rules.

How to prevent attribution theft?

Path mapping can partially solve the problem: combine clickstream data (UTM parameters, ad click IDs) with onsite events through common identifiers (transaction_id, session_id, user_id). A good further step is an intermediary system (a CDP or server-side tracking) that normalizes and deduplicates events before sending them to analytics and advertising platforms.
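The join itself is simple once the identifiers are shared. The sketch below illustrates the idea with invented record shapes: ad-click records keyed by session_id are attached to onsite conversions before anything is forwarded downstream, so no single tool gets to "claim" the conversion unchallenged:

```python
# Hypothetical clickstream records, keyed by the shared session_id.
clicks = {
    "s1": {"utm_source": "google", "gclid": "abc123"},
    "s2": {"utm_source": "newsletter", "gclid": None},
}

# Hypothetical onsite conversion events carrying the same session_id.
conversions = [
    {"session_id": "s1", "transaction_id": "T1", "value": 120.0},
    {"session_id": "s3", "transaction_id": "T2", "value": 80.0},  # no click data
]

def enrich(conversions, clicks):
    """Attach click-source data to each conversion via the shared session_id."""
    out = []
    for conv in conversions:
        click = clicks.get(conv["session_id"])
        merged = dict(conv)
        merged["utm_source"] = click["utm_source"] if click else "(unattributed)"
        out.append(merged)
    return out

for row in enrich(conversions, clicks):
    print(row["transaction_id"], row["utm_source"])
```

Conversions with no matching click record are labeled explicitly rather than silently defaulting to the onsite tool — which is exactly the gap that single-tool bias exploits.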

And even more simply: use Google Analytics mainly for measuring traffic sources and external campaign effectiveness. For measuring onsite activities (pop-ups, forms, CTAs, micro-conversions, UX), use a dedicated tool (product analytics, session replay, event tracking) with a different logic for collecting and detailing events.

Why so? GA is optimized for aggregating traffic sources and campaigns; onsite tools better measure session behaviors, sequences, and micro-conversions. Separate systems prevent the "single-tool bias" in which onsite signals dominate the entire attribution model. Reports in GA may look favorable to vendors even when this is a measurement artifact, not actual effectiveness.

Onsite reports in GA should be treated as a suspicious signal requiring verification — integrating raw events, deduplication, and experiments are the only ways to distinguish the real impact of a tool from “attribution theft.”

One should also be cautious about onsite tools reporting their effectiveness in GA. Sooner or later, this leads to cannibalization of external channels and to wrong decisions about how to use them.

Does Quarticon use GA to measure effectiveness?

No, due to the above-described problems, Quarticon does not use GA to assess tool effectiveness (unlike some providers). At Quarticon, we use metrics based on raw data, access to raw logs, and clear conversion measurement rules.

(1) Based on: Only 39% of marketers measure business results