Making AI-driven discovery measurable — without overclaiming impact
Turning emerging AI visibility into something measurable, comparable, and understandable
The context
As AI-driven discovery began showing up in analytics tools — including AI Overviews, answer engines, and citation reports — it became clear that this was something leadership would eventually ask about.
The problem was that AI visibility didn’t map cleanly to traditional organic metrics:
impressions didn’t equal traffic
citations didn’t equal clicks
brand presence was often ambiguous
and “share of voice” meant very different things depending on scale
There was no existing reporting framework inside the team to explain what this data actually represented — or how seriously to take it.
The goal wasn’t to optimize for AI.
It was to understand what visibility already existed, how it behaved, and how to report on it responsibly.
How the work actually started
This work began when AI visibility data started appearing in tools like Ahrefs.
I pulled:
citation data across AI platforms
impressions and referenced pages
early share-of-voice metrics
But it was immediately clear that raw numbers would be misleading without interpretation — especially because Nulab’s primary product name (“Backlog”) is also a generic project management term.
Before any conclusions could be drawn, the data needed context.
Key analytical decisions
Much of the work here was interpretive, not mechanical.
I made several deliberate decisions to avoid overstating impact:
Separating brand-qualified from ambiguous mentions
I defined and applied a Brand-Qualified Citation Rate (BQCR) to distinguish:
citations clearly referring to Nulab
versus generic or ambiguous uses of “backlog”
This prevented inflated visibility claims.
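To make the distinction concrete, here is a minimal sketch of the kind of classification involved, assuming each citation carries a snippet of the surrounding text. The record shape, brand markers, and helper names are illustrative assumptions, not the actual implementation:

```python
from dataclasses import dataclass

# Hypothetical citation record: the platform plus the text around the mention.
@dataclass
class Citation:
    platform: str
    snippet: str

# Illustrative markers only: phrases that signal the Nulab product
# rather than the generic project management term "backlog".
BRAND_MARKERS = ("nulab", "backlog.com", "backlog by nulab")

def is_brand_qualified(citation: Citation) -> bool:
    """True if the snippet clearly refers to the Nulab product."""
    text = citation.snippet.lower()
    return any(marker in text for marker in BRAND_MARKERS)

def bqcr(citations: list[Citation]) -> float:
    """Brand-Qualified Citation Rate: qualified citations / all citations."""
    if not citations:
        return 0.0
    qualified = sum(is_brand_qualified(c) for c in citations)
    return qualified / len(citations)
```

Anything the markers can't confirm stays ambiguous, which is what keeps the rate conservative.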
Normalizing competitive context
Rather than comparing Nulab only to direct peers, I evaluated AI share of voice across a large competitive set (70+ companies, including enterprise incumbents).
This made relative position interpretable rather than alarming.
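In sketch form, share of voice is each company's citation count as a fraction of the whole set's, then ranked. The counts below are placeholders, not the real data:

```python
# Hypothetical citation counts across the competitive set (placeholder values).
citation_counts = {
    "Nulab": 130,
    "CompetitorA": 2400,
    "CompetitorB": 1800,
    "CompetitorC": 950,
    # ...the real set covered 70+ companies
}

total = sum(citation_counts.values())

# Share of voice: one company's citations as a fraction of the whole set's.
share_of_voice = {name: count / total for name, count in citation_counts.items()}

# Rank by share of voice, highest first.
for rank, (name, sov) in enumerate(
    sorted(share_of_voice.items(), key=lambda kv: kv[1], reverse=True), start=1
):
    print(f"#{rank:>2} {name:<12} {sov:.1%}")
```

Against a set that large, a low single-digit share reads as early presence rather than underperformance, which was the point of widening the comparison.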
Aligning AI metrics to existing organic reporting language
Instead of introducing a new, isolated dashboard, I structured AI reporting to mirror existing organic performance views — so stakeholders could understand it without relearning the model.
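As one illustration of that mirroring, each AI metric can be paired with the organic concept stakeholders already read. This pairing is an assumed example, not the actual dashboard schema:

```python
# Illustrative pairing of AI visibility metrics with the organic reporting
# concepts they were presented alongside (assumed names, not the real schema).
METRIC_ANALOGUES = {
    "ai_citations": "organic impressions",  # presence, not traffic
    "brand_qualified_citations": "branded impressions",
    "ai_share_of_voice": "organic share of voice",
    "referenced_pages": "ranking pages",
}
```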
Focusing on what was cited, not just how often
I analyzed which content types AI systems referenced most consistently, rather than treating citations as interchangeable.
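A sketch of that grouping, assuming each cited page has already been tagged with a content type (the labels here are invented examples, not the taxonomy used):

```python
from collections import Counter

# Hypothetical content-type tags for cited pages (invented labels).
cited_page_types = [
    "educational-guide", "framework-explainer", "educational-guide",
    "product-page", "framework-explainer", "educational-guide",
]

# Tally which content types AI systems referenced most consistently.
for content_type, count in Counter(cited_page_types).most_common():
    print(f"{content_type}: {count}")
```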
What the data actually showed
From the December snapshot:
Nulab had meaningful early AI visibility despite being a smaller brand
~33–34% of citations were brand-qualified once ambiguity was removed
Share of voice (~1.3%) placed Nulab around #17 out of 70+ competitors
AI systems consistently cited educational, framework-driven content — not product marketing pages
In other words, AI discovery was treating Nulab as:
a reference source for explanations and concepts
a contributor to answer generation
not just a destination for branded traffic
That aligned closely with how the Learn hub had been built.
What this work was not
This wasn’t:
an AI optimization strategy
an attempt to game answer engines
a claim that AI replaces organic search
or a finished attribution model
It was intentionally conservative.
The purpose was to reduce confusion — not to forecast upside that couldn’t yet be measured.
Why this mattered operationally
This work created a foundation for future decision-making.
It:
established a repeatable way to track AI visibility month over month (sketched after this list)
prevented leadership from over- or under-reacting to early AI metrics
aligned AI discovery reporting with existing organic performance language
surfaced which content types matter most in AI-driven environments
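To show what "repeatable" means in practice, here is an assumed snapshot structure and a diff between two months. The field names and values are placeholders, though the shape follows the metrics described above:

```python
from dataclasses import dataclass

@dataclass
class Snapshot:
    month: str
    citations: int
    bqcr: float            # brand-qualified citation rate (0-1)
    share_of_voice: float  # fraction of the full competitive set (0-1)
    rank: int

def month_over_month(prev: Snapshot, curr: Snapshot) -> dict:
    """Report deltas in the same terms used for organic performance."""
    return {
        "month": curr.month,
        "citations_delta": curr.citations - prev.citations,
        "bqcr_delta_pts": round((curr.bqcr - prev.bqcr) * 100, 1),
        "sov_delta_pts": round((curr.share_of_voice - prev.share_of_voice) * 100, 2),
        "rank_change": prev.rank - curr.rank,  # positive means moving up
    }

# Placeholder snapshots for illustration only.
prev = Snapshot("Nov", citations=300, bqcr=0.31, share_of_voice=0.012, rank=19)
curr = Snapshot("Dec", citations=340, bqcr=0.34, share_of_voice=0.013, rank=17)
print(month_over_month(prev, curr))
```

Keeping every month in the same shape is what makes the comparison routine rather than a one-off analysis.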
Most importantly, it turned a vague, emerging signal into something explainable — reducing institutional risk as AI discovery becomes harder to ignore.
What this shows about how I work
This case study reflects how I approach emerging platforms and metrics:
I prioritize sensemaking over hype
I design measurement systems before drawing conclusions
I’m careful about attribution under uncertainty
I’d rather under-claim than mislead
When new discovery surfaces appear, my instinct is to make them legible — not impressive.
Closing
AI-driven discovery is still evolving.
This work wasn’t about predicting the future — it was about making sure the present didn’t get misread.