Making organic search more useful during evaluation and decision-making
The context
As Nulab matured, there was increasing pressure for organic traffic to support pipeline — not just awareness.
Organic was expected to:
contribute more directly to trials and revenue
support evaluation and comparison, not just education
justify investment alongside paid acquisition
There wasn’t a formal “commercial SEO initiative.” There was simply a growing expectation that content should help people make buying decisions — not just find the brand.
What existed (and what didn’t)
Nulab already had some commercial and competitive content:
a handful of competitor pages
some solution-style content
a few use-case comparisons
But coverage was uneven.
Pages varied widely in:
positioning
depth
intent alignment
conversion clarity
Some ranked. Many didn’t.
Most existed because “we should probably have one,” not because they were intentionally designed.
There was no shared model for:
when to compete
how to position honestly
what intent each page should serve
how commercial pages connected to education or product discovery
How the work actually started
This work began with observation, not a playbook.
I spent time looking at:
which competitive and comparison queries people actually searched
where users entered the site before starting a trial
which pages contributed to evaluation versus confusion
how intent shifted between research and decision-making
That led to practical questions:
Where are we credible — and where are we stretching?
Which comparisons are worth showing up for at all?
What does a user need at this stage to move forward?
The answers weren’t uniform. They had to be discovered.
How competitive SEO was approached
Commercial SEO wasn’t treated as a keyword exercise.
In practice, the work involved:
deciding whether Nulab should compete on certain comparisons
rewriting pages to clarify who the product was for — and who it wasn’t
avoiding generic feature checklists when they didn’t reflect reality
balancing persuasion with accuracy
making tradeoffs explicit instead of hiding them
The goal wasn’t to “beat” competitors.
It was to intercept real evaluation moments and reduce friction in decision-making.
How pages evolved over time
There was no single rollout or finished framework.
Pages were:
launched imperfectly
refined based on performance and feedback
expanded with supporting content over time
linked more intentionally into Learn and product pages
Some pages proved consistently valuable.
Others stalled or underperformed.
Instead of forcing uniformity, I paid attention to what worked and adjusted accordingly.
How SEO and conversion were handled
There was no handoff between SEO and conversion.
I directly shaped:
page structure and intent framing
how users were guided from comparison to next steps
CTA tone and placement
the balance between being clear and being pushy
Commercial SEO here was about decision support — not just rankings.
What changed as a result
The outcome wasn’t a perfectly scalable engine — but it was meaningful.
Over time:
certain pages reliably attracted high-intent traffic
competitive content became a recognizable acquisition layer
the team developed a clearer sense of what “commercial organic” could realistically support
organic search became more useful during evaluation, not just discovery
The learning mattered as much as the wins.
Why this still matters
Search — especially with AI-driven discovery — increasingly collapses research and evaluation into the same moment.
This work reinforced an approach I still use:
show up selectively
be honest about tradeoffs
design for clarity over coverage
and treat competitive content as decision support, not persuasion theater
Commercial organic doesn’t need to dominate to be valuable — it needs to be useful.
What this shows about how I work
This case study reflects how I approach competitive and commercial content:
I’m selective about where to compete
I prioritize intent clarity over keyword breadth
I’m comfortable iterating without a rigid playbook
I value restraint as much as expansion
The result wasn’t a machine.
It was a more credible, more useful acquisition layer.