First-Principles ICP Definition: A Framework for Early-Stage B2B Outbound Email
A framework (with included exercises) for building outbound targeting based on first principles, before you have customers to analyze.
Most outbound fails for one reason: You’re targeting companies, not problems.
In this post I’ll walk through what that means, how to fix it, and some real-life examples from Grafana Labs.
Before we dive in, some quick background on why targeting matters so much in 2026:
Why Targeting is Crucial
Targeting and ICP (and other basics) aren’t “sexy” topics right now – everyone is trying to figure out how to deploy agents and automate their GTM motion wherever possible.
Ironically, I think that’s why it’s now more important than ever:
AI and agents amplify what you’re already doing. Period. If you have the basics down, automation can be extremely effective; if not, you’re just amplifying spam that gets your domain blacklisted.
Today's outbound playbook looks nothing like it did five years ago. Back then, high-volume templated sequences worked – landing in the primary inbox wasn't a challenge, and volume was the game.
But in the last few years, email providers have cracked down on spam, made it much harder to land in the “Primary” inbox, started banning domains, and have even limited the number of emails you can send.
This means that today, you have a smaller, finite number of at-bats for cold email.
And as a result, targeting has become the main bottleneck for effective cold email: If your targeting is broken, no amount of AI or automation can save you.
Targeting via First Principles
Eric Nowoslawski, founder of Growth Engine X, teaches three strategies for list building: “Forward”, “Backward” and “Circular”. Each has its place, but if you’re an early-stage founder or sales leader without many customers to analyze, you need to rely on the “forward” (in other words, first-principles) approach.
First Principles Targeting Mistakes
The first (and obvious, surface-level) targeting mistake is being too broad.
Companies will define their ICP as “Series A-B fintech companies in the US with 200-500 employees” and then get confused when their email campaigns don’t convert.
Now, the fix may seem simple (“narrow your criteria”), but being too broad is just a symptom of the fundamental issue behind most targeting definitions:
Not building your targeting criteria based on the actual problem(s) you solve.
This is the biggest re-frame: company filters ≠ problem indicators.
A Series B fintech with 300 employees might have your problem. Or they might not. Industry and company size don’t tell you.
You need to start with the problem, and then work upward to define aspects of your target profile. Not the other way around.
There are three parts to doing this effectively: “Problem Definition”, “Solution Fit” and “Economic Drivers”.
Let’s start with Problem Definition.
Layer 1: Problem Definition
You need to ask yourself three questions:
What specific problem do you solve?
What’s evidence a company is experiencing this problem?
What’s evidence they’re actively trying to solve it?
Here are a few examples for Grafana Labs (note: Grafana Labs provides an observability platform built on open standards that enables companies to keep their software up and running):
Specific problem: “Companies with high mean-time-to-resolution (MTTR) caused by observability data fragmented across multiple proprietary-agent tools, leading to slow correlation and troubleshooting during incidents”.
Note this is specific, not just “companies struggling with observability”.
Evidence of problem(s): “Several observability platforms listed on job postings and employee LinkedIn profiles, customer complaints on G2/Gartner/Reddit, downtime, distributed cloud architecture (this drives up complexity), an engineering blog mentioning fatigue, etc.” The list goes on.
Evidence of effort to solve: “Adopting OpenTelemetry (vendor-neutral code instrumentation that makes a migration possible), increased SRE job posts and/or recent hires, a consolidation initiative mentioned in job postings, etc.”
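If you track your targeting criteria in code or a spreadsheet, this layer maps cleanly onto a simple data structure. Here’s a minimal sketch in Python using the Grafana Labs example above (the field names and exact signals are illustrative, not a prescribed schema):

```python
# A minimal sketch of Layer 1 as data. Field names are illustrative,
# not a prescribed schema; adapt them to your own tooling.
problem_definition = {
    "problem": (
        "High MTTR caused by observability data fragmented "
        "across multiple proprietary-agent tools"
    ),
    # Observable signals that a company HAS the problem
    "evidence_of_problem": [
        "Several observability platforms on job postings / LinkedIn profiles",
        "Customer complaints about downtime on G2 / Gartner / Reddit",
        "Distributed cloud architecture (drives up complexity)",
    ],
    # Observable signals that they're actively TRYING TO SOLVE it
    "evidence_of_effort": [
        "Adopting OpenTelemetry (vendor-neutral instrumentation)",
        "Increased SRE job posts and/or recent hires",
        "Consolidation initiative mentioned in job postings",
    ],
}
```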
I recommend starting with 1x problem and 3x evidence of the problem / effort to solve. Here’s a quick exercise:
Exercise: Problem Evidence Mapping
Problem I solve: _________________
3 observable signals a company has this problem:
_________________
_________________
_________________
3 observable signals they’re trying to solve it:
_________________
_________________
_________________
Layer 2: Solution Fit
The tough pill to swallow: not every company with the problem you solve is ‘right’ for your solution. Lots of ICP frameworks fall apart here.
They identify the problem but ignore the competitive context.
Answering these three questions will help you determine which companies dealing with these problems map favourably to your solution:
How else could they solve this problem? (List alternatives)
In what scenarios is your approach superior?
What’s evidence of those scenarios?
For example, if you’re selling a Prometheus- and Grafana-native observability solution (Grafana Labs 3-4 years ago, before our SaaS platform was built out), you’re not superior for everyone with observability problems.
You’re superior for companies already invested in Grafana and Prometheus who want to maintain cost effectiveness and avoid vendor lock-in. If a company doesn’t care about these things, another solution may be a better fit.
Evidence of this scenario could be: Active Grafana doc views, GitHub repos showing Prometheus deployments, blog posts about open-source observability philosophy, job descriptions mentioning “Prometheus expertise required”, etc.
ICP isn’t “companies with observability requirements”. It’s “companies with observability requirements AND a Prometheus-first architecture AND dense engineering talent AND preference for open standards”.
See the difference?
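To see it mechanically: Layer 2 fit is a conjunction of conditions, not any single one. Here’s a minimal sketch, assuming each flag has already been derived from a real data source (GitHub repos, job postings, doc analytics):

```python
from dataclasses import dataclass

@dataclass
class Account:
    # In practice, each flag would be derived from a real data source:
    # GitHub repos, job postings, doc-view analytics, engineering blogs.
    has_observability_requirements: bool
    prometheus_first_architecture: bool
    dense_engineering_talent: bool
    prefers_open_standards: bool

def is_solution_fit(account: Account) -> bool:
    """Layer 2: fit is the AND of all conditions, not any one of them."""
    return (
        account.has_observability_requirements
        and account.prometheus_first_architecture
        and account.dense_engineering_talent
        and account.prefers_open_standards
    )
```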
Here’s the second exercise:
Exercise 2: Competitive Positioning Matrix
Alternative solutions to my problem:
_________________
_________________
_________________
My solution is superior when (be specific about conditions):
_________________
_________________
_________________
Observable evidence of those conditions:
_________________
_________________
_________________
Layer 3: Economic Drivers
Budget is a rough proxy for cost of pain.
A perfect-fit company with a $5K budget (low cost of pain) is a bad prospect if your ACV is $50K.
You’ll want to reverse-engineer where your biggest deals will come from (and in turn, where the most “pain” is), and focus your outbound efforts there. The people most willing to give you money are also the most willing to respond :)
Here are the three questions to predict propensity to spend:
At which types of companies does this problem have the highest cost of inaction?
What are the drivers of larger deal sizes?
What’s evidence of those scenarios?
Note: there’s typically a strong correlation between company size and deal size, but that’s not all this is about. A 500-person company might have a tiny problem (budget: $10K) or a massive one (budget: $500K).
Here’s how I’d think about the economic-drivers of our ICP at Grafana Labs:
Highest cost of inaction:
Companies with customer-facing software tied to revenue (i.e., SaaS). Bonus points for low switching costs (it’s easy to lose a customer over slowness or downtime). Inaction (or continued poor product performance/uptime) is expensive: it results in customer churn and slower feature development (a slower customer acquisition rate).
Drivers of large deal sizes:
Observability is typically purchased via the consumption model, so more data → larger deals.
Evidence:
The company has customer-facing software, a large customer base, rapid expansion (headcount, revenue or funding), and a distributed cloud architecture (higher telemetry volume, all else equal).
It might seem obvious because it’s OpenAI, but this is why OpenAI spends $200M/year on their observability vendor: customer-facing product, tons of users, growing fast, runs on the cloud, and low switching costs for a user to move to Anthropic, Gemini or xAI.
Extended downtime could be existential.
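If you want to operationalize this layer, a rough propensity-to-spend score works well. Here’s a hedged sketch; the signals mirror the Grafana Labs example above, but the weights are placeholders you’d calibrate against your own closed-won data:

```python
def economic_fit_score(account: dict) -> int:
    """Layer 3: a rough propensity-to-spend score.

    The signals mirror the Grafana Labs example; the weights are
    placeholders to be calibrated against your own closed-won data.
    """
    score = 0
    if account.get("customer_facing_software"):
        score += 3  # downtime directly hits revenue
    if account.get("low_switching_costs"):
        score += 2  # churn risk raises the cost of inaction
    if account.get("expanding_rapidly"):
        score += 2  # headcount, revenue or funding growth
    if account.get("distributed_cloud_architecture"):
        score += 2  # higher telemetry volume, bigger consumption deal
    return score
```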
Exercise 3: Financial Fit Mapping
Company characteristics that imply high cost of inaction:
_________________
_________________
_________________
Drivers of larger deal sizes:
_________________
_________________
_________________
Observable evidence of these characteristics:
_________________
_________________
_________________
From Criteria to Scoring: Making It Operational
Now you have a list of characteristics – but there’s one more step.
A characteristic isn’t ‘targetable’ until you translate it into an objective criterion or signal. This is where you separate “static” criteria from “dynamic” signals:
Static criteria tell you who could buy
Dynamic signals tell you who will buy now
Static criteria → company characteristics that define your universe (industry, headcount, tech stack, funding stage).
Dynamic signals → time-sensitive indicators of urgency (recent funding, new job postings, leadership changes, product launches).
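In practice, the split works like a filter plus a ranking: static criteria gate who enters your universe, and dynamic signals score urgency within it. A minimal sketch (the specific industries, thresholds and weights are illustrative):

```python
def in_universe(account: dict) -> bool:
    """Static criteria: a pass/fail filter defining who COULD buy."""
    return (
        account.get("industry") in {"saas", "fintech", "ecommerce"}  # illustrative
        and account.get("headcount", 0) >= 200                       # illustrative
        and "prometheus" in account.get("tech_stack", [])
    )

def urgency_score(account: dict) -> int:
    """Dynamic signals: a time-sensitive score for who might buy NOW."""
    weights = {
        "recent_funding": 3,
        "new_sre_job_postings": 2,
        "leadership_change": 1,
        "recent_product_launch": 1,
    }
    return sum(w for signal, w in weights.items() if account.get(signal))

# Filter to the universe, then prioritize by urgency:
# hot_list = sorted(
#     (a for a in accounts if in_universe(a)),
#     key=urgency_score,
#     reverse=True,
# )
```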
Exercise 4: Build Your Scoring Framework
Static Criteria
_________________
_________________
_________________
Dynamic Signals
_________________
_________________
_________________
Next Steps
Once you’ve completed these exercises, you’ll have a well-defined and easily targetable ICP. To validate, I recommend plugging this framework and your flagged criteria/signals into a few LLMs to get feedback and refine what you’ve built.
You can then translate this into an internal source of truth or skill (.md file) that can be read by LLMs and used to start building your outbound campaign targeting.
-Cam Wright
P.S. - if you enjoyed this article, feel free to leave a “like”, “comment” or “subscribe”. I read every comment and will make sure I get back to you.


