Start by validating a single core need with a 14-day cycle. Build a minimal feature that solves that pain and collect feedback through email surveys and quick prompts. Track activation, retention, and a simple net promoter signal to determine if you’re close to product-market fit. If feedback shifts suddenly, adjust quickly.
Clay built a flexible, iterative loop instead of a strictly linear plan and gained momentum year after year without sacrificing depth. The rhythm kept product teams aligned and grounded decision-making in real usage, not speculation.
Across three-quarters of the tests run during the first year, activation rose by 21%, retention after 60 days improved by 14%, and churn dropped 9%. We tracked cohorts by year and segment, then prioritized the three features that moved metrics the most. This approach allows teams to learn quickly and adjust.
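As a rough illustration of that cohort tracking (not Clay’s actual pipeline), the sketch below aggregates activation, 60-day retention, and churn per cohort and segment from a flat list of user records; the field names and figures are hypothetical.

```python
from collections import defaultdict

# Hypothetical per-user records: cohort label, segment, and three outcome flags.
users = [
    {"cohort": "2023-Q1", "segment": "SMB", "activated": True,  "retained_60d": True,  "churned": False},
    {"cohort": "2023-Q1", "segment": "SMB", "activated": True,  "retained_60d": False, "churned": True},
    {"cohort": "2023-Q2", "segment": "Mid", "activated": False, "retained_60d": False, "churned": True},
    {"cohort": "2023-Q2", "segment": "Mid", "activated": True,  "retained_60d": True,  "churned": False},
]

def cohort_metrics(users):
    """Aggregate activation, 60-day retention, and churn rates per (cohort, segment)."""
    groups = defaultdict(list)
    for u in users:
        groups[(u["cohort"], u["segment"])].append(u)
    metrics = {}
    for key, group in groups.items():
        n = len(group)
        metrics[key] = {
            "activation_rate": sum(u["activated"] for u in group) / n,
            "retention_60d": sum(u["retained_60d"] for u in group) / n,
            "churn_rate": sum(u["churned"] for u in group) / n,
        }
    return metrics

for key, m in cohort_metrics(users).items():
    print(key, m)
```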
Cristina kept the team aligned with a simple decision-making rhythm: know the customer pains, watch the early signals, and feed them into the plan. The team integrated a lightweight scoring model that pushes customer signals into the backlog every Friday and informs prioritization decisions.
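One way to read that lightweight scoring model is a weighted sum over tagged customer signals, re-run before each Friday prioritization. The weights, signal types, and backlog items below are illustrative assumptions, not Cristina’s actual model.

```python
# Illustrative weights per signal type (assumed for this sketch).
SIGNAL_WEIGHTS = {"support_ticket": 1.0, "sales_call": 2.0, "churn_reason": 3.0, "feature_request": 1.5}

# Hypothetical signals collected during the week, each tagged with a backlog item.
signals = [
    {"backlog_item": "bulk-import", "type": "support_ticket"},
    {"backlog_item": "bulk-import", "type": "churn_reason"},
    {"backlog_item": "sso", "type": "sales_call"},
    {"backlog_item": "sso", "type": "feature_request"},
]

def score_backlog(signals, weights):
    """Sum signal weights per backlog item and return items sorted by score, highest first."""
    scores = {}
    for s in signals:
        scores[s["backlog_item"]] = scores.get(s["backlog_item"], 0.0) + weights.get(s["type"], 0.0)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Run before the Friday prioritization meeting.
for item, score in score_backlog(signals, SIGNAL_WEIGHTS):
    print(f"{item}: {score:.1f}")
```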
To replicate this path in your context, start with a single bet, then quantify impact in weeks, not months. The goal is outcomes users actually value. If a bet isn’t landing, cut it within a week; that doesn’t mean failure. Use a two-step test: problem-solution fit, then product-market fit. Keep communication lean: email updates, not long memos. Ensure you can stop if metrics fail to improve after two cycles.
Define Clay’s ICP: criteria, signals, and a practical interview plan
Recommendation: define Clay’s ICP around three major segments, apply six specific criteria per segment, and validate with a six-week loop of interviews and usage data. Build a shared source of truth with guardrails, then iterate when you uncover multiple signals that point in the same direction.
ICP Criteria and Signals
- Major segments: Enterprise, Mid-market, and SMB. Define ARR bands (for example, Enterprise $10M+, Mid-market $1–10M, SMB under $1M) and map each segment to a distinct value proposition that matches Clay’s engineering-led focus; a minimal classification sketch follows this list.
- Company attributes: geography, industry, employee count, procurement maturity, and language needs. Ensure accessibility of data across languages so the team can compare where opportunities live.
- Buyer roles and authority: identify economic buyers, technical leads, and influencers; capture who leads the ticket, who signs, and who vetoes, so you know who to engage first.
- Pain points and outcomes: link specific issues (manual ticket handling, data silos, slow onboarding) to measurable outcomes (time-to-value, cost reduction, release velocity), and set a higher ROI threshold for pilot success.
- Technical fit: assess core tech stack, integration paths, APIs, security posture, data residency, and single sign-on needs; a poor technical fit should disqualify an account early.
- Adoption readiness and breadth: measure onboarding speed, number of users per account, breadth of feature adoption, and the appetite for change across teams; ensure the plan is usable by multiple engineers and operators.
- Acquisition signals: procurement tempo, budget-cycle alignment, inbound interest, and speed of progressing from demo to pilot; weigh signals equally across segments to avoid bias.
- Evidence and data sources: product telemetry, CRM notes, support tickets, CS conversations, and public references; these sources form the source of truth for ICP decisions.
- People and processes: involve the analysts (Amin) and the framework owner (Everingham) to guide data collection; maintain shared guardrails so the team acts consistently.
- Representative stories: include anecdotes like Jackson’s to illustrate real-world dynamics, such as an onboarding that ran long and how a feedback loop turned it around.
- Content and accessibility: produce material that is welcoming to customers and practical for engineers; keep language simple and actionable, not abstract.
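As a minimal sketch of the ARR-band classification from the first criterion above, the function below maps an account’s ARR to one of the three major segments; the thresholds mirror the example bands and the account names are made up.

```python
def classify_segment(arr_usd: float) -> str:
    """Map a company's ARR (in USD) to one of the three major segments using the example bands."""
    if arr_usd >= 10_000_000:
        return "Enterprise"
    if arr_usd >= 1_000_000:
        return "Mid-market"
    return "SMB"

# Hypothetical accounts; in practice ARR would come from the CRM.
accounts = [("Acme Corp", 25_000_000), ("Brightloop", 4_500_000), ("Nook Labs", 300_000)]
for name, arr in accounts:
    print(name, "->", classify_segment(arr))
```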
Practical Interview Plan
- Target list and sampling: select 12–16 accounts across three segments, with 4–6 interviews per segment; balance inbound and outbound after assessing the breadth of potential use cases. Conduct interviews in the respondent’s language where needed to keep them accessible.
- Interview objectives: validate ICP criteria, confirm signals, and quantify expected outcomes; determine if Clay’s value prop aligns with the respondent’s real needs at the right price.
- Interview guide structure: start with discovery questions, then map back to criteria, and finish with a concrete pilot scenario. Use a loop to adjust questions after every two interviews when new signals emerge.
- Sample questions (engineering-focused):
- What is the top blocker your team experiences when evaluating new tools?
- How do you measure the impact of a tool on onboarding and deployment timelines?
- Which integrations and data flows must be preserved for you to adopt a new platform?
- Describe your procurement process and who signs off on vendor contracts; what would trigger a pilot in the next 60 days?
- Sample questions (economic buyer and leads):
- What outcomes would justify moving from a pilot to a production rollout?
- What is the typical budget cycle, and who negotiates terms?
- What ticket or support experiences would signal a successful partnership?
- Data capture and sources: record responses, tag each per ICP criterion, and store quotes in a central CRM section that serves as the shared source of truth; attach usage data when possible to corroborate statements.
- Guardrails and fairness: keep interviews accessible for non-native speakers, avoid leading questions, and keep the tone of the conversation constructive; weigh feedback from multiple roles equally to prevent skew.
- Analysis and synthesis: code responses against ICP criteria, identify gaps, and, when signals conflict, challenge assumptions by arguing the opposite case; compile findings into a concise ICP update (a response-tagging sketch follows this list).
- Actions and next steps: after the first round, update the ICP section with new signals, adjust prospecting lists, and prepare a pilot plan that fits guardrails and budget realities.
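As a sketch of the coding step described in the analysis bullet above, responses can be tagged with criterion identifiers and rolled up to show coverage and gaps. The criterion names, accounts, and quotes are illustrative.

```python
from collections import Counter

# ICP criteria every interview should cover (illustrative subset).
CRITERIA = ["buyer_role", "pain_point", "technical_fit", "adoption_readiness", "acquisition_signal"]

# Hypothetical tagged quotes captured during interviews.
responses = [
    {"account": "Acme Corp", "criterion": "pain_point", "quote": "Manual ticket handling eats two days a week."},
    {"account": "Acme Corp", "criterion": "technical_fit", "quote": "We need SSO and data residency in the EU."},
    {"account": "Brightloop", "criterion": "buyer_role", "quote": "The VP of Engineering signs, procurement vetoes."},
]

def coverage_report(responses, criteria):
    """Count tagged responses per criterion and list criteria with no evidence yet."""
    counts = Counter(r["criterion"] for r in responses)
    gaps = [c for c in criteria if counts[c] == 0]
    return counts, gaps

counts, gaps = coverage_report(responses, CRITERIA)
print("evidence per criterion:", dict(counts))
print("criteria with no evidence yet:", gaps)
```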
Early PMF signals: actionable metrics, thresholds, and validation steps
Begin by selecting four signals you can act on within the next two sprints: activation rate, 7‑day retention, cohort repeat usage, and early monetization traction. Set explicit thresholds: activation rate ≥ 25% within 72 hours; 7‑day retention ≥ 40%; repeat usage ≥ 30% of active users; first‑dollar revenue or upsell ≥ $5 per user in the first month. Build a single dashboard that pulls these metrics by cohort and channel. Whenever a signal crosses its threshold, pull the data, run a two‑week validation sprint focused on onboarding or messaging changes, and push the learning into the next campaign. Narrow the focus to core segments: new signups from outbound campaigns, street testers, and south‑market users. Pull figures from product analytics, support tickets, and YouTube‑driven engagement to triangulate signals. Gagan notes this pattern across campaigns and recruiting efforts; that level of detail helps prevent drift and keeps the work understood by the team and aligned with core goals.
Metrics and thresholds

Core signals: activation rate, 7‑day retention, cohort repeat usage, and early monetization. Thresholds: activation ≥ 25% within 72 hours; 7‑day retention ≥ 40%; repeat usage ≥ 30% of active users; early monetization ≥ $5 per user in month 1. Measure by cohort, then break out by channel and geography, with special attention to the south market. Use a 3‑week rolling average to smooth volatility and spot true shifts. If a metric stays below threshold for two consecutive cohorts, treat onboarding, messaging, or pricing as the likely issue rather than a missing feature. Incorporate qualitative signals from support tickets and YouTube comments to validate numeric trends.
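A minimal check of these thresholds, applied to a 3-week rolling average for one cohort, might look like the sketch below; the weekly figures are invented.

```python
# Thresholds from the text: activation >= 25%, 7-day retention >= 40%,
# repeat usage >= 30%, early monetization >= $5 per user in month 1.
THRESHOLDS = {"activation": 0.25, "retention_7d": 0.40, "repeat_usage": 0.30, "revenue_per_user": 5.0}

def rolling_mean(values, window=3):
    """3-week rolling average used to smooth weekly volatility."""
    out = []
    for i in range(len(values)):
        chunk = values[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# Hypothetical weekly activation rates for one cohort.
weekly_activation = [0.18, 0.22, 0.27, 0.31, 0.29]
smoothed = rolling_mean(weekly_activation)

below = [week for week, v in enumerate(smoothed, start=1) if v < THRESHOLDS["activation"]]
print("smoothed activation:", [round(v, 3) for v in smoothed])
print("weeks below threshold:", below)
```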
Validation steps
Validation plan: choose a two‑week sprint per segment; implement a control vs. treatment design by channel or messaging; preserve the baseline from the previous window. Track the same metrics, plus onboarding completion rate and support issue volume. If uplift exceeds 20% relative to baseline, scale the changes to additional segments; if not, revert and capture learnings. Document issues, assign owners, and pull recruiting feedback to avoid recurring problems. Involve Gagan and street testers to confirm that the observed gains reflect real value rather than artifacts. Conclude with a clear decision on next milestones and PMF direction.
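The scale-or-revert rule (uplift above 20% relative to baseline) reduces to a small comparison; the control and treatment activation rates below are illustrative.

```python
def relative_uplift(control: float, treatment: float) -> float:
    """Relative uplift of the treatment metric over the control baseline."""
    return (treatment - control) / control

def decide(control: float, treatment: float, threshold: float = 0.20) -> str:
    """Scale the change if uplift exceeds the threshold, otherwise revert and capture learnings."""
    uplift = relative_uplift(control, treatment)
    action = "scale to additional segments" if uplift > threshold else "revert and document learnings"
    return f"uplift {uplift:.0%}: {action}"

# Hypothetical activation rates from a two-week control vs. treatment sprint.
print(decide(control=0.24, treatment=0.31))
```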
Horizontal vs Vertical: concrete decision criteria and risk checks
Begin with a broad horizontal test across multiple target segments to surface universal value drivers, and then narrow to a vertical niche that delivers a repeatable unit-economics pattern. Actionable steps include 6–8 weeks of cross-segment experiments, standardized onboarding trials, and weekly coaching reviews to keep decision-making sharp.
When you have to decide between horizontal and vertical, set a small, time-bound goal: uncover which path produces the fastest learning velocity and a clear, scalable model. The approach should be collaborative and thoughtful, with one-on-one coaching moments that surface tacit knowledge from the field. Jackson has seen that rapid cycles with disciplined reflection yield far better alignment than grand plans, while Cristina emphasizes documenting learnings in a shared coaching log. Fidji’s analytics can help quantify signals objectively, so you can compare segments on the same metrics and avoid optimistic bias.
Key decision criteria cover market, product, and execution signals. Whether you pursue a horizontal or vertical path, you should consistently track target size, activation speed, and unit economics. Goal tracking, activation rate, and payback period become the backbone of a clean comparison. Also track coaching notes and qualitative signals from customer conversations to avoid missing subtle preferences that numbers alone won’t reveal. Sometimes the strongest signal comes from a simple reflection: does the team confidently repeat the same value proposition across segments, or do you need a tailored message per group?
Align your path with three concrete signals: credibility of the problem, speed to value, and clean replication potential. A disciplined framework helps you assess risk without paralysis. Thoughtful experiments help you see both the obvious and the subtle differences between segments, while a generalist with creative methods can surface cross-cutting insights that a narrow specialist might miss. This balance of actionable data and thoughtful interpretation often yields a successful pivot from broad exploration to tight focus.
Decision-making in this stage should blend objective data with human judgment. For instance, Jackson often uses short, focused one-on-one sessions to validate a hypothesis about a segment, while Cristina collects broader market feedback to verify whether the problem statement holds at scale. When the data align with a clear target and the cost of learning is low, you can proceed with confidence. If signals are mixed, you may need to try an intermediate path or a time-limited kill switch to prevent wasted effort.
| Criterion | Horizontal signal | Vertical signal | Data sources | Recommended action |
|---|---|---|---|---|
| Target market size | Broad interest across 3–5 segments; TAM > $500M | Single segment shows >$100M annual potential with clear growth | Top-down market reports, early pipeline, CRM trends | If vertical TAM is strong and cost to win is lower, tilt vertical; otherwise continue horizontal tests |
| Time to value (activation) | 2–4 weeks to initial value across segments | 1–2 weeks within chosen segment | Onboarding metrics, activation funnel, trials completed | Prefer vertical if activation is significantly faster and repeatable |
| Unit economics (CAC/LTV) | Average CAC high due to multiple channels; LTV uncertain | Clear payback within 6–12 months; LTV/CAC > 3 | Billing data, onboarding cost, support hours | Switch to vertical when economics stabilize on a single segment |
| Channel risk | Multiple channels tested with mixed results | One or two channels dominate with stable CPMs | Marketing mix reports, attribution models | If channels drift, reassess; if vertical channels are stable, reduce breadth |
| Product-market fit signal | Consistent qualitative fit across segments, but variability in quant metrics | Strong, repeatable fit in target segment (NPS, activation, retention) | Interviews, NPS, churn, usage data | Lean into vertical if fit signals converge; otherwise broaden horizontal tests |
| Risks and complexity | Lower depth; higher breadth risk | Higher depth; execution more controlled | Delivery timelines, support load, product changes | Adopt vertical to reduce complexity, or stay horizontal until stability cues emerge |
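One rough way to turn the table above into a side-by-side score is to check a few of the quantitative criteria per candidate path, as in the sketch below; the metrics and cut-offs are assumptions for illustration, not a definitive scoring model.

```python
# Hypothetical summary metrics for the two candidate paths.
paths = {
    "horizontal": {"ltv_cac": 2.1, "payback_months": 16, "weeks_to_value": 3.5},
    "vertical":   {"ltv_cac": 3.4, "payback_months": 9,  "weeks_to_value": 1.5},
}

def score(path):
    """Higher is better: reward LTV/CAC above 3, payback within 12 months, activation within 2 weeks."""
    m = paths[path]
    return (
        (1 if m["ltv_cac"] > 3 else 0)
        + (1 if m["payback_months"] <= 12 else 0)
        + (1 if m["weeks_to_value"] <= 2 else 0)
    )

best = max(paths, key=score)
print({p: score(p) for p in paths}, "->", best)
```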
Actionable milestones for the next sprint: run 3–5 targeted interviews with each candidate segment, ship a vertical landing page variant, and measure activation within 14 days. Also, document learning in a shared reflection log to ensure you capture both quantitative and qualitative signals. Goal-oriented sprints keep the team focused and track specifically whether you are moving toward a scalable model. Innovation sometimes benefits from a Fidji dashboard that surfaces anomalies early, helping you adjust before costs explode.
In practice, the path often looks like this: run horizontal tests to identify universal value drivers, then pick the strongest segment and run a 4–6 week vertical sprint. The focus should be on replicable onboarding, tight pricing, and fast feedback loops. Continuous coaching, including one-on-one sessions with founders like Jackson or operators like Cristina, sharpens the decision frame and prevents drift. If you try a mixed approach and see clear signals in one dimension, lean into that direction with a rigorous kill switch so you can stop a misaligned path early rather than late. The magic lies in disciplined experimentation, honest reflection, and a patient, thoughtful pace that yields high-quality, scalable growth.
Deep ICP digging: segment by use case, industry, and buyer roles
Start by listing three use cases per ICP, label each with a specific industry and the buyer role that signs off, and quantify the impact in days to value.
Define the ICP matrix
The origin of the problem becomes visible when you map three distinct use cases per ICP and tie each to a specific industry. Starting signals come from CRM notes, product events, and support tickets; keep the data clean to avoid misleading charts. A narrower view emerges when you require a single buyer role to own the decision; that is the line that keeps focus on feasible outcomes and reduces noise. For each segment, define the total value, the buying trigger, and the primary selling point. These pieces form a body of evidence that, when organized, can be pulled into a purchase decision. Walking the matrix helps the team feel grounded and catch unusual patterns before they become costly assumptions. Use auto-segmentation to scale the mapping as you add more industries; a senior analyst notes that this pattern reduces noise and strengthens the case to invest now.
Operationalizing personas and data
For each use case, define specific buyer roles: economic buyer, technical influencer, end user, and budget approver. Meaning and intention matter: note what each role wants to see, how they view risk, and what moves them to act. Everyone should have a clear view of how value propagates, from early adopters to budget approvers. Reference and tag all sources for consistency, and keep the data clean with a simple cleaning rule: tag, standardize, and de-duplicate. A quick reference to Lenny’s notes in English during workshops helps keep terminology consistent and avoids mixed signals.
In practice, build the segmentation along three axes: three core use cases, six industries, and four buyer roles. Walking through the pieces, you see a total of 72 segments; track evidence for each (use case, industry, role, buying stage) with a clear time to value. The result is a view that guides outreach, product positioning, and pricing alignment with real needs, not vague claims. Executed with discipline, this approach produces a reliable map your team can act on day by day, paying attention to the cost to the customer and the path to conversion. Producing measurable outcomes helps everyone stay aligned and focused on time saved and revenue impact.
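The 3 × 6 × 4 arithmetic above is easy to enumerate programmatically, which also gives you a skeleton to attach evidence to; the axis labels below are placeholders.

```python
from itertools import product

use_cases = ["ticket-automation", "data-unification", "fast-onboarding"]                 # 3 use cases
industries = ["fintech", "healthcare", "retail", "logistics", "saas", "manufacturing"]   # 6 industries
roles = ["economic buyer", "technical influencer", "end user", "budget approver"]        # 4 buyer roles

# Every (use case, industry, role) combination is one segment of the ICP matrix.
segments = list(product(use_cases, industries, roles))
print(len(segments))   # 72
print(segments[0])     # ('ticket-automation', 'fintech', 'economic buyer')
```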
7-year timeline: pivots, experiments, and decision gates that defined the path
Start with a clear plan: map a 7-year timeline with four decision gates at months 6, 18, 36, and 60. Build an emulator to run three scenarios (best, median, weaker) and collect data points from 20 prospects each quarter. Align every test to the mission, keep the testing simple, and place samples of customer feedback alongside usage metrics. This disciplined setup helps interesting signals grow into tangible traction; if you want to move fast, view the gates as waypoints, not end goals. The habit requires committing to a practice you can repeat; months of data outperform impulse moves, and a steady pace keeps impatient energy productive.
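One way to read the emulator idea is a simple scenario model that projects paying customers per quarter under best, median, and weaker assumptions and reports the projection at each decision gate; the growth rates and starting point below are invented.

```python
GATES_MONTHS = [6, 18, 36, 60]  # decision gates from the plan

# Assumed quarter-over-quarter growth per scenario (illustrative only).
SCENARIOS = {"best": 1.30, "median": 1.15, "weaker": 1.02}

def project(growth: float, quarters: int = 28, start_customers: float = 2.0):
    """Project paying customers per quarter over a 7-year horizon (28 quarters)."""
    series = [start_customers]
    for _ in range(quarters - 1):
        series.append(series[-1] * growth)
    return series

for name, growth in SCENARIOS.items():
    series = project(growth)
    # Projected customer count at each decision gate (month converted to quarter index).
    at_gates = {m: round(series[m // 3 - 1], 1) for m in GATES_MONTHS}
    print(name, at_gates)
```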
Year-by-year pivots
Year 0–1 (months 0–12): before building features, map the real problem using 15 interviews per month and a 2-week prototype cycle. In Glasgow, a first 6-week pilot with 6 prospects yields 4 stories that point to a real need. Track the ground-level metrics: session frequency, time to value, and drop-off points. Keep the backlog tight with a daily 15-minute standup and a weekly review to adjust the choice of features. This early work defines the path and sets the tone for the next gates.
Year 1–2 (months 12–24): if early signals weaken, the team is ready to pivot: swap the target segment, adjust pricing, or remove non-core features. This is where the choice of market and model matters most. The team tests three new offers in parallel in Glasgow and two other cities, collecting 18 stories per month to compare results. The emphasis is on landing a repeatable pattern rather than chasing novelty. This period keeps the ground fertile for the next gate.
Patterns that defined the path
The core pattern: test, measure, iterate, decide. The most interesting results come from small, repeatable experiments run over months, not weeks. The team uses an emulator to run safe forecasts and avoids noise by combining qualitative stories with quantitative data points. It also keeps a simple habit: a weekly 60-minute review of the backlog and gates to avoid vanity features. The mission stays crisp and the practice of hands-on customer engagement stays constant; this is how prospects convert and the business grows. Glasgow becomes a recurring testing ground; somewhere between pilot and scale, the product starts to feel like a real solution.
Months 60–84 finalize the scale path: PMF is locked when paying customers expand 3x quarter-over-quarter and the cost to serve drops meaningfully. Implement a lean sales channel and a clear onboarding flow; the team habitually reviews metrics, not opinions, and treats the gates as a compass for allocation. They avoid feature bloat that doesn’t drive repeat value, and they allocate budget to channels that prove consistent return. The seven-year arc shows a disciplined, data-driven rhythm that sustains growth beyond the initial win.
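The PMF-lock rule above (3x quarter-over-quarter expansion plus a falling cost to serve) can be expressed as a small check; the quarterly figures are hypothetical.

```python
def pmf_locked(paying_customers, cost_to_serve, expansion_factor=3.0):
    """True when the latest quarter's paying customers grew at least 3x over the prior quarter
    and cost to serve declined over the same period."""
    grew_3x = paying_customers[-1] >= expansion_factor * paying_customers[-2]
    cost_dropped = cost_to_serve[-1] < cost_to_serve[-2]
    return grew_3x and cost_dropped

# Hypothetical quarterly series: customers expanded 40 -> 130 while cost to serve fell.
print(pmf_locked(paying_customers=[40, 130], cost_to_serve=[210.0, 150.0]))  # True
```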