Blog

From One of Venture Capital’s Youngest General Partners to a Trailblazing AI Investor

by Иван Иванов
11 minutes read
December 08, 2025

Recommendation: invest early in teams with proven data assets and a clear path to revenue, prioritizing models that translate insights into measurable impact within 18 months.

In practice, the core discipline is truth: the team must publish robust benchmarks and report what is actually observed in live deployments. Wins come from iterating on data sources, refining models, and adhering to governance rules. Over time, the focus has shifted from vanity metrics to revenue-driven signals.

The market rewards teams that can scale with enterprise-grade reliability. The firm aims to invest in companies with strong unit economics and clear data-driven paths to revenue. It seeks collaborators who care about customers and create durable value, aligns with the community's needs, and measures impact by real-world outcomes. David has helped refine the diligence framework, balancing risk and opportunity, while Frank's data-driven discussions keep decisions transparent and aligned with core goals. The reach extends across mid-market and enterprise sectors, and modular models enable faster deployment. When a project fails to meet clear metrics, the team learns quickly and moves on together.

Source data shows that the true advantage lies in execution discipline; run a phased rollout with a tight feedback loop, ensuring compliance and customer validation before expansion. This structure supports sustainable growth and long-term impact.

Robocap UCITS Fund: Over +250 Delivered Since Inception, and a Practical Portfolio Evolution for AI-Focused Bets

Recommendation: deploy a lean triad of bets spanning robotics, software, and intelligent agents, with staged capital, tight risk controls, and quarterly reviews. Learning and customer insight feed product shaping, while transparent feedback loops keep stakeholders aligned.

Since inception, the portfolio has delivered over +250 and shows a practical evolution across robotics, software platforms for AI, automation, and agents. The approach emphasizes modular design, original software cores, and disciplined testing within controlled pilots.

Customer insight is gathered through live pilots, monthly feedback, and commentary from operators on the floor. This input powers learning loops that reduce downtime and deepen product-market fit, making the investments more resilient within the operating environment of each market segment.

David leads technical due diligence while Frank drives field validation. Their work centers on data quality, governance, and robust position sizing, translating science into conviction and ensuring the portfolio remains adaptive to changing market conditions.

The team’s focus on learning, technical rigor, and deeply rooted feedback ensures everything works in concert. Within robotics and automation, the bets emphasize original software layers, scalable architectures, and clear KPIs that customers can validate, which strengthens the overall risk framework and positions the fund for disciplined expansion.

Define the AI-first Allocation Framework and Clear Diversification Rules


Recommendation: allocate 60-70% of capital to AI-native, early-stage bets with proven data loops and a clear path to mass-market adoption; reserve 30-40% for adjacent bets to diversify risk and keep learning cycles short. The approach is partnered with Sequoia ecosystems and has evolved to support bold, rapid execution.

Diversification rules: diversify domains across LLM platforms, computer vision, and autonomous systems; spread geography across 2-3 regions; anchor stage exposure in early-stage ventures; keep position sizes disciplined, with a maximum of 15% per name and 30% per cluster; use multiples as a core yardstick, alongside operating metrics, to calibrate value and decide when to exit or double down. Disciplined governance keeps the approach working through corner-case shocks in the market.
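The sizing and band rules above are concrete enough to express as a simple compliance check. A minimal sketch follows; the thresholds (60-70% AI-native band, 15% per-name cap, 30% per-cluster cap) come from the text, while the position schema and any names are hypothetical:

```python
# Sketch of the diversification rules as a portfolio compliance check.
# Thresholds come from the text; the position schema is an assumption.

def check_rules(positions):
    """positions: list of dicts with 'name', 'cluster', 'bucket', 'weight'
    (weight as a fraction of committed capital)."""
    violations = []

    # Per-name cap: no single position above 15%.
    for p in positions:
        if p["weight"] > 0.15:
            violations.append(f"{p['name']}: {p['weight']:.0%} exceeds 15% per-name cap")

    # Per-cluster cap: no cluster above 30% in aggregate.
    clusters = {}
    for p in positions:
        clusters[p["cluster"]] = clusters.get(p["cluster"], 0.0) + p["weight"]
    for cluster, weight in clusters.items():
        if weight > 0.30:
            violations.append(f"cluster {cluster}: {weight:.0%} exceeds 30% cap")

    # Allocation band: the AI-native/early-stage share should sit in 60-70%.
    # round() guards against float accumulation noise at the band edges.
    core = round(sum(p["weight"] for p in positions if p["bucket"] == "ai_native"), 9)
    if not 0.60 <= core <= 0.70:
        violations.append(f"AI-native share {core:.0%} outside 60-70% band")

    return violations
```

Running the check before each quarterly review gives the committee an objective list of breaches rather than a debate.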

Governance: establish a partnered committee in the office, with David and Mike joining to evaluate proposals and enforce the original thesis; they are empowered to pivot quickly, which is the point, maintaining a bold stance and thinking deeply with source data.

Signal and data: rely on source data along with internal telemetry, external benchmarks, and market signals; track metrics on a live dashboard; the thinking behind those measurements guides decisions throughout the portfolio, deeply embedding a learning loop.

Execution and horizon: The framework is designed to evolve across decades, joining forces with external partners and incubators (for example, Sequoia-backed programs) to accelerate pace and bold action; the office hosts regular learning sessions to maintain speed and ensure every move advances the original thesis.

Aspect | Guideline | Metrics / Signals
Allocation bands | AI-native/early-stage 60-70%; adjacent 30-40%; tail risk reserved | MOIC, IRR, time-to-signal
Diversification | Domains: LLMs, CV/robotics, autonomous systems; geographies: 2-3; stages: mainly early-stage | Count of domains, geos, average round size
Position sizing | Max 15% per name; max 30% per cluster | Concentration risk, drawdown
Signal framework | Data velocity, model drift, product-market fit | Refresh rate, drift score, NPV
Governance | Monthly reviews; decision owners: David, Mike; process designed for speed | Time-to-decision, approvals

Refine Entry Timing and Position Sizing Across AI Sub-sectors

Next, implement a tiered entry framework tied to sub-sector maturity and conviction. Define explicit entry windows around idea validation, prototype, pilot with customers, and scale milestones. In fast-moving markets like software and platform tech, enter on clear pilot traction within 6–12 months; in autonomous, logistics, and industrial plays, extend to 18–24 months to confirm repeatability and regulatory alignment. Track capital use and runway with precision; if a pilot stalls, shift to other high-conviction areas. Maintain frank discipline to avoid failure loops and protect scalable opportunities in professional tech and cybersecurity.

Position sizing by sub-sector: allocate capital on a tiered basis: core bets (platform, software, and cybersecurity) receive 50–60% of committed capital; side bets (autonomous, logistics, industrial) 25–30%; reserve 10–15% for experiments. Apply trigger-based reallocation: if metrics like CAC payback improve, shift a portion from side to core; if a pilot fails to hit revenue milestones, trim exposure early. This framework improves risk-adjusted returns and preserves flexibility as markets shift.
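The trigger-based reallocation can be sketched as a small helper. The tier bands come from the text; the 5-point step size and the "reserve" bucket for trimmed capital are illustrative assumptions:

```python
# Hedged sketch of trigger-based reallocation across sizing tiers.
# Tier bands follow the text; step size and "reserve" bucket are assumptions.

def reallocate(alloc, cac_payback_improved=False, pilot_missed_milestone=False,
               step=0.05):
    """alloc: dict of tier -> fraction of committed capital; returns a new dict."""
    alloc = dict(alloc)  # leave the caller's allocation untouched
    if cac_payback_improved:
        # CAC payback improving: shift a portion from side bets into core bets.
        shift = min(step, alloc.get("side", 0.0))
        alloc["side"] = alloc.get("side", 0.0) - shift
        alloc["core"] = alloc.get("core", 0.0) + shift
    if pilot_missed_milestone:
        # A pilot missed its revenue milestone: trim side exposure early and
        # park the trimmed capital as uncommitted reserve.
        trim = min(step, alloc.get("side", 0.0))
        alloc["side"] = alloc.get("side", 0.0) - trim
        alloc["reserve"] = alloc.get("reserve", 0.0) + trim
    return alloc
```

For example, starting from core 55% / side 30% / experiments 15%, an improving CAC payback shifts 5 points from side to core.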

Execution signals: enter when customers validate the idea, pilots prove repeatability, and the team can build a scalable model. In svic-backed rounds, the community enforces due diligence and frank conviction; founders and investors align on a clear roadmap together. The cofounder-CEO perspective adds discipline to prioritization and helps the team stay focused on what matters to customers while scaling across markets.

Enhance Due Diligence: Founders, Data Access, Moats, and Competitive Leverage

Recommendation: demand a documented data-access plan with verifiable sources, a clearly defined moat, and a fast feedback loop that ties product bets to revenues within 90 days, ensuring the team can adapt at speed. This focus should translate into a measurable advantage that scales over time.

  1. Founders and execution discipline

    • Evaluate track record, time-to-value, and the ability to recruit a mentor network; look for a duo with complementary skills and a history of hitting milestones, not chasing vanity metrics.
    • Assess whether the team cares about user outcomes and can articulate a concrete path to profitability. If the team is not transparent about finances or unit economics, move on.
  2. Data access, provenance, and moat credibility

    • Data-access plan must include a documented source for streams, data licenses, and refresh cadence; this enables a repeatable evaluation of model updates.
    • Licensing terms should be long enough to protect the moat, so that you are able to audit improvements without friction and prevent premature leakage of the underlying advantage.
    • Assess data quality controls, lineage, and the ability to reproduce results; reviewed, version-controlled docs should exist for every data source.
  3. Moats and competitive leverage

    • Look for durable advantages: first-mover assets, network effects, IP, or unique access to data that competitors would struggle to replicate; quantify switching costs and retention signals.
    • Evaluate market position: is the footprint scalable with unit economics that improve with growth, not merely top-line revenues? The answer should feel sustainable, not a one-off spike.
    • Test the speed of iteration against competitors; a well-timed pivot can preserve advantage during rapid times of change.
  4. Evidence, negotiation terms, and governance

    • Require a clear governance plan for data usage, including access resets, revocation rights, and ongoing due-diligence checkpoints; this matters for long-term value creation and minimizing disputes.
    • Document term sheets around data rights, licenses, and milestones; ensure the terms support scale and protect revenues as the portfolio firms grow, not just at inception.
    • Use a structured feedback loop: collect external opinions, internal metrics, and adviser input; the process should be disciplined and well-documented.
  5. People, time, and realism

    • Assess team cohesion and initiative, and whether headcount plans align with milestones; time to value should be realistic and aligned with capital needs.
    • Evaluate the culture around speed and quality: are decisions data-informed and iterative, or influenced by egos and hype? A grounded approach beats hype every time.
    • Look for evidence of scale capability; ensure the operating model can handle rapid growth without eroding margins or moat integrity.

Real-world signals to prioritize: verifiable revenues, defensible data access, durable moats, and a leadership team with tangible plans and a track record that resonates with decades of industry experience. This approach reduces risk, accelerates learning, and strengthens the overall position of the firm in a competitive ecosystem, where the fastest informed decisions often define who leads in robotics, software, and beyond.

Strengthen Risk Controls: Liquidity Floors, Drawdown Triggers, and Scenario Analysis


Recommendation: establish a liquidity floor equal to 12 months of baseline operating costs, plus a 25% contingency. Example: if monthly burn is 2.0M, the runway floor is 24M; with the cushion, the target is 30M. Automate forecasting and cash tracking so core data is visible in a single dashboard; automation builds trust and keeps founders aligned with ground truth. If runway touches the floor, execute escalation steps to preserve long-term value; this approach mirrors established practice at firms such as Sequoia. The numbers are clear, and starting today helps the firm stay resilient.
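The floor arithmetic is simple enough to encode directly; this helper just restates the rule (12 months of burn plus a 25% cushion) so the dashboard can compute it consistently:

```python
# The liquidity-floor arithmetic from the recommendation as a small helper:
# 12 months of baseline burn, plus a 25% contingency cushion on top.

def liquidity_targets(monthly_burn, months=12, contingency=0.25):
    """Return (runway floor, cushioned target) in the same units as the burn."""
    floor = monthly_burn * months
    target = floor * (1 + contingency)
    return floor, target

# Worked example from the text: 2.0M monthly burn.
floor, target = liquidity_targets(2.0)  # floor = 24.0, target = 30.0
```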

Liquidity floors must be anchored to asset needs and the long-term vision. Build a dedicated reserve for operations, vendor obligations, and portfolio support, calibrated with historical data and stress tests. Use automation to pull data from accounting, treasury, and founders' reports; compare against Sequoia-like benchmarks and keep the core data visible to leadership. The ground truth should be deeply trusted, and the system should keep operating during volatility; well-built automation makes this transparent and repeatable.

Drawdown triggers define a ladder of responses. Set thresholds at a 15% NAV decline to trigger an immediate risk review, 25% to prompt capital reallocation away from discretionary bets, and 40% to pause follow-on funding. Attach clear timeframes: respond within 2 quarters in a bear scenario; shorten to 6–8 weeks in sharper stress. This structure protects asset value and helps founders navigate pressure without abrupt changes in strategy.
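The ladder maps cleanly to a lookup; thresholds and responses below are taken directly from the text:

```python
# The drawdown ladder as a direct lookup; thresholds and responses follow
# the text (15% / 25% / 40% NAV decline).

def drawdown_response(nav_decline):
    """nav_decline: fractional NAV drop from peak, e.g. 0.18 for an 18% decline."""
    if nav_decline >= 0.40:
        return "pause follow-on funding"
    if nav_decline >= 0.25:
        return "reallocate capital away from discretionary bets"
    if nav_decline >= 0.15:
        return "immediate risk review"
    return "no action"
```

Encoding the ladder removes ambiguity about which response applies when a decline straddles two thresholds.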

Scenario analysis tests resilience under three conditions: base (long-term growth, steady inflows), adverse (revenue and inflows shrink by mid-teens), and severe (additional shocks that push liquidity toward the floor). For each, simulate cash flows, asset values, burn, and financing inflows to generate 12-, 24-, and 36-month liquidity projections. Translate results into actionable steps: cut discretionary spend, renegotiate timing with suppliers, accelerate collections, and tighten non-core hires; the plan should be ready to implement together with the firm's governance. Disciplined playbooks and clear triggers keep the forecast intact over time.
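A minimal cash-path projection for the three scenarios might look like the sketch below. The shock sizes are illustrative assumptions ("mid-teens" shrink read as 15%, severe as 30%); real inputs would come from the accounting and treasury pipeline:

```python
# Minimal liquidity projection under a proportional inflow shock.
# Scenario shock sizes are illustrative assumptions, not figures from the text.

def project_liquidity(cash, monthly_inflow, monthly_burn, months, inflow_shock=0.0):
    """Return month-by-month cash balances with inflows shrunk by inflow_shock."""
    inflow = monthly_inflow * (1 - inflow_shock)
    path = []
    for _ in range(months):
        cash = cash + inflow - monthly_burn
        path.append(cash)
    return path

# Assumed shock per scenario (fraction by which inflows shrink).
SCENARIOS = {"base": 0.00, "adverse": 0.15, "severe": 0.30}
```

Running the projection at 12-, 24-, and 36-month horizons per scenario, and comparing each path against the liquidity floor, shows when the escalation steps would trigger.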

Implementation steps are concrete: started last quarter, already tested in a dry run, and ready for live deployment. Assign ownership to the risk lead, configure the data pipeline, calibrate the floor and triggers, and build a quarterly review pack. Publish the scenario results to stakeholders and keep the document refreshed, revised in the Q4 version to reflect new data and lessons learned. This program helps founders and team members see the numbers, stay aligned, and operate with trust and speed, deeply rooted in the core discipline of risk management.

Establish a Structured Post-Event Review Process for 2018 and 2022

Recommendation: implement a 7-step post-event review protocol, due within 10 business days after each gathering, with a named owner and a mentor to ensure accountability and clear ownership.

Collect the 2018 and 2022 numbers: registration, attendance, session counts, follow-up actions, and sponsor exposure. Gather qualitative commentary from participants and stakeholders; apply models to forecast outcomes in asset utilization, tech uptake, and industrial impact. Emphasize gaps versus plan and surface what likely shifted between editions; incorporate insight from Sarah's feedback; align with Geneva and office workflows; report to teams via LinkedIn and Facebook channels to validate external resonance without overloading stakeholders.
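The edition-over-edition comparison can be sketched as a small helper. The metric names follow the text; any figures used with it are placeholders:

```python
# Small helper for the 2018-vs-2022 comparison; metric names follow the
# text, and any numbers fed to it are placeholders.

def compare_editions(earlier, later):
    """Return per-metric absolute and percent change between two editions."""
    out = {}
    for key in earlier:
        delta = later[key] - earlier[key]
        pct = delta / earlier[key] if earlier[key] else None
        out[key] = {"delta": delta, "pct": pct}
    return out
```

The resulting deltas feed directly into the gaps-versus-plan discussion and the 2-page executive summary.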

Define a compact action plan: assign owners for each item, set realistic deadlines, and track progress in a shared dashboard. Consolidate findings into a 2-page executive summary suitable for the office briefing and for external visibility. Ensure the process remains clear, concise, and focused on what works; structure the review to minimize disruption to ongoing operations and to support continuous improvement across teams.

Documentation and follow-up: the final outputs should be tagged as executed and edited to signal completion and revision status. The process should surface concrete next steps, close the loop on findings, and identify actionable assets to reuse in future programs. This helps the team review the journey, spot gaps, and iterate, ensuring the numbers and qualitative outcomes tie to strategic goals, with Geneva, LinkedIn, and office channels reinforcing accountability across the ecosystem.
