Most companies don't stall because of one issue. They stall because product, go-to-market, and operations are misaligned — and everyone is working hard on the wrong thing. When you miss the quarter, the reflex is to blame sales. It's usually not sales. I diagnose what's actually broken and fix that.
I work with CEOs, founders, and investors to get to the real constraint fast — then take ownership of fixing it. Not a deck. Not recommendations. Execution.
Growth rarely stalls for a single reason. It stalls when product, go-to-market, and operations fall out of alignment. What worked to get here doesn't work to scale. And the symptoms show up everywhere at once.
This is the inflection point. Not a moment of opportunity — a moment when the business either evolves into something that can scale, or starts to break under its own complexity. Most leadership teams know something is wrong. Very few can pinpoint what.
I work with CEOs and investors at exactly these moments. I operate inside the business to identify where alignment has broken across product, GTM, and operations — not just the symptoms, but the underlying constraint. Then I fix it.
Having built platforms from zero to acquisition and run businesses at $1B+ scale, I've seen these patterns repeatedly. The problem is rarely obvious. But once identified, it can be fixed fast.
The goal isn't more strategy. It's a business that actually works at scale.
I don't offer generic services. I engage when something isn't working and needs to be fixed. Each engagement starts with a specific situation — something that's broken, misaligned, or at risk right now. If one of these sounds like your quarter, that's the conversation we should be having.
I don't produce decks and walk away. I work inside the business — identify the real constraint, align the team around what matters, and drive execution until the model works. That's the only definition of done I use.
I've spent my career at the exact moments when companies either break through — or stall. The good times teach you what great looks like. The hard times teach you what actually breaks, why it breaks, and what it takes to fix it.
Whatever your challenge is, I've likely faced it. The failure modes repeat. The patterns are recognizable. And knowing them — from lived experience at scale, not a framework — is the core of what I bring to every engagement.
I was building enterprise AI products when the rest of the industry was still debating whether AI was real. I understand exactly where this technology is taking every software business — and what the operating model needs to look like to win in that world.
I've repeatedly operated in environments where growth, complexity, and execution begin to break. The same failure modes show up across companies of different sizes and categories. Recognizing them early — and knowing what actually fixes them — makes diagnosis fast and action decisive. I've seen this before. I know how it ends — and how to change it.
Airespace didn't enter the wireless LAN market — it redefined it. Espressive didn't compete in the service desk category — it created a new one. Building a new category is a different challenge than competing in an existing one: it requires a different product strategy, a different GTM motion, and a different way of building conviction in the market. I've done it twice. I know what it takes — and what kills it.
Most operators specialize. I work across the full stack — product-market fit, positioning, go-to-market, and operating model. This matters because the problems companies face are almost never isolated to one function. A GTM problem is often a positioning problem. A pipeline problem is often a product-market fit problem. Fixing the real issue requires seeing all of it.
I was building large-scale enterprise AI products before most companies had AI on their roadmap. I've watched the AI wave build from the inside — not from an analyst's desk. Every company you compete with is racing to figure out what AI means for their product, their go-to-market, and their operating model. I've already lived through that race. I know what the winners get right and where the losers get trapped. That context shapes everything I diagnose and everything I build.
"I've built and scaled category leaders through the hard parts — the missed plans, the stalled growth, the pivots that had to work. I know where software and AI are taking every business right now. I step in when companies face those moments and need someone who's been there and knows what comes next."
Most leadership teams can name the symptom. Very few can identify the actual constraint. That gap — between what looks broken and what is actually broken — is exactly where I work.
If This Sounds Familiar, Let's Talk

Most consultants hand over a slide deck. This is different. The engagement model is designed for companies that need someone who can think clearly and act decisively — a senior operator running alongside leadership, not behind it.
Each engagement starts with an honest diagnostic. I will tell you what I find, even if it's uncomfortable, and I will tell you what it will take to fix it. Then we get to work.
I listen to sales calls. I sit in pipeline reviews. I read the board deck and the real numbers behind it. I talk to customers, not just leadership. Within days, not months, I know where deals are dying, where the story is breaking down, and what the organization is actually optimizing for versus what it thinks it's optimizing for.
I share what I found — directly, without softening. I tell you what's actually driving the gap, what it will take to fix it, and what has to stop. Then we agree on a mandate. Not a project plan. A mandate. One clear thing we are going to fix, with clear ownership and a timeline that respects the urgency.
I work alongside the team — in deals, in reviews, in the room where the hard decisions get made. I don't send recommendations by email. I show up, I do the work, I change what needs to change. Accountable to outcomes, not deliverables.
The engagement ends when the company owns the trajectory — when the changes are durable, the team can sustain them, and there's no dependence on outside help to keep the engine running. That's the only definition of success I use.
Two-time founder. Two new product categories created. Built Airespace from zero to #2 market share in 18 months — acquired by Cisco for $450M. Founded Espressive and created the enterprise conversational AI category, achieving 85%+ adoption rates, 65%+ ticket reduction, and three consecutive Forrester Wave leadership positions — before acquisition by Resolve.
At McAfee, ran a $550M global business with full responsibility across product, GTM, and operations. At ServiceNow, led product expansion into new application categories, contributing to a $1B+ portfolio. At Cisco, built the enterprise networking and wireless architecture that defined a category.
Operated across the full range — pre-seed to $1B+. From a two-person founding team to running a $550M global P&L. Every stage, every inflection point. The pattern recognition that comes from operating at this range is what makes the diagnosis fast and the fix durable.
Built the first generation of enterprise AI products at scale — when most companies were still debating whether AI was ready for enterprise deployment. The operating experience from that era shapes how I think about every company navigating the AI transition today.
Multiple Gartner Magic Quadrant and Forrester Wave leadership positions — across companies and categories. Earned not through an AR exercise, but by treating analyst recognition as a product and positioning problem. Recognition at that level isn't about managing relationships. It's about making sure the product actually does what the category demands, that the narrative reflects it clearly, and that the market understands what you're winning on. I've done this from the inside, as the operator responsible for the outcome.
The same pattern runs through all of it: stepping into hard problems at genuine inflection points and driving them to outcomes. That is what Inflection Point Operating exists to do for others.
I've been on the inside of multiple Gartner MQ and Forrester Wave processes — not as an AR function, but as the operator accountable for whether the product actually deserved to be a Leader.
Most companies think analyst recognition is an AR problem. It isn't. The AR team can write a great briefing — but if the product has gaps, the messaging doesn't differentiate, the demo doesn't land, or the customer references aren't telling the right story, no amount of analyst relations work fixes that. The evaluation reveals what's actually true about your product and where you stand in the market.
I work with companies that want to compete for — or defend — a leadership position. Not by gaming the process, but by making sure the product, the positioning, the customer outcomes, and the competitive differentiation are genuinely strong enough to earn it. The MQ or Wave process is a forcing function. I help you use it that way.
I'll be direct about whether there's a fit — and if there is, I move fast. Reach out. One conversation is enough to know.