Updating Marty Cagan's Product Risk Taxonomy for the Generative AI Era

Josh Korr, Former Product Strategy Director

Article Categories: #Strategy, #Product, #Startups

Integrating generative AI into your software introduces unique considerations that your product team should explicitly add to their risk taxonomy and process.

As more clients seek help with their AI product strategy, I’ve been thinking about AI’s implications for product teams that use Marty Cagan's product risk taxonomy framework.

A risk taxonomy is a product discovery tool that provides a structured way of assessing the potential — and risk — of a product idea. Product teams use risk taxonomies to ensure they aren't blindly latching onto a solution without considering things like whether it solves a real, high-value customer problem or whether it's viable for the business.

As Cagan writes, "Assessing the product risks is the first thing we do when we’re trying to discover a solution worth building. Once we understand the risks, we have many different techniques for quickly testing ideas against those risks."

I've realized integrating generative AI into your product introduces unique risk areas that I think are worth explicitly adding to the product risk taxonomy:

  • AI Mindshare risk: For a given AI feature idea, how likely are customers to use your product as their AI solution for that need?

  • AI Technical Feasibility risk: Can you implement the ideas given generative AI’s unique technical implications?

  • AI Business Viability risk: Do the ideas work for the business given generative AI’s unique operational implications?

A diagram of an updated version of Marty Cagan's product risk taxonomy, denoting updates for risks specific to Generative AI.

In all of these areas, past assumptions and mental models may not be sufficient for addressing AI-related risk. Here’s a rundown of these risk areas.

Note: In this article, I’m talking only about product discovery risk, not addressing broader risks around AI itself (socio-existential, copyright, truth, etc.).

Risk 1: AI Mindshare

In the past few months, how many times have you rolled your eyes at yet another product introducing the same core ChatGPT-like functionality you've already seen in a dozen other products? For example: summarizing text, generating text from scratch, suggesting edits to your text, and so on.

Each 🙄 is an example of AI Mindshare Risk for that product: the risk that a customer will use someone else's product — not your product — for a given AI feature.

Here are some dimensions of AI Mindshare Risk to add to your product risk taxonomy:

AI Mindshare Risk: Problem/Solution Narrowness

To avoid eye rolls — or at least know when you might provoke them — it’s important to ask these questions as you assess potential AI solutions:

  • Is the problem generic / abstracted, or narrow / domain-specific?

  • Is the AI solution widely available, or narrowly available?

  • Is the AI solution an out-of-the-box LLM capability, or does it require custom functionality or data?

A diagram of Problem/Solution Narrowness, one of the AI Mindshare risks.

The more you address a generic problem with a widely available, out-of-the-box feature, the higher the AI Mindshare Risk. 

AI Mindshare Risk: Increased Product Strata Competition

My colleague Kevin came up with this helpful insight while we were collaborating on AI strategy: in the past, most software products have had a relatively narrow plane of competition. Products in one category or industry might compete with each other, but typically a SaaS product wouldn’t have to worry about direct competition from an operating system or browser.

Large Language Model applications’ wide-ranging core capabilities are changing this dynamic, at least in the near term. Suddenly, all layers of the product strata compete for the same AI Mindshare, as they all integrate with LLMs and get the same broad capabilities out-of-the-box. 

This is a significant new risk area for product teams.

A diagram of Product Strata Competition, one of the AI Mindshare risks.

If you're contemplating integrating an LLM's out-of-the-box functionality into your product, consider the competition across the product strata:

Operating System

Microsoft is integrating OpenAI’s generative AI applications into Windows under its Copilot brand:

A Microsoft marketing asset showing a screenshot of Windows Copilot.

Browser

Microsoft has incorporated Bing Chat into the Edge browser — again integrated with OpenAI’s LLMs and DALL-E image generator.

A screenshot of Bing Chat in the Edge browser.

Browser strata competition will increase further as Microsoft makes Bing available in other browsers, and if Google incorporates its own generative AI applications into Chrome.

Enterprise Office Suite

Microsoft and Google are both integrating generative AI functionality across their office suites. In its announcement of Microsoft 365 Copilot pricing, Microsoft clearly articulates the AI Mindshare Risk to potential competitors outside the office suite space:

"Microsoft 365 Copilot ... puts thousands of skills at your command and can reason over all your content and context to take on any task. It’s grounded in your business data in the Microsoft Graph — that’s all your emails, calendar, chats, documents and more. So, Copilot can generate an update from the morning’s meetings, emails and chats to send to the team; get you up to speed on project developments from the last week; or create a SWOT analysis from internal files."

Browser Extensions and Product Plugins

Products like Grammarly are staking out further claims on generic LLM functionality through browser extensions, plugins, and native integrations.

A screenshot of Grammarly's Chrome browser extension, opened on top of a Google Doc.

Software Incumbents

There's also added competition from incumbents at the top level of the product strata. As Lightspeed Venture Partners' Michael Mignano writes, "If a new startup is building an AI product that is likely to be offered by an incumbent, then the startup will inevitably face a very steep, uphill battle. After all, the incumbent can offer and aggressively market the same commoditized AI technology to an existing user base of highly qualified customers."

General Domain Products

Finally, there's increased strata competition from general domain products. 

In typical business software, addressing specific domain problems or segments can be a successful differentiation strategy. For example, the marketing software space has many specific categories and products that co-exist with general solutions like HubSpot and Salesforce.

This dynamic could turn out differently when it comes to AI Mindshare.

For example, Jasper is an AI copywriting product that lets marketers auto-generate content while maintaining their company’s brand voice and style guide. Jasper has templates for dozens of common marketing content types; includes an AI image generator (why not); and is getting in on the Product Strata Competition with a powerful-looking browser extension.

A screenshot of Jasper's marketing site, showing some of the types of marketing templates the product can generate from scratch.

While these solutions address fairly general domain problems, if marketing teams adopt Jasper as their primary AI solution, it may be harder for more specific marketing products to recapture those customers’ AI mindshare.

All of these examples underscore the point: if you're contemplating integrating an LLM's out-of-the-box functionality into your product, weigh the increased competition across the product strata.

AI Mindshare Risk: Overpromising

Can you implement a version of your idea that meets your sure-to-be-breathlessly-hyped marketing promise? 

For example, Framer makes some bold promises on its homepage:

A screenshot of Framer's marketing site, showing the marketing copy for its AI website creator feature: 'Start your dream site with AI. Zero code, maximum speed.'

As it happens, I’m in the market for a new website solution for my sing-along empire. (What, you don’t have one?) I thought, “Ok Framer, I’m skeptical but I’ll try it.”

I entered a prompt along the lines of “A website for The D.C. Sing-Along, a hootenanny for the digital age, inspired by concert posters from the 1960s through 1980s.”

Here’s what Framer generated:

A screenshot of the disappointing output of Framer's AI website creator feature.

🤔🤔🤔. Guess it’s time to check out Webflow!

Of course Framer's offering will improve, and there are definitely arguments for shipping even very-MVP features. But I think AI Overpromising is a real risk depending on your context — and on how wide the gap is between promise and reality.

AI Mindshare Risk: Customer AI Adoption Journey

Finally, how well do you understand your customers’ AI adoption journey?

How do they feel about AI in general? How much have they adopted AI products or features? Which ones? Are they frustrated or delighted by them?

Adding this area to your discovery learning goals will inform everything from value validation to future messaging to strategic urgency.

Risk 2: AI Technical Feasibility

The next set of risk taxonomy updates surfaces AI-specific considerations around technical feasibility. 

Technical Feasibility Risk: AI Competencies

Does your team have the skills and knowledge needed to implement your ideas?

Generative AI-related implementation is different in some respects from good ol’ web development: there are some different fundamental development paradigms, along with a new set of dev apps and tooling to figure out (see below).

Unusual Ventures offers an evocative example of a dev paradigm difference in their AI tech stack rundown:

"LLMs have weird APIs — you provide natural language in the form of a 'prompt' and get a probabilistic response back. It turns out that mastering this API requires a lot of tinkering and experimentation – to solve a new task, you’ll probably need to try a lot of different prompts (or chains of prompts) to get the answer you’re looking for. Simply getting comfortable with the probabilistic nature of LLM output takes time — you’ll need to do extensive testing to understand the boundary cases of your prompt."

We’re pretty far from GET and POST-land here.
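To make that paradigm shift concrete, here’s a minimal sketch of the tinkering loop, using the openai Python package’s chat completion interface (the pre-1.0 SDK current as of this writing). The prompt, model, and temperature are illustrative stand-ins:

```python
# pip install openai  (uses the pre-1.0 openai SDK interface)
import openai

openai.api_key = "YOUR_API_KEY"  # illustrative; load from env in real code

prompt = "Summarize this support ticket in one sentence: ..."

# Run the same prompt several times: unlike a deterministic REST endpoint,
# each call can come back with a different answer.
for attempt in range(3):
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.7,  # higher temperature means more variation across runs
    )
    print(f"Attempt {attempt + 1}: {response.choices[0].message.content}")
```

Run it a few times and you’ll likely get three different summaries; that variability is exactly what makes prompt testing so different from asserting on a deterministic JSON response.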

As this example implies, AI Competency Risk will be centered around engineering roles. But AI competencies will be important across teams:

  • If product managers and designers aren’t up to speed, they won’t be able to come up with valuable, creative, and realistic AI solutions.

  • If sales and marketing teams aren’t up to speed, they won’t know how to communicate effectively to customers or interpret feedback about AI offerings.

  • If leaders aren’t up to speed, they won’t make informed strategic decisions about AI.

Technical Feasibility Risk: AI Infrastructure

In addition to dev planning around which AI applications to integrate and how to use their APIs, many AI feature ideas would immediately trigger a new set of technical infrastructure requirements.

A new AI integration tech stack is emerging to support use cases that involve getting your product’s data into (and out of) an LLM — i.e., almost any AI feature idea that goes beyond out-of-the-box LLM integration.

Every VC firm has its own take on the AI tech stack, but here are some common areas:

  • Development and orchestration frameworks like LangChain, LlamaIndex, and Dust, to make AI integration development easier and to connect all the pieces of your AI infrastructure.

  • Data retrieval / Vector databases like Pinecone, to convert (aka “embed”) your data and documents into a format that LLMs can work with (“vectors”) and search them by similarity; a minimal version of this pattern is sketched after this list.

  • Fine-tuning solutions like MosaicML or TensorFlow, to train LLMs on domain-specific data.
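To make the retrieval piece concrete, here’s a minimal sketch of the embed-and-search pattern these tools support, using the openai package’s embeddings endpoint and plain numpy in place of a managed vector database. The documents and question are invented; a real product would use a vector store like Pinecone rather than an in-memory array:

```python
# pip install openai numpy  (uses the pre-1.0 openai SDK interface)
import numpy as np
import openai

openai.api_key = "YOUR_API_KEY"  # illustrative; load from env in real code

# Invented stand-ins for your product's data.
documents = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Premium plans include priority support and a 99.9% uptime SLA.",
]

def embed(texts):
    # "Embedding" converts text into vectors that can be compared numerically.
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
    return np.array([item["embedding"] for item in resp["data"]])

doc_vectors = embed(documents)  # a vector DB like Pinecone would store these

question = "What is the refund window?"
q_vector = embed([question])[0]

# Cosine similarity: retrieve the document most relevant to the question.
scores = doc_vectors @ q_vector / (
    np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q_vector)
)
context = documents[int(np.argmax(scores))]

# The retrieved context then gets stuffed into the LLM prompt, grounding
# the model's answer in your product's own data.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```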

Like the new API paradigms, much of this infrastructure seems at least somewhat unique to generative AI.

Technical Feasibility Risk: Indefinite AI Development

I said a moment ago, “A new AI integration tech stack is emerging” — I can’t emphasize enough how real-time some of this is. Core AI development apps, tools, and frameworks are being launched, adopted, or funded weekly (if not daily), from new LLMs like Meta’s open-source Llama 2 (announced July 18), to AI development frameworks like LangChain's LangSmith (ditto).

All of this points to another AI product risk: once you begin integrating generative AI applications in your product, assume the integration will require an unusual amount of ongoing development work. For example:

  • Refining your initial API implementation, or adding new features atop the opaque / non-standard APIs 

  • Adopting new tools and frameworks, or revising your existing implementations as the tools are updated

  • “Upgrading” to an LLM’s latest model (e.g., going from OpenAI’s GPT-3.5 to GPT-4)

As Stratechery’s Ben Thompson noted in a recent episode of his Sharp Tech podcast (subscription-only), “One of the big questions is, can you build an application that can easily sub in new models … where you get the new capability for free. And the reality is … that requires a degree of standardization that just isn’t really available and possible now.”
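One common mitigation is a thin abstraction layer that treats the model choice as configuration, so a swap touches one adapter rather than the whole codebase. Here’s a hypothetical sketch; the ModelConfig registry and complete() wrapper are my own illustration, not a standard API:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ModelConfig:
    name: str        # e.g. "gpt-3.5-turbo", "gpt-4", or an open-source model
    max_tokens: int
    call: Callable   # provider-specific adapter function

def openai_adapter(prompt: str, cfg: ModelConfig) -> str:
    # All provider-specific code lives in the adapter, so switching models
    # elsewhere in the app is a config change, not a rewrite.
    import openai
    resp = openai.ChatCompletion.create(
        model=cfg.name,
        messages=[{"role": "user", "content": prompt}],
        max_tokens=cfg.max_tokens,
    )
    return resp.choices[0].message.content

MODELS = {
    "default": ModelConfig("gpt-3.5-turbo", 512, openai_adapter),
    "premium": ModelConfig("gpt-4", 512, openai_adapter),
}

def complete(prompt: str, tier: str = "default") -> str:
    cfg = MODELS[tier]
    return cfg.call(prompt, cfg)
```

Even with a layer like this, Thompson’s caveat holds: prompts tuned for one model rarely carry over unchanged, so the abstraction contains the rework rather than eliminating it.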

Risk 3: AI Business Viability

The final set of risk taxonomy updates highlights AI-specific considerations around business viability.

Business Viability Risk: The Business's AI Adoption Journey

Just as it’s important to consider your customers’ AI adoption journey, you also need to consider your business’s AI adoption journey.

For example:

  • Has AI been a part of your product for years? Or have you only been paying attention since the ChatGPT boom?

  • Do you have any existing AI features?

  • How sophisticated is your internal understanding of AI?
    • Do you have any developers, product managers, or leaders with AI experience?
    • What's leadership's level of understanding?
    • Who in the company has the most sophisticated understanding of AI and its potential impact on the business and product?

The answers will inform which of your AI ideas are viable for the business, and when.

Business Viability Risk: AI Costs

As with any new technology, AI adoption will add costs, including:

  • Infrastructure and tooling (recurring costs, usage-based costs, and/or costs for managed services)

  • LLM costs (usage-based), if you use a proprietary rather than open-source solution; a rough cost model is sketched after this list

  • Hiring costs, and/or vendor costs to mitigate staffing gaps

  • Cost of the aforementioned indefinite development
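The usage-based line items reward a back-of-envelope model early in discovery. Here’s a minimal sketch; every rate and traffic number in it is an illustrative placeholder, so substitute your provider’s actual price sheet and your own estimates:

```python
# Back-of-envelope monthly cost model for a usage-based LLM feature.
# All rates and usage numbers are illustrative placeholders; substitute
# your provider's current pricing and your own traffic estimates.

PRICE_PER_1K_INPUT = 0.0015   # placeholder $ per 1K input tokens
PRICE_PER_1K_OUTPUT = 0.002   # placeholder $ per 1K output tokens

def monthly_llm_cost(requests_per_day: int,
                     avg_input_tokens: int,
                     avg_output_tokens: int,
                     days: int = 30) -> float:
    input_cost = requests_per_day * days * avg_input_tokens / 1000 * PRICE_PER_1K_INPUT
    output_cost = requests_per_day * days * avg_output_tokens / 1000 * PRICE_PER_1K_OUTPUT
    return input_cost + output_cost

# e.g. 5,000 summarization requests/day, ~1,500 tokens in, ~200 tokens out
print(f"${monthly_llm_cost(5000, 1500, 200):,.2f} per month")
```

Even a crude model like this surfaces the key viability lever: costs scale with usage, so a successful AI feature gets more expensive the more customers love it.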

Business Viability Risk: AI Legal Considerations

The legal implications around generative AI use are unsettled and evolving in real-time. Given the uncertainty, legal counsel should be deeply involved in your AI integration planning, whether strategic or tactical.

Key areas of consideration include:

  • Copyright infringement

  • Intellectual property ownership

  • Data privacy and protecting business information when enabling customers to use LLM features on their own data and files

Business Viability Risk: AI Ethical Considerations

As I noted at the start, this article does not address the many ethical questions around AI. But if your software company is considering adding AI features, you absolutely should address those questions.

Don’t just wing it and hope for the best. Be thoughtful and purposeful in your AI approach upfront, knowing you’ll inevitably adjust that approach along the way.

A Temporary Taxonomy Update?

I assume the AI space will eventually become settled enough that all of the above gets subsumed into Marty Cagan’s general product risk taxonomy. At least for now, though, I think product teams would benefit from making the AI-related product risks explicit. After all, we may be in this protean state for a while.

In the meantime, I’d love to hear what other areas you’d add to the AI product risk taxonomy.
