These 2 Mental Models Will Determine Whether Your AI Startup Will Last | HackerNoon

Historically, whenever a breakthrough technology has emerged, the market has been flooded with tons of new custom-built applications using that technology in some shape or form, but it has often taken a while before we started seeing great products that last.

Now, for the first time in a while, a new breakthrough technology (LLMs) has emerged, and a flood of 99,999 AI products has hit the world, so we’re just about to find out which new products and companies are built to last.

The exciting thing here is that breakthrough technologies give young unknown founders and startups the chance to rise to the top and build new monopolies.

In this post, I’ll write about the mental models that I believe will set the lasting AI companies apart from those that will have a moment of fame but fade out afterwards. I’ll also share my thoughts on how long this evolution might take.

#<24 months to the world’s new tech monopolies

Yes, I believe we’re going to go from custom-built products (that may or may not stick) to AI products as commodities we can’t live without, in 24 months or less. A good way I’ve found to describe this is the Wardley value chain map (a YouTuber converted it to table form, since the original form of the map is a bit hard to understand at first glance).

With LLMs, the world is at Stage 2, slowly moving towards Stage 3.

While most previous technologies took a decade or longer to go from their custom-built stage to commodity, I believe LLM-powered products will get there in the next 24 months, given that LLMs are the first technology in history that doesn’t just offer a value multiplier to customers, but also makes the production/design/thinking process radically faster and easier.

For example, for much of the past year, I’ve been using ChatGPT’s voice feature daily, at times for several hours: sometimes as a technical advisor for weighing the trade-offs of consequential and complex technical decisions, at other times to discuss HR and communication strategies, etc.

Having used it so often, I can clearly see what they’re missing (e.g. remembering what we talk about, so that over time I can prompt less and less and just talk and make better decisions faster). I can also see that AI voice chat products will be something a billion users use daily. What’s not clear to me is which company (or companies), through what specific user interface, business model, and pricing, will deliver that experience. I can speculate (and in fact I’m personally going to ship something in this space soon), but only time will tell.

Since I’ve been a power user of a lot of AI apps (and since I studied AI academically and started my career in AI product management), I notice I perceive AI products’ role and potential in my life differently from many people who have only recently started adopting them. In a way, I have really high expectations of AI (and I’m rarely disappointed these days, since AI can actually deliver on most of them), while most people have really low expectations of AI and therefore never experience the level of productivity and creativity they could reach.

This might be because I have had enough time to go from thinking of AI as a tool to thinking of AI as an equal partner in my decision making in certain domains, and even to build my skills in a way that takes this into account. So it doesn’t hurt me emotionally to talk to an AI as if it were the expert in certain domains.

#Mental Model 1: Seeing AI’s full potential

Generally speaking, I believe the winning AI companies and projects will be those that can see AI’s full potential. If you see AI as a marginal improvement, you’ll create products that are marginally useful. It turns out you never win by being marginally better, but by being an order of magnitude or more better.

You can already see this duality among engineers and designers. I believe only one type will win:

#2 types of engineers

Most senior engineers I know seem to avoid using AI tools for coding, as if it hurts them to acknowledge it can do their work. They almost get offended if you ask whether they’d use AI. When they finally decide to try it (because they saw someone they respect do so), they don’t actually use its full capability. Instead, they write very short prompts, reducing AI to a code completer, and then conclude “it’s just another tool”.

I think part of it is because they can’t [or have incentives against wanting to] think of AI as a better engineer than themselves in narrow problem spaces.

When I look at the prompts written by senior vs. junior engineers, the junior engineers are consistently better AI prompters (and therefore more productive in their relationship with AI) because they have clear incentives to use AI: by importing AI’s skills, they save a huge amount of time and effort. Senior engineers, by contrast, spent years building the very skills that AI now makes importable, so they gain far less from it.

#2 types of designers

If you’re a designer, you can use AI to create new experiences that radically improve how users interact with other users and/or with information.

User-user and user-information interactions are the core value-generators in every technology product. If you make a 10x improvement to a product’s core value-generating/capturing interaction, you have just given your company an insane amount of competitive advantage in the market.

Recently I’ve been interviewing many UI designers for my startup. As part of the interview process, I try to see how they’d imagine AI usage for a specific problem in a specific type of product, and I noticed there are 2 types of approaches:

  1. AI as a tool: Click a button or icon to see a list of jobs AI can do for you, such as “summarize”, “create __”, “ask a question”, etc.
  2. AI as a participant in the experience the user is having: Text the AI like you would a human and get value, e.g. “Who should I talk to if I need ___?”, “Can you make an intro between me and user B?”, “What’s the opportunity cost of decision A, given what user B shared the other day?”, etc.

Again, the value generated and captured by these 2 approaches differs by at least an order of magnitude. If you are a designer (or are hiring one), you should be conscious of whether they think in a type-1 or type-2 way by default.
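To make the contrast concrete, here is a minimal sketch of the two approaches. This is purely illustrative: `fake_llm`, `tool_style`, and `ParticipantAI` are hypothetical names, and the stub stands in for a real model call.

```python
def fake_llm(prompt, context=None):
    """Stand-in for a real LLM call; echoes what it was asked to do."""
    ctx = f" using {len(context)} context messages" if context else ""
    return f"[AI response to: {prompt!r}{ctx}]"

# Type 1 -- AI as a tool: the UI exposes a fixed menu of jobs,
# so the user can only do what the menu anticipated.
TOOL_ACTIONS = {
    "summarize": "Summarize the following text",
    "ask": "Answer a question about the following text",
}

def tool_style(action, text):
    return fake_llm(f"{TOOL_ACTIONS[action]}: {text}")

# Type 2 -- AI as a participant: the AI sits inside the conversation,
# accumulates shared context, and takes free-form requests.
class ParticipantAI:
    def __init__(self):
        self.context = []  # everything said in the group so far

    def observe(self, message):
        self.context.append(message)

    def ask(self, message):
        # Free-form request, answered with the full shared context.
        self.observe(message)
        return fake_llm(message, context=self.context)
```

The structural difference is where the options live: in type 1 they are enumerated in the UI; in type 2 the AI observes everything and the request space is open-ended, e.g. `ai.ask("Can you make an intro between me and user B?")`.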

#Mental Model 2: On-Demand vs Always-On AI

I learned this terminology from Rahul Vohra, CEO of Superhuman, at a talk he gave in 2024 at an event in San Francisco.

On-Demand AI is anything that responds to a user’s prompt. Always-On AI, on the other hand, runs in the background, may take the first step in a human-AI interaction, and can form a human-AI-human or human-AI-information interaction that generates value, without being asked.

I have a hard time quantifying which of these will add more value to users and companies, but if I were to guess, I’d bet on the latter. Imagine being in an online group chat or community and getting a message that says “I noticed fundraising is top of mind for you, and this other member is looking for companies to angel invest in. Want a warm intro?”. That can create value you as the user weren’t even aware you could gain in that group, chat, or community, maybe because you’re not always checking the feed, but the AI is.

Similarly, you can imagine how powerful Always-On AI can be in various other contexts. In a way, you can think of Always-On AI as a luck maximizer: a participant who is aware of your context, needs, and wants, and is constantly scouting for new ways to maximize your luck, especially in ways that are unknown to you. I think that has uncapped value.
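The Always-On pattern from the fundraising example can be sketched as a background agent scanning a feed it was never prompted about and proactively proposing an intro. This is a toy illustration under stated assumptions: the keyword matching stands in for real intent detection by a model, and all names (`FEED`, `always_on_scan`, `on_demand`) are hypothetical.

```python
# A tiny feed of (user, message) pairs the agent watches in the background.
FEED = [
    ("alice", "Fundraising is top of mind for me this quarter."),
    ("bob", "I'm looking for companies to angel invest in."),
    ("carol", "Shipped our new onboarding flow today!"),
]

# Complementary intents the agent is constantly scouting for.
MATCH_RULES = [("fundraising", "angel invest")]

def always_on_scan(feed):
    """Runs without being asked; returns proactive intro suggestions."""
    suggestions = []
    for need, offer in MATCH_RULES:
        seekers = [user for user, msg in feed if need in msg.lower()]
        helpers = [user for user, msg in feed if offer in msg.lower()]
        for seeker in seekers:
            for helper in helpers:
                suggestions.append(
                    f"I noticed {need} is top of mind for {seeker}, "
                    f"and {helper} is offering to {offer}. Want a warm intro?"
                )
    return suggestions

def on_demand(prompt):
    """Only ever acts when explicitly prompted by the user."""
    return f"[AI response to: {prompt!r}]"
```

The key difference is the trigger: `on_demand` sits idle until the user asks, while `always_on_scan` initiates the human-AI-human interaction itself, surfacing value the users never knew to ask for.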

If you see AI companies that employ these winning mental models, jump on board and work for them or invest in them, because chances are they’ll be the next big tech monopolies.