The Real AI Future: Not Doom, Not Hype - Just Smart Reality | Ben Evans | The Knowledge Project | Podcast Notes | YouTube Summary

10% daily usage, $300B in spending: Uncover the surprising data and expert insights about AI's real impact on technology, work, and society.

AI Isn't the End of the World (But It's Not Nothing Either)

A refreshingly honest take on what AI actually means for the rest of us

Everyone's either panicking about AI destroying civilization or dismissing it as overhyped nonsense. Technology analyst Benedict Evans thinks both camps are wrong.
His take? AI is the biggest thing since the iPhone - significant, transformative, industry-reshaping - but also only the biggest thing since the iPhone. Not electricity. Not the industrial revolution. Not the singularity. Just another major platform shift that will dominate tech for 10-15 years before becoming invisible infrastructure.
What makes Evans' perspective valuable is that he's spent decades spotting patterns in platform shifts - from the internet to mobile to social media. He knows what actually matters versus what just makes headlines. And right now, he sees something fascinating: despite all the hype, most people still don't really use AI.
Survey data shows only 10% of people use tools like ChatGPT daily. Many try it once and never return. Even regular users struggle to think of things to do with it more than once a week. Meanwhile, tech insiders live in a bubble where 90% of people they know are power users.
This disconnect reveals something important about where we actually are in the AI revolution - and where we're headed.
In this deep dive, we'll explore:
  • Why every platform shift feels confusing in real-time (and AI is no different)
  • What past disruptions teach us about winners, losers, and unexpected outcomes
  • Why the "data moat" advantage isn't what you think
  • The real threats to Google, Apple, Microsoft, and Meta
  • Why most people still don't "get" AI - and what that means
  • The questions we should be asking (and the ones we're getting wrong)
No hype. No panic. Just clear thinking about what's actually happening.
Let's dive in.

The Centrist Position: Why AI Isn't Everything (But Still Matters)

Benedict Evans takes a refreshingly balanced view that puts him at odds with both extremes:
The perspective most people miss:
  • AI is the biggest thing since the iPhone - significant, but not civilization-ending
  • It's not electricity, not the industrial revolution, not a path to superintelligence
  • It's another platform shift that will dominate for 10-15 years, then something else will come along
Why this matters:
  • Every generation thinks "this time is different" - and they're both right and wrong
  • The dotcom bubble was different from the 1980s financial bubble, but still a bubble
  • AI will create new jobs, destroy others, and raise weird new questions - just like previous shifts did

History Repeats: What Past Platform Shifts Teach Us

The Internet (1995): Nobody Knew What Would Win

Remember when we weren't sure the internet would even be "the internet"?
What seemed unclear then:
  • Would it be centralized "information superhighways" controlled by cable companies?
  • Would email be bigger than the web? (Mary Meeker thought so in 1995)
  • Would browsers matter? (Microsoft dominated browsers but captured zero value)
  • Search advertising and social media came 5-10 years later - nobody predicted that
The lesson:
  • You can be certain something big is happening without knowing how it will unfold
  • The obvious winners often aren't
  • Value capture happens in unexpected places

Mobile Internet: The Small Mac Revolution

In 2000, everyone knew mobile internet was coming. What they got wrong:
What wasn't obvious:
  • Phones would become small computers, not just "phones with better UI"
  • It would take 10 years to really take off
  • Telecom companies would capture zero value
  • Microsoft and Nokia would become irrelevant
  • Mobile would replace the PC as the center of tech, not complement it
The "what's the use case?" problem:
  • People kept asking: "What would you do on mobile that you can't do on PC?"
  • The answer turned out to be: everything
  • We now don't even say "mobile internet" anymore - it's just the internet

The Elevator Story: How Innovation Becomes Invisible

Until the 1950s, elevators had human operators with levers.
The transformation:
  • Otis created the "autotronic" elevator with "electronic politeness" (the infrared door sensor)
  • It seemed weird and radical at the time
  • Now we don't even think about it - it's just a lift
Why this matters for AI:
  • Today's strange new technology becomes tomorrow's invisible infrastructure
  • We forget how weird previous innovations seemed
  • AI will eventually just be "software"

The Incumbent's Dilemma: Why Leaders Often Lose

The Pattern That Keeps Repeating

Every platform shift, incumbents try the same strategy:
What they always do:
  • Try to make the new thing just a "feature" of their existing product
  • Use it to automate what they're already doing
  • Protect their high-margin legacy business
What actually happens:
  • New companies unbundle the incumbent's offerings
  • The new technology enables things that weren't possible before
  • Sometimes incumbents adapt (Google), sometimes they don't (Kodak)

The Kodak Case Study: More Complex Than You Think

The popular narrative: "Kodak invented digital cameras but ignored them to protect film."
The reality was messier:
  • 1975: Their "digital camera" was the size of a refrigerator - not viable
  • Late 1990s: Kodak went all-in on digital, became the #1 digital camera seller in the US
  • They thought people would print more photos - they invested heavily in photo printers
What actually killed Kodak:
  1. Smartphones - cameras became free with your phone
  2. Social media - people stopped printing photos entirely
  3. Commodity hell - digital cameras had low margins and no differentiation, unlike high-margin film
The parallel to today:
  • Is Google's high-margin search business like Kodak's film?
  • Is AI a low-margin commodity business?
  • We don't actually know yet - the margins keep shifting

The Google Question: Reset or Reinforcement?

Why This Moment Is Dangerous for Google

The real threat isn't that AI search is better:
It's about the reset:
  • People reconsider their defaults during discontinuities
  • It's no longer automatic that you "just Google it"
  • Everyone's trying new things, forming new habits
  • Google's advantages matter less when everyone's starting fresh
Google's actual advantages:
  • Still the best traditional search engine by a wide margin (per the antitrust trial)
  • Massive resources and talent
  • Strong models (Gemini is competitive)
  • But do they have the right org structure and incentives to win the new game?

The Data Advantage Myth

Most people assume: "Google has YouTube, all that data - they'll dominate AI."
Why that's wrong:
  • LLMs need such enormous amounts of generalized text that no one company has enough
  • The data everyone needs is roughly equally available to everyone
  • Meta downloaded torrents of pirated books because they didn't have enough text
  • Google's snippets of text aren't the right kind for training
  • Anyone with a billion dollars can scrape the web
The reality:
  • Data is a level playing field for foundation models
  • Differentiation will come from somewhere else

The Usage Gap: Why Most People Still Don't Get It

The Surprising Numbers

Survey data reveals a massive disconnect:
Actual usage patterns:
  • ~10% use AI tools daily
  • ~15-20% use them weekly
  • ~20-30% tried it once or twice
  • ~20-30% looked and didn't understand the point
  • Many people who try it can't think of a reason to return
Why this matters:
  • Tech insiders live in a bubble where 90% of people they know use AI constantly
  • The real world looks completely different
  • This is early adoption, but with a twist - it's free and easy to access

The "Faster Than iPhone" Trap

People love to show charts: "AI adoption is faster than smartphones!"
Why that comparison is misleading:
  • Smartphones cost $1,000 (or $5,000 for early PCs adjusted for inflation)
  • AI is free - you just visit a website
  • There are way more people online now than in 2007
  • Of course absolute numbers are bigger and faster (the re-basing sketched below makes this concrete)
The real question:
  • Why do people try it and not come back?
  • Why do even regular users only think of something to do once a week?
  • What does it mean that it's easy to access but hard to integrate into daily life?
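A quick way to see the distortion is to re-base adoption against how many people could plausibly try each product. A minimal sketch in Python - all figures are rough, round assumptions for illustration, not numbers from the podcast:

```python
# Re-basing "fastest adoption ever" charts: raw counts mostly reflect how
# many people are reachable and what trying the product costs.
# All numbers are rough assumptions for illustration.
reachable_2007 = 1.2e9    # approx. people online when the iPhone launched
reachable_2023 = 5.0e9    # approx. people online when chatbots launched
iphone_year1   = 6.1e6    # approx. first-year iPhone sales (a $499+ device)
chatbot_year1  = 100e6    # assumed first-year users of a free website

print(f"iPhone:  {iphone_year1 / reachable_2007:.2%} of the reachable base")
print(f"Chatbot: {chatbot_year1 / reachable_2023:.2%} of the reachable base")
# The raw chart shows a ~16x gap; re-based it shrinks to ~4x, and weighting
# by cost ($0 and a URL vs. $499 plus a contract) shrinks it further.
```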

The Product Problem: Why All AI Chatbots Feel the Same

The Commodity Challenge

Here's a thought experiment Evans proposes:
The blind test:
  • Give the same prompt to ChatGPT, Claude, Gemini, Grok, Mistral, DeepSeek
  • Do a double-blind test
  • Most people couldn't tell which is which (a toy harness for this is sketched below)
Yet ChatGPT dominates:
  • Way more usage than competitors
  • Top of app store rankings
  • Gemini bounces between 50-100 in rankings
  • Other AI chatbots don't crack the top 100
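The blind test is easy to make concrete. Below is a toy harness - the `ask_*` functions are canned stand-ins for real vendor APIs, which all differ - but the shape of the experiment (same prompt, shuffled unlabeled answers, guess before checking the key) is the whole test:

```python
# Toy version of the double-blind chatbot test: same prompt to every model,
# strip the labels, shuffle, guess, then reveal the key.
import random

PROMPT = "Explain what a platform shift is in two sentences."

# Placeholder functions standing in for real (and differing) vendor APIs.
def ask_chatgpt(prompt): return "A platform shift is ... (answer A)"
def ask_claude(prompt):  return "A platform shift is ... (answer B)"
def ask_gemini(prompt):  return "A platform shift is ... (answer C)"

MODELS = {"ChatGPT": ask_chatgpt, "Claude": ask_claude, "Gemini": ask_gemini}

def blind_test(prompt):
    answers = [(name, fn(prompt)) for name, fn in MODELS.items()]
    random.shuffle(answers)                 # hide which model said what
    key = {}
    for i, (name, text) in enumerate(answers, 1):
        key[f"Answer {i}"] = name           # revealed only after guessing
        print(f"Answer {i}: {text}\n")
    return key

if __name__ == "__main__":
    print("Key:", blind_test(PROMPT))
```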

The Browser Parallel

AI chatbots today look a lot like web browsers:
What they have in common:
  • Different rendering engines underneath (like different LLMs)
  • But the product is identical: input box, output box
  • Only innovation in browsers in 25 years: tabs and merging search into the address bar
  • Success came from distribution and branding, not product differentiation
The question:
  • Is AI the same - where brand and defaults matter more than the actual product?
  • Or is it more like social media, where Instagram succeeded over Flickr despite both doing "photo sharing"?

No Network Effects (Yet)

Unlike previous platforms, AI doesn't get better because more people use it:
What's missing:
  • Operating systems: more users → more apps → more users
  • Google: more searches → better results → more searches
  • Social media: your friends are there → you're there → more friends join
AI today:
  • More users doesn't make the model better
  • No self-reinforcing cycle
  • "Memory" features create switching costs, not network effects
  • You could probably just ask one AI what it knows about you and tell another (sketched below)
This could change:
  • If AI develops true learning from usage
  • But we're not there yet
  • And we don't know what that would look like
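That portability point can be sketched in a few lines. Everything here is hypothetical - `ask` is a stand-in for two different assistants' chat interfaces - but it shows why memory that is just retrievable text behaves like a mild switching cost rather than a moat:

```python
# If an assistant's "memory" of you is retrievable as plain text, it is
# portable - so it is a switching cost, not a network effect.
def ask(assistant: str, prompt: str) -> str:
    # Placeholder for real API calls to two different chatbots.
    canned = {
        ("A", "What do you know about me?"):
            "You're a tech analyst, prefer short answers, work in UTC.",
    }
    return canned.get((assistant, prompt), f"[{assistant} stores the context]")

# Step 1: export the "memory" from assistant A as plain text.
profile = ask("A", "What do you know about me?")

# Step 2: seed assistant B with the same context in a single message.
print(ask("B", f"Remember this about me going forward: {profile}"))
```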

What People Actually Use AI For (And Don't)

The Use Case Matrix

Evans identifies a pattern in who uses AI and why:
Four quadrants:
  1. People with tasks AI is obviously good at (coding, brainstorming) → heavy users
  2. People with tasks AI could help with, but not obviously → occasional users
  3. People good at thinking about new tools → find creative uses
  4. People not good at thinking about new tools → try once, abandon
Why Evans himself doesn't use it much:
  • Doesn't write code (no use for code generation)
  • Doesn't do brainstorming exercises
  • Doesn't need summarization
  • Thinks by writing, not by editing AI output
  • Has to actively think: "What am I doing that AI could help with?" - that's mental overhead

The Spreadsheet Comparison

VisiCalc (1978) offers an interesting parallel:
What made spreadsheets revolutionary:
  • Cost $15,000 (adjusted) for Apple II + VisiCalc + screen
  • Showed accountants: change an interest rate and all the numbers update instantly (a toy version appears below)
  • Replaced literally a week of manual work
  • Accountants would finish in a week, then play golf for three weeks (didn't want clients to know it was that fast)
The difference with AI:
  • Spreadsheets had obvious, immediate value for specific jobs
  • AI requires you to imagine new workflows
  • The value isn't as clear or immediate
  • Most people don't naturally think "how could a tool change how I work?"
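The VisiCalc moment is worth making concrete. A minimal sketch using the standard loan amortization formula (inputs invented for the example): once the schedule is a formula instead of hand arithmetic, a what-if scenario costs one keystroke instead of a week:

```python
# Why spreadsheets felt magical: the model is a formula, so changing one
# input recomputes everything. Standard amortization formula; the inputs
# are made up for the example.
def monthly_payment(principal, annual_rate, years):
    r = annual_rate / 12                      # monthly interest rate
    n = years * 12                            # number of payments
    return principal * r / (1 - (1 + r) ** -n)

principal, years = 500_000, 30
for rate in (0.05, 0.06, 0.07):               # "what if the rate changes?"
    print(f"{rate:.0%}: ${monthly_payment(principal, rate, years):,.2f}")
# Each of these scenarios used to be a week of manual recalculation;
# with VisiCalc it was retyping one cell.
```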

The Salesforce Button Test

The real adoption will come from integration:
The winning pattern:
  • You're in Salesforce, looking at a client
  • There's a button: "Draft email reply"
  • You click it, it works, you send
  • Massive adoption
Versus the chatbot:
  • Blank screen
  • You have to think what to ask
  • You have to form new habits
  • You have to remember it exists
The lesson:
  • Wrapped in existing workflows = adoption (see the sketch below)
  • Standalone chatbot requiring behavior change = limited adoption
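A minimal sketch of that winning pattern, with all names (`CrmRecord`, `call_llm`, `draft_reply_button`) invented for illustration rather than taken from any real Salesforce API: the application assembles the context and the prompt, so the user only ever sees a button:

```python
# The "button in the workflow" pattern: the app does the prompt engineering,
# so the user never faces a blank input box.
from dataclasses import dataclass

@dataclass
class CrmRecord:
    client_name: str
    last_message: str
    deal_stage: str

def call_llm(prompt: str) -> str:
    # Placeholder: a real product would call a hosted model here.
    return f"[drafted reply based on: {prompt[:60]}...]"

def draft_reply_button(record: CrmRecord) -> str:
    prompt = (
        f"Draft a short, polite reply to {record.client_name}, "
        f"who wrote: '{record.last_message}'. "
        f"The deal is at stage: {record.deal_stage}."
    )
    return call_llm(prompt)

record = CrmRecord("Acme Corp", "Can you resend the Q3 proposal?", "negotiation")
print(draft_reply_button(record))   # one click, no blank screen
```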

The "Thinking by Writing" Problem

Why AI Changes How We Create

Evans has a fascinating take on why he doesn't use AI for writing:
His process:
  • Writes to figure out what he thinks
  • Writing is thinking, not just output
  • Editing AI output is fundamentally different from writing
  • It's like the difference between composing music and adjusting someone else's composition
His quality test:
  • "Is this what ChatGPT would have said?"
  • If yes, don't publish it
  • Not because people can get it from ChatGPT
  • But because anyone would have said that - it's not adding value
The insight:
  • AI raises the baseline of what counts as insight
  • It makes obvious analysis worthless
  • You have to go deeper to add value

The Slope Problem

Here's where it gets concerning:
The moving target:
  • ChatGPT's "insight level" is increasing rapidly
  • It's probably already at "intern level" in many domains
  • Maybe it hits "master's level" next year
  • In math, it's already superhuman in some areas
  • Eventually, it might surpass most humans in most domains
The question:
  • When the AI's slope of improvement is steeper than yours, what happens?
  • Maybe it passes the average person in 5 years
  • Maybe it passes Evans in 4 years
  • Maybe it passed some people already
The philosophical puzzle:
  • Does originality require knowing you're being original?
  • AlphaGo made "original" moves - but had a scoring system (win/lose)
  • Music generation can make "new" music - but how do you know it's good?
  • LLMs are trained to minimize variance - originality is penalized (a toy illustration follows this list)
  • How would an AI know something is "different but good" versus just different?
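The "variance is penalized" point has a simple mechanical reading. In this toy example (probabilities invented for illustration), the training loss on a continuation is the negative log of the probability the model assigns it - so the rare, "original" continuation is exactly the expensive one:

```python
# Cross-entropy training rewards the expected continuation: rarer words get
# a higher loss, and there is no external score (unlike AlphaGo's win/lose)
# to say when a surprising answer is surprisingly good.
import math

# Model's (invented) probabilities for the next word after "The sky is ..."
next_word_probs = {"blue": 0.80, "grey": 0.15, "screaming": 0.0001}

for word, p in next_word_probs.items():
    loss = -math.log(p)    # cross-entropy loss if this word is the target
    print(f"{word!r}: loss = {loss:.2f}")
# 'blue' is cheap (0.22), 'screaming' is expensive (9.21): the objective
# pushes the model toward the typical, which is Evans' originality problem.
```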

The Regulation Trap: Why "AI Regulation" Misses the Point

The Wrong Level of Abstraction

Talking about "regulating AI" is like saying "regulate databases" or "regulate spreadsheets":
Why it doesn't make sense:
  • You don't regulate the technology, you regulate applications
  • We regulate cars, but not "internal combustion engines"
  • We regulate medical devices, not "software"
  • The trade-offs depend entirely on what you're using it for
The economics lesson:
  • All regulation has trade-offs
  • To govern is to choose
  • You can't pull one lever without something else moving
  • If you make it hard to build AI, fewer people will build AI - that's a choice

The California Approach: Treating AI Like Nuclear Weapons

Some jurisdictions tried to regulate AI as inherently dangerous:
What that means in practice:
  • Tight controls on who can build models
  • Restrictions on what you can do with them
  • Assumption: this could create bioweapons or kill everyone
The trade-off:
  • Yes, you reduce hypothetical risks
  • But you also slow innovation dramatically
  • You make it expensive to start companies
  • You push development elsewhere
Evans' view:
  • The "AI will kill us all" narrative is "childish logical fallacies"
  • It's like social media moral panic 2.0
  • But even if you disagree, you must acknowledge the trade-off

The Housing Analogy

This is where it gets clear:
Simple economics:
  • If you make it really hard and expensive to build houses...
  • Houses will be more expensive
  • You can choose that, but you can't then complain about expensive houses
Applied to AI:
  • If you make it really hard to build models and start companies...
  • You'll have less innovation and fewer companies
  • You can choose that, but you can't then complain about falling behind
The US housing problem:
  • Neither free market (price signals work) nor government provision (like Singapore)
  • Broke the free market without replacing it
  • Same risk with AI regulation

Advice for Countries: How to Win at AI

What Evans Would Tell a President

If asked "How do we dominate AI?", his answer might surprise you:
The Silicon Valley replication question:
  • People used to ask: "How do we create another Silicon Valley?"
  • The answer was usually: "You can't"
  • Sometimes: create funding structures, make it easy to start companies
  • Mostly: get out of the way
The national champions problem:
  • Trying to pick winners rarely works
  • This is an economist's question, not a tech question
  • Where has industrial policy worked? Where hasn't it?
  • From a tech perspective: focus on enabling ecosystems, not specific companies
The practical answer:
  1. Don't make it harder - avoid California's approach of treating it like nuclear weapons
  2. Think of it as "more startups" - not picking a specific company to back
  3. Remove barriers - funding availability, regulatory overhead, talent mobility
  4. Accept trade-offs - tight control means less innovation, period

The EU vs. US Approach

Two different philosophies:
EU approach:
  • Treat AI as potentially dangerous
  • Regulate heavily upfront
  • Protect citizens from hypothetical harms
  • Result: harder to build, slower innovation
US approach (Biden era):
  • "This is social media 2.0"
  • Social media was terrible and destructive
  • Need to prevent those problems
  • Result: still restrictive, but less than EU
The cost:
  • Both approaches slow development
  • You're choosing safety/control over speed/innovation
  • That's fine - but own the choice

What Students Should Learn (And Why)

The "Learn to Code" Debate

Should everyone learn programming?
Evans' nuanced take:
  • No, you shouldn't assume you will or won't be a software engineer
  • It's like asking "should you learn an instrument?" or "take theater classes?"
  • Find out if you want to learn to code
  • Don't presume you need to
  • But don't presume you don't need to either
The bigger point:
  • Presume everything will change
  • Presume you'll have many careers
  • Presume you need to stay curious
  • Focus on learning how to think

What "Learning to Think" Actually Means

Everyone says this, but what does it mean?
Evans' experience studying history at Cambridge:
  • Didn't learn history facts
  • Learned how to ask the next question
  • Learned how to break things apart
  • Learned how to read 50-100 books in a week and find what matters
  • Learned how to synthesize information
  • Learned to ask: "What does this actually mean vs. what it looks like?"
  • Learned to evaluate credibility
The same applied to friends who studied:
  • English literature
  • Philosophy
  • Engineering
  • They weren't learning to be historians or philosophers - they were learning to think

The US vs. UK Education Philosophy

A cultural difference Evans noticed:
US approach:
  • If you want a good job: study math, business, engineering
  • Practical degrees lead to employment
  • Liberal arts seen as less useful
  • Focus on immediate job market value
UK approach (traditional):
  • Study what challenges you intellectually
  • Philosophy, history, literature are equally valid
  • You're learning how to think, not what to think
  • The subject matter is almost secondary
Evans' view:
  • He's hesitant to think you can only succeed with certain degrees
  • Goldman Sachs, McKinsey, law firms don't require specific majors
  • 20 years to figure out what he was good at
  • Students can't know yet what they'll be good at
The practical advice:
  • Try different things
  • Find what you're good at
  • Create options for yourself
  • Learn what challenges and pushes you
  • Develop the ability to learn in different ways

The "I Don't Know" Honesty

Evans' refreshing admission:
What he actually says:
  • "I don't fucking know"
  • Sounds like a university commencement speech
  • You don't know what you'll be good at
  • Try to create options
  • Find the skills that match how your brain works
Why this matters:
  • Most advice is overly prescriptive
  • The honest answer is: it depends on you
  • Different people need different paths
  • The goal is discovering your strengths, not following a formula

Lessons from Venture Capital: What Evans Learned at a16z

The Maxims and Sayings

Working in venture capital teaches you patterns:
Not about mechanics, but about understanding:
  • How startups actually work
  • How the "machine" of Silicon Valley creates companies
  • What makes something a good or bad idea
  • If it could work, what would it be?
  • Could these people make it work?
The wrong question:
  • "That's a dumb idea"
The right questions:
  • Could it work?
  • If it did work, what would it become?
  • Are these the people who could make it happen?
One of Evans' best metaphors:
The great museum experience:
  • Go to MoMA, the Met, the Louvre
  • Masterpieces everywhere
  • Hard to distinguish the truly great from the very good
The smaller gallery experience:
  • Wallace Collection in London
  • Old aristocratic palace in Rome
  • 10-15 rooms of paintings
  • Then you see the Raphael - it glows across the room
  • "Oh, THAT'S why he's Raphael"
Applied to startups:
  • See hundreds of pitches
  • "Oh, that's why he's Max Levchin"
  • "Oh no, that's why this is bullshit"
  • 10 minutes in, you know - but you have 45 more minutes to be polite
  • Pattern recognition comes from contrast and texture
What you learn:
  • What good looks like
  • What worked vs. what didn't
  • What people tend to say
  • How things tend to work
  • Pure pattern recognition

The High School Dynamics

Silicon Valley has a unique culture:
The college town effect:
  • Everyone's working on the same thing
  • Like a town with one subject
  • Everyone around you is doing a PhD
  • Of course you're going to do great work - everyone is
  • World experts are down the street
The advantages:
  • Surrounded by people who've done it before
  • Want a CTO who's done it 5 times? They're here
  • Want a head of growth who's scaled companies? They're here
  • Powerful peer effects and expectations
  • Resources and expertise everywhere
The disadvantages:
  • You'll never meet anyone NOT working on exactly this
  • No external context or perspective
  • No one interested in anything else
  • Want to see theater? Go to LA
  • Want to see art? Go to LA or Chicago
  • Intellectual monoculture
The insight:
  • Powerful for focus and execution
  • Dangerous for perspective and judgment
  • Easy to lose touch with how normal people think

The Current State of Play: Who's Winning?

The Great Capex Surge

The numbers are staggering:
The spending surge:
  • Google, Microsoft, AWS, Meta spent ~$220 billion in capex last year
  • This year: probably over $300 billion
  • Nearly tripled in just a couple of years
The wild valuations:
  • Meta bought 49% of Scale AI for about $15 billion
  • OpenAI spin-outs (Safe Superintelligence, xAI) valued at tens of billions
  • Pre-product, pre-revenue labs
  • Just because someone from OpenAI is involved
Mark Zuckerberg in "beast mode":
  • Sam Altman complained: Mark offering people $100 million to join
  • Going all-in on AI infrastructure
  • Massive hiring spree

Model Quality: The Current Rankings

Who has the best models right now?
Google:
  • Clearly firing on all cylinders
  • Making great models
  • Gemini is competitive
Meta:
  • Llama 4 was apparently an embarrassment
  • Scrambling to catch up
  • Open source strategy: make models commodity infrastructure
OpenAI:
  • Still sets the agenda, but less than 2 years ago
  • Sam Altman is described as a "polarizing figure" (the joke: opinions aren't actually polarized - they're unanimous, and negative)
  • Everyone who's worked with him has quit
  • Weird, contentious relationship with Microsoft
Microsoft:
  • Own models aren't very good
  • Hired Mustafa Suleyman, still struggling
  • Dependent on OpenAI relationship
  • But: will sell tons of Azure to run everyone else's stuff
The China question:
  • "Will China catch up?" - answer was always obviously yes
  • DeepSeek demonstrated this
  • Models are becoming commodities

The Apple Question: Different Game Entirely

Apple's in a unique position:
Their traditional approach:
  • Don't need to be first
  • Want to do it right
  • Don't need every consumer internet thing
  • No YouTube competitor, no ride-sharing, no grocery delivery
  • Also: no chatbot (yet)
The real question for Apple:
  • Does integrating LLMs change the smartphone experience fundamentally?
  • Could it shift competitive balance with Pixel?
  • (Pixel only bought by Google employees and tech press - Google doesn't want to compete with Samsung)
The Microsoft parallel - the scary scenario:
  • 2000s: Everyone needed internet
  • To get internet, you needed a computer
  • Everyone bought Windows PCs
  • But they used them for web stuff, not Microsoft stuff
  • Microsoft lost despite selling the hardware
Could this happen to Apple?
  • You'll still buy the new iPhone (best battery, chip, screen, camera)
  • But everything you do will be someone else's cloud model
  • Not an app from the App Store - a model running elsewhere
  • Apple becomes a beautiful hardware shell for others' AI
The counterargument:
  • You already use iPhone for ChatGPT, DoorDash, Uber, Instagram, TikTok, games
  • It's always been a platform for others' services
  • As long as you're buying the iPhone, Apple wins
  • Plus: if we move to AR glasses, Apple will make those too

The Google Revenue Question

The existential threat to search:
The shift:
  • Search activity moves to LLMs
  • Where does the revenue go?
  • How do you map search behavior to LLM behavior?
  • Do people just shift habits - using ChatGPT as "the new Google"?
The publisher problem:
  • Google sends traffic to websites
  • LLMs just answer questions
  • Publishers lose traffic
  • How does the ecosystem survive?
Google's potential response:
  • They're not out of the game
  • Could absorb this shift
  • Instagram changed what advertising looks like - Google could too
  • But it requires execution and adaptation

The Meta and Amazon Strategy: Make Models Commodity

Two companies want the same thing:
Meta's approach:
  • Make LLMs open source
  • Drive models to commodity status
  • Sold at cost
  • They differentiate on top with Facebook/Instagram social stuff
  • The model is just infrastructure
Amazon's approach:
  • Also wants models to be commodity infrastructure sold at cost
  • That's what AWS is - commodity infrastructure done better than anyone
  • Make money from doing it at scale
  • AWS + ads = basically all of Amazon's profit ($50-60 billion in ad revenue alone)
Why this makes sense:
  • Neither company needs to own the model
  • Both have other ways to capture value
  • Commodity infrastructure benefits their core business

Microsoft: Grabbing God's Coattails

A Bismarck quote Evans loves:
"The great man hears God's footsteps through history and grabs onto his coattails as he walks past"
Microsoft's attempts:
  • First: VR/AR with HoloLens - "There it is!" (We don't talk about that anymore)
  • Now: AI - grabbing on again
Their position:
  • Own models not ranking well
  • Hired Mustafa Suleyman, still struggling
  • Weird, contentious relationship with OpenAI
  • Not really their models
But:
  • Will sell enormous amounts of Azure
  • Everyone needs cloud infrastructure to run this stuff
  • The tension: Do people use ChatGPT directly, or do great products get built on Azure?
The accounting software example:
  • Someone builds amazing accounting software
  • Connects to your bank, does the cool stuff
  • Runs on Azure, uses some LLM (who cares which one)
  • It's just better
  • Microsoft wins without owning the model
Evans' use case:
  • "Do my fucking invoicing for me"
  • Better yet: "Figure out why that client's ERP doesn't like my bank account"
  • "Stop me bouncing emails with someone in India for 3 months"
  • LLMs can't do that yet
  • When they can, that's the killer app

The Incumbent Analysis: Who's Positioned Best?

Google and Microsoft: The Disrupted Disruptors

Both face similar dynamics:
The pattern:
  • Incumbent business potentially disrupted by AI
  • But also: cloud business that sells all the new AI stuff
Google's position:
  • Search revenue at risk
  • But Google Cloud sells AI infrastructure
  • Own models are competitive
  • Question: Can they navigate the transition?
Microsoft's position:
  • Office/Windows less directly threatened
  • Azure is perfectly positioned
  • Models are weaker
  • Question: Can they capture value without owning the model?

Amazon: The Safe Play

Amazon's in the best position:
Why they're insulated:
  • E-commerce not obviously disrupted by AI
  • Might even be enhanced (better recommendations, easier shopping)
  • AWS perfectly positioned to sell infrastructure
  • Ad business ($50-60 billion) continues growing
The only question:
  • How does AI change how people shop on Amazon?
  • LLM recommendations instead of search?
  • But Amazon controls that experience

Meta: The Wild Card

Meta's in an interesting spot:
Challenges:
  • No cloud business to sell AI infrastructure
  • Llama 4 was disappointing
  • Playing catch-up on model quality
Advantages:
  • New ways to monetize through AI
  • Instagram well-positioned for AI-enhanced advertising
  • Open source strategy could pay off
  • Massive resources to invest
The requirement:
  • Need better models
  • Can't just rely on making models commodity if yours aren't competitive

Apple: The Hardware King

Apple's different from everyone else:
The bull case:
  • Still going to sell the nicest glowing rectangle
  • Best chip team in the world
  • Best camera, screen, battery
  • Even if all the AI is someone else's, you need great hardware to run it
  • If we move to glasses/AR, Apple will make those too
The bear case:
  • Becomes like Microsoft in the 2000s
  • Everyone buys the hardware
  • But uses it entirely for others' services
  • Captures less value from the ecosystem
The reality:
  • This has always been somewhat true (Instagram, Uber, etc. aren't Apple services)
  • As long as people keep buying iPhones, Apple wins
  • The 30% App Store cut helps
  • Hardware margins are healthy
The real question:
  • Does AI change the fundamental value proposition of the smartphone?
  • Or is it just another set of apps/services on the platform?
  • Too early to tell

Tesla: The Perpetual Question Mark

The Tesla debate fascinates Evans:
Bulls think:
  • It's a software/AI company
  • Autonomous driving will create winner-take-all effects
  • Camera-only approach will work eventually
  • All that driving data creates a moat
Bears think:
  • It's a car company
  • Competing with entire Chinese industrial policy
  • Flood of equally-good EVs coming
  • Protected by tariffs in US, vulnerable everywhere else
The 10-year-old question still unanswered:
  • Will Tesla get cameras working before Waymo can remove the LIDAR?
  • Waymo works now (with $50k of LIDAR per car)
  • Tesla doesn't work yet (with cameras only)
  • We've been asking this for a decade
The Android parallel:
  • People said Tesla was "the iPhone of cars"
  • Actually: cars are becoming Android with no iPhone
  • Tesla is just another Android phone maker
  • Competing with dozens of Chinese manufacturers
Recent "autonomous" launch:
  • Half a dozen existing Model Ys
  • With test drivers
  • Doing geofenced drives
  • Everyone else was doing this 10 years ago
  • Is this the breakthrough, or more of the same?

The Philosophical Questions: What We Still Don't Know

Can AI Be Truly Original?

This is where it gets fascinating:
The AlphaGo example:
  • Made "original" moves no human had tried
  • But: had an external scoring system
  • Every move has a score (closer to winning)
  • Feedback loop tells it the move was good
The monkeys and typewriters problem:
  • Infinite monkeys would eventually type Shakespeare
  • But there's no feedback loop
  • No way to know which output is the masterpiece
  • The Borges infinite library contains masterpieces - but which ones?
For LLMs:
  • Variance is bad
  • Originality gets a lower score
  • Trained to match patterns, not break them
  • How would it know something is "different but good" vs. just different?
The music example:
  • Easy to generate more stuff that sounds like Pink Floyd
  • Easy to make more Grateful Dead
  • ("What do Grateful Dead fans say when they run out of drugs? 'This music's terrible'")
  • But how would AI know people are fed up with 70s prog rock and want punk?
  • How would it know Christian Dior's "New Look" would express post-war desire for luxury?
The deeper question:
  • Is knowing something is "original and good" just pattern matching at a longer frequency?
  • If you zoom out enough, is it still just following patterns?
  • Does it matter if it's "really" reasoning or just right 99.9999% of the time?
  • Is this even the right question to ask?

The Boutique Renaissance

An interesting consequence of AI commodification:
The Tokyo bookshop:
  • Sells only one book
  • Changes monthly
  • You don't choose - they choose for you
  • But you have to know it exists
The spectrum:
  • Amazon: has everything, but how do you choose?
  • Can't just say "what's a good lamp?" when there are 10,000 lamps
  • The boutique: curated, individual, unique
  • But requires discovery
The paradox:
  • LLMs might suggest the unique individual thing
  • But would LLMs also create the unique individual thing?
  • The more LLMs do what everyone would do...
  • The more valuable the truly unique becomes
Applied to content:
  • Scott Galloway does different stuff than Benedict Evans
  • Mary Meeker does different stuff
  • Some is about who you are, your story, authenticity
  • Some is about saying interesting stuff regardless of who you are
  • Some is recommendation algorithms
  • All have value in different contexts

The Department Store Parallel

From Zola's "Au Bonheur des Dames" (The Ladies' Paradise):
The 19th century Jeff Bezos:
  • Creates department stores through force of will
  • Invents fixed prices (so you can have discounts and loss leaders)
  • Invents mail order
  • Invents advertising
  • Puts slow-moving expensive stuff upstairs
  • Puts food and makeup downstairs (impulse buys)
The shopkeepers across the street:
  • "Have you seen what that maniac's doing?"
  • "Selling hats and gloves in the same shop!"
  • "He has no morals!"
  • "He'll be selling fish next!"
The lesson:
  • Nothing new under the sun
  • People have freaked out about mass-produced products before
  • People have freaked out about "too much content" before
  • Erasmus was supposedly the last person to read every book
  • "Too much AI slop" - but how many books were published in 1980? Did everyone read them all?
The pattern:
  • Every generation thinks their disruption is unique
  • It is unique - but it's also familiar
  • Just different scales and contexts

The Practical Reality: Why Most People Don't Use AI

The Weekly User Problem

This is the puzzle Evans keeps returning to:
The pattern:
  • Someone tries ChatGPT
  • They "get it" - they see the value
  • But they only come back once a week
  • Why can't they think of more to do with it?
The mental load issue:
  • You have to actively think: "What could AI help me with?"
  • Most people don't naturally think that way
  • It's cognitive overhead
  • Easier to just do things the way you've always done them
The habit formation problem:
  • New tools require new habits
  • Habits are hard to form
  • Especially when the tool is a blank screen waiting for you to think of something
  • Compare to: phone buzzes, you check it (easy habit)

The Use Case Matrix

Evans breaks down AI adoption into four quadrants:
Quadrant 1: Obvious tasks AI is good at
  • Coding assistance
  • Brainstorming
  • Summarization
  • Image generation
  • These people are heavy users
Quadrant 2: Tasks AI could help with, but not obviously
  • Requires imagination
  • Requires understanding what AI can do
  • Requires trial and error
  • These people are occasional users
Quadrant 3: Good at thinking about new tools
  • Naturally curious about workflows
  • Constantly optimizing
  • Find creative applications
  • These people find uses even without obvious tasks
Quadrant 4: Not good at thinking about new tools
  • Just want to get work done
  • Don't naturally think "how could I do this differently?"
  • Try once, don't see the point, abandon
  • This is most people
The reality:
  • You need to be in multiple quadrants to be a heavy user
  • Most people are only in one or two
  • This limits adoption

The "Roughly Right" Problem

When does accuracy matter?
Evans' biography example (2023):
  • Conference organizers used ChatGPT to write his bio
  • Didn't tell him
  • Everything was the "right kind" of thing
  • Right kind of degree, university, experience, jobs
  • Just not actually right
  • But for him: completely useful - spent 30 seconds fixing it instead of an hour writing from scratch
The key insight:
  • For them (needing accurate bio): useless
  • For him (needing a starting point): very useful
  • "Right or wrong" depends on why you wanted it
The quantitative analysis problem:
  • Evans thinks AI has "zero value" for quantitative work today
  • Because the numbers need to be actually right
  • Not "roughly right"
  • You don't want π to be 3.1
  • (Though it depends how big the circle is...)
The error rate reality:
  • Not wrong once in a billion years
  • Wrong a dozen times per page
  • Can't just output it and give to someone
  • Requires checking everything
  • Often easier to just do it yourself (the arithmetic below shows why)
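A bit of arithmetic makes the checking burden concrete. Both numbers below are assumptions for illustration (the podcast gives no measured rates), but they show how even a high per-claim accuracy leaves almost every page needing review:

```python
# Small per-claim error rates compound across a page.
per_claim_error = 0.02        # assumed: each factual claim is 98% reliable
claims_per_page = 50          # assumed: factual claims on a dense page

p_page_clean    = (1 - per_claim_error) ** claims_per_page
expected_errors = per_claim_error * claims_per_page

print(f"P(page fully correct): {p_page_clean:.1%}")     # ~36.4%
print(f"Expected errors/page:  {expected_errors:.1f}")  # 1.0
# A "98% accurate" model still forces you to check every claim - which is
# why "roughly right" output can be slower than doing the work yourself.
```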

Why Evans Doesn't Use AI Much

This is surprisingly revealing:
His tasks:
  • Doesn't write code (no use for code generation)
  • Doesn't do brainstorming exercises
  • Doesn't need summarization
  • Doesn't create images
  • Thinks by writing, not by editing
His quality bar:
  • "Is this what ChatGPT would have said?"
  • If yes, don't publish
  • Not because people can get it from ChatGPT
  • But because it means he's not adding value
  • Anyone would have said that
His friend's use case:
  • Works at consultancy
  • Needs pencil sketches of concepts
  • Uses Midjourney to generate them
  • Perfect use case
  • Doesn't matter if the person in the background has three legs (the models mostly don't do that anymore anyway)
  • Could Photoshop it out if needed
The mismatch:
  • Things AI is good at: not what Evans does
  • Things Evans does: AI not yet very good at
  • Things where "roughly right" would help: he doesn't do those things
The broader point:
  • This is true for millions of people
  • The use cases don't map to their actual work
  • Or the quality isn't there yet
  • Or both

The Future Scenarios: What Might Happen

The Search and Discovery Revolution

This is the big unknown:
The current system:
  • Google sends you to websites
  • Websites have ads or sell things
  • Publishers get traffic
  • Ecosystem works (sort of)
The LLM future:
  • You ask: "What mattress should I buy?"
  • LLM just tells you
  • No website visit
  • No publisher revenue
  • No ads (or different ads)
The questions this raises:
  • What is SEO for LLMs?
  • How do products get discovered?
  • How do publishers survive?
  • Where does advertising happen?
  • Who captures the value?
The infinite product problem:
  • Infinite retail options
  • Infinite media options
  • Infinite everything
  • How do you choose?
  • LLMs could solve this
  • But they could also create new gatekeepers

The Differentiation Question

Will AI products be distinguishable?
The current state:
  • All LLM chatbots feel the same
  • Input box, output box
  • Different colors and icons
  • But fundamentally identical experience
The browser precedent:
  • Browsers have been commodities for 25 years
  • Same rendering, same basic UI
  • Only innovation: tabs and merged search/address bar
  • Winner was about distribution and defaults, not product
The social media counter-example:
  • Photo sharing is a commodity
  • But Instagram beat Flickr decisively
  • Product and experience mattered
  • Network effects mattered
The open question:
  • Which model does AI follow?
  • Browser-like (distribution and brand matter most)?
  • Or social-like (product and network effects matter)?
  • We don't know yet

The Memory and Switching Costs

One potential differentiator:
The feature:
  • AI remembers your previous conversations
  • Knows your preferences
  • Builds context over time
Is this a network effect?
  • Probably not
  • More like a switching cost
  • But you could probably ask one AI what it knows about you
  • Then tell another AI
  • So maybe not even a strong switching cost
The real network effects would require:
  • The model getting better because more people use it
  • Self-reinforcing cycle
  • More users → better product → more users
  • We don't see this yet
  • Might never see it
  • Or might see it in ways we can't predict yet

The Integration vs. Standalone Question

Two possible futures:
Future 1: Standalone chatbots win
  • People go to ChatGPT/Claude/Gemini
  • Like they go to Google
  • Becomes the new default
  • Brand and distribution matter most
Future 2: Integration wins
  • AI wrapped into existing products
  • Salesforce button: "Draft reply"
  • Photoshop: "Remove this object"
  • Excel: "Analyze this data"
  • People never see the underlying model
Evans' bet:
  • Integration will drive most adoption
  • Because it removes the mental overhead
  • You don't have to think "what should I ask AI?"
  • The button is just there when you need it
The implication:
  • Model providers might not capture much value
  • Application layer captures value
  • Like cloud infrastructure: valuable but commoditized
  • Or like the browser: necessary but not where the money is

The 10-Year View: What Comes Next

The Platform Shift Timeline

Based on previous shifts:
Years 1-3 (where we are now):
  • Confusion about what matters
  • Dozens of competitors
  • Unclear business models
  • Lots of experimentation
  • Early leader might not be final winner (MySpace effect)
Years 4-7:
  • Consolidation begins
  • Clearer use cases emerge
  • Business models solidify
  • Some companies break out
  • Others fade away
Years 8-10:
  • Dominant players established
  • Network effects solidified (if they exist)
  • Becomes "just software"
  • Next platform shift starts emerging
Years 10-15:
  • Mature market
  • Incremental improvements
  • Everyone's moved on to talking about the next thing
  • AI is just part of the infrastructure
The pattern:
  • This has happened with PCs, internet, mobile
  • No reason to think AI is different
  • Except in the specific ways it's different
  • Which we won't fully understand until later

The Employment Impact

Evans' centrist view:
Not civilization-ending:
  • Won't destroy all jobs
  • Won't create mass unemployment
  • Impact similar to previous platform shifts
Not nothing either:
  • Will change what jobs exist
  • Will change how work gets done
  • Will create new categories
  • Will eliminate some roles
The spreadsheet parallel:
  • Accountants thought it would destroy their jobs
  • Instead: could do more work, more complex work
  • Some jobs changed, some disappeared
  • But accounting as a profession grew
The likely pattern:
  • Some jobs automated
  • New jobs created
  • Most jobs transformed
  • Net effect: probably positive, but with disruption
  • Winners and losers, like always

What Comes After AI?

The humbling reality:
In 10-15 years:
  • There will be something else
  • We'll all be talking about that
  • AI will be "just software"
  • Like we don't say "mobile internet" anymore
  • Like automatic elevators are just elevators
The automatic elevator reminder:
  • Until the 1950s: manual operators
  • Otis creates "autotronic" with "electronic politeness"
  • Revolutionary at the time
  • Now: just a lift
  • Nobody thinks about it
Applied to AI:
  • In 2035, nobody will say "AI-powered"
  • It'll just be software
  • The next generation won't remember when it wasn't there
  • We'll be worried about whatever comes next
The lesson:
  • This feels revolutionary because we're living through it
  • Everything feels revolutionary when it's happening
  • Then it becomes normal
  • Then something else comes along

The Final Takeaway: Measured Optimism

What Evans Gets Right

The centrist position is valuable:
Avoiding the extremes:
  • Not "AI will kill us all"
  • Not "AI changes nothing"
  • Not "this is the singularity"
  • Not "this is just hype"
The realistic view:
  • Biggest thing since the iPhone
  • Will reshape industries
  • Will create new winners and losers
  • Will raise new questions
  • Then will become normal
Why this matters:
  • Helps cut through the noise
  • Focuses on actual impact
  • Acknowledges uncertainty
  • Avoids both panic and complacency

The Questions That Matter

What we should be asking:
Near-term (1-3 years):
  • Where does search revenue go?
  • How do publishers survive?
  • What are the real use cases?
  • Why don't more people use it regularly?
  • Can error rates be controlled?
Medium-term (3-7 years):
  • Where is value captured?
  • Do network effects emerge?
  • How does this change work?
  • What new categories get created?
  • Who are the winners and losers?
Long-term (7-15 years):
  • Does this change computing fundamentally?
  • What comes after smartphones?
  • How does this reshape industries?
  • What's the next platform shift?

The Honest Uncertainty

What Evans models well:
Admitting what we don't know:
  • "I don't fucking know"
  • "We don't have answers yet"
  • "It depends"
  • "We'll see"
Why this is valuable:
  • Most people pretend to know
  • Certainty sells better than uncertainty
  • But honesty is more useful
  • Helps avoid bad decisions based on false confidence
The historical lesson:
  • We didn't know how the internet would work
  • We didn't know mobile would replace desktop
  • We didn't know social media would matter
  • We figured it out as we went
  • Same will happen with AI

For the Rest of Us

What to actually do:
If you're building:
  • Focus on real problems
  • Don't assume AI solves everything
  • Don't assume it solves nothing
  • Find where it actually adds value
  • Be prepared to pivot
If you're investing:
  • Understand the trade-offs
  • Don't bet on certainty
  • Diversify across scenarios
  • Remember: early leaders often don't win
  • Value capture might be in unexpected places
If you're working:
  • Stay curious
  • Learn how to learn
  • Don't assume your job is safe
  • Don't assume your job is doomed
  • Develop skills that complement AI
If you're leading:
  • Don't ignore this
  • Don't panic about this
  • Experiment thoughtfully
  • Focus on real use cases
  • Accept that you'll get some things wrong

The Bottom Line

AI is real and important:
  • Not hype
  • Not nothing
  • Genuinely transformative
But it's not magic:
  • Has limitations
  • Has trade-offs
  • Will take time to play out
  • Will surprise us in unexpected ways
The pattern holds:
  • Platform shifts are always confusing
  • We figure them out as we go
  • Some things we expect don't happen
  • Some things we don't expect do happen
  • Eventually it becomes normal
The advice:
  • Stay curious
  • Stay skeptical
  • Stay flexible
  • Don't believe the extremes
  • Focus on what's actually happening, not what people say is happening
And remember:
  • In 10 years, we'll be talking about something else
  • AI will just be software
  • This too shall pass
  • And that's okay

Key Quotes to Remember

On AI's importance:
"This is like the biggest thing since the iPhone, but I also think it's only the biggest thing since the iPhone."
On usage:
"Why is it that somebody looks at this and gets it and goes back every week, but only every week?"
On quality:
"Is this what ChatGPT would have said? If yes, don't publish it."
On regulation:
"If you make it really hard and expensive to build houses, houses will be more expensive. You can choose that, but you can't then complain."
On advice:
"I don't fucking know. You don't know what you're going to be good at. Try to create options for yourself."
On the future:
"In 10 years time it'll just be software."

This is the measured, realistic take on AI we need more of: not panic, not hype, just thoughtful analysis of what's actually happening and honest uncertainty about what comes next.