Welcome to the Free edition of How They Make Money.
Over 160,000 subscribers turn to us for business and investment insights.
NVIDIA reported its October quarter earlier this week, with the world holding its breath and dissecting every move in the stock price.
While Wall Street often fixates on the latest quarter, let’s take a step back today to examine the broader trend shaping NVIDIA’s historic rise. The stock is up 10X since we first covered it in this newsletter two years ago. Already more profitable than Alphabet and Amazon, NVIDIA is on track to leapfrog Microsoft and Apple in net income in the coming quarters—a dazzling ascent for the history books.
The turning point? November 2022, when OpenAI introduced ChatGPT to the world. NVIDIA CEO Jensen Huang has called this the “iPhone moment of AI.”
Fast forward two years, and NVIDIA’s latest GPU architecture, Blackwell, is shipping at scale. As Huang put it:
“The age of AI is in full steam, propelling a global shift to NVIDIA computing. Demand for Hopper and anticipation for Blackwell — in full production — are incredible as foundation model makers scale pretraining, post-training and inference.”
Huang highlighted two trends fueling this cycle:
Platform shift from coding to machine learning (IT modernization).
Rise of AI factories (new industries emerging from gen AI apps).
AI-native startups are flourishing, and successful inference services are multiplying. If AI follows the trajectory of the mobile economy, it’s 2009 all over again—Instagram didn’t even exist yet—and we’re just getting started.
Today at a glance:
NVIDIA’s Q3 FY25.
Scaling limits of AI.
Key quotes from the call.
What to watch looking forward.
1. NVIDIA Q3 FY25
NVIDIA’s fiscal year ends in January, meaning the recently reported October quarter marks Q3 FY25. I’m focusing on sequential growth (quarter-over-quarter), which better captures the company’s momentum at this scale.
Income statement:
Revenue jumped +17% Q/Q to $35.1 billion ($2.0 billion beat).
⚙️ Data Center grew +17% Q/Q to $30.8 billion.
🎮 Gaming grew +14% Q/Q to $3.3 billion.
👁️ Professional Visualization grew +7% Q/Q to $0.5 billion.
🚘 Automotive grew +30% Q/Q to $0.4 billion.
🏭 OEM & Other grew +10% Q/Q to $0.1 billion.
Gross margin was 75% (-1pp Q/Q), in line with guidance.
Operating margin was 62% (flat Q/Q), in line with guidance.
Non-GAAP operating margin was 66% (flat Q/Q).
Non-GAAP EPS was $0.81 ($0.06 beat).
Cash flow:
Operating cash flow was $17.6 billion (50% margin).
Free cash flow was $16.8 billion (48% margin).
Balance sheet:
Cash and cash equivalents: $38.5 billion.
Debt: $8.5 billion.
Q4 FY25 Guidance:
Revenue +7% Q/Q to $37.5 billion ($0.5 billion beat).
Gross margin 73% (-2pp Q/Q).
So what to make of all this?
NVIDIA delivered a revenue beat of 6%, slightly ahead of last quarter’s 5% beat but below the larger beats seen earlier in this AI cycle. This reflects the market's elevated expectations and the inevitable "law of large numbers." While NVIDIA’s growth remains robust, the pace is naturally slowing as comparisons toughen.
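For those who like to check the math, here is the quick back-of-the-envelope version using only the figures reported above. The consensus number is implied from the size of the beat, so treat these as approximations:

```python
# Quick sanity check on the headline numbers (all figures in $ billions).
# Consensus is implied from the reported beat, so these are approximations.

revenue_q3 = 35.1                     # Q3 FY25 revenue (reported)
beat = 2.0                            # beat vs. consensus (reported)
qoq_growth = 0.17                     # +17% Q/Q (reported)

consensus = revenue_q3 - beat         # ~$33.1B implied consensus
beat_pct = beat / consensus           # ~6% beat
revenue_q2 = revenue_q3 / (1 + qoq_growth)  # ~$30.0B implied prior quarter

print(f"Implied consensus: ${consensus:.1f}B")
print(f"Beat: {beat_pct:.0%}")
print(f"Implied Q2 FY25 revenue: ${revenue_q2:.1f}B")
```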
⚙️ Data Center accounted for 88% of overall revenue (flat Q/Q), growing 112% year-over-year and 17% sequentially. Key drivers within Data Center:
⚡ Compute: Demand for Hopper GPUs fueled 22% sequential growth. These chips enable AI model training and inference at scale, with H200 revenue reaching “double-digit billions.”
🔌 Networking: Revenue declined sequentially due to demand lumpiness, with growth expected to resume in Q4. Spectrum-X Ethernet solutions for AI tripled year-over-year.
🎮 Gaming revenue grew 14% sequentially to $3.3 billion, driven by GeForce RTX GPU demand and strong back-to-school sales. Supply constraints are expected to impact Q4.
👁️ Professional Visualization revenue rose 7% sequentially to $486 million, with Omniverse and AI-related workflows driving adoption.
🚘 Automotive revenue surged 30% sequentially to $449 million, driven by strong adoption of NVIDIA’s AI-powered autonomous driving and cockpit solutions.
📉 Margins showed slight pressure due to the Blackwell production ramp, with gross margins dipping to 75%. Management expects further compression toward the low-70s in early FY26, followed by a rebound as production scales. Operating expenses increased, reflecting investments in next-generation products.
🔮 Looking Ahead: Demand for both Hopper and Blackwell chips continues to outstrip supply and is expected to remain constrained into FY26. Management noted improving supply visibility, which should support future growth. While NVIDIA remains at the forefront of the AI revolution, tougher comparisons and rising competition from AMD and custom AI chips could present challenges.
2. Scaling limits of AI
A growing number of skeptics argue that we may be approaching a ceiling in the scalability of AI applications. Jensen Huang, however, remains optimistic, pointing to untapped opportunities driven by advancements like reinforcement learning and inference-time scaling.
During the Q&A portion of NVIDIA's earnings call, the first question addressed this critical debate: has scaling for pre-training large language models (LLMs) hit its limit?
Huang offered an insightful perspective:
“This is an empirical law, not a fundamental physical law. But the evidence is that it continues to scale. What we're learning, however, is that it's not enough, that we've now discovered two other ways to scale.
One is post-training scaling. Of course, the first generation of post-training was reinforcement learning human feedback, but now we have reinforcement learning AI feedback, and all forms of synthetic data generated data that assists in post-training scaling.
And one of the biggest events and one of the most exciting developments is Strawberry, ChatGPT o1, OpenAI's o1, which does inference time scaling, what is called test time scaling. The longer it thinks, the better and higher-quality answer it produces.”
Huang’s comparison between physical laws (universal and immutable, like gravity) and empirical laws (patterns observed through experimentation) is crucial. The scalability of AI—an empirical trend where better models arise from increased data and compute—continues to evolve. Huang’s remarks suggest these new scaling methods, though still in early stages, could extend AI's growth trajectory.
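For readers who want a concrete picture of what an "empirical law" looks like here: published scaling-law studies typically fit model loss as a power law of scale (parameters, data, or compute). The sketch below uses invented constants purely for illustration; it is not NVIDIA’s or OpenAI’s actual data.

```python
# Illustrative scaling "law": loss falls as a power law of scale.
# The constants below are invented for illustration only; real studies
# (e.g., Kaplan et al., 2020) fit such coefficients empirically, which is
# why Huang calls scaling an empirical law rather than a physical one.

def modeled_loss(scale: float, a: float = 10.0, alpha: float = 0.07) -> float:
    """Hypothetical loss as a power law of scale (parameters, data, or compute)."""
    return a * scale ** (-alpha)

for n in (1e9, 1e10, 1e11, 1e12):  # 1B to 1T "parameters"
    print(f"{n:.0e} -> modeled loss {modeled_loss(n):.2f}")
```

The open question Huang raises is whether post-training and test-time scaling keep that curve bending downward even if pre-training gains slow.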
AI is still in its infancy. Mobile-first apps like Uber and Airbnb didn’t emerge until 3 years after the iPhone’s launch. Similarly, the most transformative applications of AI may still be on the horizon.
3. Key quotes from the earnings call
CFO Colette Kress:
On Data Center:
“NVIDIA Hopper demand is exceptional and sequentially, NVIDIA H200 sales increased significantly to double-digit billions, the fastest product ramp in our company's history.”
Kress provided updates on the three major customer categories:
☁️ Cloud Service Providers (CSPs): Contributed ~50% of Data Center revenue (up from 45% last quarter), with hyperscalers like Amazon, Microsoft, and Google leading the charge.
💬 Consumer Internet Companies: Meta, with its Llama models, is a key player, leveraging AI for applications like agents, deep learning recommender engines, and gen AI workloads.
🗄️ Enterprise: Thousands of companies are building generative AI apps and co-pilots across industries like healthcare, education, and robotics.
On Blackwell:
“Blackwell is in full production after a successfully executed mask change. We shipped 13,000 GPU samples to customers in the third quarter, including one of the first Blackwell DGX engineering samples to OpenAI. […] Blackwell is now in the hands of all of our major partners and they are working to bring up their Data Centers.”
Kress highlighted Oracle’s Zettascale AI Cloud Computing clusters, scaling to over 131,000 Blackwell GPUs. Blackwell GPUs offer a 4x reduction in cost compared to H100s for GPT-3 benchmarks, enhancing total cost of ownership.
On NVIDIA AI Enterprise:
“We expect NVIDIA AI Enterprise full year revenue to increase over 2 times from last year and our pipeline continues to build.”
NVIDIA’s software, service, and support revenue is set to exit the year annualizing at over $2 billion. This segment—less prone to hardware cyclicality—could become a cornerstone of long-term growth as the CUDA-compatible installed base expands. If you invest in NVDA for the next decade, this part of the thesis is crucial.
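As a refresher, "annualizing" simply means multiplying the latest quarterly run rate by four. A minimal sketch, using a hypothetical quarterly figure consistent with management’s comment:

```python
# "Exiting the year annualizing at over $2B" ~= latest quarterly run rate x 4.
# The quarterly figure below is hypothetical, chosen only to be consistent
# with management's comment.

quarterly_software_revenue_b = 0.55           # $B per quarter (hypothetical)
annualized_run_rate_b = quarterly_software_revenue_b * 4
print(f"Annualized run rate: ${annualized_run_rate_b:.1f}B")  # ~$2.2B
```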
On export restrictions:
“As a percentage of total Data Center revenue, it remains well below levels prior to the onset of export controls. We expect the market in China to remain very competitive going forward. We will continue to comply with export controls while serving our customers.”
As a reminder, US export controls restrict sales of NVIDIA’s highest-performance chips to China.
CEO Jensen Huang:
On the AI factory metaphor:
“We're essentially saying that these data centers are really AI factories. They're generating something. Just like we generate electricity, we're now going to be generating AI. And if the number of customers is large, just as the number of consumers of electricity is large, these generators are going to be running 24/7. And today, many AI services are running 24/7, just like an AI factory. […] It's unlike a data center of the past.”
Huang has a knack for explaining the shift from traditional data centers to facilities purpose-built for gen AI.
On Physical AI:
“There's now a whole new era of AI, if you will, a whole new genre of AI called physical AI. Just [as] those large language models understand the human language and how the thinking process [works], if you will, physical AI understands the physical world.
And it understands the meaning of the structure and understands what's sensible and what's not and what could happen and what won't. And not only does it understand but it can predict, roll out a short future. That capability is incredibly valuable for industrial AI and robotics.”
NVIDIA’s Omniverse is central to this vision, enabling simulations and digital twins for industrial applications.
On the multi-trillion-dollar opportunity:
“The age of AI is upon us and it's large and diverse. NVIDIA's expertise, scale, and ability to deliver full stack and full infrastructure let us serve the entire multi-trillion dollar AI and robotics opportunities ahead.”
From enterprises adopting AI agents to industrial robotics investments, Huang sees vast, untapped markets.
On the CUDA ecosystem:
“Everybody knows that if they innovate on top of CUDA and NVIDIA's architecture, they can innovate more quickly […] (This) large installed base (means) that whatever you create could land on a NVIDIA computer and be deployed broadly all around the world in every single data center all the way out to the edge into robotic systems.”
With a gigantic installed base in the making, CUDA could be NVIDIA’s next big growth avenue. The main difference? It would be recurring revenue with software-like margins.
4. What to watch looking forward
Buybacks continue
NVIDIA maintained its aggressive buyback pace in Q3:
$9.5 billion in FY24.
$7.7 billion in Q1 FY25.
$7.2 billion in Q2 FY25.
$11.0 billion in Q3 FY25.
While this signals management’s confidence in the company’s prospects, some investors might question whether the current cash influx would be better spent on R&D, M&A, or other initiatives to sustain long-term growth.
Valuation at a crossroads
NVDA was notably absent from high-profile hedge funds’ top picks in the latest 13F filings we discussed on Tuesday, suggesting a cautious stance in 2024. Funds have trimmed their exposure slightly, but NVIDIA remains one of the most owned stocks—investors aren’t jumping ship.
Here are the current forward P/E ratios (via YCharts), with a quick refresher on the math after the list:
NVIDIA 52.
AMD 41.
Microsoft 32.
Apple 31.
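For context, a forward P/E is simply the current share price divided by consensus earnings per share for the next twelve months. A minimal sketch with hypothetical inputs (not the actual consensus estimates behind the ratios above):

```python
# Forward P/E = current share price / expected EPS over the next twelve months.
# Inputs are hypothetical and only meant to show how a ~52x multiple arises.

def forward_pe(price: float, forward_eps: float) -> float:
    return price / forward_eps

print(f"{forward_pe(price=140.0, forward_eps=2.70):.0f}x")  # ~52x with these inputs
```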
Despite a seemingly higher multiple, NVIDIA’s valuation reflects its rapid earnings growth. With $19 billion in net profit in Q3, this isn’t a dot-com-style bubble like Cisco in 2000. NVIDIA’s profits are tangible, and its fundamentals are solid.
So what’s the catch? NVIDIA’s forward earnings rely heavily on sustained demand for Blackwell GPUs. But what comes after? The semiconductor industry is notoriously cyclical, and NVIDIA’s reliance on a few very large customers creates a concentration risk. A shift toward internal solutions by CSPs could significantly impact revenue.
Meanwhile, competition is heating up. AMD grew its Data Center revenue by 25% sequentially in Q3 (albeit from a much smaller base). Could differentiated offerings help AMD close the gap? We recently visualized AMD’s earnings and covered these dynamics.
The trillion-dollar question
Jensen Huang directly addressed a critical question during the Q&A: when will the hardware cycle enter its “digestion” phase? His response:
“I believe that there will be no digestion until we modernize a trillion dollars with the data centers. […] As you know, IT continues to grow about 20%, 30% a year, let's say. And let's say by 2030, the world's data centers for computing is, call it a couple of trillion dollars. And we have to grow into that. We have to modernize the data center from coding to machine learning. […]
The second part of it is generative AI […] If you look at OpenAI, it didn't replace anything. It's something that's completely brand new. It's in a lot of ways as when the iPhone came, it was completely brand new. It wasn't really replacing anything. And so we're going to see more and more companies like that.”
This crucial soundbite shows Huang’s belief that the shift to GPU computing, combined with the rise of gen AI, could extend this hardware cycle well beyond traditional models. Analysts may be underestimating the timeline, as gen AI opens entirely new markets.
Final thoughts
If ChatGPT was the “iPhone moment of AI”—as Huang described—then it’s like 2009 for the app economy. The true transformation is still ahead, with the potential for many more AI-first companies (the next Instagrams, Ubers, and Airbnbs) to emerge. The shift from mobile-first to AI-first is just beginning.
That’s it for today!
Stay healthy and invest on!
Get Your Business a Custom Visual
Interested in custom visuals for your organization or brand? Complete the form here, and we'll get in touch.
Disclosure: I am long AAPL, AMD, AMZN, GOOG, and NVDA in App Economy Portfolio. I share my ratings (BUY, SELL, or HOLD) with App Economy Portfolio members.
Author's Note (Bertrand here 👋🏼): The views and opinions expressed in this newsletter are solely my own and should not be considered financial advice or any other organization's views.