More than two years after the launch of OpenAI’s ChatGPT catalysed a wave of interest in generative artificial intelligence (AI), the long-term implications of the technology remain highly uncertain. The pace of innovation is rapid, regulatory frameworks are still evolving, and the ultimate economic value of many AI use cases is yet to be proven. Amid this uncertainty, however, the tech giants’ strategies are starting to take shape, with some, such as Alphabet and Meta, investing deeply in building their own models, and others, such as Microsoft and Amazon, taking a more model-agnostic approach.
In a landscape where large language model (LLM) performance gains can be quickly eroded, the ability to scale distribution across deeply entrenched enterprise and consumer platforms is likely to prove more decisive than technical differentiation alone. In this newsletter, we explore how two Seilern holdings, Microsoft and Alphabet, are approaching this challenge. While their strategies may differ in execution, both have recognised that large upfront investments are needed to unlock new revenue streams and to reinforce their competitive positions, which are anchored in their roles as cloud hyperscalers and the powerful distribution leverage this provides.
Microsoft’s model-agnostic approach
Microsoft’s generative AI strategy is to focus on distribution via an LLM-agnostic approach. In contrast to competitors prioritising frontier model development, Microsoft has strategically emphasised downstream deployment, based on the view that LLMs are on a path to commoditisation. Although a large share of Microsoft’s AI-related revenues today, forecast at $13bn in annualised terms, is derived from OpenAI’s model training workloads (the “compute layer” in the figure below) hosted on Azure, Microsoft is increasingly orienting its infrastructure investments and revenue generation toward inference workloads (the “application layer”), which offer more immediate monetisation opportunities and, the company believes, greater long-term potential.
Figure 1: Large Language Model (LLM) supply chain example1

Source: The Ada Lovelace Institute
An example of this is Azure AI Foundry, which offers enterprise customers access to over 10,000 models: exclusive access to OpenAI’s models alongside many others, such as Meta’s and DeepSeek’s open-source models and, most recently, xAI’s offerings. The aim of the Foundry is to provide a unified platform on which developers can test and deploy whichever model best suits their purpose. The ultimate goal is to drive inference compute demand for the Azure network by making it easy and attractive to use a diverse range of AI models on the platform. This model-agnostic approach ensures Microsoft remains a central player regardless of which specific AI models gain prominence.
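As a rough illustration of what model-agnostic deployment means for a developer, the Python sketch below sends the same prompt to several hosted models through a single OpenAI-style chat-completions endpoint, changing only the model identifier. This is a minimal sketch under stated assumptions: the endpoint URL, credential variable and model names are hypothetical placeholders, not Foundry’s actual API surface.

```python
import os
import requests

# Hypothetical unified inference endpoint assumed to speak the OpenAI-style
# chat-completions format; the real Foundry endpoint and model names differ.
ENDPOINT = "https://example-resource.services.ai.azure.com/chat/completions"
API_KEY = os.environ["AI_PLATFORM_KEY"]  # placeholder credential

def chat(model: str, prompt: str) -> str:
    """Send the same prompt to any hosted model by changing one string."""
    response = requests.post(
        ENDPOINT,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": model,  # the only thing that changes per provider
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

# Swap between providers without touching the application code.
for model in ("gpt-4o", "llama-3-70b-instruct", "deepseek-r1"):
    print(model, "->", chat(model, "Summarise this quarter's sales.")[:80])
```

The commercial point is in the loop at the bottom: once the platform normalises the interface, switching models costs the customer one string, while every call still runs, and is billed, on Azure.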
Microsoft’s evolving partnership with OpenAI, which began in 2019, provides key insights into its strategic direction. Notably, Microsoft has remained absent from OpenAI’s ambitious “Stargate Project,” which aims to invest $500bn in compute infrastructure over the next four years. While Microsoft retains exclusive distribution rights for OpenAI’s models, its decision not to vastly increase infrastructure commitments for training purposes aligns with the company’s broader model-agnostic strategy. Microsoft aims for Azure to be the platform of choice, not just for OpenAI models, but for any model its customers choose for their inference needs.
Rather than aggressively ramping up capital expenditure in anticipation of future breakthroughs, Microsoft is calibrating its incremental supply-side investments (such as GPU clusters, data centre expansions, and proprietary model development) to observable customer demand and realised value from inference workloads. In practical terms, Microsoft closely monitors usage patterns, enterprise adoption rates, and revenue contribution from deployed AI tools and uses these metrics to guide further infrastructure investment. This demand-driven approach ensures that capital is allocated where usage is likely to be stickier and where future profit pathways are likely to be clearer and more predictable.
Alphabet’s technical and cost advantage
Alphabet’s approach to generative AI is rooted in a philosophy of continuous innovation, leveraging its capabilities across research (via Google DeepMind), infrastructure (through its Google Cloud service) and distribution (through products and services used by billions of customers globally). Although the company offers third-party AI models, it is also strategically investing in building its own generative AI ecosystem around its multimodal2 LLM family, Gemini. While all three major hyperscalers (Amazon, Microsoft and Google) offer comprehensive cloud services, it is Google that is often cited as having the most vertically integrated cloud infrastructure, from subsea cabling to its extensive use of in-house custom silicon, which provides several strategic advantages.
A prime example of this is its in-house Tensor Processing Units (TPUs), chips specifically engineered for AI compute. First deployed in Google’s data centres in 2015, these TPUs now power all training and inference for its cutting-edge Gemini models. Through years of continuous iteration, TPUs have enabled Google to offer best-in-class AI models with remarkable cost-efficiency. As the chart below illustrates, Google’s models operate at the “Pareto frontier” of performance (shown on the y-axis) and cost (on the x-axis), making them an appealing proposition for customers. In other words, no alternative offers a better deal overall: for models on this frontier, any gain in one dimension (such as price) would inherently require a trade-off in the other (such as performance).
Figure 2: Large Language Model (LLM) Performance vs. Cost Per Compute

Source: Google for Developers Blog (17 April 2025)
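To make the Pareto-frontier idea concrete, the short Python sketch below filters a set of invented (cost, performance) points down to the non-dominated ones, i.e. those for which no rival model is simultaneously at least as cheap and at least as good, and strictly better on one dimension. The model names and figures are hypothetical and bear no relation to the models in Figure 2.

```python
# Illustrative only: invented (cost, score) pairs, not real benchmark data.
# Cost is price per million tokens; score is an abstract quality benchmark.
models = {
    "model_a": (1.0, 62.0),
    "model_b": (2.5, 71.0),
    "model_c": (3.0, 68.0),  # dominated: model_b is cheaper AND better
    "model_d": (8.0, 80.0),
}

def pareto_frontier(points: dict[str, tuple[float, float]]) -> list[str]:
    """Return the models for which no rival is at least as cheap and at
    least as good, while being strictly better on one of the dimensions."""
    frontier = []
    for name, (cost, score) in points.items():
        dominated = any(
            c2 <= cost and s2 >= score and (c2 < cost or s2 > score)
            for rival, (c2, s2) in points.items()
            if rival != name
        )
        if not dominated:
            frontier.append(name)
    return frontier

print(pareto_frontier(models))  # -> ['model_a', 'model_b', 'model_d']
```

A model off the frontier, like model_c here, can always be swapped for one that is cheaper without any loss of quality; sitting on the frontier is what makes a model a rational choice for at least some buyers, which is precisely the position Google claims for its Gemini family.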
These chips give Google a structural cost advantage that runs throughout its entire AI operation. While competitors like OpenAI (reliant on Microsoft Azure) and Anthropic (reliant on Amazon Web Services) must procure expensive third-party cloud infrastructure for their compute requirements – often bearing what some term the “Nvidia tax” on high-end GPUs – Google’s proprietary hardware circumvents a significant portion of these external costs.
DeepSeek’s release in February of its highly competitive open-source AI models exposed the staggering cost inefficiencies of many LLM developers and served as a stark wake-up call for the industry. Yet for Google, with its years of investment in cost-optimised training and a decade of continuous iteration on its TPUs, the impact was remarkably muted. In a recent interview, Alphabet CEO Sundar Pichai noted that when Google benchmarked itself against DeepSeek’s models, it found its own Gemini models to be as efficient, and in some cases more so. This illustrates the significant edge Google derives from its integrated hardware approach.
Distribution remains key
Whilst the current momentum around generative AI appears unstoppable, there is always the risk that demand cools more rapidly than anticipated. However unlikely this scenario may seem today, Alphabet and Microsoft are insulated from such a downturn precisely because of their cloud hyperscaler status and unparalleled distribution capabilities. Even if the immediate hype surrounding specific AI applications diminishes, the underlying demand for cloud computing, data storage, and essential enterprise software will persist. As the generative AI landscape continues to evolve, the ultimate winners will be determined not solely by who builds the most advanced models but by who can distribute them most effectively at scale.
Consider the case of OpenAI. While it is widely perceived as a leader in AI model innovation, its operational and commercial viability is inextricably linked to Microsoft. Under their partnership, OpenAI leverages Microsoft Azure for its compute needs,3 with all its API workloads hosted solely on Azure. This arrangement not only grants Microsoft exclusive access to OpenAI’s cutting-edge technology but also solidifies Azure’s position as a foundational layer in the burgeoning AI stack. Crucially, it also highlights a significant strategic constraint for OpenAI: its reliance on Microsoft’s infrastructure and distribution capabilities limits its independence and amplifies its execution risk. Without its own hyperscale-grade cloud, OpenAI must depend on Microsoft not only for the raw compute power essential for training and inference but also, increasingly, for reaching customers across critical enterprise touchpoints. This deep dependency makes OpenAI’s path to sustained profitability considerably more uncertain.
Conversely, the strength of hyperscalers like Alphabet and Microsoft lies in their unparalleled control over the fundamental resource for AI – compute infrastructure – and their ability to operate it with extreme efficiency at scale alongside their vast distribution capabilities. As previously explored by my colleague Nina, the core of Google’s strategy is encapsulated by its long-standing motto: “focus on the user and all else will follow”.4 By seamlessly integrating advanced AI capabilities into existing products and services that billions of users already rely on – such as Android, Gmail, YouTube, Workspace, and Search – Google is able to generate higher engagement and deliver more personalised experiences. This high-frequency usage, coupled with tight product integration and continuous data collection, provides a robust foundation for user lock-in and, ultimately, monetisation.
Similarly, Microsoft is strategically embedding its AI-powered digital companion, Copilot,5 directly into its widely used software offerings. This approach adds AI capabilities precisely where a significant portion of enterprise work already occurs, removing the need for users to adopt entirely new applications. By integrating AI features into Microsoft 365, GitHub, and Teams, Microsoft directly taps into an installed base of hundreds of millions of users. In turn, this positions Microsoft to capture further value by enhancing an already established software ecosystem.
In this evolving landscape, companies that lack these two critical pillars – proprietary, cost-efficient compute infrastructure and deeply embedded, expansive distribution channels – may produce brilliant AI models today, but will face significant hurdles in reaching users at scale and achieving sustainable margins over the long term. In this regard, Microsoft and Alphabet are both well-positioned to lead the next era of AI-driven innovation and value creation.
1Note: this is one possible model (there will not always be a separate or single company at each layer – for example Microsoft provides both the Compute and Application layers).
2Multimodal LLMs can process and understand information from multiple modalities such as text, images, audio and video and generate output in multiple formats too.
3An amendment to the agreement earlier this year gives Microsoft the “right of first refusal” on any new cloud computing capacity if OpenAI seeks a rival cloud provider.
4https://about.google/company-info/philosophy/
5Microsoft currently employs OpenAI’s latest models for the Copilot services it provides to customers.
This is a marketing communication / financial promotion that is intended for information purposes only. Any forecasts, opinions, goals, strategies, outlooks and/or estimates and expectations or other non-historical commentary contained herein or expressed in this document are based on current forecasts, opinions and/or estimates and expectations only, and are considered “forward looking statements”. Forward-looking statements are subject to risks and uncertainties that may cause actual future results to be different from expectations.
Nothing contained herein is a recommendation or an offer or solicitation for the purchase or sale of any financial instrument. The material is not intended to provide, and should not be relied on for, accounting, legal or tax advice, or investment advice. The content and any data services and information available from public sources used in the creation of this communication are believed to be reliable but no assurances or warranties are given. No responsibility or liability shall be accepted for amending, correcting, or updating any information contained herein.
Please be aware that past performance should not be seen as an indication of future performance. The value of any investments and or financial instruments included in this website and the income derived from them may fluctuate and investors may not receive back the amount originally invested. In addition, currency movements may also cause the value of investments to rise or fall.
This content is not intended for use by U.S. Persons. It may be used by branches or agencies of banks or insurance companies organised and/or regulated under U.S. federal or state law, acting on behalf of or distributing to non-U.S. Persons. This material must not be further distributed to clients of such branches or agencies or to the general public.