Nvidia has announced a strategic partnership with OpenAI under which it will invest up to $100 billion, coupled with the deployment of massive new compute infrastructure. Nvidia will invest progressively as each gigawatt of AI data-center capacity is built, with a goal of deploying at least 10 gigawatts of Nvidia systems. The first gigawatt is slated to come online in the second half of 2026 on Nvidia's upcoming Vera Rubin platform.
The partnership serves multiple purposes. It gives OpenAI the financial and hardware support needed to scale compute capacity quickly as it develops more ambitious AI models. Nvidia, meanwhile, deepens its relationship with one of its biggest customers by becoming OpenAI's preferred compute and networking partner, aligning its hardware roadmap with the demands of OpenAI's future model infrastructure.
Several key questions remain unresolved, however. Nvidia's investment will be in non-voting shares, meaning it acquires a financial stake but no governance rights. It is not yet clear how OpenAI's corporate structure, particularly its evolving transition toward a public benefit corporation, affects how the investment will be structured or regulated. Also uncertain is where the remaining capital needed to build out the full 10 gigawatts will come from, since the cost of the surrounding infrastructure (power, real estate, cooling, operations, and so on) far exceeds the cost of chips and servers alone.
This move comes at a time when competition for AI compute resources is intensifying. OpenAI already collaborates with Microsoft, Oracle, SoftBank, and others on data-center and infrastructure projects. For Nvidia, the deal potentially positions it not just as a chip supplier but as a partner in shaping the direction of future AI systems, in both hardware and software. The announcement has also raised antitrust and competitive-access questions: if Nvidia allocates significant chip and infrastructure capacity to OpenAI, how will rival AI developers compete for similar supply?
The broader implications are substantial. If executed in full, the partnership could shift the economics of large-scale AI development: lowering barriers to compute access, accelerating model training, and possibly redefining which companies can compete at the frontier. For policymakers, the deal amplifies the urgency of questions about regulation, fair access to infrastructure, energy and environmental impacts, and how the public interest is preserved as private capital becomes deeply embedded in the underpinnings of AI. The evolving Nvidia-OpenAI relationship may therefore become a case study in how the next generation of AI powerhouses is built.