OpenAI Sets Out to Create Custom AI Chipset, Redefining Its Hardware Strategy

OpenAI is reportedly gearing up to create its first custom artificial intelligence (AI) chipset this year. According to sources, the San Francisco-based AI company has begun the internal design phase and is expected to complete the processor design in the coming months. A primary motivation for developing a custom AI chipset is to reduce dependence on Nvidia and strengthen the company's bargaining position with other chip manufacturers. Notably, a recent trademark application by OpenAI revealed the company's intention to produce a wide range of hardware, including chipsets.

OpenAI’s Chipset

As reported by Reuters, OpenAI is currently completing the design of its in-house chipset and expects to finalize it in the next few months. The publication, citing insiders, noted that the AI company will then likely tape out the chipset (send the finalized design to a chip fabrication facility) at Taiwan Semiconductor Manufacturing Company (TSMC).

TSMC is expected to handle manufacturing for OpenAI. The chipset is anticipated to be built on a 3-nanometer process and to feature a systolic array architecture, high-bandwidth memory (HBM), and extensive networking capabilities. Notably, Nvidia's chips also use HBM.
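The report does not go beyond those terms, but a systolic array is a well-established way to accelerate the matrix multiplications at the core of AI workloads: operands stream through a grid of multiply-accumulate units so each value is reused many times without repeated trips to memory, which is also why such designs pair naturally with HBM feeding the edges of the grid. The short Python simulation below is only an illustrative sketch of that general dataflow, not anything based on OpenAI's actual design; the function name and array sizes are invented for the example.

```python
import numpy as np


def systolic_matmul(A, B):
    """Toy cycle-by-cycle simulation of an output-stationary systolic array.

    Each processing element PE(i, j) holds one accumulator for C[i, j].
    Rows of A stream in from the left and columns of B stream in from the
    top, each skewed by one cycle per row/column so that A[i, k] and
    B[k, j] arrive at PE(i, j) on the same cycle.
    """
    M, K = A.shape
    K2, N = B.shape
    assert K == K2, "inner dimensions must match"

    acc = np.zeros((M, N))    # one accumulator per PE (the C tile)
    a_reg = np.zeros((M, N))  # A operand currently held by each PE
    b_reg = np.zeros((M, N))  # B operand currently held by each PE

    # The last useful multiply happens at PE(M-1, N-1) on cycle M + N + K - 3.
    for t in range(M + N + K - 2):
        # Operands march one PE per cycle: A to the right, B downward.
        a_reg = np.roll(a_reg, 1, axis=1)
        b_reg = np.roll(b_reg, 1, axis=0)
        # Inject the skewed input wavefronts at the array edges
        # (this overwrites the wrapped-around column/row left by np.roll).
        for i in range(M):
            k = t - i                      # row i lags by i cycles
            a_reg[i, 0] = A[i, k] if 0 <= k < K else 0.0
        for j in range(N):
            k = t - j                      # column j lags by j cycles
            b_reg[0, j] = B[k, j] if 0 <= k < K else 0.0
        # Every PE performs one multiply-accumulate per cycle.
        acc += a_reg * b_reg
    return acc


A = np.random.rand(3, 4)
B = np.random.rand(4, 5)
assert np.allclose(systolic_matmul(A, B), A @ B)
```

In hardware, each of those grid cells is a small multiply-accumulate unit, so a single pass over the streamed inputs computes an entire output tile without the control overhead of a general-purpose core.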

OpenAI reportedly believes that developing its own chipsets will give it leverage in negotiations with other suppliers. The strategy is also intended to reduce its dependence on Nvidia, whose chips the company has relied on heavily. The publication indicated that the AI firm aims to build "increasingly sophisticated processors with extended functionalities" in future iterations of the chipset.

Citing sources, Reuters stated that the chipset is being designed by OpenAI's in-house team, led by Richard Ho, the company's head of hardware. Ho previously worked at Lightmatter and Google and specializes in semiconductor engineering. The team he leads has reportedly grown significantly in recent months and now comprises 40 members.

Importantly, the report specifies that OpenAI's initial chipset will first be deployed on a limited basis, primarily for running some of the company's AI models. Its role within the company's infrastructure will be narrow at first, with the potential to expand over time. Ultimately, OpenAI aims to use these chips for both inference and training of AI models.

[IMAGE_1]