
NVIDIA GB300 AI Server Launch Sparks Surge in Demand for 1.6T Optical Modules!




As NVIDIA’s GB200 AI server chip enters its peak mass-production phase, the company is preparing to launch its next-generation flagship product, the GB300, in the second half of 2025. The upcoming release is expected to significantly boost the AI infrastructure ecosystem, with major cloud service providers accelerating adoption and server manufacturers ramping up inventory preparation.


Industry reports indicate that, backed by its highly vertically integrated operations, Hon Hai (Foxconn) is poised to secure the largest share of GB300-related orders, becoming NVIDIA’s most important server partner.


Systems based on the GB300 are currently undergoing testing, with mass shipments scheduled to begin later this year. Because the GB200 and GB300 share a common architecture, manufacturers expect a smoother production ramp‑up for the new generation. GB300-based AI servers are projected to account for more than half of Foxconn’s server revenue in 2025.


Other manufacturers including Quanta, Wistron NeWeb, and Inventec are also accelerating development of GB300 products. Quanta, which expanded GB200 server shipments in the second quarter, is now in the testing and validation phase for the GB300, targeting volume shipments starting in September. Despite the short interval between the GB200 and GB300 launches, Quanta management believes strong demand from top-tier cloud infrastructure providers for large language model training will support growth momentum across both product lines.


Wistron NeWeb plans to begin delivering GB300 servers in August or September, with a significant increase in shipments expected in the second half of the year. Although the company captured relatively few initial orders in the first wave of the Blackwell series, it reports securing new market opportunities and customer orders in the current cycle. Meanwhile, Inventec is targeting entry-level configurations based on the B300 platform, with shipments expected to start at the end of the third quarter.


The GB300 delivers substantial performance improvements. Built around chips manufactured by TSMC, the high-end GB300 NVL72 system integrates 72 Blackwell Ultra GPUs and 36 Arm Neoverse‑based Grace CPUs, delivering 1.5 times the AI computing power of the GB200 NVL72. Compared with the previous-generation Hopper architecture, NVIDIA expects Blackwell to drive a 50‑fold increase in revenue opportunities for AI factories.
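The rack composition above follows from the superchip layout. As a minimal sketch, assuming each Grace Blackwell superchip pairs one Grace CPU with two Blackwell Ultra GPUs (the pairing NVIDIA uses for this rack class; not stated explicitly in this article):

```python
# NVL72 rack composition as reported above.
GPUS = 72   # Blackwell Ultra GPUs per rack
CPUS = 36   # Grace CPUs per rack

# Assumption: one Grace CPU per superchip, so superchip count equals CPU count.
superchips = CPUS
gpus_per_superchip = GPUS // CPUS

print(f"{superchips} superchips, {gpus_per_superchip} GPUs each")
# → 36 superchips, 2 GPUs each
```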


The launch of the GB300 is also catalyzing a new wave of technological advances in optical communications, especially 1.6 Tbps interconnect solutions using co-packaged optics (CPO). According to reported figures, this transition improves transmission speed by 60% and reduces power consumption by 40%, significantly enhancing training and inference performance for next‑generation AI workloads.
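Taken together, those two figures imply an even larger gain in efficiency. An illustrative calculation, taking the reported +60% speed and -40% power at face value and assuming they apply to the same reference workload:

```python
# Normalized baseline link: speed = 1, power = 1.
baseline_speed = 1.0
baseline_power = 1.0

# Reported CPO figures: +60% transmission speed, -40% power consumption.
cpo_speed = baseline_speed * 1.6
cpo_power = baseline_power * 0.6

# Energy per transmitted bit is proportional to power / speed.
baseline_epb = baseline_power / baseline_speed
cpo_epb = cpo_power / cpo_speed

reduction = 1 - cpo_epb / baseline_epb
print(f"Energy per bit falls by {reduction:.1%}")  # → 62.5%
```

In other words, if both headline numbers hold simultaneously, each bit moved costs roughly 62.5% less energy than on the baseline link.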


Optical module manufacturers are rapidly expanding capacity to meet surging demand.

Innolight has begun small‑batch deliveries of 1.6T modules, claiming a 12‑month lead over industry peers. To meet projected 2025 demand, the company is raising monthly production capacity at its Thailand factory from 300,000 units to 500,000 units. In the first quarter of 2025, Innolight achieved a record quarterly revenue of 6.674 billion yuan, with net profit rising more than 56% year‑on‑year.



Accelink Technologies is advancing independent R&D in silicon photonics and continuous‑wave light sources, with 1.6T modules currently under verification. Its monthly capacity has reached 500,000 units. The company posted revenue of 2.222 billion yuan in Q1 2025, up 72.14% year‑on‑year, while net profit nearly doubled.


Eoptolink has launched 800G and 1.6T modules based on 200G per wavelength optics, alongside a portfolio of VCSEL, EML, silicon photonics, and thin‑film lithium niobate solutions. The company has also introduced 400G/800G ZR coherent modules and LPO‑based products. In Q1 2025, it achieved revenue of 4.052 billion yuan and net profit of 1.569 billion yuan, representing a year‑on‑year increase of over 380%.
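The "200G per wavelength" figure determines how many optical lanes each module class needs. A short sketch of that arithmetic (the lane counts are the common industry configurations, e.g. DR4/DR8-style layouts, and are an assumption rather than a detail confirmed by this article):

```python
LANE_RATE_G = 200  # Gbps per wavelength/lane, as cited above

def lanes_needed(module_gbps: int, lane_rate: int = LANE_RATE_G) -> int:
    """Number of optical lanes required to reach a given aggregate rate."""
    return module_gbps // lane_rate

print(lanes_needed(800))    # → 4 lanes for an 800G module
print(lanes_needed(1600))   # → 8 lanes for a 1.6T module
```

This halving of lane count per unit of bandwidth (versus 100G-per-wavelength designs) is what lets 1.6T modules fit existing form factors.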


Huagong Tech has completed independent R&D on 200G per wavelength silicon photonic chips and 1.6T module solutions. Monthly capacity rose from 200,000–300,000 units in Q1 to 1 million units in Q2, yet demand still outstrips supply. Its Thailand factory is now operational, focusing on 800G and 1.6T module production, with a target of reaching 200,000 units per month by the end of Q2. In Q1 2025, Huagong Tech recorded revenue of 3.355 billion yuan and net profit of 409 million yuan, both showing substantial year‑on‑year growth.

NVIDIA AI Factory Infrastructure Solution

NVIDIA DGX™ GB300 is the most powerful and largest‑scale AI system currently available for enterprise deployment. Built on the NVIDIA Grace™ Blackwell Ultra superchip and scalable to thousands of nodes, the DGX GB300 is a liquid‑cooled rack‑scale system designed for modern data centers equipped with NVIDIA MGX™ racks.

With the DGX GB300, leading global enterprises can break through performance bottlenecks and build infrastructure solutions capable of solving any challenge.