Key Points
The rapid proliferation of artificial intelligence has fundamentally altered the computational demands placed on modern data centers. To a layperson, the internet and cloud computing might seem like an invisible, wireless ether, but the physical reality is a labyrinth of processors, switches, and cables tightly packed into massive warehouses. For decades, these systems have communicated by sending electrical signals over copper wires. However, as AI models grow to require clusters of tens of thousands—and soon millions—of specialized chips, the sheer volume of data moving between them has created an unprecedented bottleneck. Pushing more data through copper wires at higher speeds generates excessive heat, consumes vast amounts of power, and results in signal corruption over short distances.
To solve this, the industry is looking toward light. By replacing electrical signals with lasers transmitting data through microscopic glass fibers—a technology known broadly as photonics—data centers can move information faster, over longer distances, and with a fraction of the energy. While fiber optics have long been used to connect buildings and cities, the latest breakthroughs involve shrinking these optical engines down to the microscopic level and packing them directly alongside the computing chips. This transition from electrons to photons is not merely a hardware upgrade; it represents a fundamental paradigm shift in computing architecture. It promises to drastically reshape how data centers are built, operated, and cooled, while simultaneously igniting a fierce competitive battle among the world’s leading semiconductor and networking companies.
The insatiable demand for artificial intelligence (AI) and high-performance computing (HPC) is pushing the boundaries of global data center infrastructure [cite: 1]. Training large language models (LLMs) with trillions of parameters, coupled with the delivery of real-time inference to billions of devices, requires massive computing infrastructure that must be optimized simultaneously for scale, efficiency, and performance [cite: 2]. As these AI infrastructures scale at unprecedented rates, the primary bottleneck impeding progress is no longer compute capacity itself, but rather the movement of data, alongside the resulting constraints of power and heat [cite: 3].
In modern AI clusters, the movement of data between processors, servers, and racks can consume up to 50% of the total system energy [cite: 3]. Historically, the transmission of this data has relied heavily on traditional copper-based electrical interconnects [cite: 1]. However, as the industry pushes signaling speeds beyond 800 Gigabits per second (800G) and toward 1.6 Terabits per second (1.6T), copper architectures are confronting severe physical limitations [cite: 3].
This impending physical limitation is widely referred to within the industry as the "Copper Wall" [cite: 4, 5]. At high data transmission speeds, specifically around 224 Gbps and 448 Gbps per lane, the electrical signal in copper degrades so rapidly due to frequency-dependent attenuation that its reliable physical reach shrinks to a matter of meters, or even centimeters [cite: 2, 4]. To force high-speed electrical signals through this resistive, lossy medium, systems require vast amounts of power, generating unsustainable levels of heat and necessitating complex, power-hungry Digital Signal Processors (DSPs) to correct signal degradation [cite: 6, 7].
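The reach collapse behind the "Copper Wall" follows from how channel loss grows with frequency: skin-effect loss scales roughly with the square root of frequency, while dielectric loss grows linearly. The sketch below illustrates the trend only; the loss coefficients and the 30 dB equalization budget are hypothetical, not measurements of any real cable, and the model understates the problem because real receivers also lose SNR margin as frequency rises.

```python
import math

def copper_loss_db_per_m(f_ghz, k_skin=0.5, k_diel=0.15):
    """Illustrative copper channel loss model (dB per meter):
    skin-effect term scales with sqrt(f), dielectric term scales
    linearly with f. Coefficients are hypothetical, chosen only
    to show the trend."""
    return k_skin * math.sqrt(f_ghz) + k_diel * f_ghz

def reachable_meters(f_ghz, loss_budget_db=30.0):
    """Reach at which total loss exhausts an assumed equalization budget."""
    return loss_budget_db / copper_loss_db_per_m(f_ghz)

# PAM4 Nyquist frequencies: 224 Gbps ~ 56 GHz, 448 Gbps ~ 112 GHz
for rate, f in [(224, 56), (448, 112)]:
    print(f"{rate} Gbps lane: ~{reachable_meters(f):.1f} m usable reach")
```

Because the dielectric term dominates at these frequencies, doubling the lane rate roughly doubles per-meter loss and halves the reach, which is why each SerDes generation pushes copper's "must use optics" boundary closer to the chip.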
Consequently, the AI hardware industry has arrived at a pivotal architectural crossroads [cite: 5]. To decouple bandwidth growth from crippling power constraints, the industry is transitioning toward chip-scale light interconnect technologies—specifically Silicon Photonics and Co-Packaged Optics (CPO) [cite: 5]. By moving data via photons rather than electrons, optical interconnects promise petabit-per-second connectivity, nanosecond-level latency, and energy efficiencies previously thought impossible [cite: 5].
The transition from copper to optical interconnects in AI data centers is driven by rigorous technical imperatives. Academic and industry simulations consistently indicate that optical interconnects outperform traditional copper interconnects across all major design criteria at the global interconnect level, particularly as CMOS technology scales down [cite: 8, 9].
Bandwidth requirements in AI clusters are growing exponentially. NVIDIA’s NVLink 6 communication protocol, for instance, defines a peak SerDes (Serializer/Deserializer) rate of 400 Gbps per lane, establishing a staggering bandwidth ceiling of 3.6 Terabytes per second (TB/s) per individual GPU [cite: 10]. In scale-up networks linking tens of thousands of GPUs, the aggregate bandwidth demands are monumental [cite: 2].
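A quick consistency check connects the two cited figures. The lane count below is an inference for illustration, not a published specification, and the arithmetic ignores whether the 3.6 TB/s figure counts each direction separately.

```python
lane_rate_gbps = 400       # cited per-lane SerDes rate for NVLink 6
gpu_bw_tbytes = 3.6        # cited per-GPU bandwidth ceiling, terabytes/s

gpu_bw_gbits = gpu_bw_tbytes * 8 * 1000   # 3.6 TB/s -> 28,800 Gb/s
lanes = gpu_bw_gbits / lane_rate_gbps
print(f"{gpu_bw_gbits:.0f} Gb/s total -> {lanes:.0f} lanes at {lane_rate_gbps} Gb/s each")
# -> 28800 Gb/s total -> 72 lanes at 400 Gb/s each
```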
The Limits of Copper Bandwidth: At data rates of 224 Gbps, advanced design techniques can allow copper traces to manage short distances [cite: 2]. However, as the industry eyes 448 Gbps per lane, copper transmission lines face severe bandwidth limitations due to high-frequency attenuation [cite: 2]. Additional bottlenecks manifest through catastrophic signal loss and drastically reduced signal-to-noise ratios, meaning copper simply cannot reliably carry these frequencies over distances longer than approximately one meter without massive active signal correction [cite: 2, 10].
The Optical Bandwidth Advantage: In stark contrast, fiber optics operate like a "high-speed train traveling on a frictionless track," allowing massive volumes of data to move over kilometers with virtually no signal loss [cite: 4]. Optical physical layers (PHY) inherently overcome the electrical resistance challenge [cite: 6]. Furthermore, optical transmission leverages Wavelength-Division Multiplexing (WDM), a technique that enables multiple distinct wavelengths of light to be carried simultaneously over a single optical fiber [cite: 10]. This capability dramatically increases transmission and bandwidth density, achieving a physical footprint and throughput scale that copper-based transmission physics cannot replicate [cite: 10]. Emerging optical PHY standards are targeting data delivery capacities of 3.2 Tb/s and beyond [cite: 6].
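The density advantage of WDM is simple multiplication: each additional wavelength adds a full lane of capacity to the same strand of glass. The wavelength counts and per-wavelength rates below are hypothetical configurations chosen to illustrate how a single fiber can approach the 3.2 Tb/s target mentioned above.

```python
def fiber_capacity_tbps(wavelengths, rate_per_lambda_gbps):
    """Aggregate capacity of one fiber carrying several WDM wavelengths."""
    return wavelengths * rate_per_lambda_gbps / 1000

# Hypothetical: 8 wavelengths at 200 Gb/s each on one fiber
print(fiber_capacity_tbps(8, 200))   # -> 1.6 (Tb/s)
# 16 wavelengths on the same fiber would reach the 3.2 Tb/s class
print(fiber_capacity_tbps(16, 200))  # -> 3.2
```

A copper cable, by contrast, can only add capacity by adding physical conductors, which is precisely the bulk, weight, and airflow problem described later in this piece.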
Energy efficiency has transitioned from a secondary operational consideration to a primary "survival metric" in the AI data center landscape [cite: 3]. As AI data centers grow to gigawatt scales, the power consumed by networking and interconnect fabrics must be aggressively curtailed to leave sufficient power envelopes for the compute units (GPUs/TPUs) themselves.
The Power Cost of Copper and Traditional Optics: Copper is an inherently resistive medium; transmitting data through it at high speeds requires tremendous power [cite: 6]. While traditional pluggable optical transceivers solve the distance limitations of copper, they introduce their own power burdens. A fully retimed 1.6T pluggable optical transceiver (in the OSFP format) can consume up to 25 Watts of power [cite: 2]. Crucially, up to 15 Watts of this power budget is consumed solely by the DSP, which is required to clean up and retime the signal after it travels over the 14-to-16-inch copper printed circuit board (PCB) traces connecting the switch ASIC to the front-panel pluggable module [cite: 2, 7]. In early 2025, it was reported that interconnect fabrics were consuming nearly 30% of total AI cluster power [cite: 5].
Energy Savings via Co-Packaged Optics (CPO): Chip-scale light interconnects, specifically CPO, fundamentally alter this energy equation. By moving the optical engine off the front panel and soldering it directly into the same physical package as the switch ASIC, the electrical signal path is drastically shortened from centimeters (or inches) to mere millimeters [cite: 1]. This tight co-packaging reduces the attenuation associated with high-speed electrical traces, enabling the complete elimination of bulky, power-hungry external DSPs [cite: 1, 7].
By containing the electrical SerDes within its optimal, millimeter-scale physical envelope, CPO slashes power consumption per bit by roughly 65 to 70% [cite: 1, 5]. Transitioning to silicon photonics and CPO reduces data transmission energy from roughly 15 picojoules per bit (pJ/bit) in traditional pluggable systems to less than 5 pJ/bit [cite: 5]. Overall, co-packaged silicon photonics can reduce interconnect power consumption by 3.5x compared to traditional pluggable transceivers [cite: 7]. For hyperscalers managing millions of connections, every watt saved in the interconnect translates directly into optimized Total Cost of Ownership (TCO) and improved cluster efficiency [cite: 3].
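The pJ/bit metric converts directly to watts: one picojoule per bit at one terabit per second is exactly one watt. The per-GPU traffic and cluster size below are illustrative assumptions (and assume links run at full rate continuously, which real clusters do not), but they show why a 10 pJ/bit difference matters at scale.

```python
def interconnect_watts(pj_per_bit, tbps):
    """Power = energy/bit * bits/s; 1 pJ/bit at 1 Tb/s equals 1 W,
    since 1e-12 J/bit * 1e12 bit/s = 1 J/s."""
    return pj_per_bit * tbps

per_gpu_tbps = 28.8   # hypothetical aggregate per-GPU traffic (3.6 TB/s)
gpus = 100_000        # hypothetical cluster size

for label, pj in [("pluggable (~15 pJ/bit)", 15), ("CPO (<5 pJ/bit)", 5)]:
    mw = interconnect_watts(pj, per_gpu_tbps) * gpus / 1e6
    print(f"{label}: {mw:.1f} MW of interconnect power")
```

Under these assumptions the gap between the two regimes is tens of megawatts, which is power that can instead be allocated to compute.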
In AI training clusters, particularly within tightly coupled scale-up fabrics, latency must be kept to an absolute minimum to maintain tight synchronization across thousands of parallel-processing GPUs [cite: 11].
Latency Bottlenecks in Copper and Pluggables: In traditional transceiver-based switches, signals must navigate 14 to 16 inches of PCB copper routing [cite: 7]. The degradation of the signal over this physical distance requires the aforementioned DSPs to clean and correct the data. The mathematical processing performed by the DSP introduces significant and compounding latency [cite: 7]. Furthermore, copper interconnects are highly susceptible to delay uncertainty as CMOS technology scales down [cite: 8, 9].
Optical Signal Integrity: With integrated silicon photonics, the signal path before modulation into light is reduced to less than half an inch, drastically minimizing the risk of signal corruption [cite: 7]. Because the extra DSP processing step is eliminated, the latency profile of the switch is substantially improved, making it vastly more efficient for high-speed AI workloads [cite: 7]. Additionally, the latency of an optical signal is strictly governed by the speed of light within the fiber medium [cite: 9]. Variations in transmission delay are negligible, making timing models highly predictable and accurate—a critical feature for synchronized clock distribution networks in AI fabrics [cite: 9]. Photonic waveguides also generate negligible heat, preserving thermal stability and further ensuring signal integrity [cite: 12].
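The speed-of-light delay that governs optical latency is easy to compute. A group index of about 1.47 is typical for standard single-mode fiber, giving a well-known rule of thumb of roughly 5 ns per meter:

```python
C_VACUUM_M_PER_S = 299_792_458
GROUP_INDEX = 1.468   # typical for standard single-mode fiber

def fiber_latency_ns(distance_m):
    """Propagation delay of light in fiber: distance / (c / n), in ns."""
    return distance_m / (C_VACUUM_M_PER_S / GROUP_INDEX) * 1e9

for d in (1, 10, 100):
    print(f"{d:>4} m -> {fiber_latency_ns(d):6.1f} ns")
# roughly 4.9 ns per meter of fiber
```

This is why the predictability claim holds: the delay depends only on fiber length and refractive index, with none of the data-dependent equalization delay a DSP introduces.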
| Metric | Traditional Copper | Pluggable Optics (DSP-based) | Chip-Scale Photonics (CPO) |
|---|---|---|---|
| Max Reach (400G+) | < 1 to 2 meters [cite: 4, 10] | Kilometers [cite: 4] | Kilometers [cite: 4] |
| Bandwidth Density | Low (Limited by cable bulk) | Medium (Front-panel limited) | Extremely High (WDM integration) [cite: 10] |
| Energy per Bit | Poor (high resistive loss over distance) | ~15 pJ/bit [cite: 5] | < 5 pJ/bit [cite: 5] |
| Latency | Low (Over very short links) | High (Due to DSP processing) [cite: 7] | Ultra-Low (DSP eliminated) [cite: 7] |
| Thermal Output | High (Resistance) [cite: 6] | High (25W per module) [cite: 2] | Low (negligible waveguide heating) [cite: 12] |
To properly contextualize the market transition, one must distinguish between the two primary networking domains within an AI data center: Scale-up and Scale-out architectures [cite: 11, 13]. The deployment of copper versus optics is not a wholesale replacement, but rather a pragmatic division of labor dictated by these distinct domains [cite: 11].
Scale-up refers to maximizing computational performance within a tightly coupled system, such as a single server chassis or a single rack [cite: 11]. The objective is to aggregate massive amounts of compute and memory while maintaining nanosecond latency and rigid synchronization [cite: 11]. These connections (e.g., linking GPUs to L1 compute switches) operate over very short physical reaches, typically well under three meters [cite: 11].
In this specific scale-up domain, high-speed passive copper interconnects currently remain dominant [cite: 11]. Driven by mature SerDes architectures and proprietary protocols like NVIDIA's NVLink, copper is highly prized for its ultra-low latency, relatively low cost, and proven reliability at scale [cite: 11, 14]. For links under one meter, copper avoids the electro-optical conversion tax entirely. Consequently, major platforms aggressively push copper for intra-rack connections [cite: 11].
However, as AI models balloon in size, the scale-up domain is forced to expand. When a scale-up configuration grows from a single rack to cross-rack deployments—such as a 576-GPU cluster linking multiple racks—copper cabling hits a physical wall [cite: 2, 10]. A hypothetical multi-rack cluster of 128,000 GPUs would require millions of transceivers just for the scale-up network [cite: 2]. At these multi-rack distances, the sheer density, weight, and airflow restriction of copper cables make them untenable, forcing the introduction of optics into the scale-up fabric [cite: 2, 13].
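The "millions of transceivers" figure follows from rough multiplication. The links-per-GPU count below is a hypothetical assumption for illustration; only the 25 W per-module figure comes from the article's earlier citation.

```python
gpus = 128_000
links_per_gpu = 18         # hypothetical: scale-up ports per GPU
transceivers_per_link = 2  # one transceiver at each end of a link

transceivers = gpus * links_per_gpu * transceivers_per_link
watts_each = 25            # cited draw of a retimed 1.6T pluggable
print(f"{transceivers/1e6:.1f}M transceivers, "
      f"{transceivers * watts_each / 1e6:.0f} MW for optics alone")
```

Even with different per-GPU link counts, any assumption in this range lands in the millions of modules and roughly a hundred megawatts of optics power, which is the economic pressure driving DSP-free designs like CPO and LPO.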
Scale-out, conversely, refers to distributing workloads across hundreds or thousands of independent servers to maximize the total throughput of the data center [cite: 11]. Once data traffic moves higher up the network hierarchy—from L1 compute switches to L2 aggregation switches and across rows of server racks—the physical distances span tens to hundreds of meters [cite: 11].
In the scale-out domain, optical interconnects are an absolute necessity [cite: 11]. Traditional Ethernet and InfiniBand optical pluggables form the backbone of these large-scale clusters [cite: 11]. As physical distances increase, the "must" of optics becomes unavoidable [cite: 4]. Historically, the industry operated under the maxim: "copper when you can, optics when you must" [cite: 4]. The current paradigm shift is characterized by the boundary of "must" moving inexorably closer to the processor [cite: 4].
As data centers race to deploy optics more deeply, two distinct technological implementations have emerged to solve the power and latency penalties of traditional pluggable modules: CPO and LPO.
Co-Packaged Optics (CPO): As established, CPO removes the optical engine from the pluggable form factor entirely, soldering it directly alongside the ASIC on the same substrate [cite: 1]. While offering the absolute highest gains in bandwidth density and power reduction, CPO introduces extreme complexity into the manufacturing and maintenance lifecycle. The integration demands direct liquid cooling to manage thermal density, and the unpluggable nature of the optics makes field repairs of laser failures highly complex [cite: 1, 5].
Linear Pluggable Optics (LPO): LPO serves as an interim or alternative evolution. LPO retains the familiar, field-replaceable modular pluggable form factor (like OSFP), but it entirely removes the DSP from the optical module [cite: 15]. Instead, the linear analog signal is driven directly from the host switch's internal SerDes [cite: 15]. By removing the DSP from the plug, LPO reduces optical module power consumption by up to 50% compared to traditional retimed optics [cite: 16]. However, LPO is not universally plug-and-play; it requires meticulous pair-wise tuning and interoperability validation between the host silicon and the optical module to ensure signal integrity without a dedicated DSP [cite: 15].
While the industry narrative heavily favors optics, advanced engineering is attempting to prolong the viability of copper. For instance, Swiss deep-tech startup Kandou AI has gained significant traction (securing a $400 million valuation backed by SoftBank, Synopsys, and Cadence) by championing specialized copper interconnects [cite: 17].
Kandou employs a proprietary "Chord signaling" technology [cite: 17]. Rather than sending a simple binary signal over a single differential pair, Chord technology transmits correlated signals across multiple copper wires simultaneously [cite: 17]. This approach allegedly boosts bandwidth by two to four times and cuts power usage in half compared to standard copper, potentially reducing system costs by 90% when compared to transitioning to optical infrastructure [cite: 17]. While optical technologies will likely surpass copper eventually, Kandou's licensing model (similar to Arm's IP model) offers a highly attractive, non-disruptive transitional step for data centers seeking immediate, cost-effective bandwidth upgrades without overhauling existing rack architectures [cite: 17].
The pivot toward chip-scale optical interconnects is causing tectonic shifts in the semiconductor and networking hardware markets. Connectivity is universally recognized as the central bottleneck in AI system supply and speed [cite: 18]. Hardware incumbents that successfully deliver high-density, low-power optical solutions at scale are projected to enjoy massive tailwinds to revenues and profit margins through the late 2020s [cite: 18]. The competitive landscape is characterized by divergent philosophies regarding system integration, open standards, and foundational foundries.
NVIDIA, the undisputed leader in AI compute, approaches the interconnect bottleneck with a philosophy of holistic, system-level integration [cite: 1]. NVIDIA views the network not as a distinct set of components, but as an extension of the compute fabric itself [cite: 1].
Scale-Up Dominance: Within the rack, NVIDIA remains aggressively "copper-first" [cite: 13]. Through its proprietary NVLink protocol, NVIDIA has optimized massive copper matrices—such as the 5,000 unique copper cables routing 200 Gbps bidirectional links inside the NVL72 rack system [cite: 2]. By tightly controlling the ASIC, the SerDes, and the copper backplane, NVIDIA ensures the lowest possible latency and maximum reliability [cite: 11, 13].
Moving to Optics: However, NVIDIA recognizes the physical limitations of this approach as clusters scale to million-GPU milestones [cite: 5]. To command the optical frontier, NVIDIA is developing CPO scale-up versions of its NVLink switches, aligned with its upcoming "Rubin Ultra" platform slated for late 2027 or 2028 [cite: 18]. The company's Quantum-X Photonics InfiniBand platform replaces bulky copper arrays with thin, liquid-cooled optical fibers that offer 10x better network resilience [cite: 5].
Supply Chain Strategy: Crucially, NVIDIA is internally developing its own 1.6T DSPs to fulfill 50% of its internal demand, reducing reliance on third parties [cite: 18]. To secure its physical optical supply chain, NVIDIA has executed landmark procurement agreements, including a combined $4 billion investment split between optical component manufacturers Lumentum and Coherent [cite: 10, 19]. By funding advanced laser and optical components, NVIDIA ensures priority access to the "arms dealers" of the optical revolution [cite: 4, 10]. NVIDIA's ultimate goal is an end-to-end "compute-plus-fabric" ecosystem that is deeply proprietary and highly resistant to disaggregation [cite: 1].
If NVIDIA is the architect of closed system-level integration, Broadcom is the champion of modularity and open merchant silicon [cite: 1]. Broadcom operates under the assumption that hyperscalers (like Meta, Google, and AWS) want flexibility to build heterogeneous systems using multi-vendor hardware [cite: 1].
Dominating the Switch and DSP Market: Broadcom is setting the industry pace with its flagship Tomahawk 6-Davisson switch, explicitly designed to push the boundaries of CPO [cite: 5]. Broadcom is recognized as having best-in-class 1.6T DSPs, and market checks indicate that a majority of Meta's 1.6T DSP demand will be awarded to Broadcom [cite: 18]. Furthermore, Broadcom is massively expanding its foundational optical hardware capacity; its production of Electro-Absorption Modulated Lasers (EML) is scaling from 43 million units in 2025 to 50 million in 2026, while its Continuous Wave (CW) laser production will double from 15 million to 30 million units in the same timeframe [cite: 18].
Technological Pragmatism: Technologically, Broadcom has favored mature Mach-Zehnder Modulator (MZM) technology for its optical engines [cite: 1]. While slightly larger than competing microring modulators, MZMs are exceptionally thermally stable and highly reliable [cite: 1]. Broadcom is advancing this proven architecture toward 3nm semiconductor nodes to extract further power efficiencies [cite: 1]. Broadcom's strategy is to remain the indispensable merchant silicon provider for hyperscalers who prefer open ethernet standards over NVIDIA's proprietary InfiniBand [cite: 1]. Broadcom also supplies the remaining 50% of NVIDIA's external 1.6T DSP needs, ensuring it profits regardless of which architectural ecosystem wins [cite: 18].
Cisco Systems is aggressively positioning itself to defend its legacy networking dominance by offering highly programmable, power-efficient optical solutions aimed at massive, distributed AI deployments (scale-across environments) [cite: 16, 20].
Silicon One G300: At the heart of Cisco's strategy is the newly unveiled Silicon One G300, an ultra-high-end switching silicon capable of a staggering 102.4 Tbps capacity [cite: 16, 21]. The G300 integrates 512 lanes of internally developed 200 Gbps SerDes, allowing for flatter network topologies that connect more compute resources at the edge with lower power consumption [cite: 15, 21]. Cisco claims this architecture improves AI job completion times by 28% through advanced path-based load balancing and proactive telemetry [cite: 16, 21].
Embracing Linear Pluggable Optics (LPO): Rather than moving entirely to CPO, Cisco is deeply committed to 800G Linear Pluggable Optics (LPO) [cite: 15]. Cisco recognizes that hyperscalers desire immediate power reductions without the locked-in manufacturing complexity of CPO [cite: 15]. Cisco's Nexus 9000 and 8000 series routers support LPO modules that reduce overall switch power by 30% and optical module power by 50% [cite: 16]. Cisco utilizes its broad portfolio spanning servers, switches, and optics to perform rigorous multi-vendor interoperability testing—a crucial requirement for unretimed LPO modules, which must be perfectly tuned to the host ASIC [cite: 15, 22].
Multi-Rail Line Systems and Transport: Beyond the data center walls, Cisco is optimizing long-haul optical transport for AI. The company introduced the Open Transport 3000 Series, a multi-rail open line system that aggregates multiple fiber rails (such as 128 parallel fibers passing through a repeater) into a single line card [cite: 23, 24]. This multi-rail system supports C&L-band spectrum and achieves a reported 75% power reduction and 80% decrease in rack space compared to legacy amplification huts [cite: 24]. Furthermore, Cisco has upgraded its NCS 1014 multi-haul system to support 12.8 terabits of capacity utilizing sixteen 800G pluggable transponders in a single card, alongside developing in-house 100ZR coherent pluggables based on proprietary silicon photonics [cite: 25]. Through these broad portfolio upgrades, Cisco aims to be the neutral, reliable backbone connecting distributed AI clusters across different geographic locations [cite: 20].
Marvell Technology is a dominant force in the high-speed optical DSP market, fiercely competing with Broadcom. Financial analysts note that Marvell commands a 70% market share for 800G DSPs [cite: 18]. Notably, Marvell has secured virtually all of Google’s demand for next-generation 1.6T DSPs, largely due to superior performance on its Transimpedance Amplifiers (TIA) and advanced packaging architecture [cite: 18].
Acquisitions and Custom Silicon: Marvell is aggressively utilizing its capital to cement its position in the silicon photonics ecosystem. The company made a blockbuster acquisition of the photonics startup Celestial AI for up to $5.5 billion [cite: 18, 26]. This acquisition is viewed as a strategic maneuver to elevate Marvell into a full-stack, one-stop Application-Specific Integrated Circuit (ASIC) design partner, capable of challenging Broadcom's dominance in custom AI silicon [cite: 18]. Marvell has also engaged in deep partnerships with NVIDIA, including a $2 billion investment aimed at integrating NVIDIA’s Aerial AI-RAN into telecommunications networks and enhancing optical interconnects [cite: 19].
Astera Labs: Another notable beneficiary of the connectivity bottleneck is Astera Labs, a pure-play connectivity semiconductor company. Market checks indicate Astera's Scorpio X switches are highly integrated into advanced hyperscaler platforms, such as Amazon's (AWS) Trainium-3 AI accelerators [cite: 18]. Astera represents the growing ecosystem of specialized firms capitalizing on PCIe and Ethernet connectivity standards that could eventually take market share from closed protocols like NVLink [cite: 18].
The transition to optical computing requires the maturation of an entirely new supply chain. The intricate manufacturing processes of silicon photonics—such as embedding photonics and electronics on a single silicon wafer—heavily rely on advanced foundries like TSMC and its COUPE (Compact Universal Photonic Engine) technology [cite: 6].
To mitigate the risk of a monopolized, single-vendor supply chain and to accelerate industry-wide adoption, tech titans have formed the Optical Compute Interconnect Multi-Source Agreement (OCI MSA) [cite: 6]. Unveiled in 2026, the OCI MSA comprises competitors and partners alike, including AMD, Broadcom, NVIDIA, OpenAI, Meta, and Microsoft [cite: 6].
The primary mandate of the OCI MSA is to define an open, protocol-agnostic physical layer (PHY) specification for optical scale-up interconnects delivering up to 3.2 Tb/s [cite: 6]. By standardizing the communication lanes, the OCI MSA allows data centers to utilize the same underlying optical infrastructure to route distinct protocols—such as NVIDIA's NVLink or AMD's UALink—simultaneously [cite: 6]. This standardization ensures interoperability across pluggable modules, on-board optics, and CPO platforms [cite: 6]. In theory, this open ecosystem will dramatically drive down the cost of optics at scale, alleviate current supply chain constraints associated with copper, and effectively de-risk the optical transition by fostering a multi-vendor ecosystem independent of any single foundry's packaging IP [cite: 6]. Companies like POET Technologies are also contributing to this supply chain expansion by introducing optical interposer platforms that utilize CMOS-compatible passive alignment and wafer-level testing, drastically improving manufacturing yields and lowering the barriers to high-volume production for 800G and 1.6T applications [cite: 3].
While the current battleground centers on replacing copper interconnects to facilitate data movement, the technological trajectory is already pointing toward a much deeper integration of light in AI architectures.
The next frontier beyond Co-Packaged Optics is Optical Computation [cite: 5]. Startups like Lightmatter are pioneering "Photonic Compute Units" that perform complex mathematical operations—specifically the massive matrix multiplications required for AI neural networks—using the physical properties of light rather than electrical transistors [cite: 5]. By modulating light to perform calculations, these chips promise up to a 100x improvement in energy efficiency for specific AI inference tasks [cite: 5].
If successfully commercialized, optical computing could potentially replace traditional electrical GPUs for certain workloads by the late 2020s [cite: 5]. The integration of optical computation with chip-scale optical interconnects would represent a true post-silicon, post-copper era, wherein the entirety of the AI workload—from calculation to transmission—occurs seamlessly within the photonic domain [cite: 5].
The AI industry has unequivocally arrived at the "Copper Wall." The physical limitations of traditional electrical signaling—manifesting as debilitating power consumption, crippling thermal output, and severely restricted bandwidth density—are incompatible with the scale of next-generation, million-GPU AI data centers.
Emerging chip-scale light interconnect technologies, ranging from Linear Pluggable Optics (LPO) to Co-Packaged Optics (CPO) and unified Silicon Photonics, benchmark as vastly superior to copper on every technical axis. By eliminating extensive PCB copper traces and power-hungry Digital Signal Processors, photonics drastically lowers signal latency, reduces interconnect energy consumption from 15 pJ/bit to under 5 pJ/bit, and leverages Wavelength-Division Multiplexing to shatter the bandwidth ceilings imposed by electrical SerDes limitations.
While copper will maintain a stronghold in ultra-short reach, intra-rack scale-up domains in the immediate future, its viability is rapidly eroding as compute clusters physically expand across multiple racks. This transition is actively reshaping the global semiconductor and networking markets. Incumbents like NVIDIA are striving to dominate the transition through closed, fully integrated system architectures, while Broadcom and Cisco advocate for open, modular, and interoperable Ethernet-based ecosystems. Massive capital inflows into specialized DSP manufacturers, optical component foundries, and open standards consortia like the OCI MSA underline a universal industry consensus: the future of artificial intelligence will not be bound by electrons in copper wire, but liberated by the speed of light.