AI is reshaping the global economy and social interactions. It is driving a surge in interconnect bandwidth demand at ever lower power and cost per bit, and is pushing toward larger CPU/GPU clusters and resource-efficient architectures such as compute disaggregation and memory pooling, all of which require longer-reach, higher-bandwidth fabrics. Electrical copper I/O delivers high bandwidth density and low power but is limited to sub-meter reach, while data-center pluggable optics provide long reach at power and cost levels that are unsustainable for AI scaling. Co-packaged optical compute interconnect (OCI) solutions meet these demands by integrating optics directly with CPUs, GPUs, IPUs, and SoCs to deliver higher bandwidth, better energy efficiency, low latency, and longer reach. Intel’s 4 Tb/s bidirectional OCI chiplet, built on its silicon photonics platform, co-packages a photonic IC with on-chip lasers and optical amplifiers alongside an electrical IC; it supports 64 lanes of 32 Gb/s in each direction over tens of meters of fiber and was demonstrated at OFC 2024 carrying error-free traffic, with a roadmap to 32 Tb/s per chiplet. Enabled by integrated DWDM laser arrays and optical amplifiers, the PIC offers reliability surpassing that of traditional discrete lasers, backed by more than 8 million PICs and over 32 million on-chip lasers shipped in the field. This field-proven silicon photonics platform positions Intel to power scalable AI infrastructure.
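The headline bandwidth figures above can be sanity-checked with simple arithmetic; the sketch below uses only the lane count and per-lane rate quoted in the text (64 lanes at 32 Gb/s per direction):

```python
# Sanity check of the OCI chiplet bandwidth figures quoted in the text.
lanes = 64          # optical lanes per direction (from the text)
rate_gbps = 32      # Gb/s per lane (from the text)

per_direction_tbps = lanes * rate_gbps / 1000    # aggregate per direction, in Tb/s
bidirectional_tbps = 2 * per_direction_tbps      # both directions combined

print(f"per direction: {per_direction_tbps} Tb/s")   # 2.048 Tb/s
print(f"bidirectional: {bidirectional_tbps} Tb/s")   # 4.096 Tb/s, i.e. the "4 Tb/s" headline
```

The 4 Tb/s figure is thus the round-number sum of both directions (2.048 Tb/s each way); the 32 Tb/s roadmap target implies roughly an 8× increase in aggregate capacity per chiplet, via some combination of more lanes and higher per-lane rates (the split is not stated in the text).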