Photonic Processor Could Enable Ultrafast AI Computations with Extreme Energy Efficiency

The deep neural network models that power today’s most demanding machine-learning applications have grown so large and complex that they are pushing the limits of traditional electronic computing hardware.

Photonic hardware, which can perform machine-learning computations with light, offers a faster and more energy-efficient alternative. However, there are some types of neural network computations that a photonic device can’t perform, requiring the use of off-chip electronics or other techniques that hamper speed and efficiency.

Building on a decade of research, scientists from MIT and elsewhere have developed a new photonic chip that overcomes these roadblocks. They demonstrated a fully integrated photonic processor that can perform all the key computations of a deep neural network optically on the chip.

Read more at: Massachusetts Institute of Technology

Researchers demonstrated a fully integrated photonic processor that can perform all key computations of a deep neural network optically on the chip, which could enable faster and more energy-efficient deep learning for computationally demanding applications like lidar or high-speed telecommunications. (Photo Credit: Sampson Wilcox, Research Laboratory of Electronics)